But the software misses the mark by a lot. It's still based on Ubuntu 18.04.
Want Python later than 3.6? Not in the box. A lot of Python modules need compiling, and the CPU is what you'd expect: good for what it is, bad for compiling large numbers of packages.
They run a fancy modern desktop based on the Ubuntu one. Sure, it's Nvidia, gotta be flashy. But it eats a quarter of the memory in the device and makes the SoC run crazy hot all the time.
These aren't insurmountable issues, they just left a bad taste in my mouth. In theory I only have to do the setup changes once, but it's still a poor experience.
The more I use Linux (and I've been using it for almost 15 years), the more strongly I believe that using what the Linux distros provide as a development toolchain is an antipattern. Use separate toolchains, with SDKs kept apart from your /usr, instead.
> Good for what it is, bad for compiling large numbers of packages.
at worst you can
docker run -it aarch64/ubuntu
on your desktop and compile your stuff there.

Your assertion makes no sense. Let me explain why.
You've adopted an LTS release which was made public 2 years ago, was already a couple of years in the making, and is aimed at establishing a solid and reliable base system that the whole world can target with confidence.
And knowing that, your idea is to bolt on custom tooling that isn't installed anywhere by default and make it your own infrastructure?
Unless you're planning to manage that part of the infrastructure for your customers to use, your race to catch up with the latest and greatest makes zero sense, and creates more problems than the ones you believe you're solving.
And no, it's not an infrastructure problem. It's a software engineering problem of your own making. If you seek stability and reliability, then you target stable and reliable platforms, such as the stuff distributed by default in LTS releases like Ubuntu 18.04, because that's what they are used for.
But this is an SBC. It's likely to be configured to run its specific task forever. It should automatically log into the default user's account, and automatically start whatever software you want it to run. Software for it is likely to come from cross-compilers and to be sent over SSH.
It doesn't have hundreds of gigabytes of high-speed storage; the entire OS has to be read off a microSD card on every boot, so you don't want multiple toolchains on it.
I really like the DietPi [1] environment for these purposes (though it doesn't currently support the Jetson Nano); the setup scripts make it easy to configure the OS with exactly the tools you want.
[1]: https://dietpi.com/
LTS makes sense as a deployment platform: push it to a server, deploy your own application, make a demo like the subject NVIDIA Jetson Nano SDK, and forget about it.
There are better ways to help underpowered machines: use a light desktop environment, block ads in the browser, add RAM if possible. Kernel and base requirements have not changed much in a decade; about 120 MB with Xorg.
It still only has Python 3.6, but OpenCV and numpy are pre-installed correctly this time, so you don't have to compile them, which is an improvement.
I'm not doing anything with mine that involves OpenCV; I didn't realise one of its flagship examples was also broken out of the box.
Oh! And I just dug out my script to rebuild if I need to. I forgot a few more annoyances:
* tensorflow isn't installed by default, and the install isn't obvious because you have to point pip at NV's repo: pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v44 tensorflow
* openai gym won't work unless you pin the archaic version 0.15.3 (pip3 install gym==0.15.3); there's a buggy interaction between newer releases and NVidia's X11.
* lots of packages that should really have come by default need installing, or were broken by default [eg numpy]
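Since most of these problems only surface at import time, a small sanity-check script run right after flashing can save a debugging session later. A minimal sketch (the package list is taken from the notes above; it catches broad `Exception` because half-broken installs don't always raise a clean ImportError):

```python
import importlib

def check(names):
    """Return {package: version string, or None if it won't import}."""
    status = {}
    for name in names:
        try:
            mod = importlib.import_module(name)
            status[name] = getattr(mod, "__version__", "unknown")
        except Exception:  # half-broken installs don't always raise ImportError
            status[name] = None
    return status

# Packages flagged above as missing or broken on a fresh image.
print(check(["numpy", "cv2", "tensorflow", "gym"]))
```

Anything reported as None gets fixed before it bites you mid-project rather than after.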
If you don't need CUDA, you can outright use regular Fedora, which uses nouveau (no reclocking issues or the like on Tegra).
I suspect it largely has to do with the lack of optimisation of the graphical packages in Linux for the ARM architecture, as an x86 Chromebook with 4GB memory and a slightly better CPU boost clock can offer a far superior desktop experience.
1) ubuntu lts - stable releases - I'm a server person I want nothing to change, ever.
2) arch linux - rolling releases - I'm a desktop person, I want the latest of everything, now.
anything between these two extremes kind of sucks, both as a developer and as a user.
advantages:
1: "Q: I want to disable snapd on ubuntu 18.04, how do I do it?"
"A: <20 detailed posts of how to do it>"
1: "user: something isn't working"
"developer: clearly your fault, I haven't changed anything in that project in 4 months"
2: "Q: I want to disable snapd on arch linux, how do I do it?"
"A: why did you install it? look on the wiki."
2: "user: something isn't working"
"developer: everything is up-to-date, please do pacman -Syu and read the wiki"
I just set up a new 20.04 for ML and the official NVIDIA repos for this version were still missing a few cuXYZ libraries. I also had to add the 18.04 repos and symlink them together to get a Frankenstein library mess that works (as far as I can tell).
Sure, you can make those things run natively, but the people using them are used to Python, a lot of code is written in Python for them, and there is little to no performance penalty in running the code through Python (everything perf-critical is running natively anyway; Python is only used to "glue" those bits seamlessly together).
And once you have RAM in gigabytes even the extra memory overhead of Python becomes a moot point.
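That glue-vs-native split is easy to see with numpy: the interpreted loop below touches every element in Python, while the `@` operator hands the identical dot product to compiled code. This is a generic illustration, not Jetson-specific:

```python
import time
import numpy as np

n = 200_000
a = np.random.rand(n)
b = np.random.rand(n)

# The "glue" doing the work itself: the interpreter handles every element.
t0 = time.perf_counter()
slow = sum(x * y for x, y in zip(a, b))
t_loop = time.perf_counter() - t0

# The glue merely dispatching: the same dot product runs in native code.
t0 = time.perf_counter()
fast = float(a @ b)
t_native = time.perf_counter() - t0

assert abs(slow - fast) < 1e-6 * n  # same answer, very different cost
print(f"interpreted loop: {t_loop:.4f}s, native dot: {t_native:.6f}s")
```

As long as the hot path lives on the right side of that split, the interpreter overhead is noise.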
But I must admit I don't see the 2GB model as a better value than the $100 4GB one, especially since the CPU and GPU share that memory.
apt-get install python3.7
or
apt-get install python3.8
Instead they should release drivers that work with any Linux distribution (for example Debian, which is now broken because of USB issues).
(there's also the NV prebuilts of an older version of the patchset at https://docs.nvidia.com/jetson/l4t/index.html#page/Tegra%252... )
I wish luck to everyone buying these things. You'll need it to run any modern distro in a few years.
Only to realise that lack of software support makes almost all of them pretty much completely useless.
I wouldn't use any randomly bought ARM device now for hobby projects, except the Raspberry Pi.
2.) Boot is only possible from microSD card. USB is not supported.
3.) Build BareBox with https://www.barebox.org/doc/latest/boards/rockchip.html
Use `rk-splitboot` to split FlashData out of the original boot loader; the other options don't work. Build it, and it will create barebox-radxa-rock.img. Note it's the .img, not the .bin.
Write the SD card as in the instructions.
4.) Follow these instructions, but use the generic image.
https://archlinuxarm.org/platforms/armv7/amlogic/odroid-c1
http://os.archlinuxarm.org/os/ArchLinuxARM-armv7-latest.tar....
5.) Build the kernel as in this tutorial, ignoring the `dts` steps; the `dts` files come with the kernel sources.
https://wiki.radxa.com/Rock/Linux_Mainline
https://github.com/torvalds/linux/blob/master/arch/arm/boot/...
6.) Copy the zImage and the compiled rk3188-radxarock.dtb to /boot, and plug in the UART.
7.) sudo picocom /dev/ttyUSB1 -b 115200 -e w
8.) Boot in the bootloader and execute the following commands:
global.bootm.appendroot=true
global.bootm.image=/mnt/mshc1.0/boot/zImage
global.bootm.oftree=/mnt/mshc1.0/boot/dtbs/rk3188-radxarock.dtb
bootm
9.) If it boots, you can make it permanent by following the BareBox docs.
Edit: it seems people voted up the Hackaday article and not the Medium article, so you've in effect negated everyone's votes.
How do I know which comments are talking about which article?
I can't find it on the product site, nor in the BCM2711 datasheet (which is more like a programmer's manual).
Note that embedded radios / modems / codec manipulation is incredibly common. Usually it's handled by a DSP (a VLIW architecture), but I'd imagine that the GPU architecture has more FLOPs. DSPs probably have less latency, but something like SDR is not very latency-sensitive (or rather: the community models latency as a phase shift and is therefore resilient to many forms of latency).
Note: I'm not actually in any of these communities. I'm just curious and someone else told me about this very same question.
It's pretty neat stuff to think about DSP on hundreds of MHz of bandwidth being something that you can do in software these days. I remember when that was firmly in the realm of "don't bother unless you want to design a custom ASIC" and now it's starting to become even hobbyist-accessible.
Compared to him testing on a Raspberry Pi 4: https://www.youtube.com/watch?v=l4TyYU9Xhcs
(note both of those videos are a bit old, so results could be different now)
The video is a year old, but I imagine there would be fewer bugs now. With this cheaper board coming out, there might be a bit more development activity.
This Jetson really isn't a platform for running emulated games; for that there are much better boards with more RAM, which is the far more important spec here. You can't do miracles with 2GB of RAM.
Jetson is made for computer vision, neural networks and signal processing, where the GPU isn't used as a GPU but as a massively parallel co-processor to crunch through a lot of data at once.
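The "massively parallel co-processor" usage pattern is worth making concrete: one arithmetic recipe applied across a whole array at once, which is exactly the shape of work a CUDA kernel accelerates. A CPU-side numpy sketch of a typical vision preprocessing step (illustrative only; numpy stands in here for the GPU kernel):

```python
import numpy as np

def normalize_batch(frames):
    """Zero-mean, unit-variance normalization of every frame in a batch.

    One data-parallel expression over the whole array -- exactly the
    per-element workload shape a GPU crunches in a single kernel launch.
    """
    mean = frames.mean(axis=(1, 2), keepdims=True)
    std = frames.std(axis=(1, 2), keepdims=True)
    return (frames - mean) / (std + 1e-8)

batch = np.random.rand(8, 64, 64).astype(np.float32)  # 8 fake camera frames
out = normalize_batch(batch)
print(out.shape)
```

There's no branching per element and no dependency between frames, which is why this kind of code maps so well onto the Jetson's GPU and so poorly onto a small ARM CPU.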
No, not even close. It's embedded systems with GPU acceleration, mostly for machine learning.
(strictly speaking any screen with higher resolution and/or expectations of smooth non-trivial graphics requires some kind of GPU, but I take the question to mean why one would want a powerful one)
As someone who has tried to run a desktop environment on various Pi-like devices for years, with every generation being "the one that will replace your PC" I just laugh.
The modern web is nearly unusable with less than 4 GB of RAM and it's really easy to push beyond that. I personally would not recommend that anyone try to use anything with less than 8 GB of RAM as a general purpose desktop computer anymore.
You can do it, sure, but there will be significant compromises and limitations along the way. A secondhand x86 PC with 8+ GB of RAM is almost always a better choice for general purpose desktop use. Leave the "hacker boards" for roles that benefit more from their compact size, low power consumption, and/or specialty I/O capabilities.
I haven't tried the Pi 8GB; that should at least solve the RAM problem, but the lack of native high-speed storage still likely impacts usability. Here's hoping the major Pi-likes follow suit and offer variants with more memory soon.
---
And of course as noted by other comments, another major reason to think of these things more as appliances is the generally spotty support for updating kernels and the like that is unfortunately standard in the ARM world. That entire side of the industry is infected with NDA culture and has no interest in curing it.
More specifically: what is the use case for a battery-powered robot "brain" with machine learning?
But more seriously: robotics.
Only the MAIX platform has a price advantage over this now, and the software is much less mature.
Someone could use this SBC -- not as an SBC, but as a discrete video card...
That is, have a computer without a video card -- communicate with this SBC via USB or Ethernet -- and have it perform all of the functions of a video card to generate the display output...
After all, it's got quite a high power GPU in it...
Yes, it would be a bit slower than a PCIe based high-end graphics card... this idea is for experimentation only.
One idea, once this is done, is to decouple the GUI part of any Operating System you write -- and have that part, and that part alone, run on the SBC / GPU.
Yes, X-Windows/X11 already exists and can do something like this.
But if say, you (as an OS designer) felt that X-Windows was too complex and bloated, and you wanted to implement something similar to Windows' GUI -- but a lighter version, and decoupled from the base Operating System -- then what I've outlined above might be one way to set things up while the coding is occurring...
Again, doing something like this would be for people that wanted to experiment, only...
org.apache.sling.api.SlingException: Cannot get DefaultSlingScript: java.lang.IllegalStateException: Response has already been committed
Edit: it's live now! [1][2]
[1] https://www.seeedstudio.com/NVIDIA-Jetson-Nano-2GB-Developer...
There are many more Pi's out there so the development community and support is stronger than Nano. The Nano has support, just not in the same area. I have had problems with the Nano where memory gets chewed up in some way and I just reboot it after a while. The Nano does have the advantage of an Nvidia GPU, but it appears that is of no benefit to you.
Although the Nano does not have built-in wifi I don't think that is a substantial issue, but it does mean you need to get a wifi dongle. Of more concern is the need for a fan on the Nano. I have found that it gets hot fairly easily when pushed.
Bottom line: in my opinion the small amount of memory is a substantial reason to avoid the Nano 2GB; the 4GB Pi is good, and for about $30 more the 8GB Pi is a definite step up.
Of course, if you just want to buy a Jetson and are just looking for a reason to, that's totally fine too :-D
[0] https://devforum.zoom.us/t/have-you-considered-building-zoom...
The total package ends up significantly more expensive than the "list price" of an RPi itself, and it ends up not making much more sense than buying a used x230 or something off of eBay for <$200.
Doubly amusing: there is now an ARM core called the X1, a heavyweight follow-up to the A76 for bigger form factors.
It would be interesting to see the tests repeated with a Raspberry Pi 4B + Intel Neural Compute Stick 2, but I doubt there would be any drastic difference, as that setup will still be bottlenecked by USB bandwidth compared to the Jetson Nano.
Even if the performance somehow matches, the price/value still overwhelmingly favours the Jetson Nano, even more so with the new, cheaper 2GB version.
[1] https://developer.nvidia.com/embedded/jetson-nano-dl-inferen...
Is this enough power to do voice recognition? I'm sorry if this is a stupid question, I haven't done anything with ML before.
They dropped DisplayPort, the second CSI camera connector, the M.2 slot, and a USB port, and replaced the barrel jack with USB-C.
That's a decent downgrade in capability.
I lost faith in using raspberry pi 4s because the foundation is very tight-lipped. Want to know why your board is behaving weirdly, or whether the behavior is expected or a result of a defect? TOUGH LUCK. You'll have to rely on outdated forum posts, SO answers, and (if you're really lucky) the good mood of the engineer monitoring these sites.
I'd rather stick to building for a well-understood but expensive platform like x86 than undocumented black boxes.
If you are manufacturing something in larger volume, they will sell you production modules without the dev boards that you can mount on your own boards. But those are still aimed at more expensive products. So this isn't a solution for low-cost high volume consumer products, but there is definitely a market for it.
https://www.nvidia.com/en-us/autonomous-machines/embedded-sy...
And if you just do CUDA experiments with it, those transfer at least partially to other environments (e.g. desktop/server hardware with GPUs) too.
Where do you get the idea this isn't transferable?
+ https://www.aaeon.com/en/c/aaeon-nvidia-ai-solutions
+ https://www.avermedia.com/professional/category/nvidia_jetso...
and you have a certain number of custom designs using the Nano compute module https://developer.nvidia.com/embedded/jetson-nano