Nvidia's $99 Jetson Nano Is an AI Computer for DIY Enthusiasts (engadget.com)
487 points by plasticchris on March 18, 2019 | 213 comments


I am continually amazed that we're able to buy better and cheaper processors that no one could have dreamed about at such power/cost 50 years ago.

I estimate it would take me and you maybe 1 year to learn how to build / assemble most things you find in your house that cost $1000 in a store -- a couch, a rug, even a simple kitchen appliance (the dumb kind).

But a CPU / computer? I could not invent that given 10,000 years, and yet you can buy one for $100. Amazing.


There was a guy that decided to make a toaster from scratch [0]. Literally from scratch: as in, mining and refining ore, etc. It was a bit of a stunt [1], but it definitely highlighted the advantage of scale and industry we typically take for granted.

[0] - http://www.thetoasterproject.org/page2.htm

[1] - https://gizmodo.com/one-mans-nearly-impossible-quest-to-make...


I love both of these comments.

I used to talk to friends about "deep thinking": what it took to make the simple object in front of them, which they took for granted.

My favorite example was a pen.

You're so abstracted from what it takes to actually make a simple pen.

Imagine it's post-apocalypse and you're the only person left on earth, and in addition to all other humans having disappeared, all pens and pencils have also disappeared.

You must build a pen - a ball point pen, to chronicle humanity.

Where do you start?

The simplest of objects have such a complex origin story.


Didn't China manage to produce an actual ballpoint pen only a few years ago? [1]

[1] https://www.washingtonpost.com/news/worldviews/wp/2017/01/18...


It's probably not that they couldn't 'manage'; it's more likely that it wasn't economically interesting. They could import tips cheaply enough and didn't bother manufacturing their own.


I get the point he's trying to make, but really, if I'm alone in the woods and wanted to toast some bread, I'm pretty sure I could just hold it over a fire for a few minutes.


From the toaster project author's website [1]:

> So, firstly, yes, I realise toasting bread over a fire would’ve been a lot easier. But was a piece of toast (or designing a better toaster) really the point of this project?

[1] http://www.thomasthwaites.com/the-toaster-project/


Thanks for the links! I've bookmarked those; I'll show them to people who say 'from first principles', to check if that's what they mean :-)


You should read "From NAND to Tetris". You could totally build a CPU yourself after a year of study. Not something that can compete with a modern microprocessor of course, but then again you probably couldn't build a compressor for a fridge with the same quality as a factory either.
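For a flavour of what the book covers: every classical logic gate can be derived from NAND alone, and a half-adder (the seed of an ALU) falls out a few steps later. A toy Python sketch of the idea — the course itself uses its own HDL and simulator, so Python here is purely illustrative:

```python
# Building up logic from NAND alone, in the spirit of "From NAND to Tetris".

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a: int) -> int:
    return nand(a, a)

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))

def xor(a: int, b: int) -> int:
    return and_(or_(a, b), nand(a, b))

def half_adder(a: int, b: int):
    # Returns (sum bit, carry bit) -- chain these and you have an adder,
    # which is the heart of an ALU.
    return xor(a, b), and_(a, b)

print(half_adder(1, 1))  # → (0, 1): 1 + 1 = binary 10
```

From there the course layers multiplexers, registers, a program counter, and eventually a full (toy) CPU on top of the same primitives.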


Look at this lead pencil. There's not a single person in the world who could make this pencil. [0]

[0]: Milton Friedman https://www.youtube.com/watch?v=R5Gppi-O3a8


If you wish to make apple pie from scratch, you must first create the universe - Carl Sagan


I also love this showerthought from reddit:

All the materials for everything ever made have existed on earth for billions of years; they just weren't assembled in the right order yet.


Humans are the right collection of molecules to form a self aware machine that can make showerthoughts. We are literally animate matter.


But did we have any of them before we invented the shower?


There's the guy who spent $1500 and six months making a sandwich from scratch[0], including growing, collecting and killing the ingredients

[0]: https://youtu.be/URvWSsAgtJE


  spent $1500
(By flying from Minnesota to LA and renting a boat to get salt from the sea)


You're right, he cheated by using premade vehicles. He should've made those himself


I think the animal husbandry skill tree would be fastest, but children and a sedan chair would work if more time was available.


Sorry to differ; I think this is twisted by an era in which the liberal market was held in god-like status.

Leave the rubber and metal out for now. To draw a dark line on a surface, you take the first bit of wood you find, grind it into a point, and char it. You have a pencil.

I really believe that centuries of free markets made people believe it was the only or most efficient way to produce an object, just as people thought Java EE was the only way to build a web application in the early 2000s.


You’ve moved the goal posts.

It’s not that the construction of some thing, anything, that can be used to write with is difficult. As you point out a piece of charcoal isn’t hard to make.

Milton Friedman’s point is that even something as simple and inexpensive as a pencil involves people all over the world working together to create it. Some mine graphite, some run ships to move the graphite, some manufacture paint, some grow rubber trees, etc. All of this activity, coordinated and made efficient by the market is behind even a simple thing like a pencil.


I agree on goalpost but still differ on Friedman's point. Culture shifts into thinking you need objects to the point of making you forget what you wanted in the first place. Friedman wants to marvel at the thought of his beloved market.


Friedman does wax lyrical about his beloved market toward the end of that video, but the main point I drew from his quote about the pencil is that it's absolutely mind-blowing to stop and contemplate the incredible cooperation and pre-existing systems required to construct so many of the mundane, dirt-cheap objects of the modern world.

The fact that someone could eschew mass-produced lead pencils in favour of a self-made writing implement doesn't diminish the fact that it's practically impossible for any one individual on the planet to ever actually build a pencil you can buy for the equivalent of a few seconds to minutes of labour. That pencil, and so many other mundane items like it, are artifacts beyond the crafting capabilities of any one human.


After viewing primitive technology and crafting videos I'm convinced of the opposite. Culture makes you think that only mass market can do that.


I feel that again misses the point. Sure, it's perfectly possible for a resourceful and knowledgable human to hand-engineer various tools from the environment. But to make something very much like an ordinary pencil made of wood, graphite, rubber, metal, and paint would be a monumental task, far beyond the cost of a pencil in our established society.

The fact is that globalization and industrialization has democratized the construction of literal artifacts. Consider the single-use plastic bottle, with a precisely machined screw neck and matching lid. It's lightweight, transparent, and will last for years. It would be exceedingly difficult to find and process the raw material to craft such an artifact from scratch, yet millions of them per day are used once and discarded.

Same goes for most office supplies, now that I think of it. And we haven't even touched integrated circuits yet.


I kinda see: the value is in the efficiency of delegation. But IMO there are drawbacks that this 'efficient' perspective hides, namely that people come to believe you can't do anything without the market.


Yep, because history has proven that belief to be correct.


The complexity of a modern economy is more about efficiency than anything else.

If you want a pencil that uses graphite and an eraser, that's actually fairly easy to construct with the correct raw materials. A single person in the right location and with the right knowledge could have made a single pencil 1,000 years ago, though the pencil would not have been worth the effort. Further, they could not have made 100,000 of them, whereas I can actually buy 100,000 pencils by being part of the modern economy.


Friedman's example is more an illustration that when we want to design a replacement for a working system, there is often a lot more complexity than expected - reinvented wheels are often square. Your example ends up with a piece of charcoal that doesn't have an eraser, gives you slivers, gets charcoal all over, has to be burnt every time you need to sharpen it (with a market-produced lighter or matches), doesn't work with pencil sharpeners, and smells like smoke instead of that "freshly sharpened pencil" smell (yeah, that's a thing).

Your re-invented wheel is square.

Yes, Friedman wants us to examine how the market works - because it works better for these complex coordinations than any other system we've devised. The market has facilitated the discovery and communication of the user requirements, and the price mechanism has coordinated the work of all the producers/transporters/exchangers of all the raw ingredients and intermediate products as well as the final product, and has enabled an ecosystem that produces compatible pencil sharpeners, grips, etc.

And did anyone ever think Java EE was the only way? That was much more an example of a designed system to replace the messy evolved world of CGI/Perl/PHP.


You don't need a lighter to make fire, and you don't need to sharpen it; you regrind.

J2EE is an example of mass adoption mistaken for an example of quality. It was taught as the ultimate goal, to marvel at the complexity of remote objects.



ps: if I had to glorify something, it's globalization's incentives to improve metrology. Precision is what gives you modern things.


The thought experiment was not "how can I draw a dark line on a surface from scratch" but "how can I make a modern #2 pencil."

And no one ever thought Java EE was the only way to make a web application, not even in the early 2000s.


> I think this is twisted by an era where liberal market was held as god like status.

The author more than the era, but sure. (To the extent it's also true of the era, the author was more of a cause than an effect, being one of the leading evangelists of the market of his generation.)


Actually, lead pencils were originally all made in a single location; the current state of affairs is just due to modern manufacturing. If people wanted to build a decent pencil locally, it would be possible, just not economical.


That's not what was said. A "single person" cannot make a lead pencil. I agree. Ask yourself whether the Primitive Technology guy could make a pencil in the middle of the Australian rain forest. He can make a lot of great things, but in order to make a pencil, he would need additional tools, and those tools would be made by someone else, hence the statement that "a single person cannot make a pencil."


My statement is 100% consistent with Friedman's original statement: https://thenewinquiry.com/milton-friedmans-pencil/

Note that multiple times in history, because of war, societies independently developed pencil-making when they did not have access to the usual resources, and substituted other resources.


You don't even need a year. If you pick a really simple instruction set, like "Mano's basic computer", you can get it done within a couple of months.

Ref: http://sandsduchon.org/duchon/cs311/ManoTutorial/ManoIntrodu...

I built one years ago for a university course, in Matlab Simulink.


See also Sam Zeloof's DIY silicon fab. Even using commercial equipment and supplies, the amount of work needed to make even the most rudimentary IC is quite staggering.

http://sam.zeloof.xyz/category/semiconductor/


If you're not counting physical hardware, it can take even less time than that. I built a MIPS processor in Verilog over the course of a month or so back in college.


Everyone's missing his point. Yes, you could make a computer, but it would be an absolute toy compared to a Jetson. You could reasonably get close to the other technologies, including a fridge compressor, given a year of study. Your home-built CPU isn't going to touch the Jetson.


We had a processor design course at university. Individual participants made 68k-complexity designs; groups went up to 286-level and more complex parts. It was 6-8 hours of workload every week for 4 months, so a year is a lot of time! Manufacturing technology is another issue: the same design will deliver different results on 28 nm and 160 nm nodes.


What I wonder about is the 10x factor here - all of these designs have already been invented and are available for perusal. How many people could come up with something like this with nothing but a knowledge of mathematics?

I sometimes don't even know if I could invent the wheel if it wasn't already invented.


Ref: https://github.com/dugagjin/MIPS (a MIPS implementation in VHDL). Most of the source files are < 50 lines of code.

After a little bit of magic, almost any technical field is the application of a few principles.

If you saw a rock roll down a hill, you'd probably wish you could roll your own stuff instead of lifting it.


I doubt you'd even get as far as hand-making the brushless motor that powers the compressor. And that's just the start - the fluid dynamics to design an efficient compressor is years of study in itself.

It's easy to underestimate the complexity of everyday things. But note that fridges are 100 years old, and today's fridges are much better than the fridges of 100 years ago. It took the collective effort of the human race 100 years to get this far.


For me, the "wow" moment was the first Raspberry Pi.

A fully working computer, in the palm of your hands, powered by USB with Full HD video out and Ethernet, for 35 bucks?

Mind blowing.


I saw this exposed a couple of weeks ago at the Vitra Design Museum near Basel:

https://www.dezeen.com/2009/06/27/the-toaster-project-by-tho...

The author tried, as an art project, to build a <$10 toaster from scratch, using locally sourced materials and methods of production.

It cost him more than $1000, took 9 months, and the result was, well, what you see in the photos.


I recognize and respect the point, but there are a few pretty major differences: raw material cost, automatability, and scale.

We have processes to get super-pure silicon in huge chunks that can be processed basically entirely by computers and robots from there on out (and in fact, people touching it would be actively bad). Yes, there's a huge upfront cost in terms of machines to handle and etch the silicon and so on, but you get to amortize it over millions and millions of units. A new fab was not created for the Jetson Nano.

Meanwhile, basically every appliance, piece of furniture, etc. requires both more expensive raw materials and a decent amount of human intervention (and, especially for furniture, expertise) during manufacturing, meaning that cost per unit doesn't scale as well as with electronics. Not to mention they all sell in lower volumes per up-front skilled person-hour dedicated to that thing.


Goods that cannot be manufactured by hand require specialized tools. These can be as simple as a hammer or as complex as a fully automated CNC machine. Nowadays the most desired goods can only be manufactured with those specialized tools, so if you want to manufacture even a single unit, you need the tools. However, once you've built the tools, it doesn't matter how many units you actually produce. It therefore becomes more expensive per unit to produce less and cheaper to produce more. This is called economies of scale. You cannot build a CPU because building a single one is incredibly expensive per unit. A million? Dirt cheap (only $99 in this case).
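The amortization argument can be made concrete with a toy cost model; the dollar figures below are invented purely for illustration:

```python
# Economies of scale in one line: a fixed tooling cost amortized over n units,
# plus a per-unit marginal cost. The numbers are hypothetical.

def unit_cost(fixed_tooling: float, marginal: float, units: int) -> float:
    return fixed_tooling / units + marginal

fab = 200_000_000.0   # hypothetical up-front tooling/fab cost
chip = 15.0           # hypothetical marginal cost per chip

print(unit_cost(fab, chip, 1))           # a single unit: ~$200M each
print(unit_cost(fab, chip, 10_000_000))  # at scale: $35 each
```

The same shape of curve applies to a hammer-built chair, just with a vastly smaller fixed cost, which is why hand manufacturing stays viable for furniture but not for CPUs.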


> I could not invent that given 10,000 years

Invent CPUs, probably not. That took tens of millions of hours across decades of iterations.

But it's actually not impossible to create your own CPU from scratch. I'd suggest starting off with a $50-or-less FPGA and learning about the logical building blocks of a modern CPU. You can then advance to building a basic CPU fairly easily. Creating a full-featured ISA and then designing out the circuits and having the chip actually fabbed is not trivial, but I know of many hobbyists who have, so it's certainly doable if you're dedicated enough.
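For a flavour of what those building blocks add up to, here is a toy accumulator machine in Python. The opcodes and encoding are invented for this sketch; on an FPGA you'd express the same fetch/decode/execute loop in an HDL rather than software:

```python
# A minimal accumulator CPU: one register, a handful of invented opcodes,
# and the classic fetch/decode/execute loop.

LOADI, ADDI, STORE, HALT = 0, 1, 2, 3

def run(program):
    mem = {}   # data memory
    acc = 0    # accumulator register
    pc = 0     # program counter
    while True:
        op, arg = program[pc]   # fetch
        pc += 1
        if op == LOADI:         # decode + execute
            acc = arg
        elif op == ADDI:
            acc += arg
        elif op == STORE:
            mem[arg] = acc
        elif op == HALT:
            return mem

# compute 2 + 3 and store the result at address 0
prog = [(LOADI, 2), (ADDI, 3), (STORE, 0), (HALT, 0)]
print(run(prog))  # → {0: 5}
```

Swap the Python dispatch for multiplexers and registers and you have the skeleton of an FPGA soft core; everything after that (pipelining, caches, branch prediction) is what separates a toy from a Jetson.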


And audiophile-quality HiFi hasn't dropped in price a bit ...


Depends which component, maybe speakers or headphones, but there seem to be top-quality open-source DACs that are very cheap to assemble or buy assembled. A google search should turn them up.

Last time I researched it, I found the headphone DAC that I'd buy if I wanted one, but I can't find it now. It had an interesting story around it. It was released anonymously in a blog and then suddenly, after posting dozens of posts per year, the author just dropped off the grid. No one knows what happened to him. When one of the parts went out of production, others made modifications and rehosted the source, even though the original author did not want derivative works, but that kind of gets voided when the original author is, for all intents and purposes, not on this planet anymore.


That's NwAvGuy and the ODAC/O2 units.

I think "audiophile grade" is an ill-defined target there. You can get headphones like the Superlux HD668B for practically nothing (they were running under $30 at one point), and they sound pretty damned good for a pair of $30 headphones. When I was a kid you paid $30 for a pair of crappy discman headphones let alone a nice pair of over-ear. Meanwhile you've got the name brand Audio Technica and AKG sets running $100-200 thanks to moving production to China.

I kind of think people are under-selling just how cheap the "95% solutions" have become nowadays. The really exotic gear made in smaller runs and with meticulous quality control is still expensive, because that's inherently expensive to do, but the consumer-grade stuff has really moved up in quality and down in price.

At this point if you're paying thousands of dollars for audio gear, you're either chasing that last 5% of quality, or you're buying a name and a placebo effect, or both.


Right on point. It's a difficult topic to navigate in. Most of what you buy in stores are the name, royalties, and R&D of digital circuits and a complex new IoT interface.

For true audiophile gear you pay for them to individually test each and every component. Only select the ones that match, and for the overall circuitry to work as expected. All that manual labor to get the last 5% is very expensive, and completely irrelevant for the vast majority of people. They're going to stream their music from an inferior source anyway.


I have good but not audiophile-level gear at home which has been incrementally added to or replaced over the years. TBH, if I were starting over today for whatever reason, I'd probably buy some Sonos equipment or something along those lines and call it a day.


I agree with you, and really wonder if amelius was saying the same thing. The price/quality has improved tremendously, but the price/"audiophile brand" hasn't.


That's because "audiophile" is a term that means overpriced scam. Pro gear has gotten cheaper and better.

It might as well be described as using artisanal, hand-crafted components, etc.


IIRC the $10 Apple USB-C adapter tested really well, flatter than some budget favorites like Schiit's entry-level stuff. Granted, at low impedance.


> 472 gigaflops

Whenever I hear a number of gigaflops or teraflops, I like to look up the history of supercomputers [0]. This $99 computer is faster (on paper) than the world's fastest supercomputer in 1996, a bit over 20 years ago. That's pretty cool.

[0] https://en.wikipedia.org/wiki/History_of_supercomputing#Mass...


Not quite. This is FP16, supercomputers are typically benchmarked using double precision flops, so FP64. So roll in a factor of ~4 there, or a couple of years.


FP16 has an 11-bit mantissa, whereas FP64 has 53 bits. Integer multiplication scales quadratically with the number of bits, so a factor of ~23 might be more accurate.
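The estimate is easy to sanity-check with the significand widths just mentioned and the quadratic-scaling assumption:

```python
# Back-of-the-envelope check: if multiplier cost scales quadratically with
# significand width, the FP64-vs-FP16 penalty is roughly (53/11)^2.

fp16_mantissa = 11  # 10 stored bits + 1 implicit leading bit
fp64_mantissa = 53  # 52 stored bits + 1 implicit leading bit

factor = (fp64_mantissa / fp16_mantissa) ** 2
print(round(factor, 1))  # → 23.2
```

This ignores the exponent logic and any hardware that shares datapaths between precisions, so treat it as an upper-bound intuition rather than a measured ratio.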


A Pentium 90 was probably fairly common in the mid 90s. Delivered a whopping 0.09 GFLOPS :)


The Power Mac G4 was briefly a restricted export due to its power (late 90s). This, obviously, gave ultimate childhood bragging rights.

https://www.google.co.nz/amp/s/www.wired.com/2002/01/thats-a...


Apple's ad on that was maximum snark. As tanks surround a graphite G4, the announcer deadpans, "Pentium PCs? Well ... they're harmless."


The 386 was also export controlled for a time.


Not to mention the size of those machines. I visited ASCI RED before they turned it on in 1997 or so and it was multiple long lines of refrigerator sized units. According to the guy who showed it to me, it also required an air conditioner that could have comfortably cooled the local college basketball arena.


Cheaper than the $150 Google TPU Dev Board, and looks like it can do training as well as inference. Also, doesn't require you to send your model to their company. Nice!


> Cheaper than the $150 Google TPU Dev Board, and looks like it can do training as well as inference.

Training is a slight win (although it's going to be too slow for anything useful really).

But it looks like the TPU will outperform this somewhat for inference. The 256 core Jetson (this has 128 cores) could run MobileNet-v2 at between 12 and 20 ms per image (depending on batch size)[1], while the USB TPU adapter takes 2.3ms per image [2]

> Also, doesn't require you to send your model to their company.

Nor does the TPU dev board.

[1] https://arxiv.org/pdf/1810.00736.pdf (see table 1)

[2] https://coral.withgoogle.com/tutorials/edgetpu-faq/
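For intuition, those per-image latencies convert to throughput as follows. This is a naive conversion that treats throughput as simply 1/latency and ignores batching overlap:

```python
# Rough conversion of the per-image latencies quoted above into frames/second.

def fps(latency_ms: float) -> float:
    return 1000.0 / latency_ms

print(round(fps(12.0)))  # 256-core Jetson, best case   → 83
print(round(fps(20.0)))  # 256-core Jetson, worst case  → 50
print(round(fps(2.3)))   # USB TPU adapter              → 435
```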


>> Nor does the TPU dev board.

Yes it does require you to send your model to Google: https://coral.withgoogle.com/web-compiler/


And you have to check a box that says "I agree that my model will only be used for applications that follow Google's AI Principles." Wow.


Which may or may not include automatically killing people via drones.


They're pretty clear that that isn't kosher:

Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.

https://www.blog.google/technology/ai/ai-principles/

But drones that spy on people are ok as long as they aren't "violating internationally accepted norms" which sort of sounds like a blank check.


Wouldn't this depend on the definition of "injury"? You could argue that under some definitions of "injury" Google violates its own "AI principles" already.


Oh, interesting! I hadn't seen that.


I suspect it's not going to be like that forever, although I could be wrong. No self-respecting AI shop will send them anything production-grade if that's how they want to play it. It'll all be hobbyists and bullshit projects, which, I suspect, is not the clientele they want to attract if it's even a somewhat serious cloud/IoT play. Although this is Google, so it could be canceled in 6 months, too, after the project managers and TLs get their promotions.

Google should license this thing to someone else who can make it in good quantity and sell it really cheap, so others build it into their designs. I'd be pretty excited if that happened.


Seems to lose in Mobilenet and win in other benches according to Nvidia's benchmarks.

I'll wait for independent testing before I drop $100.

https://devblogs.nvidia.com/jetson-nano-ai-computing/


Horseshit in that bench right off the bat: I have a Google Edge TPU board right in front of me and its perf on SSD300 is 70fps, not 48. That's with the browser demo, which (as far as I can tell) includes realtime encoding of h264 for streaming. Almost twice as much as Jetson, and likely in a much more modest power envelope. NVIDIA is known for dishonesty in their benchmarks. Although TPU is, of course, a quantized play, and Maxwell will really suck for that, unless it's been tweaked specifically for this board.

OTOH, fp32 models are _much_ easier to work with, and this thing has more RAM so you can waste it on 32 bit weights, and NVIDIA's software toolkit is second to none. So the Jetson looks pretty tempting as well. I just wish they didn't try to insult my intelligence.


Oh yeah - I never said to trust their benches. Take OEM benches especially vs competitors with the grainiest grains of salt.

When people start getting their hands on them I'll start seeing independent benches, and I think anandtech got their hands on one. Hopefully soon™.


Interesting. I hadn't seen that, but the NVidia numbers on their own products seem credible. I do agree that the flexibility of having real CUDA cores is nice.


Couldn't this play games, then? Seems like a pretty nice platform for a homebrew video game console.


Definitely could. RetroArch would run great under Linux, and a wide variety of cores are available.

But I'm more interested in it as a cuDNN box.


Yeah, but Google Edge TPU can do faster than realtime (about 70fps) 90-class object detection with SSD300. Just plug in a webcam. I wonder how this compares both in terms of the raw throughput and FPS per watt. If it compares favorably in even one of these without being too terrible in the other, this could be pretty cool.


There is no reason to train on that thing (except for zero-shot classification demos). Any cheap, lower-end Nvidia GPU would do a much better job, and you would then transfer the trained model to the embedded device.


Would it do better than a CPU for training? I do my dev on a MacBook Air and use AWS for training; if this cheap GPU will be a few times faster than my Air's CPU, then I'd be willing to get it. I usually work with medium-sized models/data, too big for CPU but not needing a multi-GPU cluster.


Google Colab GPU instances are free and likely faster than the Jetson (and definitely faster than typical laptop or desktop CPU-only training); just save the models to your Google Drive.

https://colab.research.google.com/notebooks/welcome.ipynb#re...


I would stick to AWS for training. Your CPU will be orders of magnitude slower than most GPUs for training.


Can you recommend a cheap/lower end Nvidia GPU for training for someone who just wants to play around with NNs a bit and isn't interested in peak performance? Could I get something for $100 that would be better than my CPU?


While you can in theory run CUDA on an Nvidia 1030 GPU (currently around $100) it's really not worth it. The cheapest card actually worth buying for ML is the 1050 Ti, which can be had for around $200.

If your budget is $100 I'd take that money and hunt around for various cloud based solutions. Most have introductory offers and/or cheap/free solutions for hobbyists with modest needs. $100 will go a long way on these services if you're careful. Once you've used up your $100 you'll have a much better idea of what, if anything, you actually need.


The other option is Google's free GPU instances in its Colab notebooks: https://colab.research.google.com/notebooks/welcome.ipynb#re...

That's probably what I would recommend for your budget.


That's correct, you can transfer any trained model to the Nano and run it there using the NVIDIA TensorRT library/toolchain.


Quad-core ARM, 4 GB of RAM, and GigE!

Seems like a really nice board


Also has an M.2 slot for WiFi, as well as both HDMI and DP...supports two simultaneous displays out of the box.


Why would you use the M.2 slot for WiFi and not a super-fast disk? The USB bus should be able to handle WiFi just fine.


Because it's an M.2 Key E slot, which has no SATA. Maybe there are SSDs now that can use PCIe and fit in the smaller Key E slots? There didn't used to be. If so, great...


just no SATA? or no SATA and no NVMe?


The development kit specifically lists NVMe as an option

https://www.nvidia.com/en-gb/autonomous-machines/embedded-sy...


Tried "search in page" for NVMe, no luck. Not sure what you're seeing.


Weird. I opened it in reader mode. Doesn't seem to be on the normal page. It's a screenshot, maybe a draft?

https://www.nvidia.com/content/dam/en-zz/Solutions/pattern-l...


The "PCIe x 16" probably means that goes with one of the previous, much more expensive, Jetson boards.

This $99 Jetson appears to have no direct support for any type of M.2 drive. Maybe some M.2E PCIe to SATA adapter board, if such a thing exists.


Cool thanks for all the info. I just assumed the M.2 slots were all standardized. Didn't realize they were different.


M.2 E has PCIe ×2, USB 2.0, I2C, SDIO, UART and PCM.

As far as I know, NVMe drives want 4 PCIe lanes.


There are recent low-cost NVMe drives that do 2x, I believe.


If a PCIe device doesn't fall back to the available lanes, that is most likely a bug. An NVMe drive from a reputable manufacturer should work in this slot.


I really don't think it's that easy. Try to find a picture of an M.2 drive with a connector that would fit an M.2 Key E slot. The connector would have to look like this: https://www.addonics.com/products/diagrams/M2-E-KEY-WIFI-CAR...

I don't see any.


A more practical solution is to train your network on other beefy CPU/GPU/TPUs and convert+run that on the Nano using TensorRT. TensorRT supports import of models in Tensorflow (UFF) and ONNX formats.


This Jetson Nano is a crippled TX1, nothing more.

We have been building cameras with the TX1 and TX2 for 3 years now. We have seen things you people wouldn't believe ;-)) Now, can we cut through the hype a bit? Ready? Get your rant mask on.

The Tegra (aka Jetson) chipsets are quite buggy at a silicon level. If you find a hardware bug, nVidia will not acknowledge it, or help you (unless you're Nintendo for example, buying millions of pieces, of course)

The tx1, tx2, etc. are a nested maze of blackboxes, which you do not and will not have access to. For example, the camera ISP is accessible by THREE companies in the whole world. If you want to utilize the ISP, you have to go through them. Will those companies help you? Yes, for a very large fee. Why should they make the fee lower? They have almost no competition. OK, so you manage to get a sensor driver from one of those three companies. The sensor driver is, probably, also very buggy and poorly written. Maybe you can rewrite it yourself. The company who wrote the original one might help you anyway with ISP tuning (again for a fee).

nVidia doesn't give a damn about hobbyists or smaller companies. They will willingly mislead you with specs that are outright false and throw your company under a bus without the slightest second thought. We have seen this repeatedly with nVidia - their corporate culture really tends toward arrogant douchebaggery, second perhaps only to GoPro.

So, after all that, it seems that nVidia has produced too much TX1 silicon, so they've crippled it, and put it in a package that they're selling for $99.

I'm not really excited about it :-)


Hope it'll ship with better support than the first Jetson, the one they marketed with all that AI/machine vision stuff and then shipped without a camera driver.

This is just following Google's Edge TPU, which probably competes with a Raspberry Pi + Movidius stick. The market there is getting interesting.


This. And all of the third party camera solutions for TX2 cost $500+ when equivalent USB cameras with the same sensor cost $50. They really need to get their act together and sell some NVIDIA-sanctioned camera solutions at scale, and at price points similar to Raspberry Pi cameras.

A lot of third party carrier boards also have a complete sh_tshow of connectors. Auvidea's boards, for example, ship with a Raspberry Pi camera connector, but Raspberry Pi NoIR cameras don't have TX2 drivers, and there are hardly any other cameras that ship with that connector.

Seriously, NVIDIA: Please sell a TX2 devkit that has six non-weird 2-lane CSI connectors and some IMX290 or AR0521 or any other commonly-used robotics sensors for $100 each that plug in and "just work". It would make a lot of people happy to have something to at least start with, and pave the way for third party options to follow the same form factor, connectors, pinouts, board sizes, and so forth.


Hopefully they've actually fixed this this time around.

According to their blog post it actually has driver support for the Raspberry Pi Camera Module v2 (8MP, IMX219), and they'll be releasing their own Nvidia-sanctioned cameras available from their partners.

It should hopefully just work. No low-light options at this time, however, which means external CCTV is out of the question :(


Cool. Well hopefully some third parties will now create cameras all in the same form factor with the same pinout so that the choice of carrier board and camera can be independently made.


I'm actually hoping that 3rd party carrier boards standardize on the weird 6 csi connector thing, and was a little saddened to see that nvidia's devkit for the nano doesn't use it :(.

I bought a tx2 carrier from connecttech, and half their tx2 boards use a 30 pin connector used by leopard imaging, and the other half use the same ribbon connector that the nvidia devkit uses for its camera. I have $600 worth of camera which doesn't fit the carrier I chose ::face-palm::.

Their xavier carrier, http://connecttech.com/product-category/form-factors/nvidia-..., uses the same connector as the tx2 and xavier dev kits.


What's the market or use case for these camera drivers? It seems like the fancy direct-to-chip camera connections and driver development would be better aimed at sensor manufacturers, not hobbyists trying to build simple vision algorithms.

Why wouldn't you use the Ethernet port with a traditional GigE camera? I've also done some simple projects using an ordinary USB webcam.

I'm interested in this as a small form factor industrial computer. I've run Raspberry Pis and Intel NUCs in lots of manufacturing equipment where you need something that can run a few lines of Python and sit between the PLC and your device. Given this board's processing power, it might be interesting to plug into a little Dalsa or Basler camera and run a vision algorithm. You can already buy simple "vision sensors" from Keyence, Banner, Sick, etc. that integrate what I understand to be a simple ARM chip with the vision sensor and run basic vision algorithms, but they're often hamstrung by the tooling. The ability to perform and communicate results of arbitrary commands in applications where you don't need lots of processing power would be great.

What camera applications are people building that need 1.5 Gbps of camera data? I've built assembly lines that build several parts per second and never even come close to being limited by frame rate or network bandwidth.


I ordered a Coral USB Accelerator (Edge TPU on a USB stick) to tinker with for the novelty and it just arrived this afternoon. I will say one giant gap between these two is that Google's Edge TPU based products only support TensorFlow Lite.


Yeah! Looks exciting. I will say that the Raspberry Pi is USB 2, so bandwidth between, say, your camera and the Movidius stick is limited. The Google device and this one should have proper high-bandwidth interfaces.


Doesn't the Raspberry Pi camera use the CSI connector? If not, why?


It does, but the movidius stick is USB.


I have a Jetson TX2; at $99, I'm super tempted to buy one to see how it compares. That being said, I would definitely buy this if there is a way to make a Plex server work with transcoding.


Wouldn't the Shield TV be a better fit in the same price range? It does Plex transcoding.


I was tempted during the $100 sale late last year, but I was worried about it becoming obsolete (I know it's crazy, since it's been out for less than 5 years). I like the idea of having my own Linux server though. That way I can set up my own services as needed depending on my usage (I normally tack a Samba server on top of my Plex server).


Yeah, it's getting almost 4 years old, but it still is the best Android TV box.


That's exactly what I want as well!


Anyone know about the binary blob / firmware situation on this one? Will it run mainline Linux?


It runs Nvidia's L4T (Linux for Tegra), which is based on Ubuntu 18.04.

https://developer.nvidia.com/embedded/linux-tegra


Mainline Linux will run but all the fancy stuff won't work. They ship 4.9 kernel which is two years old (though it's LTS). After all it's just a standard ARM board with a fancy GPU and peripherals.


It's Nvidia, so you have to decide whether you want to give ANY money at all to a company that ships binary blobs or not (whether or not this individual product uses them). I'm sure there are product teams at Nvidia that ship "open" products but unfortunately there doesn't seem to be a way to support the good parts of a corporation while neglecting the parts that ship closed binary blobs.


Which hardware companies ship zero binary blobs?


I wish these tiny SBCs came with SATA or m.2 so I could hook up an SSD. I know microSD is catching up, but it's not there yet.


This has an M.2 Key E connector which has PCIe ×2, USB 2.0, I2C, SDIO, UART and PCM. Hypothetically an M.2 PCIe drive could work with a Key M to Key E adapter, but I couldn't find one in a cursory search.


Unfortunately it looks like you lose wireless connectivity then.


If USB is available it may be acceptable to use USB for wifi.


Odroid H2 has 2 X SATA 3.0 - https://www.hardkernel.com/shop/odroid-h2/

A number of others have mini PCIE that you can add a PCIE SATA board to - https://www.amazon.com/Port-Controller-AsMedia-ASM1061-Chips...


I bought an Odroid once. It was DOA, and at the insane shipping prices I just took it as a lesson learned to never buy Odroid again. I've gone through probably 20 Pis and never had one DOA, or even die for that matter.


I have many Odroids, none were DOA.

If you are in North America, I would recommend purchasing from https://ameridroid.com/ or https://olympianled.com/brand/hardkernel/

Distributors are listed on https://www.hardkernel.com/distributors/


The Olimex OLinuXino boards have SATA options, and so do some BeagleBoards. Also, look out for boards with eMMC options; that is a fairly fast onboard disk.


PCIe should be able to do SATA on here - they showed a reference design running PCIe-based SATA devices in their blog post, which was recording 8x1080p30 H.264 to a HDD.


Check out the odroid hc1 or hc2. You still have to boot from the microSD card, but you could put most of the OS on the drive connected to the SATA port.


On the TX2 they had the Auvidea expansion board, which was tiny and had an M.2 socket. It did have some stuff not working in the kernel though, so you had to make some changes and rebuild it for it to work. A pain in the butt.


the price point is good, relative to the jetson tx1/2 (299-749 usd) and xavier (1099 usd).

some of my notes, there is a massive heatsink on this thing, probably for both the a57 cpu and maxwell gpu, this will make your case a bit larger than say the rpi.

not sure how the a57 compares to rpi’s a53, i assume both are armv8, quad-core.

the inputs seem identical to rpi 3 model b+, hdmi, ethernet, 4 usb (seems 2.0), mini usb, there’s also an additional usb looking port on top of hdmi.

storage seems the same as rpi, except that it’s built in 16G mmc flash whereas rpi you need to separately plugin a microsd card.

overall this has potential if you need to do more gpu intensive work, i like the form factor.

https://www.nvidia.com/en-us/autonomous-machines/embedded-sy...


It would be good to know what bus the Ethernet talks on; with the Pi it's shared over the USB bus and it hampers the performance considerably.

For this reason I went with an Odroid for a small network storage system.

Edit: it has PCIe so it shouldn't be a problem.


The on-board Ethernet on the TX1 is USB3 integrated, IIRC. The system having PCIe is not necessarily a sign of how it gets used or what is using it, unfortunately.


a) a57 is the "beefy" version of a53, it's significantly faster in raw performance so this should stomp an rpi - here's ARM's performance numbers at release: https://community.arm.com/cfs-file/__key/communityserver-blo...

b) above the HDMI is displayport

c) usb is 3.0 all around, great for NAS-style devices

d) production module (for final product) uses 16gb emmc, whereas the devkit is microSD like rpi


The nano has a single USB3 PHY and 3x USB 2. The dev board may have a hub though. https://www.nvidia.com/en-gb/autonomous-machines/embedded-sy...


We are using RAPIDS, and unfortunately it requires Pascal as a minimum architecture. Unfortunate, as we have been looking for a cheap way to create proofs of concept for code we would eventually push to the cloud (for functionality rather than speed, for example) - this would have been ideal!


I use a pretty old computer for prototyping ML models. Could I use something like this to develop with, saving on buying an expensive laptop or renting a VPS?


The product link in the article leads to an ironic 404 message: "Even AI can't find this page!"

https://www.nvidia.com/en-us/autonomous-machines/embedded-sy...



$99.

With $50 shipping to France.

What a great deal /s

Or straight up €130 ($150) from Nvidia France. Not quite $99.


Precisely. I would love to see this succeed, but it's not $99 even in the US - with Arrow, there's tax taking it up to $120, and then the UK shipping cost is about the same as you found for France ($42-44 depending on option).

Feels like this could have been better managed - especially with the 16 week delivery time! Unlike the Coral which was announced, ordered and in my hands within three days!


I like the barrel jack for power on the development kit version. I wonder what sort of power requirements it has, as that seems to be an issue with the RPis. On a side note, that is a very impressive heatsink too!


What problems are you facing on the pi? The USB power to me is a big win. I already have a ton of cables sitting around.


My biggest problem is more of a mechanical issue: the darn micro USB connector is difficult to align (especially since it's hard to quickly see whether it's right side up), and after plugging it in and out for a while it becomes loose. As far as power itself, I have only run into catastrophic problems when attempting to use a handheld solar-powered battery bank and when I tried to use a car cigarette-lighter USB adapter. That being said, it is my understanding that RPis use a special power management scheme that actually throttles the processor without any indicator, including the lightning bolt, so maybe that lagginess could be power issues too? (Edit: there might be an indicator of the low-power throttle in the kernel log.)
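For what it's worth, the Pi's firmware does expose the throttle state via `vcgencmd get_throttled`, which prints a hex bitmask like `throttled=0x50005`. Here's a small sketch decoding it in Python; the bit meanings below are the commonly documented ones from the Raspberry Pi firmware docs (some newer firmware adds extra bits, e.g. a soft temperature limit), so treat this as illustrative rather than exhaustive:

```python
# Decode the Raspberry Pi throttle status word, as printed by
# `vcgencmd get_throttled` (e.g. "throttled=0x50005").
FLAGS = {
    0: "under-voltage now",
    1: "ARM frequency capped now",
    2: "currently throttled",
    16: "under-voltage has occurred",
    17: "ARM frequency capping has occurred",
    18: "throttling has occurred",
}

def decode_throttled(word: int) -> list[str]:
    """Return human-readable flags for each set bit in the status word."""
    return [name for bit, name in FLAGS.items() if word & (1 << bit)]

# 0x50005 has bits 0, 2, 16 and 18 set: currently under-voltage and
# throttled, and both conditions have occurred since boot.
print(decode_throttled(0x50005))
```

So a 0x0 reading means the supply has been fine since boot, while anything with bit 16 set means the supply has sagged at some point even if things look fine right now.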


I actually bought some small micro usb power dongles with switches on them. So you only plug it into the pi once, then plug the dongle into the actual power source. Also means you can turn it on/off without being forced to pull the plug.

Something like this: https://smile.amazon.com/LoveRPi-MicroUSB-Switch-Raspberry-F...


I just unplug it from the wall...


Unfortunately for me a lot of times that requires digging behind a piece of furniture :)


A Pi needs way more than 500mA, which is what a USB 2 port maxes out at, I think.

I think a Pi 3 needs at least 2 or 3 amps.


Most phone chargers now are 1.5-2A standard... With my first Pi (a B, not a B+, gen 1), it was pretty common for phone chargers to be 1A. I still remember overclocking it no problem. Just looking at the three chargers sitting next to me, they are 3A (phone), 1A (tablet) and 1.6A (IDK, probably a phone). I've had all of these for several years (you can probably guess the order). I think I'd be hard pressed to find one that was <1A in my house. 1A chargers are considered "slow" now.

For reference, documentation says Pi3 B needs 700mA - 1A[0] depending on peripherals (they recommend 2.5A, but that's hogwash). To match my experience, I've run B+'s overclocked (and overvolted) with a camera (+250mA) no problem with a 1.5A charger. If I remove the camera they run fine with a 1A charger (so we have a kind of bound there for "typical" usage +/- error in charger). They boot loop with the camera and a 1A charger.

So I'm a little confused by your statement. I've only ever run into a power problem once. And that was a few years ago when I was pushing the pi's CPU/GPU, adding a camera (doing object detection), and using an old USB charger. Swapped that out and problem solved.

I'm not saying that your problem doesn't exist, I just don't really understand how you see it as a problem or what your statement has to do with it. You're also not quite right about power outputs[1] - and let's be honest: do you think your phone charges at 500mA? Your numbers are for the USB spec's default, not what chargers actually supply (though I get the easy confusion). 500mA is what you'd get from plugging into a computer, not a charger. It isn't the cable that controls the power^; it is the supply. I for one rather like only needing one cable for everything.

[0] https://www.raspberrypi.org/documentation/hardware/raspberry...

[1] https://en.wikipedia.org/wiki/USB#Power-related_specificatio...

^ Well... wire gauges have limits and you can burn up the wire. But you're going to be pretty hard pressed to find a supply that accepts a USB and will also output enough power to burn the cable.


I never said anything about a phone charger.

Are you confused who you are replying to?


Maybe the reply was meant for me? Maybe it's just me, but because there is no off switch on an RPi I am constantly plugging and unplugging it. I have found the receptacle (the female part of the connector on the board) tends to get loose from that; that is where the loss of power seems to be coming from. I have run into this issue with my BLU Amazon phone also. I'm just not a fan of the micro USB connector I guess, but maybe it's just me and no one else has this problem.


Actually I don't know if it is just cheap cables, I am not sure if it is the male or the female part of the connection that is degrading. The next time I get into town I am going to purchase a "Premium" USB to micro cable and a new PSU and will see if that solves my issue. I am concerned that my past ebay and dollar store purchases may have corrupted my stock of cables with factory 2nds and rejects.


I shut it down via the commandline, but, that doesn’t really solve turning it on.


I'm replying to you because I don't know why you're bringing up USB power requirements as an issue.


It's interesting how we can see in real time the commodification of ML hardware. From my understanding, prior to Tensorflow, most serious ML projects involved clusters of NVIDIA GPUs, and production ML software was tightly coupled to CUDA. Google shifted the software landscape by pushing a multi-target ML platform.

Now, all of a sudden, NVIDIA is under intense pressure to keep up with new compute hardware looking to eat their lunch in the AI sector.


Just bought one. Been a while since I played with embedded systems (or ML for that matter) :) Anyone have any cool ideas for how to make this little toy useful?


- Smart camera for a doorbell? Train it to detect a person or a package?

- Smart camera to detect your posture, and message you when you slouch for more than 3 minutes?

- buy tons and tons of sensors and just hook them all up to see if you can detect anything at all? VOC sensors, heat, light, noise, see if any of these things can diagnose comfort, sleep, health issues in your life?


For all of that you can use a $30 Raspberry Pi... for some of that you can use a $5 Raspberry Pi Zero and get WiFi on board.


Self-driving RC cars?


Now I just need something to DIY with it. Any ideas?


Voice-controlled toilet, or toilet paper hanger, or toilet perfumerie; voice-controlled potted plant watering; voice-controlled head scratcher; voice-controlled backrub chair; voice-controlled pet feeder; I'm sorry, I'm just making fun of voice-control frivolities; voice-control voice-control-diy-ideas-generator.


self driving small car, nerf turret that shoots anyone who is not you, following drone, cat stalking drone, cat stalking self driving "rc" car, auto bird identification camera in your backyard "this is a bird of type X" with voice or screen with instance detection/segmentation


>self driving small car

https://www.amazon.com/dp/B07JMHRKQG

Classic Amazon: they're charging $399 for it, when obviously it's a Starfighter-style test for their AI engineer hiring pipeline. It would hugely increase their own expected revenue to give it away to anyone with an AWS account, but they couldn't resist skimming a couple bucks off of each sucker.


What gear would one best use to make those gadgets self-recharging?

On the non-charging idea side, there would be the AI network traffic analyzer (it just requires another 1G connector via USB).


Induction charging pad + receiver.


Prime candidate to add to my http://parallac.org collection!


This raises the question: what are the best price-to-quality-ratio cameras compatible with this board?

As I understand it, both the Jetson Nano and Google's Edge TPU/Coral Dev Board would work with the same set of cameras, having the MIPI CSI-2 interface. Is that right?

Many ML inference applications are using a camera, yet it's close to impossible to find something very affordable.


Depends what you want. The cheapest decent machine vision cameras that I know of are Basler's dart range, but they're still $100-300. Entry level is the DA2500-14uc (5MP 14fps).

You can also try places like Leopard Imaging or eCon systems. Even then, most of eCon's stuff is $150+ and their APIs feel a bit hacky.

This is all USB3. Unless you really need a MIPI solution (you may need an adaptor board to match the connector), USB is fine. Even a webcam might be good enough.


What qualities does a decent machine vision camera have, besides price? size? resolution? lens/sensor quality? infrared? frame rate? latency? ruggedness?

I have a bunch of inexpensive IP security cameras. I imagine the latency (~second) would be prohibitive for machine vision camera applications, where you probably are making some control decision immediately? I'm curious how else they'd compare.


Yikes that is pricey. Camera costs more than the board, right.


Are the drivers for the graphics, wifi, etc all FOSS? Or is it the usual disappointingly difficult to maintain Nvidia mess?


This is Nvidia, what do you think?


It runs Ubuntu. I doubt Nvidia wrote their own drivers for basic things.


I wonder how the emmc flash holds up over many writes, and what form of wear leveling it uses compared to a consumer class m2 interface, sata3 SSD. One of the main problems with rpi is the microsd media, even "industrial" cards cannot tolerate anywhere near the writes that a $50 SSD can take.


No dedicated memory for the GPU?


Yeah, I couldn't find any information on it. TX2 has 8GB of GPU RAM. I don't even understand how it will work on anything reasonable without any GPU RAM..


Tegra systems use a unified memory controller for both GPU and CPU RAM. This means the GPU can access essentially all system memory as video memory and there is no meaningful distinction between the two. This has always been the case -- similarly, the Xavier Jetson, and the TX2 do not have split GPU/CPU RAM either. They all have unified controllers, so the "8GB of GPU RAM" for the TX2 you mention is also "8GB of CPU RAM".

This thing essentially looks like a chopped down version of the TX1, considering the specs are all pretty much identical. These have 4GB of memory.

Most of the time you'll be running a stripped down headless Linux 4 Tegra on this device, so something like 80% of your memory can otherwise be dedicated entirely to your application (both GPU and non-GPU compute portions) anyway.


Truly amazing. I remember when my university purchased a CM2 for about $5 million. Was about 30 gflops.

http://www.tamikothiel.com/cm/


I hope to put it into a pi-top

https://accounts.pi-top.com/products/pi-top/

not sure if this fits


It won't work.

This doesn't look like it's the same form factor as the Pi. (Inside the pi-top are mounting points that align with the pi's form factor mounting holes.)

Also the pi-top's power switch and supply go through the Pi's GPIO pins so those would need to be a match as well.


I kind of miss the nvidia transformer line of android tablet, I am still running a TF101 as a light client.


I have a TF700T collecting dust on my workbench. It was a great machine for its time though! Probably the best looking display available on an Android tablet at its release, and amazing battery life.


How fast can it run the Inception model on a single image?


Fewer headphone jacks than the Creative Nomad. Lame.


Is there hope for Nouveau on these embedded chips?


To answer my own question, there is active work on this chip family, eg https://www.phoronix.com/scan.php?page=news_item&px=Tegra-X2...

So there's hope that it might be usable for f/oss enthusiasts at some point.


Sold out, or pre-order with $50 shipping. But it's definitely a nice piece of HW! Can't wait to see what clever people make with it.


I’m going to get one as a dumb terminal for SSH and a random browser or two. It could make for a really compelling HTPC device too.


From Nvidia's site:

>NVIDIA Jetson Nano modules will be available from distributors worldwide starting June 2019.


That's odd. The one I ordered said it was shipping at the end of March.


All this AI stuff is nice, but the important question is: what are the benchmarks as a RetroPie?


I really really need to get one of these now. Is anyone at GTC right now?



Could this be used for an offline voice assistant?


Probably. I recently compiled Mozilla's DeepSpeech for the Nvidia TX1 and it was a painful process. They don't have precompiled binaries for ARM64 + CUDA for DeepSpeech, so you have to compile Bazel (v18!), then TensorFlow from Mozilla's repo (v1.12), and then DeepSpeech. If I am remembering correctly, it ran pretty slowly on the TX1. I don't know how well it would run on this new device, or if a lighter-weight model is available somewhere.


What are people going to use the Jetson Nano for?


Robotics/drones seems like the obvious market.


why would you want to train on the device itself?


In some applications you may want to run a continual learning algorithm so it will continually train on new data as well as make inferences.
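As a toy illustration of that kind of on-device continual learning, here's a plain online SGD loop fitting a linear model as samples stream in. The target (y = 2x + 1 plus noise) and all names are made up for the example; nothing here comes from any particular framework:

```python
import random

# Toy continual learning: a linear model y ≈ w*x + b, updated one
# streaming sample at a time with plain SGD on squared error.
random.seed(0)
w, b, lr = 0.0, 0.0, 0.01

def update(x, y):
    """One online SGD step: nudge w and b against the prediction error."""
    global w, b
    err = (w * x + b) - y
    w -= lr * err * x
    b -= lr * err

# Stream of noisy samples from the (hypothetical) target y = 2x + 1.
for _ in range(5000):
    x = random.uniform(-1, 1)
    y = 2 * x + 1 + random.gauss(0, 0.1)
    update(x, y)

print(f"w={w:.2f}, b={b:.2f}")  # approaches w=2, b=1
```

The learning rate is exactly the knob the sibling comment worries about: too large and the updates diverge on noisy field data, too small and the model never tracks drift.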


I always wonder how one would guarantee stability/convergence when learning in the field. It sounds great, but how can this be pulled off reliably? Just dragging down the learning rate does not suffice in my opinion; in my experience, for effective training you should be at the boundary of stability and greedy updates.


My main computer is a laptop with an Intel UHD Graphics 620 card :) Having a small card like this to speed up training would be so cool.


To fine-tune the model to the specific customer's needs. For example, learning the parameters of the user's face.


So, engadget is yet another website that hasn't read the GDPR guidelines properly. I cannot be bothered to click through all of their nonsense to read their clickbait content.


$99 seems like a pretty good deal, am I missing anything? 4 gigs of RAM and _reasonable_ eMMC + PCI expansion could allow it to be a cheap casual use workstation, right?

How does it stack up with the RK3399 in the RockPro64? I'm assuming the GPU and software support is better?


From what I read, you would have to purchase the $129 board for eMMC support. The $99 board was microSD only.


What $129 board? For the Jetson Nano, I see only the $99 module or the $99 dev board; neither mentions eMMC. Do you mean the RockPro64? I see a $79.99 board that supports eMMC (but it isn't in the same league for AI tasks as the Jetson Nano, I think?)


Here's another potential use - crypto mining.


Not really. It's 472 gigaflops for $100 and not particularly energy efficient. It's useless for bitcoin by a large margin and for Ethereum & co. you can get a used GTX 1070 with 6.5 teraflops of compute for about $250.
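A quick back-of-envelope check of the compute-per-dollar gap, using only the figures quoted in this comment (472 GFLOPS for the $99 Nano vs ~6.5 TFLOPS for a ~$250 used GTX 1070):

```python
# Rough GFLOPS-per-dollar comparison from the numbers above.
nano = 472 / 99          # Jetson Nano: GFLOPS per dollar
gtx1070 = 6500 / 250     # used GTX 1070: GFLOPS per dollar

print(f"Nano: {nano:.1f} GFLOPS/$  vs  GTX 1070: {gtx1070:.1f} GFLOPS/$")
```

By these numbers the used GPU delivers roughly 5x the raw compute per dollar, before even accounting for the Nano's memory bandwidth limits, so mining on it makes little sense.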



