I am continually amazed that we're able to buy better and cheaper processors that no one could have dreamed of, at this power and cost, 50 years ago.
I estimate it would take me and you maybe 1 year to learn how to build / assemble most things you find in your house that cost $1000 in a store -- a couch, a rug, even a simple kitchen appliance (the dumb kind).
But a CPU / computer? I could not invent that given 10,000 years, and yet you can buy one for $100. Amazing.
There was a guy that decided to make a toaster from scratch [0]. Literally from scratch: as in, mining and refining ore, etc. It was a bit of a stunt [1], but it definitely highlighted the advantage of scale and industry we typically take for granted.
I used to talk to friends about "deep thinking" on what it took to make the simple object in front of them, which they took for granted.
My favorite example was a pen.
You're so abstracted from what it takes to actually make a simple pen.
Imagine it was post-apocalypse, and you're the only person left on earth - and in addition to all other humans who have disappeared, all pens/pencils have also disappeared.
You must build a pen - a ball point pen, to chronicle humanity.
Where do you start?
The simplest of objects have such a complex origin story.
It's probably not because they couldn't 'manage'; more likely it just wasn't economically interesting: they could import tips cheaply enough that it wasn't worth manufacturing their own.
I get the point he's trying to make, but really, if I'm alone in the woods and wanted to toast some bread, I'm pretty sure I could just hold it over a fire for a few minutes.
> So, firstly, yes, I realise toasting bread over a fire would’ve been a lot easier. But was a piece of toast (or designing a better toaster) really the point of this project?
You should read "From NAND to Tetris". You could totally build a CPU yourself after a year of study. Not something that can compete with a modern microprocessor of course, but then again you probably couldn't build a compressor for a fridge with the same quality as a factory either.
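Just for flavour, the early chapters boil down to something like this. This is my own toy Python sketch, not the book's HDL: every gate built from NAND, then an adder built from the gates.

    # Toy sketch of the "NAND to Tetris" idea: everything from one gate.
    def nand(a, b):
        return 0 if (a and b) else 1

    def inv(a):      return nand(a, a)
    def and_(a, b):  return inv(nand(a, b))
    def or_(a, b):   return nand(inv(a), inv(b))
    def xor(a, b):   return and_(or_(a, b), nand(a, b))

    def full_adder(a, b, carry_in):
        s1 = xor(a, b)
        return xor(s1, carry_in), or_(and_(a, b), and_(s1, carry_in))  # (sum, carry_out)

    # 3 + 1, one bit at a time (least significant bit first)
    a, b, carry = [1, 1, 0], [1, 0, 0], 0
    out = []
    for x, y in zip(a, b):
        s, carry = full_adder(x, y, carry)
        out.append(s)
    print(out)  # [0, 0, 1] -> binary 100 = 4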
Sorry to differ, but I think this is twisted by an era in which the liberal market was held in god-like status.
Take the rubber and metal out of the picture for now. To draw a dark line on a surface, you take the first bit of wood you find, grind it into a point, and burn it. You have a pencil.
I really believe that the free-market centuries made people believe it was the only or most efficient way to get an object made, just like people thought Java EE was the only way to build a web application in the early 2000s.
It’s not that the construction of some thing, anything, that can be used to write with is difficult. As you point out a piece of charcoal isn’t hard to make.
Milton Friedman’s point is that even something as simple and inexpensive as a pencil involves people all over the world working together to create it. Some mine graphite, some run ships to move the graphite, some manufacture paint, some grow rubber trees, etc. All of this activity, coordinated and made efficient by the market is behind even a simple thing like a pencil.
I agree on the goalposts but still differ on Friedman's point. Culture shifts into thinking you need objects, to the point of making you forget what you wanted in the first place. Friedman wants to marvel at the thought of his beloved market.
Friedman does wax lyrical about his beloved market toward the end of that video, but the main point I drew from his quote about the pencil is that it's absolutely mind-blowing to stop and contemplate the incredible cooperation and pre-existing systems required to construct so many of the mundane, dirt-cheap objects of the modern world.
The fact that someone could eschew mass-produced lead pencils in favour of a self-made writing implement doesn't diminish the fact that it's practically impossible for any one individual on the planet to ever actually build a pencil you can buy for the equivalent of a few seconds to minutes of labour. That pencil, and so many other mundane items like it, are artifacts beyond the crafting capabilities of any one human.
I feel that again misses the point. Sure, it's perfectly possible for a resourceful and knowledgable human to hand-engineer various tools from the environment. But to make something very much like an ordinary pencil made of wood, graphite, rubber, metal, and paint would be a monumental task, far beyond the cost of a pencil in our established society.
The fact is that globalization and industrialization have democratized the construction of literal artifacts. Consider the single-use plastic bottle, with a precisely machined screw neck and matching lid. It's lightweight, transparent, and will last for years. It would be exceedingly difficult to find and process the raw material to craft such an artifact from scratch, yet millions of them per day are used once and discarded.
Same goes for most office supplies, now that I think of it. And we haven't even touched integrated circuits yet.
I kinda see it: the value is in the efficiency of delegation, but IMO there are drawbacks that this 'efficient' perspective hides, namely that people come to believe you can't do anything without the market.
The complexity of a modern economy is more about efficiency than anything else.
If you want a pencil that uses graphite and an eraser, that’s actually fairly easy to construct with the correct raw materials. A single person in the right location and with the right knowledge could have made a single pencil 1,000 years ago, though the pencil would not have been worth the effort. Further, they could not have made 100,000 of them where I could actually buy 100,000 pencils by being part of the modern economy.
Friedman's example is more an illustration that when we want to design a replacement for a working system, there is often a lot more complexity than expected - reinvented wheels are often square. Your example ends up with a piece of charcoal that doesn't have an eraser, gives you slivers, gets charcoal all over, has to be burnt every time you need to sharpen it (with a market-produced lighter or matches), doesn't work with pencil sharpeners, and smells like smoke instead of that "freshly sharpened pencil" smell (yeah, that's a thing).
Your re-invented wheel is square.
Yes, Friedman wants us to examine how the market works - because it works better for these complex coordinations than any other system we've devised. The market has facilitated the discovery and communication of the user requirements, and the price mechanism has coordinated the work of all the producers/transporters/exchangers of all the raw ingredients and intermediate products as well as the final product, and has enabled an ecosystem that produces compatible pencil sharpeners, grips, etc.
And did anyone ever think Java EE was the only way? That was much more an example of a designed system to replace the messy evolved world of CGI/Perl/PHP.
You don't need a lighter to make fire, and you don't need to sharpen it; you regrind it.
J2EE is an example of mass adoption mistaken for an example of good. It was taught as the ultimate goal, with everyone marveling at the complexity of its remote objects.
PS: if I had to glorify something, it's globalization's incentive to improve metrology. Precision is what gives you modern things.
> I think this is twisted by an era where liberal market was held as god like status.
The author more than the era, but sure. (To the extent it's also true of the era, the author was more of a cause than an effect, being one of the leading evangelists of the market of his generation.)
Actually, lead pencils were all originally made in a single location; the outcome now is just due to modern manufacturing. If people wanted to build a decent pencil locally, it would be possible, just not economical.
That's not what was said. A "single person" cannot make a lead pencil. I agree. Ask yourself whether the Primitive Technology guy could make a pencil in the middle of the Australian rain forest. He can make a lot of great things, but in order to make a pencil, he would need additional tools, and those tools would be made by someone else, hence the statement that "a single person cannot make a pencil."
Note that multiple times in history, because of wars, societies independently developed pencil manufacturing when they did not have access to the usual resources, and substituted other materials.
See also Sam Zeloof's DIY silicon fab. Even using commercial equipment and supplies, the amount of work needed to make even the most rudimentary IC is quite staggering.
If you're not counting physical hardware, it can take even less time than that. I built a MIPS processor in Verilog over the course of a month or so back in college.
Everyone's missing his point. Yes, you could make a computer, but it would be an absolute toy compared to a Jetson. You could reasonably get close to the other technologies, including a fridge compressor, given a year of study. Your home-built CPU isn't going to touch the Jetson.
We had a processor design course at the university. Individual participants made 68k-complexity designs; groups went up to 286-class and more complex parts. It was 6-8 hours of workload every week for 4 months. A year is a lot of time! Manufacturing technology is another issue: the same design will deliver different results on 28nm and 160nm nodes.
What I wonder about is the 10x factor here - all of these designs have already been invented and are available for perusal. How many people could come up with something like this with nothing but a knowledge of mathematics?
I sometimes don't even know if I could invent the wheel if it wasn't already invented.
I doubt you'd even get as far as hand-making the brushless motor that powers the compressor. And that's just the start - the fluid dynamics to design an efficient compressor is years of study in itself.
It's easy to underestimate the complexity of everyday things. But note that fridges are 100 years old, and today's fridges are much better than the fridges of 100 years ago. It took the collective effort of the human race 100 years to get this far.
I recognize and respect the point, but there are some pretty major differences: raw material cost, automate-ability, and scale.
We have processes to get super-pure silicon in huge chunks that can be processed basically entirely by computers and robots from there on out (and in fact, people touching it would be actively bad). Yeah, sure, there's a huge upfront cost in terms of machines to handle and etch the silicon and so on, but you get to amortize it over millions and millions of units - a new fab was not created for the Jetson Nano.
Meanwhile, basically every appliance, furniture piece, etc. both requires more expensive raw materials and requires a decent amount of human intervention and, especially for furniture, expertise during manufacturing, meaning that cost per unit doesn't scale as well as electronics. Not to mention they all sell in lower volumes per up-front skilled person-hour dedicated to them.
Goods that cannot be manufactured by hand require specialized tools. These can be as simple as a hammer or as complex as a fully automated CNC machine. Nowadays the most desired goods can only be manufactured with those specialized tools, so if you want to manufacture even a single unit you need them. However, once a tool is built, its cost is the same no matter how many units you actually produce. Therefore it becomes more expensive per unit to produce less and cheaper to produce more. This is called economies of scale. You cannot build a CPU because building a single one is incredibly expensive per unit. A million? Dirt cheap (only $99 in this case).
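Back-of-the-envelope version of that, with all numbers made up just to show the shape of the curve:

    # Hypothetical numbers: fixed tooling/fab cost amortized over unit count.
    fixed_cost = 100_000_000   # fab, masks, tooling (made up)
    unit_cost  = 15            # silicon, packaging, test per chip (made up)

    for units in (1, 1_000, 1_000_000, 100_000_000):
        print(units, fixed_cost / units + unit_cost)
    # 1           -> 100,000,015 per unit
    # 1,000       ->     100,015 per unit
    # 1,000,000   ->         115 per unit
    # 100,000,000 ->          16 per unit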
Invent CPUs, probably not. That took tens of millions of hours across decades of iterations.
But it’s actually not impossible to create your own CPU from scratch. I’d suggest starting off with a $50 or less FPGA and start learning about the logical building blocks of a modern CPU. You can then advance to building up a basic CPU fairly easily. Creating a full featured ISA and then designing out the circuits and having the chip actually fabbed is not trivial, but I know of many hobbyists who have, so it’s certainly doable if you’re dedicated enough.
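Once you have the building blocks, even the fetch-decode-execute loop is less magic than it sounds. Here's a toy accumulator machine in Python with a completely made-up ISA, just to illustrate the idea; on an FPGA you'd express the same thing in Verilog or VHDL:

    # Toy accumulator machine: (opcode, operand) pairs, made-up ISA.
    LOAD, ADD, STORE, JNZ, HALT = range(5)

    def run(program, memory):
        acc, pc = 0, 0
        while True:
            op, arg = program[pc]                     # fetch
            pc += 1
            if op == LOAD:    acc = memory[arg]       # decode + execute
            elif op == ADD:   acc += memory[arg]
            elif op == STORE: memory[arg] = acc
            elif op == JNZ:   pc = arg if acc != 0 else pc
            elif op == HALT:  return memory

    # sum memory[0] + memory[1] into memory[2]
    print(run([(LOAD, 0), (ADD, 1), (STORE, 2), (HALT, 0)], [3, 4, 0]))  # [3, 4, 7]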
Depends which component, maybe speakers or headphones, but there seem to be top-quality open-source DACs that are very cheap to assemble or buy assembled. A google search should turn them up.
Last time I researched it, I found the headphone DAC that I'd buy if I wanted one, but I can't find it now. It had an interesting story around it. It was released anonymously in a blog and then suddenly, after posting dozens of posts per year, the author just dropped off the grid. No one knows what happened to him. When one of the parts went out of production, others made modifications and rehosted the source, even though the original author did not want derivative works, but that kind of gets voided when the original author is, for all intents and purposes, not on this planet anymore.
I think "audiophile grade" is an ill-defined target there. You can get headphones like the Superlux HD668B for practically nothing (they were running under $30 at one point), and they sound pretty damned good for a pair of $30 headphones. When I was a kid you paid $30 for a pair of crappy discman headphones let alone a nice pair of over-ear. Meanwhile you've got the name brand Audio Technica and AKG sets running $100-200 thanks to moving production to China.
I kind of think people are under-selling just how cheap the "95% solutions" have become nowadays. The really exotic gear made in smaller runs and with meticulous quality control is still expensive, because that's inherently expensive to do, but the consumer-grade stuff has really moved up in quality and down in price.
At this point if you're paying thousands of dollars for audio gear, you're either chasing that last 5% of quality, or you're buying a name and a placebo effect, or both.
Right on point. It's a difficult topic to navigate. Most of what you buy in stores is the name, royalties, and the R&D of digital circuits and a complex new IoT interface.
For true audiophile gear you pay for them to individually test each and every component, select only the ones that match, and verify the overall circuitry works as expected. All that manual labor to get the last 5% is very expensive, and completely irrelevant for the vast majority of people. They're going to stream their music from an inferior source anyway.
I have good but not audiophile-level gear at home which has been incrementally added to or replaced over the years. TBH, if I were starting over today for whatever reason, I'd probably buy some Sonos equipment or something along those lines and call it a day.
I agree with you, and really wonder if amelius was saying the same thing. The price/quality has improved tremendously, but the price/"audiophile brand" hasn't.
Whenever I hear a number of gigaflops or teraflops, I like to look up the history of supercomputers [0]. This $99 computer is faster (on paper) than the world's fastest supercomputer in 1996, a bit over 20 years ago. That's pretty cool.
Not quite. This is FP16, supercomputers are typically benchmarked using double precision flops, so FP64. So roll in a factor of ~4 there, or a couple of years.
FP16 has an 11-bit mantissa, whereas FP64 has 53 bits. Integer multiplication scales quadratically with the number of bits, so a factor of ~23 might be more accurate.
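Rough arithmetic behind that factor:

    # significand widths, including the implicit leading bit
    fp16_bits, fp64_bits = 11, 53
    print((fp64_bits / fp16_bits) ** 2)  # ~23.2, assuming multiplier cost scales quadratically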
Not to mention the size of those machines. I visited ASCI RED before they turned it on in 1997 or so and it was multiple long lines of refrigerator sized units. According to the guy who showed it to me, it also required an air conditioner that could have comfortably cooled the local college basketball arena.
Cheaper than the $150 Google TPU Dev Board, and looks like it can do training as well as inference. Also, doesn't require you to send your model to their company. Nice!
> Cheaper than the $150 Google TPU Dev Board, and looks like it can do training as well as inference.
Training is a slight win (although it's going to be too slow for anything useful really).
But it looks like the TPU will outperform this somewhat for inference. The 256 core Jetson (this has 128 cores) could run MobileNet-v2 at between 12 and 20 ms per image (depending on batch size)[1], while the USB TPU adapter takes 2.3ms per image [2]
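Inverting those latencies into throughput, to compare like with like:

    for name, ms in [("Jetson Nano, best case", 12), ("Jetson Nano, worst case", 20), ("Edge TPU USB", 2.3)]:
        print(name, round(1000 / ms), "images/sec")
    # ~83 and ~50 images/sec for the Nano vs ~435 images/sec for the TPU stick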
> Also, doesn't require you to send your model to their company.
Wouldn't this depend on the definition of "injury"? You could argue that under some definitions of "injury" Google violates its own "AI principles" already.
I suspect it's not going to be like that forever, although I could be wrong. No self respecting AI shop will send them anything production-grade if that's how they want to play it. It'll all be hobbyists and bullshit projects, which, I suspect, is not the clientele they want to attract if it's even a somewhat serious cloud/IoT play. Although this is Google, so it could be canceled in 6 months, too, after the project managers and TLs get their promotions.
Google should license this thing to someone else who can make it in good quantity and sell it really cheap, so others build it into their designs. I'd be pretty excited if that happened.
Horseshit in that bench right off the bat: I have a Google Edge TPU board right in front of me and its perf on SSD300 is 70fps, not 48. That's with the browser demo, which (as far as I can tell) includes realtime encoding of h264 for streaming. Almost twice as much as Jetson, and likely in a much more modest power envelope. NVIDIA is known for dishonesty in their benchmarks. Although TPU is, of course, a quantized play, and Maxwell will really suck for that, unless it's been tweaked specifically for this board.
OTOH, fp32 models are _much_ easier to work with, and this thing has more RAM so you can waste it on 32 bit weights, and NVIDIA's software toolkit is second to none. So the Jetson looks pretty tempting as well. I just wish they didn't try to insult my intelligence.
Interesting. I hadn't seen that, but the NVidia numbers on their own products seem credible. I do agree that the flexibility of having real CUDA cores is nice.
Yeah, but Google Edge TPU can do faster than realtime (about 70fps) 90-class object detection with SSD300. Just plug in a webcam. I wonder how this compares both in terms of the raw throughput and FPS per watt. If it compares favorably in even one of these without being too terrible in the other, this could be pretty cool.
there is no reason to train on that thing (except for zero shot classification demos). any cheap/lower end Nvidia gpu would do a much better job, and you would then transfer the model to the embedded thing.
Would it do better than a CPU for training? I do my dev on a MacBook Air and use AWS for training; if this cheap GPU will be a few times faster than my Air's CPU then I'd be willing to get it. I usually work with medium-sized models/data, too big for CPU but not needing a multi-GPU cluster.
Google Colab GPU instances are free and likely faster than the Jetson (and definitely faster than typical laptop or desktop CPU-only training); just save the models to your Google Drive.
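If you go that route, the Drive part is roughly this; drive.mount is the actual Colab helper, while the toy model and the path are placeholders for whatever you're actually training:

    import tensorflow as tf
    from google.colab import drive

    drive.mount('/content/drive')   # prompts for auth, then mounts your Google Drive

    # toy stand-in model; swap for your real training code
    model = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(784,))])
    model.compile(optimizer='adam', loss='mse')
    model.save('/content/drive/My Drive/my_model.h5')   # placeholder path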
Can you recommend a cheap/lower end Nvidia GPU for training for someone who just wants to play around with NNs a bit and isn't interested in peak performance? Could I get something for $100 that would be better than my CPU?
While you can in theory run CUDA on an Nvidia 1030 GPU (currently around $100) it's really not worth it. The cheapest card actually worth buying for ML is the 1050 Ti, which can be had for around $200.
If your budget is $100 I'd take that money and hunt around for various cloud based solutions. Most have introductory offers and/or cheap/free solutions for hobbyists with modest needs. $100 will go a long way on these services if you're careful. Once you've used up your $100 you'll have a much better idea of what, if anything, you actually need.
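If you want a quick sanity check of how far behind your CPU actually is, a crude PyTorch timing sketch (a big matmul as a stand-in for a training step) works on your laptop and on whatever free cloud GPU trial you pick up:

    import time
    import torch

    def bench(device, n=4096, reps=10):
        x = torch.randn(n, n, device=device)
        if device == 'cuda':
            torch.cuda.synchronize()
        t0 = time.time()
        for _ in range(reps):
            y = x @ x                      # stand-in for real training work
        if device == 'cuda':
            torch.cuda.synchronize()
        return (time.time() - t0) / reps

    print('cpu :', bench('cpu'))
    if torch.cuda.is_available():
        print('cuda:', bench('cuda'))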
Because it's a M.2 E slot, which has no SATA. Maybe there are SSD drives now that can use PCI and fit in the smaller E slots? There didn't used to be. If so, great...
If a PCIe device doesn't fall back to the available channels that is most likely a bug. An NVMe from a reputable manufacturer should work in this slot.
A more practical solution is to train your network on other beefy CPU/GPU/TPUs and convert+run that on the Nano using TensorRT. TensorRT supports import of models in Tensorflow (UFF) and ONNX formats.
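Roughly, that workflow looks like the sketch below; the model and file names are placeholders, and the exact trtexec flags may vary with your JetPack/TensorRT version:

    # On the training machine: export a trained PyTorch model to ONNX.
    import torch
    import torchvision

    model = torchvision.models.mobilenet_v2(pretrained=True).eval()   # stand-in for your model
    dummy = torch.randn(1, 3, 224, 224)
    torch.onnx.export(model, dummy, "mobilenet_v2.onnx")

    # On the Nano: build a TensorRT engine from the ONNX file, e.g. with the
    # bundled trtexec tool (check your TensorRT version for the exact flags):
    #   trtexec --onnx=mobilenet_v2.onnx --saveEngine=mobilenet_v2.plan --fp16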
We have been building cameras with the TX1 and TX2 for 3 years now. We have seen things you people wouldn't believe ;-)) Now, can we cut through the hype a bit? Ready? Get your rant mask on.
The Tegra (aka Jetson) chipsets are quite buggy at a silicon level. If you find a hardware bug, nVidia will not acknowledge it, or help you (unless you're Nintendo for example, buying millions of pieces, of course)
The tx1, tx2, etc. are a nested maze of blackboxes, which you do not and will not have access to. For example, the camera ISP is accessible by THREE companies in the whole world. If you want to utilize the ISP, you have to go through them. Will those companies help you? Yes, for a very large fee. Why should they make the fee lower? They have almost no competition. OK, so you manage to get a sensor driver from one of those three companies. The sensor driver is, probably, also very buggy and poorly written. Maybe you can rewrite it yourself. The company who wrote the original one might help you anyway with ISP tuning (again for a fee).
nVidia doesn't give a damn about hobbyists or smaller companies. They will willingly mislead you with specs that are outright false and throw your company under a bus without the slightest second thought. We have seen this repeatedly with nVidia - their corporate culture really tends toward arrogant douchebaggery, second perhaps only to GoPro.
So, after all that, it seems that nVidia has produced too much TX1 silicon, so they've crippled it, and put it in a package that they're selling for $99.
Hope it'll ship with better support than the first Jetson. The one they marketed with all that AI/Machine Vision stuff and then shipped without a camera driver.
This is just following Google's Edge TPU, which probably competes with a Raspberry Pi + Movidius stick. The market there is getting interesting.
This. And all of the third party camera solutions for TX2 cost $500+ when equivalent USB cameras with the same sensor cost $50. They really need to get their act together and sell some NVIDIA-sanctioned camera solutions at scale, and at price points similar to Raspberry Pi cameras.
A lot of third party carrier boards also have a complete sh_tshow of connectors. Auvidea's boards, for example, ship with a Raspberry Pi camera connector, but Raspberry Pi NoIR cameras don't have TX2 drivers, and there are hardly any other cameras that ship with that connector.
Seriously, NVIDIA: Please sell a TX2 devkit that has six non-weird 2-lane CSI connectors and some IMX290 or AR0521 or any other commonly-used robotics sensors for $100 each that plug in and "just work". It would make a lot of people happy to have something to at least start with, and pave the way for third party options to follow the same form factor, connectors, pinouts, board sizes, and so forth.
Hopefully they've actually fixed this, this time around.
According to their blog post, it actually has driver support for the RPi Camera Module v2 8MP (IMX219), and they'll be releasing their own Nvidia-sanctioned cameras through their partners.
It should hopefully just work.
No lowlight options at this time however, which means external CCTV is out of the question :(
Cool. Well hopefully some third parties will now create cameras all in the same form factor with the same pinout so that the choice of carrier board and camera can be independently made.
I'm actually hoping that 3rd party carrier boards standardize on the weird 6 csi connector thing, and was a little saddened to see that nvidia's devkit for the nano doesn't use it :(.
I bought a tx2 carrier from connecttech, and half their tx2 boards use a 30 pin connector used by leopard imaging, and the other half use the same ribbon connector that the nvidia devkit uses for its camera. I have $600 worth of camera which doesn't fit the carrier I chose ::face-palm::.
What's the market or use case for these camera drivers? It seems like the fancy direct-to-chip camera connections and driver development would be better aimed at sensor manufacturers, not hobbyists trying to build simple vision algorithms.
Why wouldn't you use the Ethernet port with a traditional GigE camera? I've also done some simple projects using an ordinary USB webcam.
I'm interested in this as a small form factor industrial computer. I've run Raspberry Pis and Intel NUCs in lots of manufacturing equipment where you need something that can run a few lines of Python and sit between the PLC and your device. Given this board's processing power, it might be interesting to plug into a little Dalsa or Basler camera and run a vision algorithm. You can already buy simple "vision sensors" from Keyence, Banner, Sick, etc. that integrate what I understand to be a simple ARM chip with the vision sensor and run basic vision algorithms, but they're often hamstrung by the tooling. The ability to perform and communicate results of arbitrary commands in applications where you don't need lots of processing power would be great.
What camera applications are people building that need 1.5 Gbps of camera data? I've built assembly lines that build several parts per second and never even come close to being limited by frame rate or network bandwidth.
I ordered a Coral USB accelerator (Edge TPU on a USB stick) to tinker with for the novelty and it just arrived this afternoon. I will say one giant gap between these two is that Google's Edge TPU-based products only support TensorFlow Lite.
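For reference, the conversion step is roughly this (TF 1.x-style API, placeholder paths); the Edge TPU then additionally wants the model fully integer-quantized and run through Google's edgetpu_compiler:

    import tensorflow as tf

    converter = tf.lite.TFLiteConverter.from_keras_model_file("model.h5")  # placeholder path
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    with open("model.tflite", "wb") as f:
        f.write(converter.convert())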
Yeah! Looks exciting. I will say that the Raspberry Pi is USB 2, so bandwidth between, say, your camera and the Movidius stick is limited. The Google device and this one should have proper high-bandwidth interfaces.
I have a Jetson TX2; at $99, I'm super tempted to buy one to see how it compares. That being said, I would definitely buy this if there is a way to make a Plex server work with transcoding.
I was tempted during the $100 sale late last year, but I was worried about it becoming obsolete (I know it's crazy, since it's been out for less than 5 years). I like the idea of having my own Linux server though. That way I can set up my own services as needed depending on my usage (I normally tack on a samba server on top of my plex server).
Mainline Linux will run but all the fancy stuff won't work. They ship 4.9 kernel which is two years old (though it's LTS). After all it's just a standard ARM board with a fancy GPU and peripherals.
It's Nvidia, so you have to decide whether you want to give ANY money at all to a company that ships binary blobs or not (whether or not this individual product uses them). I'm sure there are product teams at Nvidia that ship "open" products but unfortunately there doesn't seem to be a way to support the good parts of a corporation while neglecting the parts that ship closed binary blobs.
This has an M.2 Key E connector which has PCIe ×2, USB 2.0, I2C, SDIO, UART and PCM. Hypothetically an M.2 PCIe drive could work with a Key M to Key E adapter, but I couldn't find one in a cursory search.
I bought an Odroid once. It was DOA and at the insane shipping prices I just took it as a lesson learned to never buy Odroid again. I've gone through probably 20 Pi's and never had one DOA or even die for that matter.
PCIe should be able to do SATA on here - they showed a reference design running PCIe-based SATA devices in their blog post, which was recording 8x1080p30 H.264 to a HDD.
Check out the odroid hc1 or hc2. You still have to boot from the microSD card, but you could put most of the OS on the drive connected to the SATA port.
On the TX2 they had the Auvidea expansion board, which was tiny and had an M.2 socket. It did have some stuff not working in the kernel though, so you had to make some changes and rebuild it for it to work. A pain in the butt.
the pricepoint is good, relative to the jetson tx1/2 (299-749 usd) and xavier (1099 usd).
some of my notes, there is a massive heatsink on this thing, probably for both the a57 cpu and maxwell gpu, this will make your case a bit larger than say the rpi.
not sure how the a57 compares to rpi’s a53, i assume both are armv8, quad-core.
the inputs seem identical to rpi 3 model b+, hdmi, ethernet, 4 usb (seems 2.0), mini usb, there’s also an additional usb looking port on top of hdmi.
storage seems the same as rpi, except that it’s built in 16G mmc flash whereas rpi you need to separately plugin a microsd card.
overall this has potential if you need to do more gpu intensive work, i like the form factor.
The on-board Ethernet on the TX1 is USB3 integrated, IIRC. The system having PCIe is not necessarily a sign of how it gets used or what is using it, unfortunately.
We are using RAPIDS.ai, and unfortunately it requires Pascal as a minimum architecture. That's a shame, as we have been looking for a cheap way to create proofs of concept for code we would eventually push to the cloud (for functionality rather than speed, for example) - this would have been ideal!
I use a pretty old computer for prototyping ML models. Could I use something like this to develop with, saving on buying an expensive laptop or renting a VPS?
Precisely. I would love to see this succeed but it's not $99 even in the US - with Arrow, there's tax taking it up to $120, and then the UK shipping cost is about the same as you found for France ($42-44 depending on option).
Feels like this could have been better managed - especially with the 16 week delivery time! Unlike the Coral which was announced, ordered and in my hands within three days!
I like the barrel jack for power on the development kit version. I wonder what sort of power requirements it has, as that seems to be an issue with the RPis. On a side note, that is a very impressive heatsink too!
My biggest problem is more of a mechanical issue: the darn micro USB connector is difficult to align, and after a while of plugging in and out (especially since it's difficult to quickly see if it's right side up) it becomes loose.
As far as power itself, I have only run into catastrophic problems when attempting to use a hand-held solar-powered battery bank and when I tried to use a car cigarette lighter USB adapter. That being said, it is my understanding that RPis use a special power management scheme that actually throttles the processor cycles without any indicator, including the lightning bolt, so maybe that lagginess could be power issues too? (Edit: there might be an indicator of the low-power throttle in the kernel log.)
I actually bought some small micro usb power dongles with switches on them. So you only plug it into the pi once, then plug the dongle into the actual power source. Also means you can turn it on/off without being forced to pull the plug.
Most phone chargers now are 1.5-2A standard... With my first pi (a B, not a B+, gen 1), it was pretty common for phone chargers to be 1A. I still remember overclocking it no problem. Just looking at the 3 chargers sitting next to me, they are 3A (phone), 1A (tablet) and 1.6A (IDK, probably a phone). I've had all these for several years (you can probably guess the order). I think I'd be hard pressed to find one that was <1A in my house. 1A chargers are considered "slow" now.
For reference, documentation says Pi3 B needs 700mA - 1A[0] depending on peripherals (they recommend 2.5A, but that's hogwash). To match my experience, I've run B+'s overclocked (and overvolted) with a camera (+250mA) no problem with a 1.5A charger. If I remove the camera they run fine with a 1A charger (so we have a kind of bound there for "typical" usage +/- error in charger). They boot loop with the camera and a 1A charger.
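The arithmetic from those numbers:

    base_max, camera = 1.0, 0.25   # amps, the documentation figures above
    print(base_max + camera)       # 1.25A: over a 1A charger, comfortably under a 1.5A one
    print(5 * 2.5)                 # 12.5W: what the "recommended" 2.5A supply can deliver at 5V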
So I'm a little confused by your statement. I've only ever run into a power problem once. And that was a few years ago when I was pushing the pi's CPU/GPU, adding a camera (doing object detection), and using an old USB charger. Swapped that out and problem solved.
I'm not saying that your problem doesn't exist, I just don't really understand how you see it as a problem and what your statement has to do with it. You're also not quite right about power outputs[1], which, let's be honest: do you think your phone charges at 500mA? Your numbers are for signal, not power (though I get the easy confusion). 500mA is like what you'd get from plugging it into the computer, not a charger. It isn't the cable that controls the power^, it is the supply. I for one rather like only needing one cable for everything.
^ Well... wire gauges have limits and you can burn up the wire. But you're going to be pretty hard pressed to find a supply that accepts a USB and will also output enough power to burn the cable.
Maybe the reply was meant for me? Maybe it's just me, but because there is no off switch on an RPi I am constantly plugging and unplugging it, and I have found the receiver (female) part of the connector on the board tends to get loose from that; that is where the loss of power seems to be coming from. I have run into this issue with my BLU Amazon phone also. I'm just not a fan of the micro USB connector I guess, but maybe it's just me and no one else has this problem.
Actually I don't know if it is just cheap cables, I am not sure if it is the male or the female part of the connection that is degrading. The next time I get into town I am going to purchase a "Premium" USB to micro cable and a new PSU and will see if that solves my issue. I am concerned that my past ebay and dollar store purchases may have corrupted my stock of cables with factory 2nds and rejects.
It's interesting how we can see in real time the commodification of ML hardware. From my understanding, prior to Tensorflow, most serious ML projects involved clusters of NVIDIA GPUs, and production ML software was tightly coupled to CUDA. Google shifted the software landscape by pushing a multi-target ML platform.
Now, all of a sudden, NVIDIA is under intense pressure to keep up with new compute hardware looking to eat their lunch in the AI sector.
Just bought one. Been a while since I played with embedded systems (or ML for that matter) :) Anyone have any cool ideas for how to make this little toy useful?
- Smart camera for a doorbell? Train it to detect a person or a package? (rough sketch after this list)
- Smart camera to detect your posture, and message you when you slouch for more than 3 minutes?
- buy tons and tons of sensors and just hook them all up to see if you can detect anything at all? VOC sensors, heat, light, noise, see if any of these things can diagnose comfort, sleep, health issues in your life?
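For the doorbell idea, a minimal sketch with OpenCV's DNN module; the MobileNet-SSD Caffe files and the VOC "person" class index are assumptions you'd swap for whatever model you actually deploy:

    import cv2

    # Pretrained MobileNet-SSD (Caffe) files, downloaded separately.
    net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt", "MobileNetSSD_deploy.caffemodel")
    PERSON = 15   # "person" class index in the VOC-trained model

    cap = cv2.VideoCapture(0)     # whatever camera the doorbell uses
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 0.007843, (300, 300), 127.5)
        net.setInput(blob)
        detections = net.forward()
        for i in range(detections.shape[2]):
            confidence = detections[0, 0, i, 2]
            class_id = int(detections[0, 0, i, 1])
            if class_id == PERSON and confidence > 0.5:
                print("someone at the door", confidence)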
Voice-controlled toilet, or toilet paper hanger, or toilet perfumerie; voice-controlled potted plant watering; voice-controlled head scratcher; voice-controlled backrub chair; voice-controlled pet feeder; I'm sorry, I'm just making fun of voice-control frivolities; voice-control voice-control-diy-ideas-generator.
self driving small car, nerf turret that shoots anyone who is not you, following drone, cat stalking drone, cat stalking self driving "rc" car, auto bird identification camera in your backyard "this is a bird of type X" with voice or screen with instance detection/segmentation
Classic Amazon: they're charging $399 for it, when obviously it's a Starfighter-style test for their AI engineer hiring pipeline. It would hugely increase their own expected revenue to give it away to anyone with an AWS account, but they couldn't resist skimming a couple bucks off of each sucker.
Depends what you want. The cheapest decent machine vision cameras that I know of are Basler's dart range, but they're still $100-300. Entry level is the DA2500-14uc (5MP 14fps).
You can also try places like Leopard Imaging or eCon systems. Even then, most of eCon's stuff is $150+ and their APIs feel a bit hacky.
This is all USB3. Unless you really need a MIPI solution (you may need an adaptor board to match the connector), USB is fine. Even a webcam might be good enough.
What qualities does a decent machine vision camera have, besides price? size? resolution? lens/sensor quality? infrared? frame rate? latency? ruggedness?
I have a bunch of inexpensive IP security cameras. I imagine the latency (~second) would be prohibitive for machine vision camera applications, where you probably are making some control decision immediately? I'm curious how else they'd compare.
I wonder how the emmc flash holds up over many writes, and what form of wear leveling it uses compared to a consumer class m2 interface, sata3 SSD. One of the main problems with rpi is the microsd media, even "industrial" cards cannot tolerate anywhere near the writes that a $50 SSD can take.
Yeah, I couldn't find any information on it. TX2 has 8GB of GPU RAM. I don't even understand how it will work on anything reasonable without any GPU RAM..
Tegra systems use a unified memory controller for both GPU and CPU RAM. This means the GPU can access essentially all system memory as video memory and there is no meaningful distinction between the two. This has always been the case -- similarly, the Xavier Jetson, and the TX2 do not have split GPU/CPU RAM either. They all have unified controllers, so the "8GB of GPU RAM" for the TX2 you mention is also "8GB of CPU RAM".
This thing essentially looks like a chopped down version of the TX1, considering the specs are all pretty much identical. These have 4GB of memory.
Most of the time you'll be running a stripped down headless Linux 4 Tegra on this device, so something like 80% of your memory can otherwise be dedicated entirely to your application (both GPU and non-GPU compute portions) anyway.
This doesn't look like it's the same form factor as the Pi. (Inside the pi-top are mounting points that align with the pi's form factor mounting holes.)
Also the pi-top's power switch and supply go through the Pi's GPIO pins so those would need to be a match as well.
I have a TF700T collecting dust on my workbench. It was a great machine for its time though! Probably the best looking display available on an Android tablet at its release, and amazing battery life.
Probably. I've recently compiled Mozilla's Deepspeech for the Nvidia TX1 and it was a painful process. They don't have precompiled binaries for ARM64 + CUDA for Deepspeech, so you have to compile Bazel (v18!), Tensorflow from Mozilla's repo, V1.12, and then deepspeech. If I am remembering correctly, it ran pretty slowly on the TX1. I don't know how well it would run on this new device, or if a lighter weight model is available somewhere.
I always wonder how one would guarantee stability / convergence when learning in the field. Sounds great, but how can this be pulled off in a reliable way? Just dragging down the learning rate does not suffice, in my opinion; in my experience, effective training needs you at the boundary of stability, with greedy updates.
So, engadget is yet another website that hasn't read the GDPR guidelines properly. I cannot be bothered to click through all of their nonsense to read their clickbait content.
$99 seems like a pretty good deal, am I missing anything? 4 gigs of RAM and _reasonable_ eMMC + PCI expansion could allow it to be a cheap casual use workstation, right?
How does it stack up with the RK3399 in the RockPro64? I'm assuming the GPU and software support is better?
What $129 board? For the Jetson Nano, I see only the $99 module or the $99 dev board; neither mentions eMMC. Do you mean the RockPro64? I see a $79.99 board that supports eMMC (but isn't in the same league for AI tasks as the Jetson Nano, I think?)
Not really. It's 472 gigaflops for $100 and not particularly energy efficient. It's useless for bitcoin by a large margin and for Ethereum & co. you can get a used GTX 1070 with 6.5 teraflops of compute for about $250.