Docker Acquires Tutum (docker.com)
197 points by samber on Oct 21, 2015 | 65 comments


I try not to comment on this too much, but the text on the Docker site is stupidly hard to read. The font color, #7A8491, has a contrast ratio of 3.8/1 against white (black on white has a ratio of 21/1), which is barely above the WCAG minimum for _large_ text (18 point, or 14 point bold), and the bar is effectively higher for the thin-stroke text the Docker page uses (Helvetica Neue Thin).
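For reference, the numbers above follow from the WCAG 2.0 relative-luminance formula; a minimal sketch, using the hex values quoted in this thread:

```python
# Sketch of the WCAG 2.0 contrast-ratio calculation behind the figures
# above (#7A8491 body text against a white background).

def channel(v):
    """Linearize one sRGB channel (0-255) per the WCAG 2.0 formula."""
    c = v / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color):
    """Relative luminance of a '#RRGGBB' color."""
    r, g, b = (int(hex_color[i:i + 2], 16) for i in (1, 3, 5))
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast(fg, bg):
    """Contrast ratio between two colors, always >= 1."""
    lighter, darker = sorted([luminance(fg), luminance(bg)], reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast("#7A8491", "#FFFFFF"), 1))  # 3.8, as quoted above
print(round(contrast("#000000", "#FFFFFF"), 1))  # 21.0 for black on white
```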

Fix this, please, Docker. A few more points towards black isn't going to destroy the look and feel of your page.


Every time somebody brings this up, I always think about this website: http://contrastrebellion.com/


And we come full circle (from their website):

http://thumbsnap.com/i/djeyW72B.png?1021


Beautiful site, but their call to action at the bottom leaves a lot to be desired. I want to "join" something that feels broader than what is projected at the bottom (a text-invite w/ a tweet link).


Indeed — just a couple lines of CSS and this page would be so much easier to read, as right now you really have to concentrate on the very act of figuring out what is on the page:

Before: http://i.imgur.com/usg7GfW.png

After: http://i.imgur.com/chvY7V5.png


Thanks for the feedback, I've sent it along.


Docker clearly doesn't care about design. That much is obvious. They've raised heaps of money and have yet to improve their documentation, their main website, or anything regarding the UX of their products and content.


They have the same problem on Docker Hub. I opened an issue a few months ago: https://github.com/docker/hub-feedback/issues/110


I wonder what you would say about the accessibility standards of this website? :-)


The contrast is excellent (19.3/1); the font size is a bit small by default, but it zooms acceptably on the desktop. It's missing ARIA and other accessibility tags, but the layout is fairly straightforward compared to the DOM tree.


This seems to be a good move towards a more stable revenue generator for Docker, the company, but I'm more interested in what the long-term Docker ecosystem implications are.

I think Docker is just passing out of early-adopter status in terms of actual production usage (it's much further along in its lifecycle for dev), and having one of many cloud Docker providers owned by Docker itself might have a chilling effect on other 'container in the cloud' providers using Docker as their primary container format/platform.


Agree with the sentiment. Docker needs to build a product around the runtime [build, deploy, manage] and then monetize that. It's the only long-term play that makes sense. Interestingly, I also see them as the only company after Red Hat truly able to take on support subscriptions and perhaps drive huge, steady revenue streams from that. The reason I say that is because this is really the next logical OS shift. Docker, containers, etc. will be that pervasive next layer, and you can bet companies will pay for the support.


I mean, AWS, GCE, and Microsoft are already on board, and AFAICT all the little container-centric providers are 100% Docker. Who else needs to be convinced?


Well, I would say all of those companies are convinced of the open source container engine itself, but much less around Swarm, Compose, and the other components.

In particular, Mesos seems super strong in the orchestration/scheduling layer, and it's not clear that Docker will be able to dislodge them.


I honestly think it solves a different problem. Apple runs Siri on the biggest Mesos cluster in the world (pretty much a verbatim quote from MesosCon 2015; I was there). Twitter runs clusters of over 10k physical nodes.

Swarm and Compose are cute, but they simply aren't made for that kind of massive scale. The flip side, of course, is that they are incredibly easy to set up, and they handle a very large number of workloads reasonably well.

I manage Mesos clusters as part of my $day_job. There is a certain level of setup required, and if you don't need the scale, you simply don't need Mesos. I really see them as complementary in the longer term. Perhaps a Swarm Mesos framework could be written. It is highly unlikely that Docker will ever do resource scheduling better than Mesos.


Great points.

There is a Swarm/Mesos integration whereby the Swarm API is offered to the developer, but it uses the Mesos framework to do the scheduling.

It works, and there's more to come in terms of demos around the corner.


And that is the sweet spot: you get the ease of Swarm with the scalability of Mesos. Win/win.


Both Microsoft and Amazon have deeply integrated Swarm & Compose, with more coming.

Both Microsoft and Amazon offer the Docker Subscription service on their respective Marketplaces and have active campaigns to promote them.

There are Docker Machine drivers that operate with every major cloud provider, allowing you to spin up new Docker nodes in seconds.

And as mentioned elsewhere in the thread, swarm integrates with Mesos and you can imagine any pure resource scheduler/framework can be integrated. Our goal is to provide the Docker experience to developers while being able to target disparate infrastructures, some of which are built purely with Docker and others built with an aggregation of tools.


The problem, in my mind, is that other container formats that have been in the shadows due to Docker's huge mindshare could easily supplant Docker if other major providers start shifting their language away from 'Docker containers' to just 'containers' (witness the existing language around ECS from Amazon) and pointing to a different format as their de facto standard.

Hosting companies will do what's in their best interest, and competing against an 'official' Docker container hosting solution might be less appealing than telling customers about their own system that uses (what may or may not be) a 'better' container format.


Well, the bigger problem is if there end up being multiple container formats and everyone doesn't adopt the same one. Hopefully when everyone talks about "containers" in the future, they are referring to OCI-compatible ones.

I don't think companies should want to lock you in with container formats. The ones that do should die. They should lock you in with services; IBM should lock you into its cloud by making you depend on the Watson API, for instance.


I tried Tutum a couple of months ago. The onboarding experience was awesome, the free image builder was super fast, there were metrics for my processes everywhere; I loved it. The struggle started when I deployed a real app with a little workload: after a couple of hours the metrics didn't work at all, processes got stalled, and the whole UI became useless because I had zero visibility into what was going on.

I switched to Heroku only to realize that I had the same problem there too. Obviously it was an issue in my app, but at least Heroku gave me a specific R14 error code and a description of what was happening, and I finally knew what I was dealing with. For the next 48h, while I was debugging the memory leak, I had my dynos switched to 1X to get even more resource metrics; once the issue was solved, I switched my dynos back to hobby.

I'm considering going back to Tutum now that I have deferpanic installed and configured in my app, and my Heroku bills are around 100 USD monthly (20 USD SSL endpoint x 3 + 7 USD hobby dynos x 3 + 22.50 USD Compose RethinkDB). But I was shocked to realize how much value a mature PaaS can deliver, even for a hobby-ish app like mine.


I did a quick smoke test to see if Tutum passes muster. My smoke test for this kind of tool is whether they have a solution for deploying MongoDB as a production-level service with sharding, or at the very least replica sets. Like so many other “let’s do the easy part and stop there” companies, they have a template for starting a single development MongoDB node, which would be easy enough to do myself. I want a tool that has a repository of templates for making formations of very hard things easy. OpenShift is at least working on it: https://github.com/openshift/mongodb Their replica set version is not able to persist data permanently until Kubernetes figures out how to attach separate persistent volumes to pods in the same service. Unfortunately, Amazon is again the only game in town that does exactly what I want: https://aws.amazon.com/blogs/aws/mongodb-on-the-aws-cloud-ne....

If I want to run locally, I have to ditch Docker entirely and just use Ansible: https://github.com/ansible/ansible-examples/tree/master/mong...


I'm not trying to be pedantic, but the phrase is "passes muster." https://en.wiktionary.org/wiki/pass_muster

But "passes mustard" did give me a pretty funny visual image. :)


Ha, well that was stupid of me :)


From my experience, running stateful applications in containers should be avoided given the current state of things. We've deployed Elasticsearch, Kafka, etc. container-less, but use containers for all of our stateless services.


So many people say that, but if you can't scale anything stateful, what is the point of all of this? Often it is the database that needs to scale the most, since business applications spend a lot of time there and much less time actually performing CPU-intensive calculations. I do work for a very large organization that can afford MongoDB as a service, but for startups without large sums of money, 1400 dollars per month for Amazon to run MongoDB, or 1000 a month for MongoLab to do it, seems high. Plus, I want to colocate my data with my apps to limit latency. It just seems like "running stateful services in containers is a bad idea" is parroted over and over because it's such a hard problem that few people seem capable of solving. Instead they spin up 1000 stateless containers that print hello world and become impressed with themselves.


Indeed. This is why we've developed the plugin ecosystem, and a proper volumes API in docker.

With plugins, your volume can live either locally or on some other storage management system, like Ceph, Gluster, S3, etc. These are all working solutions today: `docker run -v importantdata:/var/lib/mydata --volume-driver ceph`. Run that on two hosts and you get the same data.

In 1.9 there is the volume API, which allows you to wire this up prior to `docker run`: `docker volume create --driver ceph --opt foo=bar --name importantdata`, then `docker run -v importantdata:/var/lib/mydata`.


Great, now have someone at Docker package that up in a Docker Compose / Docker Swarm / Docker Machine template that starts up a MongoDB replica set, and I am happy as a clam.
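For the curious, a hedged sketch of what such a template might look like in the Compose v1 syntax of the time; the service names, the mongo:3.0 image tag, the replica set name "rs0", and the host volume paths are all illustrative assumptions, not an official Docker or Tutum template:

```yaml
# Hypothetical docker-compose.yml (Compose v1 syntax) for a 3-member
# MongoDB replica set. All names and paths here are assumptions.
mongo1:
  image: mongo:3.0
  command: mongod --replSet rs0
  volumes:
    - /data/mongo1:/data/db
mongo2:
  image: mongo:3.0
  command: mongod --replSet rs0
  volumes:
    - /data/mongo2:/data/db
mongo3:
  image: mongo:3.0
  command: mongod --replSet rs0
  volumes:
    - /data/mongo3:/data/db
```

Even with a file like this, you would still have to connect to one member and run rs.initiate() to bootstrap the set, and each host path is tied to its node; automating those parts is exactly the hard bit that templates tend to skip.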


Agreed! (We're running a few hundred virtual machines; stateless in docker containers, stateful bare on the VM).


I wonder if your Mongo-based test is really the best test. I've never used it, but Mongo has a poor reputation for... actually everything, including sharding/replication.


VMware can do the same.


Do I have this right?

In 2011 dotCloud launches as a platform-as-a-service company.

In 2013 dotCloud releases docker, software based on the lessons they learned building their PaaS product.

In 2013 Tutum starts to build a PaaS based on Docker.

In 2014 Docker (renamed from dotCloud) sells their PaaS to cloudControl.

In 2015 Docker buys Tutum.


Tutum is not a real PaaS; it runs containers in the user's infrastructure and can be easily adapted to run on-premises.


:-) not far off


Tutum mentions they manage persistent volumes and handle mapping them to containers at runtime, presumably across different nodes/hosts.

Anyone know how that actually works? Is it similar to Flocker at all?


At the last DockerCon EU I had a chat with Borja Burgos, and he said that they were planning to use Flocker for that.

Not sure if that's the solution they finally went with ;-)

Congrats to all teams!


Yeah, I know that they are still looking into integrating with Flocker. Not there yet, though.


I've used Tutum for a little while now and I love it. I'm just wondering what Docker's plans are for when Tutum leaves beta. It would be nice if it stayed free. The potential pricing seems decent enough at $7/node/month, but it could be better.

edit:

updated the tentative pricing image URL, as it looks like someone has deleted it off the Tutum Slack team site.

http://i.imgur.com/i5k8nkr.png


"The requested file could not be found."


I've updated my post with a new url.


I jumped on Tutum immediately after it was announced and installed it across my test infrastructure, because it scratched an itch that was particularly hard for me. I like to think of myself as a good user / developer, and I submitted a few tickets with some bug reports and feedback, along with a few questions trying to clarify how they were doing what they did.

I have almost never been treated like such a piece of shit in my life. The attitude I got from multiple people on the Tutum team really drove home that they don't give a shit about loners like me, don't have time for my bug reports, and don't care what my questions are. It really left me thinking I must have said something inadvertently that totally offended them, but I couldn't find it anywhere when I looked.

I really hope I just got some of their team on a one-off bad day, or those people have since left, or something. It was really strange, and of course I immediately stopped using their software.


Amazon Elastic Container Service is a robust offering and I can't see why anyone would choose Tutum over ECS.

Yes, you can run Tutum on AWS, but why? When products can't substantially differentiate themselves from AWS, the customer will choose AWS. Customers don't want to be stuck in a choice paradox.


Because failover.

Seriously - Tutum lets me rebuild my infra on Digital Ocean quickly if/when AWS dies. ECS, by contrast, just goes POOF in that scenario.


I'm an avid Tutum fan and an early adopter. Explore and play with this tech; it just gets nicer the more you use it. You can run Tutum on your own machines (including your laptop behind a firewall/NAT) with the Bring Your Own Node configuration. I have moved systems from AWS to Digital Ocean with a couple of clicks. I currently have regional failover on Digital Ocean, which took me a few minutes to set up.

The whole user experience is awesomely simple. If I had to describe these guys I'd say they are the Digital Ocean of the docker ecosystem.

Their 'Stacks' are so similar to docker-compose that only minor edits are required, and I expect with this acquisition even that difference will go away in time.

And yes, finally a revenue stream for Docker I can believe in!


It's interesting, and logical, that Docker would buy something like this. I'm only just getting into Docker, but everyone I talk to says something along the lines of "no one runs bare docker containers in Production".

I've also been using Tutum, and it's made life really easy, especially on my BYONode. I just hope they don't price the product out of reach for us hobbyists.


> "no one runs bare docker containers in Production"

Can you elaborate on this?


Only a matter of time before Red Hat acquires Docker...


Yuck! Don't say such things.


I somehow felt like the Docker team was closer with the Rancher team, so I thought Docker might acquire Rancher at some point. I think this is a move to produce revenue in the future, while Rancher remains yet another open-source project to monetize.


We spend a lot of time with all companies in the ecosystem, and feel it's important to stay close and to support both their technical and business interests.

We're engaged with over 300 companies that are technically integrating into Docker to enhance their product offerings, with 6-10 added every week.


I'd like to know what Tatum offers in comparison to fabric8.io. It seems the "video intro" and "take it for a spin" links are the same, and not a video introduction. That's disappointing. Maybe someone in charge of the page can fix it please?

Searching for "Tatum video introduction" on a search engine only returns results about a certain movie star, which is not terribly helpful.


The name of the company is Tutum, not Tatum, maybe that will help your search results.

Here is another youtube video: Getting Started with Tutum https://www.youtube.com/watch?v=fnV92aHLmyE


Wow, I totally misread that. I bet I'm not alone :) Thanks for the link KenCochrane.


Well, that was fast... After the announcement of the Amazon Container Registry, Docker had to make some move, and this is a great one.

Congrats to the Tutum team; they have built a really nice product that makes it really easy to build and maintain container pipelines.

Can't wait to see the integrations with the other docker tools...


Speaking of Amazon, I thought that Docker was going to hit their bottom line, since people could now easily fully utilize their EC2 instances and thus cut into Amazon's already thin margins. However, Amazon jumped past this problem with Lambda, because in the end I don't want to run an operating system. I don't even want to run Tomcat. I just want to run a function in the cloud. Docker makes it easier to shovel around your application while lambda is the next phase of cloud development.


> Docker makes it easier to shovel around your application while lambda is the next phase of cloud development.

Yes, Lambda and friends (AWS Mobile Hub, JAWS stack, etc.) are the next paradigm, but it's not ripe yet. For instance, the newly introduced cron jobs only allow 5-minute periods as the minimum, when for one of my projects I needed full cron precision (I was starting to build on Lambda and had to return to Docker...)

Maybe in 6-12 months things will be fully ready...


You should test out IronWorker, which allows for 1-2 hours per task run, supports more languages, and supports Docker images for your runtime environments. Although it's not as price-conscious as Lambda, it is definitely more feature-rich and easier to use.

(I used to work for Iron.io)


I'm waiting for AWS Lambda to support running a Docker container; right now it's just Node.js or Java with a specified environment. I suspect they are running Lambda functions in containers already, so hopefully it's just a matter of time before they expose that for any Docker image.


> After the announcement of Amazon Container Registry docker had to make some move and this is a great one.

This has been in the works for much longer than that.

The Tutum team built a pretty amazing product that their users love, and the passion, excitement, and forethought both Borja and Fernando have around the space has impressed me since I met them in October of 2013.


> This has been in the works for much longer than that.

I meant (as a user) that Docker Hub was at risk of losing its appeal because it probably couldn't rival AWS's registry (pricing, speed, etc.). IMO this move gives Docker a breath of fresh air...

What has gone fast is the trajectory of these guys from launch (they're not even out of beta) to acquisition.

So, congrats to both teams!


Interesting perspective, thanks.


What is the recommended development environment for this? Docker Toolbox? What's the recommended setup?


So that's why the "Sign in with Github" button was hidden behind a link this morning ... :)


Great news!


Hurray! :)



