Planet Bazaar

December 07, 2016

Elliot Murphy

Dec 6, 2016

Intermittent heavy rains today. Very glad I bought the hand-operated bilge pump, which came in handy to empty the bilge twice during the day. After we returned in the evening I finished provisioning the Ubiquiti network gear for our mobile network and went out in the boat just before midnight. The moon was just setting, and the mangroves were still dripping from rain. Headed past the nature preserve into the river, shut off the stern light and just looked at the stars. Stirred a stick in the water, still captivated by the bioluminescence.

by Elliot Murphy at December 07, 2016 05:26 AM

December 06, 2016

Elliot Murphy

Dec 5, 2016

5-day streak. Tonight after getting home everyone agreed to go with me, and we set out on the boat again, north on the great canal. Incredibly calm night with a bright moon but many clouds. All the boat lights are so pretty, and we saw several docks with underwater lights. One even had a heron standing near the light, fishing.

by Elliot Murphy at December 06, 2016 06:23 AM

Dec 4, 2016

Woke up at 6 AM and recorded a time lapse of the sunrise at the beach. Then rushed back and headed out in the boat. Rushed too much and realized I had left the keys behind after casting off. Tried to lasso a cleat and missed as the wind caught the bimini top. Happily there was an emergency paddle in the anchor locker, so 15 minutes of paddling later I made it back to the dock and headed out to our Sunday appointments. After lunch tried another boat run with Rhaiza. Brought the dogs and headed north. Many folks putting lights on their boats. After passing under the bridge one of the dogs lost their footing and went into the water. Showed Rhaiza the overboard procedure, threw out a floating cushion, and got the dog back aboard none the worse for wear.

by Elliot Murphy at December 06, 2016 06:20 AM

December 04, 2016

Elliot Murphy

Dec 3rd, 2016

Last night we found some battery-powered clamp-on nav lights; tonight after grilling dinner for guests I got the lights rigged and took a short trip north on the grand canal. Many docks have winter lights up, and plenty of sailboats are home for the winter. Lights worked well and nothing to worry about in the canal. Much work to do, but this trip is helping my mind so much; I feel stronger than I have in years.

by Elliot Murphy at December 04, 2016 04:15 AM

December 02, 2016

Elliot Murphy

Dec 2, 2016

Boat didn't take on water overnight. Shortly after sunrise made some pour-over coffee and was about to head out, then thought it was worth inviting others. Rhaiza and Darcy were both up for a quick cruise. Got the bimini top installed, and drank coffee while looking around the nature preserve. Dogs loved the boat. Nice 30-minute cruise before the sun started beating down.

by Elliot Murphy at December 02, 2016 05:28 PM

Dec 1, 2016

Arrived at rental house. Picked up the boat from the warehouse, helped the owner replace the windshield. Drove out to the causeway, launched, found my way through the canals to the house. Got a ride back to the ramp to recover the truck and trailer. Went to Wal-Mart and bought throwable life preserver cushions and portable navigation lights. Couldn't find a bilge pump so resigned myself to using a cup to bail as needed.

by Elliot Murphy at December 02, 2016 05:25 PM

October 17, 2016

Mark Shuttleworth

The mouse that jumped

The naming of Ubuntu releases is, of course, purely metaphorical. We are a diverse community of communities – we are an assembly of people interested in widely different things (desktops, devices, clouds and servers) from widely different backgrounds (hello, world) and with widely different skills (from docs to design to development, and those are just the d’s).

As we come to the end of the alphabet, I want to thank everyone who makes this fun. Your passion and focus and intellect, and occasionally your sharp differences, all make it a privilege to be part of this body incorporate.

Right now, Ubuntu is moving even faster to the centre of the cloud and edge operations. From AWS to the zaniest new devices, Ubuntu helps people get things done faster, cleaner, and more efficiently, thanks to you. From the launch of our kubernetes charms which make it very easy to operate k8s everywhere, to the fun people seem to be having with snaps at snapcraft.io for shipping bits from cloud to top of rack to distant devices, we love the pace of change and we change the face of love.

We are a tiny band in a market of giants, but our focus on delivering free software freely, together with enterprise support, services and solutions, appears to be opening doors, and minds, everywhere. So, in honour of the valiantly tiny long-tailed mouse leaping over the obstacles of life, our next release, Ubuntu 17.04, is hereby code named the ‘Zesty Zapus’.


by Mark Shuttleworth at October 17, 2016 12:23 PM

May 17, 2016

Mark Shuttleworth

Thank you CC

Just to state publicly my gratitude that the Ubuntu Community Council has taken on their responsibilities very thoughtfully, and has demonstrated a proactive interest in keeping the community happy, healthy and unblocked. Their role is a critical one in the Ubuntu project, because we are at our best when we are constantly improving, and we are at our best when we are actively exploring ways to have completely different communities find common cause, common interest and common solutions. They say that it’s tough at the top because the easy problems don’t get escalated, and that is particularly true of the CC. So far, they are doing us proud.


by Mark Shuttleworth at May 17, 2016 08:16 PM

April 27, 2016

Tim Penhey

It has been too long

Well, it has certainly been a lot longer since I wrote a post than I thought.

My work at Canonical still has me on the Juju team. Juju has come a long way in the last few years, and we are on the final push for the 2.0 version. This was initially intended to come out with the Xenial release, but unfortunately was not ready. Xenial has 2.0-beta4 right now, soon to be beta 6. Hoping that real soon now we'll step through the release candidates to a final release. This will be SRU'ed into both Xenial and Trusty.

I plan to do some more detailed posts on some of the Go utility libraries that have come out of the Juju work. In particular, talking again about loggo, which I moved under the "github.com/juju" banner, and the errors package.
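As a taste of what those posts might cover, here is a minimal sketch using both packages together; the logger name, module specification and call sites are my own illustrative assumptions, not code lifted from Juju:

package main

import (
	"os"

	"github.com/juju/errors"
	"github.com/juju/loggo"
)

// A named module logger; loggo lets each module's log level be tuned independently.
var logger = loggo.GetLogger("demo.config")

// loadConfig demonstrates the juju/errors idiom: annotate failures with
// context while keeping the original cause traceable.
func loadConfig(path string) error {
	if path == "" {
		return errors.NotValidf("empty config path")
	}
	if _, err := os.Stat(path); err != nil {
		return errors.Annotatef(err, "checking config %q", path)
	}
	return nil
}

func main() {
	// Enable DEBUG output for our module only.
	loggo.ConfigureLoggers("demo.config=DEBUG")
	if err := loadConfig(""); err != nil {
		logger.Errorf("startup: %v", errors.Details(err))
	}
}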

Recent work has had me look at the database-agnostic model representations for migrating models from one controller to another, and also at gomaasapi - the Go library for talking with MAAS. Perhaps more on that later.

by Tim Penhey (thumper) (noreply@blogger.com) at April 27, 2016 09:36 AM

April 21, 2016

Mark Shuttleworth

Y is for…

Yakkety yakkety yakkety yakkety yakkety yakkety yakkety yakkety yak. Naturally 🙂

by Mark Shuttleworth at April 21, 2016 11:40 PM

April 13, 2016

Mark Shuttleworth

Nova-LXD delivers bare-metal performance on OpenStack, while Ironic delivers NSA-as-a-Service

With the release of LXC 2.0 and LXD, we now have a pure-container hypervisor that delivers bare-metal performance with a standard Linux guest OS experience. Very low latency, very high density, and very high control of specific in-guest application processes compared to KVM and ESX make it worth checking out for large-scale Linux virtualisation operations.

Even better, the drivers to enable LXD as a hypervisor in OpenStack are maturing upstream.

That means you get bare metal performance on OpenStack for Linux workloads, without actually giving people the whole physical server. LXD supports live migration, so you can migrate those users to a different physical server with no downtime, which is great for maintenance. And you can have all the nice OpenStack semantics for virtual networks etc. without having to try very hard.

By contrast, Ironic has the problem that the user can now modify any aspect of the machine as if you gave them physical access to it. In most cases, that’s not desirable, and in public clouds it’s a fun way to let the NSA (and other agencies) install firmware for your users to enjoy later.

NSA-as-a-Service does have a certain ring to it though.

by Mark Shuttleworth at April 13, 2016 03:28 PM

December 15, 2015

Elliot Murphy

Next comes December

Went to a beautiful family wedding.
The job I loved went away due to the company hitting a rough patch.
Joined a startup, left a startup.
Went fishing in the Allegash. Went camping on Cow Island. Started a DevOps/Security consulting firm. Hired people. Made payroll. Visited Thailand. Rode scooters with my family.

Best year yet.

by Elliot Murphy at December 15, 2015 10:05 PM

November 11, 2015

Mark Shuttleworth

Nominations to the 2015 Ubuntu Community Council

I am delighted to nominate these long-standing members of the Ubuntu community for your consideration in the upcoming Community Council election.

* Philip Ballew https://launchpad.net/~philipballew
* Walter Lapchynski https://launchpad.net/~wxl
* Marco Ceppi https://launchpad.net/~marcoceppi
* Jose Antonio Rey https://launchpad.net/~jose
* Laura Czajkowski https://launchpad.net/~czajkowski
* Svetlana Belkin https://launchpad.net/~belkinsa
* Chris Crisafulli https://launchpad.net/~itnet7
* Michael Hall https://launchpad.net/~mhall119
* Scarlett Clark https://launchpad.net/~sgclark
* C de-Avillez https://launchpad.net/~hggdh2
* Daniel Holbach https://launchpad.net/~dholbach

The Community Council is our most thoughtful body, who carry the responsibility of finding common ground between our widely diverse interests. They oversee all membership in the project, recognising those who make substantial and sustained contributions through any number of forums and mechanisms with membership and a voice in the governance of Ubuntu. They delegate in many cases responsibility for governance of pieces of the project to teams who are best qualified to lead in those areas, but they maintain overall responsibility for our discourse and our standards of behaviour.

We have been the great beneficiaries of the work of the outgoing CC, who I would like to thank once again for their tasteful leadership. I was often reminded of the importance of having a team which continues to inspire and lead and build bridges, even under great pressure, and the CC team who conclude their term shortly have set the highest bar for that in my experience. I’m immensely grateful to them and excited to continue working with whomever the community chooses from this list of nominations.

I would encourage you to meet and chat with all of the candidates and choose those who you think are best able to bring teams together; Ubuntu is a locus of collaboration between groups with intensely different opinions, and it is our ability to find a way to share and collaborate with one another that sets us apart. When it gets particularly tricky, the CC are at their most valuable to the project.

Voting details have gone out to all voting members of Ubuntu, thank you for participating in the election!

by Mark Shuttleworth at November 11, 2015 07:23 PM

October 21, 2015

Mark Shuttleworth

X marks the spot

LXD is the pure-container hypervisor

What a great Wily it’s been, and for those of you who live on the latest release and haven’t already updated, the bits are baked and looking great. You can jump the queue if you know where to look while we spin up the extra servers needed for IMG and ISO downloads 🙂

Utopic, Vivid and Wily have been three intense releases, packed with innovation, and now we intend to bring all of those threads together for our Long Term Support release due out in April 2016.

LXD is the world’s fastest hypervisor, led by Canonical, a pure-container way to run Linux guests on Linux hosts. If you haven’t yet played with LXD (a.k.a. LXC 2.0-b1) it will blow you away. It will certainly transform your expectations of virtualisation, from slow-and-hard to amazingly light and fast. Imagine getting a full machine running any Linux you like, as a container on your laptop, in less than a second. For me, personally, it has become a fun way to clean up my build processes, spinning up a container on demand to make sure I always build in a fresh filesystem.

Snappy Packaging System

Snappy is the world’s most secure packaging system, delivering crisp and transactional updates with rollback for both applications and the system, from phone to appliance. We’re using snappy on high-end switches and flying wonder-machines, on Raspberry Pis and massive clouds. Ubuntu Core is the all-snappy minimal server, and Ubuntu Personal will be the all-snappy phone / tablet / pc. With a snap you get to publish exactly the software you want to your device, and update it instantly over the air, just like we do with the Ubuntu Phone. Snappy packages are automatically confined to ensure that a bug in one app doesn’t put your data elsewhere at risk. Amazing work, amazing team, amazing community!

Metal as a Service

MAAS is your physical cloud, with bare-metal machines on demand, supporting Ubuntu, CentOS and Windows. Drive your data centre from a single dashboard, bond network interfaces, raid your disks and rock the cloud generation. Led by Canonical, loved by the world leaders of big, and really big, deployments. MAAS gives you high availability DNS, DHCP, PXE and other critical infrastructure, for huge and dynamic data centres. Also pretty fun to run at home.

Juju is… model-driven application orchestration that lets communities define how big topological apps like Hadoop and OpenStack map onto the cloud of your choice. The fastest way to find the fastest way to spin those applications into the cloud you prefer. With traditional configuration managers like Puppet now also saying that model-driven approaches are the way of the future, I’m very excited to see the kinds of problems that huge enterprises are starting to solve with Juju, and equally excited to see start-ups using Juju to speed their path to adoption. Here’s the Hadoop, Spark, IPython Notebook coolness I deployed live on stage at Apache Hadoopcon this month:

Apache Hadoop, Spark and IPython Notebook modelled with Juju

All of these are coming together beautifully, making Ubuntu the fastest path to magic of all sorts. And that magic will go by the codename… xenial xerus!

What fortunate timing that our next LTS should be X, because “xenial” means “friendly relations between hosts and guests”, and given all the amazing work going into LXD and KVM for Ubuntu OpenStack, and beyond that the interoperability of Ubuntu OpenStack with hypervisors of all sorts, it seems like a perfect fit.

And Xerus, the African ground squirrels, are among the most social animals in my home country. They thrive in the desert, they live in small, agile, social groups that get along unusually well with their neighbours (for most mammals, neighbours are a source of bloody competition, for Xerus, hey, collaboration is cool). They are fast, feisty, friendly and known for their enormous… courage. That sounds just about right. With great… courage… comes great opportunity!

by Mark Shuttleworth at October 21, 2015 07:53 PM

June 22, 2015

Mark Shuttleworth

Introducing the Fan – simpler container networking

Canonical just announced a new, free, and very cool way to provide thousands of IP addresses to each of your VMs on AWS. Check out the fan networking on Ubuntu wiki page to get started, or read Dustin’s excellent fan walkthrough. Carry on here for a simple description of this happy little dose of awesome.

Containers are transforming the way people think about virtual machines (LXD) and apps (Docker). They give us much better performance and much better density for virtualisation in LXD, and with Docker, they enable new ways to move applications between dev, test and production. These two aspects of containers, the whole machine container and the process container, are perfectly complementary. You can launch Docker process containers inside LXD machine containers very easily. LXD feels like KVM, only faster; Docker feels like the core unit of a PAAS.

The density numbers are pretty staggering. It’s *normal* to run hundreds of containers on a laptop.

And that is what creates one of the real frustrations of the container generation, which is a shortage of easily accessible IP addresses.

It seems weird that, in this era of virtual everything, a number is hard to come by. The restrictions are real, however, because AWS artificially restricts the number of IP addresses you can bind to an interface on your VM. You have to buy a bigger VM to get more IP addresses, even if you don’t need extra compute. Also, IPv6 is nowhere to be seen on the clouds, so addresses are more scarce than they need to be in the first place.

So the key problem is that you want to find a way to get tens or hundreds of IP addresses allocated to each VM.

Most workarounds to date have involved “overlay networking”. You make a database in the cloud to track which IP address is attached to which container on each host VM. You then create tunnels between all the hosts so that everything can talk to everything. This works, kinda. It results in a mess of tunnels and much more complex routing than you would otherwise need. It also ruins performance for things like multicast and broadcast, because those are now exploding off through a myriad twisty tunnels, all looking the same.

The Fan is Canonical’s answer to the container networking challenge.

We recognised that container networking is unusual, and quite unlike true software-defined networking, in that the number of containers you want on each host is probably roughly the same from host to host. You want to run a couple hundred containers on each VM. You also don’t (in the Docker case) want to live migrate them around; you just kill them and start them again elsewhere. Essentially, what you need is an address multiplier – anywhere you have one interface, it would be handy to have 250 of them instead.

So we came up with the “fan”. It’s called that because you can picture it as a fan behind each of your existing IP addresses, with another 250 IP addresses available. Anywhere you have an IP you can make a fan, and every fan gives you 250x the IP addresses. More than that, you can run multiple fans, so each IP address could stand in front of thousands of container IP addresses.

We use standard IPv4 addresses, just like overlays. What we do that’s new is allocate those addresses mathematically, with an algorithmic projection from your existing subnet / network range to the expanded range. That results in a very flat address structure – you get exactly the same number of overlay addresses for each IP address on your network, perfect for a dense container setup.

Because we’re mapping addresses algorithmically, we avoid any need for a database of overlay addresses per host. We can calculate instantly, with no database lookup, the host address for any given container address.
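Because the projection is pure arithmetic, it is easy to sketch. Assuming the example geometry used later in this post (a 241.0.0.0/8 fan over 172.16.0.0/16), here is a rough Go illustration of the mapping and its inverse; on a real system, fanctl and the kernel bridge do this work:

package main

import (
	"fmt"
	"net"
)

// fanSubnet maps an underlay host address to its fan overlay subnet for the
// example geometry (overlay 241.0.0.0/8 over underlay 172.16.0.0/16): the two
// underlay host octets slide up one position, leaving the final octet free
// for ~250 container addresses per host.
func fanSubnet(host net.IP) *net.IPNet {
	h := host.To4() // e.g. 172.16.3.4
	return &net.IPNet{
		IP:   net.IPv4(241, h[2], h[3], 0), // -> 241.3.4.0
		Mask: net.CIDRMask(24, 32),
	}
}

// hostFor inverts the projection: given any container address on the fan,
// recover the underlay host that owns it; no database lookup required.
func hostFor(container net.IP) net.IP {
	c := container.To4() // e.g. 241.3.4.17
	return net.IPv4(172, 16, c[1], c[2])
}

func main() {
	fmt.Println(fanSubnet(net.ParseIP("172.16.3.4"))) // 241.3.4.0/24
	fmt.Println(hostFor(net.ParseIP("241.3.4.17")))   // 172.16.3.4
}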

More importantly, we can route to these addresses much more simply, with a single route to the “fan” network on each host, instead of the maze of twisty network tunnels you might have seen with other overlays.

You can expand any network range with any other network range. The main idea, though, is that people will expand a class B range in their VPC with a class A range. Who has a class A range lying about? You do! It turns out that there are a couple of class A networks that are allocated and which publish no routes on the Internet.

We also plan to submit an IETF RFC for the fan, for address expansion. It turns out that “Class E” networking was reserved but never defined, and we’d like to think of that as a new “Expansion” class. There are several class A network addresses reserved for Class E, which won’t work on the Internet itself. While you can use the fan with unused class A addresses (and there are several good candidates for use!) it would be much nicer to do this as part of a standard.

The fan is available on Ubuntu on AWS and soon on other clouds, for your testing and container experiments! Feedback is most welcome while we refine the user experience.

Configuration on Ubuntu is super-simple. Here’s an example:

In /etc/network/fan:

# fan 241
241.0.0.0/8 172.16.3.0/16 dhcp

In /etc/network/interfaces:

iface eth0 inet static
address 172.16.3.4
netmask 255.255.0.0
up fanctl up 241.0.0.0/8 172.16.3.4/16
down fanctl down 241.0.0.0/8 172.16.3.4/16

This maps 250 overlay addresses on 241.0.0.0/8 to each of your 172.16.0.0/16 hosts.

Docker, LXD and Juju integration is just as easy. For docker, edit /etc/default/docker.io, adding:

DOCKER_OPTS="-d -b fan-10-3-4 --mtu=1480 --iptables=false"

You must then restart docker.io:

sudo service docker.io restart

At this point, a Docker instance started via, e.g.,

docker run -it ubuntu:latest

will be run within the specified fan overlay network.

Enjoy!

by Mark Shuttleworth at June 22, 2015 10:40 AM

May 04, 2015

Mark Shuttleworth

Announcing the “wily werewolf”

Watchful observers will have wondered why “W” is yet unnamed! Without wallowing in the wizzo details, let’s just say it’s been a wild and worthy week, and as it happens I had the well-timed opportunity of a widely watched keynote today and thought, perhaps wonkily, that it would be fun to announce it there.

But first, thank you to all who have made such witty suggestions in webby forums. Alas, the “wacky wabbit” and “watery walrus”, while weird enough and wisely whimsical, won’t win the race. The “warty wombat”, while wistfully wonderful, will break all sorts of systems with its wepetition. And the “witchy whippet”, in all its wiry weeness, didn’t make the cut.

Instead, my waggish friends, the winsome W on which we wish will be… the “wily werewolf”.

Enjoy!

by Mark Shuttleworth at May 04, 2015 02:48 PM

W is for…

… waiting till the Ubuntu Summit online opening keynote today, at 1400 UTC. See you there 😉

by Mark Shuttleworth at May 04, 2015 06:39 AM

February 25, 2015

Elliot Murphy

February in Maine

It's February in Maine. My roof is leaking. My brother has a new girlfriend. My sister-in-law just got engaged. I'm building new CoreOS clusters on AWS. We discovered all our elasticsearch code needs a total rewrite. I'm still in love with my wife. My monitor is failing. We're making plans for Thailand. I had a meeting today with a very cool hospital CEO who cares more about patients than power. My boss's mother died. I had to scramble to assist a large insurance company that is our customer with investigating a security breach. I'm really sorry I haven't answered that email from Amsterdam. I did a security consult this weekend. I wrote part of a patch to add tags to AWS autoscaling groups in Terraform. I have been working on porting a bitmap library from golang to C and running into funny bit manipulation bugs. I'm building a closet. I'm building a guest bathroom. I went ice fishing for the first time on Sunday. It seems to me that E. B. White is a very important writer. So is DeLillo.

by Elliot Murphy at February 25, 2015 02:41 AM

February 08, 2015

Jelmer Vernooij

The Samba Buildfarm

Portability has always been very important to Samba. Nowadays Samba is mostly used on top of Linux, but Tridge developed the early versions of his SMB implementation on a Sun workstation.

A few years later, when the project was being picked up, it was ported to Linux and eventually to a large number of other free and non-free Unix-like operating systems.

Initially regression testing on different platforms was done manually and ad-hoc.

Once Samba had support for a larger number of platforms, including numerous variations and optional dependencies, making sure that it would still build and run on all of these became a non-trivial process.

To make it easier to find regressions in the Samba codebase that were platform-specific, tridge put together a system to automatically build Samba regularly on as many platforms as possible. So, in Spring 2001, the build farm was born - this was a couple of years before other tools like buildbot came around.

The Build Farm

The build farm is a collection of machines around the world that are connected to the internet, with as wide a variety of platforms as possible. In 2001, it wasn't feasible to just have a single beefy machine or a cloud account on which we could run virtual machines with AIX, HPUX, Tru64, Solaris and Linux, so we needed access to physical hardware.

The build farm runs as a single non-privileged user, which has a cron job set up that runs the build farm worker script regularly. Originally the frequency was every couple of hours, but soon we asked machine owners to run it as often as possible. The worker script is as short as it is simple. It retrieves a shell script from the main build farm repository with instructions to run and after it has done so, it uploads a log file of the terminal output to samba.org using rsync and a secret per-machine password.
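For flavour, here is the shape of that loop sketched in Go; the real worker is a short shell script, and every host name, rsync module path and environment variable below is invented for illustration:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// 1. Fetch the instruction script from the build farm (hypothetical module path).
	if err := exec.Command("rsync", "build.samba.org::buildfarm/run.sh", "/tmp/run.sh").Run(); err != nil {
		log.Fatalf("fetching instructions: %v", err)
	}

	// 2. Run it, capturing the terminal output as the build log.
	out, buildErr := exec.Command("sh", "/tmp/run.sh").CombinedOutput()
	if err := os.WriteFile("/tmp/build.log", out, 0644); err != nil {
		log.Fatalf("writing log: %v", err)
	}
	if buildErr != nil {
		log.Printf("build failed; uploading the log anyway: %v", buildErr)
	}

	// 3. Upload the log using rsync and the per-machine secret.
	up := exec.Command("rsync", "/tmp/build.log", "worker@build.samba.org::logs/")
	up.Env = append(os.Environ(), "RSYNC_PASSWORD="+os.Getenv("BUILDFARM_SECRET"))
	if err := up.Run(); err != nil {
		log.Fatalf("uploading log: %v", err)
	}
}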

Some build farm machines are dedicated, but there have also been a large number over the years that just ran as a separate user account on a machine that was tasked with something else. Most build farm machines are hosted by Samba developers (or their employers) but we've also had a number of community volunteers over the years that were happy to add an extra user with an extra cron job on their machine, and for a while companies like SourceForge and HP provided dedicated porter boxes that ran the build farm.

Of course, there are some security issues with this way of running things. Arbitrary shell code is downloaded from a host claiming to be samba.org and run. If the machine is shared with other (sensitive) processes, some of the information about those processes might leak into logs.

Our web page has a section about adding machines for new volunteers, with a long list of warnings.

Since then, various other people have been involved in the build farm. Andrew Bartlett started contributing to the build farm in July 2001, working on adding tests. He gradually took over as the maintainer in 2002, and various others (Vance, Martin, Mathieu) have contributed patches and helped out with general admin.

In 2005, tridge added a script to automatically send out an e-mail to the committer of the last revision before a failed build. This meant it was no longer necessary to bisect through build farm logs on the web to find out who had broken a specific platform when; you'd just be notified as soon as it happened.

The web site

Once the logs are generated and uploaded to samba.org using rsync, the web site at http://build.samba.org/ is responsible for making them accessible to the world. Initially there was a single Perl file that would take care of listing and displaying log files, but over the years the functionality has been extended to do much more than that.

Initial extensions to the build farm added support for viewing per-compiler and per-host builds, to allow spotting trends. Another addition was searching logs for common indicators of running out of disk space.

Over time, we also added more samba.org projects to the build farm. At the moment there are about a dozen projects.

In a sprint in 2009, Andrew Bartlett and I changed the build farm to store machine and build metadata in a SQLite database rather than parsing all recent build log files every time their results were needed.

In a follow-up sprint a year later, we converted most of the code to Python. We also added a number of extensions; most notably, linking the build result information with version control information so we could automatically email the exact people that had caused the build breakage, and automatically notifying build farm owners when their machines were not functioning.

autobuild

Sometime in 2011 all committers started using the autobuild script to push changes to the master Samba branch. This script enforces a full build and testsuite run for each commit that is pushed. If the build or any part of the testsuite fails, the push is aborted. This alone massively reduced the number of problematic changes that were pushed, making it less necessary for us to be made aware of issues by the build farm.

The rewrite to Python also introduced some time bombs into the code. The way we called out to our ORM caused the code to fetch all build summary data from the database every time the summary page was generated. Initially this was not a problem, but as the table grew to 100,000 rows, the build farm became so slow that it was frustrating to use.

Analysis tools

Over the years, various special build farm machines have also been used to run extra code analysis tools, like static code analysis, lcov, valgrind or various code quality scanners.

Summer of Code

Over the last couple of years the build farm has been running happily, and hasn't changed much.

This summer one of our summer of code students, Krishna Teja Perannagari, worked on improving the look of the build farm - updating it to the current Samba house style - as well as various performance improvements in the Python code.

Jenkins?

The build farm still works reasonably well, though it is clear that various other tools that have had more developer attention have caught up with it. If we had to reinvent the build farm today, we would probably end up using an off-the-shelf tool like Jenkins that wasn't around 14 years ago. We would also be able to get away with using virtual machines for most of our workers.

Non-Linux platforms have become less relevant in the last couple of years, though we still care about them.

The build farm in its current form works well enough for us, and I think porting to Jenkins - with the same level of platform coverage - would take quite a lot of work and have only limited benefits.

(Thanks to Andrew Bartlett for proofreading the draft of this post.)

by Jelmer Vernooij at February 08, 2015 12:06 AM

January 20, 2015

Mark Shuttleworth

Smart things powered by snappy Ubuntu Core on ARM and x86

“Smart, connected things” are redefining our home, work and play, with brilliant innovation built on standard processors that have shrunk in power and price to the point where it makes sense to turn almost every “thing” into a smart thing. I’m inspired by the inventors and innovators who are creating incredible machines – from robots that might clean or move things around the house, to drones that follow us at play, to smarter homes which use energy more efficiently, to more insightful security systems. Proving the power of open source to unleash innovation, most of this stuff runs on Linux – but it’s a hugely fragmented and insecure kind of Linux. Every device has custom “firmware” that lumps together the OS and drivers and device-specific software, and that firmware is almost never updated. So let’s fix that!

Ubuntu is right at the heart of the “internet thing” revolution, and so we are in a good position to raise the bar for security and consistency across the whole ecosystem. Ubuntu is already pervasive on devices – you’ve probably seen lots of “Ubuntu in the wild” stories, from self-driving cars to space programs and robots and the occasional airport display. I’m excited that we can help underpin the next wave of innovation while also being thoughtful about the responsibility that entails. So today we’re launching snappy Ubuntu Core on a wide range of boards, chips and chipsets, because the snappy system and Ubuntu Core are perfect for distributed, connected devices that need security updates for the OS and applications but also need to be completely reliable and self-healing. Snappy is much better than package dependencies for robust, distributed devices.

Transactional updates. App store. A huge range of hardware. Branding for device manufacturers.

In this release of Ubuntu Core we’ve added a hardware abstraction layer where platform-specific kernels live. We’re working commercially with the major silicon providers to guarantee free updates to every device built on their chips and boards. We’ve added a web device manager (“webdm”) that handles first-boot and app store access through the web consistently on every device. And we’ve preserved perfect compatibility with the snappy images of Ubuntu Core available on every major cloud today. So you can start your Kickstarter project with a VM on your favourite cloud and pick your processor when you’re ready to finalise the device.

If you are an inventor or a developer of apps that might run on devices, then Ubuntu Core is for you. We’re launching it with a wide range of partners on a huge range of devices. From the pervasive Beaglebone Black to the $35 Odroid-C1 (1 GHz processor, 1 GB RAM), all the way up to the biggest Xeon servers, snappy Ubuntu Core gives you a crisp, ultra-reliable base platform, with all the goodness of Ubuntu at your fingertips and total control over the way you deliver your app to your users and devices. With an app store (well, a “snapp” store) built in and access to the amazing work of thousands of communities collaborating on Github and other forums, with code for robotics and autopilots and a million other things instantly accessible, I can’t wait to see what people build.

I for one welcome the ability to install AI on my next camera-toting drone, and am glad to be able to do it in a way that will get patched automatically with fixes for future heartbleeds!

by mark at January 20, 2015 02:00 PM