Friday, December 23, 2011

Kindle Fire

Got this as a present from my mother and step-father (ironically, right after I got my step-father one!). Here's my report on it!


1. When is Android not Android?

While much has been made of the fact that the Fire's operating system is Android based, the entire environment is extremely different from what you'd find on a phone or regular tablet. In many cases, the issues go to the core of the system, resulting in a high degree of incompatibility with existing apps. Twitter, for example, isn't available, and won't be until their app is ported properly (infuriatingly, a "bookmark" is being offered as an Android app on the Amazon App Store.)

Because the Fire doesn't support "ordinary" Android, you're not going to be able to download the apps you bought from the Android Market to it. You may be able to download some of the apps you bought from Amazon's App Store on it, however.


2. It's a media tablet

It's important to understand what the Fire is and what it isn't. The iPad and the various Honeycomb tablets are essentially amorphous "next generation" portable computers - I put "next generation" in quotes because that's the intention, but I don't think they're there yet.

That's not what the Fire is. The Fire is a media tablet. It's intended to:
  • Let you buy and watch movies
  • Let you buy and listen to music
  • Let you buy and read colour magazines, and to a lesser extent newspapers and books (the regular Kindle's eInk screen is much better suited to those.)
  • Let you buy and play games
Now, the Fire has a few features outside of that core functionality, such as a web browser, but it would be inaccurate to suggest those define the Fire's function. The Fire lacks some very basic features you'd expect in an Internet device, which Internet tablets since the Nokia N770 have seen as critical:
  • There's no microphone, so you can't use it to talk to people
  • There's no camera, so you can't take pictures and share them, and you can't do video conferencing.
  • There's no GPS, so maps, searches for local businesses, and social networking systems are restricted. (There wasn't a GPS on the first Nokia tablets either, though that was in large part due to the lack of applications using it; since then, social networking and search tools have embraced the concept and run with it.)
Other features common to tablets that see themselves in part as communication devices, such as Bluetooth and USB hosting (the Fire can be a USB device, but it can't be connected to USB devices like keyboards) are also notably missing.

These features aren't missing because of poor design decisions; they're missing because those are features that just aren't relevant to the Kindle. There may be one or two features missing from the Fire, like an HDMI port, that would make it better, but for the most part, none of the omissions described above hampers its intended use, and in some ways adding those functions - in addition to increasing the cost - would merely distract from the Fire's intended purpose. If you need them, you're looking for a device that isn't the Fire.


3. It's the right size, it really is

I've heard some complaints that the Kindle doesn't work well for fat-fingered typists, and that it's too small. With respect, I didn't have either problem at all. Indeed, I found it easier to type on than either my smartphone or my Lenovo K1. And it's way more portable than the latter.

The entire point of a "tablet" - media, Internet, whatever - is that it's supposed to be an ultraportable device, something that you can carry around with you almost as second nature. This is one of my criticisms of the current set of 10" tablets - they're too big for that function. Carrying around a 10" tablet is like carrying around an oversized hardback book wherever you go. The Kindle Fire is about the size of - maybe smaller than, actually - a reporter's notebook.

And despite that, the screen is large enough to comfortably render a web page (if shown landscape) without the text being too small or the buttons too difficult to press. The screen is high quality and perfectly acceptable for movies, PDFs, and other similar content.


4. It's not locked down

A depressing trend in consumer electronics is to "secure" a system by preventing users from loading their own operating systems. While Amazon has locked down the Fire's version of Android itself - to some extent, anyway; you can still sideload your own apps, you just can't get root - Amazon has made a point of not preventing users from loading alternative operating systems. One hacker has already ported Icecream Sandwich, albeit not in a usable state right now, and CyanogenMod 7 is reportedly already available. I feel a lot more comfortable knowing that I can have a "real" version of Android installed in the near future if the Fire's environment doesn't work for me.


5. The user interface gets a big thumbs up from me

Despite being an Android device, there's only one button on the entire machine: a power button used to turn the screen - or the device itself - on and off. And that's it. Everything else, from volume controls to menu buttons, is implemented on the touchscreen. So Amazon spent a lot of time designing a user interface that's essentially pure touchscreen - reportedly Google did something similar with Icecream Sandwich.

So, how does it work? Well:
  • At the top of the screen is a status bar with buttons on the left and right for notifications and system controls respectively. You don't have to drag anything down; it's a simple "tap and go" system.
  • When you're "in" an application, at the very bottom are the controls for the app, if necessary, and below those a toolbar with "Home", "Back", and "Menu" buttons. The latter toolbar is sometimes semi-hidden if the app controls are present, but it's obvious how to get it to appear when it's semi-hidden. More often, a single toolbar appears with both the app specific controls and the buttons above. Either way, it's consistent enough to be obvious and easy to use.
  • The one confusion I had was with apps that are full screen, like the book reader for the user manual. In that case, all you have to do is tap the screen to get the toolbars and status bars to appear. But it wasn't immediately obvious. Still, it's something you learn early on, once, and then it's done.
  • Rather than a Desktop analogy, the Fire uses a "bookshelf" analogy, literally drawing bookshelves on screen with any objects you might want to access sitting upon the shelves.
The home screen consists of panels you slide across representing your media and recently accessed apps and web browser tabs (the panels are often screenshots, making the UI familiar to users of the recently discontinued HP Touchpad), together with two bars, one for large icons of favourite apps, the other for quick access to various features of the device.

All in all, it's very easy to use. Still, I had one or two reservations. Apps you've never used before, even built-in apps like email, are for some reason often hidden until you search for them. And on that note, I don't care for the email app - once set up it's pretty crude, and it ignored my telling it that my Google Apps email address was a GMail account and invented some pretty stupid defaults for the IMAP settings. How hard would it have been to base its rejection of a GMail address on, well, whether the address and password actually work on the GMail servers?

Still, if you're using the Fire for email for any serious use, you're doing it wrong.


6. Things I think they should add

The Fire is a great device, but I think there are some features that should be added, either because they'd cost little to implement and would be incredibly useful, or because they'd enhance the device considerably.
  1. I'd like to see an HDMI port
  2. Bluetooth would be useful, for a variety of reasons. Many car stereos support Bluetooth now, for example.
  3. I'd like to see some cooperation with Google.
  4. If you're going to put in an email app, make it a good one, or don't do it at all.
  5. The device only has about 6GB of storage. I know Amazon likes that whole cloud thingie, but sometimes having large files available offline is good. And what about an SD card slot?

Finally

My ideal "portable computing device" would probably start with something similar to the Fire. I'd add the following (note: I don't expect anyone to do this, I'm just saying this would suit me...)
  • An eInk screen on the other side of the device. I'd gladly give up color from time to time in exchange for better clarity and less eye strain.
  • More input devices including a microphone and front-facing camera
  • Expandable storage
  • A docking station similar to that implemented by the Atrix - allowing you to add a full sized screen, keyboard, and pointing device.
  • A Bluetooth link that allows the device and a phone to transparently share an Internet connection.
But... that's just me...

Thursday, December 8, 2011

Amazon Fire

Haven't spent a lot of time with this, as the Fire I bought was actually for my step-father, but I wanted to make a few comments having used one.

1. It's the right size

The iPad may have pioneered the ten inch tablet, but the fact is the form factor is just too large to qualify as easily portable. 7" is a decent size: it'll fit in a larger pocket (although you're pushing it if you do that), but it's big enough, and high resolution enough, to show a web page with very little difference between it and the desktop rendition. The Fire is the size of a paperback book. The iPad (and my Lenovo K1) is the size of a large pad of paper. No contest, the Fire wins here.

The fact the screen was large enough actually surprised me somewhat. The first site I visited was Amazon.com, and it looked perfect. No sign of scaling or anything that would make it unusable. Of course, I have pretty good eyesight, so it's possible that someone with poorer eyesight may need to zoom the screen a little.


2. Web browser felt slow

Amazon has been promoting the browser technology they're using, whereby significant amounts of each site are rendered by their servers. The result seemed to be heavy latency: the page itself might appear in a flash, but that flash occurred quite a few seconds after the request. Supposedly you can turn this off, so this is more of a "this isn't a feature" criticism than a "the device is flawed" one.


3. Standard ports

USB charging, with a regular micro-USB port. Yay! Should I be happy about this, or just expect it? Well, the K1 experience hasn't exactly endeared proprietary connectors to me.


4. The keyboard: not as bad as claimed

I had no problems whatsoever setting up my step-father to access my wireless network, which required the usual password entry and so on. There have been criticisms that the keyboard is too pokey and prone to fat-finger problems. I didn't come across any - and I was using the device in portrait mode.


5. Very easy to use and clean

I was fairly impressed with the user interface. Looked nice right out of the box and everything was easy to find.


Conclusion?

There are a couple of features missing from the Fire that prevent it from being a full tablet, but it's a really nice media device and it might push tablet makers to make something around the same price point with the same form factor.

Professionally, I need to be familiar with tablets. The experience of the Fire means I have some idea of what I'd buy for my own use. Nice work Amazon.

Thursday, December 1, 2011

Tablets, Netbooks, and the next big thing

As my regular readers know, I'm somewhat of a tablet skeptic. For the most part I enjoy being a part of this industry, but I actually resented, to a certain extent, the fact that I had to buy a tablet to keep my skills honed. I don't see mine as being something that's going to do much except be a test bed for my own work.

And having owned a tablet now for a week, and bringing it into work every day, buying apps for it, trying my best to make it work, I don't see any reason to change my mind at this point. What's my overall verdict?

A tablet, in 2011 at least, is a device that does a subset of what a full computer does, and doesn't do any of them well, but looks cool doing it.

Now, to be fair, there are some minor advantages a tablet has over, say, a Netbook, but they're not exactly enough - for me - to overcome the disadvantages in general use. A tablet doesn't need to be opened up to be used, which means you can, in theory, carry it around with you while you work, making it always available. In practice, however, a 10" tablet is simply too large to be comfortably carried around all the time, and a smaller tablet is too small to be significantly more useful than a phone.

This portability makes tablets useful for some applications, but not many. I mentioned in an earlier post that I think they'd be great for replacing the clumsy PCs in most doctors' offices. Doctors and nurses these days usually spend a while with a patient entering symptoms into a PC using a user interface that looks like something out of the 1990s and certainly has no flow to it. A tablet would be something a nurse or doctor could carry around with them, entering information in a fairly smooth fashion. And there are plenty of other industries where I can see this working.

What I don't see as working is the intended audience of the iPad and Honeycomb devices, essentially ordinary people who want to use the web, write emails, and play games, without carrying around a laptop. These are people who would clearly find a Netbook a more versatile and friendly device. A Netbook can run any applications that a desktop can, but the device is much more portable. And most modern applications require more than a finger to operate usefully. You want to be able to type, for example, if you're writing an email. Why would a keyboardless touchscreen be anything but a liability in the modern world?

So why are tablets taking off? And why are Netbook sales falling?

Well, I think the latter is misleading. Netbook sales may be falling, but they're still extremely high. People like Netbooks. And outside of Apple, I don't think many people see tablets as a replacement for Netbooks.

Tablets are selling well because they look awesome. They may not do anything well, but there's a slickness to what they do that's extremely enticing. I'm not finding many people who heavily use tablets after they buy them, but I see a lot of people who generally like the things, especially if they don't have one yet.

That said...

The tablet isn't the first time this concept has been tried. The iPad can trace its lineage, albeit indirectly, to the Apple Newton. The Newton was the first PDA. Newton begat the Palm Pilot which was arguably the first useful PDA. Microsoft, meanwhile, came up with the Tablet PC, and the modern "Tablet" is, in many ways, a hybrid of the two concepts, with the benefits of technologies taken from modern touchscreen phones.

Now, there are some things to note about all of the above. The first is that it's clear that the tablet is simply the latest incarnation of an evolving platform. It's not the first time that platform has been successful, but the temporary success of a platform doesn't mean it's going to last. People want a more portable, personal computer, while arguably the PC has been going in the other direction over the years, becoming more like the minicomputers and mainframes of old: multiuser behemoths designed to be administered by people who aren't the users of the machines themselves.

The second is to note Microsoft's interest in the platform. Microsoft really wants to produce a successful tablet platform, and my experience of Windows 8 suggests that they may be extremely close to that. One thing is absolutely true: Windows 8 will be installed on many, many, tablets, and users of those tablets are going to want to be able to plug the device into a proper screen and keyboard because that's what "legacy" Windows apps need. By rejecting the concept of supporting existing apps, the Android and iOS based tablets don't force manufacturers to think in terms of a "dockable tablet", and while some - notably ASUS - have tried to do this, the docks essentially run tablet applications rather than apps that would make use of the environment.

While at first glance, the notion of a dual facing tablet, one that has a touchscreen user-interface for "on the road" access to data, and a keyboard interface for productivity, may seem like a kludge, the reality is such an environment may be ideal.

As such, I think Windows 8 will have a massive impact, and may create a generation of touchscreen computers that actually do bring us into a post PC world.

I'd like to hope that Google and other groups such as Canonical (the makers of Ubuntu) will also be a part of this post PC world, but thus far neither have really shown they're looking forward to it. Google, I think, is still orienting itself towards the cloud, and hoping that people see tablets, phones, and PCs as interfaces to the cloud rather than devices that run applications and store data in their own right. The cloud has a major problem with it, which is the requirement to be hooked up to the Internet, and I think as such Google's plans may simply not fit reality for the next decade.

Canonical wants to create a post-desktop Ubuntu, but is having severe problems designing a UI that works. Their latest incarnations of "Unity", the next generation Ubuntu user interface, have thus far been unpopular, working poorly on the desktop and being completely unsuitable for tablet use. I'm sure the problems will be fixed in time, but it's still a shame we're at this point.

The industry is going to be very interesting in the next few years. But unless you have a pressing need to understand the technology, I would not recommend you buy a tablet today.

Tuesday, November 29, 2011

Virtualization options

Anyone who's heard of virtualization probably has discovered there are many different tools out there to help install and run multiple operating systems on a single computer. While many of these systems are similar, the truth is that there are radically different approaches to virtualization that have developed over the years, with advantages and disadvantages for each method. Here's a quick rundown on popular virtualization approaches and technologies, and reasons why you might want to pick one over another. I've tried to keep the terminology consistent below, although many in the industry will prefer different terms. The most neutral term I could think of for the virtual machines themselves is "guest". The "host" is the operating system that guests run under.


Emulation or "Full virtualization"

Emulation is one of the oldest forms of virtualization and exists in many forms, from systems that emulate every aspect of a computer, including the CPU, to versions that use features of the host CPU to sandbox the hosted operating system and which use software merely to emulate the rest of the hardware. Almost all modern emulators use the latter approach if they can get away with it.

The primary advantage of emulation is that it requires virtually no support within the guest operating system, which is made to believe it's been installed on a normal computer with no special requirements.

The disadvantages of emulation are numerous. Even with CPU support, it's slow and inefficient. As an example, if an application on a guest needs to access the network, the data it sends needs to go through two device drivers (the one for the emulator, and the real one), with a virtual device emulator in between. Many CPUs don't support virtualization natively and thus can't be used to run the faster emulators. Emulators typically have very clumsy boundaries in terms of limiting CPU, memory, or disk usage.
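On Linux, you can quickly check whether your CPU advertises hardware virtualization support - Intel's VT-x shows up as the vmx flag, AMD's AMD-V as svm:

 $ egrep -c '(vmx|svm)' /proc/cpuinfo

A result of 0 means no hardware support, and thus only the slower, purely software emulators will run.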

Despite the disadvantages, emulators tend to be the most popular choices for virtualization today, largely because there is no need to have the writers of the guest operating system involved in making their systems work. While this is less of a problem for free operating systems, where the code can be modified, running multiple proprietary operating systems usually requires emulation unless the vendor cooperates with the virtualization system you want to run.


Para-virtualization

Para-virtualization is probably my favorite virtualization system, although it comes at some costs. In para-virtualization, the guest operating system is written such that it is aware it doesn't have complete control over the hardware, and instead cooperates with the "real" host operating system.

Implementations of para-virtualization can be dramatically different. One early system, User Mode Linux, and a related project called Cooperative Linux, allowed a Linux-based operating system to run under another operating system such as GNU/Linux or Windows. The UML kernel would simply talk to the underlying operating system and have it do the heavy lifting work.

A more advanced, generic, system, and my choice when it's available, is Xen. Xen is a host operating system that provides the bare minimum for hosted operating systems, handling starting and stopping operating systems, parceling out memory and CPU resources, and telling each instance what hardware it is allowed to access. This operating system is called a hypervisor. Typically, by default, all resources are assigned to a special guest operating system called the "Dom0", and that operating system provides basic networking and other services to the other guests.
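To give a flavour of what administering this looks like, here's a minimal sketch of a para-virtualized guest definition for the classic xm toolstack - the guest name, kernel path, and disk volume below are invented for illustration:

 # cat /etc/xen/guest1.cfg
 name   = "guest1"
 kernel = "/boot/vmlinuz-2.6-xen"          # guest kernel, supplied by the host
 memory = 512                              # MB of RAM parceled out to this guest
 vcpus  = 1
 disk   = ['phy:/dev/vg0/guest1,xvda,w']   # an LVM volume presented as xvda
 vif    = ['bridge=br0']                   # network via a bridge in the Dom0
 root   = "/dev/xvda1 ro"

 # xm create guest1.cfg                    # boot the guest
 # xm list                                 # see what's running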

Xen is relatively efficient, with guests able to talk directly to the hardware without going through emulation layers; the counter to that is that it's generally complicated to set up, with the admin needing a high degree of knowledge about the hardware to ensure each guest runs efficiently. The "easy" ways to administer a Xen system can result in a very slow system. And like emulation, Xen has relatively clumsy boundaries when you need to assign resources to specific guests.

The fact each Xen guest is aware it's running in a virtual environment means that you get certain huge improvements. It's easy, for example, to reboot a computer (as in shut it down, cut the power, wait ten seconds, and start it back up) without actually killing the Xen guests, which - beyond seeing the time move forward suddenly - will act as if nothing has happened. More advanced Xen servers make it easy to migrate guests from one physical computer to another without shutting anything down. To be fair, more advanced emulators have the same capabilities, but Xen does it with the full cooperation of the underlying operating system, which in theory, at least, makes the entire process more reliable. The underlying system is expecting an outage, and so doesn't get upset about it.
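With the xm tools, that kind of suspend and resume is a one-liner; a sketch, with the file and host names invented for illustration:

 # xm save guest1 /var/lib/xen/save/guest1.chk   # freeze the guest's state to disk
 # xm restore /var/lib/xen/save/guest1.chk       # after the reboot, resume it
 # xm migrate --live guest1 otherhost            # or move it to another server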

Para-virtualization with a hypervisor like Xen's is probably the best compromise between application transparency (applications are completely unaware that they're on a virtual platform) and efficiency. However, the requirement for operating system level support means it's hard to run a fully para-virtualized VM system. Xen supports a fall-back mode where emulation is used to run operating systems that do not support its hypervisor, but obviously the moment you use it you lose the advantages of the para-virtualization approach and might as well use something more geared towards emulation.


Operating system level virtualization (or virtual virtualization!)

An extremely common virtualization scenario is where a single computer serves large numbers of guests that all run the same operating system (or operating system kernel.) This comes about because enterprises usually make a deliberate decision not to diversify too far from a standard platform, because amongst nerds like me there's usually a favorite system we're comfortable with, and because ISPs that offer VPS services usually offer hundreds of customers the same basic systems.

Unix-style operating systems have offered some tools that provide the ability to host different environments running from a common kernel for quite a while, although the concept has only recently become advanced enough for system administrators to take it seriously. The original system, chroot, permitted a tree of processes to see a branch of the core file system as its root file system, and you'd load up that branch with all the files that make up a Unix system. While it worked for some applications, chroot is too crude to work for anything but the simplest applications. Networking, for example, is still shared amongst environments.
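For the curious, a throwaway chroot is easy to build on a Debian-style system - the directory name here is arbitrary:

 # mkdir -p /srv/jail
 # debootstrap lucid /srv/jail    # populate the tree with a minimal Ubuntu
 # chroot /srv/jail /bin/bash     # this shell now sees /srv/jail as /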

The BSD branches of Unix implemented a system called "jails", which took the chroot concept and added all the other aspects of an operating system to ensure that each "jailed" process tree would really have an entire environment that it could play in without ever seeing any evidence it was part of a bigger whole.

It's taken a while for Linux to adopt the same concept. An early version of the concept is OpenVZ, which is used by many VPS providers, and does exactly what you'd expect from the above. A single kernel runs multiple environments, each seeing a subtree of the file system as being their root file system, each seeing their own network devices, and so on. OpenVZ required a patched Linux kernel, and the patches were never integrated into official Linux, and so it had limited support, but it's proven to be very popular nonetheless.

What can OpenVZ run? Well, essentially any operating system that can use the version of the Linux kernel running on the host. The operating system usually requires some small modifications, so that, for example, it doesn't start checking the disk for errors when it starts up, but once up, most applications will never know the difference between the hosted operating system and the main operating system. You can run a combination of different operating systems as long as those systems all support the kernel you're running.
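Day-to-day management of an OpenVZ guest is done with the vzctl tool. A minimal sketch, with an arbitrary container ID and template name:

 # vzctl create 101 --ostemplate ubuntu-10.04-x86_64   # build from a template
 # vzctl set 101 --ipadd 10.10.0.101 --save            # give it an IP address
 # vzctl start 101
 # vzctl enter 101                                     # get a shell inside the guest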

While OpenVZ isn't supported by the Linux developers, the OpenVZ and Linux developers have been cooperating on a very similar project called LXC. LXC is a Linux-friendly version of the same concept, and can run on an unmodified kernel, because all the modifications needed are being integrated into the main Linux tree. LXC uses a Linux technology called "cgroups" that's designed to replace the functionality OpenVZ's kernel modifications added.
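The LXC userland tools follow much the same pattern; assuming the ubuntu template that ships with the lxc package, something like:

 # lxc-create -n web1 -t ubuntu   # build a container from the ubuntu template
 # lxc-start -n web1 -d           # start it in the background
 # lxc-console -n web1            # attach to its console
 # lxc-stop -n web1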

It's important to understand that, while reliable, LXC is not considered production quality yet. That doesn't mean you can't use it, or even entrust your important data to it - the very nature of LXC means it's as reliable as the host operating system. However, you need to understand that LXC has certain security holes in it that are going to take time to fix. Those holes will not affect a normal application, even a bug-ridden one, but they do mean that if the server is public facing, and a hacker is able to gain access to it, that hacker can theoretically gain access to the host computer too.

Operating system virtualization has advantages and disadvantages over other virtualization systems. Like para-virtualization, it has no need for special hardware support, as the host operating system inherently supports the concept natively. The technology is generally much, much, faster and more efficient than the alternatives, as there's no complexity at all between the operating system and the hardware it runs on. And it's much, much, easier to allocate resources, with it being perfectly possible to increase memory and disk space in real time. You can even give all your guests unlimited resources, and start to rein in any that cause problems when the problems start to show up.
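With OpenVZ, for instance, reining a guest in is a live operation - the limits below are arbitrary examples:

 # vzctl set 101 --diskspace 20G:22G --save   # soft:hard disk quota, applied live
 # vzctl set 101 --cpulimit 50 --save         # cap the guest at 50% of one CPU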

Major downsides? There's no easy way to save a guest instance, so all the fancy Xen functionality where you can reboot your computer or move guests between computers without shutting down the guests themselves is simply impossible. The instances are too ingrained in the host operating system for it to be possible to separate them out and save them. And that also means that critical operating system updates, such as updates to the kernel, require you to restart the entire system, the host and the guests.

Another downside is that, as yet, no operating system level virtualization platform for Linux is completely, 100%, transparent. One issue, for example, is that because all memory, real and virtual, is shared, it's difficult to give each environment a picture of how much memory is available. This has caused some applications that specifically check for virtual memory (swap memory) to fail because the Linux APIs that report on available memory cannot sanely report what's available in this kind of environment.


So what should you use?
  • If Xen is an option to you, I encourage you to check it out especially if you're trying to run multiple servers.
  • For testing operating systems or running an additional desktop operating system, I recommend an emulator called VirtualBox. It's free, supported by Oracle (it's a Sun technology), and it's extremely good. It runs under Windows and Linux.
  • Ubuntu users who need to run servers probably need to look into KVM at the moment. There's more native support within Ubuntu for KVM. I don't particularly like the concept, it's very much emulator oriented, but it might suit what you want to do if you can't get Xen to do it.
  • If you have limited hardware, and you're not too concerned about security, LXC is a pretty decent option. If you are concerned about security, I encourage you to look at OpenVZ. For the most part, when LXC is finished, it should be capable of running OpenVZ VMs unchanged, so the only problems you'll run into with OpenVZ are the lack of mainline support - and that whole "need to reboot all servers from time to time" thing, which applies to LXC too.
 Have fun!

Sunday, November 27, 2011

Setting up network bridging for VMs in Ubuntu

I had cause to set up a network bridge in a new Ubuntu system this weekend. For those unsure of the concept, the idea of bridging (in this context) is to connect virtual machines to a network. The name comes from the hardware concept of bridging, where multiple networks are glued together by the use of a "bridge", a hardware device that does all the routing, mostly transparently.

In Linux, there's a "bridge network device" set up to support providing networking to VMs and other more advanced networking concepts.

In the process of configuring my bridge, I made numerous mistakes based upon a hazy idea of what I was setting up. So, if you need to set up bridges for LXC, KVM, or other systems, this is what you need to know and do.


1. What the bridge device is

The bridge device appears as "br0" (like "eth0") on the host machine and it all but replaces eth0 as your primary network device. It handles all routing to and from the external network, and within the machine to the VMs that are using it. How it appears to each VM depends on the virtual machine system you're using - typically, those VMs will think they have a device called "eth0" that's connected to the external network, but the network they're connecting to is the one controlled by the bridge.

Each bridge device (br0, etc) is associated with a "real" device like eth0. If the bridge device sees traffic coming in from the real device addressed (at a low level) to one of the virtual machines or to the host, it routes traffic to that VM or the host. Likewise,  it routes internal traffic internally, and routes traffic from VMs or the host to external addresses via the "real" device. The addressing is via standard Ethernet MAC addresses, not IP addresses - those are higher level addressing concepts, and the bridge device tries to be protocol neutral. The overall effect is one where the network controlled by the bridge is merely an extension of your existing Ethernet network.
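If you want a feel for how little magic is involved, the throwaway, by-hand equivalent of the persistent configuration in the next section looks something like this (using the example addresses from later in this post):

 # brctl addbr br0                                 # create the bridge device
 # brctl addif br0 eth0                            # enslave the real NIC to it
 # ifconfig eth0 0.0.0.0 up                        # the NIC itself carries no address
 # ifconfig br0 10.10.0.3 netmask 255.255.0.0 up   # the bridge does
 # route add default gw 10.10.0.1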


2. How you configure it

I'm not going to cover the VM side as that'll depend on your virtual machine manager, but here are the key points for setting up the system. I'm assuming that you DON'T want NAT - virtually none of my readers will want NAT given their external router already handles that for them. The only exception I can think of is if you're experimenting on your work PC, you aren't the network admin, and you don't particularly want to bother him or her. Anyway, here goes:

Step 1: You need to disable Network Manager and install bridge networking.

Completely. Network Manager isn't really what you want running on a server in the first place. (To be honest, I've never liked the tool; I see why it's there, but it's not particularly reliable in my experience.)

Disable it using:

  $ sudo -s
 <your password>
 # service network-manager stop
 # apt-get remove network-manager

sudo -s makes you root until you exit; I'm going to assume you remain root for the remainder of this article. Be careful! Should I ask you to reboot, remember to sudo -s upon logging back in if you need to repeat any of the steps in Step 1 or Step 2.

You'll also need to add the bridge support. You can do this using:


 # apt-get install bridge-utils



Step 2: Set up /etc/network/interfaces and other configuration files

I'm going to make the following assumptions here.
  • You're using a static IP address. For the examples, it's 10.10.0.3
  • Your router is 10.10.0.1
  • Your DNS server is 10.10.0.2.
  • Your netmask is 255.255.0.0 which makes your broadcast address 10.10.255.255. 
You'll need to modify the examples accordingly for your situation. IPv6, incidentally, should "just work" if you're using stateless configuration - at least for the bridge and host, your VMs are another matter but that's because of peculiarities in the VM implementations, not the bridge system.

First, edit your /etc/resolv.conf and put in something sensible like:

nameserver 10.10.0.2

Second, edit your /etc/network/interfaces file and make it look like this (with the IP addresses changed to suit your network, obviously.)

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet static
   address 10.10.0.3
   netmask 255.255.0.0
   gateway 10.10.0.1
   bridge_ports eth0
   bridge_stp off
   bridge_maxwait 5
   post-up /usr/sbin/brctl setfd br0 0

The biggest thing you'll probably notice about the above is that there's no configuration for eth0. This is intentional. Other than the interface coming up, you don't want it to have anything assigned to it. The bridge device is going to be the one that receives traffic for your host, it'll handle the routing to and from the Ethernet port.


Step 3: Reboot and check

Now, I know you're probably going to ask "Can't I just test it without rebooting?", well, you can, but you will not know for sure that everything's going to work the next time the system goes down. Given your network is all screwy right now anyway, you're not going to lose anything by doing this. So reboot, and upon restart check that your networking is doing what you think it should.

Type the following to verify the network is up and running properly:

 $ ifconfig

This should show, at minimum, the following devices as being "up":

br0 - which should be configured as above
eth0 - which should be up, but have no IPv4 address, and if it has an IPv6 address, it should only be the link-local address (begins with fe80:)
lo - which should be configured as 127.0.0.1.
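You can also confirm that eth0 has been enslaved to the bridge - the bridge id shown here is a placeholder and will differ on your machine:

 $ brctl show
 bridge name     bridge id               STP enabled     interfaces
 br0             8000.001122334455       no              eth0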

$ route -n

This should show, at minimum, the following routes:

Destination: 10.10.0.0: Gateway 0.0.0.0, Genmask 255.255.0.0, Iface br0
Destination: 0.0.0.0: Gateway 10.10.0.1, Genmask 0.0.0.0, Iface br0

$ ping www.google.com

This should work. If it can't find the host, your /etc/resolv.conf is wrong. If the destination host is unreachable, then there's probably a major problem with your configuration so check the numbers and, obviously, make sure your Internet is working fine from another machine on the same network.


Step 4: Configure your VMs

At this point your bridge is working, so you can get to the next messy stage and start configuring your VMs to use it. That's another story, and what you do will depend upon what VM system you're using. Good luck!

Friday, November 25, 2011

Ideapad K1 First Impressions (some updates)

I finally took the plunge and bought an Android tablet this week. It's a Lenovo Ideapad K1, which is essentially a "standard" (ie 10", 1280x800) Honeycomb tablet. As my regular readers know, I'm still unsure about the whole tablet thing, and to be quite honest with you, this device hasn't made me any more positive about the idea.

Lenovo problems

There are four issues with the tablet that are specifically to do with Lenovo's design decisions.

1. Update hell

The first is that the operating system that's pre-installed is horribly out of date, and is bug ridden. To upgrade it, you have to install every single system update that's ever been made. Each update installs, the system is rebooted, and then you repeat the process until you have Honeycomb 3.2. It's horrible, laborious, and completely unnecessary.

2. Unnecessary proprietary dock connector

Popularized by Apple, the proprietary dock connector is a single socket that's essentially a USB socket that's been made wider, uglier, and incompatible with standard USB accessories. Each manufacturer has their own design, and typically the proprietary dock connector is provided instead of, rather than in addition to, a regular USB connector.

It's a stupid, consumer-hostile system that only exists so that the manufacturer can create, or license, overpriced accessories. You can't use a standard charger or USB cable should you leave the official versions at home. The K1 has a proprietary dock connector; it has no standard USB connectors. Manufacturers who play these games are demonstrating clear contempt for their customers, and that is always an extremely good reason to avoid them.

3. No USB charging

Want to charge your smartphone? Like most devices that charge using a USB port, all you generally need is the USB cable, so you can plug it in to your computer. This also ensures you have a neat way to keep it around - plug it in, and while you have complete access to the device's files from your computer, the device is also kept fully charged.

But not the Lenovo K1. The K1 needs to be plugged into the power supply to charge. It will not be charged from your computer's USB port. And, because the same connector is used for USB and charging, this means you cannot charge it and access the files on it via USB at the same time. Well, there's probably an expensive accessory you can buy but...

4. Locked down

Android has a clear hierarchy of accessibility for system tinkerers who want to do advanced things on their devices.

The one that almost all devices support, and thankfully the K1 does too, is the ability to install any application from any source.

But there's a set of other levels too that are generally restricted, to some extent, because if you were to access them on a phone (that is, a device with an almost direct ability to pump out data onto a congested mobile phone network) you could, in theory, do stuff that would affect people other than yourself in a very negative way. These levels are:
  • Root - a level that allows users to install more advanced software
  • Firmware - a level that allows users to install versions of Android that didn't come from the manufacturer, for example, CyanogenMod.
The K1 is a tablet. There's no reason for either of the above levels to be disabled on a tablet, any more than you'd expect them on a PC. A tablet is not a mobile phone - at least, this one isn't. But, in their infinite wisdom, Lenovo has, indeed, put in locks to make it harder to access these levels of security.

Which is not to say that people haven't found ways to bypass the above, but right now the process is ugly, and involves exploiting loopholes in the update process. Honeycomb is damaged goods - an OS always presented as a prototype version of Android, whose design is so embarrassing that Google, while making it open source, went out of their way to make it difficult to build a real version. Given that, the fact that updating the operating system to, say, Icecream Sandwich without Lenovo's express involvement has been made difficult is a big red flag.


Honeycomb problems

Honeycomb itself isn't a bad tablet environment. It's fairly open, it looks good, it's easy to use - well, once you get the hang of the buttons. But there are some major issues, and not all of them are going to be fixed in post-Honeycomb operating systems.

1. Poor support

The big argument for using the iPad is that Android doesn't have high quality support in the tablet space, and that's true. I found the following:
  • Most apps don't even bother to consider the tablet form factor. Twitter is a glaring example. It's a horrible app to use on a 1280x800 screen.
  • Many apps don't install at all. (Update: app previously mentioned here, Yahoo Mail, now works, albeit it's a "mobile app made full screen" thing.)
  • Many apps that support the form factor have been restricted from running on entirely compatible hardware. This includes half of Amazon's stuff, Hulu Plus, Qik (the video conferencing system T-Mobile and Sprint are popularizing), and many others. It's usually because of "exclusivity" agreements - but these agreements are often stupid. Qik, for example, depends upon network effects that can only be had via wide availability, so why the hell can it not be installed on any random Honeycomb tablet?
Websites that support a mobile view tend to load in that view, which is pretty ridiculous given the web browser has a larger screen than most Netbooks. And there's no way to tell the browser to pretend to be a non-mobile browser.


2. It's slow.

Honeycomb is chronically slow. Scrolling a web page typically results in chunks being rendered after you've stopped scrolling. Returning to the home screen often takes seconds. Sliding left and right typically causes the screen to move in a jerky fashion. The K1 is a pretty powerful device, it boasts a dual core Tegra, a device that can give most of the better Netbooks a run for their money. The fault has to lie with Honeycomb, either with Google's design or Lenovo's specific implementation.

Update: Just to prove the point, my testing with Grand Theft Auto 3 proved that there's a pretty spiffy CPU under there. This is a full 3D game, and it's beautifully smooth, running at the K1's full resolution. So why's the web browser so awful? I used to use a 233MHz Thinkpad with a 1024x768 screen and a really poor GPU back in the early 2000s, and early Firefoxes on that could scroll without problems!


3. Printing

Lenovo has installed some third party support for printing on the K1, but it's not integrated with the operating system, and it's actually a demo - you have to register to make it work. The operating system itself contains no native printing support. Google offers an API called "Google Cloud Printing", supported by some of their apps, which essentially involves a ridiculous system whereby you... I'm not making this up... you run Google Chrome on a Windows PC or Mac on the network with the printer you want to access, go into Settings, select the Cloud Printing option, add a printer from your network, make sure you're online, and then select this printer on your tablet, which uploads the thing that needs to be printed to Google, which in turn sends it back to the web browser on the PC or Mac which then prints the page.

What. The. Hell?

There's nothing in theory to prevent the tablet from running something more like a standard operating system, where printers on your network are accessible, and you just select them and print with them, maybe with some kind of streamlined device driver system, but, well, no, Google wants you to upload anything that needs printing to them, because... because it's a cloud! Woohoo! Cloud computing leveraging synergies for Web 2.0 platforms, yeah!

If the tablet is supposed to be a standalone device that can be used for productivity, it needs a proper system for interfacing with printers. As far as I'm concerned Honeycomb doesn't have this.


4. Google Docs

The productivity side of Honeycomb is supposed to be Google Docs, and I like Google Docs... on a PC. On a tablet the app is almost identical to its mobile phone cousin. You can edit files, but it's a pain, you don't get to see any formatting, and it's not clear to me that any consideration was given to the form factor.

If the tablet is supposed to be the next generation of personal computing devices, you'd have expected Google's own office suite to at least be up to the task.



Conclusion

I hope Icecream Sandwich is better than this. All I can say right now, though, is that Windows 8 is probably going to take over the entire tablet space. Honeycomb is a prototype, but many of the decisions Google has made (such as Cloud Printing), and the fact that even major groups like Twitter, Yahoo, and whoever writes Google Docs aren't taking the platform seriously, make me think that this isn't where the industry will ever go.

Monday, October 24, 2011

Android and Open Source (Updated)

(Updates below)

I read a lot of commentary about the Android operating system with a large subset of people convinced that Android isn't "free software" or "open source". Much of this is based on misconceptions. For those who want the executive overview, here's what I'm about to explain.
  • The core Android operating system is intended to be "free software" - that is, made up entirely of code you are free to modify and redistribute as you see fit.
  • Android is usually distributed with a suite of applications written by Google. These applications are proprietary. They do not make up Android proper, but most users consider them part of Android nonetheless. The most important of these proprietary apps is the Android Market.
  • Android is usually distributed with some modifications by the manufacturers of the devices that run it. Those modifications are, more often than not, proprietary. So while the core Android system might be free software, the version on your phone is probably not.
  • One version of Android is almost entirely proprietary. This version is "Honeycomb" and was the tablet peer of "Gingerbread", which was open source. (***UPDATE - see below***)
  • Google have made it clear the latest version of Android, Icecream Sandwich, will be free software. The source code for this operating system, they say, will be released once devices based upon the system are available. (***UPDATE - see below***)

Free Software and Open Source

I'm going to go off on a slight tangent here. There are two terms that are usually used to describe the same thing, "Free Software", and "Open Source". Advocates of the latter usually argue that the terms are equivalent, that all software that is free software is open source and vice versa.

The term "Free Software" is defined by a group called the Free Software Foundation, and essentially means software you are free to modify and redistribute. Sometimes there are some minor conditions applied to that freedom, such as a stipulation that anyone who receives code you've modified must be given the same rights to the modified version that you did.

The term "Open Source" is defined by a group called the Open Source Initiative, and essentially means software that anyone can contribute to. Emphasis is put, by the OSI, on the idea that nobody controls the process, anyone can pick up a copy of the software and share their contributions with everyone else. The reason that emphasis is there is because one of the goals of those promoting open source was to get businesses involved and supportive of the concept - promoting the idea that different businesses, even rival businesses, can work together on a piece of software for their mutual benefit.

In the case of Android, the term "Free Software" definitely fits, as Google has been at pains to ensure the basic rights associated with Free Software are given to end users. However, the term "Open Source" might not be as applicable as it is to, say, the Apache HTTPD web server, or the Linux kernel. Google makes the source code available under a set of free software licenses, but it doesn't encourage participation in development by non-Google entities. Third party development is, instead, limited to what are essentially friendly forks of the Android system.

Does that mean Android isn't "open source"? Open Source advocates would probably argue that it is, because third parties can still contribute, it's just they don't get much say in what one specific version of the system looks like. But it certainly means that the term "Free Software" is a less misleading description of the system.


Core Android and the AOSP

Android's core is Free Software, and a Google project manages this core under the title AOSP (Android Open Source Project.) The AOSP covers the entire Android system, and it's possible to build a device that's fully functional and extremely capable using just the AOSP code.

Thus far, every version of Android that is available on a device has been released under AOSP, with the exception of Honeycomb. Source for Icecream Sandwich, the very latest version of Android, is pending, and Google have announced it will be released just as soon as devices with the operating system become available. (***UPDATE - see below***)

The bulk of AOSP is licensed under one of two licenses. The Linux kernel is licensed under the GPL version 2, a popular "reciprocal" Free Software license. The rest of AOSP is generally licensed under the Apache license, a popular "permissive" license. Google could not choose an alternative license for Linux as they do not own the Linux kernel, but the rest of the system was licensed under the Apache license so that manufacturers can make proprietary modifications to core Android services if they choose to.



Google's Proprietary Add-ons

Virtually all phones running Android come with something called the "Android Market". It's not merely a useful tool, for most it's a critical tool required to obtain third party applications. While Android is open and allows apps to be "side loaded" - that is, directly copied onto the device and installed - many applications are simply unavailable that way, and the Market doesn't merely make applications available in an easy to install way, but it also does a fine job keeping track of updates.
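For what it's worth, sideloading is a one-liner if you have the Android SDK's adb tool installed and USB debugging enabled on the device; the .apk file name here is just a placeholder:

 $ adb devices               # check the device is visible over USB
 $ adb install MyApp.apk     # copy the package over and install it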

Despite this, the Market is not actually part of Android. The Market is, strictly speaking, an optional app that most phone makers choose to install. And the Market is proprietary: you cannot obtain the source code for it, and you cannot redistribute it without permission.

Google has a number of reasons for wanting the Market to be proprietary. These are:
  • Many developers who want to distribute apps for Android want a secure channel to do so through, be it because their apps are proprietary too and they want users to pay money for them, or simply to ensure that if a user wants a copy of their app they get it through a trusted channel, and thus know the app is "real" and not a fake version.
  • Google wants to have some leverage over the manufacturers of devices that run Android, to ensure the devices they make can interoperate with one another. Google doesn't want developers unhappy because they find their apps do not work on many devices because of major differences between those devices and the standard Google model.
The CyanogenMod operating system is one third party version of Android that initially hit problems with the proprietary Market app. In this case, the CM people weren't doing anything outside of the Android "spirit", and Google and CM ultimately reached an understanding to ensure that CM users could install the Market app without too much trouble. But the episode led to some misunderstanding, with many people assuming that Google had told CM that Android itself was not Free Software, when in fact the issue was simply with the Google proprietary apps for Android.

Many devices exist that do not come with the Google suite installed, because they do not conform to Google's rules about how an Android device should behave. This includes the vast majority of non-phone devices (tablets, MP3 players, etc), with the exception of some Samsung tablets and those tablets running Honeycomb. Usually such devices include an alternative to the Android Market, albeit usually one with fewer applications.


Your phone maker's proprietary add-ons

The version of Android that comes installed on your phone has usually been customized in many ways by the manufacturer before release. The majority of these customizations are proprietary. These customizations include:
  • Proprietary operating system extensions such as alternative keyboards (like Swype), or even low level services like UMA (T-Mobile's "Wifi calling")
  • Reskinned user interfaces (like HTC's "Sense")
  • Additional apps (like T-Mobile's "My Device")
All of these add up to a situation whereby you might consider Android itself to be free software, but the version on your phone is only partially free.


Honeycomb and Gingerbread

Honeycomb - versions 3.0-3.2 of Android - was kept proprietary for a variety of reasons. Google was quite open about this, and never suggested that other versions of Android would get the same treatment. (**UPDATE - see below**)

Despite the version numbers, Honeycomb is a "peer" of the Gingerbread (2.3-2.3.x) version of Android, not a more recent version. Prior to the release of Icecream Sandwich, I heard some people complain that "Android isn't open source because the latest version is proprietary" - that simply isn't true; Honeycomb is not the latest version. Analysis of Gingerbread suggests that the two operating systems share some code not in their predecessor, 2.2.

Why did Google make the decision to close Honeycomb? Honeycomb's poor performance and clues left in Gingerbread have led many to believe that Honeycomb itself is just not a particularly good operating system. Officially, Google has made it clear that Honeycomb is a one-off fork, and they don't want to see anyone basing real code upon it. And keeping Honeycomb proprietary also helped Google exert quite a bit of control over the first official "Google tablets", ensuring they were high quality, high performance devices.

Unlike Honeycomb, Icecream Sandwich is a single operating system that incorporates the functionality found in both the tablet and phone editions of Android. ICS is, architecturally, where Google wants Android to go, and so they have less reason to want to limit its distribution. Likewise, Google has less reason to want to control what devices run ICS given that high quality tablets are now in circulation.


Icecream Sandwich

The latest announced version of Android is "Icecream Sandwich", also known as 4.0. ICS will be available on devices within a few weeks, and Google has made it categorically clear that once those devices are available, the AOSP version of ICS will be made available too. (**UPDATE - see below**)


Conclusion

As operating systems go, there's no reason to describe Android as anything other than "free software" or "open source". As a whole, it is at least as free as, say, Ubuntu. It is important however to remember that Google has a certain amount of control over future versions of the system, and one day Google could do to a version of Android what it did to Honeycomb. It's also important to remember that, like other operating systems, you'll probably end up having to use one or two proprietary apps to make full use of the system, and that your phone's manufacturer also has a lot of say in what exactly you end up with.



Updates

Since this article was written, Google has released an AOSP version of Icecream Sandwich. Additionally, the sources to Honeycomb have also been released under the same open source licenses, but there's a catch: Google isn't indicating which revisions of each file were put into production.

So at this point, all versions of Android are technically open source and free software. It remains the case, however, that manufacturers' "improvements" to the operating system, and the Google suite of applications including the Android Market, are proprietary.

Thursday, September 29, 2011

When you've already decided the story

I don't know about you, but if you have any interest in the markets, then the news around a month ago became very surreal. The S&P, a ratings agency, "downgraded" the US debt on a Sunday. The media saw this as a major story (which, I guess, it was), and automatically assumed that the markets would panic the following day because government bonds were no longer considered as solid an investment, at least by one major ratings agency, as they had been.

The following day, there was a sell-off - but not of government bonds. In fact, government bonds rose in value. The market was getting rid of stocks and buying government bonds. Why? Well, actually, this had been going on for a week. In the previous week, the market had collapsed almost every day, but government bonds had risen in value, as panicked investors sold what they saw as unreliable stocks and bought into what they saw as safe bonds. And these investors were panicked because... well, I assume because the economy isn't very good right now. Governments across the world are implementing so-called "austerity" measures in order to reduce their own debts, which is sucking money out of the economy, increasing unemployment, and making it much harder for those selfsame governments to run stimulus programs to revive their flagging economies.

In that climate, nobody really cared what S&P had to say, especially as S&P wasn't saying "The US government is going to default", just "The US government has a fractionally higher chance of not paying its bonds on time than it did previously."

So what did the media do? Well, having decided to write a "Markets collapse due to S&P rating change" story on Sunday, they wrote it on Monday anyway! And they've continued to push this line since, as have the politicians, the pundits, and pretty much everyone except investors and economists - you know, the people who actually know what they're talking about but who rarely get any air time.

The Amazon "tablet"

So with that in mind, I invite you to take yesterday's coverage of Amazon's new color Kindle with a pinch of salt.

For the last few months, a number of publications have heard rumors (and sometimes seen evidence) that Amazon was going to release an "Android tablet". With the words "Android" and "tablet" generally associated with "Competition with Apple" and "iPad", the media wrote the story: Amazon was going to take on Apple when everyone else had failed. It would be a mega tablet, low cost, but capable, and would beat the iPad because it would be so cheap. iPad iPad iPad.

And yesterday Amazon finally released its new range of Kindles. There's a low cost model, plus a slightly more expensive model called the Kindle Touch, and then there's the color Kindle, the Kindle Fire. The latter is the iPad competitor.

Or so the media said.

But, hang on. Is anyone else producing the same thing? Why yes, one obscure company nobody has ever heard of is producing a similar line of devices. They don't have a "cheap" version, but they do have a low cost touch-based reader, and a slightly more expensive Android-based thing that can run apps and so on, with a color LCD.

The company? Barnes and Noble. Which is arguably Amazon's primary competitor.

So if Amazon is producing a range of devices that happen to be extremely similar to its largest competitor, then how does the "Competing with Apple" thing come about?

Part of this is a misunderstanding of the market. The iPad is not simply a color eReader; it's a more sophisticated device designed for basic computing tasks. The iPad can be used as an eReader, but few people buy it as one, because it's suboptimal for the task. You can't read the screen in certain lighting conditions, you have to keep the thing on a charger when you're not using it, it's expensive, and, well, it's big. Really big. The Kindle might not fit in your pocket (although the Kindle Touch may) but it's small enough to go anywhere you'd take a book.

B&N initially produced an eInk Nook before releasing the Nook Color, which seemed to be a response to demands from people who felt the iPad was actually a superior eReader because, well, it was in color. And it's sold moderately well, but largely to people who saw it not as an eReader, but as a neat portable media and web widget. It's a poor eReader, and a poor tablet, but it's not really intended to be either. And it's given B&N the opportunity to sell certain services they wouldn't otherwise be able to.

Meanwhile, seeing eInk as the best technology for reading, B&N recently released the Nook Touch, which uses touch screen technology to produce a nicer, easier to use eInk eReader. B&N obviously see the Nook Color as worthwhile, but not as the "future" of eReading.

Amazon's Kindle, in the meantime, has continued to sell like hot cakes. That said, as a Kindle user myself, I can tell you it's far from perfect. The user interface is clunky; there are some nice features, like web browsing, that are buried in the device and difficult to use because the device was never designed with them in mind; it's capable of playing music but, again, doesn't do it well; and it really needs a revamp. Quite honestly, until yesterday's announcement, I too was considering a Nook. A Nook Touch.

Meanwhile, Amazon themselves have a number of services to sell that would benefit from being less dependent upon third parties. For example, you can use a Roku box to watch a movie via Amazon Video-on-Demand, but the Roku box isn't well marketed, and you need to know it can do that.

So Amazon has revamped its Kindle range. The eInk Kindles are designed for reading. The new Kindle Fire is designed as, essentially, a network connected media player for Amazon's content. You'd use the Kindle Fire where you currently use an MP3 player AND a DVD player. And that's it. It has some other nice features, like a web browser, and Amazon intends to make it easy to buy apps for it so you can extend its functionality. But it's not positioned as a next generation portable computer. If you want an iPad, the chances are you're not going to be satisfied by the Kindle Fire.

Why? Well:
  • The Kindle Fire has a smaller screen. This limits the capabilities of the apps you'll want to use with it.
  • You can't connect the Kindle Fire to the Internet unless you have an existing connection for it to use - it's Wifi only. (Interestingly, 3G versions of the other Kindles, save for the cheap one, are available.)
  • You can't use the Kindle Fire for any communication beyond email and IM. There's no microphone, for example.
  • You can't use the Kindle Fire to create anything - you can't take pictures for example.
I don't think anyone at Apple is particularly concerned by the Kindle Fire. I'm not suggesting there's no overlap, and Apple does make some money (not a lot, but enough to be concerned about) selling the same kinds of content that Amazon does, but that's not Apple's core business, and people who are buying iPads are, at least, expecting better capabilities. The iPad is primarily a communications and media device. The Kindle Fire is primarily a media device.

The media had already written the story, and so what happened yesterday was somewhat confusing. But if you look at the products in the narrow way defined by yesterday's spin, then nothing makes sense, and you'd be forgiven for not really being able to make an informed decision about what to buy, or whether to buy anything at all.

Here's what you need to know:

  • If you really are limited for cash, need to read eBooks, and have access to the Internet, the $80 Kindle is designed for you.
  • Regular eBook readers will almost certainly find the $100 and $150 Kindle Touches (Wifi and 3G respectively) exactly what they want to use.
  • If you're looking to replace your MP3 player with something more advanced - something with a big enough screen for occasional web browsing, something you can watch movies on (or plug into a TV to watch movies on), and something that allows you to play games - and your phone doesn't do this already, or does but has too small a screen, then the Kindle Fire is intended for you.
If you're considering an iPad, then you probably want more than the Kindle Fire - to be quite honest, there are already low cost Android devices with specs somewhere between the Kindle Fire and the iPad, and for some reason you've ruled them out! Moreover, choosing between the two is a little like choosing between a computer and a games console, or between a raincoat and a T-shirt. The devices might overlap, but each is optimal for its intended purpose, and those purposes are slightly different.

Tuesday, August 23, 2011

And then there were three

We're down to three major tablet "platforms": Blackberry, Android, and iOS. HP has withdrawn its webOS tablets, and while some hope that other manufacturers may license the system from HP, it seems highly improbable that will happen. HP's decision last week left a dark cloud over the OS, and indeed, over HP itself. And that's a terrible end for what was a great powerhouse of innovation - or, at least, the merger of four powerhouses of innovation (HP, DEC, Compaq, and Palm).

How do I see this playing out? Well, here's how I see the situation, as a software developer biased towards free/open source software, someone who's been using tablets since the Nokia N800, and someone who dabbled with PDAs and handheld computers right the way back to the 1980s.

Let's start off with some random thoughts:
  • I'm still, frankly, staggered by the iPad's success. I didn't think it would get anywhere. It's very clearly more expensive and less useful than a Netbook. Yet they're selling as fast as Apple can make them.
  • I don't see many iPad owners actually using the things!
  • By any measure, the better non-iPad tablets in the $300-600 range offer more value and more power than the iPad, yet they're not selling.
  • I can see how tablets would help in businesses, especially customer facing roles from sales people to doctors. Bizarrely, I'm not seeing them used there.
Given this - and I may be completely wrong - I'm strongly of the opinion that people who are looking for something to solve a problem (that is, people concerned about functionality) are not, in general, buying tablets. A tablet is much more, right now, a "want" item than a "need" item: something that looks great and desirable, but isn't something you'd ever have thought you need. And as a result, naturally, the company with the best reputation for "design" is cleaning up. It may seem snarky, but the only company that could probably make a successful iPad competitor right now is Dyson!

So should Google, RIM, et al, follow HP in just giving up? Well, I have no idea. Here's what I think.

First, I think there are real applications for the tablet form factor, right now, that aren't being exploited. Watching a nurse and then a doctor attempt to navigate an arcane medical recording/diagnosis system last Wednesday, I realized that cheap, network connected tablets would offer an optimal user interface for doing much the same thing, in a way that would feel natural and work well.

Is anyone creating such applications? Probably, but it's going to be a while before such systems become commonplace. Developers have to get used to the form factor, and ironically, one thing holding us all back is the iPad. A closed system, the iPad doesn't lend itself to experimentation. The alternatives are only just starting to emerge - Honeycomb has been out for perhaps six months now, and the impression I get is that Google sees it as a rushed beta, not as anything they're proud of. The HP Touchpad has only really been available for a month, and the Playbook a little longer, but not by much.

Which brings me to a second point: while the underlying concept may be decades old, a viable version is still something everyone is trying to thrash out. The iPad brought some ideas to the table, concentrating on the multitouch concepts pioneered by the iPhone, and this has led to another concerted effort to try again.

Will the multitouch tablet succeed where the PDA, HPC, et al, failed? Well, by itself, if limited to the current players, I'm inclined to wonder if this type of device is a flash in the pan. After all, if the majority of iPad buyers end up not using them, is it probable they'll buy any more, or recommend them to friends, in the long run? But there's another factor that should be taken into account before writing off the form factor completely.

That factor is Microsoft's Windows 8.

The previews we've seen of Windows 8 thus far suggest Microsoft is going all-in to produce a universal version of Windows that will be as at home on a multitouch tablet as on a regular desktop computer. Microsoft isn't new to tablet computing: Bill Gates was using a tablet as his primary machine from the late nineties, and that computer was running Windows. A quick search on tech vendor sites usually produces a list of "Tablet PCs" which have the tablet form factor, but run a "Tablet edition" of Windows.

Despite its pioneer status, Microsoft has been conspicuously absent from the new generation of tablets. While Apple, Google, RIM, and HP ported their mobile operating systems to tablets, there's no suggestion that Microsoft ever plans to send Windows Mobile in the same direction.

What's the difference between Microsoft's approach and Apple's? Well, Apple's iPad runs a stripped down version of OS X that is, pretty much, completely incompatible with its desktop cousin. This is by design: Apple's view, quite rightly, is that the different user interface of the iPad means that developers need to take a different approach; they can't make crude ports of their existing tools and expect them to work.

But iOS isn't merely a different version of OS X from its desktop cousin; it's a considerably less powerful operating system, and it's closed. The result is that the iPad is always going to be somewhat limited: developers have to be willing to rewrite their software completely, and their software will be limited by the crude environment in which it runs.

Not so Windows 8. In the worst possible case, a user will be able to run an existing, unported application unchanged. Better, developers will find it easy to adapt their existing applications so that they work well on a Windows 8 tablet while losing nothing in functionality.

That's a game changer because at that point the multitouch tablet ceases to be an expensive toy, limited to web browsing and the odd game, and becomes something capable of replacing a laptop computer, or even a desktop.

The question here, I guess, is do we want that? Is a tablet really a natural form factor for a computer? I guess time will tell, and we won't know until a serious effort is made to adapt software to work using a natural tablet user interface.

So... where do Google and RIM fit in this world? I'm not sure they do. Android has a lot of potential, but Google aren't likely to produce a post-beta version of their tablet UI until later this year, and the operating system is missing an enormous amount of the functionality necessary for a desktop replacement. RIM, likewise, only has a mobile operating system.

To that extent I kind of understand where HP is coming from, and kind of don't. HP had high hopes for webOS and was talking up the possibility of the high level parts of webOS running over Windows on desktop systems in the future. But if Microsoft is going in that direction anyway, adding webOS to Windows would be pointless. And as a mobile system, webOS would never be able to make it alone, any more than Android or the Blackberry operating system could.

There's plenty, of course, that Microsoft can do to make a mess of things. Windows 8 may well need considerably more power than will be available in early multitouch tablets. And there's a part of me, as a big Android fan and supporter of free and open source software, that hopes the forthcoming "Ice Cream Sandwich" version (the version after Honeycomb) will be a success. But Google has a lot of work to do if they want to be sure they can head off Windows. At the very least, they need to reconsider the "Chromebook" project and consider integrating Android into it. Android itself needs the advanced networking and security features commonly associated with modern desktop operating systems. The development infrastructure needs massive improvements, and Google should consider helping port some of the more advanced applications to the system. I'm not convinced Google can do any of this, or, moreover, that they see it as necessary.

Still, we'll see. It's going to be interesting to see what RIM and Google do over the next twelve months. And Apple too, to be honest, because it looks as if they're keen to move a lot of multitouch functionality into Mac OS X, and it may well be that the iPad is eventually replaced by a tablet iMac.

Wednesday, July 27, 2011

Asterisk tip: ast_sip_ouraddrfor: Address remapping activated in sip.conf but we're using IPv6, which doesn't need it. Please remove "localnet" and/or "externaddr" settings.

Ever seen this "error" message from Asterisk?

"ast_sip_ouraddrfor: Address remapping activated in sip.conf but we're using IPv6, which doesn't need it. Please remove "localnet" and/or "externaddr" settings."

If you've enabled IPv6 with your Asterisk installation, but have also configured it to understand it's on a private network, accessing the outside world using port forwarding, then you've almost certainly come across the message above.

Well, I have some good news and bad news. The good news is that it's wrong. The code, as written, appears to assume that any incoming IPv6 connection is proof that the NAT hacks someone might have coded in their sip.conf are unnecessary. Of course, unless you live in an IPv6-only universe, that just isn't true.

The bad news is that until the Asterisk people fix it, you'll have to either live with it or recompile Asterisk. If you plan to do the latter, edit channels/chan_sip.c and search for this bit of code:

ast_log(LOG_WARNING, "Address remapping activated in sip.conf "
                                "but we're using IPv6, which doesn't need it. Please "
                                "remove \"localnet\" and/or \"externaddr\" settings.\n");

Surround it with /* */ to disable that call, save, and recompile.
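
Once commented out, the block should look something like this:

/*
ast_log(LOG_WARNING, "Address remapping activated in sip.conf "
                                "but we're using IPv6, which doesn't need it. Please "
                                "remove \"localnet\" and/or \"externaddr\" settings.\n");
*/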

I'll submit a bug report one of these days. One did exist, once, but it appears to have been closed due to confusion about the validity of the message.

Wednesday, June 29, 2011

Low cost devices to hook up to your Asterisk system

If you're considering setting up an Asterisk PBX (or any other type of VoIP PBX, for that matter), you've probably asked yourself what's involved. Obviously you need a computer to run the PBX on, but you're probably interested in knowing how you hook that PBX up to your own phones, and to the public phone network.

Well, the good news is that the sudden interest in VoIP means that a lot of the equipment you need that used to cost a fortune is now available at consumer prices. Let's go through it.


Hooking up to the PSTN

The PSTN - public switched telephone network - is the name for the international telephone network we know and love. You'll want to route calls in from it, and out to it. You might even want to play around with a number of different systems to get the best value for money.

If you already have an existing telephone line and want to use it, the easiest method is to use something called an FXO. An FXO connects to a telephone line and routes calls between it and your PBX. Any recommendations? Well, I haven't tried it, but the Grandstream HandyTone 503 is a low cost device that includes both an FXO and an FXS in one package. What's an FXS? We'll get to that in a moment. The HT503 routes calls from the phone system to your Asterisk PBX via your Ethernet network, and uses the industry standard SIP protocol, which is Asterisk's second language (after IAX.) The HT503 costs around $60 at the time of writing. Another option is the Cisco SPA3102, which has roughly the same capabilities.
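
On the Asterisk side, an FXO gateway like the HT503 just looks like another SIP peer. Here's a minimal sketch of the sort of sip.conf entry you'd pair with it - the peer name, password, and context below are my own invented examples, not anything from Grandstream's documentation:

[ht503-fxo]
type=friend              ; The gateway both sends us calls (from the line) and accepts them (for dialing out)
secret=changeme          ; Must match the SIP password set in the device's web interface
host=dynamic             ; Let the device register itself rather than pinning an IP address
context=from-pstn        ; Incoming calls from the phone line land in this dialplan context
disallow=all
allow=ulaw               ; G.711 uLaw is a safe codec choice for PSTN audio

Incoming calls then arrive in the from-pstn context, and you can dial out over the line with something like Dial(SIP/${EXTEN}@ht503-fxo).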

If you don't already have an existing telephone line, or you'd like to migrate to an all-IP network and get rid of your phone line, you also have a number of options.

One ultra cheap, but not quite ready for prime time, option is Google Voice. Google Voice offers VoIP using a system called Jingle, and Asterisk includes native support both for Jingle and for Google Voice's version, which has a few quirks. I've not had problems calling out using Google Voice, but incoming calls are proving to be a problem, so bear that in mind. Until I can tell you exactly how to make it work, I'm not going to cover this... yet. Be aware that if you're going to go down that route, you need Asterisk 1.6 or better; I recommend Asterisk 1.8. There are numerous guides on the Internet on how to do this, but the only version I found that worked was, well, the official Asterisk documentation.

You can also use Google Voice via an intermediary. IPKall offers a free incoming phone number, based in Seattle. You can configure Google Voice to call that number, and configure Asterisk both to accept calls arriving on that number and, when an outgoing call needs to be made, to have Google Voice call the number and route the call through it. You can use this system with an older version of Asterisk. The entire system is transparent, but it's somewhat ugly, and I'd question whether anyone wanting a reliable phone service should use it. Still, configured correctly, you can have a phone service that works just like a regular phone system, but one that costs no more than your existing Internet connection.

Free is free, and it always comes with some health warnings. In particular, bear in mind that you don't get 911 service with the above system. So if you're planning to go the free route, make sure there are other options. A prepaid cellphone will do the job.

Talking of cellphones, you can also use a cellphone that has Bluetooth capabilities with Asterisk. Calls can be routed and received via the cellphone, as long as it's within range of your PBX and you've compiled and installed the necessary add-ons for Asterisk (and, naturally, you have Bluetooth on the computer that runs Asterisk.) I'm wary of this route, as Bluetooth is great in theory but often leaves a lot to be desired in terms of reliability; still, it's certainly an option. If you have a family plan with unlimited calling, you could add a line and use that.

For paid VoIP services, there are many, many VoIP providers. Expect to pay around $10 per month, sometimes less, sometimes more, for a provider that offers SIP. Be very aware that there are a lot of frauds out there - companies that exist solely to rip off the big telcos - and while you might not be concerned about AT&T's bottom line, being disconnected and without phone service unexpectedly can be a major problem.

For a relatively trustworthy source of recommendations, I suggest BroadbandReports.com. Be careful, as a lot of websites that promote themselves as unbiased are actually run by the providers themselves.

What do I do? Well, I signed up with a company that offers SIP phone service, and I route my US calls via them. My international calls are routed out using Google Voice.
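
For what it's worth, most SIP providers end up looking much the same in sip.conf. A rough sketch, with an entirely invented provider name and credentials - check your provider's own documentation for the details:

; in the [general] section of sip.conf:
register => myuser:mypassword@sip.exampleprovider.com   ; Tells the provider where to send your incoming calls

[exampleprovider]
type=peer
host=sip.exampleprovider.com
username=myuser
secret=mypassword
fromuser=myuser          ; Some providers insist the From header matches your account
context=from-provider    ; Incoming calls from the provider land in this context
insecure=port,invite     ; Many providers don't authenticate the INVITEs they send you
disallow=all
allow=ulaw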


Hooking up your own phones

So that deals with the outside world; what about actually hooking up your own phones to the Asterisk server? You have a variety of options. The option most people start out using is a so-called "soft phone" - a SIP client that runs on their PC - but nobody wants to have to sit at their PC to make and receive phone calls.

You have at least four low-cost options when it comes to hooking up phones to your network, and you can use any or all of them. We've just mentioned soft phones, but what are the other options?

The most obvious is an "FXS", a device that hooks up a regular telephone - one with an RJ-11 jack - to your network. There are many options here, all of which provide one or more RJ-11 jacks, each individually configurable to act as a SIP client as far as your PBX is concerned.
  • The Grandstream HandyTone 502 is a very low cost VoIP adapter that's proven immensely popular, not least with VoIP providers themselves. Expect to pay well under $50 for this device. Grandstream also offers the HandyTone HT286, which has only a single port; it's a little cheaper if your needs are more modest. And as I said above, the Grandstream HandyTone 503 is another option that combines an FXO and an FXS in one box.
  • Cisco/Linksys offers a range of FXSes including the Cisco SPA2102 and the Cisco PAP2T. And, as I said above, the Cisco SPA3102 combines a single FXS and an FXO. Expect to pay between $50 and $70 for these devices.
There's not a lot to differentiate between the HT502, SPA2102, and PAP2T, so check the reviews and prices and go from there. I've been using two HT502s - one supplied by my VoIP provider - without problems.

All of the above devices are configured using a web interface, and do not need special software installed on a PC or anything like that.
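
As far as Asterisk is concerned, each FXS port is simply another SIP peer - one sip.conf entry per port. A minimal sketch, with invented names and passwords:

[kitchen-phone]          ; Port 1 of the ATA
type=friend
secret=changeme1         ; Must match the password configured for the port in the web interface
host=dynamic
context=internal         ; Wherever your internal extensions live
disallow=all
allow=ulaw

[office-phone]           ; Port 2 of the ATA
type=friend
secret=changeme2
host=dynamic
context=internal
disallow=all
allow=ulaw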

Option three: use a phone system that supports VoIP out of the box. Siemens offers two DECT cordless phone base stations that support SIP directly - as in, you plug one into your network, bring up a web browser, enter the details of up to six SIP accounts, tell the device which handset uses what and when, and then, well, it just works. I recommend the Siemens Gigaset A580IP based upon my own experience with it; it's very low cost for what it is, and works well with most of Siemens' other DECT handsets.

Option four: Android. Android Gingerbread includes a built-in SIP client, and prior versions of Android can have a third party SIP client, like Linphone, installed. While trying to make this work over your cellular connection may, ultimately, not be worth the effort, you can effectively turn any Android cellphone into a cordless extension phone when it's on Wifi in your own home.

And not just Android cellphones. Android is becoming a standard operating system for MP3 players, most of which are open enough to support the installation of SIP clients.

In fact, any tablet or portable media device that is capable of running a SIP client, that has a microphone and speaker, and has a Wifi connection, is capable of being an extension on your phone network. Useful to know if you have an old Nokia tablet running Maemo, for example.

Something I invite you to look into: the Archos 28 4 GB Internet Tablet is $80, and in theory has the spec necessary to make a very cool cordless phone. I'll be curious to hear your experiences using it.

What are your experiences of the above devices? Did I miss anything you think is worth recommending?





(All of the above links are Amazon Affiliate links; I get a small kick-back if you buy from there, but this only factored into my decision to link to Amazon, not into my choice of products! And I'm sure you wanted to know where to buy these from anyway!)

Friday, June 24, 2011

Android Gingerbread and Asterisk PBX

(This article has been heavily revised since it was originally written to include some how-to advice)

One of the nice features of Blogger is that it has some monitoring software built in that allows you to learn how many people are reading your blog and, to a certain extent, why. The "why" is generally "this guy searched for 'why are my underpants green?' or 'what elephants eat bamboo shoots?'" - as in, someone searched for those phrases on Google and found my site in the search results.

No sooner had I posted my article today about a particular Asterisk PBX error message than someone landed on my site on that very article - but, alas, they were searching for information about why they were having problems getting the Android Gingerbread SIP client to talk to their Asterisk server.

Well, that's a good topic because I'd been researching just that. Let's talk about this in more detail.

Android Gingerbread (2.3) contains a built-in, fully integrated SIP client. It's fairly clean, configures itself properly, and - I think quite intentionally - has very few configuration options.

Asterisk supports SIP natively. If someone has a SIP client, and you let them use it, they can make calls via your Asterisk PBX.

So, if Gingerbread supports SIP, and Asterisk does, this means the two ought to work OK, right? Well, it's hit and miss.

Gingerbread, Asterisk, and NAT

Let's first of all cut to the most likely reason why you're reading this article. You've configured your Gingerbread Android phone to use your Asterisk server. You're hoping it'll work, because you want to make Voice over IP calls over the cellular data network. Or perhaps you're OK with only making calls when you have Wifi coverage, but you're not having a lot of luck making calls from, say, coffee shops.

If your phone is behind NAT (and if you're on a mobile network, the chances are you are), then you'll have real problems getting the audio to work. The phone will happily register, and you'll even hear ringing when you phone extensions via the SIP client, but once the call is answered, you'll hear nothing. What's going on?

OK, here's the problem: Android's implementation of SIP is excellent. It's a clean implementation that does exactly what a SIP client should do.

So the issue's with Asterisk, right? Well, no, Asterisk's implementation of SIP is excellent. It's a clean implementation that does exactly what a SIP client should do, and then some.

Well, if it's not Asterisk and it's not Android, then where's the problem? Well, it's neither: it's the Internet. The Internet sucks.

To be precise, it's today's Internet that sucks. When we finally move over to IPv6, many of these problems will be resolved. But not yet.

To understand why, let's talk a little about SIP. SIP is designed to efficiently route a voice over IP call (actually it can do more than that, but VoIP is its primary use these days.) Generally speaking, there are three parties involved in a typical SIP call: a registration server, the caller, and the callee. In some cases, the registration server and the caller or callee are the same (at least from the point of view of the other party), but that's a topic for another time.

Now, here's how it's supposed to work. The caller places a SIP call. They start with the address of the callee, in the form sip:account@domain (e.g. sip:paul@sip.harritronics.com). The caller's client looks up "domain", figures out where the SIP service associated with it is, and then sends what's called an INVITE to that server for "account".

Prior to this, the callee has registered with the server. When the server gets the INVITE message, it checks that the callee ("account") is registered, and if so sends an INVITE on to the callee.

The callee then either accepts or declines the call. If they accept it, what should generally happen next is that the registration server asks both the caller's and the callee's SIP clients how each wants to be contacted, sends each side the other's information, and then backs out of the way. Or, if that's awkward, the server can act as an intermediary, but either way the server needs to know how each party wants to receive audio - either to tell the other party, or to route the audio itself.

Note - I know that's complicated, so let's explain it using two examples.
You make a call to party b@bsip.com. Your SIP client contacts your SIP proxy, and says it wants to make a call to b@bsip.com, which it does by sending an INVITE message. The SIP proxy finds the SIP server associated with bsip.com, and forwards the request. bsip.com, in turn, tells b@bsip.com that it has an incoming call.
Example 1: If your proxy wants to route the call manually, it'll ask your client where to send the audio from b@bsip.com. It'll then receive audio from b@bsip.com, and forward that audio to where your client asked it to be sent.
Example 2: If your proxy and bsip.com don't want to route the call, your proxy will ask your client where to send the audio from b@bsip.com. It'll then pass that information to bsip.com, which in turn will pass it to b@bsip.com. b@bsip.com's client will then send audio directly to where your client asked it to be sent.
And herein lies the problem. Android's Gingerbread client gets the request, looks up its own Internet address, and says "Send it to me at this address." It sends the details to the server, and the server tries to send audio to that address.

But that address is, on a mobile network, almost always an intranet address. (Just to add insult to injury, T-Mobile - for reasons I cannot fathom - has chosen to eschew the standard intranet blocks and use some IP addresses allocated to the Department of Defense instead. I'm not making this up.) So when the server sends the address to the other party (or tries to send audio to it itself), the audio gets lost in the Internet.
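
To make this concrete, here's roughly what the relevant part of the SDP body sent by the phone might look like (the addresses and port are invented, but the shape is right):

v=0
o=- 8475 8475 IN IP4 10.170.242.7
s=-
c=IN IP4 10.170.242.7
t=0 0
m=audio 5004 RTP/AVP 0 8 3

The c= line is the killer: it says "send my audio to 10.170.242.7", a private address that means nothing to anyone outside the carrier's network, so the RTP packets never arrive.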

OK, you may well ask, well, why not ignore the IP address the client sends, and just use the address the message came from? Well, because that would break SIP. Imagine, for a moment, if one of the "parties" involved is actually forwarding their calls. Maybe they're doing it directly, perhaps paul@sip.harritronics.com maps to paul@sip.stuart-office.harritronics.com; maybe they're doing it indirectly - a company that offers VoIP phone service might forward a call through a range of servers before it gets to you.

SIP's designers took this into account. The messages involved in setting up the call are forwarded, so that in the end the two clients still end up with a direct connection (which is reliable and has little latency) rather than one where every packet goes through a hundred intermediate servers (which is inefficient, unreliable, and raises the latency.) If an intermediate server were to deliberately ignore the address in the message (which is the client's own) and substitute the address the request came from (which might just be another server), that would break everything.

So it's Gingerbread's fault for doing everything as it should. And it's Asterisk's fault for doing everything it should. And it's SIP's fault for being designed to be efficient. Or, well, it's the Internet's fault for sucking.

Why does the Internet suck? Because there aren't enough IP addresses, and so most people have to use NAT to get online. And NAT is one way: a machine behind NAT can reach hosts on the outside, but hosts on the outside can't initiate contact with a machine behind NAT.

All of this said, is there a solution (beyond us all moving over to IPv6)? Well, SIP is frequently used together with a protocol called STUN. STUN helps a SIP client determine the IP address its traffic is really going out on, and, using some dirty tricks that many routers support, the SIP client can determine a configuration that might work for SIP.

It's dirty, it's not the way to do things, and that's probably why Gingerbread doesn't use STUN.

Asterisk and NAT

Now, at this point you're probably wondering if there's a workaround. After all, Asterisk's developers can't be happy that their system doesn't work with NAT. Well, Asterisk has some NAT workarounds, but they're imperfect. Asterisk is reluctant to assume that an IP address it's told to send data to is wrong, and it generally assumes that the client will make some sort of effort to get around NAT itself, meeting it in the middle. So while Asterisk supports a "nat=" option in sip.conf, both globally and for each individual client, unless the client makes some kind of effort to present proper IP addresses itself (or it works anyway by happy accident), that option alone will not be enough to make a client work from behind NAT.

Workarounds

There are very few workarounds that'll get Gingerbread working with Asterisk over NAT. Here are some things you should consider when trying to solve the problem yourself:

  • Have you considered a different protocol? Asterisk supports IAX, a "dumb" protocol that sacrifices efficiency for easier routability. IAX clients exist for Android; there's a minimal configuration sketch after this list.
  • Likewise, SIP clients that are generally better at routing around NAT, and cooperating with Asterisk, are available, although I'll be honest with you and tell you I haven't had much luck with any of them.
  • Finally, you can use a VPN to give yourself direct access to the network running your Asterisk server. My only comment on this is that VPNs are a little messy in Android: they don't stay up for very long, and you generally have to reauthenticate yourself every time you connect to one. VPNs do work, however, and they can be used to route SIP.
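
To give you a flavour of the first option, here's a minimal iax.conf entry - deliberately similar in shape to the sip.conf entry further down, and with an invented name and password:

[paulscellphone]
type=friend              ; Makes and receives calls
secret=password123       ; Must match the password in the Android IAX client
host=dynamic             ; Accept registration from wherever the phone happens to be
context=internal         ; Your internal dialplan context
disallow=all
allow=gsm                ; GSM is light on bandwidth, a good fit for mobile data

Part of why IAX copes better with NAT is that it carries signalling and audio over a single UDP port (4569), so once the initial connection is made, replies can follow the same path back through the NAT.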

Using your Gingerbread device on your own network with Asterisk

OK, now that we have the question you wanted answered answered - in a way you probably didn't want - let's ask the related question: "What are the settings I need to make it work at all?"

ie:
  • You're OK with it only working when directly connected to your network
  • You understand it'll probably not work when at a coffee shop
  • You understand it'll almost certainly never work when on cellular data
  • Although... that VPN thing will make it work too.
Here are the settings.

From my own sip.conf:

ignoreregexpire=yes          ; Keep using the last known registration even after it expires (see note below)

[paulscellphone]
type=friend                     ; Pretty much all devices that make both incoming and outgoing calls are "friends"
secret=password123       ; This is the password
host=dynamic                 ; We're not going to care what IP address the cellphone is using, but this is an area where you can lock things down
nat=yes                           ; nat=yes virtually never breaks anything, there's no reason not to have it on
directmedia=no               ; Because we're behind NAT we want the server to take care of audio routing
callerid=Paul's Cellphone <102> ; The caller ID we want to show internally 
context=harritronics-internal   ; "harritronics-internal" is the context I use for my office.
disallow=all                    ; Default - reject codec
allow=gsm                      ; Accept the GSM codec
allow=alaw                     ; Accept G.711 aLaw
allow=ulaw                     ; Accept G.711 uLaw
canreinvite=no                 ; Older name for directmedia=no - again confirms we want the PBX handling the audio
qualify=yes                    ; Regularly poll the client so Asterisk notices when it disappears

The only two settings there that are unusual for the Gingerbread client are "qualify=yes" and "ignoreregexpire=yes". They work around the fact that the Android Gingerbread SIP client doesn't renew its registration when you'd expect it to, and so without them Asterisk thinks it's fallen offline after a few minutes.
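
For completeness, the device also needs a matching dialplan entry before anyone can ring it. A minimal sketch for extensions.conf, using the harritronics-internal context from above (the extension number 102 matches the callerid line):

[harritronics-internal]
exten => 102,1,Dial(SIP/paulscellphone,20)    ; Ring the Gingerbread client for up to 20 seconds
exten => 102,n,Hangup()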

Conclusion


So, that's the answer. You can use Gingerbread's SIP client without any problems as long as the client and the Asterisk server both have real IP addresses, or are both connected to the same network. However, if the Gingerbread device is behind NAT and not on the same network as the Asterisk server, then you're unlikely to get anything to work.

I hope that helps someone out there too.