Tag Archives: Linux

Can we please stop saying open source is more secure?

I’ve argued for a long time that the "open source means more eyeballs means more secure" argument was complete bunk. I’m not particularly happy that the GnuTLS bug – which appears to have been there for up to nine years – has shown I was right. As John Moltz puts it:

This SSL bug may have been in the code for nine years. Please, tell me again that trope about how Mac users blindly think their computers are invulnerable to attack. And it’s not like it’s the only one the platform’s had.

The point is not how many eyeballs look through code (and as Watts Martin points out, no one looks through a lot of that old code). It’s the quality of the eyeballs which matters. If a hundred mediocre coders look through a bunch of code, they’ll never see the same issues that a single really good one will see. People aren’t functionally equivalent units of production.
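For what it’s worth, the GnuTLS flaw was exactly the kind of error-handling slip that is easy to read straight past. The snippet below is not the actual GnuTLS code – it’s a minimal sketch, with hypothetical names, of the general class of bug: a helper returns a negative error code, the caller treats any non-zero value as "true", and a failure path quietly reads as success.

```c
#include <stdio.h>

/* Hypothetical helper, loosely modelled on this class of bug: documented
 * as returning 1 when the issuer checks out and 0 when it doesn't, but
 * returning a negative error code if parsing fails. */
static int check_issuer(int parse_ok)
{
    if (!parse_ok)
        return -1;   /* error path: a negative value leaks to the caller */
    /* ...real verification elided... */
    return 0;        /* "not trusted" in this toy example */
}

int main(void)
{
    /* The caller treats any non-zero return as "trusted", so the -1
     * error code slips through as if verification had succeeded. */
    if (check_issuer(0) != 0)
        puts("certificate accepted");   /* reached on the error path */
    else
        puts("certificate rejected");
    return 0;
}
```

A hundred readers can skim that and see nothing wrong; it takes one careful one to notice that the return convention and the caller’s test don’t match.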

As Steve Jobs put it:

"In most businesses, the difference between average and good is at best 2 to 1, right? Like, if you go to New York and you get the best cab driver in the city, you might get there 30% faster than with an average taxicab driver. A 2 to 1 gain would be pretty big.

"The difference between the best worker on computer hard-ware and the average may be 2 to 1, if you’re lucky. With automobiles, maybe 2 to 1. But in software, it’s at least 25 to 1. The difference between the average programmer and a great one is at least that.

"The secret of my success is that we have gone to exceptional lengths to hire the best people in the world. And when you’re in a field where the dynamic range is 25 to 1, boy, does it pay off."

Why the Android ecosystem isn’t like Windows

One of the most often-repeated statements about the competition between iOS and Android in mobile phones is that Android is bound to win because it’s following the same model as Windows did in “winning” the PC market. An operating system, licensed to all-comers, with a range of hardware makers all competing should (the theory goes) drive down costs and increase innovation, just as happened in the PC market.

There’s only one problem: The way the Android ecosystem works is nothing like the Windows market.

In the PC market, Dell didn’t get to build its own customised version of Windows, then make its customers wait to get an update – if it supplied one at all.

When a new version of Windows came out, you didn’t have to rely on Dell to get it – you just bought it, direct from Microsoft. You might have to download some drivers if they weren’t included (for generic PCs, they often were), but even then they often came from the maker of the affected component, not Dell.

In the Android world, if you have (say) an HTC phone you can’t get an update from Google. You have to wait for HTC to provide it – and they have little incentive to create it in a timely manner. Neither do they have the resources: they’re operating on slimmer margins than Google, and don’t have the software chops. They didn’t make Android, they just tinkered with it. And working out what a stock Android update breaks in their tinkered version isn’t always trivial.

What Google has created is in danger of ending up far more like the world of Linux: disparate, fractured “distributions” which are semi-compatible as long as a volunteer geek has taken the time and trouble to port, test and package whatever software you want.

It’s not too late to change this, but Google has to take more responsibility if it wants Android to be a long-term success.


Why free software will remain a niche, in a nutshell

Steven J Vaughan-Nichols on The new Debian Linux: Irrelevant? | ZDNet:

“For example, the default Debian distributions won’t include any proprietary firmware binary files… If, as is likely if you’re using a laptop or a PC with high-end graphics, you find you’re running into hardware problems, the Debian installation program should alert you to the problem. That’s fine as far as it goes, but the installation routine won’t automatically download the missing firmware from the Web. Instead, you’ll need to pause the installation while you fetch the missing in action firmware from either the Debian non-free firmware ftp site or the vendor’s site.

The theory is that by doing this outraged users will demand that hardware vendors will open-source their device drivers, or, at the least, let Linux developers write open-source drivers for proprietary hardware. In practice, it doesn’t work that way.”

As long as the commitment to free software inconveniences users who want to mix and match how much of it they use, free software will remain a niche choice.

Ubuntu changes its desktop from GNOME to Unity – Computerworld Blogs

Mark Shuttleworth, founder of Ubuntu and the company behind it, Canonical, surprised the hundreds of Ubuntu programmers at the Ubuntu Developers Summit when he announced that in the next release of the popular Linux operating system, Ubuntu 11.04, Unity would become the default desktop interface.

Unity is Ubuntu’s new netbook interface. While based on GNOME, it is its own take on what an interface should look and act like. Shuttleworth explained that Canonical was doing this because “users want Unity as their primary desktop.”

What’s interesting is that this parallels what Apple is attempting to do with Mac OS X 10.7 (“Lion”) – move the default desktop metaphor away from the windowed environment that we’ve had for years in favour of something else.

I’m not surprised that this is coming from Canonical, though. If any company has pushed Linux away from being something suitable only for hobbyists and towards being a genuinely user-friendly OS, it’s Shuttleworth and his team.

Daring Fireball’s wishful thinking

I totally understand where John Gruber is coming from with his post on “The OS Opportunity“. The problem is that there’s a whole lot of wishful thinking in there.

First of all, go read John’s post. Rather than try to summarise it here and potentially mischaracterise what he’s saying, I’ll let you read it in full.

Back? Good. Then we’ll begin. John’s first point is that what kept people on DOS was simply file compatibility:

“In those days, before DOS ran most competing platforms out of the market, interoperability and data interchange were at best difficult, and often impossible. Data was stored in incompatible file formats written to incompatible floppy disks by incompatible apps compiled for incompatible CPU architectures. Even later in the ’80s, when networking became common (at least in businesses) the network protocols were proprietary.

That was the world where DOS won out. Get everyone on DOS and you could all open each other’s WordPerfect and 1-2-3 files, if only by sharing them on floppy disks. So DOS gained users, and because it gained users it got developers, and because it gained developers it got more users.”

While this is partially true, it ignores two other factors which always militate against switching platforms – and which continue to do so today.

The first is familiarity. Familiarity, to geeks like me and John, is something we often avoid like the plague. Geeks like us like tinkering with new stuff, learning how to do new things with new tools. We switch because it’s fun (today’s example of this from John: Switching to Camino). Only geeks like us see switching browsers as the kind of thing you can do on a whim – why else does the blatantly inferior IE retain so much market share?

But for someone with years of experience of DOS (or Windows), running WordPerfect and Lotus, switching to an alternate operating system and set of applications was always a big deal. The path of least resistance was always to stick with the platform you’re on, because learning new stuff got in the way. GUIs mitigated this a bit – but didn’t change the situation with applications. For someone who’s been using Excel professionally for 10 years, switching to Numbers is a big, big deal – and that’s despite Numbers being pretty easy to get your head around.

This is even more apparent in the business world, where switching means training hundreds of users in how to use the new tools. There’s a very good reason why corporates tend to be a couple of versions behind the latest, even for products where there’s a clear, delineated upgrade path and a level of familiarity.

The second reason is the oldest one in the book: money. If you’re a seasoned Windows user, switching from Windows to Mac doesn’t just cost you the time to learn new applications (even when there’s a Mac version of a Windows app, they’re usually different enough to cause angst). You have to actually buy the applications, because few (if any) companies give freebies to switchers.

Of course, this second issue isn’t an issue if you’re switching from closed source to open source. And some of it is also negated by being able to use freebie tools on the web. But the more complex your needs, the less likely it is that either can fill them. And the quality of both free online tools and open source stuff is (to be kind) variable, particularly when it comes to the kind of simplicity of interface design that someone switching OSes is going to appreciate. I know – I’ve done it.

“A similar feedback loop is going on with the iPhone today, but it’s far less sticky. The DOS/Windows monopoly grew impregnable because it was a platform where the only way to play along was to join it.”

John’s right that this feedback loop is going on with the iPhone, and that it’s less sticky, but there are two reasons for that. First, the smartphone software market is nascent: it’s in the equivalent of the era of (as John puts it) “the Apple II, the IBM PC and DOS, Commodore, Atari, Acorn. The TI-99/4A.” People forget that DOS wasn’t the only game in town – only the weight of IBM’s brand and the anti-trust rules which allowed Compaq and a slew of others to clone the IBM PC really made it the overall winner. Even the iPhone, which is massive in terms of mindshare, only has 17% of the smartphone market. That’s about as much as the Apple II had at its high point. The smartphone market is still massively fragmented – and it’s a very open question whether that will continue.

John’s bet, I think, is that it will continue to be fragmented – although I don’t think he overtly states this, so please forgive me if I’m reading something in that’s not really there.

I think that assuming this is true says a lot about what you believe is the future of mobile software. If you think that smartphone software is fundamentally one-trick apps, throwaways, stuff which is easy to develop and easy to dump, then jumping from one smartphone to another is always going to be easy.

But if you think that developers are going to create more and more complex apps, and that these are what consumers will increasingly demand and use, then switching becomes more of an issue. The fact that OmniFocus is only on iPhone will almost certainly mean that my next phone will also be an iPhone, despite my constant pain at the fact that the iPhone doesn’t multi-task. If there were no OmniFocus, I would switch. And I suspect I’m increasingly not going to be alone – with 100,000 apps, the chance that the “just one app that I need” is only on iPhone grows.

“If Palm can create WebOS for pocket-sized computers — replete with an email client, calendaring app, web browser, and SDK — why couldn’t these companies make something equivalent for full-size computers? The hard part of what Palm is doing with WebOS is getting acceptable performance out of a cell phone processor.”

Because no one would buy it. It’s not like people haven’t tried. There’s a very good reason why people have chosen Windows netbooks over Linux ones, even when Linux has been cheaper – they want to run the apps they are familiar with. And they don’t generally just want web apps – they want native ones. Rich beats thin, every time.

“These PC makers are lacking in neither financial resources nor opportunity. What they’re lacking is ambition, gumption, and passion for great software and new frontiers. They’re busy dying.”

And this is where John’s wishful thinking really comes to the fore. Who, exactly, is dying? HP, which made $2.2 billion profit in its last quarter? Dell, which made $472 million profit? While those aren’t as good as Apple’s numbers (because SteveJ has played a very smart game), neither looks like a company that’s “busy dying” to me.


Chrome OS is not a threat to Windows « GartenBlog

“Launching a new PC OS is not easy even if your target is a cloud. Targeting netbooks in 2010 isn’t the answer either. As I’ve pointed out, netbooks are laptops with a pivotal axis of price. We’re seeing netbooks with 12″ screens, full sized keyboards and 300gb of storage. Does anyone think that netbooks aren’t going to evolve further? Consumers have overwhelmingly rejected Linux flavored netbooks for Windows capable machines that they could actually accomplish things on, such as run PC applications.”

While I disagree about netbooks being only about the price, Michael is completely correct to point out that customers have generally rejected Linux-based netbooks in favour of Windows ones. Although I think there’s a lot of mileage in improving the Linux experience on netbooks (and Moblin/UNR are already ahead here), given the choice I would expect the majority of people to buy Windows.

Of course, the key question is whether they’ll continue to have that choice, given Microsoft’s transition to Windows 7. But since Chrome OS isn’t due until some time next year, we’ll know the answer to that question before it comes out.

Another thing to note: Chrome (the browser) has had almost no success in gaining market share. And a whole OS is a much more difficult sell to consumers than a browser. If I was a betting man, I wouldn’t bet on Chrome OS getting more than single-digit market share any time soon.


How my computing needs affected switching to Linux

In response to my post about switching to Ubuntu Linux, Charles Arthur tweeted a question asking about my computing needs. It’s a good question, because – obviously – how you use your computer will often determine your platform of choice.
My needs are pretty diverse, but largely I’m a media monkey. Text is the most important medium I generate, which means that OpenOffice is probably my most-used application. But, like most journalists, bloggers and writers, I also need to mess around with images, edit the occasional video and play with sound.
On the Mac, the applications I used for these tasks were:

  • Graphics: an ancient copy of Photoshop.
  • Video: iMovie, although I hated the “upgrade” to 08 with a passion.
  • Sound: Fission, and GarageBand for multitracking stuff.

With Ubuntu, these have been replaced with:

Could I switch to Linux if, say, I were a professional video or audio editor? Probably not. For both of those tasks, specialist applications like Final Cut Pro mean that Linux isn’t really an option (no doubt someone will pop up now to contradict me!). But for what I do, all the tools I need are there. Some of them (like Kino) are actually better than what I had before. And, importantly, they’re free – in every sense of the word.

Switching

You might have gathered from some of my more recent posts that I've switched platform. My main machine is now a Dell laptop, running Ubuntu 8.10.

I've been using Macs since 1986, and have owned one more or less continuously since 1989. Machines that have been through the mill of my day-to-day keyboard bashing include the Mac Plus, LC 475, PowerBook Duo, iBook and MacBook Pro. I've earned a living writing about Macs and attended more Macworld Expos than I can count.

But unless Apple has a change of direction and creates some very different machines, I think that I've probably bought my last one.
