UpFront
- LJ Index, February 2007
- LAMP Gets a J
- They Said It
- The Almost Inevitable Migration for All
- diff -u: What's New in Kernel Development
- A White Box Phone
LJ Index, February 2007
1. Number of Gannett newspapers that will “crowdsource” editorial from readers and bloggers: 91
2. Number of US intelligence agencies that will join to create “Intellipedia”: 16
3. Number of ways “distributive networks” of citizen journalists covered the November 2006 elections: 9
4. Percentage of US businesses with fewer than ten employees that don't have better than dial-up Net access: 60
5. Minimum thousands of certificates Microsoft will distribute per year allowing its customers to use SUSE Linux: 70
6. Minimum millions of dollars Novell will receive from Microsoft for those certificates: 240
7. Rounded thousands of dollars per certificate: 3.4
8. Linux-based hosters among Netcraft's top 50 most reliable hosting providers for November 16, 2006: 23
9. Windows-based hosters among Netcraft's top 50 most reliable hosting providers for November 16, 2006: 11
10. BSD-based hosters among Netcraft's top 50 most reliable hosting providers for November 16, 2006: 7
11. Solaris-based hosters among Netcraft's top 50 most reliable hosting providers for November 16, 2006: 4
12. Number of Linux-based hosters among Netcraft's top four for October 2006: 4
13. Number of Linux-based hosters among Netcraft's top ten for October 2006: 5
14. Number of FreeBSD-based hosters among Netcraft's top ten for October 2006: 2
15. Number of Solaris-based hosters among Netcraft's top ten for October 2006: 3
16. Number of Windows-based hosters among Netcraft's top ten for October 2006: 0
17. Position of IBM's Linux-based Blue Gene/L (at Lawrence Livermore National Laboratory) among Top 500 Supercomputers for 2006: 1
18. Trillions of calculations per second measured on Blue Gene/L: 280.6
19. Other new Linux-based supercomputer clusters at Lawrence Livermore National Laboratory: 4
20. Price in millions for Lawrence Livermore National Laboratory's four new supercomputer clusters: 11
1: Wired News
2: Washington Post
3: NewAssignment.net
4: OECD.org (June 2006 stats)
5–7: SiliconRepublic.com
8–11: Netcraft.com (during the past 24 hours from the date listed)
12–16: Netcraft.com
17, 18: Top500.org and IBM
19, 20: InformationWeek
LAMP Gets a J
You can't add the letter J to LAMP and spell anything sensible. So, some took to calling Linux “stacks” with Java “LAMJ”, “LAMPJ” or “LAMP-J”. But they never seemed legitimate—not so long as Java didn't have an open-source license that passed muster with the rest of the L+ alphabet.
That changed on November 13, 2006 (as we go to press here), when Sun finally announced that it would be releasing Java under the GPL—specifically under version 2, which is the license Linux has used since it came out and has stuck with (at Linus' insistence), even after version 3 was announced last year.
Sun had hinted for some time that it would go with the GPL for Java. Jonathan Schwartz, the company's CEO, suggested as much in a conversation I had with him on stage at the Syndicate conference in December 2005. Now Jim Thompson has an intriguing question: “Is Solaris going to be the original GPLv3 *nix platform?”
In his blog, Jonathan writes, “The GPL is the same license used to manage the evolution of GNU/Linux—in choosing the GPL, we've opened the door to comingling the communities, and the code itself. (And yes, we picked GPL version 2—version 3 isn't available, but we like where the FSF is headed.)”
Whether or not Solaris and the FSF arrive at the same place, Jim has one more question: “What are the chances that Ubuntu would offer a version of its distro with a GPL-ed Solaris kernel underneath?”
Redraw your own conclusions.
Jonathan Schwartz' blog: blogs.sun.com/jonathan
Jonathan Schwartz' blog post: blogs.sun.com/jonathan/entry/fueling_the_network_effect
Jim Thompson: www.smallworks.com
They Said It
Companies come and go but you only get one reputation.
—David Sifry (to Doc Searls on the phone)
Good Web 2.0 sites follow the UNIX design model: do one thing well, and play well with others.
—Evan Prodromou, www.linuxworld.com/news/2006/110906-web20-openid.html
Coding up the simplest thing that could possibly work is really about this: If you can't keep five things in your head at one time and make a decision, try keeping three things in your head. Try keeping just one thing in your head, and see if you can make a decision. Then you can think of the next thing. And amazingly, when you write some of this dumb, straight-ahead code, it often turns out that it was all that was required. It works great. When a second programmer comes back later and reads the code she might say, “The people who wrote this are morons. They just wrote a simple linear search here. This thing's ordered, so they could have done a binary search. They could have used a hash table here. Why are they doing a linear search?” Well, because a linear search worked. And when the other programmer looked at the linear search, she understood it in a minute.
—Ward Cunningham, www.artima.com/intv/simplest3.html
Men occasionally stumble over the truth, but most of them pick themselves up and hurry off as if nothing ever happened.
—Winston Churchill, www.brainyquote.com/quotes/quotes/w/winstonchu135270.html
The Almost Inevitable Migration for All
This Linux Journal issue is about migration, but migration means many things to many people. Few companies, and even fewer homes, use only one operating system.
A lot of Linux advocates use Apple notebooks running OS X, a BSD derivative. Many of these Linux advocates use iPods or BlackBerrys. Many are forced to use Windows systems due to work pressures.
Likewise, a certain company in Redmond, Washington, often talks about the “cost” of migrating to Linux from its own operating system, and uses this “cost” to boost the Total Cost of Ownership (TCO) of our favorite operating system and design strategy of Free Software, without acknowledging the higher value of end users having control over their own destinies.
This same company also ignores the fact that it has forced more “migrations” over the years by making its customers move from DOS to Windows 3.1, Windows 95, Windows 98, Windows NT, Windows 2000, Windows Me and Windows XP, and it is now looking for an even greater migration to the 64-bit Vista. Although we can hope that this company has learned from all of the other 64-bit operating systems and their migration issues (Linux moved to 64-bit in 1995), we can assume that there will be some hiccups along the way. And, this seems to be borne out in the stages and delays that have come along with Vista (probably one of the most “beta-ed” of all operating systems), with cautions to companies from the producers of Vista to “test, test, test”.
Therefore, migration is really integration, unless you are lucky enough to be able to start from scratch with a one-operating-system strategy and maintain it over time. I call this the one-egg, one-basket mentality, and doing it with an operating system you have zero control over is just plain suicide.
So, when I talk to people about migrating to Linux, I work on several levels. I tell them first to do “the easy stuff”.
The first part of moving to a Free Software strategy in your environment is to start getting used to Free Software while you analyze your needs. Note that I do not differentiate between personal use and corporate use when I say “analyze your needs”, because in reality, the procedure is the same for both. Only the size and complexity of the project may differ.
First, start to learn about Free Software. Go to your local bookstore, go to the operating system section, pick out a few books, go to the coffee shop that is attached to the bookstore and look through the selections for the one or two books that can help you get started in Free Software.
Ask around your development group or your local university to see whether there is a Linux User Group near you, and sign up for the mailing list. Do not worry if you are a newbie to the list. Simply read the list, look at the archives, and if someone asks you a question, just look wise and say, “Yes, I will go along with that.”
While you are learning about Free Software, look into the objectives of organizations like the Free Standards Group (FSG) and the Linux Professional Institute (LPI). The FSG talks about the importance of written standards and how to ensure that you are not locked into a specific version of any distribution of Free Software, and the LPI tells you the breadth of information your system administrators will need to know to maintain your Free Software operating systems.
Other organizations to investigate are the Free Software Foundation, the Open Source Initiative and other community organizations, to get a better idea of what Free Software is all about.
Next, list your activities and needs. Do not say, “I need brand-name this and brand-name that.” Instead, list the needs as more generic things. For example, “I need a word processor, but I do not need a presentation package.” Or, “I need a database, but it does not have to be relational.” Or, “I need a database, and it needs to be object-oriented.” If you start listing your needs on a generic basis, you may find you can deal with a much simpler, lighter-weight solution than you originally imagined. You also may find that this solution fits a smaller system with less memory and CPU needs.
And, you also may find that certain parts of your organization or home have different needs than other parts do. You then can make a decision to use a more focused solution for one particular need or a more general solution for all of the other needs.
While you are focusing on determining the needs of the solutions, make sure you evaluate future growth and things like security, availability and scalability.
In addition, as you start to think about future needs, consider hiring a Free Software developer or a system administrator who is familiar with Free Software. All other things being equal, Free Software people are easier to vet for quality (the source code they have contributed, their mailing-list postings and so forth are open for inspection), and they also will help generate community interest that might leverage your company's solutions.
A friend of mine who was a system administrator for a large company was also a Free Software person. Every day he would write Free Software to help him do his job, and every night he would go home, sit beside his spouse on the couch, and while she watched TV, he would write additional code and submit it to the source pool. The next day, he would go in and find that a lot of other people were doing the same thing. His comment to me was: “maddog, it is like speaking into a megaphone....I say so little and I get back so much.”
[Just to show that I am not chauvinistic in this case, I'd like to point out that there are female system administrators who go home and sit beside their husbands and code while their husbands watch TV.]
After you have determined your needs, you now can start to think about alternatives in cost.
It is not without reason that some of the first uses of Free Software were as generalized appliances—machines and systems whose end users neither saw nor cared what operating system was inside, as long as it was stable, scalable, secure and inexpensive. These appliances manifested themselves as DNS servers, firewalls, routers, Web servers and file-and-print servers.
Why should people put a DNS server on the same machine as their highly tuned, high-performance database? Why not split that functionality off to a smaller, less-expensive (and perhaps older) dedicated system? Even if you are a virtualization fan, using a separate partition for your DNS server allows for separation of function, which in turn may allow for greater stability between components.
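To make that concrete, here is a minimal sketch of what such a dedicated resolver might look like, assuming dnsmasq as the DNS software; the interface name and upstream address are placeholders, not recommendations:

# /etc/dnsmasq.conf: a caching resolver for a small, dedicated box
interface=eth0        # listen only on the LAN-facing interface
domain-needed         # never forward bare hostnames upstream
bogus-priv            # keep private-range reverse lookups local
cache-size=1000       # a modest cache is fine on old hardware
server=203.0.113.53   # placeholder upstream resolver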
People setting up Web server farms quickly learned that their customers could not tell the difference between Web pages served by a highly expensive proprietary system and pages served by a much less expensive commodity-hardware solution running Free Software. What they could tell was that the better price/performance allowed for more machine power and greater overall reliability.
Database companies were able to sell a total solution to their customers using a free operating system running on less expensive hardware, and end-user client programs could not tell the difference in data coming over the Internet.
Of course, there are also database solutions that are recognized to be Free Software in and of themselves. MySQL and PostgreSQL are two of them. Being careful to utilize standard interfaces and commands gives you the most freedom when using any of the database products or projects.
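To illustrate the point, a schema written in conservative, standard SQL runs unchanged on either engine; the table and values here are invented for the example:

-- Sticks to standard SQL, so it is portable between MySQL and PostgreSQL
CREATE TABLE customers (
    id      INTEGER PRIMARY KEY,
    name    VARCHAR(100) NOT NULL,
    created TIMESTAMP
);
INSERT INTO customers (id, name, created)
    VALUES (1, 'Example Pty Ltd', CURRENT_TIMESTAMP);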
File-and-print servers could be set up to support Windows, Apple, UNIX and Linux clients, all on the same server at the same time, invisibly to the end users.
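One common way to do this is with Samba, which speaks the SMB protocol that Windows, OS X and UNIX/Linux clients (via smbclient or CIFS mounts) all understand. The following is a minimal sketch, not a production configuration; the workgroup, share and path names are made up for illustration:

# /etc/samba/smb.conf
[global]
   workgroup = OFFICE
   server string = Mixed-client file and print server
   printing = cups
   load printers = yes

[shared]
   path = /srv/shared
   read only = no

[printers]
   path = /var/spool/samba
   printable = yes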
A large health company in Australia was interviewed in 1996 about whether it used Free Software. The interviewer was told “No” by the CIO; the company did “important things” and would not use “hobbyist software” to do them. Unfortunately for that CIO, his staff members had been told to use Windows NT for a file-and-print server, and after it failed a number of times (and they experienced his wrath), they had turned to a Free Software operating system, which by that time they had been running for six months. When asked when they were going to tell their CIO that they had been using “hobbyist software” to do “important things”, they estimated that “another six months of flawless operation” would do the trick.
This explains why reports from analyst companies had a “step function” in Free Software usage in the 1998–2000 era. The analysts stopped surveying CIOs and started talking to system administrators, who actually were implementing the solutions with Free Software. When the system administrators confessed to using Free Software, the charts produced by analysts showed drastic change.
In all of these areas, Free Software is mostly or totally invisible to the real end user (including home end users), and at most it requires training of system administrators to configure and set up the systems.
Another area where Free Software can be used is on the already-existing desktop. Again, if you list the functionality needed for the “job”, instead of brand names, often a more robust solution appears.
An obvious solution is “Web browser” rather than Internet Explorer. Various Web browsers exist in the Free Software community, and each has its own advantages. Some are smaller and easier to embed. Some use less eye candy and are easier to use on the small screen (or leave more real estate for other applications on the larger screen). Some are more portable across various operating systems, so if your people are moving from one system to another (OS X to Windows, for example), you may want to use a browser that works well in many environments.
Other areas of upper-level compatibility are things like word processors. For the most part, I use OpenOffice.org. It works on all the operating systems I have wanted to use in the past ten years: Windows, Linux (including Alpha Linux), Solaris and FreeBSD.
I often questioned why someone would want to use an office system that ran on only one operating system, or even two. I found it difficult to live with needing two operating systems on my desk—one to do my work and one to communicate with my management and sales staff. Today, I need only one system on my desk, because my solutions run across multiple operating systems.
Many Free Software solutions run on multiple operating systems. The GNU compilers, for instance, have been providing programmers with an excellent set of tools for more than 20 years. They have allowed programmers to concentrate on the basic algorithm without having to worry about the incompatibilities of syntax and semantics that can occur across compilers written by different organizations and for different hardware architectures. It is true that some commercial companies do the same with their own excellent compilers, and that, too, gives end users choice.
So far, I have been talking about everything but the cold-turkey movement from a proprietary solution to a Free Software solution. Now I am going to say something that will (I am sure) surprise a lot of Free Software people.
If you have a solution that is working fine for you, is incredibly stable, has no bugs, is reasonable in price, is from a solid company that is not looking to change its products radically (thus causing migration problems of its own), comes from responsive vendors and all of your end users (including you) are happy with it, please do not change it. This is what I call the “All Pain, No Gain” migration. Even in the best cases, everyone will ask you, “Why did we do this?” In the worst cases, the migration will fail, you will be the goat, and your choice of Free Software will be held to blame.
Instead, look for new projects, or large projects using expensive hardware or software, or projects for which the software is not fulfilling their needs. This is where Free Software tends to be the most flexible and cost-effective solution. For new projects, training costs are usually a wash: you will pay to train people on either proprietary software or Free Software, so training is not the differentiator it would be if you were re-training existing users.
Down through the ages in computing we have moved from giant, single-program machines to giant, multitasking machines to smaller, single-tasking mini-computers to multitasking mini-computers to smaller, single-tasking micro-computers to multitasking micro-computers and so forth, while still maintaining a lot of the older, “larger” computers. We also have moved from single mainframes to time-sharing systems to distributed systems and back again. In my view, what people really want is a time-sharing system of unlimited size and power, with very secure virtual firewalls, which can be available 25x8 (not just 24x7), and where backup and recovery are done automatically and with someone else's money.
With the advent of the World Wide Web, a lot of applications now are going to be browser-based, with the applications and data (for the most part) residing on a back-office server. This promises an ease of system administration and a level of security that are hard to supply with the pure distributed model.
Fortunately, the Linux Terminal Server Project (LTSP) solves a lot of the hard logistics of setting up a “thick and thin client” system. Although the low cost of a modern-day desktop system reduces the need to squeeze every cent out of old hardware by redeploying it as thin clients, it is still true that fat clients continue to expand their resource needs while thin clients grow much more slowly and are stingier with desktop resources. It is also true that although hardware and networking have been increasing in capability over the years while prices have been dropping, the number of well-trained system administrators has not kept up, so a better model for end-user software configuration is needed.
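For a flavor of what LTSP automates, here is a hedged sketch of the DHCP side of a thin-client boot, in ISC dhcpd syntax; the addresses and paths are illustrative only, and a real LTSP installation ships its own tested templates:

subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.100 192.168.0.200;   # address pool for thin clients
    next-server 192.168.0.1;             # TFTP server holding boot images
    filename "/ltsp/i386/pxelinux.0";    # PXE boot loader
    option root-path "/opt/ltsp/i386";   # NFS root the clients mount
}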
You can go several places to look for applications that meet your needs.
First, determine whether any of the applications you currently use and appreciate have gone “Free Software”. Many proprietary products now work on a free operating system or have developed a Free Software strategy, opening up their code and licensing while increasing their market share and support revenue. A very good example of this is Project.net (www.project.net), whose current owner, ICS, determined that making the project freely available and opening up the source code was the best way of doing business.
Other software may be found among the projects listed in repositories such as www.sourceforge.net and www.freshmeat.net. These repositories not only carry the code and installation procedures, but also help you gauge the community's receptiveness to the software.
Finally, custom applications are not as expensive or difficult to build today as they were a few years ago. Using modern-day middleware, libraries of Free Software code and Web-based applications, you may find that developing an application tailored to your needs is a small investment compared to using an off-the-shelf application that requires you to change the way you do business to fit the application.
In some cases, stubborn applications keep people from moving to the environment they desire. Some of these applications are needed infrequently and may be handled by a dual-booting system or by running a product, such as VMware (www.vmware.com) or Win4Lin (www.win4lin.com), that lets you run the applications simultaneously, albeit with a slight performance hit.
Another great option is CodeWeavers' CrossOver products (www.codeweavers.com), now also available for Intel OS X systems. CrossOver is based on the freely available Wine project, and CodeWeavers has been helpful in extending and expanding Wine's capabilities for many years.
Although I have heard of migrations that have gone cold turkey (turning off one system while turning the other on) successfully, I have heard of many more that failed. Nothing takes the place of a good transition strategy when going from an old system to a new one. Parallel running of the two systems is best, along with testing to see whether archived data is still available on the new systems.
Another good trick is to get the most enthusiastic office workers involved early and make sure that they have a good experience as they migrate over to the new tools. Every office has people like this. They buy the latest and greatest gadgets and are openly receptive to new things. Once they are enthusiastic about the new system, they often can help move other people over.
Do not be afraid to think outside of the box. I met a man with very old legacy code that was working well on very old hardware. He was concerned that the very old hardware was becoming more and more difficult to replace, and wanted to port it to Linux. I told him that although porting was a possibility, I would just run a hardware emulator for that hardware on top of Linux and use that to support his applications and customers “forever”. He looked at me strangely, smiled, and walked away.
Likewise, there are Free Software DOS emulators that will run DOS applications “forever”, and modern-day CPU speeds sometimes make those applications run blazingly fast.
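For example, assuming the Free Software DOSBox emulator is installed and the legacy application lives in /opt/legacy (the path and program name are invented for illustration), one command mounts the directory as a DOS drive and launches the program:

dosbox -c "mount c /opt/legacy" -c "c:" -c "payroll.exe"

The -c options are simply DOS commands run at startup, so the whole thing can go in a desktop launcher and the user never sees the emulator's innards.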
Use portable languages like Perl, Python and others to make your applications run on as many systems as possible.
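As a small sketch of that portability, the following Python script runs unchanged on Linux, Windows or OS X, because the standard library hides the platform differences (the directory names are invented for the example):

#!/usr/bin/env python
# The standard library abstracts the platform details away.
import os
import platform

def report():
    home = os.path.expanduser("~")               # resolved correctly on every OS
    logdir = os.path.join(home, "app", "logs")   # portable path separator
    print("Running on: %s" % platform.system())
    print("Logs would go in: %s" % logdir)

if __name__ == "__main__":
    report()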
Finally, when your system works well, evangelize what you have done. Write a paper about it, talk to your local Linux User Group, give a talk at LinuxWorld or write an article for Linux Journal. After all, probably a hundred or more other people are exactly where you are today and would like to have more freedom in their software.
diff -u: What's New in Kernel Development
“I'm not a huge fan of the LGPL, especially with the recent issues of GPLv3. The reason? The LGPL is expressly designed to be compatible with the GPL, but it's designed to be compatible with any version (and you can't limit it, the way you can the real GPL). So you can take LGPL 2.1 code, and relicense it under GPLv3, and make changes to it, and those changes won't be available to a GPLv2 project.”—Linus Torvalds
The sysctl call, allowing users to configure kernel parameters at runtime, is likely to go away. This goes against the standard doctrine of never breaking user space, but the kernel folks may get away with it this time, because it looks as though there are no user-space programs that actually use sysctl. Apparently, people do their kernel configuration operations in other ways. If you or someone you love depends on sysctl, you might consider raising the issue on the linux-kernel mailing list while there's still time. Linus Torvalds and Andrew Morton have both expressed the opinion that taking sysctl out would be the right thing to do—Linus because no one uses it and Andrew because it would be a shame to leave a big wad of such useless code in the kernel permanently, if a viable alternative existed. But, in case it really would just break too much stuff, Albert Cahalan has volunteered to be the official sysctl maintainer if one is needed.
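For clarity, the “other ways” are the /proc/sys virtual filesystem and the sysctl(8) utility, both of which read and write /proc/sys rather than invoking the sysctl(2) system call that is on the chopping block. For example, either of these enables IPv4 packet forwarding when run as root:

echo 1 > /proc/sys/net/ipv4/ip_forward
sysctl -w net.ipv4.ip_forward=1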
The Multimedia Card subsystem is now the Multimedia Card and Secure Digital subsystem, and Pierre Ossman has submitted a patch making himself the new maintainer. Russell King, the previous maintainer, had stepped down and marked the subsystem “orphaned”. Meanwhile, Jiri Slaby has added new maintainer entries for the Moxa SmartIO/IndustIO Serial Card driver and the Multitech Multiport Card driver, in both cases naming himself as the official maintainer.
An anonymous kernel tester has reported some benchmarks showing that ext4 is about 20% faster than either ext3 or Reiser4. Although this was a useful (and perhaps gratifying) result, Theodore Ts'o pointed out that what is really needed is an automated testing infrastructure, so that each version of each filesystem can be compared, and the particular results correlated to the specific patch that either sped things up or slowed things down. And, various other folks suggested incorporating tests for other filesystems as well. The original poster agreed that this would be great, but he or she (and Ted) also pointed out that the amount of work required to create such an infrastructure would be massive. It does not look as though an automated filesystem benchmark is coming any time soon, though you never know.
It's apparently wiki season in kernel land. Valerie Henson has created two wikis, one for filesystems at linuxfs.pbwiki.com and the other for huge memory pages at linux-mm.org/HugePages. As one might expect, the filesystem wiki is a bit more active than the huge pages wiki. To go along with these collaborative projects, Valerie has also started up two IRC channels on irc.oftc.net: #linuxfs and #hugepages. Meanwhile, Darren Hart and Theodore Ts'o have started up a wiki for real-time support at rt.wiki.kernel.org, and in fact, the wiki.kernel.org site may offer wiki hosting services to any legitimate kernel project. Just ask the site administrators to set it up for you! At the same time, as Ted points out, you should make sure there is at least a person or two to act as editor and maintainer, or your wiki is likely to become stale. Nothing like stale wiki to clear the sinuses, I say!
A White Box Phone
As we know too well, embedding Linux in a device doesn't make it “open”. And although there are open Linux-based embedded devices, telephones fitting that description have been rare.
The OpenMoko phone aims to change that (openmoko.com). FIC (First International Computer), a Taiwan-based manufacturer whose official ambition is “to bring the customer benefits of open source software to the $300 billion global mobile market”, launched OpenMoko to generally positive reviews. From my own contact list, these ranged from Gordon Cook's (gordoncook.net) “This is AMAZING STUFF” (in fact, I heard about it first from Gordon) to Bob Frankston's (frankston.com) “No Wi-Fi? Huh?” and “As I've discovered with my current programmable phone, having Wi-Fi and GPS can make a big difference.”
But the quotage that matters most comes from Harald Welte, who wrote this in his blog at gnumonks.org (gnumonks.org/~laforge/weblog/2006/11/08/#20061108-my_no_longer_secret_project):
In this project I'm responsible for the system-level software design and implementation. This means: kernel, drivers, GSM communication infrastructure, etc.
So why is this project so exciting? Because it's [yet another] Linux phone? No. It's because this is the first time (to the best of my knowledge), that a vendor is:
involving (hiring) prominent community members to do the actual architecture design and implementation;
planning to completely open up their Linux distribution for any contributed development, e.g. use a package manager that can access arbitrary package feeds;
trying very hard to make sure almost everything will be Free Software, from drivers up to the UI applications;
actively providing documentation and interfaces for third-party development on any level of the system, from debug interface, boot loader, kernel, middleware through the UI applications;
using X11 to allow users to run any existing X11 Linux application (within resource constraints).
So basically, from a Free Software community level, this is exactly the kind of phone you want to get involved with, and play with. Yes, it's not the perfect phone. It runs a proprietary GSM stack on a separate processor. There are some minor, self-contained proprietary bits on the back end side in userspace. But well, it's probably the best you can do as a first shot of a new generation of devices, and without too much existing market power to put on upstream vendors.
Beats anything you'll ever read in a press release.
I'll give the last word to Brad Fitzpatrick (brad.livejournal.com), father of LiveJournal, OpenID, memcached and other fine hacks. After news of OpenMoko hit the streets, Brad wrote, “On sale Jan 2007...I'm totally getting one”.