Monday, March 17, 2008

Software bugtraps

Software that makes software better

From The Economist print edition (Mar 6th 2008)

Computing: Programmers are using a variety of software tools to help them produce better code and keep bugs at bay

MODERN civilisation depends on software, so it needs to be as reliable as possible. But software is produced by humans, who are fallible. The programs they create are prone to crashes, bugs and security holes. What can be done? A good way to make more reliable software may, oddly enough, be to use even more software. Programmers are increasingly calling upon bug-squashing tools for help at various stages in the software-development process. Some of these tools help programmers to work together more effectively. Other tools scrutinise the resulting software, or its underlying source code, for problems. And still others help project managers put numbers on code quality, programmer productivity, and the cost-effectiveness of fixing particular bugs, so they can decide which bugs ought to be fixed first.

Things are improving, insists Daniel Sabbah, who started programming over 30 years ago and is now general manager of IBM's Rational Software unit, which makes software-development tools. Such tools “really have gotten much better over the years,” he says, though their impact is difficult for ordinary users to see, in contrast with the far more obvious improvements in hardware performance, network speeds and storage capacity. Unlike whizzy new hardware, which is quickly adopted by manufacturers, new programming tools and techniques can take several years to percolate through the software industry, he says.

Not everyone agrees with Dr Sabbah's rosy view. Even if the tools are better, the number of bugs in newly written code has remained constant at around five per “function point”, or feature, says Capers Jones of Software Productivity Research, a specialist consultancy. Worse, says Mr Jones, only about 85% of these bugs are eliminated before software is put into use. Dr Sabbah responds that such numbers do not show whether software is effective—bug-free code that does not do something useful, or does it two years too late, is not much help to a business, he says. And broader metrics suggest that things are, indeed, improving: the Standish Group, a consultancy that produces a biennial “CHAOS Report” on the state of software development, found that 35% of software projects started in 2006 were completed on time, on budget and did what they were supposed to, up from 16% in 1994; the proportion that failed outright fell from 31% to 19%.

Software as a social science

According to Jim Johnson, the chairman of the Standish Group, most of this improvement is the result of better project management, including the use of new tools and techniques that help programmers work together. Indeed, there are those who argue that computer science is really a social science. Jonathan Pincus, an expert on software reliability who recently left Microsoft Research to become an independent consultant, has observed that “the key issues [in programming] relate to people and the way they communicate and organise themselves.” Grady Booch of IBM Rational once tracked 50 developers for 24 hours, and found that only 30% of their time was spent coding—the rest was spent talking to other members of their team.

Programmers generally work together using a software platform called an “integrated development environment”, which keeps track of different pieces of code and assembles them when required into a complete program, or “build”, for testing. But many firms no longer have all their programmers and testers in the same place, or even in the same country. So it has become necessary to add features to programmer tools to allow coders to communicate with each other, request design changes, report problems and so on.

This field was pioneered by CollabNet, founded in 1999, whose collaborative development platform, built around the Subversion version-control system, now has more than 2.5m users. The platform integrates with existing programming tools, including Eclipse, the development environment originally created by IBM, and offers project-management features, discussion threads and support for quality-assurance engineers.

In 2007 IBM announced a similar effort called Jazz, which (as the name implies) is intended to foster creativity and collaboration among programmers. The idea is to provide a standardised way for existing programming tools to handle change requests, project updates and scheduling details for a particular project, not just code. As well as improving communication between far-flung programmers, centralising this information could also allow managers to track a project's progress more precisely.

High-level improvements in project management, and in the distribution and testing of new versions of a particular piece of software, are a useful, top-down way to improve the quality of software. But just as important are the low-level tools that scrutinise the actual code to look for bugs, conflicts, security holes and other potential problems. Such tools, which are now proliferating, can be divided into two main types: dynamic-analysis tools, which examine software as it runs to work out where breakdowns happen, and static-analysis tools, which look at code without actually running it to look for fundamental flaws.

Analyse this

To use a mechanical analogy, dynamic analysis is like watching a machine in operation, whereas static analysis is like poring over its blueprints. “Dynamic-analysis tools say, ‘Well, you’ve got a problem on something over here,’” says David Grantges, a technical manager of application security at Verizon Business, a unit of the American telecoms giant. “Static-analysis tools say, ‘You’ve got a problem on line 123.’” The two types are complementary, and Verizon, like most firms, uses both, he says.

Static analysis, being more difficult, is the younger of the two disciplines. In recent years several start-ups, including Klocwork, Fortify and Ounce Labs, have entered the field. Static analysis is best done as close as possible to the programmer, because the earlier a bug can be identified, the cheaper it is to fix. (An industry rule of thumb is that a bug which costs $1 to fix on the programmer's desktop costs $100 to fix once it is incorporated into a build, and thousands of dollars if it is identified only after the software has been deployed in the field.)

In February Klocwork released Insight, a new version of its static-analysis tool that can run on a programmer's desktop every time code is submitted for a build. The advantage of this approach, says Gwyn Fisher, Klocwork's technology chief, is that programmers do not need to wait for a build in order to test their code. And when a whole team uses Insight, it can spot potential conflicts between code written by different programmers. Brian Chess, Fortify's chief scientist, says such tools can also spot mistakes that programmers are known to make routinely, such as allowing “buffer overflows” and “SQL injection”, both of which can open up security holes.
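
To see what such a tool looks for, consider SQL injection, one of the mistakes Mr Chess mentions. The sketch below is hypothetical Java code using the standard JDBC database interface, not an example from any vendor mentioned here. The first method splices user input directly into a query, the pattern a static analyser flags by line number; the second shows the usual fix, a parameterised query.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class LoginDao {
        // Vulnerable: user input is concatenated straight into the SQL text.
        // An input such as  x' OR '1'='1  turns the WHERE clause into one
        // that matches every row, the classic injection hole a
        // static-analysis tool reports, complete with the offending line.
        public ResultSet findUserUnsafe(Connection conn, String name)
                throws SQLException {
            Statement stmt = conn.createStatement();
            return stmt.executeQuery(
                    "SELECT * FROM users WHERE name = '" + name + "'");
        }

        // The standard fix: a parameterised query keeps data separate from
        // SQL syntax, so the input can never be reinterpreted as code.
        public ResultSet findUserSafe(Connection conn, String name)
                throws SQLException {
            PreparedStatement ps = conn.prepareStatement(
                    "SELECT * FROM users WHERE name = ?");
            ps.setString(1, name);
            return ps.executeQuery();
        }
    }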

Dynamic analysis involves running a chunk of code with a variety of test inputs to see if it performs as expected, and to make sure it does not do anything undesirable such as crashing, going into an endless loop or demanding more and more memory as it runs. This process can be automated to a certain extent, but guidance from the programmer or tester, in the form of test scripts, is usually required.
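
Such a test script is often just a suite of unit tests. Below is a minimal, hypothetical example using the JUnit 4 framework; the Parser class exists only for illustration. One test checks ordinary behaviour, one an edge case, and one uses a timeout to catch the endless-loop failure mode described above.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class ParserTest {

        // Hypothetical code under test: maps malformed or empty input to a
        // default value instead of crashing.
        static class Parser {
            static int parse(String s) {
                if (s == null || s.isEmpty()) {
                    return 0;
                }
                try {
                    return Integer.parseInt(s);
                } catch (NumberFormatException e) {
                    return 0;
                }
            }
        }

        @Test
        public void parsesPlainInteger() {
            assertEquals(42, Parser.parse("42"));  // expected behaviour
        }

        @Test
        public void emptyInputYieldsDefault() {
            assertEquals(0, Parser.parse(""));     // edge case, must not crash
        }

        // Fails the test if the call has not returned within one second,
        // catching the endless-loop failure mode.
        @Test(timeout = 1000)
        public void terminatesOnPathologicalInput() {
            Parser.parse("999999999999999999999999");
        }
    }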

Both static and dynamic analysis have been around for a while, but encouraging more programmers to use them is not always easy. It is especially hard to spread these tools beyond large companies, which have the staff to support them. Veracode, a firm based in Burlington, Massachusetts, thinks the answer is to offer code testing as an online service. Chris Wysopal, the firm's co-founder and technology chief, says that his company's tool will broaden the market for software testing by giving smaller companies “a blood test” to check their code. At the moment, he says, “we're where network security was in 1995, when some people didn't even have a firewall.”

A question of priorities

Another approach is to integrate testing tools more closely with existing programming tools. If testing tools do not fit neatly into a company's existing way of doing things, developers will not use them, notes Alberto Savoia at Agitar Software, the maker of a tool called Agitator which automatically produces test scripts for use in dynamic analysis. Seth Hallem, the co-founder of Coverity, which makes a static-analysis tool, expects greater integration between programming and testing tools in future.

But analysis tools that spot potential problems, useful though they are, can in turn cause new problems. John Viega of McAfee, a big security-software firm, used to run a start-up that sold a static-analysis tool called CodeAssure (which is now owned by Fortify). He says he did not realise how daunting such tools were to use until he tried selling them. “People would use our tool and find out that they had many reliability problems and many potential security problems, but the cost of researching and fixing them all was astronomical, so they would give up,” he says.

Not all bugs are worth fixing—but how can programmers decide which ones to concentrate on? Jack Danahy, technology chief of Ounce Labs, says the expertise required is the software equivalent of interpreting an MRI image. But his company is doing its best to automate the process, with a static-analysis tool that spots problems and estimates the risk associated with each one.

A similar risk-analysis approach is also being applied to software on a larger scale, through efforts to develop metrics for code quality and programmer productivity. Atlassian, an Australian developer of software tools, last year released Bamboo, which tracks trends in code over time, such as the number of bugs found. Veracode's analysis service has a code-scoring tool that gives grades to code. And Mr Savoia has developed a system to assess the quality of software written in Java, which he has jokingly named “change, risk, analysis and predictions”, or CRAP. His software plug-in, which determines the “crappiness” of a particular piece of code, has been downloaded by hundreds of programmers. Given that programmers are paid a total of half a trillion dollars a year, Mr Savoia estimates, the industry needs better tools to assess the quality of their work.
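
The arithmetic behind the metric is straightforward. The formula, as Mr Savoia has published it, combines a method's cyclomatic complexity (comp) with its test-coverage ratio (cov, from 0 to 1): CRAP = comp^2 * (1 - cov)^3 + comp, so complex code with no tests scores worst. A minimal sketch of the calculation, with illustrative numbers:

    public class CrapScore {

        // CRAP(m) = comp(m)^2 * (1 - cov(m))^3 + comp(m), where comp is a
        // method's cyclomatic complexity and cov its test coverage from
        // 0.0 to 1.0. High complexity with low coverage scores worst.
        static double crap(int complexity, double coverage) {
            return Math.pow(complexity, 2) * Math.pow(1.0 - coverage, 3)
                    + complexity;
        }

        public static void main(String[] args) {
            System.out.println(crap(3, 0.9));   // simple, well tested: ~3.0
            System.out.println(crap(15, 0.0));  // complex, untested: 240.0
        }
    }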

To this end, America's National Institute of Standards and Technology (NIST) is doing its best to create the software equivalent of the “generally accepted accounting principles” used in the financial world. Its Software Assurance Metrics and Tool Evaluation (SAMATE) project is intended to offer companies a way to quantify how much better their code will be if they adopt particular tools and programming languages.

Paul Black of NIST says its first report, on static-analysis tools, should be available in April. The purpose of the research is “to get away from the feeling that ‘all software has bugs’ and say ‘it will cost this much time and this much money to make software of this kind of quality’,” he says. Rather than trying to stamp out bugs altogether, in short, the future of “software that makes software better” may lie in working out where the pesticide can be most cost-effectively applied.

Saturday, March 15, 2008

Linux & Open Systems

Does open-source development model work for business users?
Carefully choosing between community-supported and enterprise versions is key, they say

Todd R. Weiss 06/03/2008 08:34:12

When using open-source software, businesses usually choose between a free, community-supported version of an application or a fee-based enterprise version that includes support, service, updates and other features.

Business users also have to decide for what purpose they want to use open-source software and how critical it will be to their business processes. Free, community-supported versions are fine for testing or noncritical needs, but when the work is mission-critical, users say they are more likely to pay for enterprise versions of open-source applications.

Jeremy Cole, a co-founder of MySQL consulting vendor Proven Scaling, said that sometimes this split development model can cause unintended problems. One issue, he said, is that businesses, which need to rely on stable, mature code, aren't always getting what they pay for.

At MySQL, Cole said, "they release the enterprise version more often than the community version." What that means is that "while enterprise users are getting fixes faster, they're essentially running untested code," he said.

Others agree that such concerns are valid, and they are growing in importance as more large companies buy open-source vendors, giving open-source software a boost in enterprise systems. Sun Microsystems' acquisition of open-source database vendor MySQL AB is the latest evidence of this trend.

Bill Parducci, CTO of Think Passenger, which builds online communities for companies and their customers, said open-source code is important to his three-year-old start-up because it lowers technology costs and allows customization of key source code.

"The concept of an organization pushing out the code faster so their clients can get the code faster, I don't agree with that," Parducci said. "Customers can't keep up." Because of such pressures, Linux vendor Red Hat doubled the length of its new version cycles several years ago to better meet the needs of its customers, he said. "Software is more stable and supportable when [new versions are] less frequent. There's no value in software that doesn't work predictably."

Parducci said he is seeing more examples of software that takes a "hybrid approach" between open source, closed source, functionality, risk and support. "At the end of the day, you need to solve a problem," he said. "I think we're finally over the day of people running up the hill with a flag of open source or a flag of anti-open source."

Think Passenger uses a host of open-source applications, including Red Hat Enterprise Linux, CentOS Linux, Iona Technologies' Fuse Message Broker, Jetty Web server and Terracotta's network-attached memory applications.

Parducci said he uses the paid enterprise versions of most applications so he can get expert support and the most stable code. With Iona, "they take it, they stabilize the releases, they package it together and put support around it," he said. "It's the same basic code as the community version with support and stabilization. It's working out well for us."

Parducci said he looks at whether a prospective open-source vendor is trying to upsell to a proprietary version of its product or whether a proprietary version is needed to maintain full functionality with other products. "To me, that really becomes a red flag," he said. "Are they supporting the open-source stuff just to sell me up to the other side?" Working with most open-source vendors has been satisfactory, he said, but there is room for improvement, particularly among the smaller vendors. Such vendors need to ensure "timely feedback and improved communities" so that business users can get the help they require, he said. "I think it's still going through the learning and growth phase. People are still figuring out how to staff it."

Enterprise versions worth the cost

Justin King, a systems administrator for the Human Neuroimaging Laboratory at Baylor College of Medicine in the US, said he's found that community versions of open-source applications are adequate for his needs, but that buying enterprise versions saves a lot of time with many products because they are more developed and include useful administrative features. King said he uses open-source applications from Red Hat, Web infrastructure management vendor Hyperic Inc. and others. "In the enterprise versions, in most cases, the main thing is stability," he said. "You can live without having certain [new and improved] features. The absolutely most critical thing is uptime and stability."

"The best model to look at is Red Hat," King said. "They've got [the community supported] Fedora [version of Linux] and it changes frequently. Then there's Red Hat Enterprise Linux that's stable and supported [for enterprise users]. That's the correct model of enterprise open source as far as I'm concerned."

For mission-critical business users, "nobody in their right mind is going to rely on something" that doesn't have adequate support and stable releases, King continued. "They'll go with the supported version if it exists to run their business. At the end of the day, if something's broken and nobody on-site can figure it out ... it's cheaper to call the support guy and choke him until he figures it out."

Gautam Guliani, the executive director of software architecture at Kaplan Test Prep and Admissions, a university entrance exam testing company, said he prefers to buy enterprise versions of all open-source applications used in mission-critical roles. Using community-based applications in pilot projects and noncritical business functions is acceptable, he said, but if his company wants to put an application into production, it will pay for the enterprise version to get the support.

More road map direction

Kaplan uses a small assortment of open-source applications, including JBoss middleware, Red Hat Linux and Alfresco Web content management software. Getting adequate and timely support hasn't been a problem in general, Guliani said, but getting future road map direction from open-source vendors can be tougher than with proprietary vendors. "The development road map is sometimes not as thought out as we'd like with open-source companies," he said. "Some do it well, but for most there is room for improvement."

What open-source vendors offer to his business, he said, is lower costs for support, deepening maturity, code flexibility, "a much deeper level of transparency into the software products," and a higher rate of innovation.

"The releases tend to come more frequently" with open-source vendors, he said. "If they come too often, it can be a problem. At least if they're coming often, we can choose not to upgrade to a new release. Most open-source vendors have realized that if they bring out a new version, that they shouldn't drop support for the old one too fast."

What's happened, say analysts, is that open-source software has quietly become an integral part of corporate IT, whether through community-based or enterprise versions.

Raven Zachary, an analyst at The 451 Group, said companies don't even look at software as being open source or proprietary, but analyze it based on what will work best for them.

"I don't run into enterprises very often that would be willing to give up functionality," he said. "Enterprises are going to purchase technology that will allow them to do their jobs. Sometimes that means proprietary. Sometimes that means open source. Generally, large enterprises are going to make decisions about what is right for them regardless of whether it's open source or proprietary, based on value."

Donald DePalma, an analyst at Common Sense Advisory, said business users with large data centers are typically using enterprise versions of open-source applications because of their mission-critical requirements. "Individual rogue business units are using community-supported versions," he said.

"There are levels of open source use," DePalma said. "MySQL is so widespread in use that it seems almost Oracle-like in its commercial viability, so users don't even see a distinction. I think we'll see more of this moving forward."

Poor nations gain more choices in computing

There is debate on whether developing nations should invest in computers over classrooms and textbooks
Dan Nystedt (IDG News Service) 13/03/2008 09:30:51

The One Laptop Per Child Foundation (OLPC) has highlighted the need to provide computing to kids in the developing world, and the headlines surrounding the group's US$100 laptop PC have attracted a growing number of companies and organizations trying to figure out how the digital world can help those most in need.

The rush to climb aboard this trend has gotten downright nasty in some cases. While there is no doubt altruism plays a role in decisions to help out, there are other reasons, such as profit and market share.

Some OLPC leaders, for example, have been accused of academic egotism, as well as using their project to expand the use of the Linux OS. Microsoft's donation of time, software and cash to the cause has been characterized as a way to counter Linux and spread Windows. Intel has been accused of building a rival laptop, the ClassMate PC, as a way to ensure its microprocessors are at the heart of computers for kids in poor countries. The OLPC laptop uses chips from rival AMD.

Reading the hubbub surrounding the issue almost makes one forget the main purpose: the kids.

Some groups also take issue with the educational philosophy behind OLPC, and there is even disagreement on whether developing nations should invest in computers over, say, classrooms and textbooks. Some nations are too poor to buy computers for their schools, much less lay new power lines and Internet connections to actually make them useful.

For example, Fair International, an aid organization from Norway that is also trying to bridge the digital divide with computer labs in schools, has accused OLPC of "misleading poor countries into taking a high investment risk for a new type of technology, the success of which is very uncertain. With uncertain definitions of target groups and heavy international marketing, OLPC appears to be trying to create a need which has not existed before and which does not exist at all in the world's richer regions."

The group upgrades second-hand computers with the latest software to equip computer labs for schools in countries including Eritrea, Gambia, Kenya, Romania and Tanzania.

Other aid organizations focus more on building classrooms and filling them with books.

Room to Read, a nonprofit from the US, focuses on some of the poorest areas in the world, including rural Nepal and Vietnam.

Founded by a former Microsoft executive, Room to Read works with local communities to build libraries for as little as US$4,000, and schools with several classrooms for around US$20,000 to US$35,000. The group also builds computer labs at a cost of about US$30,000 in some schools, but uneven development within countries means only some areas are suitable for such labs, like big cities with reliable power grids.

Everything the group does is funded by donations.

That an increasing number of companies and organizations are working with developing countries on computing issues is good news for people in poor areas, especially where they have little or no access to electronic devices or the Internet.

But most of these giving efforts are young and must continually refine and improve their approaches, and in some cases their motivations. They are working in extreme conditions: deserts, jungles and mountains, as well as villages so poor they can barely afford classrooms, much less electricity or Internet connections.

There are 4 billion people in the world living in poverty today, according to a recent report by the World Resources Institute, a US think tank. Their income levels range from US$3.35 a day in Brazil to US$2.11 in China and US$1.56 in India, the report said.

School systems in such nations have as little as US$20 per year per student to spend. Other issues, such as a lack of school buildings in some communities and difficulty in finding qualified teachers, are an even bigger headache.

The central African country of Rwanda, for example, is promoting computer use in schools in order to create a more technology-oriented economy, and the nation's technology minister says computers can cut certain costs if they last a long time.

"The cost of a computer is lower than five to six textbooks over five to six years," said Romain Murenzi, Rwanda's Minister of Science, Technology and Scientific Research. Textbooks don't last so long in his country because of the dampness in many areas, and wear and tear, he said.

And it's important to have Rwandan kids using the computers. "If you give a child a laptop, you have put that kid on par with a kid in Europe," he said.

His nation bought 10,000 OLPC laptops last year, and plans to purchase 20,000 more this year, despite the huge issue of cost to his country. It's important to have kids start to learn how to use digital devices and the Internet as soon as possible to help build an information technology economy in the country, he said.

"You need access to the devices," he said. "You cannot learn to ride a bicycle without the bicycle."

But since the cost per child is such an issue, he said the company that offers the lowest cost will win in Rwanda. And there are alternatives to OLPC, even beyond rival low-cost laptops such as the ClassMate PC and Asustek's Eee PC.

NComputing, a for-profit company, is taking a different approach that could end up costing far less than such laptops. The company uses virtualization to essentially turn a single PC into a mainframe serving between seven and 30 workstations.

PCs are so powerful these days that they can serve far more than one person with little impact on user experience, unless the person is using the PC for gaming, scientific calculations or multimedia, said Steve Dukker, chairman and CEO of NComputing, in a phone interview.

"You can call us the unexpected benefit of virtualization," he said.

The NComputing system cuts costs tremendously: in a seven-user configuration it works out to US$70 per child, and it runs at just 1 watt per user.

Those are the kinds of statistics, per-user cost and wattage, that make a difference.

Hitting a cost of US$100 per OLPC laptop is actually easier than making a system that uses less than 2 watts of power, OLPC Chairman Nicholas Negroponte said in an interview earlier this year. Power is both a cost and an access issue in most developing nations.

NComputing's biggest sale so far has been to schools in Macedonia, where Dukker said the machines pay for themselves in power savings in just six months.

"Electricity in the developing world will cost five times what it costs in the US," he said. Many countries don't have power grids up, and most don't buy enough oil, coal or other materials to gain bulk cost savings like the US does. Other sources put electricity costs higher than his estimate, at six to 10 times what it costs in the US.

Training teachers and students how to use computing devices is also a challenge.

OLPC has worked to empower kids by making its laptops as easy to use as possible, as well as offering training for teachers. The point over time is for the kids to find educational opportunities on the Internet and elsewhere using the laptops.

"What we want is for children to keep a passion for learning," said Negroponte, in a speech earlier this year. To that end, OLPC has added a camera to its laptop as well as games, in addition to encyclopedias and other books for school.

Like Murenzi, Negroponte believes that getting the laptops into children's hands is the goal, so they can start learning to use them, whether in class, playing games or on the Internet.

"I don't draw a sharp distinction between entertainment and education because when you're trying to capture a child's whole life, and not just the time when a student is in class with a teacher, it's seamless," said Negroponte.

Not everyone agrees with the philosophy, and some said the entertainment aspect of the Internet, including movies, music, games and even pornography, can be a distraction.

NComputing, for example, has built in ways for teachers to monitor what students are doing on the Internet through software that allows the teacher to see what every terminal is doing.

Other organizations working in developing countries argue that kids and teachers need more instruction and direction on where these new digital skills can take them, say by learning how to create spreadsheets so they can become a valuable part of the workforce.

Microsoft, which has been working on computing issues in developing nations for the past 10 years, said training is critical.

"Just giving people the computers, throwing them out there, it's not enough," said Orlando Ayala, senior vice president of Microsoft's Unlimited Potential Group, in a phone interview.

Clearly, there are many choices for governments in developing countries to make on what's best for students, not just in what kind of computers and which OS they should come with, but also the kind and level of training. The number of companies and organizations offering free or low-cost technology continues to expand, as do their reasons for helping.

And in the end, the developing world shouldn't be seen as a profit center, a brand-name battleground, or a Petri dish for some ego-driven experiment in education, but instead as a place full of desperately poor kids seeking a brighter future.
