Netflix vs. Blockbuster

Before 2004, Blockbuster operated mainly through physical stores across the country. In 2006, Blockbuster had 5,194 stores nationwide, and 70% of the U.S. population lived within a 10-minute drive of a Blockbuster store. Blockbuster charged roughly $3-$4 per rental for a fixed period, and customers paid late fees if a title was not returned on time; these late fees accounted for more than 10% of Blockbuster's revenue. Only high-demand titles, the majority of them new releases, were stocked, so old, unpopular, or independent films were hard to find in stores: the economics of offering less popular films were not favorable. Blockbuster's growth strategy was based on opening new locations to expand geographic coverage and increase market share. Increasing competition from Netflix, Redbox, Amazon on-demand, and other VOD services resulted in declining revenues for Blockbuster (see Exhibit 5). The company filed for bankruptcy on September 23, 2010, wiping out more than $1B of debt. Some 54.8% of respondents in a weekly poll said creditors shouldn't lend the company any more money, while 45.2% remained hopeful that Blockbuster would rebound with more cash [1]. Recently, the deadline to file a restructuring plan was extended until March 21, 2011, and Blockbuster planned to close 72 stores by the end of 2010 and another 110 in the first quarter of 2011 [3].
Netflix, by contrast, offered an online DVD rental service with no physical store locations. Launched in 1998, the company focused on early adopters and offered only the DVD format, even though VHS cassettes were still prevalent at the time; the widespread acceptance of the DVD format was a core assumption, and a risk, of the Netflix business model. Users could search the collection online and select a desired title, and the DVD was mailed to the customer via USPS. Initially, Netflix charged per rental, like other companies, plus shipping. However, Netflix soon realized it was spending $100 to $200 to acquire a customer who then made a single $4 rental. In response, Netflix introduced a prepaid subscription model. The new model improved customer retention, turned the long delivery time into an advantage, and, most importantly, allowed Netflix to offer “unlimited” DVDs per month. This “all you can eat” model was an attractive alternative to the traditional per-day fee structure with late fees. A diagram of this system can be seen in Exhibit 1.
The next hurdle Netflix faced was the high demand for new and hit movies, and the user frustration that resulted when those titles were unavailable. Netflix developed a recommendation system that suggests available movies likely to interest a user based on preferences and rental history. The success of the recommendation system reduced demand for newer releases to 20% of total demand, compared with roughly 70% for traditional video rentals, and the large customer-generated rating system produced a positive “network effect.” As a start-up, Netflix had no business relationships with the major studios and mainly acquired titles from smaller studios at minimal discounts. Ted Sarandos, a veteran of the DVD rental industry with excellent studio relationships, succeeded in forming revenue-sharing agreements with the major studios. When Netflix had just one warehouse, in Sunnyvale, the average wait time was about one week, so moving to one-day delivery was a major improvement. Today, Netflix has 44 distribution centers across the country, which can deliver to more than 90% of its 6.6 million subscribers within a single business day. Using optimized processes, Netflix's employees could open and re-stuff an average of 800 DVDs per hour, allowing the entire distribution-center network to ship over 1.6 million DVDs per day. These efficient, minimalistic work processes are part of Netflix's overall competitiveness (see Exhibit 4); its operating cost relative to revenues is less than half that of its competitor Blockbuster (see Exhibit 5). In addition, Netflix collaborated with USPS to further reduce turnaround time and cost, receiving a standard discount for presorting mail by zip code. As Netflix became known as the de facto source for independent and foreign movies, smaller studios grew interested in partnering with Netflix to market their films.
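As a rough sanity check on the quoted distribution figures, the following back-of-envelope sketch shows the staffing they imply. The 8-hour shift length is an assumption, not stated in the source:

```python
# Back-of-envelope check of the distribution figures quoted above.
# Assumption (not from the source): one 8-hour processing shift per day,
# with every worker sustaining the quoted 800 DVDs/hour.

DVDS_PER_WORKER_HOUR = 800        # quoted per-employee processing rate
SHIP_TARGET_PER_DAY = 1_600_000   # quoted network-wide daily volume
SHIFT_HOURS = 8                   # assumed shift length
CENTERS = 44                      # quoted number of distribution centers

workers_needed = SHIP_TARGET_PER_DAY / (DVDS_PER_WORKER_HOUR * SHIFT_HOURS)
per_center = workers_needed / CENTERS

print(f"{workers_needed:.0f} worker-shifts/day network-wide")
print(f"~{per_center:.1f} workers per center")
```

Under these assumptions, the entire 1.6-million-disc daily volume requires only about 250 worker-shifts across the network, a handful of people per center, which is consistent with the "few employees, low overhead" strength listed in the SWOT analysis.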
In 2006, Netflix began acquiring rights to some independent movies through its Red Envelope Entertainment subsidiary. Another growing concern for Netflix was its attrition (churn) rate, which grew from 3.6% in 2002 to 6.3% in 2006. Although Blockbuster and Netflix compete in the same video rental market, they really do different jobs for consumers. Blockbuster has built its core business around the idea of a “movie night”: it assumes that most movie rentals are impulse decisions by people who want to watch a movie right now. These are usually new releases, so new releases make up the majority of what Blockbuster stocks. Netflix, on the other hand, treats movie watching as a regular part of daily entertainment. It appeals to customers who see “movie night” not as an event but as an ordinary form of entertainment, like watching television. The ability to hold movies longer and the convenience of receiving and returning them by mail are a perfect fit for this type of consumer. In short, Blockbuster and Netflix cater to two different types of movie renters: a Blockbuster customer probably watches fewer movies, mostly new releases rented impulsively, while a Netflix customer watches movies more frequently, is more interested in lesser-known films, and plans rentals in advance.
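The significance of the churn figures is easier to see with a small illustration. Assuming the quoted numbers are monthly churn rates (an assumption; the source does not specify the period), the expected subscriber lifetime is roughly the reciprocal of the churn rate:

```python
# Illustration of why the rising churn rate matters.
# Assumption (not stated in the source): the quoted churn figures are
# monthly rates, so expected subscriber lifetime is about 1 / monthly_churn.

def expected_lifetime_months(monthly_churn: float) -> float:
    """Mean subscriber lifetime under a constant monthly churn rate."""
    return 1.0 / monthly_churn

for year, churn in [(2002, 0.036), (2006, 0.063)]:
    print(f"{year}: churn {churn:.1%} -> ~{expected_lifetime_months(churn):.0f} months")
```

Under that reading, the rise from 3.6% to 6.3% cuts the average subscriber relationship from roughly 28 months to roughly 16, nearly halving the revenue each acquired customer generates.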
Blockbuster and Netflix have differing business models, and their operations strategies reflect it. As stated above, Blockbuster focuses on the “movie night” crowd who want a new release right away, so it operates brick-and-mortar stores where customers walk in and leave with a movie. Its business model is to let consumers make an impulse decision to rent a movie and get it immediately; to accomplish this, its operations strategy relies on physical stores located densely across the United States and stocked with mostly mainstream titles. Netflix, on the other hand, built its business model around the idea that consumers want convenience and selection more than the ability to act on impulse, and created an operations strategy to match: monthly fees instead of per-rental charges, mail delivery instead of store pickup, and a wide choice of movies instead of just new releases. It certainly appears that Netflix's business model will win out in the end, as shown by the stock prices of the two companies in Exhibits 2 and 3.
With the rise in availability of broadband Internet and Internet-connectable devices, the video-on-demand business model has gained prominence. VOD is a pay-per-view service that delivers multimedia content to an individual Web browser or TV set on user request. VOD systems either stream content through a set-top box, computer, or other device, allowing viewing in real time, or download it to a device such as a computer, digital video recorder, or portable media player for viewing at any time [5].
I would bet short on Netflix unless it makes “all” of its titles available via VOD. Netflix currently offers only a select few titles for online viewing, and this will not be enough in the future. Allowing users to watch any movie they want right away would be an enormous strategic advantage for Netflix. This would obviously take some negotiating with the other entities in the movie business; DVD manufacturers would be strictly opposed, but if the price were right a deal could surely be made. In addition to expanding its selection of VOD titles, making playback easy on televisions will also be important. A consumer who owns a 50” HD television is not going to want to watch a movie on a computer, even “on demand.” So enabling playback on Internet-enabled televisions of all brands is going to be an important step toward winning the VOD battle.
Another Netflix disadvantage is the limited TV programming available to watch online. Services like Hulu Plus offer a huge collection of high-definition TV programming, and the free Hulu service provides limited TV programming as well. While it may be hard to work out deals with the movie studios, deals for television shows (both past and present) seem far more attainable. Also, a customer is more willing to sit and watch a 30-minute television episode at a computer, whereas they might not be willing to do the same with a 2-hour movie. This sidesteps the struggle to get VOD onto televisions and is another reason that a focus on “on demand” television viewing could be arranged rather soon.
Even though it is currently struggling, Blockbuster has started the “Total Access” service, which allows customers to (1) order and return titles by mail, (2) rent and return titles at participating physical stores, and (3) rent or purchase movies online on demand. In addition to movies, Blockbuster also offers access to PlayStation 3, Xbox 360, and Nintendo Wii games at no extra charge. According to PwC, the video game industry will grow at a 6.7% compound annual rate over the five-year period, to $12.5 billion; consumer spending on video games has already surpassed spending on music. Therefore, offering video games alongside movie DVDs and Blu-ray discs could be a vital factor for success.
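The PwC projection can be unpacked with a quick calculation. Reversing the compound-growth formula gives the implied size of the market at the start of the five-year period (a derived figure, not stated in the source):

```python
# Sanity check on the PwC projection quoted above: if the video game
# industry grows at 6.7% compounded annually for five years to reach
# $12.5B, the implied starting market size follows by reversing the
# compound-growth formula: end = start * (1 + CAGR)**years.

CAGR = 0.067
YEARS = 5
END_VALUE_B = 12.5  # $ billions, quoted end-of-period size

start_value_b = END_VALUE_B / (1 + CAGR) ** YEARS
print(f"Implied starting market: ~${start_value_b:.1f}B")
```

That is, the projection implies a market of roughly $9B today growing by about $3.5B over five years, which puts the scale of the video game opportunity in context.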
Blockbuster has also started to set up vending machines at local grocery stores and gas stations. For newer titles, Redbox has been the preferred service in recent times. Redbox vending machines can be found in everyday grocery stores such as Kroger, Wal-Mart, and Biggs, as well as in pharmacies and local markets. Redbox offers the convenience of reserving a movie through the Internet (an iPhone app is also available), avoiding the frustration of not finding the desired movie. As of April 2007, kiosks averaged 49.1 rentals per day and $37,457 a year in revenue [4]. Redbox offers the cheapest movie rentals, charging $1.00 for a DVD and $1.50 for a Blu-ray disc per night, and allows customers to return a rental disc at any location. In addition to rentals, customers also have the option to purchase discs from Redbox vending machines. Other VOD providers, like Qriocity, have exclusive connectivity with Sony, a major player in the HDTV market. In short, Netflix has some stiff competition in the days ahead.

1. Blockbuster Creditors Should Call It Quits, Poll Says, TheStreet, 30 Jan 2011. URL:
2. Blockbuster wins 3-month restructuring extension, Reuters, 20 Jan 2011. URL:
3. Blockbuster wins 3-month restructuring extension, Reuters, 20 Jan 2011. URL:
4. Redbox, Wikipedia, Accessed: 31-Jan-2011, URL:
5. Video On Demand, Wikipedia, Accessed: 31-Jan-2011, URL:
6. NetFlix Annual SEC Report (2010) URL:

Exhibit 1
Exhibit 2
Exhibit 3
Exhibit 4: SWOT Analysis
Strengths:
-Few employees, low overhead costs
-Predictable revenue via monthly subscriptions
-Efficient/minimalistic supply chain
Weaknesses:
-High DVD attrition rates ~4.2%
-Small library
-Availability of new releases
Opportunities:
-Continue to utilize WOM marketing via referral systems
-Partnerships with production agencies
Threats:
-Competition, existing and new, offering VOD capabilities
-Blockbuster Total Access

Exhibit 5
Source: Netflix Business Model, White Paper; Coates A., Deshpande A., Lopez N.

Strategy as a Wicked Problem: Citibank

Companies today face a vast array of difficulties and challenges. Most of these challenges can be addressed and remedied with well-established processes; however, some fall into a separate category known as WICKED PROBLEMS [1]. WICKED PROBLEMS are identified by John Camillus as problems that “[have] innumerable causes, [are] tough to describe and [do not] have a right answer [1].” WICKED PROBLEMS involve stakeholders with different values and priorities, making it difficult for managers to clearly identify the root cause of a particular problem. Once a company identifies that it is faced with a WICKED PROBLEM, it must invent creative methods to deal with the dilemma, as no preexisting solution is available. This is primarily because WICKED PROBLEMS, by definition, have no historical precedent against which to benchmark appropriate managerial decision making.
Citibank is currently facing WICKED PROBLEMS arising from a variety of complex and intertwined causes and involving various groups of stakeholders (including numerous government agencies). Each party possesses a distinct set of priorities related to future industry regulation, viability, operations, and profitability. The source of these troubling issues is not easily identified, as they revolve around a complex network of occurrences: the collapse of the real estate industry, liberal lending practices, stringent government regulations, and lax policies are a few of the underlying issues. During the global financial crisis of 2008-09, the “too big to fail” Citibank received more than $50 billion in taxpayer money through the Troubled Asset Relief Program (TARP). The goal was to avoid a financial meltdown and strengthen the banks so that they could lend, which in turn would revive the economy and reduce the high unemployment rate. However, because of stricter regulations and lending practices, lending to businesses plunged, resulting in a credit crunch [3]. Meanwhile, to reduce costs, Citibank carried out massive layoffs, which further deteriorated its public image [4]. Another cost-reduction strategy, which ultimately had serious security consequences, was outsourcing information technology and support-center tasks to India: fraud incidents were recorded in which more than $350,000 was stolen from Citibank account holders in New York [6]. To make matters worse, there was a security leak exposing more than 600,000 Social Security numbers [7]. Within the last few years, the share price of Citibank has plummeted from $37 to $4, resulting in an uproar from shareholders. If Citibank were to actively address these issues related to its WICKED PROBLEM(S), the analysis would likely reveal a network of complex and interconnected root causes.
For example, in order to reduce massive losses from its credit card business, Citibank allegedly closed a limited number of co-branded MasterCard accounts, including Shell, Citgo, ExxonMobil, and Phillips 66-Conoco oil partner cards [2]. The arbitrary closing of credit card accounts led to dissatisfied customers and potential lawsuits. If the problems plaguing Citibank were thoroughly reviewed, one would have great difficulty identifying which particular problem should be the immediate focus of attention. In order to attack WICKED PROBLEMS, it is important to use multiple strategies to bring the root cause (or, more likely, causes) to light.
There are frameworks and guidelines available to identify WICKED PROBLEMS, but few that tame them with quantified results. There is a need to formulate a multi-variable, constrained stochastic optimization methodology to obtain the best possible strategy under the various interrelated constraints. In the case of Citibank, some of the considerations would be repayment of TARP funds, lending practices, foreclosure processes, hiring and layoffs, restructuring, and new business opportunities in emerging markets. Last year, Citibank separated into two businesses, Citicorp and Citi Holdings, to optimize the company's global businesses for future profitable growth and opportunities [8]. Even though many corporate strategies may not address all segments of the WICKED PROBLEM, the change in policies may provide the company with valuable insight to identify other opportunities to contest. A well-defined mission statement that clearly communicates Citibank's ethos, core competencies, growth projections, and future aspirations will help guide managers and their decision making when tackling such WICKED PROBLEMS. It is extremely important to involve stakeholders, document opinions, and communicate during strategy formulation. In the case of Citibank, the various stakeholders comprise the government, shareholders, customers, and the company's own employees. The implementation of new strategies developed to combat WICKED PROBLEMS may be extremely costly; the development of pilot programs and scenario analysis may be the best way to control the implementation of a particular solution. Companies faced with a WICKED PROBLEM need to remember that their focus should not lie on “fixing” the predicament with a single strategy. The goal is to gain a greater understanding of the problem(s), which comes from a multitude of strategies. Some of these strategies may indeed fail, thus providing additional insight into the WICKED PROBLEM'S solution.

1. “Strategy as a WICKED Problem”, John C. Camillus, Harvard Business Review, May 2008
2. “Citibank Cancels Credit Cards with Little Warning”, Accessed: 14-Nov-2010.
3. “Lending Falls at Epic Pace”, Michael R. Crittenden, Marshall Eckblad, The Wall Street Journal, 24-Feb-2010.
4. “Citigroup's Layoffs Could Reach 24,000 This Year”, Accessed: 14-Nov-2010.
5. “Citi's Statement on TARP Repayment”
6. “Citibank call centre fraud reveals Indian data protection deficit”, Computer Fraud and Security, vol. 2005, no. 4 (April 2005), p. 3.
7. “Citibank Exposes 600,000 SSNs”, Information Management, vol. 44, no. 3 (2010), p. 10.
8. “Citi to Reorganize into Two Operating Units to Maximize Value of Core Franchise”, Citigroup Inc. Press Release, 16-Jan-2009.

Source: Deshpande A., Evans A., Roberts M., Citibank’s Wicked Problems, White Paper.

The Manufacturing Information Age: The Future of Machining “Apps”

The world of “Apps” is the new wave of machining. Learn about the most cutting-edge advancements in machining connectivity and 24/7 monitoring that enable real-time decision making. “New to the world” machining apps and data-collection models will change the way manufacturers do business. I will be presenting on upcoming machining standards, manufacturing data management with cloud computing, energy assessments for sustainable manufacturing, and legacy machine monitoring strategies.

When: Wednesday, October 20, 10:30 am-12:30 pm
Where: Advanced Manufacturing & Technology Show, Dayton, Ohio
More information available at:

The Open Source Juggernaut

Open Source software is a community-based, evolutionary software development methodology that's radically changing the world's software landscape. Companies like Microsoft have relentlessly attempted to sow Fear, Uncertainty and Doubt (FUD) in the minds of today's technology decision makers regarding Open Source solutions, but the steady, concerted effort within each Open Source community to improve its project's security, stability, and feature set is winning the hearts and minds of today's enterprise IT departments. As of March 2009, Apache was running on two-thirds of the world's web servers, and BIND had an even higher market share, serving as the preferred DNS server for 98% of the world's hosting environments. Other Open Source projects, like the Firefox web browser, Asterisk PBX, and SugarCRM, are slowly but surely grabbing significant market share in their respective sectors and transforming the way we do business.
Open Source solutions are:
  • Free of Licensing Fees – The costs for Open Source software are as good as it gets . . . nil, nada, nothing. You’re not charged for using, improving or re-distributing Open Source software.
  • Flexible – The main benefit of using Open Source software is not the cost of the software but the greater economic latitude to build the right solution for any unique technical challenge. Open Source software offers its users unparalleled flexibility, so that business requirements to customize, integrate, or support applications are not constrained by a software license.
  • Transparent – Open Source means that what you see is what you get. You can inspect the code line by line to ensure that no disgruntled programmer has buried logic bombs, trapdoors, Trojan horses, viruses or any other nasty surprises in the code. There is no worry that the weak link in a security strategy might be some proprietary application with poor defensive measures. You can add security features to Open Source if you wish and ensure a consistent level of protection across all applications in the system.
  • Vendor Control Liberated – Often, organizations can be ‘locked-in’ to software products because the costs of switching to alternatives are prohibitively high. Proprietary software vendors can ‘lock’ users in to their products by ensuring that they’re not readily compatible with potential rivals, and can then increase the price of product upgrades or support without too great a risk of losing their customer base. Not only is there no incentive for Open Source developers to inhibit compatibility; Open Source projects also tend to use open standard formats, so there is little danger of being ‘locked-in’ to the application if something more compelling happens to come along.
  • Innovative – Open Source communities and projects encourage innovation. New ideas, needs and problems you think are important are probably already on the minds of others. As you work together with an Open Source community, you can better define your needs and suggest changes to the developers. Innovation is a significant key for building a competitive business, and with Open Source projects, the underlying technology can easily be improved and customized, giving you a more competitive edge.
  • Standards-Based – For many Open Source developers, peer review and acclaim are important, so it's likely that they will prefer to build software that is admired by their peers. Highly prized Open Source projects are distinguished by clean design, reliability and maintainability, with an adherence to standards and shared community values. By publishing the source code, developers make it possible for users of their software to have confidence that there is a basis for their claim of coding excellence.
As a result, the Open Source model is absolutely transforming the software landscape all around the world. The Deshpande & Riehle study, "The Total Growth of Open Source", describes how the number of Open Source projects, the total number of Open Source lines of code, and the model's expansion into new domains and applications are all growing at an exponential rate. Established commercial software giants like IBM, Microsoft, and Citrix are pouring millions of dollars into Open Source projects and changing the way they do business to work within, or collaborate with, the open source world.

This Open Source model is a juggernaut . . . it’s a freight train coming down the side of an enormous mountain, and much to the chagrin of the commercial software establishment, little can be done about the way it is going to fundamentally change the way we all do business.

The Challenge of Deploying Open Source Technology within the Enterprise

Although Open Source software, OS platforms, and OS development tools provide an absolutely compelling case to reduce costs and effectively solve business problems, there are a number of challenges the ITC staff must understand in order to minimize any potential costs/risks of using Open Source solutions. The risks of Open Source will depend somewhat on the individual customer environment, the industry the company is operating in, the knowledge and skill of internal personnel, the licensing model chosen, the organizational and governance models the company is employing, and whether the customer intends to change the code. Open Source projects are developed in various software languages, run on different platforms and databases, and encompass a huge variety of end-user applications…

Open Government in Higher Education


Open educational resources, open content, open access, open research, open courseware—all of these open initiatives share, and benefit from, a vision of access and a collaborative framework that often result in improved outcomes. Many of these open initiatives have gained adoption within higher education and are now serving in mission-critical roles throughout colleges and universities, with institutions recognizing reduced costs and/or increased value related to access and quality. If such a social organization of cooperation and production (i.e., openness) does indeed enhance the creation, delivery, and management of the critical products and services required by an institution of higher education to fulfill its mission, the next logical question is whether open development and governance can have a broader applicability—beyond software, resources, courses, learning objects, and content. Can understanding the principles and practices that govern open-source initiatives, and the communities of practice that manage them, provide a potential reference model for institutions of higher education? Can colleges and universities improve administrative and academic planning and decision-making processes within institutional governance through these open principles and practices?

Our Introduction to Openness
According to the Open Source Initiative (OSI): "The 'open source' label was invented at a strategy session held on February 3rd, 1998 in Palo Alto, California." The idea was inspired by the seminal work "The Cathedral and the Bazaar," first presented in 1997 by Eric Raymond, whose analysis, "centered on the idea of distributed peer review, had an immediate and strong appeal both within and (rather unexpectedly) outside the hacker culture."1 Originally, Raymond believed there was "a certain critical complexity above which a more centralized, a priori approach was required" and that the most important software "needed to be built like cathedrals, carefully crafted by individual wizards or small bands of mages working in splendid isolation."2
One of the earliest open-source technologies to enter the campus portfolio was Linux, in the 1990s. As Raymond noted: "Linus Torvalds's style of development—release early and often, delegate everything you can, be open to the point of promiscuity—came as a surprise. No quiet, reverent cathedral-building here—rather, the Linux community …

1. Open Source Initiative, "History of the OSI," <>.
2. Eric S. Raymond, The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary (Sebastopol, Calif.: O'Reilly Media, 1999), p. 21.
3. Ibid., pp. 21–22.
4. Donna Scott and George J. Weiss, "Linux Marches toward Mainstream Adoption," Gartner Research, November 11, 2003, <>.
5. "Tech Budgets Get Some Relief; Cautious Support for Open Source Applications," 
2004 Campus Computing Survey, <>.
6. Rob Abel, "Open Source Quick Survey Results," May 3, 2005, <>.
7. Lois Brooks, "Considering Open Source: A Framework for Evaluating Software in the New Economy," EDUCAUSE Center for Applied Research (ECAR) Research Bulletin, January 2, 2007, pp. 2–3, <>.
8. Ibid., pp. 9, 6.
9. Rob Abel, "Best Practices in Open Source in Higher Education Study: The State of Open Source Software," March 1, 2006, <> < open source hed 030106.pdf>. Quotes from "Open to Open Source," Inside Higher Ed, March 1, 2006, <>.
10. Archives of the American Scientist Open Access Forum, <>; David Wiley, "Defining 'Open,'" Iterating toward Openness, November 16, 2009, <>; "Learn for Free Online," BBC News, September 22, 2002, <>; UNESCO, "Forum on the Impact of Open Courseware for Higher Education in Developing Countries: Final Report" (Paris, July 1–3, 2002), <>; Susannah Fox, "Open Research Since 2000," Pew Internet & American Life Project, April 27, 2010, <>. See also Amit Deshpande and Dirk Riehle, "The Total Growth of Open Source," Proceedings of the Fourth Conference on Open Source Systems (OSS 2008) (New York: Springer Verlag, 2008), pp. 197–209, <>.
11. "Wireless Networks Reach Half of College Classrooms; IT Security Incidents Decline This Past Year," 2006 Campus Computing Survey, <>.
12. “IT Budgets Are Down--Again!,” 2009 Campus Computing Survey, <>.
13. "uPortal Steering Committee," Jasig website, <>.
14. "Leadership," Liferay website, <>.
15. Colin Currie, "What Is Openness, Anyway?," EQ, vol. 32, no. 1 (2009), <>.
16. "EDUCAUSE Values: Openness," EDUCAUSE Review, vol. 44, no. 1 (January/February 2009), <>.
17. In addition to the Internet as a platform for production, new development processes—collectively known as agile software development—have emerged in commercial environments such as Blackboard, RedHat, Sungard, and Oracle, as well as in open-source projects such as Moodle and Sakai, promising greater responsiveness to users.
18. "Category:Governance," P2P Foundation Wiki, <>.
19. Brad Wheeler, "Open Source 2010: Reflections on 2007," EDUCAUSE Review, vol. 42, no. 1 (January/February 2007), <>

Column: An open community

by Michael Nielsen on May 1, 2010

Full Article:

In January of 2009, Tim Gowers initiated an experiment in massively collaborative mathematics, the Polymath Project. The initial stage of this project was extremely successful, and led to two scientific papers: “A new proof of the density Hales-Jewett theorem” and “Density Hales-Jewett and Moser numbers”. The second of these papers will soon appear in a birthday volume in honour of Endre Szemeredi. The editor of the Szemeredi birthday volume, Jozsef Solymosi, invited me to submit an introduction to that paper, and to the Polymath Project more generally. The following is a draft of my introductory piece. I’d be very interested in hearing feedback. Note that the early parts of the article briefly discuss some mathematics, but if you’re not mathematically inclined the remainder of the article should be comprehensible. Many of the themes of the article will be discussed at much greater length in my book about open science, “Reinventing Discovery”, to be published early in 2011.

At first appearance, the paper which follows this essay appears to be a typical mathematical paper. It poses and partially answers several combinatorial questions, and follows the standard forms of mathematical discourse, with theorems, proofs, conjectures, and so on. Appearances are deceiving, however, for the paper has an…

… Linux is just one project in a much broader ecosystem of open source projects. Deshpande and Riehle have conservatively estimated that more than a billion lines of open source software have been written, and more than 300 million lines are being added each year. Many of these are single-person projects, often abandoned soon after being initiated. But there are hundreds and perhaps thousands of projects with many active developers.

… A similar process is beginning today. Will pseudonyms such as D. H. J. Polymath become a commonplace? How should young scientists report their role in such collaborations, for purposes of job and grant applications? How should new types of scientific contribution – contributions such as data or blog comments or lab notebook entries – be valued by other scientists? All these questions and many more will need answers, if we are to take full advantage of the potential of new ways of working together to generate knowledge.

Full Article:

Introduction to the Polymath Project and “Density Hales-Jewett and Moser Numbers”

by Michael Nielsen on May 1, 2010

Full Article:

In January of 2009, Tim Gowers initiated an experiment in massively collaborative mathematics, the Polymath Project. The initial stage of this project was extremely successful, and led to two scientific papers: “A new proof of the density Hales-Jewett theorem” and “Density Hales-Jewett and Moser numbers”. The second of these papers will soon appear in a birthday volume in honour of Endre Szemeredi. The editor of the Szemeredi birthday volume, Jozsef Solymosi, invited me to submit an introduction to that paper, and to the Polymath Project more generally. The following is a draft of my introductory piece. I’d be very interested in hearing feedback. Note that the early parts of the article briefly discuss some mathematics, but if you’re not mathematically inclined the remainder of the article should be comprehensible. Many of the themes of the article will be discussed at much greater length in my book about open science, “Reinventing Discovery”, to be published early in 2011.

At first appearance, the paper which follows this essay appears to be a typical mathematical paper. It poses and partially answers several combinatorial questions, and follows the standard forms of mathematical discourse, with theorems, proofs, conjectures, and so on. Appearances are deceiving, however, for the paper has an………..

 Full Article:

Open vs closed source software: The quest for balance

Sebastian v. Engelhardt, Andreas Freytag, Stephen M Maurer, 29 October 2010

Governments are increasingly interested in promoting open source software. Yet policymakers have seldom laid out any clear theoretical or empirical justification for these policies. This column explores recent studies suggesting that open source and proprietary software strengthen each other and should co-exist – too much open source could actually be a bad thing.

Open source software (OSS) like the operating system Linux is marked by free access to shared source code that is developed in a public, collaborative manner. While most of this activity was originally non-commercial, over the past decade companies have been asking themselves whether similar OSS methods can be made to earn a profit. This has led to an explosion of OSS-based business models and investments throughout the information and communications technologies sector (Ghosh et al. 2002, Dahlander and Magnusson 2005, Lerner et al. 2006).

Governments are similarly intrigued and have begun experimenting with various pro-OSS measures including procurement preferences, tax breaks, and grants (Lerner and Tirole 2005, CSIS 2008). At first, the implicit policy assumption seemed to be that OSS was inherently more efficient than proprietary, or “closed source”, software (CSS). This argued for almost any policy that promised to increase the amount of OSS. More recently, however, some politicians have begun to argue that society needs a “balance” of CSS and OSS firms (CSIS 2008). But how can policymakers recognise the right “balance”? Pro-OSS interventions make very little sense if there are too many OSS firms already.

The threshold question
The threshold question, of course, is whether governments can influence OSS at all. Ten years ago, most scholars were pessimistic. This was sensible in an era when OSS was driven by non-commercial incentives like altruism, reputation, and signalling. How do you influence a “movement” dominated by college students? (Schmidt and Schnitzer 2003).

Since then, however, things have changed dramatically. Deshpande and Riehle (2008) report that the OSS sector grew from about 500 projects in 2001 to 4,500 in 2007. Furthermore, this growth was dominated by business models in which companies contribute to a shared code base in hopes of increasing consumer demand for some related product (e.g. hardware, software) or service. This for-profit outlook is clearly responsive to government’s traditional tax-and-spend policy levers.
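As a rough check on the scale of that growth, the two project counts quoted above imply an average growth rate that can be computed directly (an illustrative calculation; the 500 and 4,500 figures are the only inputs, taken from the Deshpande and Riehle numbers cited in this paragraph):

```python
# Average yearly growth implied by the project counts quoted above:
# about 500 OSS projects in 2001 growing to 4,500 by 2007.
projects_2001, projects_2007 = 500, 4_500
years = 2007 - 2001
cagr = (projects_2007 / projects_2001) ** (1 / years) - 1
print(round(cagr, 3))  # 0.442, i.e. roughly 44% growth per year
```

A ninefold increase over six years thus corresponds to a compound growth rate of about 44% a year.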

Western governments, then, should have little difficulty influencing OSS development. Governments in the developing world will, as usual, face bigger challenges. von Engelhardt and Freytag (2010) study differences in OSS activities across 70 countries. They find that the main predictors of OSS activity are generalised cultural factors like interpersonal trust, favourable….

Integrated Marketing Services | Definition 6: THE AGE OF THE DEVELOPER

Posted by Tom Kirszenstein on Tuesday, November 17, 2009

I recently read that the White House has chosen an Open Source CMS (Content Management System) to develop their government Web site. This announcement caught my attention for several reasons--not only are many agencies moving their clients to open source and praising its virtues, I also started using Drupal this past year and found it remarkably fast and easy to set up and maintain my own Web sites with quality results. Despite some criticism of open source over the years, more and more commercial (and government) developers are choosing it.

It's hard to argue against the benefits of free software, especially when results show that the software does what we expect, often exceeds expectations, and provides more opportunities for expansion than many proprietary products. While relative newcomers Drupal and Wordpress lead the pack for CMS offerings, open source mainstays such as Linux and Perl have been around for many years--not only surviving, but thriving over time. In “The Total Growth of Open Source”, a study by Amit Deshpande and Dirk Riehle of SAP Labs, LLC, results show that “the total amount of source code and the total number of projects double about every 14 months.” Open source enables freedom for both users and developers to move and change quickly when needed, as well as providing more flexibility with software decisions such as to upgrade or not to upgrade. It's really no surprise that businesses and individuals are moving to open source at exponential rates.
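The "14 months" figure quoted above can be turned into more familiar growth numbers with a line or two of arithmetic (an illustrative calculation, using only the doubling time from the study):

```python
# Convert "doubles about every 14 months" into annual growth figures.
doubling_months = 14
annual_factor = 2 ** (12 / doubling_months)    # yearly multiplier
decade_factor = 2 ** (120 / doubling_months)   # multiplier over ten years
print(round(annual_factor, 2))  # 1.81, i.e. about 81% growth per year
print(round(decade_factor))     # 380, i.e. roughly 380x over a decade
```

Doubling every 14 months is thus equivalent to roughly 81% growth per year, and compounds to several hundredfold growth over a decade.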

Of course, Open Source has always been very much associated with Free, although there are other solid reasons to choose it beyond its cost. The pool of development resources is not limited to a specific company or provider, but instead is seemingly unlimited. As a specific open source project becomes popular, more and more developers start contributing, growing and adding to the code. Not only do they enhance the software to make it better for everyone, but they also create markets for their own support services. The better the code is, the more people will use it, and the more support is needed. Large developer communities have evolved around each software project, contributing to its growth, and administering its support. These open source communities are continually coming up with new innovations, powerful add-ons, extensions, and effective tools.

With so many open source choices available, even the……

Data is the next Intel Inside | Daniel's Web 2.0 related Blog

Data is the next Intel Inside | 11/Mar/2010
This blog entry is about the future of data and its effect on Web 2.0 applications.  In the lecture, we learned that Web 2.0 as we know it relies (in part) on open-source software and code.  Many elements of web access are already provided by open-source software – from Apache servers (that host the web content) to database management tools, like MySQL.
There can be little doubt that open-source development is on the rise.  In fact, according to Deshpande & Riehle, it accounts for a large portion of the web server market.  To add to this, they describe the growth of open-source development tools as “exponential.”  The question is, are open-source applications providing the platform for a more “open”, free Internet?
Unfortunately, the answer is most likely no.  The reason is that open-source technologies are being absorbed by new IT giants, who have formed as a result of Web 2.0.  Like Intel in processor development, Google has a significant proportion of market share, as shown by various measures (such as Nielsen polls).  But it could be argued that Google will only support the idea of an open internet while it serves the interest of the business.  In fact, in an article about the strength of Google, Messina argues that “Google decides which ports it wants to open and for whom.”
All of the above relates to the concept of “Data as the next Intel Inside” because, as O’Reilly argues, “…data as the Intel inside is the one that will ………….


by Morgan Currie, Christopher Kelty, Luis Felipe Rosado Murillo, University of California, Los Angeles*

By looking at the history of long-lasting and successful Free and Open Source Software (FOSS) projects, one can observe a common trajectory: they tend to start with a few core developers, then increase in code base size, complexity, and number of contributors and users, then finally find it necessary to create a formal organization to help coordinate the development efforts, maintain hosting infrastructure, secure funding, manage donations, seek partnerships, and protect its members from patent and copyright disputes. The question we discuss in this paper is “what are the characteristics of participation in those projects that do not describe this common trajectory?”
In order to respond to this question, we will compare projects with different trajectories: both those which were initially sponsored by a company and then created a community around them, and those that never constituted (or refused to constitute) a formal organization. By addressing this question, we will highlight fundamental differences and similarities between projects: what makes them grow or fail to attract and foster collaboration and public participation. In order to establish parameters for comparison, five dimensions of FOSS projects will be compared and discussed: 1) project genealogy; 2) tasks (how are they defined, described, and distributed?); 3) alliances (who are the partners? Are they from the public sector, private sector, or both?); 4) governance (is there a formal procedure for decision-making? If not, how are decisions made?); 5) availability (which licenses are used? What is the rationale behind the decision to use a particular license?). We will explore the following projects in order to respond to the questions above:, Debian, Android, and Xara Extreme Linux.
This article is based on research data from the project “Birds of the Internet”, sponsored by National Science Foundation (NSF), and hosted at the Center for Society and Genetics at UCLA. The project uses interpretative social science methods to explore and compare features of participation across a wide range of projects (not limited to Free and Open Source projects). By using comparative analysis, the project seeks to further concept development in the general domain of Internet public participation.

For more than three decades Free and Open Source Software (FOSS) has generated an intense and intricate dispersion of technical objects and practices based on global collective efforts. Recent anthropological and sociological accounts of Free Software as a political, technical, and cultural practice further investigated the ongoing dispute regarding individual property over intangible goods and the opposition created by FOSS to the advancement of the transnational intellectual property regime (Coleman 2005; Kelty 2008; Leach 2009; Weber 2005). FOSS offered viable alternatives for remote coordination, distribution, and innovation in software development, made possible by virtue of its licensing schemes: the constant rebuilding effort over a set of public software licenses which allowed (re)distribution, free use, and adaptation of software code. The resulting sociocultural phenomena are situated between at least two major registers: the general reciprocity oriented towards the free circulation of software as a public good, and the market economy in which computer technicians offer their computing expertise for remuneration.
From an anthropological standpoint, FOSS is curiously made up of boundary practices in a multitude of social ties and sociotechnical arrangements, bringing together persons, associations, and technical objects: it is a form of craft that is hard to analyze without problematizing the boundaries of established categories and oppositions, such as individual/society, material/immaterial, discourse/practice, private/public, gift/market, persons/objects, work/leisure, and code/expression. In this sense, Free Software is better approached as a quasi-object (Serres 2007) assuming different forms but mainly organized around intersecting recursive publics (Kelty 2008). Public administrators, for instance, may advocate for FOSS as a tool for social change, given its potential to foster digital inclusion. Among computer hackers, it is often defined as a highly valued expression of oneself and one's technical competence. For artists and free culture activists, it is construed as a set of tools to empower cultural production. In the past decade we experienced an implosion of FOSS, currently being practiced under the new rubric of “Open Access”, “Open Data”, and “Open Source Hardware”.
This article analyzes FOSS projects’ participatory structures with informally negotiated or legally formalized aspects that relate to their growth over time. As pointed out by Coleman (2005), “most FOSS projects in their infancy, including Debian, operated without formal procedures of governance and instead were guided by the technical judgments of a small group of participants” (Coleman 2005, p. 325). Formalization typically comes about to address issues of scale and management. Riehle and Deshpande (2006) demonstrated that FOSS projects increased in size exponentially between 1998 and 2006, since “the total amount of source code and the total number of projects double about every 14 months” (Riehle and Deshpande 2006, p.11). As projects scale up, more is at stake beyond the purported division between FOSS and proprietary development models (Lakhani 2007; West 2009). As FOSS projects grow, they tend to organize their activities into businesses, NGOs, and foundations to coordinate software development work and to manage intellectual property rights, profits, and fund-raising. Spontaneous gatherings of half a dozen hackers become formal organizations over time, transforming substantially the very social fabric which constitutes software development projects.
In our analysis of Internet-based participatory projects more generally (Fish et al. 2011) we proposed two distinct entities which are generally present: first is the “Formal Social Enterprise” (FSE) – legal organizations with formal decision-making procedures that are composed of at least one contractually obligated employee. On the other end of the spectrum are the “Organized Publics” (OP), or the community of participants ………..


Economic Force: Free Flow of Information - EconoMonitor

Author: BrainTrust @RealEconomy.Org  ·  April 9th, 2009  
How Information Flow is Shaping the Economic Landscape
The Internet has radically changed the availability and cost of information. The Internet isn’t just moving advanced technology across town, or across the nation. It’s moving it from one nation to another, from areas of high concentration of technology to areas of low concentration. It is leveling the playing field.
This revolution of information access is at least as important as the other major revolutions, such as the Industrial Revolution of the late 19th century.  The graph below shows the exponential growth in access to information via the Internet:

It’s an Economic Earthquake
Free-flowing information isn’t simply a technological change. It represents a fundamental and massive economic force. The economy cannot remain static in the presence of such a powerful shift.  The strategies that will work in tomorrow’s economy must take this revolution into account.  They must recognize the effects and harness the power of this force in order to be successful.
What is the economic impact of free flow of information?  How does it affect today’s business climate?  How does it affect your job? What should be our national policy on information flow in the years ahead? Let’s explore these questions in detail.

The Economic Impact of Free Flowing Information
Let’s examine how the flows of information have changed in the past 15 years. Think of outsourced software development, global supply-chain management, online patent databases, open-source software projects, and Wikipedia. Think about Alibaba, the web-based global marketplace for manufactured items. Everyone has nearly equal access to the latest technological information, regardless of which society created that knowledge. This has produced a sudden and accelerating shift in the relative capabilities of whole nations, and has moved entire economies away from certain kinds of economic activity and toward others.  Manufacturing has moved to emerging economies.  Technical skill has followed via the outsourcing boom.  If manufacturing know-how and technical know-how have flowed around the world, can other forms of know-how be far behind?
In addition to changing the worldwide competitive-advantage landscape, the Internet has clearly transformed many areas of our society, such as media, entertainment, commerce, and the retail shopping experience.  The presence of a mechanism to quickly comparison shop prices and get the lowest one, or even to conduct online auctions has meant a huge shift toward online shopping, at the expense of local retailers and, of course, jobs.  But the elimination of retail and other service sector jobs has not been the only result of the internet explosion.  The internet has also had a fundamental impact on other forms of employment as well.

Leveling World Wage-Rates
Most studies place the percentage of workers that are classified as knowledge workers anywhere from 30-50% of the workforce in the most service-oriented economies, such as the UK and US economies.  A knowledge worker is one whose contributions depend on the development and synthesis of ideas.  While knowledge work has expanded and is expected to continue expanding, knowledge workers in the developed world are facing fierce price competition from the places where the information is now more freely flowing.  This isn’t your ordinary price competition, though. A software engineer’s salary in an emerging market can be 10% of the salary in a developed country.  In other words, an engineer in India may make $7,000, while his U.S. counterpart expects to make $70,000.  In a globalized economy, who will win this price war?

Opening the Floodgates – Open versus Closed Intellectual Property (IP)
On a recent trip to an aerospace museum in Tucson, Arizona, I was standing in front of a Kaman HOK-1 Twin Rotor helicopter when I met a man with an interesting story to tell. As I studied the design of the helicopter and commented on it, the retired engineer standing next to me told me what really impressed him about Kaman.  It wasn’t their helicopters, it was their bearings.  He told me the story of his days designing landing gear for companies such as Boeing, and how no company could match the Kaman self-lubricating bearing products.  Nobody knew what was inside, and for the longest time, Kaman would not even file a patent on their technology, because that would mean they would have to disclose how they did it.  Kaman shut others out of their market for years and successfully deployed their bearings on many aircraft platforms.  Kaman’s approach to information flow was simple – trust no one, because then no one can duplicate what you do.
In today’s economic landscape, Kaman’s approach to IP protection and business building is a relic from the past.  Today’s corporate strategy is usually built around a different set of criteria.  Instead of building basic technology and creating a product line around it, today’s corporations add value to technology they buy.  They may develop a market by integrating hardware and software and meeting an end-user need.  Instead of technology differentiation, what is more important is time-to-market, head-to-head price competition, and being in the right place at the right time.  What becomes deemphasized is the traditional model of building a product line from company-owned technology and owning the market because your engineering and know-how is fundamentally just better.  One clear culprit in this changing landscape is the free flow of information.
Open development. Open Source. Open Access. Open Architecture. Open Standards.  The Open movement has elicited fundamental changes in the way information is shared, especially in the technology industry. The growth of open source software projects, for instance, has been exponential in the past decade.  What is interesting is to compare the graph below to the growth of the internet.  Clearly, the more information flows, the more collaboration between people occurs.  This is very evident in the open source world.
Source: Amit Deshpande and Dirk Riehle, “The Total Growth of Open Source”, Proceedings of the Fourth Conference on Open Source Systems (OSS 2008), Springer Verlag, 2008, pp. 197–209.
While it can be difficult to assess the impact of open source on, say, overall software employment levels in the US, it suffices to point out that, overall, open source tends to move the software talent pool away from fundamental technology development and toward a value-add support model instead.
While the software industry has been significantly impacted by the free flow of information, the domain of computer circuit design has proven much more resistant to this free information flow.  So-called IP cores (blocks of hardware logic) have been a fixture of the hardware design industry for some time, but they are much harder to use in an open development process.  In the hardware arena, the notion of selling IP (in the form of IP cores, Application Specific Integrated Circuits (ASICs) and silicon chips) is much more developed.  A well-known example of IP reuse illustrates this point.  Consider the graphics engine first introduced to the world via the Sega Dreamcast.
Sega’s Dreamcast game console has been called many things, but a roaring commercial success is not among the terms commonly used.  However, the PowerVR 3D graphics engine found in the Dreamcast has had an illustrious history. Eventually taken over by silicon IP vendor Imagination Technologies, the PowerVR IP now powers most mobile 3D applications on cellular phones, including Apple’s phenomenally successful iPhone.  In this case, the same IP resulted in two very divergent commercial products,  ........

Kineo: E-Learning Market Update (June 2008)

This month we reflect on the rise of open source software and the implications for the e-learning market.

The growth of open source continues to be the dominant theme of this decade. In 2007, IDC issued a report which argued that “the market for standalone open source software (OSS) is in a significant growth stage”. IDC forecast that adoption of OSS will accelerate through to 2011 as barriers to adoption get knocked down. According to IDC, significant demand for standalone OSS will see the market grow at a rate of 26% a year to reach US$5.8bn by 2011.
We are in the early stages of the development and deployment of OSS according to Matt Lawton, program director of IDC's Open Source Software Business Models research program. "The market is still quite immature, especially now that we see active open source projects in all layers of the software stack. Although we see healthy growth in revenue from standalone open source software, we must keep in mind that revenue will substantially lag behind the distribution of open source software. Many distributions of standalone open source software are free, while paid distributions typically are based on pay-as-you-go subscriptions rather than pay-up-front license fees."
IDC's study revealed that the drivers for OSS adoption, and in particular commercial adoption of OSS, include the growing realisation that OSS “provides them with more choice and leverage with proprietary software vendors.”
IDC said that worldwide revenue from standalone open source software reached US$1.8bn in 2006.
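The two IDC figures quoted in this update are mutually consistent, which is easy to verify by simple compounding (the US$1.8bn base and 26% growth rate are taken from the text above):

```python
# Compound IDC's 2006 baseline at its forecast growth rate.
revenue_2006 = 1.8      # US$ billions, standalone OSS revenue in 2006
annual_growth = 0.26    # IDC's forecast rate through 2011
revenue_2011 = revenue_2006 * (1 + annual_growth) ** 5
print(round(revenue_2011, 2))  # 5.72, in line with the US$5.8bn forecast
```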
The benefits to organizations are real and tangible.
“Organizations are saving millions of dollars on IT by using open source software. In 2004, open source software saved large companies (with annual revenue of over $1 billion) an average of $3.3 million. Medium-sized companies (between $50 million and $1 billion in annual revenue) saved an average $1.1 million. Firms with revenues under $50 million saved an average $520,000.”
Walli, S., Gynn, D., Rotz, B. V. The Growth of Open Source Software in Organizations: A Report.
A 2008 survey on the future of open source software found that:
  • Approximately 81 percent of respondents feel the economy’s turbulence is “good” for open source software
  • Respondents revealed that the top three factors that make open source software attractive include: lower acquisition and maintenance costs; flexibility/access to libraries of community-developed code; and freedom from vendor lock-in
  • More than 55 percent of respondents believe that in five years 25-50 percent of purchased software will be open source vs. proprietary
  • The Web Publishing/Content Management market is expected to be most vulnerable to disruption by open source in the next five years
  • Respondents expect the security tools market to be least vulnerable to disruption by open source in the next five years
The key findings of the survey are outlined in the slides below.
Amit Deshpande and Dirk Riehle at SAP Research have undertaken a recent study of the growth of open source and conclude that open source software code is growing exponentially.
The adoption of open source software is becoming mainstream. For example, BT recently decided to provide the open source SugarCRM system to its customers rather than the commercial Siebel product.

What are the implications for e-learning?
The big development in the e-learning market has been open source learning management environments.
A recent Gartner survey of higher education entitled "Higher Education E-Learning Survey 2007: Clear Movements in the Market" found "clear movement in the market" toward more open-source platforms in 2007. 26 percent of platforms on surveyed campuses were open-source e-learning systems such as Moodle or Sakai. Gartner projects that number will grow to 35 percent by the end of 2008.
In the corporate sector, last year’s E-learning Guild report found that over 25% of small and medium-sized businesses were using Moodle, the open source LMS. At Kineo we are strong supporters of open source software, and if you are not familiar with Moodle you can try our free Moodle LMS demo.
Developments in the authoring and development areas have been less significant, although there are open source authoring tools such as eXe, and many free development tools, such as Audacity for audio recording and editing, which we review this month.
There are also a range of very useful open source and free testing tools, you can find out more at Mark Aberdour’s excellent
The real implications of open source software, though, are still to be felt. Open source software is a disruptive force. The free nature of such software means that new tools can be distributed and adopted widely in a very short period of time. The e-learning market could be affected by developments of open source software from the education sector which transfer over to the commercial sector, as is happening with Moodle. We believe we will see further open source developments, from assessment software to authoring tools to learning environments, which will change the shape of e-learning over the coming years.

The Bucket: Open Source Growing At an Exponential Rate

Posted on 24 May 2012.

sipmeister writes “Two computer scientists who work for enterprise software giant SAP have shown that open source is growing at an exponential rate. Not only is the code base growing exponentially, but also the number of viable projects. Researchers Amit Deshpande and Dirk Riehle analyzed the database of open source startup and looked at the last 16 years of growth in open source. They consistently got the best fit for the data using an exponential model. Relating this to open source market revenue, Deshpande and Riehle conclude that open source is eating into closed source at a non-trivial pace.”
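The "best fit using an exponential model" mentioned above is typically obtained by fitting a straight line to the logarithm of the yearly counts. The sketch below illustrates the idea on synthetic data (the counts here are invented for illustration, with growth seeded at the study's 14-month doubling time; they are not the database the researchers actually analyzed):

```python
import math

# Synthetic 16 years of project counts growing exponentially
# (doubling every ~14 months, i.e. a yearly factor of 2**(12/14)).
true_factor = 2 ** (12 / 14)
counts = [500 * true_factor ** t for t in range(16)]

# Log-linear least squares: fit log(count) = a + b * year.
# An exponential trend is a straight line on a log scale, and the
# slope b recovers the log of the yearly growth factor.
xs = list(range(len(counts)))
ys = [math.log(c) for c in counts]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
    / sum((x - mx) ** 2 for x in xs)
print(round(math.exp(b), 3))  # 1.811, recovering the yearly factor
```

On real, noisy data the same fit yields a growth rate estimate plus residuals, and comparing those residuals against linear or polynomial fits is what lets researchers say the exponential model fits best.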

Socialized Software: The Curse of Open Source License Proliferation

Posted on May 8, 2008 by Mark

I remember when the big open source debate was whether a piece of software was really open source, meaning it was released under an OSI-approved license. The tides are shifting; debates now center around which open source license to use. Adding to the complexity of the debate is the proliferation of OSI-approved licenses. Now discussions are rising over which open source licenses are in the best interest of all stakeholders of an open source project. In the case of collective software works there are also the added intricacies of license compatibility.
Part of the problem is that companies are trying to drive their own vanity licenses that reinforce their branding and leverage the goodwill associated with the open source seal of approval. SugarCRM once mounted an offensive asking for acceptance of their Sugar Public License (a derivative of the OSI-approved Mozilla Public License) that for a brief time was gaining popularity among commercial open source developers. The license was rejected and Sugar has since moved to the GPLv3. Ironically, the Common Public Attribution License (CPAL) submitted by Social Text, which bears many similarities to the Sugar Public License, was accepted by the OSI. Even Microsoft has successfully lobbied the OSI board for approval of two licenses: the Microsoft Public License (Ms-PL) and the Microsoft Reciprocal License (Ms-RL), which are very similar to the BSD and GPL licenses.
The number of open source projects has grown considerably over the last ten years; exponentially, in fact, according to a paper delivered by Amit Deshpande and Dirk Riehle in March of this year. According to the Black Duck Software knowledgebase, the most common open source license used by open source projects is the GPL version 2.0. According to that same source, 94% of open ….

The open source renaissance

Full Article:

By Brian Gentile, Jaspersoft CEO

It occurred to me recently that the open source movement is really nothing less than a renaissance.  Perhaps that sounds grandiose, but stay with me.

Take, for example, U.S. patent and copyright protection laws and policies.  They reinforce proprietary, “closed source” rights and policies.  As a result of this system, many substantial U.S. companies have formed around breakthrough ideas, but incentives are in place for those companies to guard and protect their intellectual property, even if others outside the company could extend or advance it more rapidly.
Now, to be clear, patent and copyright protection is necessary because it properly encourages the origination of ideas through the notion of ownership.  But, too few people consider the upside of allowing others to share in the use of their patents and copyrights, because they think such distribution will dilute their value — when, in fact, sharing can substantially enhance the value.  Fundamentally, "open source" is about the sharing of ideas, big and small, and the modern renaissance represents a newfound understanding that sharing creates new value.
In many areas of science, the sharing of ideas (even patents and copyrights) has long been commonplace.  The world's best and brightest physicists, astronomers, geologists, and medical researchers share their discoveries every day.  Without that sharing, the advancement of their ideas would be limited to just what they themselves could conjure.  By sharing their ideas through published papers, symposiums, and so on, they open up many possibilities for improvements and applications that the originator would have never considered.  Of course, the internet has provided an incredible communication platform for all those who wish to collaborate freely and avidly and is, arguably, the foundation for this renaissance.
That’s why it’s ironic that one of the slowest scientific disciplines to embrace open source has been computer science.  For the past 40 years, for example, incentives have been strong for a company to originate an idea for great software, immediately file a patent and/or register a copyright on it, and then guard it religiously.  No one would have thought it beneficial to expose the inner workings of a complex and valuable software system so that others might both understand and extend it.  Today, however, there are countless examples where openness pays off.  So, why have computer science and software lagged in the open source renaissance?
That computer science is an open source laggard is ironic because the barriers to entry in the software industry are relatively low compared to those of other sciences.  One might think that low entry barriers would reduce the risk of sharing ideas and thereby promote it. But, instead, software developers (and companies) have spent most of the last 40 years erecting other barriers, based on intellectual capital and copyright ownership — which is perplexing because it so limits the advancement of the software itself.  But, such behavior does fit within the historical understanding of business building (i.e., protecting land, labor, and capital).
Another relative laggard area — and an interesting comparison — is pharmaceuticals and drug discovery.  When I talk with colleagues about this barrier-irony phenomenon, this is the most common other science cited (i.e., another science discipline that has preferred not to share).  But, in drug discovery the incentives not to share are substantial because the need to recover the enormous research costs through the ownership of blockbuster drugs is extremely high.  In fact, because the barriers to enter the pharmaceuticals industry are quite high, one might think that would promote openness and the sharing of ideas, given that few others would genuinely be able to exploit them.  But, once again, the drive to create a business using historically consistent methods has limited the pharmaceuticals industry to closed practices.
So, returning to computer science and software, maybe the reasons for not sharing stem from the complexity of collaboration? That is, it’s hard to figure out someone else’s software code unless it’s been written with sharing fundamentally in mind.  Or maybe there’s a sense that software is art — “I want to protect my creative work” — more like poetry than DNA mapping.
Either way, the renaissance is coming for the software industry. Software will advance and solve new problems more quickly through openness and sharing.  In this sense, computer science has much to learn from the other areas of science where open collaboration has been so successful for so long.
Fortunately, the world of software is agile and adept. According to research by Amit Deshpande and Dirk Riehle at SAP Research Labs, during the past five years the number of open source software projects and the number of lines of open source software code have increased exponentially.  The principles that this new breed of open source software has forged are already leaving an indelible mark on the industry.  Soon, its proponents believe, all software companies will embrace these fundamental open source principles:  collaboration, transparency, and participation.  The course of this renaissance will be our guide.

I would be interested in your feedback on these ideas because the open source renaissance is well underway and I plan to be a model historian.