Giving It Away, Making Money

The burgeoning “Internet Economy” is redefining operational assumptions and models for organizations throughout the public and private sectors. This is particularly evident as free access to information increases and the clash between open source and proprietary software development intensifies. But the transformation underway does not stop in the realm of bits and bytes; it is spilling into the traditional mainstays of agriculture and industry and threatens to alter our most basic tenets of how to market, value, and receive compensation for our creativity, collaboration, and contribution. This posting explores some of the novel approaches underway in response to these changes and sets the stage for viable business models in the near future.

The long tail of the Internet provides opportunities for individuals to post information, knowledge, experience, and insight from one location and reach potential audiences almost anywhere else in the world at any time. Countless millions of individuals, businesses, and organizations of all types use websites, wikis, blogs, etc. to do just that. Collectively, the number of intelligent insights and innovative ideas posted every minute is sufficient to change the world many times over.

Despite the countless remarkable observations and viable solutions presented, it is difficult for all but a narrow slice of contributors to make a living from them via Internet media. Unless there is a subscription fee to the site, the content of postings is free to read. In many instances, incorporating or reproducing that content elsewhere requires only acknowledgement of the original contributor / author.

Under these circumstances it is difficult to receive payment for the work itself. Instead, payment is made based on what else readers do in and around the material they are reading: how many embedded links they follow in the posting, how many advertisements around the periphery of the posting they visit, and how many RSS feeds and email notifications they elect to receive, to name a few.

Such are the metrics and dynamics of the “Internet economy”. In an October 22, 2007 PC World article entitled “Web 2.0 Revives Internet Economy”, Len Rust states:

Revenue from the large range of content and services available from the Internet is rapidly increasing globally; travel, gambling, adult content, music and health services are particularly popular, and social networking services are flourishing. It is estimated that by 2010 more than US$2 billion will be spent on social network advertising in the US alone.

Information is power for those who have it when others don’t. When information is free, it is a great equalizer. This equalizing feature is changing the business models of corporations that made their fortunes from a portfolio of proprietary offerings, as suggested in the article, “Facing Free Software, Microsoft Looks to Yahoo”, by Matt Richtel in the February 9, 2008 edition of The New York Times:

Nearly a quarter-century ago, the mantra “information wants to be free” heralded an era in which news, entertainment and personal communications would flow at no charge over the Internet.

Now comes a new rallying cry: software wants to be free. Or, as the tech insiders say, it wants to be “zero dollar.”

A growing number of consumers are paying just that – nothing. This is the Internet’s latest phase: people using freely distributed applications, from e-mail and word processing programs to spreadsheets, games and financial management tools. They run on distant, massive and shared data centers, and users of the services pay with their attention to ads, not cash.

Such widespread distribution of free software, in many instances accompanied by open source code as well (see the essay “What is Open Source” in the first chapter of Open Source for the Enterprise by Dan Woods, a book featured on Tim O’Reilly’s website), raises a basic question: where is money made in such an environment?

The sequence of diagrams that follows seeks to address this question, beginning with the first one below, which plots options for ownership of software and availability of source code on axes of free versus paid and open versus closed.

The central dichotomy runs from the bottom-left, where both the software and source code are given away, to the upper-right, where there is a charge for a software license and the source code is not available, as depicted in the next diagram below:

Public sector / non-profit institutions are represented in the lower-left quadrant where the deliverable and how to make it are given away. The private sector / for-profit businesses of all types dominate the upper-right quadrant where the deliverable is sold and the intellectual property (IP) that defines its design and production method is tightly held. Within the very different realities at either end of this diagonal dichotomy, traditional administrative and business models have enjoyed a distinct separation of function, role, and design. However, the advent of the Internet economy has facilitated a steady migration and blending between the ends, opening the two adjacent quadrants for development of new administrative and business models.

As the Internet economy becomes more established, it is affecting all types of organizations. This is illustrated in the diagram below by the addition of a “Portfolio (What) – Practices (How) – Assets (With What)” triangle circumscribing the diagonal arrow. This triangle is positioned to emphasize that the WHAT and HOW of an organization are fluid on price and openness, but the investment in what it takes to do the WHAT and HOW must be exceeded by sales revenue, if a for-profit business, or matched by gifts and volunteer efforts if a non-profit entity. The consequence of not doing so is cessation of operation.

Looking specifically at a business that goes beyond software and source code to the design and manufacture of tools, machines, and equipment systems, let’s consider something simple like a machine to mold compressed earth blocks (CEB). Advanced Earthen Construction Technologies, Inc. (AECT) offers several CEB machine models; the “Impact 2001A Series” can be towed, is hydraulically operated, is powered by a 7 HP diesel engine, and has the capacity to make approximately 300 blocks per hour. It is protected by U.S. patent, manufactured in Texas, and can be shipped anywhere. The price for a near-new unit is approximately $28,000. This is clearly in the upper-right quadrant. AECT’s goal is to set the price and control the IP such that their investment in assets (people, facilities, equipment, and operations) is covered and they are profitable over the long run.

An alternative is Marcin Jakubowski’s Factor E Farm project, called “the most important social experiment in the world” to emerge out of the Internet economy by Michel Bauwens in a P2P Foundation posting. Marcin is working in parallel on a multitude of collaborative projects that, when complete, will provide a portfolio of products and services useful as a “civilization starter kit” for those committed to building a basic and robust infrastructure for a “Global Village” economy. One of these projects is a CEB machine formerly named “The Liberator”. Marcin estimates:

Parts for The Liberator as detailed below are approximately $1000. The machine will cost an estimated $3-5K, depending on manufacturing abilities.

Open source design is one of the main reasons why Marcin’s CEB machine is less expensive. As he states,

Here are the capitalization requirements for fabrication capacity. The Cost column reflects the price structure if off-the-shelf tools and materials – and proprietary development procedures – are utilized. This cost is conservative, as intellectual property costs are probably higher than the $10k that was specified. The alternative route, or the Open Source Cost, is that which utilizes open source know-how and is built on a land-based facility. The open source option means that certain equipment may be fabricated readily from available components when a design and bill of materials is available.

Obviously, Marcin has invested, and continues to invest, considerable personal resources into his “Factor E Farm”. What is the business model through which he makes money, or does he give it all away and ask for contributions?

In a Global Villages Yahoo! Group posting, Marcin explains it as follows:

…how do we get the work funded? The collaborative microfunding is perhaps the right idea. The Core Teams develop technical details. Then we fund prototypes, optimization, and the building of optimal production facilities. Why should low product cost be feasible? Because we have a lean operation with little overhead, and if funded, we have low-cost production capacity that can match even slave goods and mass production. The new economic age is here. We are not talking of many hundreds of thousands of capitalization requirements for similar enterprise. We are talking of open-source-fed production facilities that will cost on the order of $10k to build. There is cascading cost reduction, for example as we use our CEB to build the facility, or the solar turbine to power it.

As such, ‘capitalization costs’ are ‘zero’ – fundraising covers the cost. So far, we’ve operated 100% on voluntary contributions. R&D costs are zero – they are distributed collaboratively. All the costs are zero zero zero, outside of materials and labor. We capture the value of labor – but even if we charge $100/hour for the CEB – with optimized fabrication time predicted to be 20 hours per machine – that is still $3500 for a machine – factor 8 lower than the competition, as you can check for yourself. That $100/hour is very well worth it – if it’s not being dissipated in wasteful production ergonomics and wasteful product design. Moreover, all proceeds are used to fund further open product development.

And that brings us to the diagram below which adds a boundary of “common value” in the foreground and another boundary for “differentiated value” in the background.

Marcin’s model illustrates how to strike the critical balance between giving it away and making money. As he mentions, the R&D costs for the CEB machine are zero because they are distributed collaboratively, and the results are open source and freely accessible to all. This anchors against the “common value” boundary. Setting the price for the machine at $3,700 covers the cost of materials and fabrication; setting it at $5,000 generates a reasonable profit that can be reinvested or used as further compensation. Compared to roughly $25,000 for a competitive model from AECT, this clearly bumps up into the “differentiated value” boundary.
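The arithmetic behind this positioning can be sketched in a few lines. The figures below are the approximate ones quoted in this posting; the split between materials and labor is an assumption for illustration only, and Marcin’s own $3-5K estimate presumably includes overhead not itemized here:

```python
# Illustrative back-of-the-envelope sketch of the open source CEB machine's
# price positioning, using approximate figures quoted in this posting.
# The materials/labor breakdown is an assumption, not Marcin's actual
# accounting; his own $3-5K estimate likely includes additional overhead.

materials = 1000           # estimated parts cost (USD)
labor_hours = 20           # predicted optimized fabrication time per machine
labor_rate = 100           # USD per hour of fabrication labor

build_cost = materials + labor_hours * labor_rate   # roughly $3,000

break_even_price = 3700    # covers materials and fabrication
profit_price = 5000        # generates a reinvestable surplus
competitor_price = 25000   # approximate price of a comparable proprietary model

surplus = profit_price - build_cost
savings_factor = competitor_price / profit_price

print(f"Build cost:      ${build_cost:,}")
print(f"Surplus at $5K:  ${surplus:,}")
print(f"Price advantage: {savings_factor:.0f}x vs. the proprietary model")
```

Even at the profit-generating price, the open source machine undercuts the proprietary alternative several times over, which is the essence of playing both boundaries at once.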

Marcin envisions using the Internet to widely disseminate information about the CEB machine, take orders, expand operations, offer training, initiate “open franchises”, distribute manufacturing capacity, and prompt further “localization”. These represent ways to play in the space between the boundaries, where some activities are done for nothing and others garner compensation. It is precisely that agility to move within the intervening space that constitutes a sound business model. This is a lesson suitable for any business to consider.

Originally posted to New Media Explorer by Steve Bosserman on Saturday, February 9, 2008

Greenhouses That Change the World

Richard (Rick) Nelson is the inventor of SolaRoof, a novel approach to greenhouse design and function that integrates a unique covering, heating / cooling system, and infrastructure / framework. It will revolutionize the greenhouse industry. More than that, once the materials are certified for use in human habitation, it will be disruptive to the housing and building industry as well. So what is SolaRoof, anyway, and why does it carry such potential to change the world? Let’s find out.

Revolutionary Technology:

The greenhouse construction is unlike any other. Rather than a single layer of covering or glazing, there are two. Each layer is a laminate of woven fiber mesh sandwiched between two sheets of transparent plastic material. The laminated layers are sealed against the top and bottom of the roof and wall frames to create air-tight spaces. This combination by itself offers hardly any insulating value. However, fill the space with bubbles—yes, bubbles—and the equation becomes totally different!

The distance between the two layers varies depending on the desired amount of insulating value. Each inch is roughly equivalent to an R-factor of 1. A distance of a little over a yard yields an R-factor of nearly 40. That is almost unheard of in traditional construction techniques. And given the transparency of the two layers of covering, over 80% of the photosynthesis-catalyzing sunlight reaches the inside of the greenhouse.
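The insulation claim is simple arithmetic: at roughly R-1 per inch of bubble-filled cavity, the R-value scales linearly with the gap between the two layers. A minimal sketch of that rule of thumb (the R-1-per-inch figure is the one stated above; it is an approximation, not a certified rating):

```python
# Rough R-value of a bubble-filled cavity, using the rule of thumb above:
# each inch of bubble depth contributes roughly R-1 of insulating value.

def bubble_r_value(gap_inches: float, r_per_inch: float = 1.0) -> float:
    """Approximate R-value for a bubble-filled cavity of the given depth."""
    return gap_inches * r_per_inch

# A cavity a little over a yard deep (say 40 inches) approaches R-40,
# far beyond what typical stud-wall construction achieves.
print(bubble_r_value(40))   # -> 40.0
print(bubble_r_value(36))   # a yard even: -> 36.0
```

The point of the linear relationship is that insulation becomes a design parameter: widen the cavity and the R-value climbs accordingly, at the cost of little more than air and soap solution.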

On the (now defunct) SolaRoof webpage, “Green Buildings for Urban Agriculture and Solar Living,” two illustrations showed how the process works from one extreme season to the next. Quite ingenious!

Here is a picture of a greenhouse unit as its side is being filled with bubbles:

And here is what it looks like when the cavity is completely full:

Unbounded Architectural Form:

While the technology is intriguing, it is only part of the picture when determining the disruptive value of SolaRoof. Another feature is that the shape of the structure is no longer confined to a standard box or cube that characterizes many homes, buildings, or greenhouses. It can be made to fit into an infinite array of shapes, sizes, and configurations. One of Rick’s collaborators, Harvey Rayner, who is the founder of Solar Bubble Build, describes the possibilities of the SolaRoof medium as follows:

Architecture has been a long-held passion for me, but my unwillingness to engage in academic study has kept me from pursuing any real investigation into this field. Now, having started this project initially as a practical solution to expanding my wife’s rare herb growing business, I have become engrossed in the process of designing, building and developing this technology.

Increasingly, I am viewing this work as an inroad towards one day creating pure and functional architectural forms. For me, this new breed of building gets right to the heart of how form can follow function. I believe with this technology as a starting point, unique structures can be derived which reflect the beauty of the inner workings of this truly sustainable building solution.

Several examples of Harvey’s designs are featured on Bluegreen Future Buildings.

Open Source for Everyone:

While Rick has spent over thirty years developing and refining the technologies associated with SolaRoof materials and applications, the bulk of his output is non-proprietary and open source. Anyone is welcome to join the SolaRoof Yahoo! Group (membership restricted), wherein there are member information exchanges, articles about SolaRoof, photos, and diagrams, all free for the taking. Rick sums it up quite well in his introduction to the SolaRoof group:

You are welcome to join this open source collaboration where we are developing and sharing DIY(Do It Yourself) know-how for building transparent solar structures. To enhance our collaborative development of the SolaRoof methods we now are building a knowledge base where everyone can contribute to building the SolaRoofWiKi. SolaRoof structures may include but are not limited to greenhouses, sunspaces, roof gardens, residential spaces… The goal of our discussion is how to use the sun’s energy to grow food, cool and heat spaces efficiently, rather than rely on fossil fuels and power grids. Our technology includes the use of bubbles to shade and insulate glazing systems, together with liquid solar collection and thermal mass storage, although any related discussions are welcome!

SolaRoof Saves World:

This bold pronouncement was found on one of Chris Macrae’s video experiments, a site now defunct. Chris, a knowledge management and branding expert, offered the following endorsement of the power and potential of SolaRoof:

Rick has also been using what I amateurishly call photosynthesis agriculture and architecture innovations for over 20 years. I have known him for about 4 years in my capacity of hosting radical innovation meetings round London. I suggest a triple-wishing game – you mention a region or peoples in the world where sustainability that matters most to you, and Rick answers with what is doable now, what is developing, and what is his biggest collaboration wish for the region.

Perhaps we can make more videos if people decide this is one of the world’s great unknown practices worthy of a bit more open source weaving. (see Chris Macrae and Rick Nelson on YouTube)

Equally, if anyone knows someone else with inventions that empower every community to the other side of sustainability fuels (water, food, energy) crises, please see if they will join in. I would like to publish a small leaflet on the 5 most radically open guides to how peoples everywhere could collaborate around human sustainability if we take up the challenge and agree urgency is so great that we don’t need any marginal solutions; we need radical experiments…

And in a posting on another now-defunct site, Chris describes the ramifications of SolaRoof as the underpinnings for a new business enterprise, Life Synthesis LLP, that aspires to the following:

…to shelter residential communities within SolaRoof systems. This includes replacing conventional resource and energy intensive climate control systems with new and dynamic structures that capture and use solar energy by bringing daylight and plants into buildings. These ecologically designed, low cost SolaRoof structures use energy capture, storage, and cooling methods to incorporate plants, water and water based liquids, creating an integrated ecosystem within the building itself. SolaRoof is both accessible and affordable to those living in poverty, but at the same time desirable to the affluent.

The Ball Is in Your Court

And that becomes an open invitation for any and all to take the opportunity afforded by this concept and apply it so that it makes a positive difference for the people directly involved, the community, the planet. Well worth the time and effort required to take up the challenge and do the right thing!

Originally posted to New Media Explorer by Steve Bosserman on Tuesday, October 9, 2007

Cycles of Communication and Collaboration

Recently, several of us were going over the litany of new terms for communication and collaboration “tools” that are less than 10 years old: blogs, mashups, crowdsourcing, webinars, podcasts, etc., to name a few. It became obvious as we discussed it further that many of us in the conversation were relatively clueless when it came to defining what each is, what it does, how it works, and what its benefits are. As in most instances where there are too many dots and no clear picture in mind to connect them, a framework would be helpful.

The quadrants and circular arrow in the diagram below illustrate a progressive path of organized social interactions. It starts with the blog: the primary virtual means by which an individual, almost any individual who has access to the Internet, announces to the world – here is who I am, what I think, and what I care about. It is a powerful statement of independent thought, self-awareness, and clarity of purpose that any blogger makes simply by posting a message.

When individuals put themselves out there, they are likely to be “discovered” by others who share common principles, interests, and affinities. One of the most likely places to be “found” is in a social network. In these far-ranging open communities individuals extend their connectivity, learn more about themselves and each other, strengthen their affiliations, and become more intentional about doing certain activities together rather than individually.

At this stage, groups engaging in some sort of purposeful collective effort benefit by having collaboration spaces for more efficient and effective teamwork. Here they can work openly among themselves to give projects definition, open their assumptions for testing and scrutiny to those beyond their team boundaries, and adapt the projects to what is learned for more successful and acceptable results.

Of course, such results warrant wider exposure in global networks. News releases are presented to a broader audience for further sponsorship, investment, or utilization, and for feedback. Learning and adaptation are triggered on a broader scale. We quickly learn whether what we are doing is having the intended result, whether others believe in it as we do, and what other steps we can take to increase the viability and sustainability of our offering.

This is where the cycle returns full circle to the blog, where further commentary and endorsement (or not) about the news release is made in the context of what is important to the blogger, and the cycle starts anew. Of course, beginning the next round means starting at a different place and time, having more experience, and making additional discoveries between rounds. The result is multiple cycles that radiate outward in a spiral.

So how do the tools we have available fit into this communication and collaboration cycle? A few are highlighted in the picture below.

There are many nuances and subtleties to each stage of the cycle; and there are certainly MANY more examples of software and system tools that can be included. However, this should give you a feel for how the overall process functions.

Here’s an example for your consideration:

In 2001, Roger Beck, a teacher at Worthington Kilbourne High School in Worthington, Ohio, initiated a program called Building Academic Skills and Experiences (B.A.S.E.). B.A.S.E. integrates twelfth grade English, Government / Economics, and Technology Education. In 2004, this program was linked with Habitat for Humanity in a housing construction project called Home B.A.S.E. In his most recent blog postings, Mr. Beck outlines progress on the current 2006 – 2007 Home B.A.S.E. project – a LEED-certified home – that is now drawing to a close.

One of those groups whose members are very supportive of Roger and his team is a local Worthington social network named, Sustainable Worthington. An announcement to their members stated the following:

WKHS Home B.A.S.E. LEED house pilot project, 258 N. 21st Street, Columbus, OH, Saturday, September 22, 2007, 2:00 p.m.: Roger Beck, the teacher who developed this excellent program, will give us a tour of this amazing house, which offers green solutions to an amazing array of issues facing every homeowner. For a preview, go to where you will see great photos and an archive of the weekly project updates. Come to be educated and inspired!

When Mr. Beck and others initiated their most recent project last year, they sought volunteers in a number of collaborative spaces. This one, from the Columbus Chapter of the Construction Specifications Institute, carried the following headline:

COLUMBUS CSI CHAPTER INVITES YOU TO TAKE PART IN GIVING A HAND UP: Do a good deed, network with fellow CSI members and be part of building a pilot LEED home!

The attraction of volunteers to this project was based on past successes such as the one that ended in 2006 which was described in the news release1 following a public meeting for Green Energy Ohio in May 2006. It includes a link to a blog dedicated to the project.

The tour is today; I am participating. That will result in another blog posting – stay tuned…

Originally posted to New Media Explorer by Steve Bosserman on Saturday, September 22, 2007

  1. News release is no longer available online

What Is an “Integrated Solution”?

A colleague of mine called the other day and wanted to know how I would answer the question, “What is an ‘integrated solution’?” It seemed that he was entertaining this concept with others in senior management and he was struggling to find an answer that didn’t sound like gobbledygook or pure philosophy and offered nothing pragmatic. In typical management fashion, he needed an answer right away. And, it would be particularly helpful if it could be condensed into a 10-second sound bite that anyone could comprehend. Of course, pressing all that knowledge and insight into a 10-second statement is quite a challenge; one that launched us into a 30-minute animated conversation. Here’s the 10-second version:

An integrated solution:

  • Targets what a specific customer’s organization – business, not-for-profit, government agency, etc. – is providing (portfolio), the manner in which it does that (model), and the context in which it operates
  • Appeals to the combination of values considered most important by an individual customer
  • Provides a package of products, services, and technologies that function more effectively as a whole than the sum of the individual elements that comprise it

So, what is so difficult about that?

While this definition of an integrated solution makes intuitive sense, it challenges management’s conventional wisdom. Here are some reasons:

First, organizations do not really know their individual customers: what each is doing, why they are doing it, or the realities that drive them to do what they do the way they do.

The result: generalizations are made about customer buying patterns, marketing and advertising campaigns based on those generalizations are developed and launched, and products and services are developed and delivered that respond to the aggregated assumptions about customer buying behaviors. However, they don’t really get behind the behaviors to understand the specific dynamics about what the individual customer is doing, why, and how. Instead, the organization strives to stay competitive solely on features and price. To improve their bottom lines, they focus on making what they offer more attractive, cranking up sales / delivery volumes, and reducing operating / product costs.

To lump customers together into various segmentation schemes creates a psychological distance between the provider and the customer and keeps the customer in the position of having to define and acquire the integrated solution.

Second, organizations do not know how to measure the value of what they provide with metrics other than money.

The result: assumptions are made about price, return, cost, and profit that are predicated on a narrowly defined value equation. In other words, the value proposition does not include key factors that drive customers to make buying decisions. Certainly no customer intentionally chooses a portfolio of deliverables and an operational model that loses money. At the same time, though, customers are increasingly interested in the impact their operations have on factors such as the environment, safety and security, and quality of life, as indicated by metrics like the following:

  • What percentage of the energy consumed to produce and deliver is green?
  • How far away from the point of use was something produced?
  • Can what is produced be tracked and traced from inception to delivery?
  • How much carbon is emitted versus sequestered in producing it?
  • How much total time does a customer spend to define an integrated solution, bring it together, implement it, and follow up to see that it functions as intended?
  • What is the balance between a customer’s investments in the operation compared to life’s other interests?
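One way to make such a multi-factor value equation concrete is a simple weighted score across monetary and non-monetary metrics. The metric names, weights, and scores below are entirely hypothetical, chosen only to illustrate how a customer’s own priorities might rank an integrated solution against a standalone offering:

```python
# Hypothetical sketch of a multi-factor value score. The metrics, weights,
# and scores are illustrative only; a real customer would supply their own
# evaluative framework. Scores are on a 0-10 scale, higher is better.

def weighted_value(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-metric scores into a single weighted value score."""
    total_weight = sum(weights.values())
    return sum(scores[m] * weights[m] for m in weights) / total_weight

weights = {"price": 3, "green_energy_pct": 2, "distance_to_source": 1,
           "traceability": 2, "owner_time_required": 2}

standalone = {"price": 8, "green_energy_pct": 3, "distance_to_source": 4,
              "traceability": 2, "owner_time_required": 3}
integrated = {"price": 6, "green_energy_pct": 7, "distance_to_source": 6,
              "traceability": 8, "owner_time_required": 8}

print(f"standalone offering: {weighted_value(standalone, weights):.1f}")
print(f"integrated solution: {weighted_value(integrated, weights):.1f}")
```

Under these invented weightings, the integrated solution wins even though the standalone product scores better on price alone, which is exactly the point: a value equation wider than money changes the outcome.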

To focus solely on money misses the point of differentiation that distinguishes an integrated solution: its capacity to answer to multiple drivers within a unique customer’s evaluative framework.

Third, organizations measure their success on providing profitable standalone products and services rather than combinations of products and services that can be easily integrated.

The result: the sale of the products and services becomes the prime objective, and additional features and functions are thrown in as incentives to sweeten the pot, beat what the competition is offering, and win the deal. In other words, option packages default to bundling techniques rather than giving the customer the best deal for the price based on improvements in the customer’s business operation. Furthermore, there is often an even greater difficulty in putting combinations together that cross from one brand to another. To connect solution elements from two or more brands, the customer must purchase ancillary parts, components, and modules in both hardware and software. In many instances the customer must purchase an entire system from one brand in order for it to function within a more comprehensive solution, even though several elements of the system the customer already owns are still functional. This makes it challenging for the customer to leverage investments in assets.

To assume that customers will always be drawn to purchase a product or service based on its reputation, capability, and price alone is a questionable strategy: technology advances and integration increases; integrated solutions become easier and more commonplace; they become the new baseline from which one enters or stays in a market.

But how long will this take? What if an organization is already doing quite well with standalone products and services? How can the incremental add of selling a solution ever equal the advantage that comes from simply selling more products or services?

Hold an iPhone. Think back ten years. How many pieces of electronic gadgetry would you have to carry to equal what the iPhone can do – if such functionality were even possible? Think past all the things it can’t do, or can’t do as well as you would like, and fast forward five years into the future. What will be the degree of integration you can anticipate then? It’s hard to imagine, but one thing you can count on – there will be more integration, and in different ways than you thought!

Increasingly, customers have more choices. That is a good thing, at least up to a point. Unfortunately, the customer is left having to sort through countless combinations and possibilities to come up with the best-suited solution alternatives. As technology continues to get smaller, faster, stronger, more embedded, more intelligent, and more integrated, each customer will expect choices to be measured according to their full value as effective integrated solutions. The successful organization of the future is one that establishes its reputation as a trusted provider of integrated solutions. In effect, it will earn the right to be the integrator for the customer. Does this mean that organizations will have to change the way they relate to customers? By all means! Having this distinction saves an organization from “commodity hell” and positions it for a sustainable future. Integrated solutions: it’s the future that is fast upon us! And that’s the 1-second version!

Originally posted to New Media Explorer by Steve Bosserman on Sunday, July 15, 2007

Thoughts about Value-Add

Value-add dominates our economic scorecard. It is relatively easy to calculate in a manufacturing setting, where value is added through material transformation at each step as a product moves from raw material to finished goods. Customers monetize this value by purchasing products they anticipate will add value to their own processes. Value-add also pertains to certain services, like financial and legal, that require a certified, licensed, or bonded provider who possesses or delivers specialized skills or knowledge. The consequence of not utilizing these services is that the customer assumes the risk.

The concept of value-add also plays a role in information technology and data services. Here, though, the meaning is vague. What value can be assigned to having data, or to having data in a usable format? This instance of value is intangible and determined by the receiver of the message. Nowhere is the intangible nature of value-add more evident than in marketing strategies and advertising campaigns. Information that induces a user to pay for a product or service has value only to the producer. Value-add for the customer or client occurs at the next step: the point of utilization.

So much for the traditional view of value-add. Here is where the current issues of globalization – localization come into play. A robust business strategy can entertain and exercise both sides!

On the globalization side, value-added products are produced and services provided far from their points of utilization and consumption. Success is driven by appropriate economies of scale. This situation will continue for many years to come as customers and clients exploit lower cost alternatives. On the localization side, value-added products and services are produced in close proximity to their points of utilization. Success here is driven by economies of scope. This situation enables products and services to be integrated into specific applications or solutions that are tailored for highly localized contexts.

How, though, does one put a value-add strategy in place? Re-enter data and information. Essentially, a successful strategy is a contextually relevant plan of action, conceived through the knowledgeable (and hopefully, wise!) interpretation of data and information, and executed by a skillful tactician. No matter how similar or dissimilar the challenges, relevant data and information are the common denominator.

Assuming everyone has access to the same data and information, there is no value-add in merely possessing them. Yet universal access to the same data and information yields a significant benefit: the rates of discovering new knowledge, applying knowledge already learned, and transferring experience with applied knowledge from one place to another all increase. In other words, accessibility to data and information makes the global human system of knowledge generation and utilization more effective, efficient, and expansive.

Since data and information carry no particular value except to those who lack access to them, they form a unique type of “commons”. Anyone may contribute to the pool of data and information, all benefit, and the quality of what is available is not diminished or compromised by the number of users. In fact, the quality and variety increase with more participants, as evidenced by Wikipedia.

This “commons” approach is a cornerstone in “open source” philosophy wherein volunteers contribute entries and edits and the content is free to use. Originating in the realm of software development and usage, open source applies to any instance where people collaborate in the development, sustainability, and scalability of a system whereby end-users freely pull what they need from the system and respond to their unique circumstances. Participants increase the working knowledge about the system as they act locally and provide feedback to designers / developers so they improve the system’s robustness, range, and ease of use.

A comprehensive business strategy judiciously positions an “open source” / “free knowledge” dimension on the globalization and localization continuum. What to share in open forums, what to hold as proprietary and reserve for limited audiences, how much to contribute in the development and sustenance of open source endeavors, how much to invest in products, services, and technologies for satisfactory returns, where to standardize products, services, and technologies for economies of scale, where to proliferate economies of scope solutions within localized contexts – these are the kinds of questions about openness, standardization, and uniqueness that drive effective business strategies for all organization types.

Technology gets smaller, faster, stronger, more embedded, more integrated, and more intelligent. Localization increases. The value-add equation is redefined, and a greater significance is placed on unique solutions. Addressing the above questions helps organizations adapt within their ever-changing operational landscapes. The implication is that organizations must network and collaborate more broadly to energize, inspire, and focus their subject matter experts where it counts most – learning what customers and clients need in their business and social contexts and responding with value-added alternatives. Isn’t that “business as usual”?

Originally posted to New Media Explorer by Steve Bosserman on Thursday, July 12, 2007

A Broader Framework in Which Localization Occurs

One of the drivers behind technology development is the quest for human equivalence – the point where technology performs at a level of functioning that is equal to or greater than that of the human brain. While it is speculative at best to estimate if and when such a goal will be achieved, recent history illustrates that the increase in capability and capacity of technology is ramping up a rather steep slope. And if we are to trust the application of Moore’s law, technology’s prowess is doubling every 18–24 months. At that rate, it doesn’t take much to project a future wherein technology is closing in on human equivalence.
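The arithmetic behind that projection is simple compounding. Here is a minimal sketch, using only the 18- and 24-month doubling periods mentioned above; the 10-year horizon is an illustrative choice of mine, not a figure from the post:

```python
# Project Moore's-law-style growth: capability doubles every
# `doubling_months` months, i.e. it compounds geometrically.
def capability_multiple(years: float, doubling_months: float) -> float:
    """Return the factor by which capability has grown after `years`."""
    return 2 ** (years * 12 / doubling_months)

# Over a 10-year horizon (an illustrative choice):
for months in (18, 24):
    print(f"doubling every {months} months -> "
          f"{capability_multiple(10, months):.0f}x in 10 years")
```

Even at the slower 24-month pace, capability multiplies 32-fold in a decade – which is why the curve "ramps up a rather steep slope."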

As a trend develops, it is useful to be able to track its progress and anticipate its trajectory. Choosing or crafting a set of markers that indicate a trend’s speed, depth, and scope as it gains influence and becomes an impetus for change is critical. While there are many markers from which to choose, the most durable and universally applicable set concerns value-add – particularly, where and how value is added.

The simple Wikipedia example about making miso soup from the above link is a good one to illustrate how advances in technology change the value-added equation. First, the value of the soup as the end product comprises the value added by the farmer to grow the raw product, soy beans, plus the value added by the processor to the soy beans to produce tofu, plus the value added by the chef to the tofu to prepare the soup. This “value package” utilizes a combination of equipment, input, labor, and know-how applied in various locations, stages, and timeframes—and is based on a specific capability and capacity level of technology.
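To make that stage-by-stage accumulation concrete, here is a minimal sketch of the value package; the dollar figures are hypothetical, chosen purely for illustration:

```python
# Each stage adds value on top of the output of the previous stage.
# All dollar figures are hypothetical, purely for illustration.
stages = [
    ("farmer: grow soy beans",  0.50),  # value added, in dollars
    ("processor: make tofu",    1.25),
    ("chef: prepare miso soup", 3.00),
]

cumulative = 0.0
for actor, value_added in stages:
    cumulative += value_added
    print(f"{actor:26s} +${value_added:.2f} -> ${cumulative:.2f}")

# The value of the end product is the sum of every value-added step.
total_value = sum(value for _, value in stages)
```

Any advance that shrinks one of these stage costs, or collapses two stages into one, changes the whole package – which is the point the next paragraph takes up.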

What happens when technology develops further? There are several possibilities: the soy beans are grown in close proximity to the preparer; the yield of soy bean plants and desired quality and characteristics of the beans are increased; the equipment that harvests soy beans conducts post-harvest operations that condition the beans for making tofu; this equipment is smaller and more compact which accommodates localized production; methods of packaging, storing, and shipping soy beans or tofu are more integrated thereby consuming less energy and taking less time. In these instances, advances in technology are applied to the value-added equation dramatically altering the value package. The result is a system utilizing less costly and more productive equipment, requiring fewer inputs and less labor, and deeply embedding human knowledge and experience into new processes and tools. This has the potential to be transformational—and in relatively short order, too!

While the example of soy may represent a somewhat narrow space within which profound change can be noted, it does highlight where and how value-added steps are enabled by technology. These changes can be witnessed in a broader sense through the lens of large social and economic “eras.” The first of these, industrialization, brought developments in technology to bear on centralizing facilities, equipment, and people in the production process where capital investments could be amortized through economies of scale.

As production technologies become scalable, logistics are more integrated and efficient, and information and communication technologies are more pervasive, powerful, and responsive, manufacturing operations are dispersed close to those areas where lower cost skilled or tractable labor is available. This is the impetus for “globalization.” Attendant to the distribution of manufacturing capability is the transfer of technology and subject matter expertise. This significantly increases the technological competence of the lower cost workforce. In this regard, globalization heightens the ability of people to utilize new technologies when presented and results in a more evenly distributed capability worldwide.

This puts us on the brink of the next era: localization. The embedded link goes to one of my earlier postings about this phenomenon, so I will not wax on about it again here. However, one quick observation: localization is the inevitable outcome of technology continuing to cost less, get stronger, fit into smaller spaces, run faster, be embedded in more operations, streamline processes, and sense, respond, adapt, learn, and sustain itself despite problems and challenges. To put such an imperative into perspective, the more we transfer technology from one place to another under the auspices of globalization, the more potential we are placing in the hands of the recipients to utilize those technologies in developing localized applications. Constant application of technology that packs more punch at lower cost is what SUSTAINS the drive toward localization. Without technology, localization would merely be an updated term for the back-to-the-land movement of some 40 years ago. While localization may imply a different lifestyle choice, it actually honors well-deserved quality-of-life factors while continuing to take advantage of what an improved standard of living provides.

What happens beyond localization as technology continues its trek to become smaller, faster, stronger, etc.? Imagine assembling the end product from molecules – at the point of utilization – precisely at the time it is needed? Yes. Get small enough and one is into the basic building blocks of material: molecules. This is the realm of nanotechnology, specifically, molecular manufacturing.

While such a concept has the earmarks of science fiction or the paranormal – and, indeed, there are many who contend it is one or both – technology will continue to shrink the distance from production to utilization until the two are as nearly the same as possible, and the material manifestation will approach the immediacy and convenience of what is conceived virtually. The development timeline for molecular manufacturing suggests a useful output rests some distance in the future and that it will come at considerable expense.

This time is needed. Eric Drexler – one of the leading thinkers in the field of nanotechnology, co-founder of the Foresight Institute, and currently Chief Technical Advisor for Nanorex, Inc. – is a clear advocate for “responsible nanotechnology.” Citing the hypothetical possibility of the world turning into “gray goo” should molecular nanotechnology run amok, Drexler advises the imposition of a stringent ethical framework on these technologies before they are endowed with the capability of self-replication. Not bad counsel, regardless of whether one buys into Drexler’s future vision for nanotechnology.

And maybe that’s the reason we need to spend time in localization before leaping ahead to what’s next. It is through the strength of the community experience that we learn to act upon our values as a society rather than default to the survival of the fittest individuals. This is the intent behind the Nanoethics Group. As an extract from their mission states, “By proactively opening a dialogue about the possible misuses and unintended consequences of nanotechnology, the industry can avoid the mistakes that others have made repeatedly in business, most recently in the biotech sector – ignoring the issues, reacting too late and losing the critical battle of public opinion.”

Yes. One can only imagine what happens if the machine – nanotechnology, in this instance – has the unfettered capacity to choose who survives with no more ethical framework in place to guide it than the ones we humans use today…maybe we are not quite ready for human equivalence!

Originally posted to New Media Explorer by Steve Bosserman on Tuesday, July 10, 2007

A Voice for Localization

In response to my earlier posting about Localization, Bob Banner, publisher / editor of HopeDance sent me an email noting that Julian Darley was the founder and director of the Post Carbon Institute. While James Howard Kunstler was an Institute Fellow, he has his own website that covers a wide range of related topics. Please note that my 6 July posting is now updated to reflect this correction.

Bob also mentioned in his email that Issue 62 of Hope Dance Magazine 1 is “…a special issue we did on RELOCALIZATION that features BALLE, Judy Wicks, the PCI, Michael Shuman’s Small-Marts, Local Living Economies, Bill McKibben, many book and film reviews, a LOCALIZATION FILM FESTIVAL and more.. all in a tabloid of 56 pages.” If you are interested in Localization, you will find this issue chock-full of useful information that can be quickly applied in a wide range of localities. Take a look!

In addition, he printed an extra 2,000 copies that are available in lots of 50 for $25, which includes shipping. If you are interested in hard copies for local distribution, please contact Bob and Hope Dance at this embedded link.

Originally posted to New Media Explorer by Steve Bosserman on Sunday, July 8, 2007

  1. No longer available online

The Case for Localization

Over the past five months I have dedicated considerable attention to “localization.” According to Wikipedia, “Localization may describe production of goods nearer to end users to reduce environmental and other external costs of globalization.”

The Relocalization Network, which is affiliated with Julian Darley’s Post Carbon Institute, defines “relocalization” as “a strategy to build societies based on the local production of food, energy and goods, and the local development of currency, governance and culture. The main goals of Relocalization are to increase community energy security, to strengthen local economies, and to dramatically improve environmental conditions and social equity.”

Another way to consider localization is to see it as the shrinkage of the distance between the point of production and the point of utilization or consumption. It is the conversion of bits and bytes into material form as close as possible to where that form will be used. In contrast, globalization is the virtualization of experience, knowledge, and innovation so that the intellectual property created can travel from anywhere to anywhere quickly, easily, and at minimal cost.

Of course, one can look at the world today and effortlessly conclude that neither of these is current reality. We take advantage of low-cost skilled labor to manufacture in different countries only to move parts, components, modules, and whole goods vast distances to reach the place of final assembly or sale. Quite obviously, we produce far from the point of use in many instances.

So what will change this? It is a matter of impetus. Concern about continuation of the fossil fuel economy is one such prompt. There are those who support the contention that the fossil fuel economy is unsustainable due to depletion of reserves, especially oil. Still others claim that regardless of how much fossil fuel there is our consumption of it adversely impacts the environment resulting in severe consequences over time. And there are those who maintain that the political ramifications of buying fossil fuel from countries whose governments adhere to a different moral framework are not advantageous. These are certainly powerful factors directed toward changing current business models and social dynamics.

However, another deciding factor is technology and how people appropriate it. As technology continues to get smaller, faster, stronger, more embedded, and more intelligent, it facilitates localization. At one time, high capital investment costs and the resultant economies of scale prompted centralization and regionalization to amortize these investments. Now, rapid technological advances are placing increased integration, capability, and capacity in more compact and powerful packages. This scales the costs and complexities downward at a propitious rate. And it leads to very innovative applications that change how we live, what we do, and how we do it: iPhone, Cheetah Prostheses, and CEREC by Sirona.

Societies also change when their members are introduced to these technologies and given latitude to experiment with them in their local context. Working with various technologies and their local applications increases the number of people who are “domain experts” in the solutions that arise. Such domain expertise accelerates the rate of application successes within local markets, which sets up the possibility of exporting those solutions to markets elsewhere through a phenomenon called “innovation blowback”. But do those global enterprises want that? Sounds highly disruptive!

Here is where the interplay between globalization and localization can be discerned. The initial aim of globalization is to move technology to areas of the world that have a decided labor cost advantage and ship the goods produced or services provided back to the originating point of the technology. This offers a payback for the investment in the technology and provides an advantage until competition finds a location of similar or lower labor cost. As the competence, capability, and capacity increase in those countries that have been enrolled in globalization, their domain expertise about the products and services they are producing or providing increases stage by stage: parts, components, modules, systems, and whole goods. Ultimately, the originator seeks to establish markets and ramp up SALES of those products and services in low-cost countries. Their driving interest is to see that domain expertise continues to forge markets for native flagship products and services into non-native, new markets close to low-cost production and support operations. This means changing the practices and behaviors of people in low-cost markets. But do those governments want that? Sounds highly disruptive!

India and China represent two examples. Both have significant populations engaged in subsistence agriculture. Many of the world’s largest agricultural equipment manufacturers have production operations in those countries or are in the midst of setting them up. Of course, one of the main reasons they went there at the outset was to tap sources of low-cost, skilled labor. Now, the next step in leveraging their investments is to establish local markets for the same or similarly scaled products as those that are successful in North American and European markets. But introducing to India or China the kind of mechanized agricultural practices practical in two continents whose combined population equals that of India OR China alone is going to be exceedingly disruptive. The net result of mechanization on that scale is the elimination, through technology, of the work people are currently doing. Such unemployment forces people to move out of rural areas and into the urban centers for employment. Can the infrastructures of those urban centers and the surrounding environment sustain such an increase in population?

What kind of solutions would an India or China develop? Perhaps ones which are smaller, faster, stronger, more embedded, and more intelligent? In other words, the development IMPERATIVE of technology plays evenly across all countries, societies, and markets. HOW that imperative is exercised, though, is specific to the conditions within the local markets.

What if the domain expertise about those technologies feeding products and services going to North American and European markets was diverted to solutions for local markets? Would the solutions for an India and a China be different than what works so well in North America or Europe? Would those solutions not only have value locally, but also well-serve markets further from home? And of equal or even greater interest, what happens in the long run if countries like India or China perceive they are not being supported by corporate interests based in North America or Europe to develop locally-appropriate solutions which would clearly be in the best interests of those countries?

And these questions constitute a prompt to consider localization further in future posts…

Originally posted to New Media Explorer by Steve Bosserman on Friday, July 6, 2007 and updated on Sunday, July 8, 2007

Boids, Integrated Structures, and Renewable Energy

About 20 years ago, Craig Reynolds developed an artificial life program entitled Boids that simulates the flocking patterns of birds. One of the compelling features of Boids is that despite the random starting points and infinite range of action enjoyed by each boid, adherence to three simple rules quickly establishes and maintains a consistent behavior pattern among the boids.
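Reynolds’ three rules are conventionally called separation, alignment, and cohesion. A minimal sketch of how they might be implemented follows; the weights and neighborhood radius are illustrative choices of mine, not values from the original program:

```python
import random

def step(boids, radius=50.0, w_sep=0.05, w_ali=0.05, w_coh=0.01):
    """Advance every boid one tick; each boid is a list [x, y, vx, vy]."""
    for b in boids:
        neighbors = [o for o in boids if o is not b and
                     (o[0] - b[0]) ** 2 + (o[1] - b[1]) ** 2 < radius ** 2]
        if not neighbors:
            continue
        n = len(neighbors)
        cx = sum(o[0] for o in neighbors) / n   # neighbors' center of mass
        cy = sum(o[1] for o in neighbors) / n
        avx = sum(o[2] for o in neighbors) / n  # neighbors' average velocity
        avy = sum(o[3] for o in neighbors) / n
        b[2] += w_coh * (cx - b[0])             # cohesion: steer toward center
        b[3] += w_coh * (cy - b[1])
        b[2] += w_ali * (avx - b[2])            # alignment: match average heading
        b[3] += w_ali * (avy - b[3])
        for o in neighbors:                     # separation: avoid crowding
            if (o[0] - b[0]) ** 2 + (o[1] - b[1]) ** 2 < (radius / 5) ** 2:
                b[2] += w_sep * (b[0] - o[0])
                b[3] += w_sep * (b[1] - o[1])
    for b in boids:                             # then move every boid
        b[0] += b[2]
        b[1] += b[3]

# Random starting points, as in the original program
random.seed(1)
flock = [[random.uniform(0, 100), random.uniform(0, 100),
          random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(30)]
for _ in range(200):
    step(flock)
```

Despite the random initial positions and headings, the flock settles into coherent collective motion after a few hundred ticks – behavior emerging from purely local rules, with no central controller.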

Boids exemplifies a principle in complex adaptive systems termed “emergence.” Emergence is a key concept in organization design. It has particular relevance when the issues of control, dependence, and autonomy in centralized and decentralized structures are recast into integrated structures such as networks, communities, and teams.

My previous posting, “Lessons from the Grid,” focuses on distribution of responsibility and authority to generate electricity, from whatever type of renewable energy source, to individual homeowners and business owners. Net metering connections to the grid enable owners to sell excess electricity generated to the utility company and draw from the grid as necessary during times of insufficient local generation. This is a win-win solution: an expanding network of home and business owners, representing multiple families, neighborhoods, and communities, is actively involved; participants meet their individual and local needs first, then sell their surplus to meet regional and global demand; and the localization of electric power generation through “green energy” is more efficient and consumes less “brown energy.”

Distribution of electricity generation among the masses and the resulting win-win solution for the majority is an example of emergent behavior and the formation of integrated structures. Like “Boids,” this phenomenon is driven by three simple “rules” that define the social system in which emergence and integrated structures occur:

1) Universal participation.

The point of rapid development and deployment of information and communication technology (ICT) capabilities is to get everyone connected at a basic level. One need look no further than the geometric increase in the number of cell phones, Internet service providers, email addresses, blogs, videos, web-based services, etc. to see that the world is getting “wired!”

Every uptick in participation only increases the number of advocates, providers, customers, buyers, and sellers available. Each has different experiences, perspectives, and ways of describing and meeting needs and wants. It is beyond the capability of highly centralized organizations to respond to the needs of so many independent agents. And it is beyond the capability of any one “decentralized” individual to be both autonomous AND disconnected and still expect to have needs and wants met while enjoying a respectable quality of life.

2) Meet individual and local needs, first; then, sell any surplus.

On its way to “human equivalence,” technology gets faster, smaller, stronger, more embedded, more integrated, and more intelligent with each turn in development. This has the effect of putting into the hands of the individual capabilities and capacities that were heretofore only available to the wealthiest or to those with the largest assets to underwrite substantial ventures. The entirety of the Industrial Age was characterized by “managing” monopolistic interests dictating what was in vogue, what was available, and what was affordable. Now, with the Information (Digital) Age evolving into the Knowledge Economy and the “Relationship Age,” it is increasingly possible to dismantle the hulking centralized structures in the public and private sectors and distribute their power and authority to individuals and groups working in concert with one another at the grassroots. People at the local level can pull from vast global networks of “virtualized” information, knowledge, and resources and “materialize” them in local applications.

The result is that people now have the means to meet their needs for fundamentals like food, energy, clothing, shelter, and safety without having to depend on others. It also creates the opportunity for them to produce MORE than they need so that the excess can be sold in further markets. This challenges the authority of comparative advantage when it comes to life-sustaining basics. Each day, advances in technology give more people the opportunity to produce enough food and renewable energy to feed, clothe, and house themselves – to meet their basic needs. And when people have their basic needs met, challenges to their security and safety are reduced; they can speculate, take risks, learn, and contribute their learning more broadly into global networks.

3) Consume what is produced locally, convert / process excess to standardized / higher value form, and ship to nearest point of use.

Unchecked globalization encourages people at local levels to compromise their buying power by sending raw or first-stage processed materials to worldwide destinations for further value-add processing. Because materials at this stage have their lowest value, the compensation for them is least. However, when finished products return from where further value is added, their prices are out of reach. The net effect is that the local economy is depleted of its resources and the people are unable to care for themselves. Of course, some corporations invest in facilities located closer to the raw or rough-finished materials to take advantage of lower-cost labor in subsequent value-add processes and stages. Even so, the finished goods are priced beyond the reach of employees, whose compensation is insufficient to afford necessities. Once again, they are unable to care for themselves at a local level. Worse yet, the cost in fossil fuels used to transport raw materials, work-in-process, and finished inventory from one part of the world to another only exacerbates the problems besetting local economies mentioned previously.

The “localization-to-globalization” model operates in reverse. It encourages people to consume what they produce rather than sending it elsewhere only to have to buy it back later. It also fosters the conversion of excess into a standardized form of higher value in order to reach a broader, more easily accessed market. Using renewable energy sources like solar, wind, biogas, etc., to generate electricity is more efficient than employing each source directly because the energy is converted from a difficult-to-use form into a standardized one. As an example, everyone can use electricity pulled from the power grid. Not everyone can use DC current from a photovoltaic array or a tank of biogas, although each can be used to generate electrical power for the grid.

These three “rules” drive the formation of many different integrated structures as localization takes root and globalization builds from it. How well these rules are followed in the development of business cases and plans is an indicator of the viability of the business under consideration.

For example, earlier this month, Biopact ran the headline, “Green giant Russia to produce 1 billion tons of biomass for exports.” That’s a lot of raw material! Now, will Russia process it into fuel or ship it elsewhere for processing? The article is unclear about which direction this will go. However, it would seem that the environmental advantage of growing biomass material for fuel would be offset by the amount of fuel required to transport the raw material to a remote point for processing. In addition to the logistics issues, a business plan built on the comparative advantage Russia apparently has to grow biomass but not to process it into usable fuel is risky. Expecting another region or country to invest in the processing facilities yet not have control over the flow of raw material from a considerable distance away is…well…dicey.

In contrast, Iowa grows more corn 1 than any other state. It could ship corn to other states for processing into ethanol. However, the approach is to localize ethanol production from corn 2 and keep the value in the hands of the producer while reducing transportation costs. Maybe there’s another lesson in here from the Iowan farmers?!

More business possibilities will be analyzed according to these three rules in subsequent postings…stay tuned!

Originally posted to New Media Explorer by Steve Bosserman on Monday, February 12, 2007

  1. Original link no longer available.
  2. Original link no longer available

Lessons from the Grid

The electrical power grid is a study in organizational behavior. Take how electricity is generated and distributed to the point of consumption. Huge power plants or arrays – fueled by “green energy” sources such as solar, wind, biomass, geothermal, and hydroelectric, or “brown energy” sources such as fossil fuels and nuclear – concentrate electrical power generation to take advantage of “economies of scale.” The resulting current is transmitted through an extensive redundancy of power lines, cables, substations, circuit breakers, switches and transformers – oftentimes referred to as the “power grid” – to individual consumers across wide areas.

Organizationally, this is a centralized model. Power is concentrated in a select number of locations and authority is distributed to other points as needed and according to priorities driven by limited supply during periods of peak demand. The overall system, no matter how inefficient or costly, strives to be convenient, available when needed, standardized in delivery, and transparent during use. The goal is to please the most and dissatisfy the least so that fundamental assumptions about the design of the system are unquestioned, significant investments in infrastructure modernization or extensive system redesign are delayed, and increases in operational costs, along with services, are passed fluidly to the consumer. In other words, the existing power structure prevails and remains unchallenged and the consumer is dependent on that structure to get what is needed and wanted.

For every movement, there is a counter-movement. There are those who regard being “on the grid” as a lifestyle that epitomizes wanton consumerism, promoting waste, excess, banality, and destruction of the environment. Their alternative is to live “off-the-grid” disconnected from public services including electrical power. Initiated during the 1960’s and ’70’s, the “back to the land” movement is often synonymous with off-the-grid solutions such as energy from solar, wind, and biomass sources.

The off-the-grid approach represents an alternative organization structure – a decentralized model. In this instance, power is held by a wide range of relatively small, independent individuals / families who are in total control of an electrical power system that meets their consumption requirements. As with many decentralized structures, one’s destiny is in one’s hands. However, the limits of these structures become apparent when consumption patterns change and more power is required or disaster strikes and there is no opportunity for a quick recovery.

As the title of this blog article, “Solar Power FAQs: Will The Electricity Meter Run Backwards?”, posted on the Alba Energy website, suggests, some homes with photovoltaic (PV) panels generate sufficient electricity during the day to meet and exceed the immediate consumption needs of the home. In a twist on back-to-the-land homes sporting off-the-grid solar-powered systems, the scenarios presented in this article illustrate how consumers can load surplus electrical power generated by solar panels onto the grid and receive financial credit for doing so through various net metering plans.
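The billing arithmetic behind these net metering scenarios can be sketched in a few lines; the retail rate and kWh figures below are hypothetical, and real plans differ in how they price surplus:

```python
# Simple net metering: generation offsets consumption at the retail rate.
# A negative bill is a credit owed to the homeowner -- the meter has, in
# effect, run backwards. The rate and kWh figures are hypothetical.
def monthly_bill(consumed_kwh: float, generated_kwh: float,
                 rate_per_kwh: float = 0.12) -> float:
    net_kwh = consumed_kwh - generated_kwh
    return net_kwh * rate_per_kwh

print(monthly_bill(900, 600))    # panels offset part of the bill: 36.0
print(monthly_bill(900, 1100))   # surplus generation: -24.0 (a credit)
```

In practice, some utilities credit surplus at wholesale rather than retail rates, which is one reason net metering policy matters so much to small-scale generators.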

While a national mandate for electric power companies to offer fair net metering practices is not in place – although Jay Draiman of Northridge, CA, author of “Mandatory Renewable Energy – The Energy Evolution – R12,” touts this as a necessary step in overcoming our dependence on fossil fuels – momentum is building in several states as commitment to renewable energy is strengthened. One of these is New York, where the New York State Energy Research and Development Authority (NYSERDA) promotes net metering / remote net metering and interconnections [1] through a range of incentive programs directed toward offsetting the installation costs of small-scale solar systems and encouraging connection to the public power grid in order to facilitate net metering.

From an organizational standpoint, this represents a very different structure – the integrated model. Although neither centralized nor decentralized, integrated structures blend a centralized surplus distribution and backup system with a decentralized network of small-scale operations. Such interdependence distributes responsibility and authority to individual members in the social system so they can engage in self-sustaining behavior patterns while linked to a broader network of resources and markets. Individuals are in control of investments, operating expenses, and utilization of resources. They can take care of themselves first, sell the surplus, or if circumstances warrant, buy what they need or want when they are unable to provide enough by themselves.

The combination of electrical power grid, PV panels, and net metering represents one way developments in technology influence organization structure and design. As systems technologies become more powerful, pervasive, and transparent, sub-systems will become more embedded, integrated, and interdependent. The same concept applies to computers, the Internet, and payment for posting articles on a website or blog. As information and communication technologies continue to evolve, they will empower individuals to THINK independently, work openly and in parallel, and collaborate when opportunities arise for bargains and balances to be struck among the various comparative advantages, surpluses, and deficits in the larger system.

Herein lies one of the unintended but inevitable consequences of pursuing “green energy” sources for power generation in lieu of “brown energy” sources: the fundamental organization structure and assumptions for organization design shift. Control is no longer held by a central body, be it a corporation, government, or special interest group; nor is it fractured and splintered to such a degree that collective effort is no longer possible. Instead, it is held in balance at the point where production, distribution, and consumption work in unison with one another for the advantage of the system rather than favoring the interests of a few at the expense of the many. Conventional wisdom may differ, but the world will be a better place for it!

Originally posted to New Media Explorer by Steve Bosserman on Wednesday, February 7, 2007

  [1] The original New York Energy $martSM Program is no longer available.