Friday, December 30, 2011

Loving What I Do For Living



A few months back, I was helping a very large customer of ours simplify and automate their process of trading financial instruments. During one of my many visits to their office, I met a person who was trying to explain his job to me: supporting the people involved in this super complex process. I always ask a lot of questions — until they're totally annoyed and ready to kick me out of the room — to get a complete understanding of the business rationale behind whatever they're striving for and their personal motivation behind it. Something unusual happened at this meeting. Instead of getting into the gory technical details of how they get things done, he chose to tell me a short and simple story.

"You know, um.. there's this early morning meeting everyday that Peter goes to with a bunch of other people. They all gather around a large table in a dimly lit conference room with a bunch of printed spreadsheets, a laptop, and a large calculator. Peter has a cup of coffee in one hand and a cigarette in the other hand talking to people who have coffee cups in their one hand and cigarettes in the other hand. This is their lives. I am concerned about Peter and I want him to stop smoking. Can you please help me?"

Now, this is the job that I love, the job that makes me get out of bed and run for it. This is the human side of enterprise software. It's not boring.

Photo Courtesy: Jane Rahman

Wednesday, December 14, 2011

Design thinking: A New Approach To Fight Complexity And Failure


Photo credit: String Theory by Michael Krigsman

The endless succession of failed projects forces one to question why success is elusive, with an extraordinary number of projects tangling themselves in knots. These projects are like a child’s string game run amok: a large, tangled mess that becomes more convoluted and complex by the minute.

IT projects fail all the time. Business blames IT, IT blames the system integrator (SI), who then blames the software vendor. After all this blaming and shaming, everyone goes back to work on another project without examining the project management methods and processes that caused the failure. And, so, they fail again.

There’s no one definition of design thinking. It’s a mindset and set of values that applies both analytical and creative thinking towards solving a specific problem. Design thinking is about how you think and not what you know; it is about the journey and not the destination.

Having followed Michael Krigsman's analysis of IT project failures, I have become convinced that design thinking can play an important role in improving enterprise software development and implementation. The design thinking approach offers a means to address the underlying causes of many project failures — poor communication, rigid thinking, a propensity toward tunnel vision, and information silos.

I have distilled important lessons from design thinking into six principles that can help stop project failures. Along the way, we will draw comparisons with Agile development, since that distinction is often a source of confusion when discussing design thinking.

These six principles, based on design thinking, can help any project team operate more successfully.

1. Put a multi-disciplinary team in charge

You can’t pin project failure on one person or one cause, and yet we continue to use person-centric methods to manage projects. No one on a project team wants to fail. If you place responsibility for failure or success collectively on the shoulders of the team, and train and motivate them to think and behave differently, you will prevent much failure.

Multidisciplinary teams champion the user, business, and technology aspects of a project in a more comprehensive manner than would otherwise be possible. Typically, an IT team talks to business stakeholders who then talk to end users, which creates communication gaps, delays, and inefficiency. Far better to create a single team that includes participants from all areas, creating a single unit that includes multiple perspectives.

Try to staff your project team with “T-shaped” people, who possess a broad understanding of, and empathy for, all the IT functions, but who also have deep expertise in one domain to champion that perspective. This approach helps ensure that your solution is economically viable, technologically feasible, and delightful to end users. A more balanced team also humanizes the project and its approach. Stay small and resist the temptation to set up very large teams. If you believe the “two-pizza team” rule, small teams are more team-driven and tend to be more successful. Start-ups can build things quicker because they are always short on people. As your group gets bigger and bigger, other people tell you what to do, and team members feel less connected to their work as it relates to the outcome.

2. Prepare for failure in the beginning

I recommend kicking off the project with a “pre-mortem workshop.” Visualize all the things that could go wrong by imagining that the project has failed. This gives the team an opportunity to proactively look at risks and prepare to prevent and mitigate them. I have sat through numerous post-mortem workshops and concluded that the root causes of failures are usually the same: abstract concepts such as lack of communication, unrealistic scope, insufficient training, and so on. If that’s true, why do we repeat the same mistakes, causing failure to remain a common situation? Primarily because many people find it hard to imagine and react to abstractions, but can relate much better when these concepts are contextualized into their own situation.

3. Be both vision- and task-driven

Design thinking emphasizes storytelling, shared vision, and empathy towards all stakeholders involved in a project. On many projects, participants focus exclusively on their own individual tasks, thus becoming disconnected from the big picture.

While design thinking strives to connect participants to the larger vision, Agile development can be very task-driven. Everyone gets a task without necessarily understanding the big picture, or vision, or even seeing the connection between his or her tasks and the final outcome. In this situation, a project can fail and people may not understand their role, thinking they failed due to someone else’s work. If participants don’t realize their tasks contributed to a failure, they won’t try to learn and change.

On the other hand, vision-driven approaches are very powerful. People perform their tasks, but the story and vision persist throughout the project; the same story gets told by different people throughout the lifecycle of the project to avoid that big picture fading away. All the tasks have a bigger purpose beyond their successful execution. Even good project managers miss this point. At review meetings, it is important to evaluate what the team did right but also revisit the vision and examine how recent outcomes fit the overall story.

4. Fail, correct, then fail again

Design thinking contradicts methodologies that focus only on success. In design thinking, failing is not necessarily a bad thing; rather, we fail early and fail often, and then correct the course. In many projects, people chase success without knowing what it looks like or ever expecting to fail; therefore, they do not learn from the process.

One of the challenges with traditional project management is the need to pick one alternative and run with it. It turns out that you don't know everything about that alternative, and when it fails, the irreversible decision you made means you can't go back. Far better to iterate on a number of alternatives as fast as you can before deciding which one will work. This approach requires a different way of thinking about and planning your project.

5. Make tangible prototypes

Agile proposed lightweight, unstructured documentation in place of structured requirements documents. Unfortunately, that alone does not solve many problems. One of the core characteristics of design thinking is to prototype everything: make a tangible artifact and learn from it. The explorative process of making prototypes makes people think deeply and ask the right kind of questions. It's said that a computer will never give a wrong answer, but it will respond to a wrong question. Prototypes encourage people to focus on “what I want to know” as opposed to “what I want to say.” This is very important during the initial design phase of a project.

One of the biggest misconceptions about prototypes is that they are too complex to make and are overhead, or a waste of time. This isn't true at all. A prototype can be as simple as a hand-drawn sketch on paper or as complex as a fully functional interactive interface. The fidelity of a prototype depends on what kind of questions you want answered. People tend to fill in the gaps when they see something raw or incomplete, whereas high-fidelity prototypes can be too complete to solicit meaningful feedback. As I already mentioned, most people respond better to an artifact than to an abstract document. Prototypes also make the conversation product-centric rather than person-centric, and they help get team members on the same page with a shared vision.

6. Embrace ambiguity

One of the problems with traditional project management methodologies is that they make people spend more time executing the solution and less time defining the problem. Design thinking encourages people to stay in the problem space as long as they can. This invariably results in ambiguity, which is actually a good thing.

Ambiguity fosters abductive thinking — a mindset that allows people to explore what is probable with the limited information at hand, without worrying about proving or concluding that it actually works. It helps people define a problem in many different ways, eventually letting them get to the right problem to focus on.

This also supports the emergent approach that design thinking advocates, as opposed to a hypothesis-driven approach. In a hypothesis-driven environment, people tend to focus on proving a premise created by a small group of people. Rushing to a solution without defining the problem, with no emergent framework in place to incorporate the insights gained during later parts of the project, certainly contributes to failure.

ORGANIZATIONAL BARRIERS TO SUCCESS

Even the best methodology requires organizational commitment to success. For design thinking to work, it is also necessary to address these common organizational issues, each of which can impede progress and limit successful outcomes.

Lack of C-level commitment: Although design thinking is applicable at all levels of an organization, executive management must bless it by publicly embracing and practicing it. Top-down initiatives and training only go so far.

When employees see their leaders practice design thinking, they are more likely to embrace and practice it themselves. The same is true of the adoption of social media and collaborative tools inside an organization. The best signal you can send to your employees is a firm belief in the method, demonstrated by practicing it firsthand and sharing the positive outcomes.

Resistance to change: People in any organization are usually fundamentally against change, even if they believe it’s a good thing. They don’t want to get out of their comfort zone and therefore practice the same methods that have resulted in multiple failures in the past. Changing behavior is difficult but fortunately design thinking can help.

One of the ways I have taught design thinking is by taking people away from their primary domain and having them solve a very different kind of problem, such as redesigning a ticket vending machine or a fast food restaurant. These sessions were hugely successful precisely because the domain was completely different and did not interfere with participants' preconceived notions of how a project should be executed. People's reservations are tied to their domain; they are willing to adopt a new method and a new way of thinking if you coach them outside of their domain and then encourage them to practice it in their comfort zone.

Lack of industry backing: Despite being an informal, undocumented, non-standards-based methodology, Agile experienced widespread adoption. I would attribute this success to two things: a well-defined manifesto from leading industry figures, and organizations publicly committing to adopt the methodology. Design thinking lacks both.

Even though industrial design companies such as IDEO have evangelized this approach, there's still confusion around what design thinking actually means, which makes it difficult to explain to a wider audience. If a few organizations publicly endorse design thinking, create a manifesto, and share best practices to gain momentum, many of the adoption hurdles will go away.

Lack of key performance indicator (KPI) frameworks: Design thinking faces the same challenge that most Enterprise 2.0 tools face: lack of measurable KPIs.

For numbers-driven leaders, the lack of a quantifiable framework to measure and monitor the impact of a new methodology is a challenge. Some leaders are good at adopting new ways of doing things and others are not. In these cases, isolate a project that you can't measure and start small. Contain the risk, but pick a project with significant upside to keep people engaged and motivated. You may still fail, or not achieve the desired outcome, but that's what design thinking is all about.

It’s worth noting that Agile, as a software project methodology, has well defined quality and reliability KPIs such as beta defects, rejected stories during a scrum cycle, and the delta between committed and delivered stories.
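To make those KPIs concrete, here is a minimal sketch that computes them from hypothetical sprint records. The field names and the data are illustrative assumptions, not a standard schema or any particular tool's API.

```python
# Compute simple Agile quality/reliability KPIs across scrum cycles.
# The sprint records below are made-up examples.

def sprint_kpis(sprints):
    """Aggregate beta defects, rejected stories, and the delta between
    committed and delivered story points across all sprints."""
    committed = sum(s["committed"] for s in sprints)
    delivered = sum(s["delivered"] for s in sprints)
    return {
        "beta_defects": sum(s["beta_defects"] for s in sprints),
        "rejected_stories": sum(s["rejected"] for s in sprints),
        # Delta between what the team committed to and what it delivered
        "commit_delivery_delta": committed - delivered,
        "delivery_ratio": delivered / committed,
    }

sprints = [
    {"committed": 30, "delivered": 26, "rejected": 2, "beta_defects": 5},
    {"committed": 28, "delivered": 28, "rejected": 0, "beta_defects": 3},
]
print(sprint_kpis(sprints))
```

A dashboard built on numbers like these gives the "number-driven leaders" mentioned above something measurable to track, which is exactly what design thinking currently lacks.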

Fail early and course-correct the next time. Remember that it is the adoption and the specific practices that need correction, not the method itself. Don't give up.

FINAL THOUGHTS

During my extensive work on design thinking (practicing, coaching, and analyzing it), I often talk with people who believe that design thinking is merely a methodology or approach for “visual design.” This perception is false. Design thinking comprises a set of principles that can be applied at any stage of the enterprise project lifecycle, alongside other project management methodologies. It is as valid for the CEO and executive management as it is at the grass roots.

Another common point of confusion is the distinction between design thinking and Agile methods of software development. The primary difference is that Agile offers a specific set of prescriptive processes while design thinking encapsulates a set of guidelines and general principles. Although not the same, the two approaches are highly complementary (even on the same project), because both recognize the benefits of using iterative work cycles to pursue customer-centric goals.

Always remember that real people work on every project. The best methodologies are inherently people-centric and help participants anticipate likely causes of failure. Visualizing failure early in a project is an excellent means to prevent it from occurring. We’re all human and may make mistakes but certainly no one wants to fail.

Design thinking can make potential failure a learning tool and not a final outcome.
_______

I originally published this post as a guest post on Michael Krigsman's IT Project Failures blog.

Wednesday, November 30, 2011

Coming To A Place Near You: A Private Cloud Spiked With Big Data


Netflix similarity map
Yesterday, I moderated a couple of panels at the Big Data Cloud event. I have been a keynote speaker, panelist, moderator, and participant at many conferences over the last few years, and it has always been a pleasure to see the cloud and big data becoming more and more mainstream. Here are my quick observations and insights from the event:

Private cloud gaining momentum: As a public cloud proponent, I thought I would never have to write this, but lately I have seen more and more interest in private cloud; new start-ups, established cloud vendors, and large legacy vendors are all designing private or hybrid cloud solutions. Vendors have recognized that prospects and customers have started to take the cloud very seriously, but they still have the same concerns they had a few years back: security, moving data to a public cloud, and giving up control. I am not interested in the private/public debate (though I do love to mess with fellow clouderati on Twitter on this topic). My take on this trend is that vendors should do whatever it takes to move organizations to the cloud, private or public. Once companies dip their toes in, they will realize for themselves what's good for them.

Big Data as a serious category: A few days back, I blogged about big data going mainstream. Coming out of this event, it felt like Big Data today is where the cloud was a couple of years back. When I asked people a few years ago, "What's Hadoop?" they would reply, "Huh?" Now, everyone wants to know more about Hadoop, Hive, HBase, S4, Oozie, Pig, Cassandra, and other big data frameworks. They're interested in analyzing and comparing available solutions. They're asking all the right questions. VC investment in this category has hit a record high. Hadoop World was a sold-out event this year with 1500 participants. Milind Bhandarkar, Chief Architect at Greenplum Labs, mentioned that in 2008, during the first Hadoop summit, they had to coax people to come; those who came willingly worked for either Yahoo or Facebook. We have come a long way, and there's a long way to go, but this is a rock-solid category. As the first set of big data infrastructure companies settles in, we will see people building killer applications and PaaS solutions specifically designed to leverage big data. It is encouraging to see more and more companies and venture capitalists recognizing that data is worth a lot more if you have the right tools and the right people — the data scientists — to do something interesting with it. For example, Greylock Partners has hired DJ Patil as a "data scientist in residence" to help them evaluate opportunities and advise their portfolio companies on big data strategies.

Rise in popularity of open source frameworks: If you follow the history of open source, you'll notice that when a proprietary way of doing things becomes popular, commercial vendors pose a lock-in threat, and things don't work as expected, developers get frustrated and start to fill the gap by building open source technology. Linux started that way, and so did many other open source projects. This is why I am excited to see OpenStack gaining rapid momentum; it's slowly becoming a de facto standard for building commercial cloud solutions. I also like Cloud Foundry, since many companies that I know of, both ISVs and large IT shops, won't use a public PaaS. They would prefer to launch their own PaaS solution in the cloud, and without an open source option, that becomes a big challenge.

Monday, November 7, 2011

Early Signs Of Big Data Going Mainstream


Today, Cloudera announced a new $40M funding round to scale its sales and marketing efforts, along with a partnership under which NetApp will resell Cloudera's Hadoop as part of its solution portfolio. Both of these announcements are telling about where the cloud and Big Data are headed.

Big Data going mainstream: Hadoop and MapReduce are not just for Google, Yahoo, and fancy Silicon Valley start-ups. People have recognized that there's a wider market for Hadoop in consumer as well as enterprise software applications. As I have argued before, Hadoop and the cloud are a match made in heaven. I blogged about Cloudera and the rising demand for data-centric massively parallel processing almost 2.5 years back; obviously, we have come a long way since then. The latest Hadoop conference completely sold out. It's good to see these early signs of Hadoop going mainstream. I expect similar success for companies such as DataStax (previously Riptano), which is a "Cloudera for Cassandra."
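For readers new to the model, here is a toy illustration of the map/reduce pattern that Hadoop popularized: plain Python, not Hadoop's actual API. Each mapper emits (key, value) pairs, the framework groups them by key (the "shuffle"), and each reducer aggregates the values for one key.

```python
# Toy word count in the map/reduce style, run entirely in memory.
from collections import defaultdict

def mapper(line):
    """Emit (word, 1) for every word in a line."""
    for word in line.split():
        yield word.lower(), 1

def reducer(word, counts):
    """Aggregate all values emitted for one key."""
    return word, sum(counts)

def map_reduce(lines):
    grouped = defaultdict(list)
    for line in lines:                      # map phase
        for key, value in mapper(line):
            grouped[key].append(value)      # shuffle: group by key
    return dict(reducer(k, v) for k, v in grouped.items())  # reduce phase

print(map_reduce(["big data big cloud", "cloud data"]))
# → {'big': 2, 'data': 2, 'cloud': 2}
```

Hadoop's value is running exactly this pattern in parallel across a cluster, with the shuffle and fault tolerance handled for you, which is what makes it attractive far beyond the web giants.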

Storage is a mega-growth category: We are barely scratching the surface when it comes to growth in the storage category. Big data combined with cloud growth is going to drive storage demand through the roof, and the established storage vendors are in the best shape to take advantage of this opportunity. I wrote a cloud research report and predictions this year with the analyst Ray Wang, in which I mentioned that cloud storage will be hot and NoSQL will skyrocket. That's true this year, and it will be even more true next year.

Making PaaS even more exciting: PaaS is the future, and Hadoop and Cassandra are not easy to deploy and program. The availability of such frameworks at the lower layers makes PaaS even more exciting. I don't expect PaaS developers to solve these problems themselves; I expect them to provide a layer that exposes the underlying functionality in a declarative as well as a programmatic way, letting application developers pick the PaaS platform of their choice and build killer applications.

Push to the private cloud: Like it or not, the availability of Hadoop from an "enterprise" vendor is going to help the private cloud vendors. NetApp has a fairly large customer base, and its products are omnipresent in large private data centers. I know many companies that are interested in exploring Hadoop for a variety of needs but are hesitant to go to a public cloud, since that requires moving large volumes of on-premise data there. They're more likely to use a solution that comes to their data than to move their data to where a solution resides.

Monday, October 31, 2011

Bangalore Embodies The Silicon Valley

I spent a few days in Bangalore this month. This place amazes me every single time I visit. Many people ask me whether I think Bangalore has the potential to be the next Silicon Valley. I believe that's the wrong question. There is some seriously awesome talent in India, especially in Bangalore. Don't copy the Silicon Valley; there are too many intangibles for Bangalore to get right, and there's no need to copy. Instead, create a new Silicon Valley that is the best of both worlds.

If you want some good reading on what makes Silicon Valley the Silicon Valley, read the essay "How to Be Silicon Valley" by Paul Graham. Bangalore does have some of these elements: diversity, clusters, a large number of expats, and so on. It's quickly becoming a true cosmopolitan city in India; you don't need to know the local language (Kannada) to live there. It has a few good colleges, such as IIM and IISc, but no IIT. The real estate boom in Bangalore is a clear indicator of the spending power of the city's middle and upper middle classes. Most large IT multinationals have a campus in Bangalore, and companies such as Accenture have more people in Bangalore than in the US.

So, what's wrong?

Lack of entrepreneurial mentorship

If you go back to the roots of Silicon Valley's early success, you will find that the venture capital community mentored entrepreneurs to bring innovation to life. Steve Jobs had an idea, but no business plan. Some of the entrepreneurs became serial entrepreneurs, and some became investors who in turn mentored other entrepreneurs. This cycle continued. I don't see this in Bangalore. Not only is VC funding not easily accessible (more on this below), but I don't see early investors spotting the trends and mentoring the entrepreneurs.

I spoke to many entrepreneurs in Bangalore, and let me tell you: they do not lack entrepreneurial spirit. They are hungry and they are foolish, and they are chomping at the bit to work on an exciting idea. What they lack is someone to mentor them and take them through the journey.

Where have all the designers gone?

A couple of years ago I was invited to the National Institute of Design (NID), a premier design school in India, for a guest lecture. They told me that design is not a discipline that easily attracts good talent in India; the design schools are competing with the engineering schools. India lacks designers. This is the age of experience start-ups, and very few engineers have the right design mindset. If start-ups want to be successful, they absolutely need to work with designers, who are nearly impossible to find and hire. This talent gap makes it hard to translate a founder's vision into a product that consumers would love to use. Flipkart and Red Bus are my favorite start-ups, but they are few and far between.

Math and Science will only take you so far

It's not just math and science that created the Silicon Valley; it's the right balance of creativity, business acumen, and engineering talent. The schools in India, even today, are not set up to let students be creative. They are still fixated on math and science because those subjects guarantee good jobs. The Silicon Valley entrepreneurs followed their dreams. In the US, it's about studying what you like and chasing a career that makes you happy, not picking a certain kind of education just because it provides a good job. Unfortunately, creativity is hard to teach; it's ingrained in the culture, the society, and the systems. If India is to get this right, it needs to start with education and a support system that has a place for careers beyond math and science.

I have been following the education reforms in India and private-sector investment in K-12 schools, and they are encouraging. I don't believe Bangalore, or India for that matter, will have a math or science problem anytime soon, but it will certainly have entrepreneurial problems: jump-starting new companies and managing an ever-growing engineering workforce. I was invited to speak at IIM Ahmedabad, one of the best business schools in India. During my conversation with the faculty, I was told that the most pressing issue for the elite business schools in India is scaling their efforts to create the new class of middle management that can manage the rapidly growing skilled workforce.

Obama keeps saying that more people in the US should study math and science to be competitive. I don't believe that's the real competition. The real competition is what you can do if you know math and science, or if you have access to people who do.

Lack of streamlined access to capital

A lot has been written about this obvious issue, and I don't want to beat it further. I just want to highlight that despite all the money that individuals and large corporations have earned in India, very little is being invested in venture capital, since the VC framework, processes, and regulations aren't streamlined; it's not a level playing field. In Silicon Valley, venture money is a commodity: if you have a great idea, team, or product, investors will run after you to invest in your company. Bangalore is far from this situation, but it shouldn't have to be. What's missing is not the money but a class of people who can run local funds by investing in the right start-ups. Most US VC firms have set up shop in India, but I don't think that's enough to foster innovation at the grassroots level. Bangalore needs Indian firms to recognize the need for a local VC community that can work with the system to make those funds available to entrepreneurs.

The picture: I took this picture inside one of the SAP buildings in Bangalore during the week before Diwali.

Thursday, October 27, 2011

Make To Think And Think To Make



I'm a passionate design thinker and I practice design thinking at every opportunity. Design thinking is part art and part science. John Maeda is one of my favorite thought leaders on design. He published a post describing art as a form of asking "what do I want to know" rather than "what do I want to say."

As a product manager, making a product takes me from "what do I want to know" — the requirements — to "what do I want to say" — the manifestation of those requirements in a working product. I call it "make to think and think to make." I make prototypes — make to think — much like a form of art, to help me think and ask the right questions, fulfilling my need for "what I want to know." Human beings respond better to tangible artifacts than to abstract questions. These conversations then stimulate my thinking about how to execute on those requirements — think to make — similar to "what do I want to say." The design thinking cycle continues.

Friday, September 30, 2011

Disrupt Yourself Before Others Disrupt You: The DVD-To-Streaming Transition Is The Same As On-Premise To Cloud


Recently, Netflix separated its streaming and DVD subscription plans. Per Netflix's forecast, it will lose about 1 million subscribers by the end of this quarter; customers did not like what Netflix did. A few days back, Netflix's CEO, Reed Hastings, wrote a blog post explaining why Netflix separated the plans. He also announced a new brand, Qwikster, a DVD service separate from Netflix's streaming website. The two services won't share queues and movie recommendations even if you subscribe to both. A lot has been said and discussed about how poorly Netflix communicated the overall situation and made the wrong decisions.

I have no insider information about these decisions. They might seem wrong in the short term, but I am on Netflix's side, and I agree with co-founder Marc Randolph that Netflix didn't screw up. I believe it was the right thing to do, though it could have been executed a little better. Not only am I on their side, but I also see parallels between Netflix's transition from DVD to streaming and on-premise enterprise ISVs' transition from on-premise to cloud. On-premise ISVs don't want to cannibalize their existing on-premise business to move to the cloud, even if they know that's the future; but they also can't wait so long that they run out of money and become irrelevant before making the transition.

So, what can these on-premise ISVs learn from Netflix's decisions and mistakes?

Run it as a separate business unit, compete in the right category, and manage the street's expectations:

Most companies run their business as a single P&L, and that's how the street sees it and sets its expectations for revenue and margins. A single P&L muddies the water: the company has no way of knowing how much money it is spending on a specific business and how much revenue that business brings in. In many cases, there is not even an internal separation between different business units. Setting up a separate business unit is the first step toward getting the accounting practices right, including tracking cost and giving the right guidance to the street. The DVD business is like maintenance revenue, and streaming is like license revenue. Investors want to know two things: that you're still a growth company (streaming) and that you still have enough cash coming in (the DVD business) to fund that growth.

Netflix faces competition in streaming as well as in its DVD business, but the nature of that competition is quite different. Likewise, for enterprise ISVs, competing with on-premise vendors is quite different from competing with SaaS vendors. The nature of the business — cost structure, revenue streams, ecosystem, platform, anti-trust issues, marketing campaigns, sales strategy — is so different that you almost need a separate organization.

Prepare yourself to acquire and be acquired:

Netflix could potentially acquire a vendor in the streaming business or in the DVD business, and this separation makes it easy to integrate the acquisition. The same is even more true for ISVs, since most on-premise ISVs will grow into the cloud through acquisitions. If you're running your SaaS business as a separate entity, it is much easier to integrate a new business from a technology as well as a business perspective.

Just as you could acquire companies, you should prepare yourself for an exit as well. Netflix could potentially sell the DVD unit to someone else. This would be a difficult transaction if their streaming business were intertwined with their DVD business. The same is true for the enterprise ISVs. One day, they might decide to sell their existing on-premise business. Running it as a separate business entity makes it much easier to attract a buyer and close a clean transaction.

Take your customers through the journey: 

This is where Netflix failed. They did not communicate with their customers early on and ended up designing a service that doesn't leverage the customers' existing participation, such as recommendations and queues. There is no logical reason why they cannot have a contract in place between the two business units to exchange data, even if those units are essentially separate business entities. The ISVs should not make this mistake. When you move to the cloud, make sure that your customers can connect to their on-premise systems. Beyond that, you need to honor their current contracts, extend them to the cloud if possible, and make it easy for them to transition. Don't make it painful for your customers. The whole should be greater than the sum of its parts.

Run your business as a global brand:

Learn from P&G and GE. They are companies made up of companies. They run these sub-companies independently, with a central function to manage them across the portfolio. It does work. Netflix has a great brand, and they will retain that. As an on-premise ISV, you should consider running your on-premise and cloud businesses as sub-brands under a single brand umbrella. Branding is the opposite of financials; a brand is a perception and financials are a reality. Customers care about the brand and the service, and the street cares about the financials. The two seem closely related from an inside-out perspective, but from an outside-in perspective they are quite different. There is indeed a way to please them both. This is where most companies make the wrong decisions.

Wednesday, September 7, 2011

Freemium Is The New Piracy In The SaaS World

It is estimated that approximately 41% of revenue, close to $53 billion, is "lost" to software piracy. This number is totally misleading, since it assumes that all the people who knowingly or unknowingly pirated software would have bought it at the published price had they not pirated it. The RIAA applies the same nonsense logic to blow the music piracy numbers way out of proportion. Most people who pirate software are similar to the people who pirate music: they may not necessarily buy software at all. If they can't pirate your software, they will pirate something else. If they can't do that, they will find some other alternative to get the job done.

Fortunately, some software companies understand this very well and have a two-pronged approach to deal with this situation: prevent large-scale piracy, and leverage piracy when you can't prevent it. If an individual has access to free (pirated) software, you, as a vendor, are essentially encouraging an organic ecosystem. The person who pirated your software is more likely to recommend continuing to use it when he or she is employed by a company that cannot and will not pirate. This model has worked extremely well. What has not been working so well, and what most on-premise vendors struggle with, is unintentional license usage, or revenue leakage. Customers buy on-premise software through channels and deploy it to a large number of users. Most on-premise software is not instrumented to prevent unintentional license usage. The license activation, monitoring, and compliance systems are antiquated in most cases and cannot deal with this problem. This is very different from piracy, because most corporations, at least in the western world, that deploy on-premise software want to be honest, but they have no easy way to figure out how many licenses have been used.

In the SaaS world, this problem goes away. The cloud becomes the platform to ensure that the subscriptions are paid for and monitored for continuous compliance. You could argue that there is no license leakage since there are no licenses to deal with. But what about piracy? Well, there's no piracy either. This is a bad thing. Even though a try-before-you-buy option exists, there's no organic grass-roots adoption of your software (as a service) since people can't pirate it. In many countries where software piracy is rampant, internet access is not ubiquitous and bandwidth is still limited. This creates one more hurdle for people to use your software.

So, what does this mean to you?

SaaS ISV: It is very important for you to have a freemium model that is country-specific and not just a vanilla try-before-you-buy. You need to get users to start using your service for free early on and make it difficult for them to move away once they work for someone who can pay you. Even though you're a SaaS company, consider a free on-premise version that provides significant value. Evernote is a great example of this strategy. It shouldn't surprise you that people still download software, pirated or otherwise. Don't try to change their behavior; instead, make your business model fit their needs. As these users become more connected and the economics work in their favor, they will buy your service. It's also important to understand that in the countries where piracy is rampant, people are extremely value-conscious.

On-premise ISV: Don't lose sleep over piracy. It's not an easy problem to solve, but do make sure that you're doing all you can to prevent it. Consider a freemium business model where you're providing a clean and free version to your users. If the users can get enough basic value from a free version, they are less likely to pirate a paid version. What you absolutely must do is fix your license management systems to prevent unintentional license usage. Help yourself by helping your customers who want to be honest. The cloud is a great platform to collect, clean, and match all the license usage data. You have little or no control over your customers' landscapes, but you do have control over your own system in the cloud, as long as there's a little instrumentation embedded in your on-premise software and a hybrid architecture that connects it to the cloud. In a nutshell, you should be able to manage your licenses the way SaaS companies manage their subscriptions. There are plenty of other benefits to this approach, the most important being a SaaS repository of your customers and their landscapes. This would help you better integrate your future SaaS offerings and acquisitions, as well as third-party tools that you might use to run your business.
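To make the "manage licenses like subscriptions" idea concrete, here is a minimal sketch of what a cloud-side usage ledger might look like. This is purely illustrative, not any vendor's actual system: all names (`LicenseLedger`, `record_ping`, the "acme" customer) are hypothetical, and a real system would deal with authentication, de-duplication, and time windows.

```python
from collections import defaultdict

class LicenseLedger:
    """Toy cloud-side ledger: aggregates usage pings from on-premise
    installations and flags customers whose active-user count exceeds
    their entitlement. All names here are illustrative."""

    def __init__(self, entitlements):
        self.entitlements = entitlements      # customer -> licensed seats
        self.active_users = defaultdict(set)  # customer -> set of user ids

    def record_ping(self, customer, user_id):
        # Each instrumented on-premise deployment reports (customer, user)
        # pings to the cloud; sets make repeated pings idempotent.
        self.active_users[customer].add(user_id)

    def overages(self):
        # Customers using more seats than they bought: candidates for a
        # friendly true-up conversation, not a piracy accusation.
        return {c: len(users) - self.entitlements.get(c, 0)
                for c, users in self.active_users.items()
                if len(users) > self.entitlements.get(c, 0)}

ledger = LicenseLedger({"acme": 2})
for user in ("alice", "bob", "carol"):
    ledger.record_ping("acme", user)
print(ledger.overages())  # {'acme': 1}
```

The point is that the honest customer gets visibility they don't have today: the vendor's cloud, not the customer's IT team, does the counting.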

Wednesday, August 24, 2011

Life Is Too Short To Remove A USB Stick Safely


Today, Steve Jobs resigned as the CEO of Apple. I think I will remember this day, and so will others.

“Your work is going to fill a large part of your life, and the only way to be truly satisfied is to do what you believe is great work. And the only way to do great work is to love what you do. If you haven’t found it yet, keep looking. Don’t settle. As with all matters of the heart, you’ll know when you find it. And, like any great relationship, it just gets better and better as the years roll on. So keep looking until you find it. Don’t settle.” — Steve Jobs

For me, Apple is not just a personal choice that is better than the alternatives; it's also ongoing proof of what's possible if you believe in what you think is the right thing to do. It's also about the elements of design and the endless perseverance that I can strive for. Thanks, Steve, for showing us what's possible. I wish you all the best with your health and a speedy recovery, and I hope you can stay on and mentor others at Apple through what's going to be a great future of computing.

Wednesday, August 17, 2011

Parallelism On The Cloud And Polyglot Programmers


I am very passionate about the idea of giving developers the control over parallelism without them having to deal with the underlying execution semantics of their code.

The programming languages and constructs of today are designed to provide abstraction, but they are not designed to estimate computational complexity and dependencies. Frameworks such as MapReduce are designed to have no dependencies between the computing units, but that's not true for the majority of code. It is also not trivial to rewrite existing code to leverage parallelism. As parallel computing on the cloud becomes the norm rather than the exception, current programs are not going to run any faster. In fact, they will be relatively slower compared to programs that do leverage parallel computation. Robert Harper, a Professor of Computer Science at Carnegie Mellon University, recently wrote an excellent blog post, Parallelism Is Not Concurrency. I would encourage you to spend a few minutes reading it. I have quoted a couple of excerpts from that post.

"what is needed is a language-based model of computation in which we assign costs to the steps of the program we actually write, not the one it (allegedly) compiles into. Moreover, in the parallel setting we wish to think in terms of dependencies among computations, rather than the exact order in which they are to be executed. This allows us to factor out the properties of the target platform, such as the number, p, of processing units available, and instead write the program in an intrinsically parallel manner, and let the compiler and run-time system (that is, the semantics of the language) sort out how to schedule it onto a parallel fabric."

The post argues that language-based optimization is far better than machine-based optimization. There's an argument that the machine knows better than a developer what runs faster and what the code depends upon. This is why, for relational databases, SQL optimizers have moved from rule-based to cost-based. The developers used to write rules inside a SQL statement to instruct the optimizer; now the developers focus on writing a good SQL query, and the optimizer picks a plan to execute it based on the cost of various alternatives. This machine-based optimization argument quickly falls apart when you want to introduce language-based parallelism, specified by a developer, in scale-out situations where it's not a good idea to depend on machine-based optimization. The cloud is designed on this very principle. It doesn't optimize things for you, but it has native support for you to introduce deterministic parallelism through functional programming.
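As a minimal sketch of what "declaring dependencies instead of scheduling" looks like in practice, consider a parallel map in Python. The names here (`square`, `parallel_map`) are my own illustration, not from Harper's post: the programmer asserts only that the elements are independent; the runtime decides worker counts and scheduling.

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    return n * n

def parallel_map(fn, items, workers=4):
    # The programmer declares a data-parallel computation with no
    # dependencies between elements; the executor (the "semantics of
    # the language", in Harper's phrase) schedules it onto workers.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fn, items))

print(parallel_map(square, range(5)))  # [0, 1, 4, 9, 16]
```

The result is deterministic regardless of how many workers run or in what order they finish, which is exactly the property that separates parallelism from concurrency.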

"Just as abstract languages allow us to think in terms of data structures such as trees or lists as forms of value and not bother about how to “schedule” the data structure into a sequence of words in memory, so a parallel language should allow us to think in terms of the dependencies among the phases of a large computation and not bother about how to schedule the workload onto processors. Storage management is to abstract values as scheduling is to deterministic parallelism."

As far as cloud computing goes, we're barely scratching the surface of what's possible. It's absurd to assume that polyglot programmers will stick to one programming model and learn to spot the difference between parallelism and concurrency. The language constructs, annotations, and runtimes need to evolve to help programmers automate most of these tasks and write cloud-native code. These will also be the core tenets of any new programming languages and frameworks. There's also a significant opportunity to move existing legacy code to the cloud if people can figure out a way to break it down for computational purposes without changing it, e.g., using annotations, aspects, etc. The next step would be to simplify the design-deploy-maintain life cycle on the cloud. If you're reading this, it's a multi-billion-dollar opportunity. Imagine: if you could turn your implementation-specific concurrent access to resources into abstract deterministic parallelism, you could leverage the scale-out properties of the cloud fairly easily, since that's the guiding principle behind the cloud.

There are other examples where people are moving away from an implementation-centric approach to an abstraction that is closer to the developers and end users. The most important shift that I have seen is from files to documents. People want to work on documents; files are just an implementation detail of how things get done. Google Docs and the iPad are great examples of products that are document-centric and not file-centric.

Photo courtesy: bass_nroll on Flickr

Thursday, July 28, 2011

Plotting for serendipity


I rarely eat lunch at my desk. Eating lunch in the cafeteria is too precious an opportunity to waste. I plot for serendipity. That's right. Some of the best conversations that I have had with people — on my way to a cafeteria or in the cafeteria — are purely serendipitous, but they're not purely accidental. I even pick a cafeteria that requires me to walk a little farther. I believe you can always create opportunities for good things to happen to you. When people say "it's a small world," they are so wrong. The world isn't small; they were simply at the right place at the right time and mistake it for a coincidence. Coincidences do happen, but there's a larger force orchestrating the possibilities for such coincidences to occur.

The same applies to creativity. You can design an epiphany.

I have heard people say that they had an epiphany while they were in the shower. It's not the shower; it's illumination following a prolonged incubation, two of the four phases of creativity. The other two are preparation and verification. Preparation is the phase where you decide that you want to solve a specific problem. When you continue to work on a problem over a period of time, your brain, the unconscious, never stops working on it even if you consciously aren't spending any time on it. This is called incubation, and it can last a while. The "shower moment" is the illumination phase, where you finally figure out a solution after your brain kept solving it unconsciously through the night, hence the metaphor of the glowing bulb for innovation. What remains is the verification phase, to prove that the solution works. We all do this, but we don't spend enough time in the incubation phase, and hence many ideas don't go beyond it. You can plot for this epiphany by not letting go of a problem for a while, even when you think you don't have enough time to work on it. This is why I tell my students to start working on their projects early. It feels counterintuitive, but you can solve a problem by spending less time on it at a stretch, as long as you keep at it, off and on, over a longer duration.

I have blogged about the cloud being a natural platform for designing tools that create network effects. The tools that create network effects also offer an opportunity for digital serendipity. I have discovered many people through Twitter and learned quite a few things that I would never have explicitly set out to learn. And I'm not the only one who has had such an experience. I'm a big fan of social tools and platforms that create opportunities for such serendipity to occur. There are only so many cafes and water fountains in the physical world; the digital world is far bigger in that sense.

Design your routine to plot for serendipity and epiphany, and credit yourself instead of the shower. Creativity can be tricked into showing up. You will be positively surprised.

Cross-posted on my personal blog

Tuesday, July 5, 2011

Designing Terms Of Service Is As Important As Designing A Product

Dropbox revised their Terms of Service (TOS) over the long weekend. That triggered a flurry of activity on Twitter. Dave Winer even deleted his Dropbox account, saying that he would revisit it once the dust settles. A lot of people concluded that there's nothing wrong with the new TOS and that people are simply overreacting. And then Dropbox updated their blog post, twice, explaining that there is nothing wrong with the new TOS and clearing up some confusion. I will let you be the judge of the situation and the new TOS. This post is not about analyzing Dropbox's new TOS; it's about looking at a more basic issue in product design. What we witnessed was just a symptom.



Let me be very clear: your product design includes getting the TOS and End User License Agreement (EULA) right before you open up the service. The way most TOS and EULAs are worded, an average user can't even fathom what the service actually does, what information it collects, what it shares, and most importantly what it absolutely won't do. It's ironic that the simplicity of Dropbox's design — there will be a folder and it will sync — made it extremely popular, and yet when they designed the TOS, they had to publish a blog post with two updates and 3000+ comments to explain and clarify it to those very same users. There's something wrong here.

For a product or a service to have a great experiential design, it's absolutely important to get the TOS and EULA right upfront, and even validated by end users. People release their products in beta and go to great lengths to conduct usability studies to improve the product design. Why exclude the TOS?

I have worked with some great lawyers, but they don't make good product designers. I'm a big fan of constraints-based design. Lawyers are great at giving you constraints: the things that you can and cannot do. Start there. Get a clear understanding of the legal ramifications, ask someone other than a lawyer to write the TOS, get it signed off by a lawyer, and most importantly validate it with end users. Then start the product design using those constraints. If you feel too constrained, go back and iterate on the TOS. Drafting a TOS is no different than prototyping a product.

I would rather have bloggers, thought leaders, and end users critique my product design than my TOS. I would love to work on that feedback, as opposed to getting into a reactive mode to stop bad PR and legal consequences. Thomas Otter says "law exists for a reason." Don't exclude lawyers, but please don't let lawyers drive your business. Educate them on the technology and the end users, and most importantly, involve them early on. Lawyers are paid to be risk-averse. As an entrepreneur, you need to do the right thing and challenge the status quo to innovate without jeopardizing the end users. It's a tough job, but it can be done.

I don't want to single out Dropbox. Other companies have gone through the same cycle, and yet I don't see entrepreneurs doing things differently. In the process, the cloud gets a bad rap. What happened to Dropbox has nothing to do with what people should and should not do in the cloud; that would be a knee-jerk conclusion. The fundamental issue is a different one. Treating symptoms won't fix the underlying chronic condition.

Wednesday, June 15, 2011

5 Techniques To Deal With Spam: Open Letter To Twitter


I love Twitter, but lately I am getting annoyed by Twitter spam, and I'm not the only one. I don't want Twitter spam to become email spam. I don't want to whine about it either, so I spent some time thinking about what Twitter could do to deal with spam. Consider this an open letter to Twitter.

Facebook's privacy settings are the new programming-a-VCR. Google has been criticized a lot for profiting from content farms. I believe that all the major players are playing a catch-up game. A lot of people have started to complain about LinkedIn spam as well. Quora went in a different direction — they started out with a strict upfront policy regarding who can join Quora, ask questions, answer questions, etc. — to maintain the quality of their service. A strict upfront policy hampers new user acquisition and adoption but can ensure better quality, whereas a liberal policy accelerates user acquisition at the risk of the service being abused. I do believe that there's a middle ground that these services could strive for by implementing clever policies.

Here are five techniques that Twitter could use to deal with spam:

1. Rely on a weighted rank based on past performance: I ran a highly unscientific experiment. I kept a record of all the accounts that I reported as spammers on Twitter over the last few days. After reporting an account, I went back every few minutes to see whether it had been suspended. It took Twitter some time, but every single account that I have reported so far has been suspended. I don't think Twitter is using that knowledge; if it were, my subsequent reports would have resulted in quicker suspensions. Learn from Craigslist. Craig Newmark will tell you all about community-based flagging. Instrument the system to rely on the reputation of power users, who are savvy enough to detect spam, to suspend a spammer's account. If it turns out that it's not spam, give the account owner an opportunity to appeal. Spammers don't waste time arguing; they simply move on.

2. Expand categories to match how people consume: Create a separate "unsolicited" category to receive mentions and replies from people you don't follow. This could be a separate window in a Twitter client that replaces the current "replies and mentions" window. Require a captcha for direct replies (but not mentions) in conversations where the two accounts don't follow each other, to stop automated spam. Everything else, including real spam, goes into "mentions," now a new category that can be consumed in a separate window, leaving the "replies" window clean.

3. Remove spam tweets from the stream: Many users don't consume their mentions or replies in real time, or even near real time. Mark the tweets as spam once you suspend the account, and require the Twitter clients to remove them from users' streams in real time. No API restrictions and no throttling. If you do it right and spam gets detected within a few seconds, the account can be suspended in no time and the tweets removed before most users would even see them. Emails can't be recalled; tweets can be, if Twitter wants them to be. Let's do it.

4. Focus on new accounts: Set a reasonably low limit on the number of tweets per hour for a new account. A first-time genuine Twitter user doesn't go from 0 to 100 in a day, but a new spammer certainly will. Focus energy on new accounts; spammers don't wait for a few weeks or months to start spamming. The current "verified" account feature is black magic. Open it up to everyone and use standard means such as cell phones, credit cards, and other identities to let people verify their Twitter accounts. These accounts earn the benefit of the doubt: an upfront requirement of multiple signals before their accounts are suspended. Spammers don't want to verify themselves.


5. Find and fight bots with bots: There are a bunch of bots out there that look for words such as iPad, iPhone, and Xbox in your tweets and then spam you. Twitter can create its own bots to tweet these words to catch the spam bots and, more importantly, harvest the links they are tweeting to detect other spammers. Twitter's own bots would obviously be far more intelligent than the spam bots, since they would have access to a lot more information than the spam bots do.
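The reputation-weighted flagging in technique #1 can be sketched in a few lines. Everything here is illustrative, not Twitter's actual system: each reporter carries an accuracy score (the fraction of their past reports that led to a confirmed suspension), and an account is suspended once the weighted total of distinct reporters crosses a threshold.

```python
def should_suspend(reporters, reporter_accuracy, threshold=2.0):
    """Decide whether flags justify suspension. `reporters` is the set of
    distinct accounts that flagged a tweet; `reporter_accuracy` maps a
    reporter to their historical accuracy. Unknown reporters get a small
    default weight, so brand-new flaggers can't game the system."""
    score = sum(reporter_accuracy.get(r, 0.1) for r in set(reporters))
    return score >= threshold

accuracy = {"power_user": 0.98, "veteran": 0.95, "newbie": 0.10}
print(should_suspend(["power_user", "veteran", "newbie"], accuracy))  # True
print(should_suspend(["newbie", "unknown1", "unknown2"], accuracy))   # False
```

Two trusted power users carry as much weight as twenty strangers, which is exactly the property you want: the savvy users do the detection, and a mob of throwaway accounts can't suspend anyone.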

The spammers do catch up, but if Twitter spends a little time and energy, they can stay ahead in this game. They can even lead the pack of social media companies on how to deal with spam.

Update: As soon as I published this post, I tweeted it and copied Del Harvey on it. She immediately responded to the post; you can read her response here. I really appreciate Twitter responding. My take is that they seem to understand what the issues are and how they might solve them, but they haven't fully managed to execute on that understanding so far. I don't agree with their feedback on issue #2, calling it a non-safety issue. Users see Twitter as one integral product, and spam is very much part of it. Personally, I don't think of spam as a security issue; it's just a plain annoyance. Executing on these ideas is what will matter most. Let's hope Twitter gets behind this with full momentum and it doesn't become a "project".

Monday, June 6, 2011

Social Shaming

An interaction designer, Joshua Kaufman, had his MacBook stolen a few days back. He is a smart dude: he had installed an app called Hidden on his MacBook before it was stolen. He tracked down the thief and asked the Oakland PD to catch him. They said no. He was frustrated, obviously. So he published all the details of the theft, including a picture of the guy who stole his MacBook, on his blog. The story went viral on Twitter and Facebook and made it almost impossible for the cops to ignore. The Oakland PD found the guy and arrested him. Since then the story has been picked up by many major media outlets and become something of a sensation.

Social shaming works.

There's a fine line between peer pressure and social shaming. Many car dealerships in the US have a whiteboard that tracks which sales reps sold how many cars. They also ring a bell every time someone sells a car. It's a cheesy thing to do, but it sends a clear message to the other reps to be more aggressive; it is indeed a form of peer pressure. It's also an efficient technique for motivating kids.

In fact, it's one of the most important gamification elements.

Public shaming has been used in many different ways, e.g., sending an email out to all the sales people with the names of those who haven't updated the CRM system highlighted in red. I know of a company that had a practice of publicly giving a "D'oh! award" to the developer who broke the nightly build. Social shaming is essentially public shaming using social media. In my discussions with many enterprise social software vendors, analysts, and thought leaders, I have repeatedly argued that changing end users' behavior is unlikely to succeed unless there's a significant upside for the end users. What is more likely to work is codifying real-life end user behavior in the software that they use. Social shaming is one such behavior. One way to achieve this could be to design software that promotes radical transparency, signals one person's successes to others, and nudges them to excel without embarrassing them.

Thursday, May 26, 2011

Disruptive Cloud Start-Ups - Part 2: AppDirect

Check out the first post of this series, on NimbusDB, if you haven't already seen it. This post is about AppDirect. I met with Nicolas Desmarais, co-founder and CEO of AppDirect, and had a long discussion regarding their current solutions and future strategy. AppDirect is an app store for small businesses. Developers can integrate their applications with AppDirect, and AppDirect manages the experience of selling, provisioning, and billing, with a 70-30 revenue split with the developers. They also have a white-label app store solution that they sell to large customers such as ISPs, who can then sell the same applications to their own customers.

Let's first get the things I didn't like about them out of the way.

The downside:

The target market, comprising small businesses, is extremely difficult to reach and to market to. This gets even more difficult when the company doing the marketing is a young start-up and the customers are the "S" in SMB. These customers have very different kinds of requirements. They look for simple solutions that are not very expensive and have predictable SLAs with a clear local support model, not ones that come with enterprise-grade features such as end-to-end integration, single sign-on, etc. Intuit has owned this channel for a while via QuickBooks, and their SMB marketplace (the partner platform) is a great example of selling go-to-market services to other ISVs. AppDirect will have to work much harder if they want to work this channel.

So, why do I think they are disruptive?

The upside:

AppDirect is platform-agnostic. Developers can write applications in any language and run them on any platform, as long as they integrate with AppDirect's end points (the APIs). ISVs and PaaS providers have traditionally locked developers into their platforms. That lock-in now goes away.

The telcos are not the most innovative companies; they are laggards, but laggards with a pile of cash, a ton of customers, and good margins. I believe that telcos can be great enterprise software vendors for SMBs. Instead of spending money on marketing efforts, if AppDirect can convince the telcos and ISPs to bundle their white-label solution, it's a win-win situation. This business alone could make them profitable. What you need is a small number of large customers; the long tail can always be an added bonus.

The team is talented, and they have a good product with some early customers. If they can execute on their vision and pivot as necessary, they're on to something.

Check out their slides and presentation:


Friday, May 6, 2011

Disruptive Cloud Start-Ups - Part 1: NimbusDB

Being at Under The Radar (UTR), watching disruptive companies present, and networking with entrepreneurs, thought leaders, and venture capitalists is an annual tradition that I don't miss. I have blogged about the disruptive start-ups that I saw in previous years. The biggest exit out of UTR that I have witnessed so far is Salesforce.com's $212 million acquisition of Heroku. This post is about one of the disruptive start-ups that I saw at UTR this year: NimbusDB.

I met with Barry Morris, the CEO and founder of NimbusDB, at a reception the night before. I had a long conversation with him about the issues with legacy databases, NoSQL, and of course NimbusDB. I must say that, after a long time, I have seen a company applying all the right design principles to solve a chronic problem: how do you make SQL databases scale so that they don't suck?

One of the main issues with the legacy relational databases is that they were never designed to scale out in the first place. A range of NoSQL solutions address the scale-out issue, but the biggest problem with NoSQL is that NoSQL is not SQL. This is why I was excited when I saw what NimbusDB has to offer: it's a SQL database at the surface, but it has a radically modern architecture underneath that leverages MapReduce to divide and conquer queries, BitTorrent for messaging, and Dynamo for persistence.

NimbusDB's architecture isolates transactions from storage and uses asynchronous messaging across nodes - a non-blocking atomic commit protocol - to gain horizontal scalability. At the application layer, it supports "most" of the SQL-99 features and doesn't require developers to re-learn or re-code. The architecture doesn't involve any kind of sharding, and the nodes can scale on any commodity machine across a variety of operating systems. This eliminates the explicit need for a separate hot backup, since any and all nodes serve as a live database in any zone. That makes NimbusDB an always-live system, which also solves a major problem with traditional relational databases: high availability. It's an insert-only database that versions every single atom/record; that's how it achieves MVCC as well. The data is compressed on disk and accessed from an in-memory node.
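NimbusDB's actual implementation isn't public here, but the insert-only MVCC idea the paragraph describes can be sketched in a few lines of Python. All names (`VersionedStore`, `write`, `read`) are my own illustration: every write appends a new stamped version rather than mutating in place, and a reader sees the latest version at or before its snapshot.

```python
import itertools

class VersionedStore:
    """Toy insert-only store: writes append versions stamped with a
    monotonically increasing transaction id; readers see the newest
    version visible at their snapshot. A sketch of the MVCC idea, not
    NimbusDB's code."""

    def __init__(self):
        self._txn = itertools.count(1)
        self._versions = {}  # key -> list of (txn_id, value), append-only

    def write(self, key, value):
        txn = next(self._txn)
        self._versions.setdefault(key, []).append((txn, value))
        return txn  # the snapshot at which this write is visible

    def read(self, key, snapshot):
        # Old versions are never mutated, so concurrent readers never
        # block writers: each reader just filters by its snapshot.
        visible = [v for t, v in self._versions.get(key, []) if t <= snapshot]
        return visible[-1] if visible else None

store = VersionedStore()
t1 = store.write("x", "a")
t2 = store.write("x", "b")
print(store.read("x", t1), store.read("x", t2))  # a b
```

Because nothing is ever overwritten, a reader holding an old snapshot keeps a consistent view for free, which is the property that lets an insert-only design avoid locking.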

I asked Barry about using NimbusDB as an analytic database, and he said that it is currently not optimized for analytic queries, but he does not see why it can't be tuned and configured as one, since the underlying architecture doesn't really have to change. During his pitch, though, he did mention that NimbusDB may have challenges with heavy reads and heavy writes. I personally believe that solving the problem of analytic queries on large volumes of data is a much bigger challenge in the cloud, due to its inherently distributed nature. Similarly, building a heavy-insert system is equally difficult. However, most systems fit somewhere in between. This could be a great target market for NimbusDB.

I haven't played around with the database yet, but I intend to. On a cursory look, it seems to defy the CAP theorem; Barry disagrees with me. The founders of NimbusDB have great backgrounds: Barry was the CEO of IONA and StreamBase and has extensive experience in building and leading technology companies. If NimbusDB can execute on the principles it is designed around, it will be a huge breakthrough.

As a general trend, I see a clear transition: people finally agree that SQL is still the preferred interface, but the key is to rethink the underlying architecture.

Update: After I published this post, Benjamin Block raised concerns about NimbusDB not getting the CAP theorem. As I mentioned above, I had the same concern, but I would give them the benefit of the doubt for now and watch the feedback as the product goes into beta.

Check out their slides and the presentation:

Tuesday, April 26, 2011

Gamification Of Enterprise Applications

Gamification is a hot topic for consumer applications. It is changing the way companies, especially start-ups, design their applications. The primary drivers behind the revenue and valuation of consumer software companies are the number of users, traffic (unique views), and engagement (average time spent + conversion). This is why gamification is critical to consumer applications: it is an effort to increase an application's adoption among users and maintain the stickiness, so that users keep coming back and enjoy using the application.

This isn't true for enterprise applications at all.

For consumer applications, the end user and the buyer (if they pay to use it) are the same, e.g. Amazon, eBay, Google, Facebook, and LinkedIn. For enterprise applications, the end user is not the buyer. The buyers of enterprise applications write a check but don't use the applications, and even worse, the end users have little or no influence on what gets bought. On-premise ISVs don't directly benefit from user adoption once the software is sold. This is also true for cloud or SaaS solutions, except that there is no shelfware in SaaS. I would argue that enterprise ISVs, on-premise as well as SaaS, would in fact benefit in the short term from reduced user adoption, since they would save money by supporting fewer users and less activity. Obviously, this is a very myopic view. I hope that enterprise ISVs don't actually think that way, since broader user adoption and deeper engagement are certainly important for the longer-term growth that allows ISVs to build brand loyalty, develop a stronger customer base, and gain opportunities to up-sell and cross-sell.

The fundamental reason behind poor adoption of enterprise applications is that they are simply not easy to use, and they almost always get in the way of the actual work. In many cases, they are designed orthogonally to the very business processes they are supposed to help an end user with. Also, in most cases, these applications are designed top-down to serve the needs of senior management rather than the real needs of end users, e.g. a CRM system that helps management run pipeline reports but doesn't help a rep be more efficient and agile. In cases where broader adoption of an enterprise application is required, it is typically achieved via a top-down mandate, e.g. annoying reminder emails to fill out time sheets. The end users don't see themselves as clear beneficiaries of these applications.

Simply put, the approach to gaining user adoption for consumer applications is a "carrot," and for enterprise applications it is a "stick." But it doesn't have to be that way. There's significant potential to apply gamification elements to increase end-user engagement with enterprise applications, make them sticky and fun to use, and create a win-win situation for the buyers as well as the end users.

Cater to perpetual intermediates:

Have you ever played Angry Birds? If not, I would highly encourage you to do so. It serves the category of people known as "casual gamers." These games have pretty much zero adoption barrier for a novice, but when you get serious, there are enough challenges as you progress to keep you entertained and bring you back. The equivalent of casual gamers in enterprise applications is known as "perpetual intermediates." They don't want to become power users, but they don't want to stay beginners either. The tool should have zero barrier for a first-time user and should have affordances that encourage users to explore and learn more. Microsoft has done a phenomenal job with Word and Excel. They are extremely easy to use for a person who has never used these tools before, and they provide further discovery via contextual menus and reassurance via drop-down menus (and the ribbon in later versions) on the journey to becoming a perpetual intermediate. That's exactly how I expect all enterprise applications to behave.

Let users leverage serendipity:

One of the early features of Google Apps that I really liked: when a user logs into Google Apps for a specific domain, she can see other people in the same domain (same company) who are also using Google Apps. This was not a task that someone explicitly wanted to accomplish, but sheer serendipity allowed them to discover other people and eventually helped them collaborate. If there's an element of surprise in an app, that experience typically leaves a positive impression on the user. How many times have you run into someone in a cafeteria or a hallway and found that short, tacit conversation extremely valuable? ISVs should strive to create this experience in their applications. Foursquare's feature that lets users know who else is at a venue, Facebook Places' push notification when friends check in at a place close by, and certain activity feeds that passively push information to users are all examples that leverage serendipity.

Design for teams over individuals:

The gamification elements of consumer applications target individuals, but that's not how corporations are run. In corporations, work gets done by a team, not by individuals - it's a team sport. It's the team, not the individual, that wins or loses. Also, in most consumer applications, individuals don't compete with other individuals on aspects beyond the application itself. The employees of a corporation aren't necessarily known for healthy competition, and gamification rewards might aggravate existing rivalries. Badges are a digital reward, an accomplishment of some kind. Consumer companies are still struggling to take badges beyond reputation. I clearly see an opportunity to link reputation, gained through some kind of contribution, to an economic reward. I know of a case where a manager set aside 20% of the team bonus based on contributions to a group wiki, as a means to open up information and help others. It did work. However, I would be careful in setting up these kinds of systems. The reward model, if not applied correctly, could backfire. On the other hand, it's a gamification element that holds significant potential. It's a dagger; use it carefully.

Balance simplicity and productivity:

Simplicity is one of the simplest (no pun intended), yet most ignored and least understood, gamification elements. As I mentioned above, systems designed for perpetual intermediates should be simple to get started with; they can get far more complex as you explore more features. But there's another class of systems that people use only occasionally, e.g. leave requests and annual goal setting. It's far more important to keep these systems simple at all levels. Imagine the experience of going from one carnival stall to another and playing all the games: you need little or no instruction. Carnival games are derived from a few basic games with a few twists, and those twists don't require people to climb a steep learning curve. That's how applications that people rarely use should be designed: they should use the affordances and principles that users have witnessed and experienced somewhere else, and they should be broken down like carnival stalls to make the journey easy and fun.

Serious gamers prefer power over simplicity. They like to use shortcuts and a zillion combinations of all the keys on their consoles to move quickly inside the game. This is exactly the behavior of the power users of enterprise applications. An Accounts Receivable (AR) interface should not force an AR clerk to re-learn how to create an invoice every time she opens the application. She has learned the ropes, she expects to be productive, and she wants to be faster and better than others. The tools should provide enough "power" features to make such users successful.

Photo credit: ccarlstead

Sunday, April 10, 2011

Taking The Quotes Out Of "Design Thinking"

Bruce Nussbaum, a design thinking thought leader and professor of Innovation and Design at Parsons The New School for Design, recently wrote that Design Thinking Is A Failed Experiment.

He claims that:

"Design Thinking has given the design profession and society at large all the benefits it has to offer and is beginning to ossify and actually do harm."

Rubbish.

I would argue otherwise. Design thinking is not a catchphrase anymore, and that perhaps is an issue for someone like Bruce, who wants to invent a new catchphrase to sell his book. When I tweeted his post, Enric Gili - a friend, co-worker, and design thinker whom I respect - had this to say:


I couldn't agree more. I have learned, practiced, and taught design thinking for a living. I have worked with folks from IDEO closely, very closely. I have mentored students at the Stanford d.school, and I live and breathe design thinking. I don't think of it as a method that goes out of fashion. For me, it's a religion, a set of values, and an approach that I apply to everything I do on a daily basis.

I have taken the quotes out of "design thinking".

Just as I don't get excited by the rounded corners and gradients of Web 2.0, I don't think of design thinking as a voodoo doll. To some, this appears to be a failure of design thinking. Design thinking has gone mainstream; it is not dead. What is dead is the belief that it's a process framework that can fix anything and even cook dinner for you. Design thinking is an approach that codifies a set of values. It is not an innate skill; it can be taught, gained, and practiced.

"I place CQ within the intellectual space of gaming, scenario planning, systems thinking and, of course, design thinking. It is a sociological approach in which creativity emerges from group activity, not a psychological approach of development stages and individual genius."

Design thinking is ambidextrous; it advocates abductive as well as deductive thinking. The "design" in design thinking is an integrative discipline. As my boss used to tell me, you can't have a Ph.D. in design - unless you're a smartypants, it doesn't make sense. If CQ is a sociological-only approach, it fundamentally defies the inclusive and integrative values of design, which are a vital driver of creativity.

"It’s 2020 and my godchild Zoe is applying to Stanford, Cambridge, and Tsinghua universities. The admissions offices in each of these top schools asks for proof of literacies in math, literature, and creativity. They check her SAT scores, her essays, her IQ, and her CQ."

It's 2020, IDEO has gone out of business, and so has the d.school. I am applying for a new job and they measure my CQ. Perhaps I fail miserably at this CQ thing. Do I care? Absolutely not. I have my design thinking value system, which may not be catchy enough to sell a book, but is good enough to get my job done, spectacularly.

Creative Quotient? Give me a break.

Thursday, March 31, 2011

It's 1999 Again: The Bubble 2.0 And Talent Wars Of The Silicon Valley

I have been living in the Silicon Valley for a while, and sure enough, I haven't forgotten the dot-com days. A few days back, on my way to the San Francisco airport, I saw a billboard by AOL advertising that they are cool (again!). I also noticed that the parking lots alongside 101 weren't that empty. I told myself, "Man! This does feel like 1999."



The smart people - entrepreneurs, VCs, and analysts - that I talk to tell me that we're in a bubble. They call it Bubble 2.0. Perhaps they're right. Company valuations are through the roof: Facebook is valued around $75 billion, and Color, on its launch day, had $40 million in the bank. The angel, super-angel, and incubator investment deal flow is bringing talent to the Valley, and all these young, smart entrepreneurs are working on some of the coolest things that I have ever seen. But there's a talent side that I am worried about. What this influx of easy venture capital has done is set companies waging talent wars. For companies such as Google, attracting and retaining talent has become very difficult. Facebook and Twitter are the new Google, and Quora is the new Facebook. The talent acquisitions that worked in the past, such as Facebook acquiring FriendFeed, have started to fall apart as founders realize that serial entrepreneurship is a much better option, one that allows them to control their own destiny rather than trusting someone else's innovation engine.



I like the creative ways in which start-ups try to attract talent. When Google launched a sting operation against Bing, they took the honeypot keyword "hiybbprqag" used in the sting operation, registered the domain http://www.hiybbprqag.com, and redirected it to the Google Jobs page. They received a few thousand resumes that week. I am seeing more and more creative techniques that companies use to attract talent. In the Valley, the value proposition for a killer designer or a super-geek programmer to work for you has to extend beyond the basics. This is especially true under the current circumstances, where there is a stunningly short supply of designers and developers.



The talent war is for real. It's easy to get money and get started on an idea, but real success requires a great team composition that is not easy to achieve. That's the reality of the start-up world, and we should recognize that people matter even more than ever before. If you think retaining talent is hard, gaining talent is much harder. I also foresee that these new millionaires will most likely angel-invest their money in new start-ups. This floodgate will result in more start-ups competing for talent, possibly against the marketing budgets of the incumbents. But then, if we believe it's a bubble, it has got to burst one day, and when that happens, it won't be pretty.