Tuesday, December 28, 2010

Research Report: 2011 Cloud Computing Predictions For Vendors And Solution Providers

This blog post was jointly authored by @Chirag_Mehta (Independent Blogger On Cloud Computing) and @rwang0 (Principal Analyst and CEO, Constellation Research, Inc.)


As Cloud Leaders Widen The Gap, Legacy Vendors Attempt A Fast Follow

Cloud computing leaders have innovated with rapid development cycles, true elasticity, pay-as-you-go pricing models, try-before-you-buy marketing, and growing developer ecosystems. Once dismissed as a minor blip and a nuisance by the legacy incumbents, cloud leaders have left the vendors who scoffed at them scrambling to catch up across each of the four layers of cloud computing (i.e. consumption, creation, orchestration, and infrastructure) or face peril in both revenues and mindshare (see Figure 1). 2010 saw an about-face, with most vendors dipping a toe into the inevitable. As vendors put their full marketing push behind cloud in 2011, customers can expect that:

Figure 1. The Four Layers Of Cloud Computing




General Trends
  • Leading cloud incumbents will diversify into adjacencies: The incumbents, mainly through acquisitions, will diversify into adjacencies as part of an effort to expand their cloud portfolios. This will blur the boundaries between the cloud, storage virtualization, data centers, and network virtualization. Cloud vendors will seek tighter partnerships across the 4 layers of cloud computing as a benefit to customers. One side benefit - partnerships serve as a precursor to mergers and as a defensive position against legacy on-premises mega vendors playing catch-up.

  • Cloud vendors will focus on the global cloud: The cloud vendors who started with North America and then followed with the European market will now likely expand into Asia and Latin America. Some regions such as Brazil, Poland, China, Japan, and India will spawn regional cloud providers. The result - accelerated cloud adoption in countries that had resisted using a non-local cloud provider. Cloud will prove popular in countries where software piracy has been an issue.

  • Legacy vendors without true cloud architectures will continue to cloud-wash with marketing FUD: Vendors who lack the key elements of cloud computing will continue to confuse the market with co-opted messages on private cloud, multi-instance, virtualization, and point-to-point integration until they have acquired or built the requisite cloud technologies. Expect more old wine (and vinegar, not balsamic but the real sour kind, in some cases) in new bottles: legacy vendors will redefine what cloud means based on what they can package from their existing efforts, without rethinking the end-to-end architecture and product portfolio from the ground up.

  • Tech vendors will make the shift to information brokers: SaaS and cloud deployments provide companies with hidden value and software companies with new revenue streams. Data will become more valuable than the software code. Three future profit pools will include benchmarking, trending, and prediction. The market impact - new service-based sub-categories such as data-as-a-service and analysis-as-a-service will drive information brokering and future BPO models (a minimal sketch of the benchmarking idea follows this list).
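To make the information-brokering idea concrete, here is a minimal sketch in Python (all field names and numbers are invented for illustration) of how a SaaS vendor could turn anonymized tenant data into a benchmarking service:

    from statistics import median

    def benchmark(tenant_metrics, subscriber_id):
        """Compare one subscriber's metric against the anonymized pool.

        tenant_metrics: dict mapping tenant id -> metric value
        (e.g., days sales outstanding) collected across all tenants.
        """
        pool = sorted(tenant_metrics.values())
        own = tenant_metrics[subscriber_id]
        # Percentile rank of the subscriber within the anonymized pool
        rank = sum(1 for v in pool if v <= own) / len(pool)
        return {"your_value": own,
                "pool_median": median(pool),
                "percentile": round(rank * 100)}

    print(benchmark({"t1": 42, "t2": 35, "t3": 58, "t4": 47}, "t2"))

The same anonymized pool, accumulated over time, is what would feed the trending and prediction profit pools as well.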
SaaS (Consumption Layer)
  • Everyone will take the SaaS offensive: Every hardware vendor and system integrator seeking higher profit margins will join the cloud party. Software is the key to future revenue growth, and a cloud offense ensures the highest degree of success with the lowest risk. Hardware vendors will continue to acquire key integration, storage, and management assets. System integrators will begin by betting on a few platforms, eventually realizing they need to own their own stack or face a replay of the past stack wars.
  • On-premise enterprise ISVs will push for a private cloud: The on-premise enterprise ISVs are struggling to keep up their on-premise license revenue and are not yet ready to move to SaaS because of margin cannibalization fears, a lack of scalable platforms, and a dearth of experience running a SaaS business from a sales and operations perspective. These on-premise enterprise software vendors will make a final push for an on-premise cloud that mimics the behavior of a private cloud. Unfortunately, this will essentially be a packaging exercise to sell more on-premise software. This flavor of cloud will promise cloud benefits delivered to a customer's door, such as pre-configured settings, an improved lifecycle, and a black-box appliance. These are not cloud applications, but they will be sold and marketed as such.
  • Money and margin will come from verticalized cloud apps: Last-mile solutions continue to be a key area of focus. Providers with business process expertise gain new channels to monetize vertical knowledge. Expect an explosion of vertical apps by the end of 2011. More importantly, as buying power shifts away from IT toward the lines of business, highly verticalized solutions solving specific niche problems will have the greatest opportunities for market success.
  • Many legacy vendors might not make the transition to the cloud and will be left behind: Many vendors, especially the legacy public ones, lack the financial wherewithal and investor stomach to weather declining profit margins and lower average sales prices. In addition, most vendors will not have the credibility to shift and migrate existing users to newer platforms. Legacy customers will most likely not migrate to new SaaS offerings due to a lack of functional parity and the inability to migrate existing customizations.
  • Social cloud emerges as a key component platform: The mature SaaS vendors that have optimized their "cloud before the cloud" platforms will likely add the social domain on top of their existing solutions to leverage their existing customer base and network effects. Expect to see some shake-out in the social CRM category. A few existing SCRM vendors will deliver more and more solutions from the cloud and will further invest in their platforms to make them scalable, multi-tenant, and economically viable. Vendors can expect some more VC investment, a possible IPO, and consolidation across all the sales channels.
DaaS & PaaS (Creation and Orchestration Layers)
  • Battle for PaaS begins with developers: Winning developers' hearts and minds will drive the key goals of PaaS providers. As mobile, social, and cloud intersect, expect new battle lines to be drawn by existing vendors seeking entry into the cloud. The first platform to enable write once, deploy anywhere will win. PaaS vendors will seek to incorporate the latest disruptive technologies in order to attract the right class of developers and drive continuous innovation into the platform.
  • Vendors must own the platform (both DaaS and SaaS) to survive: ISVs who cede investment in their own cloud platform to other ISVs will be relegated to second-class citizens. Despite the tremendous upfront cost savings, these platform moves cut off future revenue streams as the stack wars move to the cloud. For example, ISVs will avoid Java to mitigate risk with Oracle or IBM. The ability to control information brokering services will be limited to the platform owner.
  • Tension between indirect channel partners and vendors in the cloud will only increase: Cloud shifts customer account control to the vendor. Partners who wholeheartedly embrace the cloud risk losing direct relationships with their customers. In the case of .NET development on Azure, greater allegiance by partners to Microsoft will result in less account control for the partners.
  • PaaS will be modularized and niche: New PaaS vendors will focus on delivering specific modules to compete with end-to-end application platforms. One approach - dominate niche areas in the cloud such as programming language runtimes, social media proxies, algorithmic SDKs, etc. Expect more players to jump in to fill big gaps in big data, predictive analytics, and information management.
  • Mobile app development will move to the cloud: App dev professionals and developers want one place to reach the mobile enterprise to build, manage, and deliver. App dev life cycles will follow the delivery models, and device management will prove to be the keystone of a complete development experience. Vendors should expect the cloud to be the predominant channel for delivering mobile apps to end users. Success will require seamless management of extensions and disconnected support.
IaaS (Infrastructure Layer)
  • Cloud management will continue to grow and consolidate: Cloud management tools saw significant growth and investment in the last couple of years. This trend will continue. Expect to see a lot more investment in this category as increasing customer adoption drives demand for tools to manage hybrid landscapes. Also expect consolidation in this category as several VC-backed start-ups seek profitable and graceful exits.
  • Cloud storage will sell like hot cakes: Explosive growth of information in many verticals among early adopters already factors into this fast-growing category. With more and more data moving to the cloud, customers can anticipate significant innovation in this category, including SSD-based block storage, replication, security, alternate file systems, etc. The data-as-a-service and NoSQL PaaS categories will further boost the growth.
  • NoSQL will skyrocket in market share and acceptance: Substantial growth in the number of NoSQL companies reflects an emerging trend of dumping the infrastructure of SQL for non-transactional applications. The cloud inherently makes a great platform for NoSQL, and that further drives the growth of data-as-a-service and storage on the cloud.

The Bottom Line For Vendors (Sell Side)

Cloud ushers in a new era of computing that will displace the existing legacy vendor hegemony. The many vendors caught off guard by the shift in both technology and user sentiment must quickly make strategic course corrections or face extinction. Here are some recommendations for vendors making the shift to cloud:
  1. Embrace, don't wait, don't even hesitate: Which is worse: cannibalizing your margins or not having margins to cannibalize? Faster time to market and greater customer satisfaction will pay off. The move to cloud ensures a seat at the table for the next generation of computing.
  2. Begin all new development projects in the cloud: The rapid development cycles of cloud projects ensure that innovation will meet today's time-to-market standards. Test out new projects in the cloud and experience rapid provisioning and elasticity. However, don't forget to fail fast and recover quickly.
  3. Avoid investing in platform-led apps: Apps should drive platform design, not the other way around. Form really does follow function in the cloud. Platform designs must focus on agility and scale. Apps prove out what's really needed versus what's theoretical. Plan for social, mobile, analytics, collaboration, and unified communications, but deliver only when it makes business sense.
  4. Focus on developers, developers, and developers: Steve Ballmer is right. Success in the cloud will require bringing the developers along on the PaaS journey. Don't make them wait until the platform is done. Otherwise, it may be too late for the company and its developer ecosystem.
  5. Prioritize power usage effectiveness (PUE): As with the factories at the turn of the last century, IaaS will be the heart of delivery. Companies with the lowest cost of computing will win and will be able to pass cost savings on to their customers or pocket the margin. Further, data center efficiencies do their part in green tech initiatives (see the sketch after this list).
  6. Help customers simplify their landscape: Build compelling business cases to shift from legacy infrastructure to cloud efficiencies. Lead the race to optimize legacy at your competitor’s expense.
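On recommendation 5: PUE is simply the total power a facility draws divided by the power that actually reaches the IT equipment, with 1.0 as the unattainable ideal. A quick sketch with hypothetical numbers:

    def pue(total_facility_kw, it_equipment_kw):
        """Power usage effectiveness: total facility power divided by
        power delivered to the IT equipment. 1.0 is the ideal."""
        return total_facility_kw / it_equipment_kw

    # Hypothetical: a 10 MW facility delivering 7 MW to the servers.
    # The most efficient operators have reported figures around 1.1-1.2.
    print(pue(10000, 7000))   # ~1.43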
Disclaimer: The views expressed in this post are mine and not those of my current or past employers. This is my independent blog.

Friday, December 17, 2010

Salesforce.com's $212 Million Acquisition Of Heroku - A Sparkling Gem In The Radiant Future Of Cloud And PaaS

I met James Lindenbaum, a founder of Heroku, in early 2009 at the Under The Radar conference in Mountain View. We had a long conversation about the cloud as a great platform for Ruby, why Ruby on Rails is a better framework than PHP, and the viability of PaaS as a business model. He also explained to me why he chose to work on Heroku at Y Combinator. I was sold on their future that day and have kept in touch with them since. Last week, Salesforce.com acquired Heroku for $212 million. That's one successful exit, which is good news along many different dimensions.

PaaS is a viable business model

PaaS is not easy. It takes time, laser-sharp focus, and hard work to build something that developers will use and pay for. A few companies have tried and many have failed. But it is refreshing to see the platform and the ecosystem that Heroku has built since its inception. Heroku did not raise a lot of money, kept its costs low, and attracted customers early on. I was told (by Byron, I think) that the average cost for Heroku to run a free Ruby app for a month was $1. They considered it a marketing cost to acquire new customers and convert free customers into paying ones as their apps outgrew the free tier. I cannot overpraise this brilliant execution model. I hope more and more entrepreneurs draw inspiration from the simplicity, elegance, and execution of Heroku's model to help developers deploy, run, and scale their applications on the cloud. In the last few years, we have seen a great deal of innovation in dynamic programming languages, access algorithms, and NoSQL persistence stores. They all require a PaaS that developers can rely on - without worrying about the underlying nuts and bolts - so they can focus on what they are good at: building great applications. If anyone had the slightest doubt about the viability of PaaS as a business model, this acquisition is a proof point that PaaS is indeed the future. Heroku is just the beginning, and I am hoping for more and more horizontal as well as vertical PaaS offerings that entrepreneurs will aspire to build.
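A back-of-the-envelope sketch of why that works, using the ~$1/month figure above and otherwise invented numbers for conversion rate and paid pricing:

    def freemium_economics(free_apps, cost_per_free_app,
                           conversion_rate, revenue_per_paid_app):
        """Monthly cost of the free tier vs. monthly revenue from the
        fraction of apps that have converted to paid plans."""
        cost = free_apps * cost_per_free_app
        revenue = free_apps * conversion_rate * revenue_per_paid_app
        return cost, revenue

    # Hypothetical: 100,000 free apps at $1/month each, 3% converted
    # to a $50/month paid plan.
    cost, revenue = freemium_economics(100000, 1.0, 0.03, 50.0)
    print(cost, revenue)   # $100,000 of "marketing" vs $150,000/month

And since converted apps keep paying every month while each free app costs roughly the same $1, the economics only improve as the paid cohort accumulates.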

Superangels and incubators do work

There have been many debates on the viability of the investing approach of the superangels and the incubators, with people questioning whether thin-slicing the investment across tens and hundreds of companies would yield returns similar to those of traditional venture capital. I also blogged about the imminent change in the VC climate and decided to watch their returns. The numbers are in with Heroku. It's a first proof point that a superangel or incubator approach does not, structurally, limit the return on the investment. I believe in investors investing in the right people solving the right problems. If you ever meet James and hear him passionately talk about Ruby, the Heroku platform, and the developer community, you will quickly find out why they were successful. Hats off to YC on finding this "jewel". There is no such thing as too little investment, or too many companies.

Ruby goes enterprise

I know many large ISVs that have been experimenting with Ruby for a while, but typically these efforts are confined to a few small projects. It's good to see that Ruby now has a shot at much broader adoption. This would mean more developers learning Ruby, cranking out great enterprise gems, embracing Git, and hopefully open-sourcing some of their work. I have had many religious discussions with a few cloud thought leaders and bloggers in the past few months regarding the boundaries of PaaS. The boundaries have always been blurry - somewhere between SaaS and IaaS - but I don't care. My heart is in delivering applications off the cloud that scale, deliver compelling experiences, and leverage economies of scale and network effects. To me, PaaS is a means to an end and not the end. I am hoping that the acquisition of a PaaS vendor by a successful SaaS vendor will make Ruby more attractive to enterprise ISVs and non-Ruby developers.

I have no specific insight into what Salesforce.com will do with Heroku, but I hope they make a good home for Heroku, where it can flourish and continue to do great work on Ruby and PaaS. That is what a cloud and Ruby enthusiast would wish for.

Monday, November 15, 2010

10 Business Books In 2010

These are the 10 business books published in 2010 that I would recommend you read. Originally, I wrote this on Quora, in response to "What are must read business books of 2010?". Yes, I have read all of them, and no, they are not in any specific order.

1) What the Dog Saw by Malcolm Gladwell

I am a big fan of Malcolm Gladwell and his style. This is a compilation of his "The New Yorker" stories. Even though the articles are available on his website, this book makes for a great read.

2) Cognitive Surplus by Clay Shirky

The next time someone asks you how come people have so much time to blog, answer questions on Quora, or contribute to Wikipedia, ask them to read this book.

3) The Big Short by Michael Lewis

Want to know all about CDOs and subprime mortgages and still be entertained? This is the book. Michael Lewis has great storytelling skills that make serious and complex topics fun to read. I like this book as much as I liked Moneyball - http://amzn.to/b9YPx9

4) Open Leadership by Charlene Li

If you liked Groundswell - http://amzn.to/c8faH5 - you will like this as well. If you are interested in organizational transformation through social media, this will make a great read. Social media adoption can certainly make the leaders more credible, open, and transparent. Being a social media freak and an enterprise 2.0 strategist, I loved this book.

5) Engage by Brian Solis

This book is Seth Godin meets social media. It's a must-read if you are a marketer trying to understand the impact of social media on your brand and working on engaging your customers through social media. Brian Solis has a fluid style with a lot of relevant examples.

6) The New Polymath by Vinnie Mirchandani

Vinnie is a great enterprise software analyst and a prolific blogger. I closely follow his work. This is an upbeat book that will excite technologists as well as business folks. If you think you have a stretch goal and want to change the world, this book will further stretch your stretch goals and give you a reason and purpose to get out of bed every morning and run for it.

7) Rework by Jason Fried

I have followed 37Signals and Jason's blog for a while. This book puts everything together with illustrations and a simple style, making it easy to read - just like 37Signals. If you are itching to be an entrepreneur, this might make you take that leap. If you're starting out and want inspiration and design principles, this is the book. All design is re-design, and so is this book.

8) The Facebook Effect by David Kirkpatrick

Some watch the movie; I prefer to read the book. The book is more accurate than the movie. Well, duh. David is a great writer, and he used the access he had to Zuckerberg and Facebook to produce a great book. It's quite insightful.

9) Gamestorming by Dave Gray

I love XPLANE. They do a great job, and now they are part of the Dachis Group, where I expect them to do even better. It's incredibly difficult to take complex concepts and simplify them to communicate to any audience. The book outlines great approaches to accomplishing that simplicity and facilitating learning, discovery, and decision making.

10) Delivering Happiness by Tony Hsieh

Zappos is a great company. I have learned a lot from its culture and from Tony's management style. This is a must-read if you believe you want to excel in serving your customers and have your entire team live by those values.

And this is the first 2011 book that you may want to read:


The New Capitalist Manifesto by Umair Haque. Knowing Umair, this will be a great book.

Tuesday, November 9, 2010

Challenging Stonebraker’s Assertions On Data Warehouses - Part 2

Check out Part 1 if you haven't already read it, to better understand the context and my disclaimer. This is Part 2, covering assertions 6 through 10.

Assertion 6: Appliances should be "software only."

“In my 40 years of experience as a computer science professional in the DBMS field, I have yet to see a specialized hardware architecture—a so-called database machine—that wins.”

This is the black swan effect: just because someone hasn't seen an event occur in his or her lifetime, it doesn't mean that it won't happen. This statement could also be re-written as "In my 40 years of experience, I have yet to see a social network that is used by 500 million people." You get the point. I would be the first to vote in favor of commodity hardware against specialized hardware, but there are very specific reasons why specialized hardware makes sense in some cases.

“In other words, one can buy general purpose CPU cycles from the major chip vendors or specialized CPU cycles from a database machine vendor.”

Specialized machines don’t necessarily mean specialized CPU cycles. I hope the term “CPU cycles” is used as a metaphor and not in its literal meaning.

“Since the volume of the general purpose vendors are 10,000 or 100,000 times the volume of the specialized vendors, their prices are an order of magnitude under those of the specialized vendor.”

This isn’t true. The vendors who make general-purpose hardware also make specialized hardware, and no, it’s not an order of magnitude more expensive.

“To be a price-performance winner, the specialized vendor must be at least a factor of 20-30 faster.”

It’s a wrong assumption that BI vendors use specialized hardware only for performance reasons. The “specialized” part of an appliance is, in many cases, simply a specialized configuration. The appliance vendors also leverage their relationships with the hardware vendors to fine-tune the configuration based on their requirements, negotiate a hefty discount, and execute a joint go-to-market strategy.

Enterprise software follows value-based pricing and not cost-based pricing. The price difference between a commodity system and a specialized appliance is not just the difference in the cost of the hardware it runs on.

“However, every decade several vendors try (and fail).”

I am not sure what the success criteria behind this assertion are for declaring someone a winner or a failure. The acquisitions of Netezza, Greenplum, and Kickfire are recent examples of how well the appliance companies have performed. The incumbent appliance vendors are doing great, too.

“Put differently, I think database appliances are a packaging exercise”

The appliances are far more than a packaging exercise. Beyond making sure that the appliance software works on the selected hardware, commoditized or otherwise, vendors provide customers with a black-box lifecycle management approach. The upfront cost of an appliance is a small fraction of the overall money that customers end up spending during the entire lifecycle of an appliance and the related BI efforts. Customers do welcome an approach where they are responsible for managing one appliance instead of five different systems at ten different levels with fifteen different technology stack versions.

Assertion 7: Hybrid workloads are not optimized by "one-size fits all."

Yes, I agree, but that’s not the point. It’s difficult to optimize hybrid workloads for a row or a column store, but it is not as difficult for a hybrid store.

“Put differently, two specialized systems can each be a factor of 50 faster than the single "one size fits all" system in solution 1.”

Once again, I agree, but it does not apply to all situations. As I discussed earlier, performance is not the only criterion that matters in the BI world. In fact, I would argue the opposite. Precisely because the OLTP and OLAP systems are orthogonal, vendors compromised everything else to gain performance. Now that’s changing. Let’s take the example of an operational report. This is the kind of report that only has value if consumed in real time. For such reports, users can’t wait until the data is extracted out of the OLTP system, cleaned up, and transferred into the OLAP system. Yes, it could be 50 times faster, but completely useless, since you missed the boat.

The hybrid systems, the ones that combine OLTP and OLAP, are fairly new, but they promise to solve a very specific problem: true real-time. While the hybrid systems evolve, the computational capabilities of OLTP and OLAP systems have started to change as well. I now see OLAP systems supporting write-backs with reasonable throughput and OLTP systems with good BI-style query performance, all achieved through modern hardware and clever use of architectural components.

Let’s not forget what optimization really is. It means desired functionality at reasonable performance. A real-time report that takes 10 seconds to run could be far more valuable than a report that runs in under ten milliseconds, three days later.

“A factor of 50 is nothing to sneeze at.”

Yes, point taken. :-)

Assertion 8: Essentially all data warehouse installations want high availability (HA).

No, they don’t. This is like saying all customers want a five-nines SLA in the cloud. I don’t underestimate the business criticality of a DW if it goes down, but not all DW are used 24x7 and mission critical. One size doesn’t fit all. And if your DW is not required to be highly available, you need to ask yourself whether it is fair for you to pay the HA architectural cost if you don’t want it. Tiered SLAs are not new, and tiered HA is not a terrible idea (see the downtime-budget sketch below).
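For context, here is what each SLA tier actually buys in yearly downtime budget (a quick sketch; the engineering cost of each extra nine typically grows far faster than linearly):

    def downtime_budget(availability, minutes_per_year=365 * 24 * 60):
        """Minutes of allowed downtime per year for a given SLA."""
        return (1 - availability) * minutes_per_year

    for nines in (0.99, 0.999, 0.9999, 0.99999):
        print(f"{nines:.3%} -> {downtime_budget(nines):8.1f} min/year")
    # Five nines allows only ~5.3 minutes of downtime per year;
    # two nines allows ~3.7 days. Very different price tags.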

Now, let’s talk about the DWs that do need to be highly available.

“Moreover, there is no reason to write a DBMS log if this is going to be the recovery tactic. As such, a source of run-time overhead can be avoided.”

I am a little confused by how this is worded. Which logs are we referring to - the source systems’ or the target systems’? The source systems are beyond the control of a BI vendor. There are newer approaches to designing an OLTP system without a log, but that’s not up for discussion under this assertion. If the assertion is referring to the logs of the target system, how does that become a run-time overhead? Traditional DW systems are read-only at runtime; they don’t write logs back to the system. If he is referring to logs written while the data is being moved into the DW, that’s not really run-time, unless we are referring to a hot transfer.

There is one more approach, NoSQL, where eventual consistency is achieved over a period of time and the concept of a “corrupted system” is going away. Incomplete data is an expected behavior, and people should plan for it. That’s the norm, regardless of whether a system is HA or not. Recently, Netflix moved some of its applications to the cloud and designed a background data fixer to deal with data inconsistencies.
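A minimal sketch of that idea, with hypothetical records shaped as (version, state) pairs so that max() acts as a last-writer-wins resolver:

    def background_fixer(primary, replica, resolve=max):
        """Sweep two eventually consistent replicas and reconcile
        divergent records. `resolve` picks a winner; with records as
        (version, state) tuples, max() keeps the latest version."""
        for key in set(primary) | set(replica):
            a, b = primary.get(key), replica.get(key)
            if a != b:
                winner = resolve(v for v in (a, b) if v is not None)
                primary[key] = replica[key] = winner

    p = {"u1": (3, "paid"), "u2": (1, "trial")}
    r = {"u1": (2, "free"), "u3": (1, "trial")}
    background_fixer(p, r)
    print(p == r)   # True -- both replicas converge; u1 resolves to v3

The point is that reconciliation runs after the fact, in the background, instead of paying for synchronous consistency on every write.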

HA is not black and white, and there are many more approaches, beyond logs, to achieve the desired outcome.

Assertion 9: DBMSs should support online reprovisioning.

“Hardly anybody wants to take the required amount of down time to dump and reload the DBMS. Likewise, it is a DBA hassle to do so. A much better solution is for the DBMS to support reprovisioning, without going offline. Few systems have this capability today, but vendors should be encouraged to move quickly to provide this feature.”

I agree. I would add one thing. The vendors, even today, have trouble supporting offline provisioning to cater to increasing load. Online reprovisioning is not trivial, since in many cases it requires vendors to re-architect their systems. The vendors typically get away with this, since most customers don’t do capacity planning in real time. Unfortunately, traditional BI systems are not commodities where customers can plug in more blades when they want them and take them out when they don’t.

This is the fundamental premise behind why the cloud makes a great BI platform: it addresses such reprovisioning issues with elastic computing. Read my post “The Future Of BI In The Cloud” if you are inclined to understand how horizontal scale-out systems can help.

Assertion 10: Virtualization often has performance problems in a DBMS world.

This assertion, and the one before it, made me write the post “The Future Of BI In The Cloud”. I won’t repeat what I wrote there, but I will quickly highlight what is relevant.

“Until better and cheaper networking makes remote I/O as fast as local I/O at a reasonable cost, one should be very careful about virtualizing DBMS software.”

Virtualizing I/O is not a solution for a large DW with complex queries. However, as I wrote in that post, a good solution is not to make remote I/O faster, but rather to tap into the innovation of software-only SSD block I/O that is local.

“Of course, the benefits of a virtualized environment are not insignificant, and they may outweigh the performance hit. My only point is to note that virtualizing I/O is not cheap.”

This is what a disruption initially looks like. You start seeing good-enough value in an approach for certain types of solutions, while it still seems expensive for other sets of solutions. Over a period of time, rapid innovation and economies of scale remove this price barrier. I think that’s where virtualization stands today. Organizations have started to use the cloud for IaaS and SaaS for a variety of solutions, including good-enough self-service BI and performance optimization solutions. I expect to see more and more innovation in this area, to the point where traditional large DW will be able to get enough value out of the cloud even after paying the virtualization overhead.

Wednesday, November 3, 2010

Bottom Of The Pyramid – Nokia’s Second Act

Two-thirds of the world’s 4.6 billion mobile users live in the emerging markets. Millions of these users live below the poverty line and are part of the bottom of the pyramid (BOP). Nokia is the market leader in these emerging markets, at least for now, with 34% market share. It’s clear from Nokia’s rapidly declining market share and the appointment of a new CEO, Stephen Elop, that Nokia needs a second act. I believe the BOP could be the next big thing for Nokia.

A recent NYTimes story highlights a Nokia service that supplies commodity data to farmers in India via text message. So far, 6.3 million people have signed up for this service. Nokia is planning to roll out this service, Life Tools, in Nigeria as well. It is part of their Ovi mobile business.

I have written before about the impact of cloud computing and mobile on the bottom of the pyramid and the importance of public policy innovation in emerging markets. The BOP is one of the biggest opportunities that Nokia currently has. Nokia has been losing market share in the smartphone category, and it is going to get increasingly difficult for Nokia to compete with Apple, Google, RIM, and now Microsoft. However, those very same vendors will find it equally difficult to move down the chain to compete with Nokia in the emerging markets.

One of the biggest business challenges in catering to the BOP is not the desire to market or a product to offer, but the lack of direct access to these consumers. The people at the BOP are incredibly difficult to reach. I have seen many go-to-market plans fail because it is either impossible or prohibitively expensive to market to these consumers. One of the biggest assets Nokia has is the relationship - the channel - with the people at the BOP. Now is the time to focus on and leverage that channel by providing them with content and services that could be served on these phones, via a strong platform built for the BOP and a vibrant ecosystem built around it.

My two cents: exit the smartphone category and double down on the investment to serve the people at the bottom of the pyramid.

Nokia, that could be your second act.

Thursday, October 28, 2010

Challenging Stonebraker’s Assertions On Data Warehouses - Part 1

I have tremendous respect for Michael Stonebraker. He is an astute visionary. What I like most about him is his drive and passion to commercialize academic concepts. ACM recently published his article “My Top 10 Assertions About Data Warehouses." If you haven’t read it, I would encourage you to do so.

I agree with some of his assertions and disagree with a few. I am grounded in reality, but I do have a progressive viewpoint on this topic. This is my attempt to bring an alternate perspective to the rapidly changing BI world that I am seeing. I hope the readers take it as constructive criticism. This post has been sitting in my drafts folder for a while; I finally managed to publish it. This is Part 1, covering assertions 1 to 5. Part 2, with the rest of the assertions, will follow in a few days.

“Please note that I have a financial interest in several database companies, and may be biased in a number of different ways.”

I appreciate Stonebraker’s disclaimer. I do believe that his view is skewed toward what he has seen and invested in. I don’t believe there is anything wrong with that. I like it when people put their money where their mouth is.

As you might know, I work for SAP, but this is my independent blog; these are my views and not SAP’s. I also try hard not to reference SAP products or strategy on this blog, to maintain a neutral perspective and avoid any possible conflict of interest.

Assertion 1: Star and snowflake schemas are a good idea in the data warehouse world.

This reads like an incomplete statement. The star and snowflake schemas are a good idea because they have been proven to perform well in the data warehouse world with row and column stores. However, I have started to see emergent NoSQL-based data warehouse architectures that are far from a star or a snowflake. They are, in fact, schemaless.

“Star and Snowflake schemas are clean, simple, easy to parallelize, and usually result in very high-performance database management system (DBMS) applications.”

The following statement contradicts the statement above.

“However, you will often come up with a design having a large number of attributes in the fact table; 40 attributes are routine and 200 are not uncommon. Current data warehouse administrators usually stand on their heads to make "fat" fact tables perform on current relational database management systems (RDBMSs).”

There are a couple of problems with this assertion:
  1. The schema is not simple: 200 attributes, fat fact tables, and complex joins. What exactly is simple?
  2. Efficient parallelization of a query is based on many factors beyond the schema. How the data is stored and partitioned, the performance of the database engine, and the hardware configuration, to name a few.
"If you are a data warehouse designer and come up with something other than a snowflake schema, you should probably rethink your design.”

Really?

The requirement that the schema be perfect upfront has introduced most of the problems in the BI world. I call it design-time latency: the time between a business user deciding what report/information to request and actually getting it (mostly the wrong one). The problem is that you can only report on what you have in your DW and what’s tuned.

This is why the schemaless approach seems more promising: it can cut down the design-time latency by allowing business users to explore the data and run ad hoc queries without locking down a specific structure.
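A tiny sketch of the contrast, with invented field names: over schemaless records the structure travels with each record, so a new business question is just a new filter, not a schema change:

    # In a star schema, a new question often means new columns, new
    # joins, and re-tuning. Over schemaless records, an ad hoc query
    # is just a filter and an aggregation over whatever is there.
    orders = [
        {"region": "EMEA", "amount": 1200, "channel": "web"},
        {"region": "APJ",  "amount": 800},             # no channel recorded
        {"region": "EMEA", "amount": 450, "channel": "reseller"},
    ]

    web_emea = sum(o["amount"] for o in orders
                   if o.get("region") == "EMEA" and o.get("channel") == "web")
    print(web_emea)   # 1200 -- no upfront schema, no design-time latency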

Assertion 2: Column stores will dominate the data warehouse market over time, replacing row stores.

This assertion assumes that there are only two ways of organizing data: either in a row store or in a column store. This is not true. Look at my NoSQL explanation above, and also at my post “The Future Of BI In The Cloud”, for an alternate storage approach.

This assertion also assumes that access performance is tightly coupled to how the data is stored. While this is true in most cases, many vendors are challenging this assumption by introducing an acceleration layer on top of the storage layer. This approach makes it feasible to achieve consistent query performance through a clever acceleration architecture that acts as an access layer and does not depend on how the data is stored and organized.
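A minimal read-through sketch of that access-layer idea; the slow lookup function below is a stand-in for any underlying store, row, column, or otherwise:

    import time

    class AccessLayer:
        """Answers queries from a fast in-memory tier; falls back to the
        underlying store (whatever its layout) only on a miss."""
        def __init__(self, slow_lookup):
            self.slow_lookup = slow_lookup   # function: query -> result
            self.cache = {}

        def query(self, q):
            if q not in self.cache:
                self.cache[q] = self.slow_lookup(q)   # hit the store once
            return self.cache[q]

    # Simulated slow store: half a second per query, any layout at all.
    store = AccessLayer(lambda q: (time.sleep(0.5), "rows for " + q)[1])
    store.query("sales by region")   # slow: goes to the store
    store.query("sales by region")   # fast: served from the access layer

Real acceleration layers are far more sophisticated (they pre-aggregate, invalidate, and distribute), but the decoupling of access speed from storage layout is the same.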

“Since fact tables are getting fatter over time as business analysts want access to more and more information, this architectural difference will become increasingly significant. Even when "skinny" fact tables occur or where many attributes are read, a column store is still likely to be advantageous because of its superior compression ability."

I don’t agree that the solution is fatter fact tables when business analysts want more information. And even if it is, how will a column store be advantageous once the data grows beyond the point where compression is no longer that useful?

“For these reasons, over time, column stores will clearly win”

Even if it were only about rows versus columns, the column store may not be a clear commercial winner in the marketplace. Runtime performance is just one of many factors that customers consider while investing in DW and business intelligence.

“Note that almost all traditional RDBMSs are row stores, including Oracle, SQLServer, Postgres, MySQL, and DB2.”

Exactly!

The row stores, with optimization and acceleration, have demonstrated reasonably good performance and stayed competitive. Not that I favor one over the other, but not all row-based DW are so large, growing so rapidly, or suffering such serious performance issues as to warrant a switch from rows to columns.

This leads me to my last issue with this assertion: what about a hybrid store - row and column? Many vendors are trying to figure this one out, and if they succeed, it could change the BI outlook. I will wait and watch.

Assertion 3: The vast majority of data warehouses are not candidates for main memory or flash memory.

I am assuming that he is referring to flash as memory and not flash memory as storage. SSD block storage, though, has huge potential in the BI world.

“It will take a long time before main memory or flash memory becomes cheap enough to handle most warehouse problems.”

Not all DW are growing at the same speed. One size does not fit all. Even if I agree that the price won’t go down significantly, at the current price point main memory and flash memory can speed up many DW without breaking the bank.

The cost of flash memory is a small fraction of the overall cost of a DW: hardware, licenses, maintenance, and people. If the added cost of flash memory makes the business more agile, reduces maintenance cost, and allows companies to make faster decisions based on smarter insights, it’s worth it. The upfront capital cost is not the only deciding factor for BI systems.

“As such, non-disk technology should only be considered for temporary tables, very "hot" data elements, or very small data warehouses.”

This is easier said than done. Customers will spend significantly more time and energy on a complicated architecture, isolating the hot elements and running them on a different software/hardware configuration.

Assertion 4: Massively parallel processor (MPP) systems will be omnipresent in this market.

Yes, MPP is the future. No disagreement. The assertion says nothing about on-premise versus the cloud, but I truly believe that the cloud is the future for MPP. There are other BI issues that need to be addressed before the cloud becomes a good platform for a massive-scale DW, but the cloud will beat any other platform when it comes to MPP with computational elasticity.
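The MPP pattern itself is just scatter-gather: partition the data, compute partial aggregates in parallel, and merge. A toy sketch using local processes as stand-ins for elastic cloud nodes:

    from multiprocessing import Pool

    def partial_sum(chunk):
        # Each worker aggregates its own partition independently.
        return sum(chunk)

    if __name__ == "__main__":
        data = list(range(1000000))
        chunks = [data[i::8] for i in range(8)]       # scatter to 8 "nodes"
        with Pool(8) as pool:
            partials = pool.map(partial_sum, chunks)  # parallel partials
        print(sum(partials))                          # gather/merge step

On an elastic cloud, the number of "nodes" in the scatter step can grow and shrink with the workload, which is exactly the computational elasticity argument.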

Assertion 5: "No knobs" is the only thing that makes any sense.

“In other words, look for "no knobs" as the only way to cut down DBA costs.”

I agree that “no knobs” is what customers should strive for to simplify and streamline their DW administration, but I don’t expect these knobs to significantly drive down the overall operational cost, or even just the cost associated with the DBAs. Not all DBAs have a full-time job managing and tuning the DW. DW deployments go through a cycle in which the tasks include schema design, requirements gathering, ETL design, etc. Tuning, or using the “knobs”, is just one of many tasks that DBAs perform. I absolutely agree that no knobs would take some burden off the shoulders of a DBA, but I disagree that it would result in significant DBA cost savings.

For a fairly large deployment, there is significant cost associated with the number of IT layers responsible for channeling reports to the business users. There is an opportunity to invest in the right kind of architecture, the right technology stack for the DW, and the tools on top of it, to help increase the ratio of business users to BI IT. This should also help speed up the decision-making process based on the insights gained from the data. Isn’t that the purpose of having a DW to begin with? I see self-service BI as the only way to make IT scale. Instead of cutting the DBA cost, I would rather focus on scaling BI IT with the same budget and broader coverage among the business users in an organization.

Monday, October 25, 2010

The Future Of BI In The Cloud



Actual numbers vary based on whom you ask, but the general consensus is that Business Intelligence (BI) and analytics in the cloud is a fast-growing market. IDC expects a compound annual growth rate (CAGR) of 22.4% through 2013. This growth is primarily driven by two kinds of SaaS applications. The first kind is a purpose-specific analytics-driven application for business processes such as financial planning, cost optimization, inventory analysis, etc. The second kind is a self-service horizontal analytics application/tool that allows customers and ISVs to analyze data and create, embed, and share analyses and visualizations.
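For a sense of scale, compounding at that rate roughly doubles the market in under four years (the base figure below is invented for illustration, since the IDC base number isn't quoted here):

    def project(base, cagr, years):
        """Compound a base-year market size forward at a given CAGR."""
        return base * (1 + cagr) ** years

    # Hypothetical $1.0B market in 2009 growing at IDC's 22.4% CAGR
    for year in range(2009, 2014):
        print(year, round(project(1.0, 0.224, year - 2009), 2))
    # 2013 -> ~2.24x the 2009 base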

The category that is still nascent, and will require significant work, is traditional general-purpose BI on large data warehouses (DW) in the cloud. For most enterprises, not only are all the DW on-premise, but the majority of the business systems that feed data into these DW are on-premise as well. If these enterprises were to adopt BI in the cloud, it would mean moving all the data, the warehouses, and the associated processes such as ETL into the cloud. But then, some of the biggest opportunities to innovate in the cloud exist outside of it. I see significant potential for black-box, appliance-style systems that sit on-premise and encapsulate the on-premise complexity - ETL, lifecycle management, and integration - of moving the data to the cloud.

Assuming that the enterprises succeed in moving data to the cloud, I see a couple of challenges that, if treated as opportunities, will spur the most BI innovation in the cloud.

Traditional OLAP data warehouses don’t translate well into the cloud:

The majority of on-premise data warehouses run on some flavor of a relational or a columnar database, and most BI tools use SQL to access data from these DW. These databases are not inherently designed to run natively on the cloud. On top of that, the optimizations performed on these DW, such as sharding, indices, compression, etc., don’t translate well into the cloud either, since the cloud is a horizontally elastic scale-out platform and not a vertically integrated scale-up system.

Organizations are rethinking their persistence options, as well as their access languages and algorithms, while moving their data to the cloud. Recently, Netflix started moving its systems into the cloud. It’s not a BI system, but it has similar characteristics, such as a high volume of read-only data, a few index-based look-ups, etc. The new system uses S3 and SimpleDB instead of Oracle (on-premise). During this transition, Netflix picked availability over consistency. Eventual consistency is certainly an option that BI vendors should consider in the cloud. I have also started seeing DW in the cloud that use HDFS, Dynamo, and Cassandra. Not all relational and columnar DW systems will translate well into NoSQL, but I cannot overemphasize the importance of re-evaluating your persistence store and access options when you decide to move your data into the cloud.

Hive, a DW infrastructure built on top of Hadoop, is a MapReduce-meets-SQL approach. Facebook has 15 petabytes of data in its DW running Hive to support its BI needs. Very few companies require such scale, but the best thing about this approach is that you can grow linearly, technologically as well as economically.
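Underneath Hive's SQL, the shape is just map, shuffle by key, reduce. A miniature sketch of what a query like SELECT region, SUM(amount) ... GROUP BY region boils down to, on in-memory toy data:

    from collections import defaultdict

    def map_phase(rows):
        for row in rows:                      # emit (key, value) pairs
            yield row["region"], row["amount"]

    def shuffle(pairs):                       # group values by key
        groups = defaultdict(list)
        for k, v in pairs:
            groups[k].append(v)
        return groups

    def reduce_phase(groups):                 # SUM(amount) per region
        return {k: sum(vs) for k, vs in groups.items()}

    rows = [{"region": "us", "amount": 5}, {"region": "eu", "amount": 3},
            {"region": "us", "amount": 2}]
    print(reduce_phase(shuffle(map_phase(rows))))   # {'us': 7, 'eu': 3}

Each phase parallelizes across commodity nodes, which is why the approach scales linearly in both capacity and cost.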

The cloud is not yet a good platform for I/O-intensive applications such as BI:

One of the major issues with large data warehouses is, well, the data itself. Any complex query typically involves intensive I/O. But I/O virtualization on the cloud simply does not work for large data sets: remote I/O, due to its latency, is not a viable option. Block I/O is a popular approach for I/O-intensive applications. Amazon EC2 does have block I/O for each instance, but it obviously can’t hold all the data, and it’s still a disk-based approach.

For BI in the cloud to be successful, what we really need is the ability to scale out block I/O, just like scale-out computing. The good news is that there is at least one company I know of, Solidfire, working on it. I met Dave, the founder, at the Structure conference reception, and he explained to me what he is up to. Solidfire has a software solution that uses solid state drives (SSD) as scale-out block I/O. I see huge potential in how this can be used for BI applications.

When you put all the pieces together, it makes sense. The data is distributed across the cloud on a number of SSDs available to the processors as block I/O. You run some flavor of NoSQL to store and access this data, leveraging modern algorithms and, more importantly, a horizontally elastic cloud platform. What you get is commodity, blazingly fast BI at a fraction of the cost, with a pay-as-you-go subscription model.

Now, that's what I call the future of BI in the cloud.

Friday, October 15, 2010

Can A Product Manager Be Effective Without Product Design Skills?

I am very passionate about the topic of design and design thinking. When I saw this question on Quora, I decided to post my answer. The following is taken directly from it:

The answer is "Definitely not."

It's not about product design by itself; it's about applying core, transferable product design skills to product management. Let's break it down:

1) Understanding users: Good product designers have great user research, observation, and listening skills to put themselves into the shoes of a user and understand the real, mostly unspoken and latent, needs of the end users.

2) Being self-critical: If you are a trained designer, you stay away from self-referential design, which is a root cause of many failed products. Good product designers are self-critical about their approach and their deliverables and are always open to feedback so they can iterate on their design.

3) Working with designers: If you are a designer, you have great empathy for fellow designers. I have seen products fail simply because the product managers couldn't work with the designers and didn't share the same mindset.

4) A "maker" mentality: The designers are makers. They make things. The product managers typically don't, the engineers do. For a product manager, it's incredibly important to have a "maker" mentality. They should continuously be making and refining, by themselves or with the help of the engineers. The product managers, who believe that their responsibility ends when they are done gathering the requirements are likely to fail, miserably in most cases.

5) A "T-shaped" product manager: If you're a product manager, the vertical line of the "T" is your core PM skills. However, successful product managers go beyond their core skills, the horizontal line in the letter "T", to learn more about product design, engineering etc. This ensures that they have a holistic perspective of the product. That leads me to my last point.

6) General manager: viable, feasible, and desirable: A good product, from a vendor's perspective, is commercially viable, technologically feasible, and desirable by the end users. Many product managers stop at the business needs, but they truly need to go beyond that: work with engineering to make the product technologically feasible, and bring a design mindset to working with the designers to make it desirable by the end users. Product managers should strive for a "general manager" mindset, of which product design is a core element.

Tuesday, September 21, 2010

Telcos Could Be The Future Enterprise Software Vendors For Small Businesses

Having worked on enterprise software product and go-to-market strategy for SMBs (small and medium businesses), I can tell you that these are the most difficult customers to reach, especially the S in SMB. It’s an asymmetric, non-homogeneous market in which the cost of sales can spiral out of control if you don’t leverage the right channels. The competitive landscape varies from region to region and industry to industry. In many cases, instead of competing against a company, you are competing against a human being with paper-based processes.

Tomorrow I am speaking at the Razorsight annual conference on the topic of cloud computing. I am excited to meet their customers, the telcos. While I prepare for my keynote, I can’t stop thinking about the challenges that the telcos face and the opportunities that they are not pursuing. My keynote presentation is about how telcos can leverage the cloud, but this blog post is about how telcos can become successful enterprise software vendors and market their solutions to small businesses.

There are very few things that are common across small businesses. They own a landline (at least for now) and they have Internet access, in many cases from the same vendor. I believe that landlines will become more and more difficult to sell to these customers, but losing a channel - a relationship - would be even worse. If leveraged well, these relationships could be worth a lot more than the landline business as it stands today. Just think about it: selling to small businesses is all about leveraging existing relationships with them. This channel is priceless.

What will it take for the telcos to market products to small businesses?

ISV acquisitions or VAR agreements: If telcos are to bundle software, on-premise or SaaS, they don’t, as organizations, necessarily have the skills or resources to make software for small businesses. This would mean a series of small, niche ISV acquisitions across geographies and industries, plus VAR agreements with current ISVs.

What kind of software can telcos bundle?

There are two kinds: horizontal and vertical. Examples of horizontal software are accounting, payroll, point of sale, etc. Ask Intuit and they will tell you all about the horizontal cash cow. Vertical software is specific to the industry the business is in. One of my favorite companies in this area is OpenTable. If you have made an online reservation at a restaurant, you have most likely used their software. They had a successful IPO last year, and they are on track to become a $100 million company.

Telcos should be doing all of these things. They have cash, and they can borrow cheap money to buy companies. Telcos also have the option to leverage the cloud - their own cloud, in many cases - to provide SaaS solutions to small businesses. They can leapfrog the on-premise ISVs who don’t have access to these customers and are sensitive to margin cannibalization.

Friday, September 10, 2010

Lean Startup Customer Development And IxD Personas

On Quora, Steve Blank asked, "Is it possible to use Lean Startup customer development findings to inform IxD personas?" This post is my response to Steve on Quora:

Absolutely yes.

Pivoting is not just about finding the right business model for a start-up; it is also about nailing down the persona that you are designing your product for. I have seen many start-ups fail because they don't know who the end user is. Creating a persona is an iterative process by itself. Many people focus on the persona as a final artifact, but I believe the journey is more important than the destination. While discovering a persona and iterating on it to make it crisp, the team - dev, marketing, and UX - comes together around a shared understanding of the target end user. The journey brings the empathy that they all internalize, and that influences what they do. The journey includes getting out of the office and talking to the real people who you think would use your product.

A persona requires qualitative discovery as well as validation. It's an instantiation of your customer. Customer discovery, validation, and creation are all directly related to the persona. In fact, I would argue that in many cases knowing the target audience, at a given stage, is far more important than having a perfect product. Plenty of people fixate on building the right product instead of building it for the right people.

Tuesday, September 7, 2010

A Laundromat Entrepreneur

In my previous post, “While Entrepreneurs Scale On The Cloud The Angels Get Supersized”, I wrote about how cloud computing is disrupting the VC industry. Continuing on the thread of entrepreneurship, I am seeing more and more entrepreneurs building applications who do not belong to any formal organization, start-up or otherwise. The definition of a start-up itself is changing, primarily for two reasons: simple and easily accessible PaaS tools to design, run, and maintain applications on the cloud, and access to a marketplace to sell the applications.

We have been witnessing this trend in mobile applications for a while - on Android as well as iPhone, and now iPad. I see the same pattern for cloud-based applications. I have seen many useful, productive, and successful applications designed by individual developers with no affiliation to any organization.

Google has done a great job of designing tools for developers to build applications that run on its cloud and can be sold on its app store. This has, to a large extent, democratized the business of applications that solve niche problems. At the same time, individual developers have started monetizing their work without the overhead of bootstrapping and running a company. While Google’s cloud platform is a generic one, application- and stack-specific PaaS providers such as Salesforce.com and Heroku are also attracting such developers. Intuit’s partner development platform is a great example of a channel platform that allows entrepreneurs to market to the SMB segment, a very difficult segment to reach (a post on that later).

All these trends, collectively, have introduced a new category of entrepreneur: the laundromat entrepreneur.

They are not full-fledged start-ups, but these individuals are also not developing just for fun. These businesses have steady revenue, positive cash flow, and require very little maintenance. Companies such as Help Me - located in Karachi, Pakistan - have built their business model around helping such developers outsource customer support for their existing applications so that they can focus on building new ones. Some of these individual businesses could be worth a few million dollars.

This is a very different business model, one that combines best-of-breed with the long tail. I am quite excited about this new category, since it puts the developers directly in charge of the product and takes them closer to the end users. I am curious to see the life cycle of these laundromats and how they get bought and sold. Many people I have had discussions with claim that we can expect to see plenty of individuals who will own a laundromat portfolio worth five to six million dollars.

Attribution: I have shamelessly stolen the word “laundromat” from my friend Mike Ni after my discussion with him on cloud computing business models. I had told him that I would!


Wednesday, August 25, 2010

While Entrepreneurs Scale On The Cloud The Angels Get Supersized

Cloud computing is disrupting the venture capital industry in a big way. One of the obvious changes we have all observed is the reduced up-front capital expenditure to start a new venture. Things that used to require an array of expensive servers and an army of people to maintain them have essentially been replaced by a bunch of EC2 instances and a few smart developers. The tools and the technology stack for today’s applications are designed for cheaper and faster experimentation, allowing entrepreneurs to follow the lean methodology and pivot as fast as they can. I agree that some investors underestimate the people cost and overestimate the capabilities of the cloud, but regardless, this has caused a major shift in how companies are funded.

The rise of the super angel as an emergent category is all about leveraging cloud computing. Fred Wilson closed a $30 million fund and Aydin Senkut closed a $40 million fund. These funds will invest in dozens of companies that can be bootstrapped at low up-front cost. More and more entrepreneurs prefer to raise as little money as possible in the beginning. This phenomenon has a few effects:

Raise AS you scale and not raise TO scale:

Founders have been able to raise money at good valuations without giving up large equity. This has been an uneasy situation for many venture capitalists and has created strange problems while raising money. When Foursquare raised money, the founders sold part of their own equity to the VCs so that the VCs could earn money on a successful exit; the founders also decided not to sell out to Yahoo. Raising money as the company scales follows the cloud motto of scale-as-you-need and pay-as-you-go.

Build a product that you want and not what a VC wants:

The super angels typically stay on the sidelines and definitely don’t serve on the board. This means a lot more freedom for entrepreneurs to define and shape their product. It also allows the companies to take up-front risk, venture into new areas, and experiment where conventional wisdom would otherwise have stopped them. Fail fast and fail cheap is now a reality from a venture as well as a technology perspective.

Prominent network effects in the start-up community:

I strongly believe that the cloud is the best participatory platform to create network effects of all kinds. I have seen a similar kind of network effect in the new angel industry, especially in an incubator such as Y Combinator. Silicon Valley start-ups have enjoyed network effects for a long time, and these effects are even more profound when the start-ups are in an incubator setting. Such environments give entrepreneurs a natural advantage in leveraging cross-pollination. Cloudkick is one example: a YC company started by three entrepreneurs to build a solution for managing the Amazon EC2 instances that all the other YC companies used at the time.

Competition in the portfolio companies could be a good thing:

VCs prefer not to have competing start-ups in a single portfolio to avoid conflicts of interest. As rational as that sounds, it is simply not feasible when an angel or a super angel funds tens or hundreds of companies. I believe it’s actually a good thing. At the macro level the angels can see the patterns and advise the companies, and at the micro level the companies can hone their competitive differentiation before raising more money. This might also change how founders pick and choose their angels. If the founders pick an angel who has similar companies in the portfolio, they can expect better connections and mentoring despite having competing companies funded by the same set of investors.

It’s not that the entire VC industry has changed. The series A and B investors are as important as the angels and super angels, but the way the VCs operate and the expectations of their limited partners will certainly change. I also believe that the VCs who are not stage-agnostic will revisit their seed-funding strategy. The performance of the traditional VC funds raised in the last ten years is far worse than what an investor would expect from an alternative asset class, which is what VC investments are. Time will tell whether doing more deals with the same money will yield a better return on the portfolio but, at least for now, the VC climate change is imminent.

Thursday, August 19, 2010

Software Is The New Hardware

Today Intel announced that it is buying McAfee for $7.7 billion. This acquisition made people scratch their heads. Why McAfee?

The obvious argument is that Intel has hit the growth wall and organic growth is not good enough to satisfy the shareholders. But this argument quickly falls apart from a margin perspective. Why dilute their current nice gross margin even if McAfee has a steady revenue stream? [Read my update at the end of the post]

I believe there are two reasons. The first is that companies need a balanced product and revenue mix regardless of the different margins. Oracle bought Sun and HP bought EDS; big companies do this all the time. The second, not so obvious, reason is a recognition that software is the new hardware. Processors are processors – they are a commodity any which way you look at them. It is not news to anyone that computing has become a commodity, which is the basis of utility-style cloud computing. Software, embedded or otherwise, has significant potential to sell value-added computing. Security solutions could fit nicely onto a chip. Drive a few miles from Intel’s headquarters to meet the folks at nVidia and you will be amazed at the kind of value a software tool kit can derive from the processors.

I don’t know how Intel will execute the merger, considering that this is their largest acquisition ever. But I am even more convinced that software is the new hardware. Cloud computing, data center automation, virtualization, network security, and a range of other technologies can leverage software in a chip that is optimized for a set of specialized tasks. Time to move from commodity to specialized computing until specialized computing becomes commodity. Interesting times!

Update: Romit sent me a message asking how McAfee would dilute Intel’s margin when McAfee’s gross margin is higher than Intel’s. I should clarify. The assumption on the street is that the cost of capital for this purchase is about 4% and Intel expects an 8% return on the investment even after paying a 60% premium. The tricky part is how long Intel can maintain the close-to-75% gross margin of a software company operating inside a hardware company. When I say diluting the margin, I mean diluting the overall combined margin post-purchase. The analysts are skeptical about the success of the merger and so am I. Intel has no track record of integrating large software companies such as McAfee, especially after paying a significantly higher-than-average premium. Hypothetically, if Intel had bought a company with more synergies - one that could leverage existing channels and fit its culture - it could have increased the gross margin and hence the return to its shareholders.
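To make the margin arithmetic concrete, here is a back-of-the-envelope sketch. The revenue and margin figures are purely illustrative assumptions, not Intel’s or McAfee’s actual financials; the point is only how small the blend effect is, and how quickly it reverses if the software margin erodes post-merger.

```python
# Blended gross margin under two scenarios. All figures are illustrative
# assumptions, not actual Intel or McAfee financials.
hw_rev, hw_margin = 35.0, 0.65   # hardware business: revenue ($B), gross margin
sw_rev = 2.0                     # acquired software business revenue ($B)

def blended_margin(sw_margin: float) -> float:
    """Combined gross margin for the merged company."""
    gross_profit = hw_rev * hw_margin + sw_rev * sw_margin
    return gross_profit / (hw_rev + sw_rev)

print(f"software margin held at 75%:   {blended_margin(0.75):.1%}")  # ~65.5%
print(f"software margin eroded to 55%: {blended_margin(0.55):.1%}")  # ~64.5%
```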

Thursday, July 22, 2010

The Missing Half Of A Social Enterprise

In my previous post “Social CRM Is Only The First Half Of A Social Enterprise” I started the discussion on why social CRM is only the first half of a social enterprise and how we can go to the core and build a true social enterprise. Continuing the discussion on the missing half of a social enterprise, this is part 2.

Transform productivity silos into collaborative content curation:

Social software gets better as more people use it, but we need more people to make it useful; there is no easy way out. As Andrew McAfee rightly put it, email is a 9x problem. Standalone social software doesn’t have enough juice to gain broader adoption because of the endowment effect, and it faces a huge adoption barrier because it is not contextualized into a business process. The users simply see it as yet another tool that adds to their cognitive overload.

My suggestion: don’t go after social software that is designed to create a parallel universe. Instead, design solutions that are contextualized within existing business processes and make it very easy for end users to curate existing content from structured, semi-structured, and unstructured sources, e.g. email, wikis, PowerPoint, ERP, CRM, SharePoint, etc. The content could be any artifact: an invoice, a purchase order, a strategy document, a pipeline report, documentation, and so on. Describing what collaborative content curation can actually do for enterprise software would require a blog post of its own; I suggest you read democratised curation by JP and “The Seven Needs of Real-Time Curators” by Scoble. In a nutshell, if designed correctly it offers significant potential to help people find, nurture, and syndicate enterprise content with collaboration on steroids. The users continue using the tools that they like, but suddenly those tools start feeling more and more social, with collaborative on-ramps and off-ramps. Social media, cloud computing, and collaborative content curation will be peanut, butter, and jelly for a social enterprise.

Use social tools to challenge and rethink management practices:

Efficient tools are not a proxy for efficient management. The tools of the past did bring automation and productivity, but they did very little to influence the way organizations are managed. Adding a social fabric to existing processes may bring some additional benefits, but a true social enterprise should strive for tools that make it completely rethink its management practices, almost to the point of causing disruption.

How about opening up the cost structure to the entire organization, democratizing the decision-making process, or running bottom-line-based prediction markets – not how much we will sell it for, but what it will cost us to build? It’s an endless list. This will be unsettling for some in the beginning, but it will eventually yield great results.

The generational shift is already primed for this disruption. The baby boomers are on their way out, and the current mid-level and senior gen X managers will be replaced by millennials very soon. Millennials are a born-social generation. As one millennial told my friend when asked what career means to him: “I want to have awareness of what’s going on around me, have micro-conversations on social tools, and create context. This context is my career.” Such a philosophy will challenge current management practices and put organizations in a difficult situation.

But this is an opportunity as well. Organizations can completely rethink their management practices as they start their journey toward being a true social enterprise. This is not just about asking a CEO to use a blog to communicate with the employees, but about having a social-first attitude at every single step of management.

Earn your user base by leaning in with a consumer start-up mindset:

One of the biggest differences between enterprise and consumer software is that in enterprise software the user is not the buyer. Enterprise software vendors don’t attempt to win the end users because they don’t have to - the end users have no choice. If you are an enterprise software company that designs social solutions, lean in with a consumer start-up mindset where you really have to earn your user base.

The cafeteria menu is my personal favorite example. A friend’s company spent $600k to redesign its intranet, and the most popular page on the new intranet is still the cafeteria menu, which gets updated every week. Why not solve that problem? Provide cafeteria information that is fresh and accessible from mobile devices. Now you have my attention. Add social and location-based functionality to help me find other employees to network and have lunch with. This is the new HCM. Well, not exactly, but you get the point.

If you attempt to design an IT-driven, top-down solution to enforce “socialness”, it simply won’t work. You need to win your users over to your solutions even if, in theory, they don’t have a choice.

Having fun and being productive should not be mutually exclusive.

Friday, July 2, 2010

Podcast: The Next Cloud: Emerging Business Models

I was a guest on Novell's radio/podcast series, the Cloud Chasers. The topic was "The Next Cloud: Emerging Business Models And Their Impact On The Enterprise". It was a great conversation! You can download the podcast here or tune in below:


Friday, June 11, 2010

Social CRM Is Only The First Half Of A Social Enterprise

Social CRM has arrived. My fellow bloggers and analyst friends Jeremiah Owyang, Ray Wang, Esteban Kolsky, Paul Greenberg, Sameer Patel, Oliver Marks, Jeff Nolan, and countless others have done a great job of defining the attributes, characteristics, and value proposition of social CRM. The recent acquisitions - Lithium acquiring Scout Labs, Attensity acquiring Biz360, and Jive acquiring Filtrbox - clearly indicate the market interest in social CRM. There are also tons of emerging start-ups solving specific problems in niche sub-categories of social CRM.

However, social CRM is only the first half of a social enterprise.

Let me be that idiot for a minute who over-simplifies enterprise software and its evolution. Traditional ERP, MRP, and SCM software was designed for automation and productivity - to improve the bottom line, scale the business, and make informed decisions. CRM was essentially designed to sell and market better, and eventually to support the customers you sold to. Then came social CRM, designed as an extension of CRM to help you understand customers better, have rich conversations with them, increase the impact of the brand, prevent customer churn, and so on.

Unfortunately, social CRM is only half of the equation, primarily designed to influence the top line of an organization. The missing half is the set of social solutions that support the bottom line of a company. Together they form a social enterprise. I don’t like the term “social business”; in case you didn’t get the memo, business has always been social. What is not social is the enterprise. A combination of social CRM that supports the top line and a set of solutions that supports the bottom line can truly transform an enterprise into a social enterprise.

Some vendors have attempted to introduce “socialness” in some of the edge applications, but I believe there is a need to go to the core and build a true social enterprise. In this two-part series I would like to share my thoughts on how this could be accomplished. This is part one.

Focus on the means and not the end:

I could talk about plenty of ERP processes, but let’s discuss a specific one that is perceived by many people as dry and not social: the “closing the books” financial process. I would encourage the folks who think that financial processes are not social to spend some time in a large organization observing and shadowing the controllers and the CFO in the last few days and the first few days of a quarter. The software that “closes the books” is the very last step in the process - the end - designed to keep the CEO and CFO out of jail. Everything that leads up to closing the books - the means - comprises tacit social interactions: calling cost center managers for their numbers, asking for clarifications, communicating what not to do, and so on. The list goes on. This social system certainly works. However, there is one problem – it is highly inefficient.

This is where I see the opportunity to provide a social toolset designed for a specific process – a social vertical – to help all the stakeholders. The social tools should not be designed to replace face-to-face interactions, and they should not be limited to merely encoding the interactions. Instead they should allow people to scale their social interactions, leverage discovery, and experience serendipity. The social tools become the context for the core processes.

Find an internal business process that is inherently social where employees spend most of their time outside of a destination tool. Run with it.

Don't fight the system, instead cater to emergent roles:

As the nature of business changes, the great organizations at the forefront of this change are good at creating new roles that never existed before. Some examples are Chief Sustainability Officer, Chief Privacy Officer, and Chief Customer Churn Officer. Enterprise software vendors are often criticized for “pouring concrete into existing business processes”. It’s no surprise that existing processes are hard to change and existing human behavior is even harder to change, but providing a “social-first” experience to these emergent roles could potentially trigger a positive change in an organization. The people in these new roles typically don’t have a rigid set of pre-defined processes and tools. That’s good news. Work with them to identify how social software can enable some of these new business processes and functions. As a vendor you are likely to get more traction working with them than with a CFO or a purchasing manager.

Turn involuntary collaboration into social interaction:

Let’s be very clear: being collaborative does not mean being social. Unfortunately, existing collaboration tools help people collaborate only once they have decided to collaborate. Well, duh. But when you think about it, if people get along well before they decide to collaborate, they have a higher chance of success while they collaborate. The problem is that people have neither the motivation nor the time to find and get to know the folks they might be required to work with. This is where a social enterprise can do wonders.

The solution that powers the social enterprise does not have to solve a specific business problem. Imagine an enterprise social network with algorithms that find like-minded people based on their skills, interests, extracurricular activities, the departments they work for, the cars they drive, the neighborhoods they live in, and so on. The real advantage of such a network is bridging silos without an explicit goal of collaboration. This is the antithesis of collaboration.
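Here is a minimal sketch of how such matching could work, scoring employees by overlap in profile attributes. The profile schema and data are hypothetical, invented purely for illustration, not any vendor’s actual design:

```python
# Score employees by overlap in profile attributes (skills, interests,
# neighborhood, etc.) and suggest the most similar pairs across silos.
from itertools import combinations

profiles = {
    "asha":  {"python", "cycling", "finance", "soma"},
    "bob":   {"python", "bbq", "marketing", "soma"},
    "carol": {"java", "cycling", "finance", "mission"},
}

def jaccard(a: set, b: set) -> float:
    """Overlap of two attribute sets: 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b)

# Rank every pair by similarity, highest first.
pairs = sorted(combinations(profiles, 2),
               key=lambda p: jaccard(profiles[p[0]], profiles[p[1]]),
               reverse=True)
for x, y in pairs:
    print(x, y, round(jaccard(profiles[x], profiles[y]), 2))
```

Jaccard overlap is just one choice; weighting work attributes differently from personal ones would be an obvious refinement.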

You don’t collaborate with your neighbors before you socialize with them. You greet them, go to the block party, and have beer and BBQ. And then, if you need to collaborate on chopping down that tree, you do so. It isn’t very different when it comes to enterprises. At the end of the day, enterprises are made up of human beings who behave like, well, human beings.

Coming up in the next post:

Social enterprise enablement through collaborative content curation, democratizing the management, and earning instead of buying adoption.

Tuesday, April 27, 2010

Delphix Is A Disruptive Database Virtualization Start-up To Watch

This is my second post on my impressions from the Under The Radar conference. Check out the first post on NoSQL.

Virtualization is not cloud computing. However, virtualization has significant potential when it is used to achieve cloud-like characteristics such as elasticity, economies of scale, accessibility, and simplicity of deployment. I have always believed that the next wave of cloud computing is going to be all about solving “special” problems on the cloud – I call it a vertical cloud. These vertical problems could be in any domain, technology stack, or industry. Raw computing has come a long way; it is about time we do something more interesting with it.

Delphix is attempting to solve a specific problem: database virtualization. I met the CEO Jedidiah Yueh and the VP of sales Kaycee Lai at the Under The Radar reception the night before. From their days at EMC they have a great background in the cost and flexibility issues around de-duplication. They have assembled a great team, including Alok Srivastava from Oracle, who ran Oracle RAC engineering prior to joining Delphix. Most large database deployments keep multiple copies of a single database for purposes beyond production, such as staging, testing, and troubleshooting. This replication is expensive from a process, resources, and storage perspective, and provisioning instances takes a long time. The founders saw this problem first-hand at EMC and decided to solve it.

At the core, their offering is a read-write snapshot of a database. That’s quite an achievement. Traditional snapshots are, well, snapshots - you can’t modify them, and in exchange for that compromise they occupy far less space. Delphix took the same concept but created writable snapshots and a seemingly easy-to-use application (I haven’t used it) that enables quick de-duplication based on those snapshots. You can also go back in time and start your instance from there.
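Conceptually, a writable snapshot is copy-on-write: reads fall through to a shared base image, while writes land in a small private overlay. Here is a toy sketch of that idea - an illustration of the concept only, not Delphix’s actual design:

```python
# Copy-on-write "writable snapshot": reads fall through to the shared base,
# writes go to a private overlay, so a clone is instant and nearly free.
class WritableSnapshot:
    def __init__(self, base: dict):
        self.base = base        # shared, read-only source blocks
        self.overlay = {}       # private copy-on-write blocks
        self.deleted = set()

    def read(self, key):
        if key in self.deleted:
            raise KeyError(key)
        return self.overlay.get(key, self.base.get(key))

    def write(self, key, value):
        self.deleted.discard(key)
        self.overlay[key] = value   # the base is never touched

production = {"row:1": "alice", "row:2": "bob"}
staging = WritableSnapshot(production)        # instant "clone"
staging.write("row:1", "test-user")
print(staging.read("row:1"), production["row:1"])  # test-user alice
```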

Delphix has a great value proposition in database virtualization: helping customers reduce their hardware and people costs - DBAs and system administrators - while accelerating IT processes. I like their conscious decision not to go after the backup market. Sometimes you have a great product, but if it is marketed in the wrong category, with vendors fighting in a red ocean, you could die before you can grow. They had the best pitch at the conference – very calm, explaining the problem, articulating the value proposition, emphasizing the right people on the team, and identifying the target market. If you are an entrepreneur (or even if you are not), check out their pitch and Q&A. There is a lot you can learn from them.

Thursday, April 22, 2010

Disruptive Cloud Computing Startups At Under The Radar - NoSQL - Aspirin, Vicodin, and Vitamin

It was great to be back at Under The Radar this year. I wrote about the disruptive cloud computing start-ups that I saw at Under The Radar last year. Since then, cloud computing has gained significant momentum, which was evident from talking to the entrepreneurs who pitched their start-ups this year. At the conference there was no discussion of what cloud computing is or why anyone should use it. It was all about how, not why. We have crossed the chasm. The companies who presented want to solve “cloud scale” problems as they relate to databases, infrastructure, development, management, and more. This year I have decided to break down my impressions into more than one post.

NoSQL has seen staggering innovation in the last year. Here are the two companies in the NoSQL category that I liked at Under The Radar:

Northscale was in stealth mode for a while and officially launched four weeks back. Their product is essentially a commercial version of memcached that sits in front of an RDBMS to help customers deal with the scaling bottlenecks of a typical large RDBMS deployment. This is not a unique concept – developers have been using memcached for a while for horizontal, cloud-like scaling – but it is an interesting offering that attempts to productize an open source component. Cloudera has achieved reasonable success commercializing Hadoop, and it is good to see more companies believing in the open source business model. They have another product called membase, a replicated persistence store for memcached – yes, a persistence layer on top of a persistence layer. It is designed to provide eventual consistency with tunable blocking and non-blocking I/O. Northscale has signed up Heroku and Zynga as customers and is already making money.
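For readers who haven’t used the pattern, here is a minimal cache-aside sketch of what memcached does in front of an RDBMS: check the cache first, fall back to the database on a miss, then warm the cache. It uses the pymemcache client with sqlite3 as a stand-in database; the schema and key format are hypothetical:

```python
# Cache-aside: serve hot reads from memcached, hit the RDBMS only on a miss.
import json
import sqlite3
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))   # assumes a local memcached
db = sqlite3.connect("app.db")         # stand-in for a large MySQL/Oracle DB

def get_user(user_id: int) -> dict:
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:                       # cache hit: no DB round trip
        return json.loads(cached)
    row = db.execute("SELECT id, name FROM users WHERE id = ?",
                     (user_id,)).fetchone()
    if row is None:
        raise KeyError(user_id)
    user = {"id": row[0], "name": row[1]}
    cache.set(key, json.dumps(user), expire=300)  # warm the cache for 5 min
    return user
```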

As more and more deployments face scaling issues, Northscale has an interesting value proposition: helping customers with their scaling pain by selling them an aspirin or a Vicodin. Northscale won the best-in-category award. Check out their pitch and the Q&A:




GenieDB is a UK-based start-up whose product allows developers to use MySQL as a relational database as well as a key-value store, with support for replication with immediate consistency. A few weeks back I wrote a post - NoSQL is not SQL and that’s a problem - and GenieDB seems to solve that problem to some extent. Much of the transactional enterprise software still runs on an RDBMS and depends on the data being immediately consistent. Enterprise software can certainly leverage key-value stores for features where an RDBMS is simply overhead, but using a key-value store that is not part of the same logical data source is an impediment in many ways; developers want to access data from a single logical system. GenieDB allows table joins between the SQL and NoSQL stores. I also like their vertical approach of targeting specific popular platforms on top of MySQL, such as WordPress and Drupal, and they have plans to support Rails by supporting ActiveRecord natively on their platform. This is a vitamin that, if sold well, has significant potential.
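To illustrate the dual-access idea - this is a conceptual sketch with an invented API, not GenieDB’s actual interface - imagine the same logical table reachable both through SQL and through a key-value get/put, so both paths see immediately consistent data:

```python
# One logical store, two access paths: key-value get/put for fast lookups,
# plain SQL for joins and reporting. sqlite3 stands in for MySQL here.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")

def put(key: str, value: str) -> None:
    """Key-value write path, backed by the same relational table."""
    db.execute("INSERT OR REPLACE INTO kv (k, v) VALUES (?, ?)", (key, value))

def get(key: str):
    """Key-value read path; immediately consistent with SQL reads."""
    row = db.execute("SELECT v FROM kv WHERE k = ?", (key,)).fetchone()
    return row[0] if row else None

put("session:42", "alice")
print(get("session:42"))                                  # key-value access
print(db.execute("SELECT count(*) FROM kv").fetchone())   # SQL access, same data
```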

They didn’t win a prize at the conference. I believe it wasn’t for lack of a good product; rather, their pitch failed to convey the magnitude of the problem they could help solve. My advice to them would be to dial up their marketing, hone the value proposition, and set up business development and operations in the US. On a side note, the founder and CEO Dr. Jack Kreindler is a “real” doctor - a physician who paid his way through medical school by building healthcare IT systems. Way to go, doc! Check out their pitch and the Q&A:

Wednesday, April 14, 2010

In Case You Didn't Know Twitter Is Growing Fast - Very Very Fast

I have been following the Chirp conference today, where Evan Williams, who goes by @ev, disclosed Twitter’s growth numbers in his keynote and shared the company’s pains, gains, and priorities. We all know that Twitter is growing fast – very, very fast – but here is a summary of the numbers that shows what that growth actually looks like:

  • 105 million registered users, adding 300k new users every day
  • 3 billion API requests a day (equivalent to Yahoo's traffic)
  • 55 million new tweets every day
  • 600 million search queries every day
  • 175 employees
  • 75% of traffic comes from third-party clients
  • 60% of tweets come from third-party clients
  • 100,000 registered apps
  • 180 million unique visitors on Twitter.com (you don’t have to be a user)
  • FlockDB, the social graph database they just open sourced, stores 13 billion edges
  • They started using “Murder”, a new BitTorrent-based platform, to transfer files during deployment; this reduced the transfer time from 40 minutes to 12 seconds
  • Deals with 65 (telco) carriers
  • 37% of active users use Twitter on their phone (@ev wants this number to be 100%)

Monday, March 15, 2010

Emergent Cloud Computing Business Models

Last year I wrote quite a few posts on the business models around SaaS and cloud computing, including SaaS 2.0, disruptive early-stage cloud computing start-ups, and branding on the cloud. This year people have started asking me: we have seen PaaS, IaaS, and SaaS, but what are some of the emergent cloud computing business models that are likely to go mainstream in the coming years? I spent some time thinking about it, and here they are:

Computing arbitrage: I have seen quite a few impressive business models around broadband bandwidth arbitrage, where companies such as broadband.com buy bandwidth at a Costco-style wholesale rate and resell it to companies to meet their specific needs. PeekFon solved the problem of expensive roaming for consumers in Europe by buying data bandwidth in bulk and slicing and dicing it to sell to customers. They could negotiate bulk rates because they made a conscious decision not to step on the operators’ toes by staying away from voice plans. They further used heavy compression on their devices to optimize the bandwidth.

As much as elastic computing is integral to cloud computing, not all companies that want to leverage the cloud necessarily care about it. These companies do, however, have unique and varying computing needs - typically a fixed long-term baseline that grows at a relatively low rate, plus seasonal peaks. This is a great opportunity for intermediaries to jump in and solve the problem. There will be fewer and fewer cloud providers, since the business requires significant up-front cap-ex, but being a “cloud VAR” could be a great value proposition for vendors that currently have a portfolio of cloud management tools or are “cloud SIs”. This is kind of like a CDO (“Cloud Debt Obligations” :-)) – just that we will do a better job this time around!
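A quick back-of-the-envelope sketch of the arbitrage, with made-up prices: the intermediary covers the fixed baseline with wholesale reserved capacity and passes through on-demand only for the seasonal peak. The spread between the two totals is the intermediary’s room for margin.

```python
# Cost of a year of capacity: pure on-demand vs. a wholesale/on-demand blend.
# All rates and workload numbers are made-up assumptions for illustration.
HOURS_PER_YEAR = 8760
on_demand_rate = 0.10   # $/instance-hour at retail
wholesale_rate = 0.04   # $/instance-hour bought in bulk, long-term

baseline = 20                       # instances needed all year round
peak_extra, peak_hours = 80, 720    # extra instances for a one-month peak

pure_on_demand = (baseline * HOURS_PER_YEAR
                  + peak_extra * peak_hours) * on_demand_rate
blend = (baseline * HOURS_PER_YEAR * wholesale_rate
         + peak_extra * peak_hours * on_demand_rate)

print(f"pure on-demand: ${pure_on_demand:,.0f}")  # $23,280
print(f"blended:        ${blend:,.0f}")           # $12,768
```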

Gaming-as-a-service: It was a while back when I first saw the OTOY demo; OTOY is scheduled to launch in Q2 2010. I believe there is significant potential in cloud-based rendering for games. Access to an online collection of games that can be rented and played on devices with widely varying form factors is a huge business opportunity. The cloud also makes it a great platform and a perfect fit for massive multi-player collaboration. Gaming-as-a-service could leverage everything that SaaS does today - frequent updates, a developer ecosystem, pay-as-you-go, etc. This business model also improves current monetization options, such as in-game ad placements that could be far more relevant and targeted.

App-driven and content-driven clouds: Now that we are hopefully getting over the fight between private and public cloud, let’s talk about a vertical cloud. Computing is not computing is not computing. The need to compute depends on what is being computed: the application’s specific compute requirements, the nature and volume of the data, and the kind of content being delivered. Today, in the SaaS world, vendors optimize the cloud to match their own application and content needs. I would expect a few companies to step up and help ISVs by delivering app-centric and content-centric clouds. Being an avid advocate of net neutrality, I believe that the current application-agnostic cloud-neutrality is a good thing, but we can certainly use some innovation on top of raw clouds. Developers do need fine knobs for CPU compute, I/O throughput, main-memory computing, and the many other varying needs of their applications. So far the extensions are specific to a programming stack, such as Heroku for Ruby. I see opportunities to provide custom vertical extensions for an existing cloud, or to build a cloud that is purpose-built for a specific class of applications, with a range of stack options underneath that makes it easy for developers to natively leverage the cloud.
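To make the “fine knobs” idea tangible, here is a hypothetical sketch of an application declaring the profile a vertical cloud could optimize for, with a toy scheduler routing it to a tuned pool. The schema, field names, and pools are invented for illustration; no provider exposes exactly this today:

```python
# An app declares what it needs; the vertical cloud places it accordingly.
app_profile = {
    "workload": "analytics",          # class of application
    "cpu": {"min_cores": 8, "burst": True},
    "io": {"pattern": "sequential-read", "throughput_mbps": 400},
    "memory": {"working_set_gb": 64, "in_memory_dataset": True},
    "content": {"type": "video", "edge_cached": True},
}

def pick_placement(profile: dict) -> str:
    """Toy scheduler: route the app to a pool tuned for its dominant need."""
    if profile["memory"].get("in_memory_dataset"):
        return "high-memory pool"
    if profile["io"]["throughput_mbps"] > 200:
        return "io-optimized pool"
    return "general pool"

print(pick_placement(app_profile))  # high-memory pool
```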