Friday, June 11, 2010

Social CRM Is Only The First Half Of A Social Enterprise

Social CRM has arrived. My fellow bloggers and analyst friends Jeremiah Owyang, Ray Wang, Esteban Kolsky, Paul Greenberg, Sameer Patel, Oliver Marks, Jeff Nolan, and countless others have done a great job in defining the attributes, characteristics, and value proposition of social CRM. The recent acquisitions - Lithium acquiring Scout Labs, Attensity acquiring Biz360, and Jive acquiring Filtrbox - have clearly indicated the market interest in social CRM. There are also tons of emerging start-ups in this domain solving specific problems in niche sub-categories of social CRM.

However, social CRM is only the first half of a social enterprise.

Let me be that idiot for a minute who over-simplifies enterprise software and its evolution. Traditional ERP, MRP, and SCM software was designed for automation and productivity: to improve the bottom-line, scale the business, and make informed decisions. CRM was essentially designed to sell and market better and eventually to support the customers you sold to. Then came social CRM, designed as an extension of CRM to help companies understand their customers better, have rich conversations with them, increase the impact of the brand, prevent customer churn, etc.

Unfortunately, social CRM is only half of the equation, primarily designed to influence the top-line of an organization. The other missing half is the social solutions that support the bottom-line of a company. Together they form a social enterprise. I don’t like the term “social business”. In case you didn’t get the memo, business has always been social. What is not social is an enterprise. A combination of social CRM that supports the top-line and a set of solutions that supports the bottom-line can truly transform an enterprise into a social enterprise.

Some vendors have attempted to introduce “socialness” in some of the edge applications, but I believe there is a need to go to the core and build a true social enterprise. In my two-part series I would like to share my thoughts on how this could be accomplished. This is part one.

Focus on the means and not the end:

I can talk about plenty of ERP processes, but let’s discuss a specific process that is perceived by many people as dry and not social: the “closing the books” financial process. I would encourage the folks who think that financial processes are not social to spend some time in a large organization observing and shadowing the controllers and the CFO in the last few days and the first few days of a quarter. The software that “closes the books” is the very last step in the process, the end, designed to keep the CEO and CFO out of jail. Everything that leads up to closing the books, the means, comprises tacit social interactions such as calling cost center managers for their numbers, asking for clarifications, communicating what not to do, etc. The list goes on. This social system certainly works. However, there is one problem: it is highly inefficient.

This is where I see the opportunity to provide a social toolset designed for a specific process – a social vertical – to help all the stakeholders. The social tools should not be designed to replace face-to-face interactions, and they should not just be limited to encoding those interactions. Instead they should allow people to scale their social interactions, leverage discovery, and experience serendipity. The social tools become the context for the core processes.

Find an internal business process that is inherently social where employees spend most of their time outside of a destination tool. Run with it.

Don't fight the system, instead cater to emergent roles:

As the nature of business changes, the great organizations at the forefront of this change are good at creating new roles that never existed before. Some examples are Chief Sustainability Officer, Chief Privacy Officer, Chief Customer Churn Officer, etc. Enterprise software vendors are often criticized for “pouring concrete into existing business processes”. It’s not a surprise that existing processes are hard to change and existing human behavior is even harder to change, but providing a “social-first” experience to these new emergent roles could potentially trigger a positive change in an organization. The people in these new roles don’t typically have a rigid set of pre-defined processes and tools. That’s good news. Work with these people to identify how social software can enable some of these new business processes and functions. As a vendor you are likely to get more traction working with them than with a CFO or a purchasing manager.

Turn involuntary collaboration into social interaction:

Let’s be very clear: being collaborative does not mean being social. Unfortunately the existing collaboration tools help people collaborate once they have already decided to collaborate. Well, duh. But when you think about it, if people get along well before they decide to collaborate, they have a higher chance of success while they collaborate. The problem is that people have neither the motivation nor the time to find and get to know the folks they might be required to work with. This is where a social enterprise can do wonders.

The solution that powers the social enterprise does not have to solve a specific business problem. Imagine an enterprise social network with algorithms to find like-minded people based on their skills, interests, extracurricular activities, the departments they work for, the cars they drive, the neighborhoods they live in, etc. The real advantage of using such a network is to bridge silos without having an explicit goal of collaboration. This is the antithesis of collaboration.
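To make the idea concrete, here is a toy sketch in Python of how such an algorithm could score affinity between employees: a simple Jaccard overlap of their profile attributes. The names, attributes, and threshold are all invented for illustration; this is the shape of the idea, not how any vendor actually implements it.

```python
# Score employee affinity by the overlap of their profile attributes
# (skills, interests, department). All data below is made up.

def jaccard(a, b):
    """Similarity of two attribute sets: |intersection| / |union|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def suggest_connections(person, directory, threshold=0.25):
    """Return colleagues whose profiles overlap enough with `person`."""
    mine = directory[person]
    return sorted(
        (name for name, attrs in directory.items()
         if name != person and jaccard(mine, attrs) >= threshold),
        key=lambda name: -jaccard(mine, directory[name]),
    )

directory = {
    "ana":   {"python", "cycling", "finance"},
    "bob":   {"python", "cycling", "marketing"},
    "carol": {"java", "sales"},
}

print(suggest_connections("ana", directory))  # ['bob'] -- 2 of 4 attributes shared
```

The point of the sketch is that "like-mindedness" needs no explicit collaboration goal: the network can surface bob to ana long before any project requires them to work together.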

You don’t collaborate with your neighbors before you socialize with them. You greet them, go to the block party, and have beer and BBQ. And then, if you need to collaborate on chopping down that tree, you do so. It isn’t very different when it comes to enterprises. At the end of the day, enterprises are made up of human beings that behave like, well, human beings.

Coming up in the next post:

Social enterprise enablement through collaborative content curation, democratizing the management, and earning instead of buying adoption.

Tuesday, April 27, 2010

Delphix Is A Disruptive Database Virtualization Start-up To Watch

This is my second post on my impressions from the Under The Radar conference. Check out the first post on NoSQL.

Virtualization is not cloud computing. However, virtualization has significant potential when it is used to achieve cloud-like characteristics such as elasticity, economies of scale, accessibility, ease of deployment, etc. I have always believed that the next wave of cloud computing is going to be all about solving “special” problems on the cloud – I call it a vertical cloud. These vertical problems could be in any domain, technology stack, or industry. Raw computing has come a long way. It is about time we do something more interesting with raw cloud computing.

Delphix is attempting to solve a specific problem - database virtualization. I met the CEO Jedidiah Yueh and the VP of sales Kaycee Lai at the Under The Radar reception the night before. They have a great background in understanding the cost and flexibility issues around de-duplication from their days at EMC. They have assembled a great team including Alok Srivastava from Oracle, who ran Oracle RAC engineering prior to joining Delphix. Most large database deployments have multiple copies of a single database that customers use for purposes beyond production such as staging, testing, and troubleshooting. This replication is expensive from a process, resource, and storage perspective, and provisioning instances takes a long time. The founders saw this problem firsthand at EMC and decided to solve it.

At its core, their offering is a read-write snapshot of a database. That’s quite an achievement. Traditional snapshots are, well, snapshots: you can’t modify them. When you make this compromise they occupy far less space. Delphix took the same concept but created writable snapshots and a seemingly easy-to-use application (I haven’t used it) that allows quick de-duplication based on these snapshots. You can also go back in time and start your instance from there.
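To illustrate the difference, here is a minimal copy-on-write sketch of a "writable snapshot" in Python: reads fall through to the shared base copy, and only the blocks a snapshot rewrites occupy new space. This is just the concept, with invented class and block names, not Delphix's actual implementation.

```python
# Copy-on-write "writable snapshot": shared base blocks plus a private
# delta of modified blocks. The base is never touched by snapshot writes.

class WritableSnapshot:
    def __init__(self, base):
        self.base = base      # shared, read-only source blocks
        self.delta = {}       # only blocks this snapshot has rewritten

    def read(self, block):
        return self.delta.get(block, self.base.get(block))

    def write(self, block, data):
        self.delta[block] = data  # copy-on-write: base stays untouched

base = {"b0": "prod data", "b1": "more prod data"}
snap = WritableSnapshot(base)
snap.write("b1", "test data")

print(snap.read("b0"))   # prod data       (shared with base)
print(snap.read("b1"))   # test data       (private delta)
print(base["b1"])        # more prod data  (base unchanged)
```

Because the snapshot stores only its delta, ten test copies of a production database cost roughly one base copy plus whatever each test actually changes.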

Delphix has a great value proposition in database virtualization: helping customers reduce their hardware and people (DBA and system administrator) costs while accelerating IT processes. I like their conscious decision not to go after the backup market. Sometimes you have a great product, but if it is marketed in the wrong category, with vendors fighting in a red ocean, you could die before you can grow. They had the best pitch at the conference – very calm, explaining the problem, articulating the value proposition, emphasizing the right people on the team, and identifying the target market. If you are an entrepreneur (or even if you are not), check out their pitch and Q&A. There is a lot you can learn from them.

Thursday, April 22, 2010

Disruptive Cloud Computing Startups At Under The Radar - NoSQL - Aspirin, Vicodin, and Vitamin

It was great to be back at Under The Radar this year. I wrote about the disruptive cloud computing start-ups that I saw at Under The Radar last year. Since then cloud computing has gained significant momentum. This was evident from talking to the entrepreneurs who pitched their start-ups this year. At the conference there was no discussion of what cloud computing is and why anyone should use it. It was all about how and not why. We have crossed the chasm. The companies that presented want to solve “cloud scale” problems as they relate to databases, infrastructure, development, management, etc. This year I have decided to break down my impressions into more than one post.

NoSQL has seen staggering innovation in the last year. Here are the two companies in the NoSQL category that I liked at Under The Radar:

Northscale was in stealth mode for a while and officially launched four weeks back. Their product is essentially a commercial version of memcached that sits in front of an RDBMS to help customers deal with the scaling bottlenecks of a typical large RDBMS deployment. This is not a unique concept – developers have been using memcached for a while for horizontal cloud-like scaling. However, it is an interesting offering that attempts to productize an open source component. Cloudera has achieved reasonable success with commercializing Hadoop, and it is good to see more companies believing in the open source business model. They have another product called membase, which is a replicated persistence store for memcached – yes, a persistence layer on top of a persistence layer. It is designed to provide eventual consistency with tunable blocking and non-blocking I/Os. Northscale has signed up Heroku and Zynga as customers, and they are already making money.
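For readers unfamiliar with the pattern, here is a minimal sketch of the cache-aside idea that memcached enables in front of an RDBMS: check the cache first, fall back to the database on a miss, then populate the cache. Plain Python dicts stand in for memcached and the database, and the key names are invented.

```python
# Cache-aside read path: the cache absorbs repeat reads so the
# database only sees each key's first lookup (until eviction).

cache, db = {}, {"user:1": {"name": "Ada"}}
db_hits = 0

def get_user(key):
    global db_hits
    if key in cache:              # cache hit: no database round-trip
        return cache[key]
    db_hits += 1                  # cache miss: expensive database read
    value = db[key]
    cache[key] = value            # populate cache for subsequent reads
    return value

get_user("user:1")
get_user("user:1")
print(db_hits)  # 1 -- the second read was served from the cache
```

The scaling win is that the hot working set is served from cheap, horizontally scalable memory while the RDBMS handles only misses and writes.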

As more and more deployments face scaling issues, Northscale does have an interesting value proposition: help customers with their scaling pain by selling them an aspirin or a Vicodin. Northscale won the best-in-category award. Check out their pitch and the Q&A:

GenieDB is a UK-based start-up whose product allows developers to use MySQL as a relational database as well as a key-value store. It has support for replication with immediate consistency. A few weeks back I wrote a post - NoSQL is not SQL and that’s a problem. GenieDB seems to solve that problem to some extent. Much of the transactional enterprise software still runs on an RDBMS and depends on the data being immediately consistent. Enterprise software can certainly leverage key-value stores for certain features where an RDBMS is simply overhead. However, using a key-value store that is not part of the same logical data source is an impediment in many different ways. Developers want to access data from a single logical system. GenieDB allows table joins between SQL and NoSQL stores. I also like their vertical approach of targeting specific popular platforms on top of MySQL such as WordPress and Drupal. They have plans to support Rails by supporting ActiveRecord natively on their platform. This is a vitamin that, if sold well, has significant potential.
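Here is a rough sketch of the underlying idea: expose a key-value API over an ordinary SQL table, so KV access and relational queries share one logical store. sqlite3 stands in for MySQL, and the schema and helper functions are my own invention, not GenieDB's API.

```python
# A key-value facade over a plain SQL table. The same rows remain
# reachable with ordinary SQL, joins included.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")

def put(key, value):
    conn.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", (key, value))

def get(key):
    row = conn.execute("SELECT v FROM kv WHERE k = ?", (key,)).fetchone()
    return row[0] if row else None

put("post:42", "hello")
print(get("post:42"))                                         # hello, via the KV API
print(conn.execute("SELECT COUNT(*) FROM kv").fetchone()[0])  # 1, via plain SQL
```

The appeal is exactly the "single logical system" argument above: an application can use fast KV access where relational machinery is overhead, yet still join that data against its transactional tables.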

They didn’t win any prize at the conference. I believe it wasn't that they lacked a good product; rather, their pitch failed to convey the magnitude of the problem they could help solve. My advice to them would be to dial up their marketing, hone the value proposition, and set up business development and operations in the US. On a side note, the founder and CEO Dr. Jack Kreindler is a “real” doctor. He is a physician who paid his way through medical school by building healthcare IT systems. Way to go, doc! Check out their pitch and the Q&A:

Wednesday, April 14, 2010

In Case You Didn't Know Twitter Is Growing Fast - Very Very Fast

I have been following the Chirp conference today, where Evan Williams, who goes by @ev, disclosed Twitter growth numbers in his keynote and shared their pains, gains, and priorities. We all know that Twitter is growing fast – very, very fast – but here is a summary of those numbers that tells us what that growth actually looks like:

  • 105 million registered users and they add 300k users every day
  • 3 billion API requests a day (equivalent to Yahoo traffic)
  • 55 million new tweets every day
  • 600 million search queries every day
  • 175 employees
  • 75% traffic comes from third party clients
  • 60% tweets come from third party clients
  • 100,000 registered apps
  • 180 million unique visitors on Twitter.com (you don’t have to be a user)
  • FlockDB, their social graph database that they just open sourced, stores 13 billion edges
  • They started using "Murder", a new BitTorrent-based platform, to transfer files during deploys; this reduced the transfer time from 40 minutes to 12 seconds
  • Made deals with 65 (telco) carriers
  • 37% of active users use Twitter on their phone (@ev wants this number to be 100%)
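A quick back-of-the-envelope conversion of a few of those daily totals into per-second rates puts the scale in perspective:

```python
# Per-second rates implied by the keynote's daily totals.
per_day = 24 * 60 * 60                    # 86,400 seconds in a day

print(round(55_000_000 / per_day))        # ~637 tweets per second
print(round(3_000_000_000 / per_day))     # ~34,722 API requests per second
print(round(600_000_000 / per_day))       # ~6,944 search queries per second
```

And note those averages hide peaks: events like the World Cup push the instantaneous rate far higher.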

Monday, March 15, 2010

Emergent Cloud Computing Business Models

Last year I wrote quite a few posts on business models around SaaS and cloud computing, including SaaS 2.0, disruptive early stage cloud computing start-ups, and branding on the cloud. This year people have started asking me: well, we have seen PaaS, IaaS, and SaaS, but what do you think are some of the emergent cloud computing business models that are likely to go mainstream in the coming years? I spent some time thinking about it, and here they are:

Computing arbitrage: I have seen quite a few impressive business models around broadband bandwidth arbitrage, where companies such as broadband.com buy bandwidth at Costco-style wholesale rates and resell it to companies to meet their specific needs. PeekFon solved the problem of expensive roaming for consumers in Europe by buying data bandwidth in bulk and slicing and dicing it to sell to customers. They could negotiate with the operators to buy data bandwidth in bulk because they made a conscious decision not to step on the operators' toes by staying away from voice plans. They further used heavy compression on their devices to optimize the bandwidth.

As much as elastic computing is integral to cloud computing, not all the companies that want to leverage the cloud necessarily care for it. These companies, however, do have unique and varying computing needs. These needs typically include fixed long-term computing that grows at a relatively fixed low rate, plus seasonal peaks. This is a great opportunity for intermediaries to jump in and solve this problem. There will be fewer and fewer cloud providers since it requires significantly high cap-ex. However, being a "cloud VAR" could be a great value proposition for the vendors that currently have a portfolio of cloud management tools or are "cloud SIs". This is kind of like a CDO (‘Cloud Debt Obligations’ :-)) – just that we will do a better job this time around!
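A toy calculation shows where the arbitrage lives: a "cloud VAR" covers the customer's steady baseline with cheap committed capacity and buys on-demand only for the seasonal peak. All prices and workload numbers below are invented for illustration.

```python
# Blended cost of a workload with a steady baseline plus a seasonal
# peak, versus naively sizing everything at on-demand rates.

RESERVED_RATE = 0.04   # $/instance-hour at a bulk/committed price (assumed)
ON_DEMAND_RATE = 0.10  # $/instance-hour at the retail price (assumed)

def blended_cost(baseline, peak, peak_hours, total_hours=720):
    reserved = baseline * total_hours * RESERVED_RATE       # always-on base
    burst = (peak - baseline) * peak_hours * ON_DEMAND_RATE  # peak overflow
    return reserved + burst

all_on_demand = 100 * 720 * ON_DEMAND_RATE                  # size for peak
blended = blended_cost(baseline=40, peak=100, peak_hours=72)

print(all_on_demand)   # 7200.0 per month
print(blended)         # 1584.0 per month
```

The spread between those two numbers is the intermediary's room to mark up its price and still save the customer money.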

Gaming-as-a-service: It was a while back when I first saw the OTOY demo. OTOY is scheduled to launch in Q2 2010. I believe that there is significant potential in cloud-based rendering for games. Having access to an online collection of games that can be rented and played on devices with varying form factors is a huge business opportunity. The cloud also makes it a great platform and a perfect fit for massive multi-player collaboration. Gaming-as-a-service could leverage everything that SaaS does today - frequent updates, a developer ecosystem, pay-as-you-go, etc. This business model also improves current monetization options such as in-game ad placements, which could be far more relevant and targeted.

App-driven and content-driven clouds: Now that we are hopefully getting over the fight between private and public clouds, let’s talk about a vertical cloud. Computing is not computing is not computing. The need to compute depends on what is being computed: the application's specific needs, the nature and volume of the data being computed, and the kind of content being delivered. Today in the SaaS world vendors optimize the cloud to match their application and content needs. I would expect a few companies to step up and help ISVs by delivering app-centric and content-centric clouds. Being an avid advocate of net neutrality, I believe that the current cloud neutrality, which is application-agnostic, is a good thing. However, we can certainly use some innovation on top of raw clouds. Developers do need fine knobs for CPU computes, I/O computes, main-memory computing, and many other varying needs of their applications. So far the extensions are specific to a programming stack, such as Heroku for Ruby. I see opportunities to provide custom vertical extensions for an existing cloud, or to build a cloud that is purpose-built for a specific class of applications and has a range of stack options underneath that make it easy for developers to natively leverage the cloud.

Friday, March 5, 2010

NoSQL Is Not SQL And That’s A Problem

I do recognize the thrust behind the NoSQL movement. While some are announcing the end of an era for MySQL and memcached, others are questioning the arguments behind Cassandra’s OLTP claims and the scalability and universal applicability of NoSQL. It is great to see innovative data persistence and access solutions that challenge the long-lasting legacy of RDBMS. Competition between HBase and Cassandra is heating up. Amazon now supports a variety of consistency models on EC2.

However, none of the NoSQL solutions solve a fundamental underlying problem: a developer has to pick persistence, consistency, and access options for an application upfront.

I would argue that RDBMS has been popular for the last 30 years because of ubiquitous SQL. Whenever the developers wanted to design an application they put an RDBMS underneath and used SQL from all possible layers. Over a period of time the RDBMS grew in functions and features such as binary storage, faster access, clusters etc. and the applications reaped these benefits.

I still remember the days when you had to use a rule-based optimizer to teach the database how best to execute a query. These days cost-based optimizers can find the best plan for a SQL statement, taking the guesswork out of the equation. This evolution teaches us an important lesson. Application developers, and to some extent even database developers, should not have to learn the underlying data access and optimization techniques. They should expect an abstraction that allows them to consume data where consistency and persistence are optimized based on the application's needs and the content being persisted.

SQL did a great job as a non-procedural language (what to do) compared to many past and current procedural languages (how to do). What SQL did not solve is the problem of staying independent of the schema; developers still had to learn how to model the data. When I first saw schema-less data stores I thought we would finally solve the age-old problem of making an upfront decision about how data is organized. We did solve this problem, but we introduced a new one - a lack of ubiquitous access and consistency options for schema-less data stores. Each of these data stores came with its own set of access APIs that are not necessarily complicated but are uniquely tailored to address parts of the mighty CAP theorem. Some solutions went further and optimized for specific consistency models such as eventual consistency, weak consistency, etc.

I am always in favor of giving more options to developers. It’s usually a good thing. However, what worries me about NoSQL is that it is not SQL. There simply isn’t enough push for ubiquitous and universal design-time abstractions. The runtime is certainly getting better, cheaper, and faster, but it is being pushed directly to the developers, skipping a whole lot of layers in between. Google designed BigTable and MapReduce. Facebook took the best of BigTable and Dynamo to design Cassandra, and Yahoo wanted scripting rather than programming on Hadoop and hence designed Pig. These vendors spent significant time and resources for one reason – to make their applications run faster and better. What about the rest of the world? Not all applications share the same characteristics as Facebook and Twitter, and certainly enterprise software is quite different.

I would like to throw out a challenge. Design a data store that has a ubiquitous interface for application developers and is independent of consistency models, upfront data modeling (schema), and access algorithms. As a developer you start storing, accessing, and manipulating information, treating everything underneath as a service. As a data store provider you would gather upstream application and content metadata to configure, optimize, and localize your data store to provide a ubiquitous experience to the developers. As an ecosystem partner you would plug your hot-swappable modules into data stores that are designed to meet the specific data access and optimization needs of the applications.
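To illustrate the shape of what I'm asking for, here is a minimal Python sketch of a store whose consistency policy is a hot-swappable module behind one get/put interface. The class names and policies are invented for illustration; real replication is vastly harder than a list of dicts, but the developer-facing API is the point.

```python
# One ubiquitous get/put interface; consistency is a pluggable module
# that callers never see. Replicas are modeled as plain dicts.

class ConsistencyPolicy:
    def on_write(self, replicas, key, value):
        raise NotImplementedError

class Strong(ConsistencyPolicy):
    def on_write(self, replicas, key, value):
        for r in replicas:            # ack only after every replica has it
            r[key] = value

class Eventual(ConsistencyPolicy):
    def on_write(self, replicas, key, value):
        replicas[0][key] = value      # ack after one replica; others lag

class Store:
    def __init__(self, policy, n_replicas=3):
        self.replicas = [{} for _ in range(n_replicas)]
        self.policy = policy          # swapped without touching callers

    def put(self, key, value):
        self.policy.on_write(self.replicas, key, value)

    def get(self, key):
        return self.replicas[0].get(key)

strong = Store(Strong())
strong.put("k", 1)
print([r.get("k") for r in strong.replicas])   # [1, 1, 1]

eventual = Store(Eventual())
eventual.put("k", 1)
print([r.get("k") for r in eventual.replicas]) # [1, None, None] until sync
```

The application code calls `put` and `get` either way; the provider, armed with application metadata, picks and tunes the policy underneath.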

Are you up for the challenge?

Tuesday, February 9, 2010

Google Buzz Is The New Black - Solving A Problem That Google Wave Could Not

Today Google announced Google Buzz. Watch the video:

The chart below shows the spectacular adoption failure of Google Wave as a standalone product. This was predicted by a lot of people, including myself. As Anil Dash puts it, Google Wave does not help solve a "weekend-sized problem".

Besides the obvious complex technical challenges, there are three distinct adoption barriers with Google Wave, and Google Buzz has the capability to overcome them:

Inseparable container, content, and collaboration: Changing people's behavior is much more difficult than inventing a killer technology. Most people still prefer to keep the collaboration persisted separately from the content, or not persisted at all. Single-task systems such as email, wikis, and instant messaging are very effective because they do one and only one thing really well, without any confusion. Google Wave is a strong container on which Google or others can build collaboration capabilities, but not giving users an option to keep the content separate from the collaboration leads to confusion and becomes an adoption barrier.

Google Buzz certainly seems to solve this problem by piggybacking on an existing system that people are already familiar with - email. Google Buzz is an opt-in system where users can extend and enrich their experience rather than using a completely different tool.

Missing clear value proposition: Google Wave is clearly a Swiss Army knife, with open APIs for developers to create killer applications. So far the applications that leverage Google Wave components are niche and solve very specific expert-system problems. This dilutes the overall value proposition of a standalone tool.

Google Buzz is designed to solve a problem in a well-defined "social" category. People are already using other social tools, and Google Buzz needs to highlight its value proposition by integrating the social experience into a tool that already has a very clear value proposition, unlike Google Wave, which tried to create one from scratch. Google Buzz assists users automatically by finding and showing pictures, videos, status updates, etc., and does not expect users to go through a lengthy set-up process.

Lack of a killer native mobile application: This is an obvious one. Google Wave does work on the iPhone and on some other phones, but it is not native and the experience is clunky at best. When you develop a new tool, how about actually leveraging the mobile platform rather than simply porting to it? A phone gives you a lot more than just an operating system to run your application on.

Google recognized this, and Google Buzz is going to be mobile-enabled from day one, leveraging location-awareness amongst other things. I hope that the mobile experience is not the same as the web experience and actually makes people want to use it on the phone.

You could ask why Google Buzz is going to be different, since Google did have a chocolate-box variety of tools before Google Buzz - Latitude, Profile, Gmail, Wave, and so on. I believe that it is all about the right experience that matches consumers' needs in their preferred environment, and not a piece of technology that solves a standalone problem. If done right, Google Buzz does have the potential to give Facebook, Twitter, Foursquare, and Gowalla a run for their money.