Monday, March 31, 2014

Amazon's Cloud Price Reduction, A Desire To Compete Hard And Move Up The Value Chain

Recently, Google slashed prices for its cloud offerings. Amazon, as expected, followed with its 42nd price reduction on its cloud offerings since their inception. Today, Microsoft also announced price reductions for its Azure offerings.

Unlike many others, I don't necessarily see Amazon's price reduction as waging a price war against the competition.

Infrastructure as a true commodity: IaaS is a very well understood category, and Amazon, as a vendor, has a strong desire to move up the value chain. This can only happen if storage and computing become true commodities and customers value vendors based on what they can do on top of that commodity storage and computing. They become a means to an end and not an end in themselves.

Amazon is introducing many PaaS-like services on top of EC2. For example, Redshift is the fastest growing service on EC2. These services create stickiness, drawing customers back to try out and perhaps buy other services, and they also create bigger demand for the underlying cloud platform. Retaining existing customers and acquiring new ones with as few barriers as possible are key components of this strategy.

Reducing hardware cost: The hardware costs associated with computing and storage have gradually gone down. Speaking purely from a financial perspective, existing assets depreciate before they are taken out of service, and new hardware is going to be cheaper than the old hardware was at its original cost. If you pass this cost advantage on to your customers, it helps you reduce prices and compete at the same, or a slightly lower, margin. However, hardware cost is only a fraction of overall operations cost. In the short term, Amazon, being a growth company, will actually spend a lot more on CapEx, not just OpEx, to invest in and secure the future.
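To make the arithmetic concrete, here is a small back-of-the-envelope sketch in Python. All the numbers are made up for illustration; they are not Amazon's actual cost structure.

```python
# Hypothetical back-of-the-envelope numbers, not Amazon's actuals.
# If hardware accounts for only a fraction of the cost to serve an
# instance-hour, a big drop in hardware cost yields a smaller drop
# in the price you can offer at the same margin.

hardware_share = 0.30      # assume 30% of cost-to-serve is hardware
hardware_cost_drop = 0.40  # assume new hardware is 40% cheaper
old_cost_to_serve = 1.00   # normalized cost per instance-hour
margin = 0.20              # target gross margin held constant

new_cost_to_serve = old_cost_to_serve * (1 - hardware_share * hardware_cost_drop)
old_price = old_cost_to_serve / (1 - margin)
new_price = new_cost_to_serve / (1 - margin)

print(f"cost to serve: {old_cost_to_serve:.2f} -> {new_cost_to_serve:.2f}")
print(f"price at constant margin: {old_price:.2f} -> {new_price:.2f}")
# A 40% hardware cost drop supports only a ~12% price cut here,
# which is why deep price cuts also need the economies of scale
# discussed next.
```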

Economies of scale: The cost to serve two computing units is less than twice the cost to serve one. There are many economies of scale at play, such as increasing data-center utilization, investment in automation, and better instance management software. Confidence in predicting a minimum base volume and reducing fluctuations also gives Amazon better predictability in managing elasticity: as the overall volume goes up, the fluctuations as a percentage of overall volume go down. On top of that, offerings such as Reserved Instances are a good predictor of future demand. Amazon views Reserved Instances much as banks view CDs, but many customers are looking for a "re-finance" feature for these Reserved Instances when prices drop. These economic and pricing implications are great to watch.
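The re-finance analogy can be made concrete with a quick sketch. The rates and terms below are invented for illustration; they are not real AWS prices.

```python
# A hypothetical sketch of the Reserved Instance "re-finance" question.
# All rates and terms are made up for illustration.

def reserved_total(upfront, hourly, hours):
    """Total cost of a reservation: upfront fee plus discounted hourly usage."""
    return upfront + hourly * hours

hours_per_year = 24 * 365
on_demand_hourly = 0.10              # assumed on-demand rate
ri_upfront, ri_hourly = 300.0, 0.04  # assumed 1-year reservation terms

on_demand = on_demand_hourly * hours_per_year
reserved = reserved_total(ri_upfront, ri_hourly, hours_per_year)
print(f"on-demand: ${on_demand:,.0f}, reserved: ${reserved:,.0f}")

# Now suppose a mid-term price drop introduces cheaper reservations.
# A customer locked into the old terms would love to "re-finance":
new_upfront, new_hourly = 250.0, 0.03
remaining_hours = hours_per_year // 2   # half the term left
keep_old = ri_hourly * remaining_hours  # old upfront fee is already sunk
refinance = new_upfront + new_hourly * remaining_hours
print(f"keep old: ${keep_old:,.0f}, re-finance: ${refinance:,.0f}")
# Re-financing only pays off if the new upfront fee is recouped by the
# lower hourly rate over the remaining term -- just like a mortgage.
```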

Offering competitive pricing to win against incumbents, and to make it almost impossible for new entrants to compete on the same terms, is absolutely important, but it would be foolish to assume that is the sole intent behind the price reduction.

Photo courtesy: Flickr

Wednesday, March 12, 2014

Why And How Should You Hire A Chief Customer Success Officer?


For an ISV (Independent Software Vendor), it is everyone's job to ensure customer success, but it is no one person's job. This is changing. I see more and more companies recognizing this challenge and wanting to do something about it.

Sales is interested in maintaining relationships with customers for revenue purposes, and support works with customers on product issues and escalations. Product teams behave more like silos when they approach their customers because of their restricted scope and vision. Most chief technology officers are fairly technical and internally facing. Most of them also lack the business context—empathy for the true business challenges—of their customers. They are quite passionate about what they do, but they invariably end up spending a lot of time making key product and technical decisions for the company, losing sight of much bigger issues that customers might be facing. Most chief strategy officers focus on the company's vision as well as strategy across lines of business; while they have strong business acumen, they are not customer-centric and lack the technical as well as product leadership to understand deep underlying systemic issues.

The traditional ways to measure customer success are product adoption, customer churn, and customer acquisition, but the role of a Chief Customer Success Officer (CCO) extends way beyond that. One of the best ways to catch early signs of a market shift is to watch your progressive customers very closely. Working with these customers and watching them will also help you find ways to improve the existing product portfolio and add new products, organically or through acquisitions. Participating in sales cycles will help you better understand the competition, price points, and, most importantly, the readiness of your field to execute on your sales strategy.

Folks often reach out to me asking what kind of people they should look for when they plan to hire a CCO. I tell them to look for the following:

T-shaped: Customers don't neatly fall into any one of your lines of business, and neither should your CCO. You are looking for someone who has broad exposure and experience across different functions from his or her previous roles and deep expertise in one domain. The CCO would work across LoBs to ensure customers are getting what they want and to help you build a sustainable business. Most T-shaped people I have worked with are fast learners. They very quickly understand a continuously changing business, frame their point of view, and execute by collaborating with people across the organization (the horizontal part of the T), thanks to their past experience and exposure working with and for other functions.

Most likely, someone who has had a spectacular but unusual career path and makes you think, "what role does this person really fit in?" would be the right person. Another clue: many "general managers" are on this career track.

Business-centric: Customers don't want technology. They don't even want products. They want solutions to the business problems they have. This is where a CCO would start: with sheer focus on customers' problems—the true business needs—using technology as an enabler rather than as a product. Technology is a means to an end, typically referred to as "the business."

Your CCO should have a business-first mindset with deep expertise in technology to balance what's viable with what's feasible. You can start anywhere, but I would recommend focusing your search on people who have a product as well as a strategy background. I believe that unless you have managed a product—development, management, or strategy—you can't really have empathy for what it takes to build something, have customers use it, and hear them complain when it doesn't work for them.

Global: It turns out the world is not flat. Each geographic region is quite different with regard to the aptitude and ability of customers to take risks and adopt innovation. A region-specific localization—product, go-to-market, and sales—strategy that factors in local competition and the economic climate is crucial for the global success of an ISV. The CCO absolutely has to understand the intricacies of these regions: how they move at different speeds, the cultural aspects of embracing and adopting innovation, and the local competition. The person needs exposure and experience across regions and across industries. You do have regional experts and local management, but looking across regions to identify trends, opportunities, and the pace of innovation by working with customers, and helping inform overall product, go-to-market, and sales strategy, is quite an important role that a CCO will play.

Outsider: Last but not least, I would suggest you look outside instead of finding someone internally. Hiring someone with a fresh outside-in perspective is crucial for this role. Strive to hire someone who understands the broader market: players, competition, and ecosystem. This is a trait typically found in some leading industry analysts, but you are looking for a product person with that level of thought leadership and background without the analyst title.

About the photo: This is a picture of an Everest base camp in Tibet, taken by Joseph Younis. I think of success as a progressive realization of a worthwhile goal.

Friday, February 28, 2014

Recruiting End Users For Enterprise Software Applications

As I work with a few enterprise software start-ups, I often get asked how to find early customers to validate and refine early design prototypes. The answer is surprisingly not that complicated. The following is my response to a recent question on Quora, "How do we get a target audience for enterprise applications, when you don't have an enterprise customer yet for rapid prototyping?"

Finding a customer and finding end users are quite different things. In enterprise software, end users are not the buyers, and the buyer (customer) may or may not use your software at all. To recruit end users, there are three options:

Friends and family: Use your personal connections through email and social media channels and ask for their time (no more than 30 minutes) to conduct contextual inquiries and get validation of your prototypes. Most people won't say no. Do thank them with a small gift or a gift card.

Find paid end users: This does seem odd, but it works. I know of a few start-ups that have used this method effectively. Use Craigslist and other means to recruit people who match your end user role and pay them to participate in feedback sessions. It is worth it.

Guerrilla style: Go to a convention or a conference where you can find enough end users who fit your profile. Camp out at the convention with swag and run guerrilla-style recruiting to validate your prototype. Iterate quickly, preferably in front of them, and validate again.

Friday, January 31, 2014

A Design Lesson: Customers Don't Remember Everything They Experience

My brother is an ophthalmologist in a small town in India. In his private practice, patients have two options to see him: either make an appointment or walk in. Most patients don't make an appointment, for a variety of cultural and logistical reasons, and prefer to walk in. These patients invariably have to wait anywhere from 15 minutes to an hour and a half on a busy day. I always found these patients anxious and unhappy that they had to wait, even though they voluntarily chose to do so. When I asked my brother about the possible negative impact of his patients' (customers') unhappiness, he told me that what matters is not whether they are unhappy while they wait but whether they are happy when they leave. Once these patients get their turn to see my brother for a consultation, which lasts a very short time compared to how long they waited, he gives them his full attention and makes sure they are happy when they leave. This erases from their minds the unpleasant experience they had just a few minutes earlier.

I was always amused by this until I was introduced to the concept of the experience side versus the memory side by my favorite psychologist, Daniel Kahneman, explained in his book Thinking, Fast and Slow and in his TED talk (do watch the TED talk, you won't regret it). The unpleasant wait was the experience side, which the patients didn't remember, and the quality time they spent in the doctor's office was the memory side, which they did remember.


Airlines, hotels, and other companies in service sectors routinely have to deal with frustrated customers. When customers get upset, they won't remember a series of past good experiences; they will only remember how badly it ended: a cancelled flight, a smelly hotel room, or a production outage resulting in an escalation. Windows users always remember the blue screen of death, but when asked they may not remember anything that went well on a Windows machine prior to the sudden crash that produced it. The end matters the most, and an abrupt, unrecoverable crash is not a good end. If the actual experience mattered most, people would perhaps never go back to a car dealership. However, people do remember getting a great deal in the end and forget the misery the sales rep put them through with all the haggling.

Proactive responses are far better in crisis management than reactive ones, but reactive responses do not necessarily have to result in a bad experience. If companies treat customers well after a bad experience, by being truly apologetic and responsive and by offering rewards such as free upgrades, miles, partial refunds, or discounts, people do tend to forget bad experiences. This is such a simple yet profound concept, but companies tend not to invest in providing superior customer support. Unfortunately, most companies see customer support as a cost instead of an investment.

This is an important lesson in software design for designers and product managers. Design your software for graceful failures and help people when they get stuck. They won't tell you how great your tool is, but they will remember how it failed and stopped them from completing a task. Keep the actual user experience minimal, almost invisible. People don't remember, or necessarily care about, the actual experience as long as they have an aggregate positive experience, without hiccups, in getting their work done. As I say, the best interface is no interface at all. Design a series of continuous feedback loops at the end of such minimal experiences—such as the green counter in TurboTax indicating the tax refund amount—to reaffirm the positive aspects of user interactions; they are on the memory side, and people will remember them.
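To make this concrete for designers and product managers, here is a minimal sketch, with hypothetical function names, of what failing gracefully and ending on a positive note can look like in code:

```python
# A minimal, hypothetical sketch of failing gracefully: recover where
# possible, preserve the user's work, and end the interaction with a
# clear positive signal rather than an abrupt crash.

import json
import os
import tempfile

def save_draft(form_data):
    """Persist the user's in-progress work so a failure never loses it."""
    path = os.path.join(tempfile.gettempdir(), "draft.json")
    with open(path, "w") as f:
        json.dump(form_data, f)
    return path

def submit(form_data, send):
    """Try to submit; on failure, save a draft and tell the user what's next."""
    try:
        send(form_data)
        # The "memory side": end with explicit, positive confirmation.
        return "Done! Your request was submitted successfully."
    except ConnectionError:
        draft = save_draft(form_data)
        # Graceful failure: the user knows nothing was lost and what happens next.
        return (f"We couldn't reach the server, but your work is saved at "
                f"{draft}. You can resume and resubmit any time.")

def flaky_send(_):
    # Stand-in for a network call that fails.
    raise ConnectionError("network down")

print(submit({"name": "Ada"}, flaky_send))
```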

In enterprise software, some of the best customers can be the ones who had the worst escalations but whose experience the vendor ended on a positive note. These customers do forgive vendors. For a vendor, a failed project receives far worse publicity than the worst escalation, even one that actually cost the customer a lot more than a failed project, as long as it eventually got fixed on a positive note. This is not a get-out-of-jail-free card to ignore your customers, but do pause and think about what customers experience now and what they will remember in the future.

Photo courtesy: Derek 

Monday, January 20, 2014

Focus On Abstraction And Not Complexity


I am a big fan of software design patterns. A design pattern is a general, reusable solution to a commonly occurring problem within a given context. Software design patterns are all about finding technical abstractions in complex problems by identifying patterns and applying well-known solutions to them.
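As a concrete example, here is a minimal Python sketch of one classic pattern, Strategy, which captures exactly this idea: the complexity of a varying algorithm is abstracted behind a stable interface, so callers depend on the abstraction rather than on any one implementation.

```python
# The Strategy pattern: abstract "an algorithm that varies" behind a
# common interface so callers depend on the abstraction, not on any
# one concrete implementation.

import zlib
from abc import ABC, abstractmethod

class CompressionStrategy(ABC):
    @abstractmethod
    def compress(self, data: bytes) -> bytes: ...

class ZlibStrategy(CompressionStrategy):
    def compress(self, data: bytes) -> bytes:
        return zlib.compress(data)

class NoopStrategy(CompressionStrategy):
    def compress(self, data: bytes) -> bytes:
        return data  # e.g., for data that is already compressed

class Archiver:
    """Depends only on the abstraction; the concrete algorithm can vary."""
    def __init__(self, strategy: CompressionStrategy):
        self.strategy = strategy

    def archive(self, data: bytes) -> bytes:
        return self.strategy.compress(data)

print(len(Archiver(ZlibStrategy()).archive(b"hello " * 100)))
print(len(Archiver(NoopStrategy()).archive(b"hello " * 100)))
```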

My management style is largely based on abstractions. When things get muddy, I step away from the complexity for a few minutes and explore abstractions. This helps me keep in touch with the bigger picture while I look for solutions to a given problem. When you're too close to a topic, you tend to fixate on complexity, losing sight of the bigger picture. I make a conscious attempt to move between complexity and abstraction when I need to. And that's perhaps the only way to manage effectively in pursuit of working smart and not just working hard. Complexity invariably pushes people into analysis paralysis, resulting in a decision gridlock that affects the bigger picture. In many cases, not being able to make a decision has far worse consequences than not solving a problem that may or may not be important in the long run. Abstracting complexity helps me make a decision with a focus on consequences as opposed to a short-term solution. Abstraction also allows me to spot behavioral and systemic problems as opposed to tactical and temporal ones.

Ask yourself what you remember most about a couple of complex problems you solved last year; the answer most likely won't be how great your solution was but what the problem actually taught you. It's not the complexity that you will cherish but the simplicity, the abstracted experience, that will stay with you for the rest of your life to help you find solutions to similar problems in the future.

Photo courtesy: miuenski 

Tuesday, December 31, 2013

Challenges For On-premise Vendors Transitioning To SaaS

As more and more on-premise software vendors begin their journey to become SaaS vendors they are going to face some obvious challenges. Here's my view on what they might be.

The street is mean but you can educate investors

The contrast between Amazon and Apple is sharp. Even though Amazon has been in business for a long time with soaring revenue in mature categories, the street sees it as a high-growth company and tolerates near-zero margins and the surprises that Jeff Bezos brings in every quarter. Bezos has managed to convince the street that Amazon is still in heavy growth mode and hasn't yet arrived. On the other hand, despite Apple's significant revenue growth—in mature as well as in new disruptive categories—investors treat Apple very differently and have crazy revenue and margin expectations.

Similarly, a traditional pure SaaS company such as Salesforce is considered a high-growth company where investors focus on growth and not margins. But if you're an on-premise vendor transitioning to SaaS, the street won't tolerate a hit on your margins. The street expects mature on-premise companies to deliver continuous low-double-digit growth as well as margins, without any blips and dips, during their transition to SaaS. As on-premise vendors change their product, delivery, and revenue models, investors will be hard on them, and the stock might take a nosedive if investors don't quite understand where the vendors are going with their transition. As much as investors love the annuity model of SaaS, they don't like uncertainty, and they will punish vendors for their own lack of understanding of the vendor's model. It's a vendor's job to educate investors and continuously communicate with them about the transition.

Isolating on-premise and SaaS businesses is not practical

Hybrid on-premise vendors should (and do) report on-premise and subscription (SaaS) revenue separately to give investors insight into their revenue growth and revenue transition. They also report their data-center-related cost (to deliver software) as cost of revenue. But there's no easy way, if there is one at all, to split and report separate SG&A costs for their on-premise and SaaS businesses. In fact, combined sales and marketing units are the weapons incumbent on-premise vendors have to successfully transition to SaaS. More on that later in this post.

The basic idea behind achieving economies of scale and keeping overall cost down (remember margins?) is to share and tightly integrate business functions wherever possible. Even though vendors sometimes refer to their SaaS and on-premise businesses as separate lines of business (LoBs), in reality they are not. These LoBs are intertwined and report numbers as a single P&L.

Not being able to charge more for SaaS is a myth

Many people I have spoken to assume that SaaS is a volume-only business and that you can't charge customers what you would typically charge in a traditional license-and-maintenance revenue model. This is absolutely not true. If you look at the deal sizes and contract lengths of pure SaaS companies, they do charge a premium when they have unique differentiation, regardless of volume. Customers are not necessarily against paying a premium; for them, it is all about bringing down their overall TCO and increasing their ROI with reduced time to value. If a vendor's product and its delivery model allow customers to accomplish these goals, the vendor can charge a premium. In fact, in most cases this may be the only way out. For a vendor transitioning from on-premise to SaaS, cost is going to go up: they will continue to invest in building new products and transitioning existing ones, and they will assume significant cost in running operations on behalf of their customers to deliver software as a service. They will have to grow their top line not only to meet growth expectations but also to offset some of this cost and maintain margins.
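A quick, entirely hypothetical TCO comparison illustrates the point; the numbers are invented, but the shape of the math is what customers actually run:

```python
# A hypothetical 5-year TCO comparison (all numbers invented) showing
# why a customer may happily pay a subscription premium: what matters
# is the customer's total cost of ownership, not the sticker price.

years = 5

# On-premise: license + annual maintenance + the customer's own
# operations cost (hardware, admins, upgrades).
license_fee, maint_rate, ops_per_year = 500_000, 0.20, 200_000
on_prem_tco = license_fee + years * (license_fee * maint_rate + ops_per_year)

# SaaS: a higher-looking annual subscription, but the vendor runs
# operations on the customer's behalf.
subscription_per_year = 260_000
saas_tco = years * subscription_per_year

print(f"on-premise 5-year TCO: ${on_prem_tco:,}")
print(f"SaaS 5-year TCO:       ${saas_tco:,}")
# on-premise: 500k + 5 * (100k + 200k) = $2.0M; SaaS: $1.3M.
# The vendor can charge well above the bare license amortization,
# i.e., a premium, and the customer still comes out ahead on TCO.
```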


Prime advantage on-premise incumbents have over SaaS entrants

So, what does work in favor of on-premise vendors who are going through this transition?

It's the sales and marketing machine, my friends.

The dark truth about selling enterprise software is that you need salespeople wearing suits, driving around in their BMWs, to sell software. There's no way out. If you look at high-growth SaaS companies, they spend most of what they earn on sales and marketing. Excluding Workday, there is not much difference in R&D cost across vendors, on-premise or SaaS. Workday is still building out its portfolio, and I expect this cost to go down in a few years.

Over the years, many on-premise vendors have built great brands and achieved amazing market penetration. As these vendors go through the SaaS transition, they won't have to spend as much time and money educating the market and customers. In fact, I would argue they should thank other SaaS vendors for doing the job for them. On-premise vendors have also built amazing sales machines with deep customer relationships and reliable sales processes. If they can maintain their SG&A numbers, they will have enough room to absorb a possible initial hit on revenue and the additional cost they will incur as they go through this transition.

Be in charge of your own destiny and be aggressive

It's going to be a tough transition regardless of your loyal customer base and differentiated products. It will test the execution excellence of on-premise vendors. They are walking a tightrope, and there's not much room for mistakes. The street is very unforgiving.

Bezos and Benioff have consistently managed to convince the street that they run high-growth companies and should be treated as such. There's an important lesson here for on-premise vendors. There is no reason to label yourself an on-premise vendor simply making a transition. You could do a lot more than that: invest in new disruptive categories and rethink your existing portfolio. Don't just chase SaaS for its subscription pricing; make an honest and explicit attempt to become a true SaaS vendor. The street will take notice, and you might catch a break.

Thursday, November 21, 2013

Rise Of Big Data On Cloud


Growing up as an engineer and a programmer, I was reminded every step of the way that resources—computing as well as memory—are scarce. Programs were designed around these constraints. Then the cloud revolution happened, and we told people not to worry about scarce computing. We saw the rise of MapReduce, Hadoop, and countless other NoSQL technologies. Software was the new hardware. We owe it to all the software, especially the computing frameworks, that allowed developers to leverage the cloud—computational elasticity—without having to understand the complexity underneath it. What has changed in the last two to three years is that a) the underlying file systems and computational frameworks have matured, and b) adoption of Big Data is driving demand for scale-out and responsive I/O in the cloud.
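A toy example shows what these frameworks abstract away. The word count below runs the classic map/shuffle/reduce shape on a single machine in plain Python; a framework like Hadoop runs the same shape across a cluster without the developer ever touching the distribution machinery.

```python
# A toy word count in MapReduce style, in plain Python. Hadoop-style
# frameworks run this same map/shuffle/reduce shape across a cluster.

from itertools import groupby

def map_phase(line):
    """Emit (key, value) pairs: one (word, 1) per word."""
    for word in line.split():
        yield (word.lower(), 1)

def reduce_phase(word, counts):
    """Combine all values for one key."""
    return (word, sum(counts))

lines = ["the cloud scales", "the cloud elastically scales"]

# map
pairs = [kv for line in lines for kv in map_phase(line)]
# shuffle: group all values by key (a distributed sort in real Hadoop)
pairs.sort(key=lambda kv: kv[0])
# reduce
result = [reduce_phase(k, (v for _, v in g))
          for k, g in groupby(pairs, key=lambda kv: kv[0])]
print(result)  # [('cloud', 2), ('elastically', 1), ('scales', 2), ('the', 2)]
```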

Three years back, I wrote a post, The Future Of The BI In Cloud, where I highlighted two challenges of using the cloud as a natural platform for Big Data: the first was creating a large-scale data warehouse, and the second was the lack of scale-out computing for I/O-intensive applications.

A year back, Amazon announced Redshift, a data warehouse service in the cloud, and last week it announced high I/O instances for EC2. We have come a long way, and the more I look at current capabilities and trends, the closer Big Data, at scale, on the cloud seems to reality.

From batch data warehousing to interactive analytic applications:

Hadoop was never designed for I/O-intensive applications, but because Hadoop is such a compelling scale-out computational platform, developers had a strong desire to use it for their data warehousing needs. This made Hive and HiveQL popular analytic frameworks, but it was a suboptimal solution that worked well for batch loads and wasn't suitable for responsive, interactive analytic applications. Several vendors realized there's no real reason to stick to the original style of MapReduce. They stuck with HDFS but invested significantly in alternatives to Hive that are far faster.

There is a series of such projects and products being developed on HDFS and MapReduce as a foundation, with special data management layers added on top to run interactive queries much faster than plain-vanilla Hive. Some examples are Impala from Cloudera and Apache Drill from MapR (both inspired by Dremel), HAWQ from EMC, Stinger from Hortonworks, and many other start-ups. It's not only vendors: early adopters such as Facebook, which created Hive, built Presto, an accelerated alternative to Hive, which they recently open sourced.
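The promise of these engines is that the same SQL that once ran as a slow Hive batch job can be issued interactively. As a hedged sketch, here is what that looks like through Impala's Python DB-API driver (the impyla package); the host, table, and column names are assumptions for illustration, and you would need a running Impala daemon to execute it.

```python
# A hedged sketch: engines like Impala expose Python DB-API drivers
# (here, impyla), so SQL that would once have been a multi-minute
# Hive batch job can be issued as an interactive query. The host,
# table, and column names below are assumptions for illustration.

from impala.dbapi import connect  # pip install impyla

conn = connect(host="impala-daemon.example.com", port=21050)
cur = conn.cursor()
cur.execute("""
    SELECT page, COUNT(*) AS views
    FROM clickstream
    WHERE dt = '2014-03-01'
    GROUP BY page
    ORDER BY views DESC
    LIMIT 10
""")
for page, views in cur.fetchall():
    print(page, views)
cur.close()
conn.close()
```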

From raw data access frameworks to higher-level abstraction tools:

As vendors continue to build more and more Hive alternatives, I am also observing vendors investing in higher-level abstraction frameworks. Pig was among the first higher-level frameworks that made it easier to express data analysis programs. But now we are witnessing even richer, higher-level frameworks such as Cascading and Cascalog that let developers go beyond SQL-style queries and write interactive programs in higher-level languages such as Java and Clojure. I'm a big believer in empowering developers with the right tools. Working directly against Hadoop has a significant learning curve, and developers often end up spending time on plumbing and other things that can be abstracted away by a tool. For web development, the popularity of Angular and Bootstrap is an example of how the right frameworks and tools can make developers far more efficient by not having to deal with raw HTML, CSS, and JavaScript controls.
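To illustrate what a higher-level abstraction buys you, here is a toy, single-machine stand-in for a Pig- or Cascading-style flow. The Pipeline class below is hypothetical, but it mirrors the shape of these frameworks: you compose named operations, and the framework plans the underlying MapReduce jobs for you.

```python
# A hypothetical, single-machine stand-in for a Pig/Cascading-style
# flow: compose named operations (filter, group, count) instead of
# hand-writing map and reduce functions and their plumbing.

from collections import Counter

class Pipeline:
    """Toy dataflow: in a real framework, each step compiles to MapReduce jobs."""
    def __init__(self, records):
        self.records = list(records)

    def filter(self, predicate):
        return Pipeline(r for r in self.records if predicate(r))

    def group_count(self, key):
        return Counter(key(r) for r in self.records)

clicks = [
    {"page": "/home", "country": "IN"},
    {"page": "/home", "country": "US"},
    {"page": "/buy",  "country": "IN"},
]

top = (Pipeline(clicks)
       .filter(lambda r: r["country"] == "IN")
       .group_count(lambda r: r["page"]))
print(top)  # Counter({'/home': 1, '/buy': 1})
```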

From solid-state drives to in-memory data structures:

Solid-state drives were the first step in upstream innovation to make I/O much faster, but I am observing this trend go further, with vendors investing in building in-memory data management layers on top of HDFS. Shark and Spark are among the popular ones. Databricks has made big bets on Spark and recently raised $14M. Shark (built on Spark) is designed to be compatible with Hive but to run queries up to 100x faster by using in-memory data structures, a columnar representation, and by not writing intermediate results back to disk. This looks a lot like MapReduce Online, a research paper published a few years back. I do see a UC Berkeley connection here.
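Here is a brief PySpark sketch of the in-memory idea: cache a working set once, then run repeated queries against memory instead of re-reading from disk each time. The HDFS path is an assumption for illustration.

```python
# A sketch of the in-memory idea behind Spark/Shark: cache a working
# set once, then run repeated queries against memory instead of disk.
# The HDFS path below is an assumption for illustration.

from pyspark import SparkContext

sc = SparkContext("local[*]", "cache-demo")
logs = sc.textFile("hdfs:///data/weblogs/2013-11-*")  # assumed path
errors = logs.filter(lambda line: "ERROR" in line).cache()

# The first action materializes the RDD and pins its partitions in memory...
print(errors.count())
# ...so subsequent queries reuse the cached partitions instead of
# re-reading from disk, which is where the order-of-magnitude
# speedups come from.
print(errors.filter(lambda line: "timeout" in line).count())
sc.stop()
```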

Photo courtesy: Trey Ratcliff