Twitter for Business webcast from O’Reilly


As a regular user of Twitter, I often get asked by business people whether I see value in using the micro-blogging service for business.  While the nuances of Twitter may vary depending on whether you are in marketing, customer support, or another part of an enterprise, I do think it’s an important new channel for most business folks to consider.  For marketers, it has become increasingly important to watch your organization’s brand.  For some customer support organizations, Twitter is becoming another channel to watch as customers begin to use it as a vehicle to broadcast their frustrations with products.  For others, Twitter can serve as a really useful tool to communicate with a wide array of people quickly and efficiently.  O’Reilly Media recently did a webcast with just this focus.  I’ve embedded the webcast here, followed by a link to the O’Reilly website.

Webcast: Twitter for Business

Twitter–the messaging service that lets you send instant, short updates to people around the world–is fast becoming a mainstream communication tool. Hundreds of brands and thousands of companies use it to connect with customers and co-workers, and new micro-messaging services are springing up every week to meet specific corporate needs.

Credit: Guy Kawasaki


Thoughts from Dreamforce (Salesforce Developer Conference 2008)


Last week I was at the Salesforce developer conference, Dreamforce 2008.  Most of the significant news (linked below) to emerge from the Moscone Center was tied to the theme of ‘Cloud Computing’.  This included announcements related to the availability of Facebook integration, the ability for customers to host their websites on the Salesforce ‘Cloud’ (a service known as Sites), and the continued expansion of the force.com platform into areas far and wide (verticalization).  Companies usually wait for major conferences to make major announcements, and most of this year’s news created the appropriate level of buzz in the hall and on the web.  Clearly Salesforce sees its existing SaaS model extending beyond hosted enterprise software and deeper into other parts of the infrastructure business as well.  In general, Dreamforce was full of optimism at a time, and in a market, in need of such excitement.  Some even ventured to call it the ‘Cloud Computing Woodstock’, perhaps inspired by the presence of Neil Young.  On a more serious note, beyond the major introductions, there were other important vibes to groove to.  Here are some thoughts.

Salesforce Adds Robust Knowledge Management to Customer Service and Support

This was also, I believe, the first major event for Salesforce since their acquisition of Instranet, the Europe-based knowledge management application company.  Throughout the three-day conference, Salesforce Knowledge Management (SKM) – as Instranet has now been relabeled – played an important role in many of the presentations I attended, and it reflects the growing importance of the customer service and support (CS&S) space for Salesforce.  In the last year or so, I’ve noticed an increased presence of Salesforce in the CS&S space.  With SKM, Salesforce closes a major hole in its CS&S product offering.  Based on the demonstrations I saw of the SKM product, they are now poised to make a major play for CS&S mindshare in the U.S.  Instranet has been a big player on the European scene, with a significant share of the incumbent telcos and other high-volume, low-complexity call centers in the Old World.

SKM approaches CS&S knowledge management differently than the traditional KM vendors in the space.  It is a much lighter application to deploy than the established players’ offerings.  By light, I mean that SKM does not need a lot of administrative tweaking or constant management of ontological or taxonomic layers.  That flexibility allows customers to ‘hard code’ processes around SKM, which is critical in many high-volume call centers.  By baking in processes, these large call centers effectively reduce training time.  For Salesforce, adding a ‘light’ application to its solution stack makes a whole lot of sense, as it plays to the core sales proposition of the entire suite of applications: rapid deployment that, in turn, begins to pay benefits (ROI) immediately.  Light doesn’t necessarily mean lightweight, as I have heard that SKM is working its way through the Consortium for Service Innovation’s Knowledge Centered Support (KCS) verification process.  When verified, Salesforce will be one of the few KCS-verified companies with an integrated incident management and knowledge management solution.

Multi-Tenant Integration may lead to real rewards for Customer Service and Support (and product management)

Recently, Salesforce enabled seamless Salesforce to Salesforce (StoS) integration, which allows two Salesforce-hosted instances to be integrated by application administrators without complex coding.  This multi-tenant integration was demonstrated on the second day in the lead-up to Michael Dell’s keynote.  The demo focused on how, through multi-tenant integration, Dell was effectively building an opportunity supply chain through which downstream distributors, and Dell, could coordinate sales activities.  In this instance, the integration allowed for a tighter relationship than traditional partner management systems could.  The demo got me thinking about how this type of integration could be used to enhance customer service and support.

A major challenge that I’ve seen many call centers, and help desks, deal with is managing customer exceptions on products developed by other companies (think wireless telcos reselling handsets, for example).  Many manufacturers today incorporate OEM solutions into their final product.  When these products encounter exceptions, the manufacturer often has to deal with both their own product issues and those of the OEM embedded product.  I’ve visited some call centers where the OEM exception rate can be upwards of 80%!  Basically, this means that those call centers are spending more time handling someone else’s exceptions than their own.  This has two major implications.  First, the overhead of dealing with downstream exceptions skews the cost of running the call center.  Second, the downstream OEM manufacturer usually never receives a detailed understanding of how – and how many – exceptions are caused by their product.  In one instance that I observed several years ago, the OEM product was not only causing a high level of exceptions, but the final product manufacturer was also overlooking a performance guarantee from the OEM because the exception handling wasn’t fully understood.  In an integrated CS&S ‘supply chain’, these critical gaps could be easily addressed.

Some of this is pure speculation on my part, but I’m assuming that Salesforce will eventually expose CS&S to CS&S integration.  If so, two-way CS&S relationships could be enabled to enhance both the support agent experience and the overall product experience.  First, integrated analytics could be exposed to OEM support agents, allowing them to be more proactive in dealing with upstream support issues.  Second, OEM product management would have greater insight into product exceptions, and that insight would be delivered faster than through traditional channels.  Third, knowledge-sharing relationships could be created whereby the combined knowledge base, through the SKM product, would reflect a broader understanding of both the final product and the OEM product.

Of course, this is all predicated on all the companies involved using Salesforce as their CS&S solution.  For Dell, their sheer size must play a key role in their ability to influence their partners’ choice of SFA applications.

Final Thoughts

I attended Dreamforce on behalf of a non-profit, Silk Screen, that I’m actively involved with.  At Silk Screen we have been very grateful for the Salesforce Foundation’s donation of seat licenses.  It makes our management of resources more efficient and, most importantly, lets us apply our funding to the central cause of the organization.  What pleasantly surprised me at Dreamforce was how central the Foundation’s activities are to the corporate culture.  I learned quite a lot from the sessions I attended on the non-profit front as well.


Tim O’Reilly discusses web 2.0 and cloud computing


Over the last few months I’ve been asked, a lot, by some smart folks how web 2.0 and cloud computing are defined, and what their impact will be on technology as a whole.  Since both terms are used very loosely, often by marketers who aren’t knowledgeable in either field, web 2.0 and cloud computing have somehow melded into one concept for many people.  This, however, is not the right way to look at things.  In a recent email to a friend I put forth my thoughts on the matter, and was busy recrafting that email into a post until I read Tim O’Reilly’s post this evening.  As expected, his definitions are much better than mine.  He also goes on to develop a case for the future impact of both concepts on the technology industry:

Web 2.0 and Cloud Computing – O’Reilly Radar

I believe strongly that open source and open internet standards are doing the same [migrating the point of profit] to traditional software. And value is migrating to a new kind of layer, which we now call Web 2.0, which consists of applications driven not just by software but by network-effects databases driven by explicit or implicit user contribution.
So when Larry Ellison says that cloud computing and open source won’t produce many hugely profitable companies, he’s right, but only if you look at the pure software layer. This is a lot like saying that the PC wouldn’t produce many hugely profitable companies, and looking only at hardware vendors! First Microsoft, and now Google give the lie to Ellison’s analysis. The big winners are those who best grasp the rules of the new platform.

So here’s the real trick: cloud computing is real. Everything is moving into the cloud, in whole or in part. The utility layer of cloud computing will be just that, a utility, without outsized profits.

But the cloud platform, like the software platform before it, has new rules for competitive advantage. And chief among those advantages are those that we’ve identified as “Web 2.0”, the design of systems that harness network effects to get better the more people use them.

Read the whole post, it’s worthwhile.

Quick thought: Uncertainty dominates enterprise software market


So I figured I’d write a quick post this morning while I wait to catch my connecting flight home. This is also my first post via the iPhone WordPress application, so it’s somewhat of a test from this device.
=====================

This week I was at the SSPA trade show in Las Vegas, and it was the quietest I’ve ever seen a major industry event. I don’t have actual numbers, but it seemed like attendance was down. I’m sure some of this reflects general economic softness, but I also think that the dramatic gyrations in the financial markets must have led to last-minute cancellations. Either way, if this one data point reflects the level of activity in the overall market, then enterprise software is about to face its biggest slowdown since the beginning of this decade…at least from a new license perspective.

The week before, Consona had their annual user meeting, also in Las Vegas. While attendance was down, the existing user base in both the CRM and ERP divisions seemed to be energized.

If the two events reflect any larger trend, those enterprise software companies that have a business model that is built on new license sales are in for some very choppy waters…


Docstoc rolls out several enhancements for document sharing web application


Docstoc, which debuted late last year, has rolled out several interesting features for its cloud-based document sharing platform. The new features focus on the ability to upload and manage private documents on Docstoc, something that wasn’t possible previously. Competitor Scribd has had this feature since the beginning, so Docstoc is playing catch-up here. While I doubt many company secrets will be uploaded to these types of services, they do serve an important purpose in allowing the creator of content to manage who gets access to it and what those authorized users can do with it. Docstoc’s example of a business plan for a startup is a good scenario where document availability may need to be tightly controlled. Read more at Docstoc’s weblog:

Docstoc.com Blog » Introducing Docstoc Private Documents

Ultimate Document Protection: Want to share a Word or PowerPoint file without giving away the actual source file? Upload your document and mark it copyright, and your document can be viewed but not downloaded.

Who will own the cloud?


A very robust discussion about cloud computing is going on over at the O’Reilly Radar, associated with this post:

Open Source and Cloud Computing – O’Reilly Radar

I’ve been worried for some years that the open source movement might fall prey to the problem that Kim Stanley Robinson so incisively captured in Green Mars: “History is a wave that moves through time slightly faster than we do.” Innovators are left behind, as the world they’ve changed picks up on their ideas, runs with them, and takes them in unexpected directions.

A solid primer on cloud computing


Dion Hinchliffe, over at ZDNet, has a great description of cloud computing, why it’s gathering steam, and how it may impact enterprise applications in the years to come.

Enterprise cloud computing gathers steam | Enterprise Web 2.0 | ZDNet.com

The days when organizations carefully cultivated vast data centers consisting of an endless sea of hardware and software are not over, at least not yet. However, the groundwork for their eventual transformation and downsizing is rapidly being laid in the form of something increasingly known as “cloud computing.” This network-based model for computing promises to move many traditional IT capabilities out to 3rd party services on the network.

It’s a lengthy post that I’d recommend everyone read.

Apple’s flawed foray into Cloud Computing acknowledged by Steve Jobs


Ars Technica is reporting that Steve Jobs sent out an internal email acknowledging missteps in the rollout of MobileMe:

Steve Jobs: MobileMe “not up to Apple’s standards”

“The MobileMe launch clearly demonstrates that we have more to learn about Internet services,” Jobs says. “And learn we will. The vision of MobileMe is both exciting and ambitious, and we will press on to make it a service we are all proud of by the end of this year.”

Apple may get a mulligan this time from the faithful, but the cloud computing space is rapidly becoming the most competitive place on the internet, and such failures won’t be tolerated for long.

My own experience with MobileMe has left me disappointed, and I’ve decided to stay with the Google Calendar and Gmail solution, with SugarSync added to manage my files across computers.

What to watch for on the Cloud


I’ve been catching up this morning with some weekend posts, and came across (another) excellent post at GigaOM. We all know that cloud computing is developing into the hottest non-iPhone topic of 2008, so here’s an excellent post from GigaOM on where the development is taking place:

Inside the Cloud: 9 Sectors to Watch – GigaOM

…there are distinct sectors of the IT industry that are particularly well suited to the on-demand, pay-as-you-go economics of cloud computing. Here are eight segments — and one company that’s a segment all its own — that we’re tracking closely.

Amazon’s S3 was down for a long stretch yesterday


In recent months I’ve written more and more about Cloud Computing, and have been a fan of its most prominent tool, Amazon’s S3. Well, it looks like S3 had another major failure yesterday. I didn’t notice because I spent the day away from the internet, but according to many blog posts, the service was down for several hours. That’s not good, Amazon. Here’s a link to Om Malik’s post regarding the failure:

S3 Outage Highlights Fragility of Web Services – GigaOM

That said, the outage shows that cloud computing still has a long road ahead when it comes to reliability. NASDAQ, Activision, Business Objects and Hasbro are some of the large companies using Amazon’s S3 Web Services. But even as cloud computing starts to gain traction with companies like these and most of our business and communication activities are shifting online, web services are still fragile, in part because we are still using technologies built for a much less strenuous web.