Quote

Target’s massive security breach exposes security process failures

…And although there are companies that blatantly violate the standards, security is a constantly changing condition, not a static one. Every time a company installs new programs, changes servers or alters its architecture, new vulnerabilities can be introduced. A company that is certified compliant one month can quickly become non-compliant the next month if administrators install and configure a new firewall incorrectly or if systems that were once carefully segregated become connected because an employee didn’t adhere to access restrictions. Companies that conduct audits also have to rely on their clients to be honest about disclosing what they have on their network — such as stored data.

To answer the question posed by the title of the Wired.com post: No. Therein lies the problem. 1
Wired link: Will Target’s Lawsuit Finally Expose the Failings of Security Audits?


  1. The nature of audits, in most professions, is that their usefulness is a function of the competency of those conducting them.

Quote

Creepy data collection in the modern workplace

Another pioneering outfit is Sociometric Solutions, which puts sensors in name badges to discover social dynamics at work. The badges monitor how employees move around the workplace, who they talk to and in what tone of voice.
One client, Bank of America, discovered that its more productive workers were those allowed to take their breaks together, in which they let off steam and shared tips about dealing with frustrated customers.

The bank took heed and switched to collective breaks, after which performance improved 23 per cent and the amount of stress in workers’ voices fell 19 per cent.

Data pioneers watching us work – FT.com

It seems that last century’s Taylorism has, for some employers, morphed into more invasive forms of employee surveillance.

Quote

Jeff Atwood’s rant on the sad state of Apps is worth a read

Nothing terrifies me more than an app with no moral conscience in the desperate pursuit of revenue that has full access to everything on my phone: contacts, address book, pictures, email, auth tokens, you name it. I’m not excited by the prospect of installing an app on my phone these days. It’s more like a vague sense of impending dread, with my finger shakily hovering over the uninstall button the whole time. All I can think is what shitty thing is this “free” app going to do to me so they can satisfy their investors?

Read the rest: App-pocalypse Now

Quote

Cautionary thoughts on the Internet of Things

The level of hype around the “Internet of Things” (IoT) is getting a bit out of control. It may be the technology that crashes into Gartner’s trough of disillusionment faster than any other. But that doesn’t mean we can’t figure things out. Quite the contrary — as the trade press collectively loses its mind over the IoT, I’m spurred on further to try and figure this out. In my mind, the biggest barrier we have to making the IoT work comes from us. We are being naive as our overly simplistic understanding of how we control the IoT is likely going to fail and generate a huge consumer backlash.

The home automation paradox – O’Reilly Radar

Quote

David Weinberger and retiring the myth of non-scalable conversations

If enough people are in a conversation, one of them will be an expert. The larger the crowd, the more unexpected will be the expertise contained within it.

Of course, “larger” in this case may mean thousands, or tens of thousands. And, to uncover really obscure expertise, you may need millions of people. Of course that also means that you’ll need a social environment where obscure expertise can rise to the top. But that’s supposed to be impossible: Conversation doesn’t scale, we were told.

We were told wrong.

Unexpected expertise – KMWorld Magazine

Quote

Matt Mullenweg: The Four Freedoms

I believe that software, and in fact entire companies, should be run in a way that assumes that the sum of the talent of people outside your walls is greater than the sum of the few you have inside. None of us are as smart as all of us. Given the right environment — one that leverages the marginal cost of distributing software and ideas — independent actors can work toward something that benefits them, while also increasing the capability of the entire community.

This is where open source gets really interesting: it’s not just about the legal wonkery around software licensing, but what effect open sourced software has on people using it. In the proprietary world, those people are typically called “users,” a strange term that connotes dependence and addiction. In the open source world, they’re more rightly called a community.

The Four Freedoms | Matt Mullenweg

Matt posted this a few weeks ago.  This blog is hosted on WordPress.com, and prior to that it ran on wordpress.org deployments across several hosting providers.  Matt’s post is worth mentioning here not because of that WordPress connection, but because it is probably the best rationale I’ve seen for the value of open source software.  Read the post via the link.

Quote

Thousands of industrial internet devices found to be vulnerable

Moore’s census involved regularly sending simple, automated messages to each one of the 3.7 billion IP addresses assigned to devices connected to the Internet around the world (Google, in contrast, collects information offered publicly by websites). Many of the two terabytes (2,000 gigabytes) worth of replies Moore received from 310 million IPs indicated that they came from devices vulnerable to well-known flaws, or configured in a way that could let anyone take control of them.

On Tuesday, Moore published results on a particularly troubling segment of those vulnerable devices: ones that appear to be used for business and industrial systems. Over 114,000 of those control connections were logged as being on the Internet with known security flaws. Many could be accessed using default passwords and 13,000 offered direct access through a command prompt without a password at all.

via Pinging the Whole Internet Reveals Unsecured Backdoors That Could Tempt Hackers and Cyber Criminals | MIT Technology Review.

Read the whole thing.
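The census Moore ran boils down to probing every address to see which services answer. A minimal sketch of that idea (a single TCP connection attempt with a timeout; Moore's actual toolchain was far more elaborate, and the function name here is my own):

```python
import socket

def probe(host, port, timeout=1.0):
    """Attempt a TCP connection; True if something accepts, False otherwise.

    A reply alone doesn't prove a vulnerability -- it only reveals that a
    service is listening, which is the first step of a census like Moore's.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Scanning billions of addresses this way is mostly a matter of doing the same cheap check massively in parallel; the troubling findings came from what the listening services said once contacted.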

Article

The corroding value of the internet cookie, and an opportunity to shape a new market

Several years ago, I first heard Doc Searls make an amusing comment about one of the basic elements of the internet universe, the browser cookie.  With full credit to Phil Windley, Doc’s historical summary of ecommerce (and much of the modern internet) went like this:

A brief history of ecommerce can be summarized as this: 1995, the invention of the cookie. The end.

The browser cookie has reigned supreme for nearly two decades.  It has given rise to marketing empires like DoubleClick (now part of Google), Omniture, and nearly every imaginable advertising network of the modern web.  Cookies also provide context beyond ecommerce, since they help sites fine-tune the user experience and reduce friction for end users.

Cookies have become so pervasive that a contextualized web without them would not be possible.  They’ve also extended well beyond providing context: most cookies now actively track internet users, often without explicit permission.  Against that backdrop, it’s hard to imagine that this atomic element of today’s web may soon fade away.
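The mechanism itself is simple: a server hands the browser a named value, and the browser replays it on every later request to that site. A minimal sketch using the standard library (the cookie name and value are illustrative):

```python
from http.cookies import SimpleCookie

# Server side: issue an identifier the browser will store and echo back.
resp = SimpleCookie()
resp["visitor_id"] = "abc123"
resp["visitor_id"]["max-age"] = 60 * 60 * 24 * 365  # persist for a year
header = resp.output()  # the Set-Cookie: line sent in the HTTP response

# Browser side (simplified): the stored value comes back in the Cookie header,
# which is what lets the site recognize a returning visitor.
req = SimpleCookie("visitor_id=abc123")
```

That replay is what makes both the convenience (staying logged in, remembered preferences) and the tracking possible: any party that can set and read a cookie can correlate visits.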

Perhaps because of how pervasive it is, and how invasive it is to personal privacy, the browser cookie is now under assault on many fronts.  The Europeans have taken to legislation as the primary vehicle to act against personal tracking technologies like cookies, Microsoft has gone as far as to enable a do-not-track setting by default in the latest version of Internet Explorer, and there are at least a dozen anti-tracking plugins for Firefox and Chrome.  Some ad-tech experts are predicting the complete collapse of the browser cookie within five years:

Five years at the most.

At my former company, my peers were the people who created cookies. We didn’t create them for this. It’s a very weak computing mechanism. It’s flawed, invasive, it’s got privacy issues, it’s going to go.

I think it will take five years to kill it. At that point, it’ll be like birds chirping and flowers blooming because we’ll find some kind of value proposition that allows consumers to trust us and opt into personalization. I term it, tailor don’t target.

via The cookie has five years left says Merkle’s Paul Cimino | Ad Exchanger.

It’s no surprise that ad-tech professionals see a paradigm shift away from cookies, but that shift isn’t being driven by a direct attack on the technology.  I can’t imagine that the ‘average’ internet user is proactively installing browser plugins to block cookies, so there has to be another reason why cookie usage has dropped so precipitously.  Earlier in the same blog post, Cimino explains:

The second main reason is that non-cookieable devices – phones and iPads, Kindles and the like – are generating traffic somewhere between 35% and 40% of our overall traffic. So 35-40% of traffic is not from computers.

Consumer behavior has shifted to these devices, which is forcing a shift away from cookies.  Although this might seem like a ‘win’ for privacy, the ad-tech world has figured out even more invasive ways to target consumers:

I can’t cookie your iPhone or your Android phone. If you are at home or you go to the same place every day, I can see the IP and part of the user agent – enough information to reasonably identify you over and over and keep that good sync between the data – the first- and third-party data and the targeting opportunity that’s out there.

The takeaway here is that, even as the value of cookies corrodes, the technological fabric of the modern web has produced even more invasive methods for tracking individual behavior.  At the same time, legislation and technology to counteract tracking remain focused on the old cookie paradigm.  Because these newer tracking systems are still immature, perhaps there is a window of opportunity for consumers to help shape a more balanced framework.
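The cookieless identification Cimino describes needs nothing stored on the device at all: a stable key can be derived from traits the device already broadcasts. A minimal sketch of that idea (the function name and trait choices are my own illustration, not Merkle's actual method):

```python
import hashlib

def device_key(ip, user_agent):
    """Derive a stable pseudo-identifier from IP and user-agent alone.

    Nothing is written to the device, so there is no cookie for the
    user to delete or block.
    """
    # Use only the user-agent's product token, so minor browser version
    # churn does not break the identifier.
    ua_family = user_agent.split(" ")[0]
    return hashlib.sha256(f"{ip}|{ua_family}".encode()).hexdigest()[:16]
```

A home or office IP changes rarely, which is exactly why "you go to the same place every day" is enough to keep the identifier in sync with first- and third-party data. Countermeasures aimed at cookies do nothing against this.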

It is this balanced framework that we are focused on developing at Customer Commons:

Customer Commons holds a vision of the customer as an independent actor who retains autonomous control over his or her personal data, desires and intentions.  In this vision, each of us will act as the optimal point of integration and origination for data about us. Customers must be able to share their data and intentions selectively and voluntarily. Individuals must also be able to know exactly what information is being held about them by those who gather it, by whatever means. To achieve this, customers must be able to assert their own terms of engagement, in ways that are both practical and easy to understand for all sides.

I encourage you to join the conversation at Customer Commons.  I will also be devoting more time to writing about how customer engagement in a modern marketplace will be significantly different, and how we can all help to shape that future, freer market.

If you are in the Bay Area during the week of May 6th, 2013, please consider joining the Customer Commons Salon that Monday evening.

Quote

Google’s take on the customer journey

These days, the customer journey has grown more complex. Before making an online purchase decision, a customer may engage with your brand through many different media channels over several days. This tool helps you explore and understand the customer journey to improve your marketing programs.

via The Customer Journey to Online Purchase – Think Insights – Google.

There are several interactive charts in that post, all of which reveal interesting characteristics of how customer interactions vary by channel of engagement, industry, and region.

Quote

Real progress in artificial intelligence


Finally, however, in the last decade ­Hinton and other researchers made some fundamental conceptual breakthroughs. In 2006, Hinton developed a more efficient way to teach individual layers of neurons. The first layer learns primitive features, like an edge in an image or the tiniest unit of speech sound. It does this by finding combinations of digitized pixels or sound waves that occur more often than they should by chance. Once that layer accurately recognizes those features, they’re fed to the next layer, which trains itself to recognize more complex features, like a corner or a combination of speech sounds. The process is repeated in successive layers until the system can reliably recognize phonemes or objects.

via New Techniques from Google and Ray Kurzweil Are Taking Artificial Intelligence to Another Level | MIT Technology Review.
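The greedy, layer-by-layer procedure the excerpt describes can be sketched in a few lines. Here PCA stands in for the unsupervised feature learning (Hinton's actual method trained restricted Boltzmann machines, not PCA), and the layer sizes are illustrative; the point is only the structure: each layer is trained on the output of the one below it.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_layer(x, n_features):
    """One layer of greedy 'feature learning': find the combinations of
    inputs that co-vary most strongly (PCA here, as a stand-in)."""
    centered = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:n_features].T  # projection matrix: inputs -> learned features

x = rng.normal(size=(200, 64))   # stand-in for digitized pixels
w1 = fit_layer(x, 32)            # layer 1 learns primitive features
h1 = np.tanh(x @ w1)             # its outputs become the next layer's inputs
w2 = fit_layer(h1, 16)           # layer 2 learns combinations of those
h2 = np.tanh(h1 @ w2)            # repeat until features are abstract enough
```

The key design choice mirrored here is that no layer needs labels or a global objective during this phase; each is trained locally, which is what made deep stacks trainable in 2006.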
