Month: November 2007

Breaking Barriers in Human Thought Extraction


Something has been really bugging me lately.  Why is it that we can get data into our heads faster than we can get it out?  By one back-of-the-envelope calculation (drop me a note if you want to see the envelope), we can load humans with about 60 Gbps of data through the well-engineered use of visualization systems.  And when done right, the brain can process all that and bounce it off vast stores of saved memory far faster still.  But when it comes to getting that data and its results back out, we are stuck typing or speaking.  So, depending on how fast a person types or speaks, the data comes out at about 40 to 120 words per minute.
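
To make that asymmetry concrete, here is a minimal sketch of the back-of-the-envelope comparison. The 60 Gbps input figure is the post's estimate; the characters-per-word and bits-per-character values below are my own illustrative assumptions, not part of the original envelope.

    # Back-of-the-envelope comparison of human data input vs. output rates.
    # Assumed: ~6 characters per word (5 letters plus a space), 8 bits per character.
    VISUAL_INPUT_BPS = 60e9  # ~60 Gbps visual input, per the post's estimate

    def output_bps(words_per_minute, chars_per_word=6, bits_per_char=8):
        """Rough output bandwidth of typing or speaking, in bits per second."""
        return words_per_minute * chars_per_word * bits_per_char / 60.0

    for wpm in (40, 120):
        out = output_bps(wpm)
        print(f"{wpm:>3} wpm is about {out:,.0f} bits/sec "
              f"(visual input is roughly {VISUAL_INPUT_BPS / out:,.0f}x faster)")

Even at 120 words per minute, the output channel is on the order of a hundred bits per second, so under these assumptions the input/output gap works out to a factor in the hundreds of millions.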

Why is this important? 

In the analysis world, whether it is financial analysis or risk analysis or intelligence analysis or medical analysis, sometimes we humans must coordinate and collaborate to get to the right answer.  The default way we do that is through voice, of course.  So, you have multiple humans creating knowledge at the speed of thought and then trying to exchange it at a snail’s pace.  That has been a frustration of fast thinkers for ages. 

After wrestling with this problem for a few years, I think I know part of the answer.  We don't need brain implants or advanced ESP skills to solve this one.  We need more widespread use of advanced devices for human-generated computer graphics.

I’m going to mention one in particular to illustrate the point:  The TouchTable.   

I imagine most CTOs have at least heard of the TouchTable.  This capability was developed by Applied Minds Inc to meet the requirements of a special group of users in the Defense Intelligence Agency.  Many exciting videos of the TouchTable exist on the Internet, one of the most recent featuring TouchTable CEO Rocky Roccanova on a Wired Science episode.

Now back to my point.  What if every human who had to have elegant interactions or collaborations around advanced, higher-order thoughts had a TouchTable, and what if those devices were all linked together to facilitate information sharing?

The speed of getting data out of one human's head and into another's would change significantly.  Ideas and thoughts could be created visually and received by people through their fastest input pipe.  And since it is fully digital, humans could use the TouchTable to collaborate around the globe.

When viewed this way, advanced devices like the TouchTable can really help break the bottleneck in getting thoughts out of, and then back into, our heads.

For more see: http://www.touchtable.com/site/

Convergence Context for Technologists


When CTOs think of the word convergence, we usually associate it with older concepts like the convergence of voice, video and data onto common communications paths, or the convergence of multiple devices into fewer devices.  These ideas are still very powerful, and many of today's major concepts in IT, like unified communications, utility computing and even mashups, are direct descendants of convergence ideas.

But there is an even more powerful convergence underway.  The convergence of computer science, political science, business management and innovation. 

There are few masters of this new discipline.  In fact, I know of only one, Lewis Shepherd.  Lewis is the recently departed chief of the innovation directorate of the Defense Intelligence Agency (DIA).  Lewis has been a friend and teacher for several years.  I've had the benefit of frequent access to him, and he has had a significant impact on my thinking on subjects related to technology.  More importantly, he has had a tremendous impact on the US Intelligence Community and the Department of Defense.  The entire national security community is better off because of his years of service.

Technologically, he has been a pioneer in areas like mashups, web2.0, AJAX, gadgets, widgets, blogs, wikis and collaborative environments.

Lewis was a daily blogger for years, but his writings were primarily posted on the top secret network of the intelligence community (via JWICS).  Lewis has now transitioned to a new job as CTO for Microsoft's Institute of Advanced Technology for Government, a position that will allow him to share more thoughts with a wider swath of readers.  Lewis is doing that now on his blog, "Shepherd's Pi."  Check it out!

The future of information sharing technology


I'm presenting a briefing at the Homeland Defense Journal-sponsored conference on information sharing on 27 November 2007 (for more info see: homelanddefensejournal.com).

Here is a bit on the goal of the conference from their website:

The Information Sharing Strategies Conference will bring together government officials, emergency responders and the private sector to address the critical issues of information sharing. Attendees to the Information Sharing Strategies Conference will hear first hand how the National Strategy on Information Sharing will be implemented and supported across the country. Speakers will provide insight into the roadmap that defines the nation's ability to share information and how agencies can obtain funding and training. This is an opportunity for government and industry representatives to come together for a common purpose to understand the current state of Information Sharing and where the nation is going in the near future.

I have a thesis in my presentation that I’ll be soliciting thoughts on.  My thesis is this:  I believe that we technologists can and should do more to support the nation’s secure information sharing enterprise strategy.   One of the ways we can do this is by helping decision-makers think through several aspects of the future of technology.  By thinking through the future we can extract lessons directly relevant to today.  For more on my thesis, see the presentation I’ve prepared for this conference ( Download InfoSharingTechnologyFutures.ppt (1662.0K) ).   For more on the government’s strategy for secure information sharing, see the national strategy for information sharing ( Download NationalStrategyforInformationSharing.pdf (420.4K) ).

A Federal CTO?


One of the candidates for President of the US announced an intention to appoint a federal CTO.  This is such a good idea that I imagine all candidates are about to say similar things.   And if that happens we could see some really positive change in the federal IT space. 

Here are some anecdotal stories that lead me to that conclusion:

– In the month since leaving my position as CTO of a DoD agency, three friends in CTO positions at other agencies have asked for my comments on what the optimal role of a CTO should be.  That was a common discussion topic when I was in government, but lately the topic seems to be heating up again.

– Also in the last month, I've met with a former CTO, a standing CTO and a former agency head to brainstorm restarting the old Federal CTO summit.  We and many others see the need for that sort of venue.  We will organize a great session, I know, and I imagine it will be widely attended by federal CTOs.  We find that all CTOs share similar challenges and enjoy interacting this way to solve problems.

– Over the last two months, I've cooperated with volunteer friends in an industry-government collaborative group to form a stronger CTO community.  In a recent panel, three of us CTOs were asked questions like "Should the CTO position be mandated in law the way CIO positions are?" and "How should CTOs be selected?"  I'm not sure we had great answers to those questions, but the interest in the topic made me believe the time is right for the nation to have more advanced approaches to technology governance.

– I was asked by friends today for ideas on how to strengthen CTO-to-CTO relationships in industry.  I would almost chalk that up to coincidence, but there is so much other movement in this area that I'm starting to think there is no such thing as a coincidence.  Industry is adopting more advanced models of technology leadership, and that is yet another reason the government can move with confidence in finding a way forward.

Those are just some personal anecdotes that point to the heightened discussion I hear underway.  These and many other discussions influence my belief that the time is right for stronger CTO leadership for the federal government and for the nation. 

But one of the greatest drivers of my opinion on why the federal government needs a stronger CTO-type leader is the way the position of CTO was articulated by Jonathan Schwartz.  I already quoted him in a previous blog entry, but for completeness' sake I'll reference him again here.  Jonathan said:

From Jonathan Schwartz’s Weblog.

"I
found myself talking to a group of media company CEO’s. I asked a
simple question, "do you have a general counsel reporting to you?" The
answer was universally, yes.

I do, too. Mike and his team are central to the evolution of Sun (as
I’ve said, we are nothing less, or more, than an intellectual property
company – it’s hard navigating those waters without a great legal team).

But then I asked a harder question: "Do you have a chief technology officer reporting to you?"

I do, and I talk to Greg at least every day. He plays a central role
at Sun. Central as in nervous system. He’s involved in every major
strategic decision I make (and a ton of minor ones, too).

But in response to my question, the answers from the group were more
dismissive than substantive – most did not. And in my view, if you have
a general counsel reporting to you, and not a CTO, you’re saying legal
advice is more important to you than technology counsel. Which seems
backward for a media company. Why?"

Stay tuned for more…

Thin Clients and The Enterprise


CTOs, like everyone else in the enterprise, need to contribute to workforce agility, mobility and productivity, and they need to do that in a way that provides security.  And all that needs to be done in a way constrained by fiscal realities.

Which leads to the point of this post. All CTOs need to be aware of the wave of thin client computing that is sweeping through the industry. Here are some thoughts and observations:

– Some non-technical types assume the term "thin client" means "web browser" or "web services".  These are different concepts.  We should all embrace web services, and applications that can run in a browser are good, but real thin clients run a full spectrum of applications, not just web apps.  To put it another way: today, real thin clients can run almost all applications.

– Thin clients come in many forms.  The one you want is the one that meets your mission needs, so discussions about which solution to choose should begin with the functionality you need.

– There is a great deal of information available on which thin client solutions really work, but there is also a great deal of spin provided by some vendors who want to take your money.  So, in this area more than many others, you would be well advised to scrutinize the claims in vendor press releases, speeches and marketing material.  (I've never tried this, but I wonder: what if every contract you issue said you don't pay a dime if any vendor statement is proven false?)

My personal favorite architecture for thin client computing is one based on the Sun Ray.  There are several other providers of true thin devices (and it is possible to run thin client architectures on old-fashioned PCs), so my recommendation is to look around before making a decision, but I know firsthand the benefits of the Sun Ray.  Here are some:

– Security.  Sun Rays provide significantly enhanced protection, including virus protection.  The Sun Ray architecture has security designed in from the very beginning, and only a very small instruction set runs on the Sun Ray, so a virus or other malicious code will not run on it.  Thin clients that use other operating systems (like Linux) can be configured in a way that is more secure than regular PCs, but the potential for vulnerabilities is far greater than with a Sun Ray thin client.

– Cost.  The incremental cost of adding a new user desktop is so low that most IT departments are surprised at the savings they generate.  The server-side equipment does have a cost, but by the time you field around 100 users the architecture already becomes more economical than fielding fat clients, and for enterprises with more than 100 users the cost per seat drops off significantly.  (If you ever hear a vendor claiming their thin client has a lower cost per seat, start asking the hard questions, like how they calculated cost.)

– Energy savings.  This is critically important for many reasons.  It is good to be green, of course, but there are also direct fiscal savings tied to energy cost.  My last organization is saving about $3M per year in energy costs: the Sun Ray draws around 4 watts of power, while old-fashioned PCs can draw 140 or more watts, and that adds up (see the rough per-seat arithmetic sketched after this list).

– Reliability. There is a law of computing that flows directly from the laws of physics: Anything with moving parts will break.  And the more moving parts the more it will break. Sun Rays don’t have spinning hard drives or whirling fans or turning CD drives.  They just keep running and running and running.
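
As a sanity check on those energy numbers, here is a minimal sketch of the per-seat arithmetic.  The 4-watt and 140-watt draw figures come from the post; the always-on usage, the electricity price and the seat count are my own illustrative assumptions, and the estimate ignores cooling savings.

    # Rough per-seat energy savings for thin clients vs. fat clients.
    # The 4 W and 140 W figures are from the post; hours, price and seat
    # count below are assumptions for illustration only.
    THIN_CLIENT_WATTS = 4
    FAT_CLIENT_WATTS = 140
    HOURS_PER_YEAR = 24 * 365      # assume devices are left powered on
    PRICE_PER_KWH = 0.10           # assumed electricity price, in dollars

    def annual_savings_per_seat():
        """Dollars saved per seat per year from the lower power draw alone."""
        watt_delta = FAT_CLIENT_WATTS - THIN_CLIENT_WATTS
        kwh_saved = watt_delta * HOURS_PER_YEAR / 1000.0
        return kwh_saved * PRICE_PER_KWH

    per_seat = annual_savings_per_seat()
    print(f"about ${per_seat:,.0f} saved per seat per year")
    print(f"about ${per_seat * 20000:,.0f} per year across 20,000 seats")

Under those assumptions, a 20,000-seat enterprise lands in the low millions of dollars per year, which is roughly consistent with the $3M figure above once cooling and related costs are factored in.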

For more on Thin Clients see:

http://en.wikipedia.org/wiki/Thin_client

http://www.afcea.org/signal/articles/anmviewer.asp?a=427&print=yes

http://www.sun.com/sunray1/index.html

http://www.gdc4s.com/content/detail.cfm?item=35ce4913-a9b8-4919-bb95-3855bbfcdb57

http://www.eweek.com/article2/0,1759,1958667,00.asp

Google Upgrades and Enhancements


The following is a review of some new capabilities at Google (these were all announced and discussed on the official Google blog):

– Better and faster route planning on maps.  Google Maps now has draggable directions, and you can also enter guidance and info on Google Maps like "avoid highways" or calculate "time-in-traffic".  That's pretty cool.

– A new operating system and open architecture for mobile phones has been announced.  This is a comprehensive platform for mobile devices that should give a significant boost to innovation in the mobile market.  A software development kit will be available 12 November.

– A new architecture and framework for social network data exchange has been developed, known as "OpenSocial".  This is a set of common APIs that make it easy to create social software.  Widespread adoption of these APIs will make the net a more social place and will make it easier on humans like us.  Global members of the OpenSocial community include MySpace, Engage.com, Friendster, hi5, Hyves, imeem, LinkedIn, Ning, Oracle, orkut, Plaxo, Salesforce.com, Six Apart, Tianji, Viadeo, and XING.

These are just three of a long line of capabilities coming from these guys.  Capabilities are also continuously being added to other elements of the Google platform.  For example, Google Docs is frequently being upgraded, and since its capabilities are delivered as a web service, the upgrades are automatic and painless.

Google has come a long long way from just a search company.

But there are more significant points for CTOs.  For example, since these capabilities are just some recent examples, are we starting to become numb to the news of new things from Google?  Each of the items mentioned above has the potential to be very disruptive to the current web services world.  But how can we analyze or assess the potential disruption if we have become so numb to the news?

And another thought:  What if the rate of innovation or even the rate of perceived innovation slows down from Google?  We have become hooked on the frequent announcement of incredible new capabilities.  What will we do when the announcements slow? 

But, as an enterprise technologist, my biggest question is always "when will the capabilities being fielded by Google be available inside the enterprise?"  On behalf of users everywhere, I hope the answer is "soon."

Some references to the above:

http://googleblog.blogspot.com/2007/11/road-to-better-path-finding.html

http://google-latlong.blogspot.com/2007/06/its-click-drag-situation.html

http://www.openhandsetalliance.com/developers.html

http://googleblog.blogspot.com/2007/11/opensocial-makes-web-better.html

 

An Enterprise CTO’s View of Cyberspace


I heard yet another definition of cyberspace today.  I won't repeat it here; I'll just say it was an academic's definition and it was somewhat useful to the particular conversation we were having.  But it pointed to an enduring problem for those who try to study these concepts: if everyone everywhere can create their own definition of cyberspace, then how can we study this field?

At the risk of clouding things even further, I’ll propose a solution.

The term cyberspace is primarily used in two ways:  1) As a metaphor and 2) As a description of the modern IT environment.   Depending on which of those two usages is required, there are two easy definitions.  They follow:

When used as a metaphor, the term cyberspace must convey the full complexity and elegance of a new environment that is man-made but seems alive.  When a definition is required for this usage, there is none better than that written by the man who coined the term.  William Gibson wrote about it in 1984 in his book Neuromancer.  He defined cyberspace as "A consensual hallucination experienced daily by billions of legitimate operators, in every nation… A graphical representation of data abstracted from the banks of every computer in the human system.  Unthinkable complexity."  For me that term puts you in the frame of mind to creatively explore the realm of the possible, and it is still very much of use.

When the term is used in the context of real world actions and infrastructure, we can now use a very simple definition.   Cyberspace = The Information Technology in operation today (including hardware, software and data).

Is there any need for any other use of the term "cyberspace"?