Month: February 2009

Enhancing Security and Functionality At The Same Time

Posted on

Have you ever been sucked into the false debate over how much of the IT budget should be spent on security?  I used to all the time.  Some folks point to a rule of thumb that goes something like “ten percent of the IT budget should be applied to security.”  That old-school formula may well be part of the reason we got into the mess we are currently in.  It reinforces the idea that security can be separated from the rest of IT.  By my way of thinking, 100% of the budget goes to security and functionality, and that is the calculus.

Really, security is about ensuring information confidentiality, availability and integrity.  And those constructs are inseparable from the functionality of IT.  I try whenever possible to use the terms security and functionality in the same context just to underscore that point.

For example, the goal I continually push regarding security in the federal space is not just one dealing with security.  I put it this way:  “Security and functionality of all federal IT will be increased by two orders of magnitude in the next 24 months.”  Putting the goal this way also underscores that it is not security vs. functionality.  Both need to increase.

This goal also cries out for metrics in security and functionality.  For functionality there are many customer-focused survey methods that can help collect the right metrics.  For security, I think one metric stands out above all others:  detected unauthorized intrusions.  There are many other important metrics for other dimensions of the security problem, but that one is key.  So, a goal that expects both security and functionality of federal enterprise IT to improve by two orders of magnitude will expect customer survey satisfaction to go through the roof, and will expect detected intrusions to drop dramatically.  Two orders of magnitude is a factor of 100: if there were 50,000 detected intrusions in 2008, there should be fewer than 500 in 2010.
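The arithmetic behind that target is worth making explicit.  A quick sketch, using the post's illustrative 2008 figure:

```python
# "Two orders of magnitude" means a factor of 10^2 = 100 improvement.
baseline_2008 = 50_000   # detected intrusions in 2008 (illustrative figure)
improvement = 10 ** 2    # two orders of magnitude
target_2010 = baseline_2008 // improvement
print(target_2010)       # 500
```

The same factor applies to the functionality side of the goal, though survey-based satisfaction metrics are harder to reduce to a single number.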

That is a dramatic goal.  What makes me think it is achievable?  In part the dramatic action being put in place today in the federal space.  And in part by dramatic new technologies and approaches like private clouds and thin client computing and enhanced identity management and authorization methods.  But of more importance and more relevance than all of that, in my opinion, is the coordinated action and leadership underway by CIOs and CISOs and the security experts in the federal space today.

As evidence of this incredible positive action I’d like to bring your attention to a release by a Consortium of US Federal Cybersecurity Experts on Consensus Audit Guidelines.  Details of this effort are at http://www.sans.org/cag/

The Consensus Audit Guidelines provide the twenty most important controls and metrics for effective cyber defense and continuous FISMA compliance.   These controls and metrics include:

Critical Controls Subject to Automated Measurement and Validation:

  1. Inventory of Authorized and Unauthorized Hardware

  2. Inventory of Authorized and Unauthorized Software

  3. Secure Configurations for Hardware and Software on Laptops, Workstations, and Servers

  4. Secure Configurations of Network Devices Such as Firewalls and Routers

  5. Boundary Defense

  6. Maintenance and Analysis of Complete Security Audit Logs

  7. Application Software Security

  8. Controlled Use of Administrative Privileges

  9. Controlled Access Based On Need to Know

  10. Continuous Vulnerability Testing and Remediation

  11. Dormant Account Monitoring and Control

  12. Anti-Malware Defenses

  13. Limitation and Control of Ports, Protocols and Services

  14. Wireless Device Control

  15. Data Leakage Protection
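To see why controls like the first one lend themselves to automated measurement, consider a minimal sketch of a hardware-inventory audit.  This assumes an authorized-asset list and a discovery scan supplied by some external tool; the MAC addresses and the `audit` helper are invented for illustration, not part of the CAG itself:

```python
# Approved device identifiers (in practice, drawn from an asset database).
AUTHORIZED = {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"}

def audit(discovered_macs):
    """Compare one discovery scan against the authorized list.

    Returns (unauthorized, missing): devices seen but not approved,
    and devices approved but not seen in this scan.
    """
    found = set(discovered_macs)
    unauthorized = found - AUTHORIZED   # on the network but not approved
    missing = AUTHORIZED - found        # approved but absent this scan
    return unauthorized, missing

unauth, missing = audit(["aa:bb:cc:00:00:01", "aa:bb:cc:00:00:99"])
print(unauth, missing)
```

Because the check is a simple set comparison, it can run continuously and feed the kind of metrics program the goal above demands.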

Additional Critical Controls (not directly supported by automated measurement and validation):

  1. Secure Network Engineering

  2. Red Team Exercises

  3. Incident Response Capability

  4. Data Recovery Capability

  5. Security Skills Assessment and Training to Fill Gaps

The site at http://www.sans.org/cag provides more details on each, including detailed descriptions of the controls, how to implement them, how to measure them, and how to continuously improve them.   The site also spells out the fact that this is a work in progress and processes are in place to ensure this great effort remains relevant and maximizes our ability to protect ourselves.  

What should CTOs think about this guidance?  As for me, I most strongly endorse it. In my mind the appropriate implementation of these controls will reduce unauthorized intrusions in any enterprise. 

The deeply respected community leader Alan Paller said it this way:

“This is the best example of risk-based security I have ever seen,” said
Alan Paller, director of research at the SANS Institute.  “The team that was
brought together represents the nation’s most complete understanding of
the risk faced by our systems. In the past cybersecurity was driven by
people who had no clue of how the attacks are carried out. They created an
illusion of security. The CAG will turn that illusion to reality.”
 

Please give these controls a read, and please help get them into the hands of the security and functionality professionals in your enterprise.

The Future of the Grid: From Telecommunications to Cloud-Based Servers

Posted on

There was once a time long, long ago when telecommunications and
computing were two different concepts.  That was the age when phone
company operators manually switched calls and computers like ENIAC
were programmed by patches and cables.  Since then the two fields have
been on a convergence path.   The many advances in both fields since
the 1940’s make for exciting reading for computer and telecom fans, but
rather than recount those achievements here I’d rather talk about a
more modern achievement of note, the establishment of the Advanced
Telecommunications Computing Architecture (ATCA or AdvancedTCA). 

ATCA is an open standard that has been around since about 2003.  It has
been continually enhanced and today it is perhaps the most broadly
accepted standard in the telecom industry, with over 100 companies
participating in development and implementation of the specification.  Perhaps more important is the adoption of the standard in the telecommunications industry.  A review of Wikipedia entries and other open information (like the Intel Embedded and Communications Alliance) indicates the typical “hockey-stick” adoption seen with other highly reliable, highly virtuous standards.  IDC projects the ATCA market will be about $2.7 billion in size by 2013.  I think the global financial crisis and the ongoing wave of mergers and purchases of smaller comms and equipment providers by larger ones will accelerate this trend even faster, as the need for modular, low-cost, highly reliable standards grows even stronger.

Network equipment providers face two challenges that they are addressing with ATCA: 1) the need to continue to deliver new platforms and applications and 2) the need to reduce costs and improve productivity.  ATCA addresses both: it provides a common platform that lowers costs, reduces maintenance, allows the use of third-party boards, and reduces vendor lock-in (more on ATCA capabilities is below).

In my opinion, enterprise CTOs should work to accelerate moving the
ATCA standard and compliant products into data centers.  It results in
more computer power per square inch, higher reliability, power savings,
cost savings, long term maintainability, and a path for upgrade that
does not require forklifts.  ATCA is not something that currently scales down to small network devices, but it is something that I believe will prove to be perfect for data center server support.

Here is more on ATCA:

– Boards (blades) in an ATCA shelf are hot swappable.
– There is not a “bus” for communications in an ATCA shelf.  Instead,
boards communicate point to point, which is faster and ensures there is
not a single point of failure like in the bus model.
– Any switching fabric can be used.
– Boards can be processors, switches or specially designed advanced cards, if desired.
– The most advanced shelf management capability ever designed is in
the ATCA container.  If any sensor reports a problem the shelf manager
can take action or report the problem to a system manager. This action
could be things like turning up a fan or powering off a component or
telling a human that something needs to be replaced before failing.
– It is designed for very high reliability and very high availability. 
– It runs cooler, even with its higher powered processors.
– It supports a healthy multi-vendor, interoperable ecosystem.
– It is based on open standards vice proprietary (locked-in) solutions.
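The shelf-management behavior described above can be pictured as a simple dispatch: a sensor report either looks normal, triggers a local corrective action, or escalates to a human.  This is an illustration of the idea only, not the real ATCA/IPMI management protocol; the sensor names and threshold are invented:

```python
def shelf_manager_action(sensor: str, reading: float, threshold: float = 80.0) -> str:
    """Decide what a shelf manager might do with one sensor report."""
    if reading <= threshold:
        return "ok"                          # normal reading, no action needed
    if sensor == "fan-zone-temp":
        return "raise fan speed"             # local corrective action
    if sensor == "board-voltage":
        return "power off component"         # protect the hardware
    return "alert operator: replace part"    # escalate before a failure

print(shelf_manager_action("fan-zone-temp", 91.0))
```

The point is that the shelf manager acts on problems before they become outages, which is a big part of the reliability story.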

Now back to the opening idea of this post.  Telecom and data and compute power are not separate things anymore.  Each is closely interwoven with the others, and successes in one thrust can make a huge positive difference in capabilities in other areas.  As organizations and users grow more accustomed to the power of cloud computing they will demand higher and higher levels of reliability and resiliency from their server providers.  And as service providers deliver higher levels of reliability and throughput, cloud compute providers will see more and more success, which will place increased requirements on their capability.  In both cases, ATCA will provide the agility, resiliency and reliability required, which will drive its adoption further and further into the telecom and data worlds.

So, for
CTOs who are concerned with maximum performance with power and space
efficiency and a path to future upgrades, accelerate ATCA into your
enterprise.  How?  I just typed the words “atca for the datacenter”
into Google and got several links worth diving much deeper into,
including:

Will ATCA Bring Order Out of Chaos for Blade Servers?

Sun Netra CP3220 ATCA Blade Server

The Future of Global IT: It’s like the Kobayashi Maru

Posted on

If you are a little rusty on your advanced science literature, theater and movies and don’t recall the story from the fictional Star Trek universe known as the Kobayashi Maru, please take a moment to watch this clip, just to get your mind going: 

This part of the story, reportedly written by Gene Roddenberry himself, is about a simulation at Star Fleet Academy.  For most students there it is a no-win scenario, as you saw in the clip.   For one, however, there was a solution we should all remember.  That solution is one I like to use to remind people of what we need to do to help enhance the security and functionality of today’s enterprise IT.  More on that later.

Regarding the future of global IT, the NYT ran an article today I’d recommend to any enterprise technologist or security professional titled “Do We Need a New Internet?”  The article provides a good summary of how we got into the current mess regarding security of interconnected devices:

The Internet’s original designers never foresaw that the academic and
military research network they created would one day bear the burden of
carrying all the world’s communications and commerce. There was no one
central control point and its designers wanted to make it possible for
every network to exchange data with every other network. Little
attention was given to security. Since then, there have been immense
efforts to bolt on security, to little effect.

It also briefly discusses some of the threats and significant penetrations we have seen, and then introduces the Stanford Clean Slate project.   This project seeks to build a new Internet with improved security and capabilities to support a new generation of applications, as well as support mobile users.  It is an Internet designed with security in it from day one.  From the Clean Slate site:

We believe that the current Internet has significant deficiencies that
need to be solved before it can become a unified global communication
infrastructure. Further, we believe the Internet’s shortcomings will
not be resolved by the conventional incremental and
‘backward-compatible’ style of academic and industrial networking
research. The proposed program will focus on unconventional, bold, and
long-term research that tries to break the network’s ossification. To
this end, the research program can be characterized by two research
questions: “With what we know today, if we were to start again with a
clean slate, how would we design a global communications
infrastructure?”, and “How should the Internet look in 15 years?”

The site provides a good synopsis of research and profiles of the leaders working on the project.  I’ve heard of several similar efforts, but none so well formed, in my opinion.  The thing I really like about this one is it does not require everything everywhere to be thrown out before transitioning to this new way.  There will be changes required, but this is much more evolutionary than other approaches seem to be.  It is a great way for us humans to take back control of the technological aspects of our destiny. 

Now back to the Kobayashi Maru.   How did our hero Captain James T. Kirk win the simulation?  He realized that the simulation was a creation of humans and decided that it could be redesigned.  He redesigned it to work better for him and he won.   That is the approach we need when it comes to Internet security, and it is an approach I think of when I read about the Stanford Clean Slate project.  People like Nick McKeown have realized it is ok for us to decide what our future will be and design it.  Thanks Nick for that, you remind me of one of our sci-fi heroes.

A Blog I Like: Haft of the Spear

Posted on

Michael Tanji brings a perspective forged in years of intelligence work and a successful stint protecting information in the financial sector.  He is a well published author who focuses on national security issues and is also a thought leader in the computer security domain.

At Haft of the Spear he writes primarily about technology related/enabled national security issues, which includes a heavy dose of information warfare. 

Read HOTS at: http://haftofthespear.com/

Next week I write about Nicholas Carr and his Rough Type blog.

Plastic Logic and what could be the ultimate thin client

Posted on

I’ve written a bit here about new display technologies that are so thin they are disruptive to our current way of work.

In October 2007 I wrote “Enterprise Requirements Come From Hollywood” where OLED (organic light emitting diode) TV’s were discussed.   I mentioned the fact that once again Hollywood got it right first, with superthin displays in sci-fi and fantasy movies helping to drive user expectations and requirements.  I’ve also written about thin clients, especially the game-changing infrastructure components for thin clients from Sun Microsystems.  The servers supporting thin clients provide dramatic positive benefits for any IT enterprise. 

And in January 2009 I wrote:

Flexible computers will arrive in production this year for early
adopters and many CTOs will use them in labs to assess applicability
for massive deployment in the coming years.   These flexible computers
are the ultimate thin clients.   Backends/servers/architectures
developed for the cloud perfectly suit ultra thin, flexible computing
devices. For more on this hot topic, start at the site of the Flexible Display Center at ASU.

One company poised to take advantage of the technologies of flexible displays is Plastic Logic. They are a Silicon Valley startup producing a paper-pad-thin device that is designed for business reading.   For now, their offerings are focused on the business user and information can get into the device either by users sending it to the device or by content providers.

The Plastic Logic Reader is officially still in development.  It will
enter the market later in 2009 via pilots and trials (I hope to get
one) and then be commercially available in 2010.  Complete features
lists are not available but it supports a wide range of document types,
including: PDF, DOC, DOCX, XLS, XLSX, PPT,
PPTX, JPEG, PNG, TEXT, HTML, BMP, RTF, and ePub. 

Users will hold this reader like they hold a piece of paper and read documents provided via wireless communications. The device weighs ounces not pounds, is thinner than a Macbook Air, and has a battery that lasts days vice hours.  For more see this video of Plastic Logic CEO Richard Archuleta from the Fall 2008 Demo conference:

 

My suggestion to any enterprise-class CTO is to check out their website
and find ways to get their capability into your lab and into the hands
of your users. 

I’d also suggest thinking through how these devices can fit into the rest of your enterprise, and I’d suggest you (actually, I suggest all of us) start formulating our desires for enterprise capabilities on this device.  For example, what encryption will be used?  How will it do identity management?  How will it do access control?  How will it work with a Sun Ray environment?

Foreign Spies Make Recession Worse and Steal Part of Our Future

Posted on

Foreign spies are in our country for many bad reasons. Spies target defense secrets and seek to penetrate the
decision-making process of our government leaders.  They also gain unauthorized access to information held by our nation’s corporations.  In this time of
serious economic crisis this aspect of the threat from foreign spies is particularly troublesome.  Spies contribute to the problems we face in the economy.

Today one of the most damaging things spies do is steal the trade secrets and intellectual property of our corporations and research labs.  The intellectual property they steal is moved overseas where other countries (and companies inside those countries) can benefit from the investments we make in research and development.  This hurts our economy in many ways.  It causes the value of our research and development to be significantly sub-optimized.  It hurts the ability of our companies to compete in the global marketplace.  It causes more jobs to go overseas.   It can threaten the survival of companies, which of course hurts both investors and employees.  This is all bad for the economy.  And it’s all WRONG!  Our country needs to invest enough in our counterintelligence capabilities to find foreign spies and get them out of here.

A particularly insidious threat is one where a country might couple the power of spies within our borders with cyber attacks and cyber espionage to extract information from companies while at the same time monitoring the response to those attacks.  Humans can enable cyber attacks in many ways that make them far more damaging.  In fact the most feared type of data theft is one where a trusted insider moves data.  With modern high-capacity thumb drives, large quantities of data can be moved in moments.

I just read an article by an authoritative source on this topic, Michelle Van Cleave.  Michelle served as the head of U.S. counterintelligence from July 2003 through March 2006 and was in a position to observe firsthand some of the damage being done by foreign spies.  The article outlines examples and gives a firsthand account of some of the challenges we face in this area.  It concludes with:

How important is all of this, really? Cynics will scoff and say, “There
will always be spies.” But I have read the file drawers full of damage
assessments; I have catalogued the enormous losses in lives, treasure
and crucial secrets that foreign intelligence work has caused. The
memory of what’s in those files — and the thought of the people and
the operations still in harm’s way — can keep me awake at night.

So we have to choose. We can handle these threats piecemeal, or we
can pull together a strategic program — one team, one plan, one goal
— to reduce the overall danger. We can chase individual spies case by
case, or we can target the services that send them here. The next
devastating spy case is just around the bend. I fear that when it
comes, we will all ask ourselves why we didn’t stop it. I suspect I
already know the answer.

I recommend this article to all, especially enterprise technologists.  If you are a CTO, a CIO, or a CISO it is especially important for you to understand the nature of the threat to your systems and to your intellectual property.  If you are a citizen it is important for you to know as well.  We must collectively address this challenge to our intellectual property and to our economic recovery.

For more on these topics please see:

http://www.ctovision.com/cyber-war/

and

http://www.ctovision.com/information-warfare/

 

Intelligence Community Executive Forum and Carahsoft

Posted on

Carahsoft is a fantastic company in Reston, VA run by the hardest-working, most modest and ethical business leader I have ever met.   His behind-the-scenes style means he would probably not want me to mention much more about him, but if I have piqued your curiosity you can read more here (read the one about their winning the Smart CEO magazine Future 50 in Jan 2009, or the Fairfax County Economic Development Authority award for 2009, or other award after award after award).

One thing I like about Carahsoft is their desire to help government customers think through hard problems and their desire to help their extended teammates and partners learn about customers’ hard problems so enterprise solutions can be developed.  One of the many ways Carahsoft does that is by hosting venues like the Intelligence Community Executive Forum (ICEF).  This periodic venue brings together executives and thought leaders from government and industry to listen to lessons learned, hard problems and successes in creating CONOPs to address mission needs.

I’ll be helping Carahsoft with the next ICEF on 17 Feb 2009.   This one will focus on collaborative enterprise solutions like those provided by Adobe.   Panels will be held on topics like real-time collaboration, secure information sharing and Integration/web2.0.

Please check out the agenda and register if you can make it.   More info is here: http://www.intelligencecommunityexecutiveforum.com/

  

Unrestricted Warfare Symposium, Sponsored by JHU’s APL and SAIS

Posted on

For enterprise technologists and national security professionals and most of all for those who fit both of those descriptions, please check out Johns Hopkins University’s 2009 Unrestricted Warfare Symposium at: http://www.jhuapl.edu/urw_symposium  This symposium seeks to advance our understanding of and solutions for some very complex problems related to our nation’s defense.  I’ll be speaking on a panel at the conference (on issues of cyber war and cyber defense) and hope to see you there. 

The following is from an e-mail from Dr. Ron Luman (Johns Hopkins University Applied Physics Laboratory National Security Analysis Department Head)

National Security Community Colleagues:
This is a reminder that the Johns Hopkins University’s 2009 Unrestricted Warfare Symposium will be held 24-25 March 2009, and I encourage you to register now at http://www.jhuapl.edu/urw_symposium/.

The fourth annual symposium is in Laurel, MD at JHU’s Applied Physics Laboratory (APL), and is jointly sponsored by APL and the Paul H. Nitze School of Advanced International Studies (SAIS). Last year more than 300 participants from government, industry, and academia interacted with distinguished speakers and expert panelists who addressed national security issues from three perspectives: strategy, analysis, and technology. In 2009, this uniquely synergistic approach will be applied to the challenge of identifying interagency imperatives and capabilities.

The symposium presentations and panels are organized around four potential unrestricted lines of attack – cyber, resource, economic/financial, and terrorism. We’ll begin each session with a discussion of the potential for such attacks and then expert roundtable panelists will discuss imperatives for interagency action, offering ideas for enhancing interagency capabilities. A fifth session will focus on the role of analysis in identifying and assessing interagency approaches for preventing and combating these types of attacks.

I am particularly pleased that The Honorable James R. Locher, III, Executive Director of the Project for National Security Reform, will open the symposium as our keynote speaker, providing the Project’s timely findings and recommendations for interagency reform. Throughout the two days featured speakers and distinguished panelists, include: Dr. George Akst, MCCDC; Mr. Eric Coulter, OSD(PA&E); Dr. Richard Cooper, Harvard University; Dr. Stephen Flynn, Council on Foreign Relations; Representative Jane Harman; Professor Bruce Hoffman, Georgetown University; Professor Michael Klare, Hampshire College; Dr. Michael Levi, Council on Foreign Relations; Dr. Matthew Levitt, Washington Institute; Dr. Pete Nanos (DTRA); Mr. James Rickards, Omnis, Inc.; Mr. Frank Ruggiero (Department of State); Dr. Khatuna Salukvadze, Georgian Ministry of Foreign Affairs; Mr. Dan Wolf, Cyber Pack Ventures Inc.; Mr. Bob Work, CSBA, to name a few.

The attached announcement identifies confirmed speakers and other essential information. We encourage dynamic networking, and to facilitate audience participation, we will again be utilizing electronic groupware to collect comments, insights, and questions. The collection of papers and transcripts of discussions will again be published as Proceedings, in both hard copy and electronic form. The 2006 -2008 Proceedings, the current agenda/speakers, and 2009 registration details can be found at the symposium website: http://www.jhuapl.edu/urw_symposium/.

Your experience in national security and defense will contribute unique perspectives and challenging questions to our understanding of Unrestricted Warfare, and I look forward to seeing you next month.

Best regards,

Ron Luman, General Chair

I hope to see you all there.

 
Symposium Attachment:
URW2009Flyer 4Feb-1.pdf

A Blog I Like: Devost.net

Posted on

Matt Devost has been a thought leader in information technology, cyber warfare, counter terrorism and security training for over a decade.  He has built successful companies, taught warriors security, helped protect industry and taught (and still teaches) information warfare at Georgetown University.

Through history great thoughts have come from leaders who work at the intersection of multiple domains of practice, and Matt continues to demonstrate his thought leadership at his blog.  As proof let me mention his winning of NDU’s Sun Tzu information warfare essay contest in 1996.  The article he co-authored, titled “Information Terrorism: Can You Trust Your Toaster?”, remains a classic thought piece that should be read by every IT professional and military strategist today.

Read that article and Matt’s more recent thoughts at: http://blog.devost.net/

Next week I write about Mike Tanji and Haft of the Spear.

Vivek Kundra: The Alpha CTO

Posted on

Every CTO I know has heard of Vivek Kundra, CTO of
the District of Columbia.  We have all been following his accomplishments
in transforming the technology program in DC and have watched in excitement as
more and more capabilities have been rolled out to serve the city and its
citizens. We have followed reports of bold moves he put in place to ensure
technology programs deliver.  We have read about his new approaches to
technology portfolio management and watched as he discussed the leap ahead he
delivered to his enterprise by his audacious, courageous use of Google Apps and
other cloud-based solutions.

If you are not one of those familiar with Vivek, here
is a short bio: Vivek Kundra is the CTO for the
District of Columbia where he leads an organization of over 600 staff that
provides technology services and leadership for 86 agencies, 38,000 employees,
residents, businesses, and 14 million annual visitors. He brings to the role of
CTO a diverse record that combines technology and public policy experience in
government, private industry, and academia. Previously, Vivek
served as Assistant Secretary of Commerce and Technology for the Commonwealth
of Virginia, the first dual cabinet role in the state’s history.  In the
private sector, Vivek led technology companies
serving national and international customers. Earlier he served as Director of
Infrastructure Technology for Arlington, Virginia. He also taught classes on
emerging and disruptive technologies at the University of Maryland. Since Vivek became District CTO, he has been honored with major
IT awards. In 2008, the MIT Sloan CIO Symposium recognized him among
outstanding IT innovators. In addition, InfoWorld Magazine named Vivek among
its “CTO 25.”

I recently saw Vivek at a meeting of the Washington Area CTO Roundtable,
an informal collective of area CTOs led by Yuvi Kochar, CTO of the Washington
Post Company. Before the meeting we chatted about mashup technologies (including his Apps for Democracy  contest and also JackBe).  During the meeting Vivek discussed several
aspects of his innovative efforts to transform the District’s information technology
infrastructure.   A point that struck me was his leadership through
principles.  Three key ones he articulated were: 1) Leveraging commercial
technology, 2) Driving transparency, and 3) Rethinking notions of IT
governance. 

Vivek and I just finished a phone call where we discussed these and other items
in more detail.  Here is a bit more on his approach. 

1) Leveraging commercial technology: Commercial radios and cell phones
allowed a rapid enhancement of the tactical communications infrastructure of
the DC workforce, including the police workforce.  Police squad cars are
also now equipped with commercial, but toughened, laptops.  Commercial web
technology has been leveraged in ways that leaped ahead of old clunky office
automation and also enable rapid development and mashups. 

2) Driving transparency and engaging citizens:  Technology
impediments to information access and information sharing were eliminated in
ways that enable citizens to see how government decisions are being made. 
Data was also exposed in ways that enabled mashups and agile
programming/development.  Examples include DC’s digital public square and
Apps for Democracy efforts.

3) Rethinking notions of IT governance: Totally new, innovative ways to
manage IT portfolios were created and used to ensure all stakeholders could
evaluate the technology program and better make informed decisions on when to
terminate programs and where to invest more money.  Chief among these
innovations was an approach to portfolio management that replicates a stock
market trading floor.  More important, however is the relentless focus on
performance and innovation to support performance.  Beside rethinking
these notions of governance Vivek also took measures to smartly
watch/reduce/reprioritize IT costs.
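The data exposure behind the second principle can be as simple as publishing records in a machine-readable format that citizen developers can aggregate.  Here is a hypothetical sketch of that mashup pattern; the feed layout and field names are invented for illustration and are not the actual DC data catalog:

```python
import json

# A city feed of service requests per ward, as a citizen mashup might consume it.
raw = '[{"ward": 1, "requests": 42}, {"ward": 2, "requests": 17}]'
feed = json.loads(raw)

total = sum(row["requests"] for row in feed)          # citywide total
busiest = max(feed, key=lambda row: row["requests"])  # ward with the most requests
print(total, busiest["ward"])
```

Once data is exposed this way, efforts like Apps for Democracy can let outside developers build the analysis and visualization layers the government never has to fund itself.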

I asked Vivek for thoughts that might be relevant to technologists who have set
their sights on careers where they can deliver results.  Many of us would
like to follow in his footsteps.  I wondered, if there is a particular
computer programming language we should all be learning now?  Should we be
diving into Python?  That’s hot now.  And what about databases? MySQL
and Hadoop are all the rage.  The thoughts I got back from Vivek were
incredibly insightful and far more relevant than the simplistic question I
asked. 

V:  Technology is important, and we do need to know technology.  But in these very exciting times where
Moore’s law pushes us all forward it is actually more important to be able to quickly learn new technology rather than focus on one and only one.  This is the beauty of the new world of
technology. There is always something to learn.  We should also always remember that the reason to learn is the mission.  To an enterprise CTO, technology by itself is worthless.  Technology
only has value if it addresses business problems and drives business success.
Therefore technologists must have an ability to translate between the worlds of
mission needs and technology and need an ability to rapidly learn and deeply
understand both.

I asked Vivek for his intention for sharing his models and methods, since they
have clearly delivered success in DC.  He is doing quite a bit there so
all of us who would like more info have plenty of ways to learn more:

V: The DC CTO site at http://octo.dc.gov
provides links to many of the ongoing activities of the office and for those
who would like more on the models that produce the results we link to policies,
guidelines and procedures.  We also provide information on how our
governance process works.   But additionally we host visits to our
office by interested parties and have begun blogging about them.  In
another effort we hope will help move the models forward we are pressing ahead
with plans to turn our stock market approach to portfolio management into an
open model and will open source the code that makes it work, which should help
drive more innovation there.

Speaking of innovation, Vivek seems to have found a way to accelerate
innovation, which is something all CTOs are interested in doing.  I asked
him for his thoughts on where to look for innovation.  Another interesting
reply:

V:  You can look for innovation many places, but remembering that
necessity is the mother of invention you should keep an eye open for places
that innovate because they really need to.  I always keep an eye on the
developing world and am so incredibly amazed at the tech innovation
there.  Enterprise IT does not mean that every program and project must be
delivered with huge budgets and huge staffs and the incredible innovations
coming out of the developing world prove that time and time again.  I’m
excited and enthused about developments like cell phone voting in Estonia,
electronic census that works in Chile, fishing villages around the world using
instant direct data to plan movement.  Innovation occurs many places, but
some of the greatest lessons for innovation are coming from the developing
world.

I asked Vivek about how to find balance between setting standards and enabling
innovation:

V:  Standards are important, but if a standard gets in the way of
innovation kill it.   Use standards that enable innovation. 
This is the role of the CTO.

Vivek also offered thoughts on social networks.

V:  In seeking ways to make your cycles of innovation move faster, never
underestimate the power of social networking tools and the networks you can
build with them.  Facebook is the example most talked about but there are
many others including networks built around ecommerce like eBay and
Amazon.  I believe we should not only embrace them to enable the power of
social networking but to help us leverage, in a large way, the IT
infrastructure of these platforms.   The new generations today are making
maximum use of these platforms and I view this as a very optimistic point.

As for me, I view the results of Vivek Kundra and his models as optimistic
points.  The great thing about being a CTO is the learning never stops in
this field and Vivek is a great teacher we should all be learning from.

For more on Vivek and the way he views technology, including some of his inputs to the Obama administration, see: http://www.ctovision.com/2009/01/federal-government-technology-directions-and-the-fed-cto.html