Monday, December 10, 2007

Resurrection of the late 1990s. 1<3<2?

Which came first? Web 2.0 or Web 3.0 Technologies?

Interesting question. The technology that supported Web 2.0 appeared around 1999, when Microsoft created asynchronous JavaScript (the XMLHttpRequest object) to support Outlook Web Access and some other pieces of software. It was an interesting offshoot of the DHTML r-evolution.

Some (notably Tim O'Reilly) define Web 2.0 as a participatory movement. The evolution of web pages into a two-way "interactive" medium can be seen as a consolidation of email, chat and newsgroups, which had been popular since the 1980s. These e-communication paradigms became more public in nature, transforming into blogs and social networks - written in HTML pages rather than pure text files.

In effect, Web 2.0 was the remoulding of old communication paradigms into a hypertext-driven world. Newsgroups were wild social networks. Newsgroups were blogs with no response control. The point is that global human participatory interaction over the net has been identifiable since the 1980s.

Since 1996, XML - a derivative of SGML, whose roots go back to IBM's GML in the 1960s - has been touted as the next lingua franca of the net. RDF, OWL and other XML languages and standards were proposed and formulated between 1997 and 1999. The Semantic Web became identifiable as a paradigm circa 1998.

In the history of the web, Web 2.0 may well be remembered as a distraction. It is not surprising that Tim Berners-Lee dismissed the notion that Web 2.0 was significant.

Web 2.0 may be identified as the time we took our eyes off the web - and started looking at ourselves as the center of a self-less web. It may be marked as the time when the selfish "we" became the center of a form-less web.

Now we are back to the roots again. We are back to making machines understand us.

Web 3.0 is truly an enigmatic mixture of machines and information - a space where men go like spiders looking for entertainment, news, food, love, work and life. Far from being participants, humans are becoming part of the web.

Sunday, November 25, 2007

GGG, WWW, 123

Tim Berners-Lee, in his blog, recently made a very insightful observation about the next phase of the conceptual layout of the web and its connections. TBL coined the term GGG (aptly, the Giant Global Graph) to distinguish it - and what it means - from the World Wide Web.

Tim observed that "It's not the documents, it is the things they are about which are important". Many industry practitioners have observed that "objects" mean a lot more than "data". When expressing a problem, objects and their relationships carry far more value than data and files.

The web was/is the world's filesystem - where we can find things quickly. The Internet was categorized by Yahoo (in a failed effort) and indexed by Google (effectively) - and today we can instantly access information from anywhere in the world without thinking about where it comes from. C:\ became http://.

The Semantic Web has had trouble being taken seriously because of its closeness to Artificial Intelligence - and the expectations that come with it. The dream of machines running the world is still no closer to reality. The hope is that the RDF layer on XML may give a machine better direction than bare URLs from the Web.

For example, in social networks, FOAF (Friend of a Friend) relationships may be represented in machine-readable form using simple XML files (RDF). Representing relationships in a machine-readable format (just as we do in databases) has value. Eventually machines may be able to make intelligent inferences based on the connections represented in these relations - and eventually deliver some results. This was the expectation under which XML technologies were developed circa 1996, and after 10 years we are getting somewhere.
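As a sketch of the idea (the people named here are invented for illustration; only the RDF and FOAF namespaces are real), such a machine-readable FOAF file can be produced with nothing more than a standard XML library:

```python
# Sketch: emitting a minimal FOAF (Friend of a Friend) document as RDF/XML.
# "Alice" and "Bob" are hypothetical; the namespace URIs are the real ones.
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
FOAF = "http://xmlns.com/foaf/0.1/"
ET.register_namespace("rdf", RDF)
ET.register_namespace("foaf", FOAF)

root = ET.Element(f"{{{RDF}}}RDF")

person = ET.SubElement(root, f"{{{FOAF}}}Person")
ET.SubElement(person, f"{{{FOAF}}}name").text = "Alice"

# The machine-readable "knows" relationship - an edge in the social graph.
knows = ET.SubElement(person, f"{{{FOAF}}}knows")
friend = ET.SubElement(knows, f"{{{FOAF}}}Person")
ET.SubElement(friend, f"{{{FOAF}}}name").text = "Bob"

xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
```

A crawler that understands the FOAF vocabulary can read the foaf:knows edges out of such files and walk the social graph, which is exactly the machine inference the paragraph above anticipates.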

However, the Semantic Web almost screeches to a halt after this step. Success in practical ontology, feasible modal logic and axiology - even remotely - has been appalling. Not surprising. NP-complete problems are NP-complete - intelligence is not purely XML-driven. Intelligence is more than a graph, with aspects overlaid and experiences as glue. Tackling true intelligence remains a dream.

We sometimes forget the real use of data: providing value to humanity in various forms, and providing true functionality as humans need it. Connections are good, but functionality is paramount. That a company can store ticket information on the web is not sufficient; the user being able to buy the ticket is what is significant. A company storing data is not sufficient; sieving information out of it, transforming it into knowledge, and converting it to action is paramount. Somewhere along this chain, functionality becomes the significant aspect.

URLs are becoming more potent with XML wrappers (RDF/OWL/SPARQL) around them. The new generation of applications will play on these enhancers to achieve the seamlessness that we have sorely lacked in the last 25 years.

The WebTop is becoming more significant than the desktop. Browsers that were a mere window to the world may become a wide entrance to the world itself. In a very short time, local resources on a computer may have no significance in how users achieve functionality.

Monday, November 12, 2007

Blending of Data and Omni-Functionality


[Image: "Tiger" - a vector drawing]

Fig 1. Vector drawing done using a web utility. Web formats are transforming from bitmap-driven formats to vector-driven formats like VML, SVG and Flash.

For over 30 years, software scientists have been attempting to organize data onto binary shelves and to draw up algorithms to make sense of it. Needless to say, countless applications have been created to handle the ocean of data. The clutter has not diminished - ever more complex combinations of apps and data have, apparently, preserved IT jobs.

Complexity feeds job security, and that breeds more complexity.

Well, this is OK in a corporate world - to keep one's job. But with jobs being done half-a-world-away and half-a-day-ahead, life-as-we-knew-it has changed. And who wants his or her life to be more complex than it should be?

It is time for omni-functionality to take center stage. The world of segmentation is losing its luster and lucre. We need a way of thinking that is simple, unified, and aspect-centric - not data-centric.

The new web will be composed of applications that rise out of separate silos and implementations into a seamless medium where data and functions blur.

Imagine a world where the web calculates better than Excel, presents better than PowerPoint, composes better than Word, connects better than Outlook, browses better than Internet Explorer, and draws better than Visio or Adobe's tools.

Do you want to wait another three years?

Friday, November 9, 2007

Death of Web 2.0?

Tim O'Reilly coined and "trade-marked" the term Web 2.0. Over the years, Web 2.0 became the most overused term in the industry - an attempt to cover up the massive bust of 2000, to raise funds, and to justify hyped-up valuations (YouTube, and to a large extent most related companies).

Maybe it is the mortgage-related credit crunch that is clearing the fog. Steve Rubel indicated that most of Web 2.0 is after the almighty dollar. Chris Shipley declared Web 2.0 "so last version". This was followed by GigaOm reporting (with a graph that reminds us of Lotus 1-2-3 from 1991!) that Kleiner Perkins would not be funding any more Web 2.0 companies.

Web 2.0 became just another term for yet another social company with Ajax funneling data back and forth. Anyone who has used Ajax applications can tell you about the lack of performance (the new Yahoo Mail, Google Docs & Spreadsheets, EditGrid - the list goes on; see footnote 1), the ubiquitous waiting for server messages, and the hung clients.

Web 2.0 is dead because it was a shallow term coined just to say something was different. The technology it was based on was too slight to even be significant. Dynamic updating of web pages with pull technology - how fancy is that?

Some try to define the next generation as partly the Semantic Web (the RDF, ontology, modal logic, axiology sequence) - yet another attempt at recreating the twice-failed artificial intelligence. However, it is clear that the likelihood of a real semantic web is minuscule - because, by its very definition, if it ever worked as nicely as promised, all NP-complete problems would be solved too.

The fundamental nature of software and internet interfaces is changing. There are new seamless structures of data, interaction and organization that have such appeal that they question the way we do everything.

For example, we are comfortable with Office applications where Word is good at composing, Excel is good at calculating, PowerPoint is good at presenting, and Visio is good at drawing. But none of these is good at what the others do. And this has been going on for the last 10 years.

The next generation wants to do it all in one place. Nay, they have to do it all without having to learn 10 interfaces.

We need to make it happen. Web 2.0 is now officially dead.


--------------------------

Footnote 1. Google Maps, one of the few good later-generation apps, is truly not a massively Ajax application. It fetches hyperlinked image tiles (.gif pieces) that are rendered onto the DHTML canvas directly, without a server passthrough. There is some minor geocoding, which could be offline calls, but that cannot be counted as serious Ajax - or Web 2.0.

Saturday, November 3, 2007

Defining the Next Generation of Web

Web 2.0 is over. The next generation is coming.

Nova Spivack (Kurzweil.net) defines Web 3.0 as the era of the Web during which, he suggests, several major complementary technology trends will reach new levels of maturity simultaneously, including:

  • transformation of the Web from a network of separately siloed applications and content repositories into a more seamless and interoperable whole;
  • ubiquitous connectivity, broadband adoption, mobile Internet access and mobile devices;
  • network computing, software-as-a-service business models, Web services interoperability, distributed computing, grid computing and cloud computing;
  • open technologies, open APIs and protocols, open data formats, open-source software platforms and open data (e.g. Creative Commons, Open Data License);
  • open identity, OpenID, open reputation, roaming portable identity and personal data;
  • the intelligent web: Semantic Web technologies such as RDF, OWL, SWRL, SPARQL, GRDDL, semantic application platforms, and statement-based datastores;
  • distributed databases, the "World Wide Database" (enabled by Semantic Web technologies); and
  • intelligent applications: natural language processing, machine learning, machine reasoning, autonomous agents.
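The "statement-based datastores" in the list above can be sketched in a few lines: a store of (subject, predicate, object) triples queried by pattern, which is the essence of what SPARQL provides over real RDF stores. This is a toy illustration with invented data, not a real SPARQL engine:

```python
# Toy statement-based datastore: RDF-style (subject, predicate, object)
# triples queried by pattern, with None acting as a wildcard.
triples = {
    ("Alice", "knows", "Bob"),          # hypothetical social-graph data
    ("Bob", "knows", "Carol"),
    ("Carol", "worksAt", "ExampleCorp"),
}

def match(s=None, p=None, o=None):
    """Return all triples matching the pattern; None matches anything."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Who does Alice know?  (roughly: SELECT ?o WHERE { :Alice :knows ?o })
print(match(s="Alice", p="knows"))   # [('Alice', 'knows', 'Bob')]

# A one-hop inference over the connections: friends of friends.
fof = [t2[2] for t1 in match(p="knows") for t2 in match(s=t1[2], p="knows")]
print(fof)   # ['Carol']
```

The "intelligent applications" point amounts to layering reasoning like the friend-of-friend query above - and far richer logic - on top of such stores at web scale.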

Though a very catch-all, safe definition, it makes clear that Web 3.0 is expected to move beyond tagging, folksonomy, AJAX, and the like toward serious and fundamental changes in how we interact with the net. For example, OpenID-like standards are expected, and will make things simpler. The Semantic Web will take shape the way OOP did in programming. Smaller devices can be expected to become, in a few years (or months), more powerful and more used than desktops. Licensing and paid software will move toward open-source and free models.

The Semantic Web by itself will change the business models that companies like Google are currently successful at exploiting. In a machine-driven world, ads do not fly.

Omni-functionality delivered over the browser will become the central theme of Web 3.0. One should not have to select and follow rigid applications where data is stored in multiple silos. Users will demand that the interface do it for them. No one wants to be constrained.

Everyone hates having to read several manuals - and shop in a myriad of shops. The web is the same at the moment. That will and needs to change.