Definitions of Web 3.0 vary greatly. Some believe its most important features
are the Semantic Web and personalization. Focusing on the computer
elements, Conrad Wolfram has argued that Web 3.0 is where "the
computer is generating new information", rather than humans.
Andrew Keen, author of The Cult of the Amateur, considers the
Semantic Web an "unrealisable abstraction" and sees
Web 3.0 as the return of experts and authorities to the Web. For
example, he points to Bertelsmann's deal with the German Wikipedia
to produce an edited print version of that encyclopedia. CNN Money's
Jessi Hempel expects Web 3.0 to emerge from new and innovative
Web 2.0 services with a profitable business model.
Futurist John Smart, lead author of the Metaverse Roadmap, echoes
Sharma's perspective, defining Web 3.0 as the first-generation
Metaverse (convergence of the virtual and physical world), a web
development layer that includes TV-quality open video, 3D simulations,
augmented reality, human-constructed semantic standards, and pervasive
broadband, wireless, and sensors. Web 3.0's early geosocial (Foursquare,
etc.) and augmented reality (Layar, etc.) webs are an extension
of Web 2.0's participatory technologies and social networks (Facebook,
etc.) into 3D space. Of all its metaverse-like developments, Smart
suggests Web 3.0's most defining characteristic will be the mass
diffusion of NTSC-or-better quality open video to TVs, laptops,
tablets, and mobile devices, a time when "the internet swallows
the television." Smart considers Web 4.0 to be the Semantic
Web and in particular, the rise of statistical, machine-constructed
semantic tags and algorithms, driven by broad collective use of
conversational interfaces, perhaps circa 2020. David Siegel's
perspective in Pull: The Power of the Semantic Web, 2009, is consonant
with this, proposing that the growth of human-constructed semantic
standards and data will be a slow, industry-specific incremental
process for years to come, perhaps unlikely to tip into broad
social utility until after 2020.
According to some Internet experts, Web 3.0 will allow users to
sit back and let the Internet do the work for them. Rather than
search engines being geared toward keywords, they will be geared
toward the user, interpreting queries in light of the user's culture,
region, and jargon. For example, planning a vacation today requires
separate searches for an airline ticket, hotel reservations, and
a car rental. With Web 3.0, all of this could be done in a single
search, with the engine presenting the results to the user in a
comparative, easily navigated way.
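The aggregated "one search" idea above can be sketched in a few lines of Python. The vendors, prices, and the simple price-sum ranking below are all invented for illustration; a real Web 3.0 agent would query live services and rank by the user's preferences, not just cost.

```python
from dataclasses import dataclass

# Hypothetical, hard-coded result sets standing in for three
# separate searches: airline, hotel, and car rental.
@dataclass
class Offer:
    category: str
    vendor: str
    price: float

FLIGHTS = [Offer("flight", "AirA", 320.0), Offer("flight", "AirB", 280.0)]
HOTELS  = [Offer("hotel",  "InnX", 90.0),  Offer("hotel",  "InnY", 120.0)]
CARS    = [Offer("car",    "RentZ", 45.0)]

def trip_search(max_budget: float) -> list[tuple[Offer, Offer, Offer]]:
    """Combine the three searches into one, returning complete
    flight + hotel + car bundles within budget, cheapest first."""
    bundles = [
        (f, h, c)
        for f in FLIGHTS for h in HOTELS for c in CARS
        if f.price + h.price + c.price <= max_budget
    ]
    return sorted(bundles, key=lambda b: sum(o.price for o in b))

best = trip_search(max_budget=500.0)[0]
print([o.vendor for o in best])  # cheapest complete bundle
```

The point of the sketch is only that the user issues one request and receives comparable, complete bundles, rather than stitching three result lists together by hand.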
The Semantic Web is a "web of data" that enables machines
to understand the semantics, or meaning, of information on the
World Wide Web. It extends the network of hyperlinked human-readable
web pages by inserting machine-readable metadata about pages and
how they are related to each other, enabling automated agents
to access the Web more intelligently and perform tasks on behalf
of users. The term was coined by Tim Berners-Lee, the inventor
of the World Wide Web and director of the World Wide Web Consortium
("W3C"), which oversees the development of proposed
Semantic Web standards. He defines the Semantic Web as "a
web of data that can be processed directly and indirectly by machines."
The term "Semantic Web" is often used more specifically
to refer to the formats and technologies that enable it. These
technologies include the Resource Description Framework (RDF),
a variety of data interchange formats (e.g. RDF/XML, N3, Turtle,
N-Triples), and notations such as RDF Schema (RDFS) and the Web
Ontology Language (OWL), all of which are intended to provide
a formal description of concepts, terms, and relationships within
a given knowledge domain.
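RDF's core idea, statements as subject-predicate-object triples that programs can query, can be illustrated with a tiny in-memory triple store. This is a sketch only: the `ex:` terms below are made-up identifiers, not real vocabulary, and the wildcard query mimics (very loosely) a SPARQL basic graph pattern.

```python
# Knowledge expressed as (subject, predicate, object) triples,
# the machine-readable statements RDF is built on.
triples = {
    ("ex:TimBernersLee", "ex:invented",   "ex:WorldWideWeb"),
    ("ex:TimBernersLee", "ex:directorOf", "ex:W3C"),
    ("ex:W3C",           "ex:oversees",   "ex:SemanticWebStandards"),
}

def query(s=None, p=None, o=None):
    """Return all triples matching the pattern; None is a wildcard,
    loosely analogous to a variable in a SPARQL graph pattern."""
    return [
        t for t in triples
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    ]

# "What did Tim Berners-Lee invent?"
print(query(s="ex:TimBernersLee", p="ex:invented"))
```

An automated agent answering such a question does no linguistic understanding at all; it simply matches patterns against explicitly published metadata, which is exactly what machine-readable annotation buys.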
Many of the technologies proposed by the W3C already exist and
are used in various contexts, particularly those dealing with
information that encompasses a limited and defined domain, and
where sharing data is a common necessity, such as scientific research
or data exchange among businesses. In addition, other technologies
with similar goals have emerged, such as microformats. However,
the Semantic Web as originally envisioned, a system that enables
machines to understand and respond to complex human requests based
on their meaning, has remained largely unrealized and its critics
have questioned its feasibility.
Cloud computing provides computation, software, data access, and storage
services that do not require end-user knowledge of the physical
location and configuration of the system that delivers the services.
Parallels to this concept can be drawn with the electricity grid,
wherein end-users consume power without needing to understand
component devices or infrastructure required to provide the service.
Cloud computing describes a new supplement, consumption, and delivery
model for IT services based on Internet protocols, and it typically
involves provisioning of dynamically scalable and often virtualized
resources. It is a byproduct and consequence of the ease of access
to remote computing sites provided by the Internet. This may take
the form of web-based tools or applications that users can access
and use through a web browser as if they were programs installed
locally on their own computers.
Cloud computing providers deliver applications via the Internet;
these are accessed from a Web browser, while the business software
and data are stored on servers at a remote location. In some cases,
legacy applications (line of business applications that until
now have been prevalent in thin client Windows computing) are
delivered via a screen-sharing technology, while the computing
resources are consolidated at a remote data centre location; in
other cases, entire business applications have been coded using
web-based technologies such as AJAX.
Most cloud computing infrastructures consist of services delivered
through shared data centers and appearing as a single point of
access for consumers' computing needs. Commercial offerings may
be required to meet service level agreements (SLAs), but specific
terms are less often negotiated by smaller companies.
Crowd computing is an overarching term for the myriad human-interaction
tools that enable idea sharing, non-hierarchical decision making,
and the full utilization of the world's mind space.
Examples of these tools (many falling under the Web 2.0 umbrella)
include collaboration packages, information-sharing software such
as Microsoft's SharePoint, wikis, blogs, alerting systems, social
networks, SMS, MMS, Twitter, Flickr, and even mashups.
Business and society in general increasingly rely on the combined
intelligence, knowledge, and life experiences of the “crowd”
to improve processes, make decisions, identify solutions to complex
problems and monitor changes in consumer taste.
An early example of crowd computing was the discovery of a gold
deposit at the moribund Red Lake Mine in Northern Ontario. Using
all available data, the company, Goldcorp Inc., had been unable
to identify the location of new deposits on its land.
In desperation, the CEO put all relevant geological data on the
web and created a contest, open to anyone in the world. An obscure
firm in Australia used its software and algorithms to crack
the puzzle. As a result, the company found an additional 8 million
ounces of gold at the mine. The only cost was the nominal prize