by Timothy Martin, CEO and Editor-in-Chief

The State of the Web 3D Industry - by N. Polys


An essential introduction to the Web 3D industry, including a feature interview with Neil Trevett, President of the Web3D Consortium, on Web 3D technologies (VRML97, X3D, proprietary efforts) and politics. Get some perspective on the last century and find out what the coming years may look like for 3D multimedia on the Web.

Information technology is accelerating at a breakneck pace. Media delivery and computing power are making the dreams of the past today's realities. It is inevitable that our global information network will evolve beyond its current dimensional boundaries and expand into a more immersive, interactive, and intuitive realm: Cyberspace, networked, multi-user, three-dimensional worlds explorable on the World Wide Web in realtime.


When most people think of 3D computer graphics, usually one of two things comes to mind: 3D characters and effects composited into feature motion pictures like The Phantom Menace or the Toy Story films, or interactive computer games such as Quake or Unreal. While these applications are certainly impressive, they are only part of a bigger revolution going on in the 3D industry. There is a new momentum of convergence rising, enabled by better, faster, and cheaper technology. It is this momentum that is bringing 3D graphics out of the high-end production studios, beyond the gaming worlds, and onto your desktop.

There are certainly elements we can point to as evidence of this revolution: the increased capabilities of computer processors, the increased power and sales of graphics accelerator cards (100 million shipped), broadband internet access, and the affordable availability of 3D software packages for authoring and viewing. These forces are rapidly coalescing into a formidable new medium that will undoubtedly change the face of computing as we move into the next century. What will it mean for 3D to come into its own and take its rightful place as a useful visualization tool for the average end-user?

The implications and applications are enormous, and I will only mention a few of the obvious. In e-commerce, one could examine how a product looks and functions before purchasing it. In education, one could visualize and explore anatomy, the solar system, geography, ecology, and more. In data visualization, one could view a relational database structure, network traffic, inventory and distribution, or stock performance. In entertainment, narratives and events could be experienced from any point of view, and online chats could be supplemented by visual cues and gestures from the participants' avatars, their customized 'bodies' in cyberspace.

When one combines the power of realtime three-dimensional interactivity as a tool with the networked power of the World Wide Web, it seems that this new frontier of Cyberspace is the natural evolution of technology. While it is growing out of current technologies, the vision of Cyberspace is fundamentally different from the 2D HTML pages we see on the web today, different enough that a number of industry pundits have termed this new stage of interactive, online 3D "The Second Web". Gibson, Stephenson, Leary, and McKenna envisioned the vast future and philosophical implications of Cyberspace and Virtual Reality over 15 years ago; meanwhile, thousands of computer and information technologists have developed VRML, the Virtual Reality Modeling Language, possibly the most massive industry-wide collaboration for an international file format (ISO/IEC 14772).

If you are at all familiar with online 3D media, you have probably heard of VRML. VRML 2.0 was released in 1996, with a subsequent revision in 1997 that has been dubbed VRML97, the current version in use today. The Web3D Consortium (formerly the VRML Consortium), which spearheaded the development of the VRML ISO standard, is a non-profit organization with a mandate to develop and promote open standards that enable 3D Web and broadcast applications. Since 1994, the Web3D Consortium has promoted open standards instilled with the principle that high-quality infrastructures can be built out in the open on level playing fields. This results in faster, better products as well as more interesting and productive market competition based on value, not history or platform dependency.

As a file format for describing interactive online 3D environments, VRML97 was designed to be highly flexible and employable for virtually any application, a capability that has made it simultaneously praised and vilified. With such a broad range of application domains, the implementation of a VRML97 client generally ranges from 2 to 4.5 megabytes, a prohibitive download size for those still on slow modem connections. This in itself wouldn't have stopped VRML from gaining widespread commercial acceptance, but in conjunction with misplaced expectations perpetuated by the press and browser incompatibility issues, VRML has remained a rather esoteric medium. The Web3D Consortium is currently driving the X3D project to create the next evolution of the VRML97 standard, which is XML-based and designed to remedy this situation by providing a scalable architecture in which a lightweight 3D browser can be built and distributed.
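The difference between the two encodings is easy to see in a small scene. The sketch below (in Python, for easy experimentation) prints the same one-box scene two ways: first in VRML97's classic utf8 encoding, which is real VRML97 syntax, and then in an XML form built with Python's standard library. Since the X3D specification was still in draft at the time of writing, the XML element names here are illustrative assumptions, not the final standard.

```python
# Illustrative sketch: the same one-box scene expressed two ways.
# The VRML97 syntax below is real; the X3D element names are an
# assumption about the XML encoding, which was still in draft.
import xml.etree.ElementTree as ET

vrml97_scene = """#VRML V2.0 utf8
Shape {
  appearance Appearance {
    material Material { diffuseColor 1 0 0 }
  }
  geometry Box { size 2 2 2 }
}
"""

def x3d_scene():
    # Build the equivalent scene as XML, the way an XML-based
    # X3D encoding might express it.
    x3d = ET.Element("X3D")
    scene = ET.SubElement(x3d, "Scene")
    shape = ET.SubElement(scene, "Shape")
    appearance = ET.SubElement(shape, "Appearance")
    ET.SubElement(appearance, "Material", diffuseColor="1 0 0")
    ET.SubElement(shape, "Box", size="2 2 2")
    return ET.tostring(x3d, encoding="unicode")

print(vrml97_scene)
print(x3d_scene())
```

The point of the XML form is that it can be generated, validated, and transformed with the same generic tooling used for any other XML on the web.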

In recent years, a number of individual companies have released proprietary 3D technologies for online functionality, each with its own strengths and weaknesses. What the technologies from companies such as MetaCreations, Pulse, Cycore, 3Ddreams, and Genesis3D all have in common is that they are designed for specific application domains (i.e. product demos or gaming) and in this way are only partial solutions to getting ubiquitously functional 3D graphics onto the desktop. Neil Trevett has termed this industry dynamic 'babelization' as more and more closed formats are propagated across the web…
Let's get to our interview and Neil's perspective in more detail.


INTERVIEW with Neil Trevett

Neil, we've been hearing great things about the activities of the Web3D Consortium, and your work with the X3D initiative, an exciting new open standard for 3D on the web.  You're working with the World Wide Web Consortium (W3C), the MPEG Group, and many software and hardware companies to proliferate this industry-wide initiative.  How are these partnerships going to benefit the use of 3D on the Web?

Neil: Let me put the W3C into context because I think this is so key.  It's such an opportunity for anyone who cares about 3D on the web that it's worth continually repeating this message: the Web3D Consortium is in a unique position because of its membership and cooperative partnerships.

I am a fundamental believer in the power of open standards. Most of the key standards that the web is built on are open standards. It's the internet way; it's the way to build rapid momentum. There are lots of cool 3D technologies out there, a great long list of proprietary technologies. Lots of super innovation is going on that the industry will really benefit from, but if we don't have a foundation of open standards, we're just going to end up with what I call "Babelizing" the 3D web. There will be a mish-mash of different technologies that don't interoperate. Each is very good at its own specific application focus, but none is a complete, broad, general-purpose solution. The reason I'm involved in the Web3D Consortium is that I think the open standards route is the best way to grow the market for 3D on the web. If we look around the industry, it's important to remind ourselves that the Web3D Consortium is currently the ONLY open industry forum working on open standards for 3D on the web. There is no other. The Consortium is in a pivotal role in the industry. In addition, it has done a very good thing in forging cooperative partnerships with the W3C, the architects of most of the key web standards. What this says is that when they need 3D, they will come to the Web3D Consortium to get their 3D standards. Through this partnership, VRML97 and X3D will be the W3C's play in 3D. This is a very fundamental dynamic.

The other good cooperative relationship we have is with MPEG4: the group of companies developing the MPEG standard, which includes 3D based on VRML97. As X3D gets developed, we are working hard to ensure that this cooperative relationship continues. So as MPEG4 evolves and needs more advanced 3D, that 3D component will be X3D. These two very influential bodies are directly basing their 3D initiatives on the work of the Consortium, so we are in a very central and pivotal position…

And X3D is the ball all these players will use… How is the spec developing?

Neil: There is momentum in the X3D Working Group inside the Consortium that's working on the X3D project. It has critical mass, it has the right commercial interests represented, it has the right technical interests represented, and it is staying on track in terms of schedule, which means it is something that matters to people; it's something people want to happen, and they're putting in the necessary resources to make it happen.

A recent thread on the VRML mailing list asked the question that web developers constantly face from their management or clients:
IS VRML DEAD? And why?
If it isn't dead, surely you acknowledge that it has gotten a bad reputation…

Neil: In many quarters, mainly in the media in the US, VRML has unfortunately managed to get itself a bad name. It's perceived as dead; you hear that a lot, but the reality is somewhat different.

First of all, the VRML97 specification is a great specification. There's nothing wrong with it from a technical point of view. It is enabling for a whole bunch of applications out there. As you correctly say, the failure is largely because VRML97 was… it wasn't deliberate. There was so much excitement about the original concept of 3D on the web that it inevitably caused a lot of people to think ahead and dream of what 3D on the web will be one day. When the reality hit home that "Wait, we don't really know what to do with 3D on the web yet", the fact that this is going to take a little bit longer than people had originally imagined, the blame got laid at VRML's door. That is not fair, because in reality VRML97 is the first step, and a good, positive first step, toward making 3D on the internet a reality. We are beginning to understand that it's not going to happen overnight.

People say, "Well, why hasn't 3D on the internet happened overnight?" I say, "Well, how can you expect 3D to happen on the internet overnight when we haven't yet figured out what 3D is good for on the mainstream desktop?" Forget the multi-user aspect, forget the internet as a medium… normal people at this point don't use 3D on their desktops. Devout game players do, people fragging each other in Quake, or people using workstations to do CAD or DCC, but there's no one in the middle. I think the natural evolution of the way 3D is going to make it onto the internet is that first we have to figure out useful user interfaces and useful applications on the desktop, and then the Web3D Consortium will work to make sure those useful paradigms can be naturally extended out across the internet. That's the evolutionary path that we need.

When thinking about useful interfaces for the end-user, how about applications like e-commerce, product demos, and online shopping?

Neil: I think that e-commerce is gaining such momentum in the psyche, and such tremendous financial momentum, that it can be the engine for working through some of the remaining issues we have. 3D can be a big enabler for certain types of e-commerce. While VRML97 is perfectly fine for most types of e-commerce out there, X3D will be better still. Probably the main barrier to having 3D e-commerce sites is the authoring tools. If I am Amazon and I have a cool gadget I want to put up in 3D, how do I do it? How do I get the model? This is actually the biggest barrier.
I think this will be solved in two ways:
1) Manufacturers will begin to supply web-ready models, presumably in VRML97 or X3D format.
2) Better authoring tools will appear: cool ones like MetaCreations', where you take a series of 2D photographs and stitch them together, compositing them into a 3D model. I think that kind of tool will be very, very enabling. Once people can create the 3D content, getting it displayed is really a solved problem, thanks to VRML97.

I would agree that getting the geometry into the file format is a major task.  I was excited to hear of some cool tools coming out of the west coast like video capture & video modeling tools. 

Neil: Yes that's right.

Now, we hear all this talk about VRML being dead, and yet, surfing the web at places like Sandy Ressler's site, I'm finding that in the last year or two many educational institutions and colleges have begun to sponsor research and courses using VRML97.

Neil: I agree. That's the irony. It's the backlash against VRML97. In a year or two's time, with X3D really getting out there, we'll look back at VRML97 and say it was a success. At the moment, we're just in the bad patch of mis-set expectations. When we started out on this path, everyone was working and hoping that we'd all be in a 3D web environment by now. That hasn't happened. I think, again, we are beginning to understand the whole ecology of tools, techniques, and insights we need to make 3D on the web a mainstream reality. We're just beginning to chip away at some of that stuff. We were expecting a huge exponential leap into 3D, when in fact it's going to take a while to get all these components and pieces in place. We will look back at VRML97 and see that it was such a positive and innovative first step.

And as you say, this learning process is going on right now, today. It's in universities, it's out there on commercial websites; not mainstream, but it's there and it's growing. It's a necessary step, I believe, in making 3D a reality, and it's happening at a pace commensurate with the size of the problem. It's not a trivial problem to get the authoring tools, the content, and the user interfaces that are actually useful, not just gimmicky. So it's important to get people to realize that 3D is building momentum. The good news is that VRML97 is the foundation for lots of this work, and there's lots of learning going on. The bad news at the moment is the notion that, since we haven't gone hyper-exponential with web 3D, it is a failure; that's unfortunate, but we'll work through that.

There have been some great milestones lately, as the X3D Task Group has posted its deliverables for the W3C, which include a spec and implementation, and authoring tools… What happens between now and February, when we come to Monterey?

Neil: To answer your question about what that means in terms of tangible progress between now and February: there aren't any roadblocks in the way from the W3C; we don't have to jump through any hoops, and we don't have to pass any tests from the W3C. The really positive thing going on at the moment is that we are beavering away at X3D, which is 3D as best we know how, and the W3C is not having to worry about it. When people ask the W3C, "What are you doing for 3D?", they say, "It's X3D, the Web3D Consortium". That's the big benefit right now: we're not going off in divergent paths. What will happen going forward is that, as 3D becomes needed for the W3C's ongoing work, they will come to us as MPEG4 has and begin to absorb the X3D work. The best thing we can do as a Consortium right now to meet the needs of the W3C is to stick to our knitting and just make sure that X3D is available in a timely way, which means that we stick to our schedule. X3D is being designed to be integrated with all of the new standards like XML. That's going to make it an ideal pick-up for the W3C when they need to integrate 3D more intimately into the work they are doing on their side.

Excellent answer! So MPEG4 is about streaming media: geometry and animations, right?

Neil: Yes. The relationship between MPEG4 and the Web3D Consortium is a complex one. Unlike the W3C, where we are quite kindred spirits, both dealing with open standards (the standards are free once they're done) and working toward a common goal with a common methodology, MPEG4 works in a different way. The fundamental structure and purpose of the MPEG4 group is to create a standard that uses company patents. Companies participate in MPEG4 specifically to get their patents into the standard, which generates a royalty stream going forward. I'm not saying it's right or wrong; it's just a different way of working. Look at MPEG2 and JPEG, which have been very successful standards and very enabling to the industry.

From a logical point of view, the rewards of partnership are potentially huge. Because the MPEG4 people's area of expertise is not 3D, they have benefited substantially from taking the encapsulation of our expertise (VRML97) into their work, saving them years of first-principles design.
On our side, we can benefit because the MPEG4 people are the masters of streaming technology. They have put a lot of thought into how you stream 3D and synchronize it with other media types being streamed concurrently. If we leverage their work, we can save ourselves years of effort trying to figure that out. The only potential barrier is: can we find our way through the intellectual property minefield? That's something we are working on right now. We send representatives to their meetings and are positively working toward resolution on that.

That brings up a question: does MPEG4 present any conflict with an open standard like SMIL, which is endorsed by the W3C and attempts to solve the same problem space?

Neil: I am not an expert on the technical intricacies or the pros and cons of these approaches, but in terms of standards and industry dynamics, the potential is certainly there for conflict. In the end, standards are enabling; you can't dictate with standards. They have to exist in a Darwinian universe, and if a standard is not adding value, or not robust enough to stand on its own feet and solve real problems, there's no point in creating it for its own sake.

It is interesting seeing the Web3D Consortium and the W3C continue to go down the "IP-free" route, while the MPEG4 folks continue down their proven, successful path of "IP-laden" standards. I think the technical advantages point to finding common ground… MP3 is an IP-laden standard, but most people who are downloading MP3s and playing them on their PCs and Rio players don't realize that's the case, because the MP3 royalty model is one that burdens the production and authoring tools, not the listener.

Many VRML purists throw their hands up in horror, saying, "You can't make X3D have a license fee!" I think everyone in the Consortium is doing everything they can to avoid that, but potentially the advantages may be high enough that some compromise might be appropriate down the line. We're not discounting that. A recent vote inside the Consortium was to continue working with MPEG4 to try to find resolution on this issue.

There are a number of companies and contributors involved in building the X3D spec and browser implementations. Some of the core implementations are down to 50-100 KB! Is it true that these contributors have signed off on intellectual property rights for the Core and the full VRML97 Profile?

Neil: Yes. This is such a significant milestone. All the indications are that we can build a full sample implementation of X3D and put that implementation totally into open source. Over the last year, the Consortium has come to understand and embrace the concept of open source. It can be extremely enabling, and we have made a good, positive first step with the community-source VRML97 code that has come from blaxxun. Community source is an excellent research and development tool, but if we can make X3D true open source with no limitation on how the source is used, for both research and commercial applications, that will be a tremendous boost to the momentum of X3D.

Is this the goal for the Core Profile or the VRML97 Profile?

Neil: Our goal is to get both IP-free. Further extensions designed by software companies may be subject to their own distribution policies.
As for the transition from VRML97 to X3D, the fact that one of X3D's base shipping profiles provides complete VRML97 compatibility means that we can have an orderly and controlled transition period. All of the existing VRML97 content is going to work. The value of VRML browsers, as opposed to X3D browsers, will continue to be significant because of all the VRML content that's out there. There's no reason for anyone working with VRML to have any concern about the long-term viability of their content.

Going forward, once we do make the transition to X3D, the componentized architecture is a powerful tool for building industry momentum. Compared with the good but monolithic standard that is VRML97, X3D's architecture is so much more enabling. Put yourself in the position of a Working Group inside the Web3D Consortium working on a specialized application like geographic visualization on the web. If I'm working on that in the context of VRML97, how do I get it deployed? Basically, you have to convince a browser vendor to implement your neat bits and tricks. That is well-nigh impossible. There is a lot of good work going on inside the Working Groups, but none of them have had an outlet for all their good ideas and efforts. With X3D, because we have a componentized architecture, any constituency can develop its own components to plug into the X3D framework and be totally enabled to provide precisely the capabilities that its particular market segment needs. That's a fundamentally different and more positive dynamic than we have with VRML97.
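The componentized-architecture argument above can be sketched in a few lines. The toy Python below treats a profile as a named bundle of components and lets a browser check whether it can display a given piece of content. Every profile and component name here is a hypothetical illustration of the concept, not the actual X3D component set, which was still being specified.

```python
# A toy sketch of the componentized-architecture idea: profiles are
# named bundles of components, and a browser can check whether it
# supports everything a piece of content requires. All names below
# are hypothetical illustrations, not the X3D specification.

PROFILES = {
    "Core": {"core", "time", "grouping"},
    "VRML97": {"core", "time", "grouping", "geometry", "texturing",
               "navigation", "scripting"},
}

def supports(browser_components, profile, extra_components=()):
    """Return True if the browser's component set covers the given
    profile plus any extra components the content declares."""
    required = PROFILES[profile] | set(extra_components)
    return required <= set(browser_components)

# A lightweight browser ships only the Core profile...
lightweight = PROFILES["Core"]
print(supports(lightweight, "Core"))    # True
print(supports(lightweight, "VRML97"))  # False

# ...while a geography-visualization Working Group can define its own
# component and deploy it without convincing every browser vendor to
# change their monolithic implementation.
print(supports(lightweight | {"geospatial"}, "Core",
               extra_components=["geospatial"]))  # True
```

The design point is exactly the one Neil makes: a constituency adds a component at the edge of the framework instead of renegotiating the whole standard.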

Much more in the egalitarian philosophy of the internet. There was a thread on the VRML list about John Carmack's (of id Software) statement that proprietary efforts for 3D will win out because they are better funded, more focused, and unencumbered by the standards process. Does the Consortium have a strategy to change this kind of mentality among the various companies building pieces of the puzzle as proprietary technologies?

Neil: Yes. That's an excellent question. First of all, you have to respect Carmack for what he's achieved. He's probably done more than any one person to bring 3D into the consciousness of the mainstream. I can certainly understand his comments. For the game space, he's completely, 100% correct. There's no open standards consortium to figure out a standardized games engine. Rather, there are a bunch of successful, proprietary engines out there being widely deployed in a number of different titles. His comments definitely ring true there. But the game space is fundamentally different from the more generic and widespread 3D web that we have in our minds, where you can use a whole bunch of different tools and output to a common format. You can download a whole bunch of different players and have content interoperate on any of them in a whole bunch of different browser environments. We are going for a much richer, deeper embedding of 3D into the fabric of the web than just playing multi-user games.
When you are in a multi-user gaming environment, you are basically in a customized, unique environment where everything is under the control of one vendor. For games, which are among the most demanding realtime 3D applications for the PC today, I think that is the most appropriate way of doing it. But to say that you don't need standards for making 3D a fundamental part of the fabric of the web is like saying, "We should all do our own HTML and figure out how to make it interoperable later". This is a different problem that we are trying to address. He is not wrong, but for the problem space we are trying to solve, rather than his, an open standard is fundamental to market growth.
We are trying to make this philosophy a reality, and hopefully X3D will be an enabler to get the message out there. We want to be inclusive of everyone out there wanting to make 3D on the web a success.

We want the Consortium to be the natural forum they come to, to help make 3D as pervasive as we all want it to be. This goes back to our changing the name and charter of the Consortium about a year ago, from the VRML Consortium to the Web3D Consortium. This change was an external instantiation of a fundamental philosophy shift because, frankly, we were initially set up and funded to do one thing, and that was VRML. But it very quickly became clear that as soon as anything else came up that was good from a technological point of view but wasn't VRML, religious wars broke out… weird, negative discussions started up. It seemed more important to bring in all these good ideas if possible, and to work in a more cooperative forum to reduce this Babelization and work toward more interoperability.

So now the reality is that if there is a company out there with a 3D technology for the web, there are a number of different levels at which they can join and engage with the Consortium to reduce Babelization. If they see an advantage in the open standard route for their own technology, a company can join and put their technology through the standardization process that we have proven in the Consortium. We will work to standardize any appropriate piece of technology that will further 3D on the web. It will have to be voted for by the membership, so you won't get some useless or irrelevant technology through the process. The minimum that any company involved with 3D on the Web should do is work inside the Consortium to encourage interoperability between their technology and other 3D standards, including VRML97 and X3D.

Now that we have this componentized architecture in X3D, there is a third way that a company with 3D technology can engage with the Consortium: I am hoping that companies will join the Consortium and convert their proprietary technologies into X3D components. Some might see an advantage in making their component open source; some may want to keep its internal workings closed. It's up to them. By working within the Consortium, they can make sure their component interoperates correctly with X3D and the other components being developed. With these multiple levels of engagement, we are going out there to use X3D as a tool for unification rather than a tool for division.

We are also considering expanding membership to individuals. We are taking the membership votes and doing the math. We want to enable people to get involved; we don't want a financial barrier to be what prevents people from contributing. The Consortium needs critical mass to reach its full potential. We are getting feedback that we should have a membership structure more conducive to small companies and individuals engaging with us. We will probably sort out this policy in January.

Some of the most exciting developments are the proposed hardware-accelerated functionalities for X3D. A new level of rendering and visual aesthetics, beyond what we experience with VRML97, is on the horizon.

Neil: Yes, the X3D Hardware Group includes companies like 3Dlabs, 3Dfx, and nVidia. We are working toward putting baseline hardware capabilities into the core itself. With multi-texturing, you can enable a whole slew of good visual techniques, like reflections. The X3D Hardware Group has a proposal on the table for how this can be defined as a simple, low-impact extension to the nodes. We are currently discussing it with software vendors to ensure that it does not place undue burden on software renderers. With the component architecture, the sky is the limit beyond that. Our proposal will fit comfortably on top of Direct3D and OpenGL.

We now have a functional VRML97-to-X3D converter. It has already been proven 100% VRML97 compliant, and it's up and running. NIST is already running its VRML97 conformance suites through it, and it is working.
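To make the idea of a VRML97-to-X3D translator concrete, here is a deliberately tiny Python sketch that converts a single flat VRML97 node from the classic encoding into an XML element. This illustrates only the translation concept; it is not the converter described above, which must handle the full grammar, nested nodes, PROTOs, and ROUTEs.

```python
# Toy illustration of VRML97-to-XML translation: a flat node like
# 'Box { size 2 2 2 }' becomes an XML element whose fields are
# attributes. The real converter handles the full VRML97 grammar.
import re
import xml.etree.ElementTree as ET

def convert_simple_node(vrml_text):
    """Convert one flat VRML97 node (no nesting) into an XML string."""
    match = re.match(r"\s*(\w+)\s*\{(.*)\}\s*$", vrml_text, re.S)
    if not match:
        raise ValueError("unsupported VRML fragment")
    node_name, body = match.groups()
    element = ET.Element(node_name)
    # Fields in VRML97 are 'name value...' pairs; here we only handle
    # numeric values, treating the run of digits/spaces after a field
    # name as that field's value.
    for field, value in re.findall(r"(\w+)\s+([^a-zA-Z]+)", body):
        element.set(field, value.strip())
    return ET.tostring(element, encoding="unicode")

print(convert_simple_node("Box { size 2 2 2 }"))
print(convert_simple_node("Material { diffuseColor 1 0 0 }"))
```

Because both sides of the translation are well defined (VRML97 by ISO/IEC 14772, X3D by its XML schema), a converter like this can be tested mechanically, which is exactly what conformance suites do.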

Oooh! So that's another GREAT piece of the puzzle!

Neil: Yes!  And in the longer term, we will be working hard with tool vendors to enable native support for X3D; but that will take a little while.  During the transition period, the translators will be in place. 

We have covered a lot of great stuff! 
In the August issue of 3D (gate) magazine, you predicted that:
"In two years' time, our desktops will be all 3D.  In another year or so, with the exception of data that's better done in 2D, the web could go all 3D."
Could you comment on that further? 
Are we really on target for your prediction?

Neil: I think we are. From the Web3D Consortium's perspective, we've stuck to our milestones and X3D is meeting our expectations, so we're fulfilling our part of the bargain. But this brings us neatly back, full circle, to where we started: why hasn't 3D become pervasive on the internet? I think a necessary stepping stone is 3D on the desktop. What I was referring to in that quote was having watched application vendors with a lot of neat ideas for using 3D be unsuccessful in bringing a useful 3D application into the mainstream. What is the missing piece of the puzzle? We have the hardware, we have good APIs (Direct3D and OpenGL), PCs are fast enough, the standards are there, a lot of people are commercially motivated to make it happen, and it hasn't happened. What is the missing magic piece? As I was saying in that article, I think the final thing is the operating system. Look back at the transition from DOS to Windows: when DOS was the prevalent OS, all the applications were text-based. As soon as you went to Windows, in an amazingly short period of time, text-based applications started to look really clunky. Once you tasted the power of a graphical user interface (GUI), you didn't want to go back and type command lines again. Now there are a number of technological initiatives going on inside Microsoft for the Windows platform, the most important of which is GDI+.

As you probably know, there is a huge chasm in the operating system between 2D and 3D. In Windows, you use GDI for 2D and Direct3D or OpenGL for 3D, and the two don't meet. You can't make a 3D object and composite it into a 2D web or word-processor window; you're simply not able to do it. GDI+ brings them together. That's a technological enabler… but the hope is that all the PhDs up in Redmond are working on cool 3D user interfaces that exploit that GDI+ technology and begin to make 3D a useful element in the user interface of the operating system itself. If they can do that, I think you will see the floodgates open: 3D user-interface elements will become basic throughout applications as well, and you'll get into that upward spiral. I think, and hope, that that's going to be the last piece of the puzzle to fall into place and make 3D pervasive. That can happen in the next couple of years; hence my prediction.

Neil, thank you so much! You've really given a lot of perspective on the industry and how it's evolving. Thanks for your time!

Neil: You're more than welcome…


VirtuWorlds, VirtuWorld, VirtuPortal, and 3DEZine are Trademarks of VIRTUWORLDS LLC.
No unauthorized uses are permitted.
© VirtuWorlds 1999, 2000