Blog

One for the thumb!

I was born and raised in Pittsburgh, Pa. I've been a fair-weather fan, but my sister Julia more than made up for me. Anyway, here is something interesting I've noticed in the coverage around the game:

Moodle, Elgg, CivicSpace, CiviCRM and Drupal should join forces

Why?

  • save development effort across projects
  • collaborate on fundraising, marketing, etc.
  • deliver a complete solution across application domains on one web framework: CMS, CRM, courseware, and social apps

What?

  • Phase 1: light integration (single sign-on / common installer / unified interface)
  • Phase 2: leverage the Drupal framework as appropriate

It will work because

  • Funders love joint projects
  • LAMP is way cheaper than JSP/J2EE
  • This is exactly what CTOs and CIOs at universities want
  • We have the community! (Drupal: 55,000, Moodle: 8,900, CivicSpace: 2,000 installs)

Precedents

  • Sakai / uPortal / OSPI
  • CivicSpace / Drupal / CiviCRM

Challenges

  • application integration
  • business models

Roadmap

  1. Project leads sign on
  2. Line up university partners (some as paying members)
  3. Get industry partners to back the project
  4. Merge fundraising targets
  5. Raise seed round from private donors to investigate
  6. Create prototype integration
  7. Raise real money from foundations
  8. Execute
  9. Win

Sakai vs. Moodle

For IT directors at schools debating whether to use Sakai or Moodle as a course management solution, here is a side-by-side comparison. All signs point strongly towards Moodle kicking Sakai's butt, and towards the Mellon Foundation, Hewlett Foundation, and Sakai Partners wasting $6.6M.

Founded:
Moodle: 2002
Sakai: 2004

Community Website Traffic (Alexa*):
Moodle: 150 per million
Sakai: 20 per million

Business Readiness Rating (OpenBRR.org):
Moodle: 4.19
Sakai: 3.23

Vendors:
Moodle: 27
Sakai: 11

Install Base:
Moodle: 8,900
Sakai: 35

Funding:
Moodle: $0 initial funding and ~ $12,000 a year from individual donors.
Sakai Project: $2,200,000 initial grant from Mellon Foundation and Hewlett Foundation and $4,400,000 from core partners.

Related Posts:
Digging into OpenBRR Rating of Sakai and Moodle
Higher-ed LMS Market Penetration: Moodle vs. WebCT+Blackboard vs. Sakai

* Alexa statistics are definitely suspect. I would love to see more reliable data; if anyone has access to better data, please get in touch.

The battle for the semantic web

The stakes: The emergence of a workable metadata representation and data interchange technology will determine the future of the web. As the semantic web is built piece by piece over the next ten years, everything will be up for grabs.

For example...

And that's just the current lot. What happens when all the newbies show up?

The story: The thing is - the semantic web is taking its good old time to come about. You would think that with everything at stake there would be a near-instantaneous glut of innovation. But as it turns out, the research community responsible for birthing the semantic web has failed thus far to fit its vision through the innovation pipe, and now others are coming forward to dismantle it and shove it through piece by piece. This has of course created a very real but rather congenial rift within the community of web technologists working on metadata representation and data interchange on the web - the old RDF vs. XML technology pissing match.

RDF: RDF came straight out of the genius brain of the guy who invented the web in the first place, Tim Berners-Lee. With the financing of corporate research dollars and incubation by the most stately of computer science facilities, the original semantic web proponents are pursuing a fittingly grandiose vision. To be honest, I don't entirely understand how the technology they are developing is supposed to work, but I do know that their approach is predicated on an entirely new way to internally represent and retrieve data that is quite complicated to implement. Anyone in the business of building web apps would immediately tell you that RDF store technology will be a hard sell to web-application developers, who will likely favor their simpler, more tried-and-true approaches. But no bother, say the RDF crew - 'we invented the web once, we'll do it twice' - and they continue hacking their RDF prototypes together.

While I will be the first to admit these MIT CS researchers are much more competent engineers than I am, this argument doesn't sit well with me. The web was new technology born on the frontier of the internet, and it helped catalyze the internet's tremendous growth. But it's a different landscape today. As I see it, the RDF semantic web has a trillion-dollar legacy problem that it has not come to grips with.

1990: When the WWW was created, it offered critical and unique functionality in a new and creative environment. It was ingenious, relatively simple to implement, and the only game in town (well, except for Gopher). Down the mountain the little snowball went - JPL puts up pictures of asteroids, CompuServe signs up some customers, Netscape IPOs, eBay revolutionizes junk sales, Google monetizes search, and voilà, the web is born.

2006: The web works. Billions have been invested in platform technologies such as LAMP, JSP, and .NET, and wells of innovation are being tapped more quickly than ever. Meanwhile, up atop the hill, universities and far-sighted corporate research vehicles have spent eight years perfecting their new snowball but are apparently stuck waiting for others to push it down the hill for them. It simply isn't happening yet. Non-RDF structured data interchange technologies for consumers on the web appear to advance at the speed at which Dave Winer's beard grows, while simple-to-implement, simple-to-understand web-service data interchange techniques (REST) solve the immediate 'enterprise' needs. This leaves no immediate critical-functionality path for RDF technologies to toboggan down.
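
To make that contrast concrete, here is a minimal sketch (mine, not from the original post) of the kind of simple-to-implement REST-style interchange I mean: fetch an XML document over HTTP and read values out of it. The URL and element names are placeholders, not a real service.

```python
# A minimal sketch of REST-style data interchange: fetch an XML document
# over HTTP and walk it. The URL and element names are hypothetical.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/items.xml"  # placeholder endpoint

with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

# No triple store, no ontology -- just a tree of elements.
for item in tree.getroot().iter("item"):
    print(item.findtext("title", default="(untitled)"))
```

That is the whole integration: a GET request and a tree walk, which is why this style keeps winning on immediate needs.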

Hence, after eight years of RDF work, nobody on SIMILE's public mailing list could point me towards a single real-world application of RDF semantic web technologies used for data representation and interchange across the web.

XML: In the absence of workable RDF technologies, other innovators are marching steadily towards working solutions for metadata representation and data interchange. The vision of the semantic web now marches free from its body as a floating apparition of hype and speculation. Implementable standards, running code, and real-world successes are fueling these innovators. And they are making progress.

If two different camps of technologists are each tackling the same problem set and working towards a shared vision, then why are they working on two competing sets of technologies?: Beats me.

So what happens?: It hasn't happened yet. It's happening right now. That's the fun part!

Explaining Reed's Law

When I drop Reed's Law into conversations, people's eyes tend to glaze over. People don't seem to emotionally connect to math equations or respond well to phrases like 'social network theory'; more likely, they have a very hard time understanding something they can't visualize, even if they hear the words and read the proof.

So let's try a narrative instead....

uberzacker: think of it like this
uberzacker: you have 500 people in an auditorium
uberzacker: all listening to a speaker
uberzacker: they are given a task - like, solve the oil energy crisis
friend: heh
uberzacker: only problem is they can only speak to one person at a time
uberzacker: like
uberzacker: in the standard sense
friend: I see.
uberzacker: the speaker takes questions at a mike
uberzacker: and holds a discussion with an audience
uberzacker: ok
uberzacker: you might learn a bit
uberzacker: but you won't get very far in solving the energy crisis
uberzacker: now
uberzacker: imagine everyone in that auditorium can instantly find and create a working group of cohorts
uberzacker: where they can team up and independently solve different facets of the problem
uberzacker: one group for urban planners
uberzacker: another for economic buffs
friend: damn.
uberzacker: another for building a website for the movement
uberzacker: etc.
uberzacker: and each group can coordinate with each other effectively towards their common goal
uberzacker: now, how much more effective is the second group?
uberzacker: exponentially more effective
uberzacker: that's what the internet does, except on a global scale
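
For readers who do want the math, here is a minimal sketch (mine, not part of the chat) of why the second auditorium is exponentially more effective: the broadcast value of a network grows like N (Sarnoff), the pairwise-connection value like roughly N(N-1)/2 (Metcalfe), and the group-forming value like 2^N - N - 1 (Reed), the number of possible subgroups of two or more people.

```python
# Rough network-value estimates for N participants:
# broadcast (Sarnoff: N), pairwise links (Metcalfe: N*(N-1)/2), and
# group-forming (Reed: 2**N - N - 1 possible subgroups of size >= 2).
def sarnoff(n: int) -> int:
    return n

def metcalfe(n: int) -> int:
    return n * (n - 1) // 2

def reed(n: int) -> int:
    return 2 ** n - n - 1

# N=500 is the auditorium from the chat above.
for n in (10, 50, 500):
    print(f"N={n}: broadcast={sarnoff(n)}, links={metcalfe(n)}, groups={reed(n)}")
```

Even at N=50 the group-forming term dwarfs the other two, which is the whole point of the auditorium story.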

I came up with a law

Zack's Law: Data interchange technologies on the web advance at the speed at which Dave Winer's beard grows.

Is the Blogosphere tapping the power of Reed's Law?

I've spent a good deal of time over the last couple of years puzzling over a couple of things:

  1. What makes the blogging medium so powerful? You have 8,000,000,000+ webpages out there on the net, 0.31% (25,000,000) of which are blogs, yet blogs seem to carry so much weight. Why?
  2. How can we see and directly measure the effects of Reed's Law? It's one thing to understand that by harnessing group forming within your network you can potentially tap exponentially more power from your network, but I've seen scant studied, provable evidence of this dynamic playing out.

What if the reason the blogging medium is so powerful is that it is the best working example of a distributed group-forming network our world has ever seen, and it has therefore tapped into the previously unrealized power that Reed's Law portends?

Consider this:

  1. A group is created within the readership of every blog.
  2. The blogs (groups) are highly networked. This is key.

How are blogs highly networked?

Well, people generally start blogging with the self-interest of building an audience of readers. One of the most effective ways to build an audience is to get linked from a highly trafficked blog such as BoingBoing.net. The way you get linked is to post something interesting that the editor of BoingBoing thinks is of enough value to their community to repost. In fact, BoingBoing has built its audience by steadily providing a feed of interesting information scoured from across the net. At this point they barely have to lift a finger - interesting content comes flying into their submission box faster than they can filter it.

So in other words, it is in the blogger's self-interest to share information with BoingBoing, and it is in BoingBoing's self-interest to promote information from the blogger (if it's good enough, of course). Now, if we think of BoingBoing and its audience as a group, and we think of the blogger and their audience as another, separate (smaller, pithier) group, then what we are describing is really information traversal from group to group. In other words, the social dynamics of blogging make the blogosphere very conducive to inter-group information traversals. The yearning of the independent author for an audience, the information-aggregation role BoingBoing plays, and similar relationships across the blogosphere enable the blogosphere as a whole to interact in a highly networked fashion.

This dynamic shows up in many ways. Go look at how many blogs the biggest bloggers subscribe to with their aggregators (I think Scoble is up to 1,000). Look at typical features of a blog (trackbacks, blogrolls, etc.). See how much content published to a blog is actually relayed from other sources. The blogosphere is undeniably densely networked. I believe it is by far the best example we have of a distributed group-forming network, and in this way I think it is the first real peek we are getting into what happens when Reed's Law is unleashed and the effective value of a social network yields exponential returns.

After all, publishing to the web is nothing new. People have been publishing to groups for decades (Usenet, mailing lists, etc.). Forums had millions and millions of users and contributors before Blogger.com launched. So why is blogging having the impact it is? I believe the answer is that those other 7,975,000,000 sites on the internet were published by individuals who had much less impetus to connect their group of readers with the audiences of other groups across the net, and thus lost out on the power of Reed's Law.

RDF Semantic web research isn't working

It has been eight years since Tim Berners-Lee threw up his hands, said "it's all crap, let's do it over", and set off to create the semantic web. We've got very little to show for it so far. I firmly believe the work semantic web technologists are pursuing is important, the concepts will inevitably be realized, and I very much want to see this research become viable. But things are not moving fast enough, and the tack semantic web researchers are taking simply isn't working.

Semantic web technology is mired in a chicken-and-egg paradox. The technologies are generally not useful unless they are adopted and implemented on a large scale, and people are not willing to invest in implementing them unless they are useful. This is exacerbated by the fact that there are very high technology, business, and social barriers to implementing the semantic web.

  1. Technology Barriers: Even today, implementing RDF parsers is complex and difficult, and the best tools are hopelessly slow. These are the most basic and fundamental tools the semantic web needs to operate, and we still can't get them to work (see the sketch after this list).
  2. Business Barriers: If the semantic web is implemented, the current web industry will be intensely disrupted. eBay, Google, Amazon - virtually all mainstays of web business - will have to significantly adjust their business and technology models. Because of this, web businesses are trepidatious when it comes to investing in, adopting, and promoting the semantic web.
  3. Social Barriers: The way in which we use the web will be greatly changed when the semantic web is implemented. Just look at the current state of usability in feed aggregation for a hint of what will be required for users to adopt the newly realized functionality.
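
To give a feel for barrier (1), here is a minimal sketch of what consuming RDF looks like with a typical toolkit; the Python rdflib library and the URL are my assumptions for illustration, not something the original post prescribes. Even this hello-world pulls in a full graph model before you can read a single value.

```python
# A minimal sketch of reading RDF with rdflib (library choice and URL are
# illustrative assumptions, not from the original post).
from rdflib import Graph

g = Graph()
# Parse an RDF/XML document into an in-memory triple store.
g.parse("https://example.com/data.rdf", format="xml")

# Every statement becomes a (subject, predicate, object) triple.
for subject, predicate, obj in g:
    print(subject, predicate, obj)
```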

These barriers are far from insurmountable, but the tack the current researchers are taking simply won't cut it.

  1. Researchers are not finding adequate use cases for implementing compelling functionality; instead, they are creating widgets. There are a great many organizations out there with real-world needs that would be well served by implemented semantic web technology, but researchers are for the most part turning a blind eye and working in a vacuum.
  2. Researchers are not picking their battles. Instead, they are building generic tools with little real-world applicability.
  3. Researchers are not keeping up with the web and web-publishing software. It seems that in an effort to remain neutral towards the current web-publishing industry, semantic web researchers choose to build their own tools in isolation. This means that anyone wanting to reuse these tools in a real-world application has to re-implement them within their own web-publishing environment, which, due to the high technology barriers, simply isn't happening. This is a shame, because it would actually save the researchers time, effort, and money if they simply implemented their tools within web-publishing environments such as Drupal, and it would allow adopters to implement the tools at zero cost.
  4. Researchers are not moving at the pace the web is currently developing; instead, they are attempting to leapfrog it. A good example of this is the Structured Blogging and Microformats initiatives. Why are semantic web researchers not collaborating with the teams pursuing these projects?

So what can we do about it?

  1. Researchers need to stop thinking of themselves as researchers and start thinking of themselves as implementors.
  2. Research institutes need to join forces with emerging businesses looking to adopt semantic technology. This breaks the current model of business / research institute collaboration since startups do not have money to contribute to fund research, but tough noogies.
  3. Researchers need to build their tools in real-world development environments, i.e. as modules for LAMP web-publishing tools such as Drupal and WordPress. They need to find more organizational partners to deploy their solutions. They need to do something other than build widgets.

What does it take to make Drupal easy to use?

Dries, the founder and lead developer of Drupal, just posted this on his weblog:

For long I focused, completely and utterly, on the aesthetics of Drupal's code, neglecting eye candy and ease of use. I spent days trying to do something better, with fewer lines of code and more elegant than elsewhere. The aesthetics of Drupal's clean code has attracted many developers, but has also given Drupal the reputation of being developer-centric and hard to use.

--snip--

For Drupal to remain competitive in the future,

  1. we'll have to offer critical functionality not available in other content management systems, or
  2. we'll have to make Drupal easier to use and improve the aesthetics of Drupal's user interface design, and
  3. we have to maintain the aesthetics of Drupal's code.

As other systems are catching up in terms of critical functionality and because the amount of critical functionality is limited, we have little control over (1). Hence, we should focus on (2) and (3). To grow the number of users we should focus on (2) and to grow the number of developers we should focus on (3). Because the ability to make changes to Drupal's code is restricted, we can easily enforce (3). That pretty much leaves us with (2) to worry about

First reaction after reading this: Yes! Yes! Yes! (and more yes!)
Second reaction: Ok, but how?

The potential for Drupal's success on the web is staggering. If Drupal can rock all three - offer all critical functionality in an easy-to-use package built on a beautifully constructed and extensible framework - it will compete on a level no web platform currently can.

But #2 seems to be the sticking point. Of every message that goes by on the developers list and every CVS commit, what percentage comes from someone with training in usability? For every thought or effort that goes into building a new feature, how much goes into making sure someone can figure out how to use it?

Very successful, consumer mass-market, organically grown open-source applications are extremely few and far between. Heck, come to think of it, maybe they don't even exist yet. Firefox is just now nearing 10% market share, and it has had helping hands with pretty deep pockets since the start. WordPress has a big install base, but its blog-tool market share is a relatively small slice so far, and its slice of all internet publishing tools is vastly smaller.

But more troubling, both of these applications are - relative to Drupal - in much better-defined application spaces. By the time Firefox came around, browsers had been in use for 10 years. Likewise, when WordPress was started, blogging tools were well understood, and they are pretty straightforward, simple applications. But Drupal is off the map - innovating on many fronts at once. For code development this works out OK: developers simply adhere to Dries's oversight or their code doesn't get checked in. But making the interfaces easy to use and aesthetically pleasing? That is hard work I don't think the Drupal community currently has the expertise to pull off.

But we can certainly get a lot farther along than we are right now. The before-and-after photos are impressive; Drupal has come a long way in a fairly short amount of time. Dries' consistent insight has gotten Drupal to where it is today. Just as Blake Ross' commitment to making a browser for his grandma turned Mozilla into Firefox, and Matt's commitment to aesthetics and usability made WordPress into a work of art, Dries' commitment to making Drupal usable is a prerequisite for transitioning Drupal from a powerful gadget into a pervasive utility.

But Dries is going to need a lot of assistance. How can we help?

Here are some options that I see...

  1. Rally the troops: consistently press Drupal developers to contribute interface and usability improvements. Make it a chief objective of each point release. Dries is already doing quite a bit of this.
  2. Find allies: figure out a process for better integrating outside usability oversight, and go find usability experts to contribute back to Drupal pro bono. Kieran is pursuing this.
  3. Hire mercenaries: go raise money to employ usability experts and interface engineers to reshape Drupal's interface. I will soon be in a position to pursue this.
  4. Clone Steven Wittens en masse: any takers?

Hello World

This blog will be used to publish my thoughts and interact with the world. I hope I will be more successful now than in my previous attempts.
