Category Archives for Technology
peterme talks about Audience Segmentation, which is pretty much what we’re doing with information spaces in Sytadel.
A holy grail in Web site information architecture is the ability to cleanly segment content by audience type. Much of the content on a website is not applicable to every single person, but without good segmenting methods, we have to expose all that content to everyone.
(Originally posted to Synop weblog)
Today I stumbled across one of Dave Winer’s RSS posts titled Would a big media company lose traffic if they supported RSS?. As regular readers know, I’m not a big fan of the idea that big media RSS is “little almost useful teaser bits of random stuff which have been selected by the publisher from their bigger stash of actually useful stuff”, so while I can see Dave’s point as a stopgap measure, and appreciate that his post is all about evangelising RSS, the real problem still isn’t being solved: the availability of and access to content (which I promise I won’t reiterate yet again 🙂 ). From Dave’s post:
I assume you’d publish links to your articles with brief descriptions, in your RSS feeds. So when the reader clicks on a link, they go to your site to read the full article […] and your traffic stays even. Of course those pages have ads, so your revenue doesn’t decrease.
Even uber-Microsoft blogger Scoble isn’t a fan of crippled feeds, and to say that traffic stays even is a bit misleading, as typically you’d see two or even three pages of content with several ads per page for each news item you read on a web site (roughly 200-900% more ads than you’d see arriving via a feed). Replacing this with a feed will definitely reduce traffic; that’s sort of the point, isn’t it? Remove the web site embellishments, show the raw content (or an almost useful part of it) in a feed, reduce traffic, yet increase content discovery and value. If this weren’t the case, then people wouldn’t be starting to talk about putting ads in feeds to protect their revenue.
Another argument goes that increased (or viral) traffic from weblog referrals could offset the reduction in traffic, but that’s a weblog effect, which people often forget is independent of RSS or Atom.
This all seems a bit defeatist to me: to invent this great syndication technology (RSS, Atom etc.), then have much of its value crushed by media companies perhaps too scared or too set in their ways to question their own outdated business models. It’s the same problem as p2p file sharing and the big record companies; the only step forward that works for both sides is to change the business model to embrace and take advantage of the technology.
People want music (and big media content), and they don’t mind paying for it to arrive in a way that suits them. Seems pretty simple to me.
(Originally posted to Synop weblog)
I was talking the other day about filtering or lensing content, in order to view it according to a particular combination of context, depth and field of view (a rough sketch of this model follows the list below).
- Context is the orientation and position of the content, with respect to related content.
- Depth is the amount of focussed detail contained within the content. We’d liken this to depth in the standard model of 3D space, and typically use the zoom in and out analogy in user interfaces like Adobe Photoshop or Microsoft Visio.
- Field of view is the scope of the content, and using our 3D model, is effectively the width and height, or area covered by the content.
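To make those three dimensions concrete, here is a minimal Python sketch of what a “lens” over a pool of content items might look like. The class and field names are my own invention for illustration, not anything from Sytadel itself.

```python
from dataclasses import dataclass

@dataclass
class Lens:
    """A hypothetical content lens: what to relate the content to, how much
    detail to show, and how wide an area of the topic to cover."""
    context: set        # topics/relationships the viewer cares about
    depth: int          # 1 = overview ... 5 = full detail
    field_of_view: set  # sections or areas of the site in scope

@dataclass
class ContentItem:
    topics: set
    detail_level: int
    area: str
    body: str

def apply_lens(items, lens):
    """Keep only items that intersect the lens's context, sit within its
    depth, and fall inside its field of view."""
    return [
        item for item in items
        if item.topics & lens.context
        and item.detail_level <= lens.depth
        and item.area in lens.field_of_view
    ]
```

The interesting question is where those annotations on the content come from in the first place, which is what information spaces are for.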
Within Sytadel, our CMS product, we’ve tried to address the context problem in several ways, but the most interesting is through information spaces. A Sytadel information space is like a category, and each item of content must belong to one. However, information spaces also have security control, which is used passively during web page construction for personalisation. Information spaces are like filters/lenses, and help weave the content into the final constructed web page.
All web pages in Sytadel are constructed at request time, and can contain several hundred items of microcontent on some of the more complex pages. Depending on the security for the content and information spaces, we end up with a page constructed specifically for this particular user and their location within the site, by combining content from various information spaces and several layers of information architecture. We say location, as you would with a typical web site, but because every page is completely dynamic there’s no real notion of a fixed page or location within the site; it is left to the information architecture to allow the user to perceive the structure of the site in their own particular way. This is real personalisation, and without bragging too much, Sytadel does this exceptionally well.
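To show what I mean by weaving security-filtered microcontent into a page at request time, here is a rough Python sketch. Everything in it (the access table, the slot names, the functions) is illustrative only; it is not Sytadel’s actual API.

```python
from dataclasses import dataclass

@dataclass
class Microcontent:
    space: str   # the information space this item belongs to
    slot: str    # where it sits in the page's information architecture
    html: str

# Hypothetical access table: which information spaces each user may see.
ACCESS = {
    "alice": {"Internet", "Extranet", "Intranet"},  # staff member
    "guest": {"Internet"},                          # anonymous visitor
}

def build_page(user, items):
    """Weave together only the microcontent from information spaces the
    requesting user is allowed to see, at request time."""
    visible_spaces = ACCESS.get(user, {"Internet"})
    visible = [item for item in items if item.space in visible_spaces]
    # Group by slot so the information architecture, not the storage,
    # decides how the page hangs together.
    slots = {}
    for item in visible:
        slots.setdefault(item.slot, []).append(item.html)
    return "\n".join(
        f"<div class='{slot}'>{''.join(parts)}</div>"
        for slot, parts in slots.items()
    )
```

Two different users requesting the “same” page get different pages, which is the point: the page is an artefact of the user, their access and the information architecture, not a stored document.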
Getting back to filtering/lensing, because the technology is not currently available to make the contextual, depth and field of view decisions required for this type of personalisation, information spaces allow us to annotate the content to give the system hints. You could liken this to web search engines, where AltaVista tried to use page metadata for hints, and Google dismissed the metadata once the technology became available to act purely on the content. (I don’t like to use too many search engine examples, because Peter, our in-house pedantic search uberguru, usually spends an hour explaining to me why my example is slightly incorrect and a better example would be…)
We have a standard diagram we use to try to explain information spaces to our customers, which likens them to a 2D onion skin: successive layers of information spaces containing content, such as Intranet, Extranet and Internet, with our page construction represented as polygons of various sizes and shapes laid over the onion. An external observer looking at the site may see content from all three layers, as if peering into the onion.
In hindsight, this is a flawed analogy, because it gives information spaces a distinct layer ordering; while that is technically possible within Sytadel, we instead use a flat, separated information space model for sites which have the typical team (Intranet), organisation (Extranet), public (Internet) content split. Also, this weaving of content into the final page means that the typical end user is unaware which parts of the page come from the Intranet, Internet, Extranet, or any other information space or section of the site. With an onion, you have a pretty good idea what each layer looks like.
Sytadel is a great example of annotating content to solve the problem of personalised filters/lenses on content, and it is encouraging to see that, for this and a fair number of other inherent problems with the assembly and visualisation of microcontent, we’ve either solved or at least recognised the problems which weblog CMS developers are only now starting to bump their heads against.
While our Sauce project has a different user base to Sytadel, the technical challenges are similar if not the same, which gives both Sauce and Sytadel a nice reinforcement effect, as each gains from the other’s future development. Bring it on I say, bring it on!
(Originally posted to Synop weblog)
I love CSS Vault, I really do. The last six months they’ve really gone from strength to strength, and now people consider them the ultimate CSS gallery site.
But why does every single site in their gallery end up being 105.2 pixels wide? Well, I’m exaggerating ever so slightly, but the whole point of HTML is that the browser renders the site as it thinks it should look, and the user can adjust their window size to whatever they feel comfortable with.
For example, I have my work machine set to 1600 wide, and all my browser windows are, you probably guessed it, 1600 wide. Want to know what a typical “fully CSS compliant” web site looks like on that kind of set up? Crap, basically. Imagine if you had a large six foot wide whiteboard to play with, but you were restricted to using only the middle 30cm or so.
Don’t get me wrong, all the sites in the CSS Vault gallery look absolutely fabulous, but why do they all have to be 98.4 (slightly exaggerated) pixels wide?!
I figure the reason is that developers will always abuse what they’re given. Remember the IE-only support days, remember the tables-as-layout hack days, and now remember the CSS-as-fixed-layout days. A lot of work goes into these standards (HTML and CSS) to make them completely browser independent, or at least malleable enough for user personalisation, so why don’t web developers understand the spirit of the standards? Do they think every browser window on earth is either 640, 800 or 1024 pixels wide? I think I’ve seen browser windows in every single pixel size from 300 to 1600. That’s the reality I’m afraid, and forcing a 927-pixel-wide window user into seeing only 800 is just plain stupid. It is the user for whom the site exists, so why not do them a favour: instead of showing off how talented the graphic designers are (and how little they care about the actual site content), do something for the audience for a change?
My window is 1600 pixels wide, I do it for a reason, and if I see another web site which uses only a couple of hundred pixels, then claims to be super standards compliant, I’m gonna go nuts! In fact, I might even start a new CSS Vault, and call it Skinny Vault, with the URL www.useless.skinny.websites.com. Know of a useless skinny web site? Enter a comment on this post, and together we can rid the web of these graphical dunderheads once and for all.
We’ve been saying it for years, but the Internet has a lot to answer for. Here are a few reasons why.
Fermilab is a U.S. laboratory for research into high energy particle physics, second only to Europe’s CERN (European Laboratory for Particle Physics), but home to the world’s highest energy particle accelerator and collider, the Tevatron. Basically they accelerate protons and antiprotons and smash them together to try to identify their constituents. I won’t get into gluons and bosons, but you can check their site if you want to know more. Anyway, a current project of theirs is to measure the history of the expansion rate of the universe by mapping the effects of the dark energy thought to make up around 70% of the universe. To do it, they’re using a 500-megapixel CCD (Charge-Coupled Device), the same technology used in consumer digital cameras. Compare that to the 5-megapixel cameras you’ll currently get at the local camera store. (via slashdot)
Apparently the other night, 64 million Americans voted for American Idol. Compare that with November 2000, when 99 million voted for members of the U.S. House of Reps and 105 million voted for presnit G.W. (via GLOBALIZE THIS!)
Here’s a story about Washingtonienne, the anonymous sex-exploits blog of a Washington-based staff assistant, which was saved from the trash heap by a caring individual after it was dumped; Wonkette, who you may know, outed her; and the Washington Post article gives the background as well as an interview with her after she was fired over the blog. This would sound like a regular sex scandal with a bit of high tech mixed in if it weren’t for the fact that she started the blog on 10th May and was fired and moving to New York by 23rd May.
Finally, we now have a TV show in Sydney called Mondo Thingo, hosted by Amanda Keller, original Towards 2000 member and part-time breakfast radio comedienne; I caught the last 10 minutes of it tonight. Basically it is a TV version of Boing Boing and similar web sites. Not a copy, mind you, but the same cultural and weird kind of stuff.
So what do all these have in common? Not too much really, but take Fermilab: before the Internet (and arguably New Scientist, which I did read back in the pre-Internet days) I wouldn’t have known how a CCD worked, who Fermilab were, or why they were looking for dark energy. More to the point, most sites just take this for granted now, and the amusing part about the 500-megapixel camera isn’t that it’s unbelievable, it’s that us geeks would like to have one.
Before the Internet, who would have cared that 64 million Americans voted for anything, let alone a TV show? The fact that a political anti-globalisation web site (a good site, by the way) is highlighting it as a way of showing how dumbed down modern human beings have become makes it particularly interesting, but the fact that it was written the day after, with all the figures at hand, by someone who isn’t a professional journalist, says something about the modern world.
Regarding Washingtonienne, the whole story runs for only 13 days; that’s less than two weeks. The blog started on 10th May, built up popular acclaim in the space of about a week and a bit, then spread across all the tabloids in Washington; the woman was outed, fired and interviewed, and the story was finally closed and forgotten by 23rd May. Welcome to the Internet!
Seen on the atom-syntax mailing list:
[..] Thus, you can’t say things like “Give me only the entries that have been updated since time XXXX”. Should HTTP be extended to address better the needs of Atom? Should RFC-3229 be extended to define an ATOM specific mechanism for retrieving Atom Fragments?
Well, you could indeed, for most CMSes, create a URI that would launch a query that would retrieve a bunch of entries, or an RSS/Atom feed for them, or whatever. There might be scope for standardizing the query encoding. – Tim Bray
At the risk of sounding like a broken record, and I promise this will be my final post to include the words control, publishing and pipeline, please mark 18th May 2004 down in your diary as the day Atom started to get it.
Amongst discussions of push vs. pull, firewalls, open ports, and other technical issues, ultimately what is important is that I am able to get access to the content that I want. At its simplest level, this is a “pull”, and in order to get all the content I want, I’m going to need filtering/querying, directories and content relationships. Do RSS or Atom currently do any of this?
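To make the filtering/querying point concrete, here is a minimal Python sketch of the kind of pull I mean: ask the server politely with a plain HTTP conditional GET, then keep only the entries updated since a cutoff. The feed URL is a placeholder, and the ?updated-after= idea mentioned in the comments is purely hypothetical (no such query is standardised), which is exactly why the sketch has to filter on the client side.

```python
import urllib.request
import urllib.error
import xml.etree.ElementTree as ET
from datetime import datetime, timedelta, timezone

ATOM_NS = "{http://www.w3.org/2005/Atom}"   # Atom 1.0 namespace; 2004-era draft feeds used http://purl.org/atom/ns#
FEED_URL = "https://example.org/feed.atom"  # placeholder feed

# A server could support something like FEED_URL + "?updated-after=...",
# but nothing like that is standardised, so we filter on the client instead.

def entries_updated_since(url, cutoff, last_fetched=None):
    """Fetch an Atom feed (honouring If-Modified-Since) and yield the
    (title, updated) pairs of entries newer than `cutoff`."""
    req = urllib.request.Request(url)
    if last_fetched:
        req.add_header("If-Modified-Since", last_fetched)  # plain HTTP, no Atom extension needed
    try:
        with urllib.request.urlopen(req) as resp:
            body = resp.read()
    except urllib.error.HTTPError as err:
        if err.code == 304:     # nothing has changed since the last fetch
            return
        raise
    root = ET.fromstring(body)
    for entry in root.findall(f"{ATOM_NS}entry"):
        updated = datetime.fromisoformat(
            entry.findtext(f"{ATOM_NS}updated").replace("Z", "+00:00"))
        if updated > cutoff:
            yield entry.findtext(f"{ATOM_NS}title"), updated

# Example: everything updated in the last day.
# for title, when in entries_updated_since(
#         FEED_URL, datetime.now(timezone.utc) - timedelta(days=1)):
#     print(when, title)
```

It works, but it is still the client doing all the filtering after pulling the whole feed; directories and content relationships are further off again.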
See my previous posts for details on pulled feeds as information filters, big media changing to content based business models, insight into why personal control of content is inevitable, and why RSS is simply a distraction from the real game.
And on that note, I’ll leave you in peace. 🙂
(Originally posted to Synop weblog)
Today Engadget posted a rumour (from AppleInsider) that the next iPod would have direct audio input, using a built-in MPEG-4 or AAC encoder. If this is true, then I’d seriously consider it the MiniDisc killer, which would be amusing considering every one of the couple of hundred MP3 players released in the last 12 months was supposedly an iPod killer.
For background on MiniDisc, see this post of mine from earlier on.
What does audio input give us? Well, you can plug in a condenser microphone for starters, and do away with other personal recording devices. You could also plug in the outputs of other equipment like home electronics, handheld devices, concert mixing desks for bootlegs etc. And the beauty of the iPod is that you just take it home and everything gets sucked out into iTunes, which you can then drop into an audio app of your choice to edit, mix and burn, and you’re done. Using a professional boom mic, you can record high quality sound to the iPod and transfer it directly into Final Cut Pro or iMovie.
DJs are already replacing CD collections with iPods, and it won’t be long, assuming the audio input rumour is true, before we’re able to mix our own audio at any time and place we wish. Random access, digital, high quality audio, directly transferable to and from Mac and Windows, software-upgradable sound quality, and a USB/Bluetooth connection. I’m sorry, but that’s a MiniDisc killer. No wonder Sony are suddenly releasing so many devices based on hacked MiniDisc technology, as they’re about to have 15 years of technology development made redundant virtually overnight. A classic example of product panic. You probably won’t believe me, but I actually love Sony products, and most of my home electronics equipment is high end Sony, but aside from a period of about 4 years where it was relevant, MiniDisc is a flawed late-1980s technology that I at least won’t be sad to see disappear. Goodbye and good riddance.
Today Engadget posted a rumour (from AppleInsider) that the next iPod would have direct audio input, using a built-in MPEG-4 or AAC encoder. If this were true, then I’d seriously consider it the MiniDisc killer, which would be amusing considering every one of the couple of hundred MP3 players released in the last 12 months was supposedly an iPod killer.
A little history of MiniDisc is probably in order. Back when Sony and Dutch company Philips invented CDs, we suddenly had digital audio in our lounge rooms, cars and even in our Walkmans. This was a great money spinner for Sony and Philips, not because they could sell CDs, as Sony wasn’t actually in the music business at that stage, but because they could sell their manufacturing plant technology and license the compact disc certification mark. Only Sony and Philips had developed the CD manufacturing technology; electronics companies were required to license the playback laser technology from them, and the record companies were required to pay for the privilege of having that little compact disc logo on their product. This is why the current CD DRM technologies which prevent digital copying of CDs have Philips a little frustrated and Sony in a bit of a schizophrenic quandary, because the DRM doesn’t actually conform to the Sony and Philips standard, and therefore cannot use the compact disc logo, which ultimately means they don’t have to pay for it either. Sony of course is now in the record business, having bought CBS Records (home of the Columbia label) back in 1988. Also, you can tell the difference between Philips and Sony manufacturing by the see-through plastic centre of a CD, which is clear for Sony and opaque for Philips. But I digress.
The problem with CDs, and why people were still buying cassettes, was that the CD was read-only, and home equipment that could manufacture a CD seemed a long way away, until of course Pioneer invented the technology to do it. At least my memory says it was Pioneer, so I may be wrong. In fact I searched the CDR FAQ and couldn’t find a reference to it, but I’m sure if you email the maintainer, Andy McFadden, who is also an old Apple IIer like me by the way, he’ll track down the answer for you.
So to plug the gap, Philips invented the Digital Compact Cassette (or DCC), a digital version of the old stereo cassettes we knew and loved, which made sense considering they had also invented the original cassette to begin with.

Digital audio, in its raw form, is simply a series of values representing the position of a waveform over time; in the case of CD, 44,100 samples per second at 16-bit resolution. 44,100 samples per second, or 44.1kHz, was chosen because the highest frequency our ears can hear is somewhere around 20kHz, and you need at least two samples per cycle of the highest frequency you want to reproduce, enough to capture the positive and negative swings of the wave for playback. The original Fairlight music computer sampled at 50kHz by the way, and DAT tape, while variable, is able to sample at 48kHz, which is why DAT is still so popular. These samples are called PCM, or Pulse Code Modulation, and are the basis of digital audio.

Anyway, in order to squeeze the huge amount of data required for digital audio onto tape, Philips came up with a technology called PASC, or Precision Adaptive Subband Coding. The basic idea is that you chop the incoming audio into 32 frequency bands (subbands), ranging from low bass sounds up to around 22kHz, remove the sounds within each band that probably can’t be heard, then join the bands back together again. This compresses the data, but it is of course lossy, so every time you record with it you lose data from the original waveform. However this was considered fine, because a format that is effectively only good for one generation of copying gives you a built-in form of DRM. The problem with PASC was that the bands were divided into equal widths across the spectrum, whereas our perception of pitch is logarithmic, so the lower bands each had to cover far more perceptual range than the upper bands. Perhaps this was supposed to address the compression-of-harmonics problem, but I’ll come to that a little later.

Anyway, DCC failed. It wasn’t random access, so you still had to fast forward and rewind; PASC obviously wasn’t ideal for home taping; and the recording was still made by magnetically orienting metallic particles on a tape that rubs against the head (the same as a standard cassette), wearing both the tape and the head down, all to simulate a purely digital recording format. But the big reason it failed was Sony.
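Quick aside before Sony enters the picture: the CD numbers above are easy to sanity check. A minimal Python sketch, using the standard CD parameters (nothing DCC-specific):

```python
# Back-of-the-envelope arithmetic for raw CD-quality PCM audio.
SAMPLE_RATE = 44_100   # samples per second (44.1kHz)
BIT_DEPTH = 16         # bits per sample
CHANNELS = 2           # stereo

# Nyquist: the highest frequency a sample rate can faithfully capture is half
# the sample rate, which needs to sit above the roughly 20kHz limit of hearing.
nyquist_hz = SAMPLE_RATE / 2                                 # 22,050 Hz

bytes_per_second = SAMPLE_RATE * CHANNELS * BIT_DEPTH // 8   # 176,400 bytes/s
megabytes_per_minute = bytes_per_second * 60 / 1_000_000     # ~10.6 MB/min

print(f"Nyquist limit:     {nyquist_hz:,.0f} Hz")
print(f"Raw PCM data rate: {bytes_per_second:,} bytes/s "
      f"(~{megabytes_per_minute:.1f} MB per minute)")
```

That ten-and-a-bit megabytes per minute is the figure to keep in mind for the storage discussion further down.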
Sony came up with MiniDisc, which still records magnetically, but onto a rotating magneto-optical disc rather than tape, giving it random-access capability like a CD: a magnetic head writes while a laser heats the recording layer, and the laser also reads the data back and keeps everything on track, so the tracks can be more closely spaced and store more data. Sony also developed their own compression scheme called ATRAC, or Adaptive TRansform Acoustic Coding, which works along similar lines to PASC but divides the signal into 52 non-uniform, roughly logarithmically spaced bands instead, giving each band a more equal perceptual importance across the spectrum of hearing. Having killed off DCC, Sony is still flogging this 1980s-era technology as modern audio equipment.
The big flaw in PASC and ATRAC is the fact that sound, particularly in music, is based on harmonics. A simple note played on a guitar, for example an A at 440Hz, isn’t just 440Hz; it also generates harmonics at whole-number multiples of that frequency, so 880Hz, 1320Hz, 1760Hz and so on. The problem is that these harmonics fall into different subbands when compressed, and may or may not be removed depending on what the encoder decides it can throw away. Pull out a couple of harmonics, and the sound ends up thinner and more echoey. This is the basic reason why MP3 and the rest sound so crap at low bitrates. The importance of harmonics tends to be lost on technologists, which is why audiophiles still love vinyl, and a lot of professional recording is still done in the analog domain.
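To see why the band layout matters, here is a quick Python sketch that takes the harmonics of that 440Hz A and shows which band each one lands in under a crude equal-width split versus a crude logarithmic split. Both splits are toy models for illustration, not the actual PASC or ATRAC filterbanks.

```python
import math

FUNDAMENTAL = 440.0   # the A above middle C, in Hz
NYQUIST = 22_050.0    # top of the CD-rate spectrum
N_BANDS = 32          # toy filterbank size

harmonics = [FUNDAMENTAL * n for n in range(1, 9)]   # 440, 880, 1320, ...

def equal_width_band(freq):
    """Band index when the spectrum is chopped into equal-width slices."""
    return min(int(freq / (NYQUIST / N_BANDS)), N_BANDS - 1)

def log_spaced_band(freq, f_low=20.0):
    """Band index when the slices widen logarithmically, like our hearing."""
    position = math.log(freq / f_low) / math.log(NYQUIST / f_low)
    return min(int(position * N_BANDS), N_BANDS - 1)

for f in harmonics:
    print(f"{f:7.0f} Hz -> equal-width band {equal_width_band(f):2d}, "
          f"log-spaced band {log_spaced_band(f):2d}")

# With equal-width bands, the first few harmonics all crowd into bands 0-2,
# so one coarse coding decision down there touches most of the musically
# important detail; the log-spaced split spreads them out more evenly.
```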
MiniDisc and DCC use lossy compression, the same as MP3, AAC and MPEG-4, so they’ll all degrade through successive generations of re-encoding. That’s part of why the record companies aren’t completely up in arms about this: most audio luddites will rip music at some really low bitrate, which makes it sound tinny and echoey, and won’t realise how bad it sounds. A recent article by Jupiter Research claimed that as personal devices, particularly MP3 players, keep increasing their storage, there is a limit to what people actually want (probably no more than 1,000 songs), and so manufacturers are just increasing memory size for the sake of publicity. What they fail to realise is that increased storage actually means the capability to finally return to raw, non-lossy PCM encoding for much higher quality audio. I can finally toss that one-megabyte-per-minute MP3 away, and have a perfect digital copy at ten or so megabytes per minute instead. As bandwidth and storage increase, lossy compression such as MP3 will become a distant memory, a short 20-year period in history which we’ll look back on with melancholy.
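The storage arithmetic behind that, as a quick sketch; the song length and MP3 bitrate are illustrative assumptions of mine, not figures from anywhere in particular:

```python
# Rough storage comparison for one song: lossy MP3 vs raw CD-quality PCM.
SONG_MINUTES = 5                     # illustrative song length
MP3_KBPS = 128                       # a common (and fairly low) MP3 bitrate
PCM_BYTES_PER_SEC = 44_100 * 2 * 2   # 44.1kHz, 16-bit (2 bytes), stereo

mp3_mb = MP3_KBPS * 1000 / 8 * SONG_MINUTES * 60 / 1_000_000
pcm_mb = PCM_BYTES_PER_SEC * SONG_MINUTES * 60 / 1_000_000

print(f"{SONG_MINUTES}-minute song as {MP3_KBPS}kbps MP3: ~{mp3_mb:.1f} MB")
print(f"Same song as raw 16-bit/44.1kHz PCM:   ~{pcm_mb:.1f} MB")
# Roughly 5 MB vs 53 MB: an order of magnitude, which is exactly the gap
# that bigger players and cheaper disks make affordable.
```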
Now, where was I? I can’t believe I remembered all this crap. Oh yes, the new iPod, the MiniDisc killer. This needs a new post.
Every few years, I get into an argument (sorry, a discussion) with someone about why Apple’s platforms are inherently better designed for users and usability than competing platforms, whatever the domain. So far the iPod seems to be the exception to the rule, but not excessively so.
The problem is that in most cases you can’t really argue the point, particularly to a Windows or Linux fanatic, because their reasons for liking their preferred platform typically bear no resemblance to usability. Although all three of these computing platforms are moving closer together, the step from the Windows desktop to the Mac desktop is still at least as big as the step from the Linux desktop to Windows. Anyone upgrading from a Linux to a Windows desktop, and I choose the term upgrading intentionally, is more often than not amazed at the new-found usability and consistency, so their argument is that anything more would simply be nit-picking, or purely subtle or academic improvements. I’ll liken that to the person upgrading from a horse and buggy to a Model T Ford, not realising that a Mercedes-Benz S55 AMG would probably make their driving experience a lot more pleasurable. Please note that I’ve played fair by resisting the obvious stereotypical Ferrari comparison.
But ultimately, a 15 minute argument isn’t going to convince a Windows desktop nut, who is an expert in Outlook 2003’s weird-arsed assortment of UI controls and who has already decided to have an argument about desktop usability, that Apple designs are better. The best you can probably do is use that old chestnut of pointing out roughly how much they don’t understand about UI design, and then let them feel a little inadequate for a few hours. Because if they did understand it better, or knew how much they didn’t know, they certainly wouldn’t have started such a dumbarse argument in the first place.
I recently bit the bullet and moved the Windows task bar on my work machine to the left side of the screen, to match both my home Windows box and my Mac OS X dock setting. It reminded me of Bruce Tognazzini, who amongst other things spent 14 years at Apple and founded their Human Interface Group, which as far as I’m aware made Apple the only computing company at that time to have a group dedicated to defining and enforcing the rules of user interaction with a computer, or at least with desktop GUIs. My task bar change was instigated particularly because of Fitts’s Law, which I was reminded of recently while using some Windows application that forced me to do everything in little task steps through the main menu bar, causing my hand to go partially numb. Fitts’s Law, amongst other UI basics, is better described for UIs by Tog. In fact, reading through that page reminded me how much there is that you need to know before you can make intelligent UI decisions, and how much of the theoretical stuff you consciously forget over time. It frustrates me when I can see a broken UI but can no longer argue why it is broken.
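For the curious, the usual Shannon formulation of Fitts’s Law predicts pointing time as T = a + b * log2(D/W + 1), where D is the distance to the target and W is the target’s width along the direction of motion. Here is a tiny Python sketch of why a task bar flush against the screen edge wins; the a and b constants are made-up illustrative values, not measured ones.

```python
import math

def fitts_time(distance_px, width_px, a=0.1, b=0.15):
    """Predicted pointing time in seconds using the Shannon formulation of
    Fitts's Law: T = a + b * log2(D/W + 1). The constants a and b are
    plausible but invented; only the relative comparison matters here."""
    return a + b * math.log2(distance_px / width_px + 1)

# A 24px-tall control floating mid-screen vs a task bar flush against the
# screen edge: the edge stops the pointer dead, so the effective target
# depth along the direction of travel becomes enormous.
print(f"mid-screen 24px target:            {fitts_time(800, 24):.2f} s")
print(f"screen-edge target (~2000px deep): {fitts_time(800, 2000):.2f} s")
```

The same reasoning explains why screen corners are the easiest targets of all, and why menus buried in the middle of a window are so tiring to use.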
Anyway, I wasn’t planning to go into a long rant about interaction design or how good Apple are, because, yes you guessed it, like that’s going to convince you, right? The point of this post, before it went astray, was to highlight Apple’s possible new MiniDisc killer. In fact, because I’ve wasted so much space, I’m going to move it into a separate post.
On Online Journalism Review today, the article To Their Surprise, Bloggers Are Force for Change in Big Media talks about bloggers arguably forcing change in the way big media view the blogging independents, and how big media are beginning to take them far more seriously as writers and/or researchers. I don’t think Mark is able to absolutely prove the article’s title however, for reasons I’ll allude to in a second, but it certainly shows the direction things are headed.
It is interesting to think that individuals are starting to affect the outcome of corporate journalism. As shown in the article, if a big media story has fictional elements masquerading as journalism, the public can have an effect on the resulting copy of the article, if not editorial content. Bloggers in particular tend to be more informed members of the public, which partially explains their tendency to spot conflicts more easily than your regular non-blogger.
So let us assume that we’re talking about all public interaction with big media, not just bloggers, and about the public arguably having the ability to annotate, embellish or, dare I say it, even control publication. If you extend the interaction further, you start to see where control and personalisation could be used to tailor big media stories with your own lens or filter.
For example, here’s a news story about how a local school board couldn’t agree on whether Intelligent Design (the new 2nd-millennium-friendly, non-religious-compliant buzzword for “creationism”) should be taught side by side with evolution. Putting on my Richard BF filter, I’m interested in investigating where the idea came from that the school board should even be considering it, and what laws and constitutional rights are being used to justify or fight the decision. The Mr Local Smith filter may be more interested in the school board itself, who the board members are, and which community groups are actively participating in the issue at a local level. The Senator Joe Bloe filter might be more interested in whether other schools in the area are considering similar changes, who they are, what the constitutional and federal education laws are that may be affected by it, and whether there are any precedents or related cases in this or other states.
To provide all this information in a single story would be pointless; most people, except for hardcore researchers, would turn off after a few paragraphs. If Mrs Soccer Mum was worried about what her kids might be taught in school, she’d never see the story, or never get just the three or four paragraphs that she’s interested in, presented in the same way she expects all her news stories to be presented. The Mrs Soccer Mum filter could be a simple local rag pocket magazine picked up at the local supermarket.
Does this sound a little like controlling the publishing pipeline?
Imagine if, when I viewed a news story, it popped up a side list of related stories, categorised by depth of reporting/research as well as by subject, and a visual virtual web of research about the story. I could choose built-in selective filters to overlay on the story and the additional research.
Taking this further, if we had a fairly low level of English comprehension and/or intelligence, what we’d probably locally call the A Current Affair audience, then the story’s ACA filter would be very clear in its intent.
Today the Localville school board couldn’t decide if they should teach biblical history and Darwin’s evolution, the scientific theory that we descended from apes, in the same classroom. Here’s why this is important: [..]
Or if we were science academics, then our academic filter could produce this.
The proposition that Intelligent Design Network’s Objective Origins Teaching Policy — the teaching of the newly renamed for liberal consumption creationism alongside evolution — should be considered by the Localville school board, was dealt an initial blow today, when permanent and transient members of the council found themselves in deadlock over adoption of what some councillors referred to as the atheist’s biological critique of illogical determinism. [..]
Sure it’s wordy, but if you know the lingo, you’re getting way more information than from the ACA filter.
Obviously the technology and big media’s capacity to provide this kind of news is a long way off, but through smaller steps by individuals towards the various control points currently in big media’s publishing pipeline, we can gradually move towards personalisation of the media that exactly matches our own interests, intelligence, intellect, political bent, and countless other aspects of our individual character.
News doesn’t need to be biased toward my opinion, but it should at least be an objective summation of the various angles/aspects of a story, tailored to my own personal interests and level of understanding. e.g. I know what evolution is, so I don’t need a paragraph explaining it, but other people might.
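As a rough sketch of what such a filter might look like in code, here is one way to model it; every field name and tag below is hypothetical, invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Fragment:
    """One paragraph of a story, annotated by the publisher."""
    text: str
    topics: set     # angles covered, e.g. {"school board", "state law"}
    level: int      # 1 = plain-language recap ... 3 = specialist detail
    explains: set   # background concepts this paragraph exists to explain

@dataclass
class Reader:
    interests: set  # angles this reader cares about
    level: int      # how dense a write-up they want
    knows: set      # concepts that need no explaining for this reader

def personalise(story, reader):
    """Keep fragments that match the reader's interests and depth, and drop
    background paragraphs explaining concepts the reader already knows."""
    kept = []
    for frag in story:
        if frag.topics and not (frag.topics & reader.interests):
            continue   # not an angle this reader cares about
        if frag.level > reader.level:
            continue   # too dense for the depth this reader asked for
        if frag.explains and frag.explains <= reader.knows:
            continue   # e.g. skip the "what is evolution?" recap
        kept.append(frag.text)
    return "\n\n".join(kept)
```

The hard part, of course, isn’t the twenty lines of filtering; it’s getting big media to annotate their stories this way in the first place.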
Now, where did I put my good news filter?
(Originally posted to Synop weblog)