Technologies & Coding

Future of the web

I recently had the opportunity to attend WordCamp Asia in Taipei. It was my first time at a WordPress community event. One of the key takeaways for me was that WordPress adoption is in decline. Not only that, behind closed doors, I got to hear that the number of new websites created on the web overall is slowing down as well (no data was shared unfortunately… but I believed the source)! This got me thinking… does WordPress no longer meet the needs of people? Or perhaps, websites are going away..?

Keynote presentation from Noel Tock: Future of WordPress

Search for answers to questions like “can dogs eat mango” or “do butterflies pee” (yes, I’ve searched for both recently), and you can easily see Google’s negative influence on the web. An indexer’s job is to make sense of content and surface it appropriately. However, search is the window to the web, shifting some power from authors to the indexers of content. The web is, at least partially, shaped in the image of Google.

Despite search being the dominant player on the web, authors still get to style and present content in ways that reflect their branding and style. Websites are a reflection of an author’s intent. They get to take the consumer on a journey. Search is part of how content is organised. However, all of this lies in the hands of the author.

I asked ChatGPT the same questions above to get answers. Answers were instant. No bullshit ads or content hidden amongst a sea of SEO text. My only regret was that I hadn’t done that as my first step. I’m still building the muscle memory to open ChatGPT instead of Safari.

There is something interesting about searches done through LLMs. LLMs drop all hints of what the original authors of this content intended to convey or how they wanted to present the information. Content is reduced all the way to what I personally wanted to know.

AI is becoming the translation layer between what authors created and what the consumer is looking for. This is causing a fundamental shift in how we think about the web. The responsibility for presentation shifts from the author to the consumer, reducing the burden on something like a website.

There are lots of downsides to this, of course. Content creators don’t get enough credit (hence, lawsuits) and we don’t consume content the way it was intended to be consumed. We miss out on the motivations and stories behind what fuelled great writing or artwork. Yet, the significant amount of trash on the web today will move more people into this mode of using something like ChatGPT to discover content.

This isn’t a new phenomenon either. Google Maps is a good example of providing a different window to content around the web. That, too, draws a lot of complaints from content owners. However, Google Maps is something Google designs to be a specific lens into a type of content. It isn’t as personal to me specifically.

Web browsers have been the window to the web. If you’ve had the opportunity to watch Halt and Catch Fire (seasons 2 and 3), you’ll remember Joe MacMillan’s epiphany: they don’t have to create their own network to get users. They just need to create a window to what is out there in other networks and servers, with HTTP requests. It is so powerful. A web browser’s job is to present content an author has prepared using HTML and CSS. JavaScript makes things more fun and interactive.

Entering the AI age, the concept of a “web browser” takes a different shape. My browser should be different to yours because I, as a consumer, get to control how content is presented to me. Browsing will be deeply personal. I have the power to decide what I want to see. My bot / browser / companion / OS / computer / whatever I use… will understand me and shape content to help me easily consume it. It will help me make sense of the world.

It has already started to happen: Arc Search

In fact, this will go much further in the future. Bots will talk to other bots to learn about things and bring final outcomes back to the consumer. We’ll build more and more purpose-driven applications/models that reduce vast amounts of information to simple, consumable chunks.

It will be an absolute mess before we get it right. But it will be the beginning of something new.

We may very well end up making a lot fewer websites in the future. This is not cause for alarm. Part of my current job includes looking at products like cPanel hosting. People aren’t reaching for those tools as much over time. However, more content is being hosted on the Internet than ever before, just not in traditional hosting. Social media collects an incredible amount of content from users. These changes to the web will make consuming more content possible, and CMSes and content creators will still be super important. They just might not exist in their current shape and form.

What will be important here is figuring out how we continue to have a web that isn’t a series of walled gardens like Facebook, TikTok, Reddit, X and so on. The Internet should be open. It should potentially be free to a large extent too. The benefits of it being that way are obvious: the free Internet has progressed our society at an incredible pace.

There’s a lot to figure out in this future. Its success will be determined by whether we can solve these problems.

  • How would we give credit and incentivise creators to continue to make great things? I imagine technologies like ActivityPub and Activity Streams will play a key role in the distribution of content and credit.
  • How would we make the economics work? Advertising powers much of the Internet today. Websites can mix ads with content to make revenue. Reddit, X and others have closed their APIs and websites to crawlers that look to build LLMs, to protect that revenue. Would we end up with walled gardens of content that take a fee to browse? That wouldn’t be a very open Internet anymore.
  • How do we evaluate such an unpredictable medium to be accurate enough or safe to use? Explainability will become important.

As someone who is obsessed with user experience, this is really exciting to think about. This fundamental shift will bring new ways to explore and consume information. There are so many innovation opportunities ahead of us.

Euclidean distance vs Cosine Similarity for text searches

Cosine similarity measures the angle between two vectors, while Euclidean distance measures the straight-line distance between two points. Cosine doesn’t care about magnitude, so it does well for data of different lengths (two vectors count as similar as long as they point in the same direction, no matter how far along that direction they go).
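The difference is easy to see with a tiny sketch in plain JavaScript (these function names are my own, not from any library):

```javascript
// Dot product of two equal-length vectors.
function dot(a, b) {
  return a.reduce(function(sum, x, i) { return sum + x * b[i]; }, 0);
}

// Cosine similarity: the cosine of the angle between the vectors.
// Magnitude cancels out, so only direction matters.
function cosineSimilarity(a, b) {
  return dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));
}

// Euclidean distance: the straight-line distance between the points.
function euclideanDistance(a, b) {
  return Math.sqrt(a.reduce(function(sum, x, i) {
    return sum + Math.pow(x - b[i], 2);
  }, 0));
}

// [1, 1] and [3, 3] point in exactly the same direction, so their
// cosine similarity is ~1, even though they sit ~2.83 apart.
console.log(cosineSimilarity([1, 1], [3, 3]));
console.log(euclideanDistance([1, 1], [3, 3]));
```

This is why cosine tends to be the default for text embeddings: a long document and a short query about the same topic point the same way, even though their vectors can differ a lot in magnitude.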

This year, everything should go “Touch-first”

Happy 2017!

I’ve been considering buying an iPad Pro. I’ve always felt that the iPad is an "in-between" device that can’t fit my life well. But there is proof that the desktop is getting... deprecated. Here’s a good article from The Verge that talks about it.

Is it time to transition over?

By the end of 2016, many major websites saw traffic shift from desktop to mobile. Mobile has become the dominant form of computing for the everyday consumer.

From a platform perspective, native apps started to transition over to web experiences. Hopefully, "Progressive Web Apps" will replace the generation of native content apps we have today.

What we didn’t seem to get around to in 2016 was really questioning our approach to mobile and touch. Our approach remains primitive. We pick up interactions like "text editing" and try to retrofit mobile onto them. We don't question the fundamentals to ensure the new experience fits well. I can’t help but feel that “keyboard covers” are a grave mistake.

Consider how someone uses a mobile device. They might be leaning back on a comfy couch, holding a tablet with both hands. Or they may be standing in a crowded train, an arm wrapped around a pole, trying their best to type with two thumbs. These postures and environments don't allow for a traditional editing experience.

We have to be more creative with the solutions we engineer. If we do transition well, these touch-first solutions should make us more efficient!

Also consider the change from a mouse to multi-touch. Direct manipulation of objects could be amazing. We see some of these interactions when we use apps for drawing, photo editing or maps. But more mundane tasks, like text editing, never seem to get much of a boost from touch. We seem to be staying on safe ground with rows of buttons to carry out functions. We spend significant time manipulating and navigating between elements, yet these tasks tend not to be touch-friendly; at least not enough to be more efficient.

It feels like we transferred the interaction to mobile, instead of converting the intent. We should be evaluating the purpose of every task and attempting to accomplish that in a "touch-first" way.

It seems that we have some way to go before we get really good at mobile and touch. I’m hoping for a 2017 filled with ideas and techniques that shift our thinking. I’m hoping we build products that are designed from a touch-first (or even touch-only) perspective.

Ember.JS: How to handle invalid URLs

There’s a lot of documentation for the new Ember Router. However, I found that no one was talking about how to handle the “*” route in Ember, i.e. the routes that don’t match anything.

I first tried to look at the ApplicationRoute but that didn’t seem to throw anything. Ember just sits there with a lovely blank page!

 

App.ApplicationRoute = Ember.Route.extend({
  events: {
    error: function(reason, transition) {
      console.log("never happened");
    }
  }
});

So here’s the easiest solution for handling bad URLs. My router looks like this now:

App.Router.map(function() {
  this.route('login');
  this.resource('companies', { path: 'companies' });
  this.resource('company', { path: 'companies/:company_nice_url' }, function() {
    this.route('members');
  });

  this.route('bad_url', { path: '/*badurl' }); // Catch everything else!
});

The last route, named “bad_url”, will catch all the other unrecognized URLs and direct the user to App.BadUrlRoute, where you can handle them. You will get the URL segment as params.badurl for you to inspect and offer friendly advice like “hey, did you mistype companies?”.

If you simply want to show a page that says “There’s no one here you fool!”, just create a handlebars template:

<script type="text/x-handlebars" data-template-name="bad-url">
  <h1>There’s no one here you fool!!!!</h1>
</script>

And you’re done! Any path that doesn’t match your routes will get routed through ‘bad-url’, where you can act on it accordingly.

 

EDIT ON 6TH AUGUST

Marcus in the comments pointed out to me that using path: '/*badurl' is better because it handles "/this/is/wrong/path" situations. My initial solution using a dynamic segment (:bad_url) did not catch those. Thanks to all the commenters!

 

EDIT ON 7TH AUGUST

I have realised that having "*badurl" instead of ":bad_url" has one caveat. I am implementing a funky search route that deals with all URLs past a base route, like this: /search/x/y/z/a/b/c. Having the asterisk catch all the bad URLs makes it impossible to have a search route which deals with "/search/*search_terms". So there you go... ups and downs to both methods. More on that awesome search route another day... :)

 

How is Facebook Chat Heads possible on Android?

This has been the question since the day I saw Facebook unveil the new Facebook Home suite of apps. A workmate of mine (_) might have found the answer on good-ol' StackOverflow!

http://stackoverflow.com/questions/13346111/draw-overlay-in-android-system-wide

Basically, you are able to spawn a service which draws directly onto the Android system's WindowManager by adding a new subview on top of everything. It requires some special Android permissions that the user can agree to during the install. There's a code example at the link.

Can't help but deeply appreciate the flexibility of Android.

Passing session keys in headers for Ember.JS Data REST calls

The first thing you have to do is override the RESTAdapter, if you aren't already. The ajax function is the key. It is given the jQuery hash that will be passed down, so all you have to do is populate the beforeSend key with a function like the one below. The 'xhr' passed in can be set with a request header value.

DS.MyAppRESTAdapter = DS.RESTAdapter.extend({
  ajax: function(url, type, hash) {
    hash.beforeSend = function(xhr) {
      if (MyApp.isLoggedIn()) {
        xhr.setRequestHeader('x-token', MyApp.session.token);
      }
    };

    // handle errors if ember-data doesn't already
    if (hash.error == undefined) {
      var self = this; // 'this' inside the callback won't be the adapter
      hash.error = function(xhr) {
        self.didError(null, type, null, xhr);
      };
    }

    // do some other work and call super()
    ...
    this._super(url, type, hash);
  },

  didError: function(store, type, record, xhr) {
    if (xhr.status == 401) { ... }
  }
});

That's it! Your app should now be firing off request headers like a pro! I've also left in my basic error-handling code to catch instances where ember-data simply does nothing. This will probably change in the future leading up to a stable 1.0.

Still learning this stuff, happy to hear about a better way to do it if you know :).

Trouble with Time Zones and Daylight Saving in iOS

As you all know, Suneth and I are working on 60Hz 2.0, and we had the weirdest date-related problem yesterday. We have a weekly calendar which tells you when your library shows air and which premieres (returning or new series) are airing on any particular day.

Yesterday, our calendar looked like this!!

We have 2 Sundays!!!

Huh??

After a lot of digging around, the problem turned out to be this: yesterday, the Sydney timezone rolled back an hour due to daylight saving.

We calculate each day by dropping the time component from [NSDate date]. We use a special category which gives us today ([NSDate today]) and work our way back to calculate the week and all the other relative days. For the first cell, which is correct, we have April 7th, midnight. When it comes to Monday, Sydney rolled an hour backwards: instead of getting April 8th midnight, we get April 7th, 11 PM. This accounts for the second 7th of April! After looking for bugs all over the code, we finally figured it out.

The fix was to simply take midday for the “today” calculation. Adding 12 hours to today fixes the calculation and stays away from all kinds of DST issues (I hope).
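The trap is easy to reproduce outside iOS too. Here’s a sketch in JavaScript using the actual date that bit us, 7 April 2013, when Sydney’s clocks rolled back (the sydneyDay helper is my own, not a library function):

```javascript
// DST ended in Sydney at 3am on 7 April 2013, making it a 25-hour day.
var DAY_MS = 24 * 60 * 60 * 1000;

// Returns the day-of-month in Sydney for a given instant.
function sydneyDay(date) {
  return new Intl.DateTimeFormat('en-AU', {
    timeZone: 'Australia/Sydney',
    day: 'numeric'
  }).format(date);
}

// Midnight, 7 April in Sydney (UTC+11 before the rollback) = 13:00 UTC, 6 April.
var midnight = new Date(Date.UTC(2013, 3, 6, 13, 0, 0));

// Midday, 7 April in Sydney (UTC+10 after the rollback) = 02:00 UTC, 7 April.
var midday = new Date(Date.UTC(2013, 3, 7, 2, 0, 0));

// Midnight plus 24 hours lands on 11 PM, 7 April — still the 7th!
console.log(sydneyDay(new Date(midnight.getTime() + DAY_MS))); // "7"

// Midday plus 24 hours lands on noon, 8 April — the right day.
console.log(sydneyDay(new Date(midday.getTime() + DAY_MS))); // "8"
```

Starting from midday, the 12-hour buffer on either side absorbs the one-hour shift, which is why the fix works.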

Rails Validating Field vs Association

One really interesting thing I came across recently is validating a field vs an association. I’ve always written something like this (assume this is a… Company model with an attribute called owner_id and a belongs_to association called owner):

validates_presence_of :owner_id

What’s better to write is:

validates_presence_of :owner 

This actually makes sure that your :owner_id maps to a real object in the database. So you can’t just set a random integer as owner_id and get away with it!

Sadly this has made testing a little bit harder. Tearing through FactoryGirl documentation again now!