Chromecast: First impressions

I got a Chromecast recently, and it’s worth writing about because I feel it represents an important step in how we enjoy media (television, movies, music) now and in the future.

So, on to my observations.

My general misunderstanding of what they did

Before getting one I never properly understood how they actually worked and what they did. I did understand that videos weren’t streamed to the Chromecast via your client device, and that instead the Chromecast directly fetched the video to stream.

What I didn’t understand was that the range of things you could stream on it was endless, limited only by the vast array of apps on the Chrome Web Store, Play Store (Android) or App Store (iOS) that choose to take advantage of it (and that anyone could develop such an app of their own).  I assumed that it was limited to a small set of high-profile apps: Netflix, YouTube, Play Movies & TV, Play Music, and probably a handful of other things Google had deemed worthy of building into the device.

In reality, the Chromecast device itself has no “apps” built into it, apart from the utility that sets up its wifi connection and the screensaver that shows wallpapers when it’s idle.  The “apps” all run on the client device – whether that is your Android device, your Chrome web browser on a computer, or an iOS device – and control the Chromecast from there, and anyone can write an app for any of those three platforms.

The device can basically stream a bunch of different video or audio formats or display pictures, which it downloads from whichever server the app specifies.  The app you are using tells it what address and protocol to fetch video, audio or pictures from, and how to show it on screen.  Once you start a video streaming, you can kill the app and it’ll keep streaming on the Chromecast, but because it has no inherent controls or intelligence itself, that’s all it will do – you can’t pause, restart, or change anything unless you are still using the app and the app is still connected to it.
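To make that split concrete, here is a toy Python sketch of the model just described.  It is not the real Google Cast protocol or SDK; the classes and method names are made up purely to illustrate why playback continues after the controlling app disappears, and why the controls disappear with it.

```python
# Toy model of the Chromecast sender/receiver split.
# NOT the real Google Cast protocol or SDK: all names here are made up.

class Receiver:
    """The Chromecast: all it knows is how to fetch and play a stream URL."""
    def __init__(self):
        self.now_playing = None

    def load(self, url, content_type):
        # The receiver fetches the media itself, directly from the server.
        self.now_playing = (url, content_type)
        print(f"Receiver streaming {url} ({content_type})")

    def pause(self):
        print("Receiver paused")


class SenderApp:
    """The app on your phone, tablet or browser: it holds the session and the controls."""
    def __init__(self, receiver):
        self.session = receiver          # connected to the Chromecast

    def cast(self, url, content_type):
        self.session.load(url, content_type)

    def pause(self):
        if self.session is None:
            raise RuntimeError("No session: reconnect to the Chromecast first")
        self.session.pause()

    def disconnect(self):
        self.session = None              # app killed or disassociated


tv = Receiver()
phone = SenderApp(tv)
phone.cast("http://example.com/episode.mp4", "video/mp4")
phone.disconnect()
print(tv.now_playing)   # the receiver carries on streaming by itself...
# ...but phone.pause() would now raise, because the controls went with the app.
```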

Issues I’ve found

Even though I wouldn’t say I’m elderly, looking down at my phone, then back up at the screen, repeatedly, strains the eyes a bit.  Once you get a video or some music started, you don’t really need to do this much, I guess.

Once you’ve been watching a show for a little while and you want to pause it because the baby is crying or something, it’s not a case of finding just one pause button on your remote: because the remote is your Android device, it has probably gone to sleep.  So you wake up its screen first.  The app you were controlling the Chromecast with may have a pause control on the lock screen, but if so the button will be small, so you have to focus on the screen a bit to see where it is and press it carefully to pause.  Occasionally the app may have closed, so you have to unlock your device and find the app again on your home screen or in your app drawer.  Sometimes the app may even have disassociated itself from the Chromecast, in which case you have completely lost control over it; the only way to stop it is to go back into the app, re-connect by choosing the Chromecast device from the menu, then hope it brings up what was playing – and if not, you may even have to search for what you were watching all over again.  This is obviously more an issue with the individual app than with the Chromecast model in general: if the app that was controlling the Chromecast always stayed running, and always had working, visible lock screen controls, most of these problems would be minimised.

I started a YouTube “mix” of 50 videos and for some unknown reason the Chromecast stopped playing after only 4 videos.  Waking up my phone showed that the YouTube app was no longer running, but opening it up again showed that it did still know it was connected to the Chromecast; it just wasn’t playing anything for some reason.  I don’t know what to blame this glitch on.

I watched a TV episode using the Stan app (I have a free subscription to Stan) and with 2 minutes remaining it mysteriously stopped playing; again I looked at my phone to find no sign of the Stan app running.  Going back in, I found that it had disassociated itself from the Chromecast and didn’t remember where I was in the episode: casting to the Chromecast and pressing play again sent me back to the start.  Again, this is more an issue with the individual app than with the Chromecast model in general.

9 June, 2015 at 5:42 pm

PHP: recursion causes segmentation fault

The fact that attempting recursion in PHP leaves you at the mercy of segmentation faults isn’t anything new, really (though it is new to me now).  Since 1999, bug reports to this effect have been knocked back, stating that this is the intended behaviour.

This is all due to a technical limitation inside PHP, which I guess is reasonable.  PHP pushes to the stack each time you enter a new function call, which is quick and handy, but limits your recursion depth.

What annoys me is that it’s one more reason that PHP is not very good for creating portable code.  There’s no warning or PHP error before you exhaust the recursion limit; PHP just crashes (bringing down an Apache process with it if you’re using mod_php).  The “safe” depth of recursion in PHP is hard to predict, and is going to vary between installations and builds.  I’ve seen posts online showing people doing experiments to try to figure out PHP’s recursion limit, but there are too many factors at work – how PHP has been built, whether it’s 32-bit or 64-bit, the type and quantity of variables you are passing as arguments or return values, and so on.  In some code of my own, I’ve seen php-cgi crashing after just 20 to 30 levels of recursion, where the function was passing some rather large arrays as return values.

You could argue that the same issue is going to happen in a C program, and that any decent programmer should be ready to code around this anyway.  However, in my mind PHP should be a level of abstraction higher than this sort of detail; a programmer in such a high-level scripting language should not need to worry about the underlying low-level implementation.  Ideally, PHP ought to raise a PHP error, allowing the problem to be reported gracefully just as if you’d tried to call a non-existent function, and the interpreter should end the script cleanly without crashing any processes.

The bottom line is that if you’re doing recursion in PHP, you should use your own method to limit the depth to a very conservative level, or perhaps rewrite the code to use a big loop instead of recursion.  If, like me, you’re writing a recursive descent parser and need your code to be portable, you may be better off rewriting it as a loop.
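To illustrate both suggestions, here is a sketch in Python (this post contains no code, so treat it as the general pattern rather than PHP specifics; both versions port directly to PHP): a recursive traversal with an explicit, conservative depth limit, and the same traversal rewritten as a loop over an explicit stack.

```python
MAX_DEPTH = 20   # deliberately conservative: well below the depth where crashes appeared

def walk_recursive(node, depth=0):
    """Recursive traversal with a hard depth guard, instead of trusting the runtime."""
    if depth > MAX_DEPTH:
        raise RuntimeError(f"Recursion deeper than {MAX_DEPTH}; refusing to continue")
    yield node["value"]
    for child in node.get("children", []):
        yield from walk_recursive(child, depth + 1)

def walk_iterative(root):
    """The same traversal as a loop over an explicit stack: no recursion at all."""
    stack = [root]
    while stack:
        node = stack.pop()
        yield node["value"]
        # Reversed, so children come off the stack in their original order.
        stack.extend(reversed(node.get("children", [])))

tree = {"value": 1, "children": [{"value": 2, "children": [{"value": 3}]},
                                 {"value": 4}]}
print(list(walk_recursive(tree)))   # [1, 2, 3, 4]
print(list(walk_iterative(tree)))   # [1, 2, 3, 4]
```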

16 May, 2010 at 3:47 pm

Stable vs stable: what ‘stable’ means in software

I’ve come to learn that when someone refers to software as ‘stable’, there is more than one quite different thing they might mean.

A stable software release

A stable software release is so named because it is unchanging.  Its behaviour, functionality, specification or API is considered ‘final’ for that version.  Apart from security patches and bug fixes, the software will not change for as long as that version is supported, typically anywhere from one year to many years.

Software that is intended for the public to use is usually “stable”.  It is released, and following the release no new features are added apart from the odd bug fix.  To get new functionality, users eventually need to upgrade to the next version.  Any problems with the software (unless they can easily be fixed with a bug fix update) are “known” problems, and the software vendor does not need to keep track of more than one variation of these problems for any given version.

Examples of releases that are the opposite of stable include development snapshots, beta releases, and rolling releases.  A characteristic of all three is that they are constantly changing; even their functionality and feature list can change from week to week, or day to day.  You cannot depend on them to behave the same way from one week to the next.

Some people like that with non-stable releases such as development snapshots, beta releases or rolling releases, they always get the latest features as soon as they are written into the software.  In many cases, these releases also fix deficiencies or bugs that would otherwise remain stagnant in the stable release.  However, the lack of stability in the feature list or functionality affects documentation, other software that interfaces with the software, plugins, and more: a change in the software can mean these become out of date or stop working.  When software needs to work well with a lot of other software, having a stable release reduces how often changes will break compatibility with the other software relying on it.

Another meaning of stable

Another meaning of stable exists in common use, where people take it to mean “working reliably” or “solid”.  That is, people refer to software that runs consistently without crashing as stable.  You can see why they may use the word in this way: in real life, when something can be described as stable, it won’t fall over.  If a chair is stable enough, you can sit in it and it won’t topple or collapse.

However, confusion arises when people use this form of the word stable to refer to software that isn’t stable in the earlier sense.  For example, it’s why you see comments like “I’ve been using the beta version since February and it is very stable” or “the newer version is more stable”.  The point of these comments is not that the software is final and unchanging, as in a stable software release, but that the software is stable in the way a chair might be stable.  It seems reliable, and the user hasn’t experienced any major problems.

This kind of stability won’t help developers extending the software with other software, or writing plugins or customisations for it, since the software running well at any given time does not make up for the fact that it is subject to frequent changes.

Commenters or reviewers who describe beta or rolling releases of software as stable might want to try describing them as “solid” or “reliable” instead, to avoid confusion with a stable release, which is an unchanging release.  Or perhaps the fact that the same term is understood in two different and sometimes conflicting ways indicates that the term was not an ideal one in the first place.  It does, however, seem firmly entrenched in the software development world, where the meaning of a stable release is well known.

2 April, 2010 at 11:00 am

How JPEG and MPEG picture compression algorithms work

While their names sound similar apart from the first letter, JPEG and MPEG are the names of two different organisations.  The Joint Photographic Experts Group (JPEG) created the JPEG image file format (as well as JPEG 2000, a different format), while the Moving Picture Experts Group has given its backing to a large number of audio, video, container and stream formats to support the transmission of digital audio and video, which are grouped into generations (MPEG 1, MPEG 2, MPEG 4).  For instance, the popular MP3 audio file format is part of MPEG 1 (MPEG 1 audio layer 3), DVDs and digital television usually use standards from MPEG 2, and the popular “MPEG 4” or “DivX” video format is standardised as part of MPEG 4 (MPEG 4 part 2).

All of these formats, including JPEG images and all types of MPEG video, work in a very similar way, which I’ll call block-based DCT image compression.  This involves chopping the picture up into blocks of 8 pixels by 8 pixels and compressing each block separately, using a mathematical transform called the Discrete Cosine Transform (DCT).

Here are some of the steps involved in JPEG and MPEG image compression:

Colour space conversion and resampling

If necessary, the image is first converted into the correct colour space and resolution.  This involves separating the colour image into 3 separate images or ‘channels’: a luma channel which carries the brightness information (like a black and white image), and two chroma channels which carry the colour information.  Furthermore, the luma channel is usually given double or quadruple the resolution of the chroma channels; this saves some storage space already, and works because humans are much better at seeing intricate changes in lightness than in colour, which is why we prefer reading light text on a dark background (and vice versa) to red text on green or similar.
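As a rough sketch of this step, here is how an RGB image might be converted to a luma channel plus two half-resolution chroma channels.  The coefficients are the common BT.601-style ones; real encoders differ in details such as offsets, ranges and the exact subsampling filter.

```python
import numpy as np

def rgb_to_ycbcr_subsampled(rgb):
    """rgb: H x W x 3 array with values 0..255.
    Returns a full-resolution luma channel and two half-resolution chroma channels."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b    # luma (brightness)
    cb = (b - y) * 0.564 + 128                # blue-difference chroma
    cr = (r - y) * 0.713 + 128                # red-difference chroma

    def halve(c):
        # 4:2:0-style subsampling: average each 2x2 block of the chroma channel.
        h, w = c.shape[0] // 2 * 2, c.shape[1] // 2 * 2
        return c[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    return y, halve(cb), halve(cr)

# Example on a random 16x16 image:
y, cb, cr = rgb_to_ycbcr_subsampled(np.random.rand(16, 16, 3) * 255)
print(y.shape, cb.shape, cr.shape)   # (16, 16) (8, 8) (8, 8)
```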

Separate into 8×8 blocks and perform Discrete Cosine Transform

Each channel of the image is then divided into blocks of 8 pixels by 8 pixels (64 pixels per block).  To each block, a mathematical transform called the Discrete Cosine Transform (DCT) is applied.

Before the discrete cosine transform, you have 64 values, each representing a pixel in the block.  After the transform, you have 64 values, each representing a particular frequency of detail in that block.  It is like taking a piece of audio and constructing a spectrum analysis, where you get a frequency vs amplitude curve, except the curve is more of a surface, because it’s in two dimensions.  The very first resulting value represents a frequency of ‘zero’, i.e. it is proportional to the average value of all of those original 64 pixels.  The rest are coefficients for progressively higher frequencies: values which you can feed back into the inverse DCT to get the original pixel values.
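Here is a minimal sketch of the 2D DCT on a single 8×8 block, written directly from the DCT-II formula rather than taken from any particular codec (real encoders use fast factorised versions of the same transform):

```python
import numpy as np

N = 8

def dct_2d(block):
    """Naive 2D DCT-II of one 8x8 block (pixel values usually centred on zero first,
    e.g. pixel - 128).  result[0, 0] is the DC term, proportional to the block average;
    the rest are coefficients for progressively higher horizontal/vertical frequencies."""
    result = np.zeros((N, N))
    for u in range(N):
        for v in range(N):
            cu = np.sqrt(1 / N) if u == 0 else np.sqrt(2 / N)
            cv = np.sqrt(1 / N) if v == 0 else np.sqrt(2 / N)
            total = 0.0
            for x in range(N):
                for y in range(N):
                    total += (block[x, y]
                              * np.cos((2 * x + 1) * u * np.pi / (2 * N))
                              * np.cos((2 * y + 1) * v * np.pi / (2 * N)))
            result[u, v] = cu * cv * total
    return result

# A perfectly flat block has all of its energy in the DC coefficient:
coeffs = dct_2d(np.full((N, N), 10.0))
print(round(coeffs[0, 0], 2))       # 80.0 (10 x 8 with this normalisation)
ac = coeffs.copy(); ac[0, 0] = 0
print(np.abs(ac).max() < 1e-9)      # True: no detail, so no AC energy
```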

Storing these coefficients is no more efficient than storing the original pixel values.  In fact, to avoid rounding error you need greater precision to store them than you did for the pixel values.  How, then, do we achieve any compression?  Well, for that, you need to throw away some data.

Quantisation

Quantisation is the process of reducing the precision of a number so that it is less accurate, but takes fewer digits (that is, bits) to write or store.  It is, quite simply, dividing a number by a certain value and throwing away the remainder.  When you multiply by that value again in an attempt to get the original number back, chances are that the result is close to, but slightly different from, the number you started with.  The bigger the value you divide by, the greater the strength of the quantisation and hence the greater the likely loss in precision once you multiply the number again.

So, the previous compression step left us with 64 coefficients arranged in an 8×8 matrix, representing the values of different horizontal and vertical frequencies within that 8×8 block.  Now, we apply quantisation to these coefficients, reducing their precision by dividing them by a specific number and throwing away the remainders.  When we save a JPEG image or MPEG movie we can usually set either a quality slider or a quantisation factor – these just vary the number we are dividing by and hence the amount of precision we want to retain.

JPEG and MPEG compression can also apply different amounts of quantisation to the different frequency coefficients within a single block, or different amounts per block in an image.  This is all done in an attempt to get the best-looking picture, as perceived by human eyes, for a given overall amount of quantisation.
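Continuing the sketch, quantisation is just an element-wise divide-and-round.  The quantisation matrix below is made up for illustration (it simply grows with frequency); it is not the table from the JPEG standard.

```python
import numpy as np

# Illustrative quantisation matrix: coarser steps for higher frequencies.
# (Real JPEG/MPEG tables are hand-tuned; this one is invented for demonstration.)
u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
quant_matrix = 16 + 4 * (u + v)   # 16 for the DC term, up to 72 at the highest frequency

def quantise(coeffs, strength=1.0):
    """Divide each coefficient by its step size and keep only the whole number."""
    return np.round(coeffs / (quant_matrix * strength)).astype(int)

def dequantise(qcoeffs, strength=1.0):
    """Multiply back; whatever was rounded away is lost for good."""
    return qcoeffs * (quant_matrix * strength)

# A made-up coefficient block: a big DC value and rapidly shrinking AC values.
coeffs = 200.0 / (1 + u + v) ** 2
q = quantise(coeffs)
print((q == 0).sum(), "of 64 coefficients quantised to zero")
print(np.abs(coeffs - dequantise(q)).max())   # the precision lost to quantisation
```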

Encoding the coefficients

Depending on the amount of quantisation applied, some or most of the smallest coefficients will have been reduced to zero: they were so small and insignificant to start with that quantisation left them with a value of less than 1, which was discarded.  It’s likely that a typical block will only have a few non-zero coefficients remaining after quantisation.

The coefficients, which were laid out in an 8×8 matrix, are now picked out in a specific order for encoding: usually a “zigzag pattern”, chosen so that the smallest coefficients will likely come last; any zero coefficients at the end are not encoded at all.  The remaining coefficients undergo some form of lossless coding, which minimises the bits they take up.  The type of coding chosen varies between sub-types of MPEG video, but the general idea is that each coefficient is represented using a variable number of bits, so that smaller ones take less space.
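A sketch of the zigzag read-out and the dropping of trailing zeros; the run-length and entropy coding that real codecs apply afterwards is omitted.

```python
import numpy as np

def zigzag_order(n=8):
    """(row, col) positions of an n x n block in zigzag order: walk the anti-diagonals,
    alternating direction, so the low-frequency corner comes first."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def encode_block(qcoeffs):
    """Read the quantised coefficients in zigzag order and strip the trailing zeros."""
    seq = [int(qcoeffs[r, c]) for r, c in zigzag_order(qcoeffs.shape[0])]
    while seq and seq[-1] == 0:
        seq.pop()
    return seq

# A typical post-quantisation block: a few non-zero values in the top-left corner.
block = np.zeros((8, 8), dtype=int)
block[0, 0], block[0, 1], block[1, 0], block[2, 0] = 12, -3, 5, 1
print(encode_block(block))   # [12, -3, 5, 1]: the other 60 values cost nothing
```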

Motion

The process so far is simplified, but basically true for still images, and for the still component (keyframes or I-frames) of MPEG video.  MPEG video encoding can also involve motion estimation, where a block can be encoded more compactly by storing only the difference between it and a nearby area of some other frame.  Encoding is still done per block, and the blocks still undergo DCT and quantisation, but the selection of a reference point in another frame can be somewhat complex.
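As an illustration, here is the simplest possible form of block matching: an exhaustive search that minimises the sum of absolute differences (SAD).  Real encoders use far smarter search patterns, sub-pixel refinement and multiple reference frames.

```python
import numpy as np

def find_motion_vector(prev_frame, curr_frame, top, left, size=8, search=8):
    """Find where the size x size block at (top, left) in curr_frame best matches a block
    in prev_frame, searching +/- search pixels.  Returns (dy, dx, sad) of the best match."""
    block = curr_frame[top:top + size, left:left + size].astype(int)
    best = (0, 0, float("inf"))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > prev_frame.shape[0] or x + size > prev_frame.shape[1]:
                continue
            candidate = prev_frame[y:y + size, x:x + size].astype(int)
            sad = np.abs(block - candidate).sum()   # sum of absolute differences
            if sad < best[2]:
                best = (dy, dx, sad)
    return best

# Synthetic example: the current frame is the previous frame shifted 3 pixels to the right,
# so the block's content came from 3 pixels further left in the previous frame.
prev = np.random.randint(0, 256, (64, 64))
curr = np.roll(prev, 3, axis=1)
print(find_motion_vector(prev, curr, 24, 24))   # expect (0, -3, 0)
```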

Decoding the image

Decoding is the reverse of the above process: the stream is decoded to find the coefficients for each block, and these coefficients are multiplied by the same number they were previously divided by, in order to reverse the quantisation.  The quantisation process did involve some loss of precision, which becomes apparent at this point.  Then, an inverse DCT is applied to get pixel values for the block.

Finally, if it is a colour image and the two chroma channels used a lower resolution than the luma channel, these are now up-sampled – by interpolation – to match the luma channel, and then the image can be converted into a suitable colour space for viewing, such as RGB (when displaying on a monitor).
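To mirror the encoder sketches above, here is a naive inverse DCT written directly from the formula; dequantisation is just the multiplication described above.

```python
import numpy as np

N = 8

def idct_2d(coeffs):
    """Naive 2D inverse DCT: turns an 8x8 block of frequency coefficients back into
    8x8 pixel values (the exact inverse of the orthonormal DCT-II used earlier)."""
    block = np.zeros((N, N))
    for x in range(N):
        for y in range(N):
            total = 0.0
            for u in range(N):
                for v in range(N):
                    cu = np.sqrt(1 / N) if u == 0 else np.sqrt(2 / N)
                    cv = np.sqrt(1 / N) if v == 0 else np.sqrt(2 / N)
                    total += (cu * cv * coeffs[u, v]
                              * np.cos((2 * x + 1) * u * np.pi / (2 * N))
                              * np.cos((2 * y + 1) * v * np.pi / (2 * N)))
            block[x, y] = total
    return block

# A block containing only a DC coefficient decodes back to a perfectly flat block:
coeffs = np.zeros((N, N))
coeffs[0, 0] = 80.0
print(np.round(idct_2d(coeffs), 2))   # an 8x8 block of 10.0s
```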

2 February, 2010 at 12:38 am

Improved tools for optimising PNG images – pngnq and pngquant “Improved”

I’ve taken the liberty of creating WIN32 builds of Kornel Lesinski’s improved pngnq and improved pngquant open source tools.

pngnq and pngquant reduce the file size of a PNG file by reducing the number of colours, using advanced algorithms which produce the most visually pleasing results given the limitations.

pngnq and pngquant are capable of producing 8-bit (or less) optimised images which still contain alpha transparency (fully varied transparency), something which Photoshop cannot do!

This package contains

  • Improved pngnq v0.9 (January 2009)
  • Improved pngquant v1.1.1 (January 2009)
  • Sample batch files
  • Full source code and any original open source licenses

Download pngnq and pngquant command-line tools for Win32 now

What they’re for

pngnq and pngquant perform lossy compression of PNG images, by reducing the number of colours appearing in the image.

There are many different approaches to doing this, and most graphics applications capable of saving to PNG or GIF images have some algorithm for reducing a full colour image down to an image with a limited number of colours.  Different algorithms vary in terms of visual quality and processing time.  pngnq and pngquant aim for maximum possible visual quality at the expense of a longer processing time (though they are, to some extent, adjustable), and generally do perform better at this than most graphics software.

Choosing a fixed number of colours to best present a full-colour image is not an easy task, and as such there are a few different approaches.

One approach is to use a fixed set of colours regardless of the image; this is the simplest but also the worst quality approach.

Other approaches look at what colours actually appear in the image, and try to cover the most common ones.  Of these, there is still a variety of approaches: median-cut based colour selection picks colours by repeatedly calculating median values of the colours in the full-colour image.  Other, more ‘perceptual’ approaches try to place more emphasis on areas of the image where small colour variations are more likely to be noticed by the human eye, and less emphasis on ‘busier’ areas of the image.  In the best case, such an algorithm can produce a reduced-colour image that is indistinguishable from the original to the human eye.
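As an illustration of the median-cut idea only (greatly simplified compared with what pngquant or pngnq actually do, and ignoring alpha and any perceptual weighting):

```python
import numpy as np

def median_cut(pixels, n_colours=16):
    """Very simplified median-cut palette selection.
    pixels: (N, 3) array of RGB values.  Returns an (n_colours, 3) palette."""
    boxes = [pixels]
    while len(boxes) < n_colours:
        # Split the box whose widest colour axis has the largest range.
        j = max(range(len(boxes)),
                key=lambda i: (boxes[i].max(axis=0) - boxes[i].min(axis=0)).max())
        box = boxes.pop(j)
        axis = (box.max(axis=0) - box.min(axis=0)).argmax()
        order = box[:, axis].argsort()
        half = len(box) // 2
        boxes += [box[order[:half]], box[order[half:]]]
    # Each box contributes its average colour to the palette.
    return np.array([box.mean(axis=0) for box in boxes])

# Example: reduce 10,000 random pixels to an 8-colour palette.
pixels = np.random.randint(0, 256, (10000, 3))
print(median_cut(pixels, n_colours=8).round().astype(int))
```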

pngquant is a general open source tool to do just this, and can accept pretty much any type of PNG image as its input, though a true-colour PNG, optionally with alpha transparency information, is best.  It is a command-line tool, and is cross-platform.

pngnq is an alternative to pngquant which uses the neuquant algorithm, a more complex algorithm which aims to produce better results.  It is also command-line and cross-platform.  It evolved from an earlier version of pngquant.

Kornel Lesinski’s improved pngnq and improved pngquant tools add some further minor improvements to pngnq’s and pngquant’s algorithms, giving more pleasing (to my eye) results for images with alpha transparency, especially on antialiased boundaries such as those found on icons.  They also contain various other fixes, as documented on their respective web pages, which improve results in some edge cases.

You can choose how many colours you want to end up with, with 256 as the maximum; unlike GIF, PNG’s efficiency does not really suffer if you choose a palette size that is not a power of two.

The resulting PNG images will be viewable in all modern browsers, with one important exception: in Internet Explorer 6, images with alpha transparency will not display their partially transparent areas, so they will look a lot like they have just 1-bit transparency.  Because these images degrade relatively gracefully, some see this as still better than the alternatives of not using alpha transparency at all, or using full-colour images with alpha transparency and having them completely broken on IE6.  You should test the results in IE6 and decide for yourself, on an image-by-image basis.

Download pngnq and pngquant command-line tools for Win32 now

26 June, 2009 at 5:25 pm

Figuring Twitter out

The success of Twitter always puzzled me – it was probably the first massively popular web phenomenon that I just could not get.  I don’t spend a lot of time on World of Warcraft, or Second Life, but at least I can easily understand their appeal, and why people use them.  For Twitter, I just could not understand it.

‘Why would anyone use a service so restrictive and limited in use, and like it?’, I thought.  For one, it’s full of self-promotion, utilised by many as one big PR tool.  I felt like I was being spammed every time I visited (until, that is, I learned to un-follow any of ‘those’ accounts), and at other times it just looked like a whole bunch of ‘in’ people sharing ‘in’ jokes and having their own little private conversations to which I was not invited, but which were nonetheless put out in the public, as if to say ‘look at me, I actually have friends’ or ‘geek is the new trendy’.

To help myself understand it – without necessarily aiming to like it, but just to get a better grasp of why others did – I told myself eight days ago that I would put something on my Twitter (a tweet?) at least once a day.

  • Installing TwitterFox has helped.  Now I don’t need to go through the hassle of loading a website just to do something which should be trivially easy to do, given you are writing just a few words.
  • The @ signs in messages annoy me.  Twitter should just hide the @, allowing you to link to another user without it looking like some sort of secret geek language.
  • The need to use URL shorteners annoys me.  Twitter should just hide the URL and show it as link text instead, allowing the link to show up as part of your sentence, you know, like in HTML.  This is probably the single most embarrassingly backward feature of Twitter, so lacking that an entire industry of ‘URL shortening services’ has thrived as a result of this limitation.  Imagine their lack of
  • Twitter is not open.  It’s controlled by one company (and doesn’t have a good record of staying up).  Can I host a Twitter site on my own server?  I guess this point is kind of moot; as much as I find it mildly irritating, most people, unlike me, don’t really care too much about ‘freedom’ in that sense.  But even a viable Twitter competitor would open up the market a bit.  I may check out identi.ca at some stage.
  • I have been inspired by the likes of Sockington, who I think uses the Twitter format really well.

All in all, I’ve found that it is actually easier to maintain a regular posting habit on Twitter than on my blog, which is refreshing.  A week ago I didn’t understand the point of Twitter at all, and while I still see Twitter as largely made up of chaff I can now understand that it is like any other self-published media, including blogging: 95% of it is uninteresting, but the 5% that’s good can make it an enjoyable experience.

You can check out my Twitter here and tell me if I am a twit or not.

17 June, 2009 at 7:24 pm

Distributed Version Control Systems are not Voodoo

Distributed version control is nothing to be scared of.  It may be a lot simpler than a lot of people or ‘introductory tutorials’ have led you to believe.

  • Distributed version control doesn’t force you into a new way of working.  If you prefer a centralised approach, you can still do that.  You could use it very much like you currently use svn if you want to, and yet you’ll still get the benefit of having a local history of revisions, allowing you to do a lot of things such as revert, diff, or even check in some code locally – without the overhead of network access.
  • Distributed version control does not force you to have a mess of different branches of your code on different developers’ machines, with no clear indication as to which one is most ‘up to date’.  You would still have a ‘main’ or ‘trunk’ branch on a server somewhere which represents the main focus of development.  Developers get the added option of having new branches on their own machine, but this doesn’t mean that they can’t send that code back to some shared trunk when they finish something.
  • The benefits to distributed version control do not end at the ‘coding while on a plane’ scenario.  While this is indeed a benefit of distributed version control (while coding without network access you still have access to the complete revision history and can do commits), this is not the sole reason behind DVCS.  You also get the benefit of reduced network overhead, making typical operations much faster, and you get a lot more flexibility in choosing a suitable workflow, and altering that workflow when it suits you.  If you are working on your own pet feature but don’t want to check in your code to the main trunk until it’s done, you can still easily use revision control on your own little copy, and when it does come time to check in and merge all your changes to the trunk all the little revisions you made when you were working on it separately are preserved.  This requires no setting up on the central server – any developer can create their own little parallel branch for a while, then merge it back up – it can be as simple as doing a ‘local commit’ instead of a ‘commit’ or ‘check-in’ (for me, that’s just one checkbox).  Merging may sound complicated, but it’s not – it simply works.  The system is smart enough to figure out which file was which and you can’t really break it – anything is reversible.
  • Not all DVCS are git, or particularly like git.  Git originated as a special-purpose version control system which Linus Torvalds developed to support his own personal workflow, managing the Linux kernel.  While it now has pretty much all the features of a full-featured version control system, it still has its origins as a tool built especially for one person’s own workflow.  Undoubtedly it is a good system, but if you try git and don’t like it, you should not assume that other distributed version control systems are the same as it.  Git also has problems on Windows (a native Msys version is still being developed as of this blog post).

My version control system of choice at the moment is Bazaar, chosen because I need both Windows and Linux capability and I like that it is easy to use.  There is not much separating it from Mercurial except for some small, almost philosophical conventions.  As one minor example, Bazaar versions directories themselves, so empty directories can be tracked.  I’d recommend either.  Bazaar has the backing of Canonical (of Ubuntu fame) and big projects such as MySQL; Mercurial has big projects such as Mozilla (of Firefox fame).  You can get near-instant help for either by going to their respective freenode IRC channels or by asking a question on Stack Overflow.

14 May, 2009 at 8:00 pm
