Elod P Csirmaz’s Blog: 2013

23 November 2013

Remote Debugging on Android Chrome: Missing Developer Tools and Issues on Debian 7

I've spent a couple of hours today trying to set up remote debugging on Chrome on my Samsung Galaxy Note 3. It is an extremely useful feature as it allows one to debug XHTML/CSS/JS issues affecting a mobile browser. However, setting it up turned out to be much more difficult than I expected because most of the documentation I found was either out of date or incomplete. I'm summarising what I've found here in case it helps someone.

The official documentation and how-to is at https://developers.google.com/chrome-developer-tools/docs/remote-debugging. However, the Developer tools option no longer exists in the Chrome browser on Android. I've found that you don't need to enable it at all, so it's enough to get the ADB extension for Chrome on the desktop machine and to enable USB debugging on the phone. (Please note that on Android 4.2 and newer versions, the Developer options menu is hidden. To make it appear, go to Settings > About, tap Build number seven times, then tap Back.)

With these in place, I could get remote debugging to work on a Windows machine. However, on Debian 7, the options in the ADB menu in Chrome appeared to do nothing at all. I have found http://stackoverflow.com/questions/19665688/chrome-adb-extension-not-working-on-debian-7, which explains how to set up access to the USB device. My user was already in the plugdev group, so the only thing I could do was to set up a persistent rule, but it didn't appear to make any difference.

But it turned out that the ADB extension was not working for others on Linux, either. The bug report at https://github.com/GoogleChrome/ADBPlugin/issues/20 suggested that I could start ADB using a command-line tool after installing the packages ia32-libs and libncurses5:i386, but I had to hunt around for the executable a bit. In the end, I found it in ~/.config/google-chrome/Default/Extensions/[...]/[VERSION]/linux. (I guess it might be somewhere else if you get ADB as part of the SDK.) The adb_command.sh file in this folder hinted that I needed to make adb executable. Then, running adb devices seemed to report the presence of my phone (offline), but I didn't know how to proceed.

Fortunately, the post at http://wesbos.com/remote-debugging-mobile-chrome-android/ explains that one needs to run adb forward tcp:9222 localabstract:chrome_devtools_remote and then one can navigate to chrome://inspect/ in Chrome on the desktop machine to start inspecting the page shown on the mobile. It worked! You can also run adb kill-server to stop ADB.
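
To recap, the full sequence that worked for me on Debian was roughly the following (a sketch only: the adb binary is the one found in the extension folder above, so run these from that folder, and the port and socket name are the ones from the post linked above):

chmod +x adb
./adb devices     # should list the phone
./adb forward tcp:9222 localabstract:chrome_devtools_remote
# now open chrome://inspect/ in Chrome on the desktop
./adb kill-server # when you're done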

1 November 2013

Snapshotting Filesystem for Linux - Released

Just a quick note that ESFS, a FUSE-based filesystem I have been working on that supports snapshots, now seems to be relatively bug-free and is available as a beta on GitHub. It is an overlay filesystem: it uses an underlying filesystem to carry out the file operations and to save the data necessary to maintain the snapshots themselves. Through the snapshots, it provides read-only access to previous states of the whole filesystem, which can be used to restore data following corruption caused by human error or a software bug. In this sense, it provides a kind of backup solution, although on the same storage as the main filesystem itself.

Please visit the ESFS page on my homepage, and the GitHub Wiki for more information, including usage and installation instructions.

16 August 2013

My Brain is Too Simple to Be Writing This

One often hears the argument for dualism (which maintains that humans are not mere physical beings, but also have a divine spark, a soul, a non-material mind, a psyché that enables them to think, talk, and be intelligent): that the mostly homogeneous grey mass that is the brain is too simple, too undifferentiated, too mundane to give rise to the amazing range of human creativity and activities that we see around ourselves.

And it indeed looks to be so. Sure, we know which region is responsible for visual processing, speech recognition, language production, positive feelings, and so on. Sure, some patterns of neural activity can reveal whether I'm thinking of tennis or the Labour Party, but where is an individual, logical thought? Which neuron is the one that "knows" that rain needs clouds? Which is the one that's to be or not to be, and which is the one that's that is the question?

I’m sure it’s there. We can’t see it yet, but we get to know more and more about the brain each day, and to maintain the position that the physical brain is incapable of experiencing or creating feelings, music, poetry, drama and parking tickets in the face of a deluge of evidence to the contrary is to turn a blind eye to the wonders of this amazing object.

Take the analogy between the brain and a computer. Look around your desktop (or your mobile home screen) as you're reading this. You see the myriad things a computer can do: zoom and rotate images, decode text from a network stream and render it on a pixellated screen, display 3D objects from multiple angles in real time, play music, find words in a dictionary even if you misspell them, redesign traffic in cities and call mom on her birthday. You see meaning; you even see intention. And now open the part of the computer that you know is responsible for all this behaviour: the CPU. With some magnification, you'll see something like this: a neat grid of regular, rectangular regions in different colours.

Now let's imagine that you are an extraterrestrial who has access to a working CPU, but has no idea how it works. You'd do what anyone would do in this case: leave it on, take it apart, and prod it with knitting needles until it bursts into flames. This line of enquiry (not unlike the one we used to amass what we know about the human brain) will tell you that if you poke the top orange part, the computer suddenly forgets what it was doing a moment ago, and if you jab a green-yellow square, it will mess up multiplication one time in four. What does this remind us of? Oh, yes, the regions in the brain.

Admittedly, they look less regular and less rectangular. But just like the extraterrestrial and the CPU, we know little more about these regions than that messing with them causes the patient to lose the ability to speak or to recognise shapes, or triggers amnesia.

So where is that YouTube video of a kitten in the CPU? We have good reason to think it’s in there, but a knitting needle is simply the wrong tool to extract it from the intricate network of transistors it’s made of. If that is imaginable, why would we think this blog post or Shakespeare is anywhere but in the brain? It’s there: not between, above or beyond the slimy pool of neurons, but in it.

8 August 2013

Changing the language in Gimp 2.8.6 when entries are missing from the language drop-down

I installed Gimp 2.8.6 on a Windows 7 machine where the language of the operating system was English. I made sure that the translations were also installed, but, strangely, when I started Gimp, only "System language" and English were available in the language drop-down in the preferences window. I couldn't find anything by searching for this particular problem, and all the other suggestions for changing the language of Gimp, like adding a LANG environment variable or deleting all locales except the desired one, failed.

After reading almost all the configuration files, I fortunately found [SYSTEM_DRIVE]:/Users/[USER_NAME]/.gimp-2.8/gimprc, which already had a line in it setting the language to English. (You may need to toggle the language setting in the preferences before it appears.) Changing it manually to the name of the desired locale (take a look at the [SYSTEM_DRIVE]:/Program Files/GIMP 2/share/locale/ folder for the available ones) solved the problem (although the language drop-down still doesn't work).
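
For reference, the line in question in gimprc looks something like the following (treat the exact values as approximate; "de_DE" here is just an example, so use one of the locale names you find in the share/locale/ folder mentioned above):

(language "en_US")

changed, for example, to

(language "de_DE")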

19 July 2013

Macros in Kate editor v 3.8.4 (KDE 4.8.4)

After a recent Debian upgrade, I needed to revisit the Kate macros I developed for Kate version 2.

The location of the scripts has changed; on Debian wheezy they are in /usr/share/kde4/apps/katepart/script/. Also, now it is possible to include multiple scripts in the same file, and you need a header section in your file as explained here (please also see below for an example). Basically, the header needs to signal that the file contains Kate scripts, and list all the function names that should be available as commands on the command line.

Some of the internal function calls have changed as well, but I was pleased to find that one could define helper functions and call them from the commands. As I use my commands as shortcuts for snippets I insert often while programming or debugging, I defined two helper functions: one to simply insert a string at the position of the cursor, and another that ensures that the line is indented before inserting the string.

I also found that I needed to nudge the cursor to make word completion work properly.

Please see a sample header, the two helper functions, and two command functions below:

/* kate-script
* author: NAME
* revision: 1
* kate-version: 3.4
* type: commands
* functions: hello, ihello
*/

/* Insert the given text at the current cursor position */
function insert_at_cursor(text){
    var Cur = view.cursorPosition();
    document.insertText(Cur, text);
    Cur = view.cursorPosition(); /* nudge the cursor to make word completion appear */
    Cur.column--;
    view.setCursorPosition(Cur);
    Cur.column++;
    view.setCursorPosition(Cur);
}

/* As insert_at_cursor, but prefix a tab if the cursor is at the start of the line */
function tab_insert(text){
    var Cur = view.cursorPosition();
    if(Cur.column == 0){ text = "\t" + text; }
    document.insertText(Cur, text);
    Cur = view.cursorPosition(); /* nudge the cursor to make word completion appear */
    Cur.column--;
    view.setCursorPosition(Cur);
    Cur.column++;
    view.setCursorPosition(Cur);
}

/* The commands exposed on the Kate command line (listed in the header above) */
function hello(){ insert_at_cursor('Hello world!'); }
function ihello(){ tab_insert('Hello world!'); }

Put this in a .js file in /usr/share/kde4/apps/katepart/script/, and the commands 'hello' and 'ihello' should be available after reloading the scripts in Kate or restarting it. Press F7 to get to the command line, type one of the commands, and press ENTER. I usually use command names that are one, two, or three characters long, and they speed things up a lot (and they can also help with RSI, especially if the inserted string contains many characters that need Shift to type).

For macros in Kate 2.5.5, please see this previous post.

11 July 2013

SQLBrite 1.0 Released

Just a quick note that SQLBrite, a small but hopefully useful PHP class that defines convenience methods for SQLite3, is now available on GitHub. Please find the first release at https://github.com/NewsNow/sqlbrite/releases/tag/v1.0 and some more information on the NewsNow Technology Blog. Happy coding!

30 January 2013

An Excursion into Quantum Mechanics


I've always wanted to see a tiny fraction of how quantum physics works, and I found a wonderful lecture series on YouTube by Leonard Susskind, one of the fathers of string theory (link). You might have heard about the double-slit experiment, where two light sources create a complicated pattern on a screen which is immediately destroyed if we "look at" how the light got there; about Schrödinger's cat, which is both dead and alive; and about how quantum mechanics gives consciousness a new place in the world. The idea that there are alternative realities, widely used in SF and in more mainstream fiction, comes directly from Hugh Everett's 1957 many-worlds interpretation of quantum mechanics.

Along the way, I discovered amazing things, but I also learned to see the strange quantum-mechanical effects in a different light, in which they make much more sense. I also encountered a very simple experiment that I think shows the strange ways of modern physics, but which everybody can try at home. No lasers, laboratories, or expensive equipment needed.

Only three (linear) polarisers. These are half-grey plastic foils, which used to sit in front of and behind all LCD screens in calculators, monochrome handheld games, and aeon-old digital watches. They have a direction, and they only let through light "polarised" in that particular direction. That's why they are not completely transparent: they only let through a portion of the light hitting them.

If you put two polarisers on top of each other, and start to rotate one, at one point they won't let any light through - the area where they overlap becomes black. This is when their directions are orthogonal: light that goes through one is exactly the type of light blocked by the other.

So far, so good; nothing too interesting has happened. But now put a third polariser between the two without moving the first two. By rotating the middle one, you should be able to make the blackness disappear and let some light through! How is that possible? More polarisers should block even more light, shouldn't they?

What actually happens is that light polarised at a given angle has a given chance of getting through a polariser: the closer the match between the angles, the higher the chance. At 0 degrees, when the photon is polarised exactly in the direction of the polariser, the chance is 100%; at 90 degrees, the chance is 0. (By the way, the photon is indivisible: either the whole photon goes through, or none of it.) However, after it emerges from the polariser, it is certain to be polarised along the direction of the polariser. That means that the polariser in the middle, which is at, say, 45 degrees compared to the first polariser, will let some photons through, but then they will be polarised at 45 degrees, not zero, and so have some chance of going through the third polariser as well.
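
To put some numbers on this (the cos² rule below is the standard textbook description, Malus's law applied to single photons, rather than something spelled out above): a photon polarised at an angle \(\theta\) to a polariser gets through with probability \(\cos^2\theta\). With two crossed polarisers, the chance of passing the second one is \(\cos^2 90^\circ = 0\), while inserting a middle polariser at 45 degrees gives
\[ \cos^2 45^\circ \cdot \cos^2 45^\circ = \tfrac12 \cdot \tfrac12 = \tfrac14, \]
so a quarter of the photons that made it through the first polariser now reach the far side, instead of none.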

Actually, if you think about it, this makes a lot of sense. If a polariser only let through light that is already polarised exactly in its direction, it would be completely black, as there are so many directions light can be randomly polarised in. I think this is a wonderful example of how our intuition (which evolved based on large masses moving slowly) betrays us, and how these strange phenomena can actually be more logical than what we originally expected to happen.

But polarisation is also very useful in other experiments, in one of which a pair of photons is created in a way that we know they are perpendicularly polarised. Now if we put a polariser in the way of one of the photons and it goes through (so we then know its polarisation is aligned with the direction of the polariser), then every time, without fail, the other photon also goes through a perpendicularly aligned polariser. Either both go through, or neither. It is as if the first photon could send a message to the other that it should also align its polarisation - only that this message would have to travel faster than light. Einstein called this "spooky action at a distance," and devised ingenious ways of trying to explain it away (he couldn't). Others called it entanglement.

What's even more astonishing is that it has been verified experimentally, almost completely ruling out any other explanation, that the photons do not decide in advance whether they should both go through the polarisers or not. This is done using Bell's inequality.

So when the pair of photons is generated, we don't really know how they are polarised - they are in a superposition of possible polarisations. When the first photon goes through a polariser (it is measured, in a way: it either got through or it didn't), those possibilities collapse, and we either know that the first photon got through and so will the second one, or know that the first photon didn't get through, and neither will the other.

This is what is called wavefunction collapse, and is, for example, at the heart of destroying the pattern created in a two-slit experiment when the photon is measured at the slits.

But what happens during this collapse? And when does it happen? We only see the photon either going through or not going through the polariser, and so when we measure, there must be a collapse. But what is a measurement? A photon hitting a polariser, in reality, only interacts with its atoms (or, rather, electrons), which get excited, enter various states, emit photons, and so on. A camera, detecting a photon, is also made of atoms. And particles can quite happily hit other particles without causing a wavefunction collapse.

Some, as detailed in the articles linked above, suggested that it is us, or, in less obviously egocentric language, consciousness that causes a wavefunction collapse. I think it is quite clearly nonsense. We don't need anything but atoms to explain the mind and consciousness - and so by Occam's razor, consciousness also consists of atoms. There's a difference in quantity, not quality.

But I was intrigued by how the illusion of the collapse comes about, and the first step was to learn more about entanglement, the process that combines multiple particles (and, ultimately, us) in a wavefunction.

*

Without really knowing how to go about it, I took pen and paper and tried to figure out how the two photons in the pair I mentioned above are supposed to be described. I failed and had to try again and again, but in the end I managed to put together something that looked vaguely promising. Along the way I was also guided by Dr Lvovsky's lecture notes. What follows might very well be completely mistaken, but I thought I'd include my notes here as a souvenir from my journey - much like a photo from a vacation posted on Facebook. And the results appear to agree with Bell's inequality to boot. Nevertheless, if you know where I've gone amiss and how this should really be done, please feel free to leave a comment and let me know.

And with that, onto the equations. (Unfortunately, I cannot introduce the concepts used here like Hilbert space and bra-ket notation. But if you're new to these, please watch the lecture series by Prof Susskind, linked above.)

I imagine the following experiment: suppose there is a source that emits two entangled photons along the z axis, A and B, both linearly polarised, but in orthogonal directions. Photon A encounters a linear polariser aligned with the x and y axes, and a detector behind it. Photon B encounters a linear polariser rotated by \(\phi\), and another detector behind it. There is some apparatus, C, connected to both detectors. I think that after the experiment, the apparatus should be in a superposition of having detected one photon, the other, both, or neither:
\[ |C\rangle = x\left|C_{A\&B}\right> + y\left|C_A\right>+ z\left|C_B\right>+ w\left|C_0\right>. \]
I'm trying to find the coefficients (amplitudes) x, y, z and w. The order in which the photons interact with the polarisers and detectors should not matter.

In isolation, the polarisation of photon A can be described in the 2-dimensional Hilbert space \(H_A\). Let the observable associated with the x,y-aligned polariser be P, with the following eigenvectors:
\[ P = \begin{pmatrix}1&0\\ 0&-1\end{pmatrix}
~~~|+p\rangle = \begin{pmatrix}1\\0\end{pmatrix}
~~~|-p\rangle = \begin{pmatrix}0\\1\end{pmatrix}. \]
The polarisation of photon B can be described in a similar space \(H_B\). Let the observable associated with the \(\phi\)-rotated polariser be \(P_\phi\), with the following eigenvectors (using the equivalents of \(|+p\rangle\) and \(|-p\rangle\) as the basis):
\[ P_\phi = \begin{pmatrix}\cos2\phi& \sin2\phi\\ \sin2\phi& -\cos2\phi\end{pmatrix}
~~~|+\phi\rangle = \begin{pmatrix}\cos\phi\\ \sin\phi\end{pmatrix}
~~~|-\phi\rangle = \begin{pmatrix}-\sin\phi\\ \cos\phi\end{pmatrix}. \]

The (linear) polarisation of the entangled photons, if A is polarised at an angle \(a\), can be described in \(H_A\otimes H_B\) as
\[ |\Psi\rangle = \cos a |+p\rangle\otimes|-p\rangle + \sin a |-p\rangle\otimes|+p\rangle, \]
and, using the \(|\pm p\rangle \otimes|\pm p\rangle\) basis, this can be expressed as
\[ |\Psi\rangle = \begin{pmatrix}0\\ \cos a\\ \sin a\\ 0\end{pmatrix}. \]
Now in order to find the outcome of the experiment, I considered \(P\otimes P_\phi\) as an observable in \(H_A\otimes H_B\) (the tensor product of Hermitian operators is Hermitian). Its eigenvectors are the tensor products of the eigenvectors of P and \(P_\phi\):
\[ \begin{aligned}
|+p\rangle\otimes|+\phi\rangle =& \begin{pmatrix}\cos\phi\\ \sin\phi\\ 0\\ 0\end{pmatrix} \\
|+p\rangle\otimes|-\phi\rangle =& \begin{pmatrix}-\sin\phi\\ \cos\phi\\ 0\\ 0\end{pmatrix} \\
|-p\rangle\otimes|+\phi\rangle =& \begin{pmatrix}0\\ 0\\ \cos\phi\\ \sin\phi\end{pmatrix} \\
|-p\rangle\otimes|-\phi\rangle =& \begin{pmatrix}0\\ 0\\ -\sin\phi\\ \cos\phi\end{pmatrix}
\end{aligned} \]
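
Incidentally, these are eigenvectors because the tensor product of the operators acts factor by factor (a quick check, with the two signs chosen independently):
\[ (P\otimes P_\phi)\bigl(|{\pm}p\rangle\otimes|{\pm}\phi\rangle\bigr) = \bigl(P|{\pm}p\rangle\bigr)\otimes\bigl(P_\phi|{\pm}\phi\rangle\bigr) = (\pm1)(\pm1)\,|{\pm}p\rangle\otimes|{\pm}\phi\rangle. \]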

And from these, the amplitudes appear to be:
\[ \begin{aligned}
\langle+p+\phi|\Psi\rangle = \sin\phi\cos a &= x \\
\langle+p-\phi|\Psi\rangle = \cos\phi\cos a &= y \\
\langle-p+\phi|\Psi\rangle = \cos\phi\sin a &= z \\
\langle-p-\phi|\Psi\rangle = -\sin\phi\sin a &= w,
\end{aligned} \]
where the identification follows the order of the terms in the superposition for \(|C\rangle\) above; as a check, at \(\phi=\pi/2\) and \(a=0\) we expect both photons to go through the polarisers, and indeed \(x=1\) there while the other amplitudes vanish.
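
As a further quick sanity check of my own (not from the lecture notes), the four amplitudes are properly normalised:
\[ x^2+y^2+z^2+w^2 = \cos^2 a\,(\sin^2\phi+\cos^2\phi) + \sin^2 a\,(\cos^2\phi+\sin^2\phi) = 1. \]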

This result also seems to be in accordance with the reasoning around Bell's theorem, which suggests that the average correlation between the two detectors, defined as
\[ \text{Cor} = \frac{
\left(\begin{matrix}\text{number of experiments}\\ \text{showing correlation}\end{matrix}\right)
- \left(\begin{matrix}\text{number of experiments}\\ \text{with no correlation}\end{matrix}\right) }
{\text{number of experiments}} \]
is not linear but sinusoidal. Based on the above amplitudes, this is
\[ \begin{aligned} \text{Cor} = \sin^2\phi\cos^2a + \sin^2\phi\sin^2a - \cos^2\phi\cos^2a - \cos^2\phi\sin^2a &\\
= \sin^2\phi - \cos^2\phi = -\cos(2\phi)&, \end{aligned}\]
which seems to be OK as we expect complete correlation at \(\phi=\pi/2\) (when the polarisers are aligned orthogonally) and again at \(\phi=3\pi/2\).

In our original experiment, the two polarisers were arranged perpendicularly, that is, \(\phi=\pi/2\). In this case,
\[ \begin{aligned}
\langle+p+\phi|\Psi\rangle &= x = \cos a \\
\langle+p-\phi|\Psi\rangle &= y = 0 \\
\langle-p+\phi|\Psi\rangle &= z = 0 \\
\langle-p-\phi|\Psi\rangle &= w = \sin a.
\end{aligned} \]
That is, it is not possible for only one photon to go through the polariser in its way: either both go through or neither does, and the initial direction of polarisation, \(a\), determines which of the two outcomes is the more likely.
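
Explicitly, squaring the amplitudes above gives the probabilities of the two possible outcomes:
\[ |x|^2 = \cos^2 a \;\;\text{(both photons detected)}, \qquad |w|^2 = \sin^2 a \;\;\text{(neither detected)}, \]
with the mixed outcomes having zero probability.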

*

What have I learned from this? Well, first and foremost, I got first-hand experience of what has been described in many places: that quantum entanglement, the "spooky action at a distance," is nothing else but statistical correlation mistaken for causation. That is, we ourselves become entangled with the first photon when we measure it, and then will necessarily see the second photon behave accordingly, without there being any "action". This leads, admittedly, to Everett's many-worlds interpretation, but what happens to entanglements on a macroscopic scale is a completely different question. Chapter 2 of Dr Lvovsky's lecture notes contains some indications as to how this could be described. Usually, it seems, data loss is inevitable, which leads to the (observed) collapse of superpositions.

My second observation partly follows from this problem that arises with many particles, and partly from some interesting facts I encountered. For example, although a complete Hilbert space is used to describe states, only unit vectors represent physical states. And even of those, a global phase (basically, a single direction of the many that define a vector) is irrelevant. There might very well be completely consistent and good reasons for these limitations that I'll learn later. But at this point it seems that the mathematical model does not fit the physical reality perfectly. It does not seem to be beautiful, however successful quantum mechanics is at describing and predicting phenomena. (We shouldn't forget how successful classical physics was in its own day.)

And then there is our rapidly evolving view of the Universe. It seems to constantly surprise us. We now need dark matter, then dark energy to make ends (and the two sides of equations) meet.

Because of all this, I have acquired a feeling that this is not the whole story. Maybe it's string theory. Maybe Stephen Wolfram is right, who suggested that ultimately we won't be able to create equations that predict nature at an arbitrary point in the future, effectively shortcutting the calculations nature makes to arrive at her results. Maybe we can only simulate these calculations, but not outwit them.

And if so, I think it is premature to ask the big questions: are there alternative worlds? What will happen to the Universe? How come the Self, the I, is such a singular point in space-time, while nothing physical makes it stand out from the rest of the collections of particles?