Weekend Reading Round-Up

by Ian Hellström (15 September 2017)

Below I have collected an initial batch of recent research articles and posts on various topics, such as deep learning, graphs, music, and Scala, that may be of interest to readers of Databaseline.

Network Classification and Categorization

A three-page empirical study on arXiv describes a random-forest classifier that can predict which category (out of eight) a real or synthetic graph belongs to with an astonishing 94.2% accuracy. Synthetic graphs, such as Erdős–Rényi or Barabási–Albert graphs, which are often used to model real-world networks, are distinct enough to be identified as such.

Reversible Architectures for Arbitrarily Deep Residual Neural Networks

Reversible architectures for deep learning have previously been shown to be structurally (i.e. mathematically) the same as ordinary differential equations (ODEs). In recent research posted on arXiv, that work is extended: the authors compare three reversible architectures to different (symplectic) discretizations of non-dissipative Hamiltonian systems. By doing so, the researchers prove Lyapunov stability properties of different network architectures, and they show that, thanks to this mathematical structure, they can achieve results that are on par with or even superior to existing methods, but with a smaller memory footprint.
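To make the ODE analogy concrete, here is a simplified sketch in my own notation (not the paper's exact formulation): a residual block is a forward-Euler step of an ODE, and a leapfrog-style update of a state split into two halves can be run backwards exactly, which is what makes such architectures reversible.

```latex
% Residual block = forward-Euler step of an ODE:
y_{j+1} = y_j + h\,f(y_j, \theta_j)
\quad\longleftrightarrow\quad
\dot{y}(t) = f\bigl(y(t), \theta(t)\bigr).

% Leapfrog-style update of a partitioned (two-block) state:
\begin{aligned}
y_{j+1} &= y_j + h\,f_1(z_j, \theta_j),\\
z_{j+1} &= z_j - h\,f_2(y_{j+1}, \phi_j),
\end{aligned}
% which can be inverted exactly, step by step:
\begin{aligned}
z_j &= z_{j+1} + h\,f_2(y_{j+1}, \phi_j),\\
y_j &= y_{j+1} - h\,f_1(z_j, \theta_j).
\end{aligned}
```

Because earlier activations can be recomputed from later ones during backpropagation, intermediate states need not be stored, which is where the memory savings come from.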

Builder Pattern in Scala with Phantom Types

Composition and inheritance can be used to model structural and behavioural states respectively. Phantom types, together with Scala’s implicit evidence (i.e. context-bound references), can be used to model structural states too! This Medium post describes a simple example to ensure correctness at compile time rather than runtime (as in the case of pattern matching on union (sum) types). The author shows how phantom types can be used to create a (compile-time) safe implementation of the builder pattern.
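To illustrate the technique, here is a minimal, self-contained sketch (the pizza domain and all names are my own, not the post's code): a phantom type parameter records whether the mandatory field has been set, and the implicit `=:=` evidence makes `build()` compile only for builders in the right state.

```scala
object PhantomBuilder {
  // Phantom states: never instantiated, used only as type-level tags.
  sealed trait State
  sealed trait Empty    extends State
  sealed trait WithBase extends State

  final case class Pizza(base: String, toppings: List[String])

  final class PizzaBuilder[S <: State] private (
      base: Option[String],
      toppings: List[String]) {

    // Setting the base moves the builder into the WithBase state.
    def withBase(b: String): PizzaBuilder[WithBase] =
      new PizzaBuilder[WithBase](Some(b), toppings)

    // Adding a topping leaves the phantom state unchanged.
    def addTopping(t: String): PizzaBuilder[S] =
      new PizzaBuilder[S](base, t :: toppings)

    // The implicit evidence S =:= WithBase means this method only
    // compiles once the base has been set; there is no runtime check.
    def build()(implicit ev: S =:= WithBase): Pizza =
      Pizza(base.get, toppings.reverse)
  }

  object PizzaBuilder {
    def apply(): PizzaBuilder[Empty] = new PizzaBuilder[Empty](None, Nil)
  }

  def main(args: Array[String]): Unit = {
    val pizza = PizzaBuilder().withBase("thin").addTopping("cheese").build()
    println(pizza)  // Pizza(thin,List(cheese))
    // PizzaBuilder().build() does not compile: no Empty =:= WithBase evidence.
  }
}
```

The incorrect call fails at compile time rather than at runtime, which is exactly the guarantee the post is after.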

Composing only by Thought: Novel Application of the P300 Brain-Computer Interface

In PLOS ONE, researchers describe a non-invasive, water-based brain-computer interface (BCI) that was placed between an EEG device and MuseScore, an open-source music notation tool. Their setup allows people to compose music via P300 brain waves, opening up the possibility for people with physical disabilities to compose music in the future too.

Happy Creativity: Listening to Happy Music Facilitates Divergent Thinking

Again in PLOS ONE, researchers show that upbeat (classical) music can improve people's divergent-thinking skills. Convergent thinking is apparently not affected by listening to happy music. Whether the study's results can easily be generalized remains to be seen, as most participants were Western women in their mid-twenties.

Deep Learning Techniques for Music Generation — A Survey

The title says it all, really: on arXiv, three authors provide an overview of various techniques to generate music. They describe the known algorithms in terms of their representations (e.g. audio, piano roll, lead sheet), the architecture of the neural network, and the strategy employed to attain the objective within the neural net (e.g. feed-forward). It is clear that, perhaps not surprisingly for audio generation, recurrent neural nets (RNNs) are the most popular choice. The researchers also discuss some tentative future research directions based on similarities with image generation and style transfer for images.

struc2vec: Learning Node Representations from Structural Identity

Presented at KDD, struc2vec is a novel approach that mimics the basic idea of word2vec, or word embeddings. In struc2vec, the context of a node (i.e. the nodes within its k-neighbourhood) is used to obtain a vector representation of the node's structural identity, which in turn is used to compute the structural similarity between nodes. The approach is demonstrated to be superior to current state-of-the-art methods on certain classification tasks. Adrian Colyer has a few nice graphics that illustrate the methodology further, and there is also a video, which is not behind a paywall.
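The core intuition can be sketched in a few lines. The toy below is my own simplification, not the authors' implementation: it compares two nodes by the sorted degree sequences of their k-hop rings, so two nodes are structurally close if their neighbourhoods look alike, regardless of where they sit in the graph (struc2vec refines this with dynamic time warping, a multilayer graph, and random walks).

```scala
object StructuralSimilarity {
  type Graph = Map[Int, Set[Int]]  // adjacency list

  // Nodes at exactly distance k from the start node (BFS ring).
  def ring(g: Graph, start: Int, k: Int): Set[Int] = {
    var frontier = Set(start)
    var visited  = Set(start)
    for (_ <- 1 to k) {
      frontier = frontier.flatMap(g.getOrElse(_, Set.empty[Int])) -- visited
      visited ++= frontier
    }
    frontier
  }

  // Ordered degree sequence of a set of nodes.
  def degreeSeq(g: Graph, nodes: Set[Int]): Seq[Int] =
    nodes.toSeq.map(n => g.getOrElse(n, Set.empty[Int]).size).sorted

  // Crude distance between two degree sequences of possibly different
  // lengths (the paper uses dynamic time warping here instead).
  def seqDistance(a: Seq[Int], b: Seq[Int]): Double =
    a.zipAll(b, 0, 0).map { case (x, y) => math.abs(x - y).toDouble }.sum

  // Structural distance up to k hops: sum of per-ring distances.
  def structuralDistance(g: Graph, u: Int, v: Int, k: Int): Double =
    (0 to k).map { i =>
      seqDistance(degreeSeq(g, ring(g, u, i)), degreeSeq(g, ring(g, v, i)))
    }.sum

  def main(args: Array[String]): Unit = {
    // Star graph: leaves 1 and 2 are structurally identical.
    val g: Graph = Map(0 -> Set(1, 2, 3), 1 -> Set(0), 2 -> Set(0), 3 -> Set(0))
    println(structuralDistance(g, 1, 2, 2))  // 0.0
    println(structuralDistance(g, 0, 1, 0) > 0)  // true: hub vs leaf
  }
}
```

Note that the leaves of the star graph come out as structurally identical even though they are not adjacent, which is precisely the point: structural identity is independent of proximity in the graph.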