A lifetime ago

It does seem like a lifetime since my last post was published. Life got busy, and the past five years have been even busier with a new addition to the family.

I have been doing minimal freelance jobs and mostly family-related work, so feeling “rusty” is inevitable. Technologies can change daily (in some cases), and the overwhelming feeling that the knowledge I acquired before is becoming obsolete is daunting and crippling.

As the title says… a lifetime ago… do we still remember many details of our “past lives”?

I finally opened this editor to start again, almost from scratch, with the hope of catching up with the basics and perhaps going a bit further, learning some new technologies and keeping up with AI and the best tools of the moment.

So, this is a short post to say “Hello, I’m still here”: motherhood took me on a huge detour, and I’m now back to “tech stuff”. I plan to post some discoveries, as basic as they might seem, because when I write and make notes, I learn again; perhaps someone else needs these notes and posts too.

Until next time!

Pure Data

I came across a presentation at the University of Oxford, part of the ‘Show Series’, called “Pure Data – a musical programming show and tell”. I went expecting more about ‘data’, ‘big data’ and ‘stuff’, with a few related notes (pun intended) about music…

But I was pleasantly surprised.

What is Pure Data?

“Pure Data (or Pd) is a real-time graphical programming environment for audio, video, and graphical processing. Pure Data is commonly used for live music performance, VeeJaying, sound effects, composition, audio analysis, interfacing with sensors, using cameras, controlling robots or even interacting with websites.” (from flossmanuals)

“Pure Data is a flexible and accessible programming language widely used by musicians because of its friendly “patching” interface.” (from the ‘Show Series’ description)

The talk demoed some entry-level projects and gave an overview of how the program is used to control and make sounds. The speaker introduced some of the objects used to create a sequence of sounds, such as oscillator, volume and digital-to-analogue objects. I really liked the references given, and the speaker pointed us to these:

A few observations for the future (and once you learn the basics you might remember these):

  • VERY IMPORTANT: one of the first things to do is to add a volume control and set it to 0.3 (30%); otherwise you can ruin your speakers, headphones and/or ears!
  • If there are two input connections, connect the right one first, then the left. The left inlet is usually the dominant one, and if you connect left then right, the right one might never be played/heard.
  • Best practice is to tidy the “patch”, allowing others to better understand your drawing/code.
  • Number entries can be changed to sliders; click/right-click and select Properties to edit.
  • sinesum allows complex sounds (the right combination of sine partials summed together), so it is good to know; see the sketch after this list.
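
On the sinesum note, here is a rough Python sketch (not Pd itself, and the partial amplitudes below are made up) of what the sinesum message does: it fills a wavetable with a sum of harmonically related sine partials. The last two lines apply the “set the volume to 0.3” advice to the result:

    import math

    def sinesum(table_size, partial_amplitudes):
        """Fill a wavetable with a sum of harmonically related sine partials,
        roughly what Pd's sinesum message does to an array."""
        table = []
        for i in range(table_size):
            phase = 2 * math.pi * i / table_size
            sample = sum(amp * math.sin((k + 1) * phase)
                         for k, amp in enumerate(partial_amplitudes))
            table.append(sample)
        return table

    # Made-up partials: a fundamental plus two quieter harmonics.
    wave = sinesum(512, [1.0, 0.5, 0.25])

    # The "volume at 0.3" advice: scale so the loudest sample sits at
    # 0.3 of full scale before anything plays it back.
    peak = max(abs(s) for s in wave)
    wave = [0.3 * s / peak for s in wave]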

Examples:

Ambient Pure Data Composition:

Algorithmic Composition In Pure Data:

This last one has different ‘skins’ and a tidy control panel. The two videos already exemplify some of the power of Pd and how complex a composition can become. I think if you understand music theory at some level you can benefit greatly, but if not, have fun just playing with the various controllers.

Another great and fun thing to do is to turn your music creations into apps; here are two references on how this can be done:

On the ‘make an app’ note, there are two versions of Pure Data to download: the “vanilla” and the “extended”. The vanilla version is apparently easier for exporting your patch to an Android app; however, the extended version comes with loads of useful libraries.

You can check the screencast of the Pure Data ‘Show Series’ and maybe learn a bit more than what I listed here.

Speaking at FOWA

I have been working at the Bodleian Libraries, in the BDLSS department, for a few years now. When I inquired about the Future of Web Design (FOWD) conference in January this year, I was pleasantly surprised to be invited to speak at the Future of Web Apps (FOWA) conference in October. The FOWA conference has a “Rising Stars” track for people who are new to speaking but have great skills and passion to share.

It is worth saying that the FOWA team, especially Sarah Robinson, had a great impact on my speaking adventure. They made the whole experience simple to follow, always showing support and being available for any questions. On arrival, they guided me to a “speaker room” in case I needed a quiet area to prepare anything before the presentation. I received a greeting card, and the simple gesture was truly appreciated.

It is quite cool to see my name on it… thank you @BathCityGirl and F.I. team for the lovely card! #fowa

There was always reassurance available in case “the nerves” kicked in. During the presentation one of the organisers also made herself a “point of eye contact” in case I needed a familiar face in the audience. I found it really helpful: if I forgot a detail or two, I could refocus and get back on track. There was another follow-up at the end of the talk, and I felt part of a team instead of an outsider trying to get in. I really enjoyed the experience, especially as it was my very first time speaking at a conference.

Since I was working on the Digital Manuscripts Toolkit (DMT) project, I thought it would be great to talk about the journey and the challenges of putting this toolkit together, considering the amazingly rich examples of manuscripts the Bodleian Libraries have.

I spoke about the International Image Interoperability Framework (IIIF), whose aim is to make it possible to use, develop and repurpose digital manuscripts in interesting and innovative ways. I highlighted the steps needed to work with IIIF, introducing the Image API with the demo at http://iiif.io/#try-it, and the Presentation API, explaining the primary resources (Manifest > Sequence > Canvas > Content) and their properties.
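
To make that hierarchy concrete, here is a minimal sketch in plain Python of a Presentation API 2.x manifest with one sequence, one canvas and one painted image. All the URLs, labels and dimensions below are placeholders, and a real manifest (for example one produced with Manifest Factory) carries far more metadata:

    import json

    # Placeholder identifiers. The image sits behind a IIIF Image API service,
    # where a URI of the form
    #   {service}/{region}/{size}/{rotation}/{quality}.{format}
    # (e.g. SERVICE + "/full/full/0/default.jpg") returns the actual pixels.
    SERVICE = "https://example.org/iiif/ms-example-f1r"
    CANVAS = "https://example.org/iiif/example/canvas/f1r"

    manifest = {
        "@context": "http://iiif.io/api/presentation/2/context.json",
        "@id": "https://example.org/iiif/example/manifest.json",
        "@type": "sc:Manifest",
        "label": "Example manuscript",
        "sequences": [{
            "@type": "sc:Sequence",
            "canvases": [{
                "@id": CANVAS,
                "@type": "sc:Canvas",
                "label": "f. 1r",
                "width": 3000,
                "height": 4000,
                "images": [{
                    # The content image is "painted" onto the canvas.
                    "@type": "oa:Annotation",
                    "motivation": "sc:painting",
                    "on": CANVAS,
                    "resource": {
                        "@id": SERVICE + "/full/full/0/default.jpg",
                        "@type": "dctypes:Image",
                        "format": "image/jpeg",
                        "width": 3000,
                        "height": 4000,
                        "service": {
                            "@context": "http://iiif.io/api/image/2/context.json",
                            "@id": SERVICE,
                            "profile": "http://iiif.io/api/image/2/level1.json",
                        },
                    },
                }],
            }],
        }],
    }

    print(json.dumps(manifest, indent=2))

The Manifest links a Sequence of Canvases, and each Canvas has image Content painted onto it via an annotation; a viewer such as Mirador only needs the manifest URL to display and navigate the whole object.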

I carried on talking about the DMT journey so far, as the toolkit itself is a large combination of tools, and the one I was working on was the online editor. I mentioned the possibility of having Docker images to create the image server needed for one of the steps to work with IIIF, and then the IIIF metadata compliance that we’re achieving using Manifest Factory (Python). I exemplified the discovery step with the Digital Mushaf project, which reunites Qur’anic manuscripts held by four institutions, and showed that we’re using a Mirador viewer instance to display and combine all these manuscripts into one sequence.

During the presentation I spoke about the research done into existing tools and libraries; to name a few of them: Grunt, Node.js, jQuery, Karma, underscore.js, pubsub.js, handlebars.js, URI.js, mousetrap.js, ZeroClipboard.js, d3.js, state-machine.js, tinymce.js, qTip2, sinon.js, Jasmine, Istanbul, Travis. I also demonstrated some viewers available to display and zoom images (OpenSeadragon, Mirador, UniversalViewer).

I was able to show the mockup of the online editor and a few tests I made while trying to build our first online Manifest editor.

Finishing up the presentation, I was able to talk about our funded projects, which represent a direct challenge and great examples of how scholars could use our online Manifest editor:

It was a great experience; considering the highly technical audience, the talk was well received, and it had plenty of examples and JavaScript libraries for them to discuss and possibly use. Some of the feedback received was along the lines of:

“you made some already amazing manuscripts even look more interesting”.

Slides https://slides.com/monica/digital-manuscripts-toolkit/

Code https://github.com/bodleian/dmt