Arcsecond.io 0.6: Team Size Doubling + iObserve on the Web!

This is a milestone for Arcsecond! I am happy to announce that Eric Depagne, currently an astronomer at SAAO/SALT, has agreed to join forces with arcsecond.io. His Python expertise will help a lot on the backend, but I don't expect him to remain within its boundaries!

Moreover, we are releasing iObserve on the web! It is not as feature-rich as the macOS app yet, but it already includes new features that will belong only to Arcsecond: Night Logs (in preparation) and Data!

Register for free at www.arcsecond.io and tell us what you think!

iObserve on the web

A fantastic meteor iOS app (based on SwiftAA)!

Collaborative work helps great developers push their own ideas forward. Alexander Vasenin contributed key improvements to SwiftAA, a project put in place by yours truly. SwiftAA, itself based on AA+ by PJ Naughter but with much easier and Swifty APIs, is the most comprehensive and accurate collection of astronomical algorithms in Swift (and Objective-C along the way).

Alexander just released a wonderful and very detailed iOS app about meteors, called MeteorActive. Find everything about these beautiful phenomena in a snap, thanks to this carefully crafted app. And it’s free!

Download MeteorActive!

A whole new dimension…

… of the answer to the question about « life, the universe and everything ».

It reminds me of the idea that if you ask a horse to draw a god, it will draw a horse.

Anyway, I am surprised I discovered this about « 42 » so late.

One small step for a developer…

… but quite a milestone in my master plan (see previous post). After about a year of (discrete periods of intense) work, I've decided that SwiftAA, the best collection of astronomical algorithms in Swift, has hit the 2.0-alpha stage.

SwiftAA is intended to be the underlying code framework of all scientific computations of the next version of iObserve. With it, I’ll be able to provide tons of details about many objects, and especially about Solar System objects, which are clearly missing in the current app.

It's an alpha stage, of course. That means a lot of details still need to be polished: the iOS version, more unit tests, a more consistent handling of numeric types, etc. But all of the C++ code is wrapped in Objective-C(++) code, and all that old-style code mimicking the original AA+ is now « Swifted ». That is, it has been elevated to a much higher level of expressive formulation.

Complexity remains, since the solar system isn't easy to simplify. Hence, when one aims to minimize the number of lines of code while extracting the most from it, things aren't easy to read at first.

But there is a Swift Playground for those interested in learning. I wish I had more time to make this Playground more « ready to use ». For now, you need to dive a bit into the code and the project to actually understand it. But the time will come; I'll prepare a better one.


In my website stats, I noticed that some people keep talking about iObserve, which is great. One post, however, mentioned the wish for a Linux version of it. Those interested in what happens here @ onekilopars.ec have probably understood that this is also part of the master plan. But current web-based technologies for making cross-platform apps are difficult to put in place. I've tried about 6-7 times. But I don't give up!


I've received about 30 new observatories to be included in the next version of iObserve. That's really great, as it is a sign of strong usage of the app (more than 15k downloads so far). They are all on my to-do list, but I must say it is sometimes hard to stay motivated to finish this new version, and I am late. But it will come!

 

arcsecond.io is now open-source

Arcsecond.io aims at integrating all sources of astronomical data and information into a unified scheme using modern web techniques.

This is big. Really. If you jump on board, this can be huge!

If you don't see how big it could be, imagine a world where every resource (the word is key) has a unique, simple, stateless URL. Yes. It means every object, every planet, every lightcurve, telegram, or FITS file has a simple, unique URL which returns well-formatted, fully standard JSON/XML output, consumable by modern web technologies (like AngularJS).

It's a kind of super mega SIMBAD or NED service, with modern tools and interfaces, and not limited to any type of data. It allows you to concentrate on the stuff that matters: the data, not its formatting.

Imagine furthermore that your own personal resources (night logs, observing runs, reduced data) are accessible the same way, through the usual individual and group permissions that you personally control.

Imagine even further that community-curated information is also accessible that way! This is what has been started with observing sites around the world (see below). Imagine now this list constantly updated, and enhanced by information about domes and telescopes, and furthermore… instruments and detectors. All accessible freely, always in the same way. A kind of scientific, data-based Wikipedia!

This would allow us to build a bazillion new services and (web) apps. This is the future of arcsecond.io.

arcsecond.io is intended to fully embrace RESTful principles (that is, the modern way on the web to decouple data from its consumption). I know, there is also the VO. But… oh well.
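To make the resource-per-URL idea concrete, here is a minimal Python sketch of how a client could build such URLs and consume the JSON they return. The API root, endpoint path, and JSON fields below are hypothetical illustrations, not the actual arcsecond.io API.

```python
import json
from urllib.parse import urljoin

# Hypothetical API root -- illustrative only, not the actual arcsecond.io API.
API_ROOT = "https://api.arcsecond.io/"

def resource_url(kind: str, identifier: str) -> str:
    """Build the unique, stateless URL of a resource (an object, a FITS file...)."""
    return urljoin(API_ROOT, f"{kind}/{identifier}/")

# A client (an AngularJS app, a script...) would GET that URL and receive
# a well-formatted JSON payload. A response might look like this sample:
payload = json.loads("""
{
    "name": "HD 5980",
    "coordinates": {"ra": 14.8625, "dec": -72.1669},
    "fits_files": ["https://api.arcsecond.io/fits/12345/"]
}
""")

print(resource_url("objects", "HD5980"))  # -> https://api.arcsecond.io/objects/HD5980/
print(payload["coordinates"]["ra"])
```

Every resource type (objects, observing sites, night logs…) would follow the same pattern, which is what makes the scheme composable.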

 

[Amazing picture by E.S.O./Y. Beletski]

The XXth century is finally over

You may have noticed in the news, on Twitter, and all over the media that the LIGO collaboration has detected gravitational waves. This is an undeniably amazing achievement, both technologically and scientifically. It echoes the fantastic detection of the Higgs boson a few years ago at CERN.

This is big news for science, in times of a worldwide decline of scientific literacy and reason-based worldviews. These achievements must be spread and explained in classrooms.

Even better, these two detections do not represent a revolution in physics at all, and that's for the better! They both close the XXth century, as they both confirm, to the ultimate level (25 years of effort for LIGO, and decades for the Higgs boson), that our two most successful theories of the world are… well, successful!

That's it! The XXth century is over. You're a young student in physics? It's a good time to start the XXIst century!

The detection of the Higgs boson confirms the overall validity of the so-called Standard Model of particle physics. The detection of gravitational waves confirms the validity of Einstein's General Relativity. One would be quite presumptuous to say where the next big discovery will occur. But my personal feeling is that it won't come from these two fields…

So personally, and as a general ethical position for a scientist, I wouldn't look into these overwhelmingly crowded fields, but elsewhere. Wherever your personal battle of intimate questions leads you. Enjoy the journey and tell the world!

 

The story of 6 years of development of iObserve

6 years. What a journey!

Short story: because I am taking the lead of software development in a newly created French VR startup, my energy will be focused on that for the coming years. Development at onekilopars.ec will necessarily move at a much slower pace. Nonetheless, I will release iObserve 1.5 and keep supporting everything, including arcsecond.io, as much as time allows. Moreover, I am happy to announce that iObserve is now entirely free. Read on to learn why, for the history of a very special app and the role it has played in the life of its developer, with all the details: decisions, downloads, graphics, sales numbers and more!

December 2009 – July 2010: Take-off

6 years ago, in December 2009, I was in a very peculiar situation. Just two months before, I had submitted to the European Research Council (ERC) the most ambitious project I had ever prepared. I was mentally exhausted. I noticed I was starting to have short-term memory losses… which would continue for more than 6 months. I was still showing up at my astrophysics lab in France, but I just couldn't keep going with the usual research projects. I was convinced of the revolutionary insight of the project I had just submitted, onto which I had bet my whole academic future (since 'usual' routes to tenure were obscured anyway), and thus my childhood dream of becoming an astrophysicist too. Of course, I didn't want to see that the ERC reviewers would probably not take the risk of funding such a project (despite what is advertised). My name was Ego.

At the end of that year, I needed to do something different. I just couldn't simply relax (waiting has always been hard for me…), and I needed a new horizon to look at. On December 28th came an email message from Dr. David Nicholls (to whom I send my warmest regards), from Down Under, kindly informing me that he was a happy user of a small OS X widget of mine, if only I could fix two small bugs…

Those were the good old times of OS X 10.5 Leopard, with the amazing Exposé and the Dashboard widgets.

Most of us, I am pretty sure, remember that time when we could feel the rising energy and momentum of the Apple platform, combining the best of both worlds: a true UNIX system onto which one could install most of our astro software and packages, together with an elegant OS for our personal digital life.

Back in the days when I was working at the La Silla Observatory, I started to work on some basic but useful widgets for astronomers. In order of release date: AstroTimes, AstroAirmass, AstroObservability, and finally the one used by Dr. Nicholls: the UltimateAstroWidget (probably the ugliest software name and icon ever), which combined the functionalities of the first 3 widgets. When preparing this post, I managed to retrieve all 4 widgets from my archives! You can download and try them all out (some UI glitches remain, and not all of them work flawlessly, but it's fun to try them and see what code is inside – use Ctrl-Click on a widget in the Finder).

That email message was the kick I needed. I took the widget code and threw it into an Xcode project, refactoring literally everything in Objective-C along the way… A little QuickLook plugin called QLFits, which I had started some time before, also helped with the Objective-C transition. I sent David version 0.1 of a true OS X app just 5 days after his message! iObserve was born. Look at what it looked like on January 2nd, 2010:

iObserve 0.1! Very basic UI elements, no special colors/fonts, synchronous network requests… (argh!), complicated choices of observatories, no saving of objects, a mix of UI/UX with help, already some difficult choices on how to present data… But it was fast and did the job!

The nice thing is that, while preparing this post, I also found a copy of all the old versions of iObserve in my archives… so you can try them yourself too! No worries: it has no impact whatsoever on an official version downloaded from the Mac App Store. However, if you had an old version (<1.0), you may need to delete the folder « ~/Library/Application Support/iObserve » (otherwise, the app may crash at launch). It is amazing to see that the basic Apple/Cocoa APIs have remained perfectly stable over more than 6 years, while so much has changed otherwise.
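For those who prefer a script over the Finder, removing that old support folder can be sketched like this in Python. This is a cautious sketch: it only touches the exact folder mentioned above, but double-check the path before running anything that deletes files.

```python
import shutil
from pathlib import Path

# The support folder used by pre-1.0 versions of iObserve (per the note above).
support_dir = Path.home() / "Library" / "Application Support" / "iObserve"

if support_dir.exists():
    shutil.rmtree(support_dir)  # delete the folder and everything inside it
    print(f"Removed {support_dir}")
else:
    print("Nothing to remove.")
```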

Within 2 months, I released iObserve 0.2, 0.3, 0.4 and 0.5 (yes, these are all download links). David Nicholls sent me awesome feedback and support. His colleagues in the offices next to his also started to use it. I was amazingly happy to have such a direct and positive impact. What a contrast with everyday research. iObserve also started to be used in real conditions at Cerro Paranal, home of the Very Large Telescope (thanks to Julien Girard), and at Siding Spring (thanks again, David)!

iObserve was a mental life buoy. I started learning as much of the Apple/Xcode/Objective-C stuff as I could. At that time, I considered Xcode 3, the Cocoa IDE, an amazing thing… (Xcode users will sense the irony). I knew I would certainly like a software developer's life, given all the software I had already been writing for many years (star collisions, basic n-body simulations, image and spectra processing, a spectroscopic instrument quicklook, OS X Dashboard widgets and plugins, distributed modelling of massive-star wind-wind collisions, stellar black-hole and accretion-disk emissions, along with tons of scripts, wrappers, automators, etc.). But of course, software was a rescue path (thanks to those who reminded me of this during job interviews…). The clash between doing something I nonetheless liked, and having dreamed of something different, remained.

At the end of March 2010, version 0.6 was released, and the ERC panel rejected my project for absurd reasons (no need to go into the details, but they really were artificial). I was kicked out of academia; my childhood dream was smashed: I would not become an astrophysicist. 36 years old, 2 young kids, a career break: not an easy time. I started to look for a regular job. At the beginning of April, version 0.7 was out, and by the end of July, version 0.8, with very basic elements of iObserve running on an iPad simulator.

This iPad version was truly a good move. On July 27, 2010, I was hired by a small company here in the Grenoble region to do iPhone software, thanks to it!

First lesson: a purely personal project, no matter its size, is the best business card you can have with you for software positions, especially when you have little experience. I still think the same; you will see why at the bottom.

August 2010 – March 2011: Transition, Transformation and Hitting the Store

Having a 'regular job', I started down the quite difficult path of converting to the normal (i.e. industry/business) vocabulary. It was tough, because I was aware of the privilege of having done something (astrophysical research) aligned with my dreams up to that point. The awareness of having lost that was unacceptable. One could argue, looking at my publication list, that I didn't really do what was necessary to get a permanent position (spending time on things like Thorne-Zytkow objects, really?). In my mind, I was not doing enough of it.

A job? That is, getting paid to let companies use half of your waking time, and the best hours of it? I felt it was tough (of course, there are much tougher things in the world). But it was probably even worse for the people around me… who were hesitating between « what a nice and interesting guy who did astronomy » and « what an asshole he is, thinking he knows everything ». Most of them stopped hesitating after some time, quite naturally.

I was behaving like an asshole, I know. But I was simply unable to accept my failure. I considered myself, finally, the kind of scientist I had always aspired to be, taking a step beyond incremental research, and, for that reason, deprived of the means to continue (or in fact, to start for real…), which constituted an utmost injustice. The European Research Council's review of my project didn't help me think otherwise: my project was considered not only good, but exactly the kind of project the ERC wants to support… yet no, I couldn't have the funds. One member of the ERC review panel came to my office in Grenoble and told me exactly this to my face, and suggested resubmitting the next year, which I did, out of loyalty to myself.

At that time of your life, when you feel you can finally change the world, hearing that, and then working anonymously in an ugly environment with guys not especially interested in anything other than computers (and not Macs…), doing stuff I had no particular care for, was the shittiest feeling I have ever felt.

Below is an example of the things I was doing just before leaving academia. I am not even sure it has been published…

The results of stellar black-hole accretion-disk emission spectra simulations. Left: hardness-intensity diagram in X-rays. Right: 3 examples of spectra, with all high-energy components: Bremsstrahlung in orange, Synchrotron in blue, Compton in green, inverse Compton in violet, if I remember well.

Anyway, I not only learned how to behave in a slightly more normal way (that is, starting to get interested in other people), but I also learned how far I was from being a real software developer… I had a conceptual understanding and basic knowledge, but no real experience of things like encapsulation, (de)serialization, storage/DB, asynchronous operations, memory pointers and dereferencing, thread safety, composition vs. inheritance, operator overloading, design patterns, network requests vs. sockets, etc.

Lesson #2: given that this technology/software learning takes a lot of time, and is never finished, software positions do not rely much on diplomas. That may be hard to hear when you hold a Ph.D. and multiple years of experience, some of it with very tricky simulations. But this is a mental transition you have to make. What matters then is to participate in the flow. Obviously, only very few people make major contributions (like a new language, a new kernel…). Hence, it is good to accumulate small things you've done, usually in open source, for instance in a GitHub account. This is useful first of all for you, since it gives you a sense of your trajectory.

No true industry experience despite my 8 years of coding, data processing and simulations, okay. But I knew how to do stuff, and iObserve was the proof of it. It was my own diploma, because my other skills were totally unimaginable to someone outside research. Could you really consider applying for a job by saying you were computing stellar black-hole emission spectra? Okay, I had also done public speaking at international conferences, written books (a Ph.D. thesis) and sophisticated technical reports, and been very autonomous at managing projects. But what kind of job is that, exactly? That translation of your experience into a more natural language, one that fits the vocabulary of the company you apply to, in front of people who have no fucking idea of, nor interest in, the science, is HARD. Because you just can't simply explain! (Actually, stop explaining the whole world at every question…) You have to become this: a new yourself. Other candidates also apply for that job, and « being good » is not enough! And what you see is that your life depends on your ability to transform a life-long passion into a list of skills… My name was Anger.

Resolution #1: never consider people as a list of skills, in your job or anywhere else, for that matter. And be nice to people who obviously need to provide explanations for everything. They need some more time to settle down…

Hence, in order to avoid the pain, I immersed myself in the iObserve code. I made plans for a universe of features! I started to promise an enormous amount of cool stuff. Did you notice the presence of Observing Runs and Night Logs in the above screenshots (v0.8)? Something I also promise with arcsecond.io… I spent almost a year inside this very long 0.8 series, preparing Night Logs (see below), which never appeared in the final app.

Another lesson (#3): you may work an enormous amount of time on something, but if existing users actually want something different, you have to change your priorities. That's a bit different from a pure lean attitude, where you look for exactly what the users and/or the market want. iObserve's development has always been strongly influenced by feedback from users. But not all of it.

The dedicated preparation app I developed for Night Logs. All the small pieces of observations you see could be resized live, with everything updating and remaining consistent, with exposure times, overheads, etc. The whole thing could be played live, with smart scrolling and zooming, on the night of observation. It was never ready to be included in the app.

Nevertheless, it was a wonderful canvas for asking myself all the good questions you have to ask when developing a complete product. On February 23, 2011, roughly a year after starting iObserve, I announced on my blog that I was stopping its development… as it was, to focus on a version 1.0 to be submitted to the new Mac App Store. The goal was to stop playing with a toy, and to make a pro app that people could rely on for the coming years. This became a reality on March 23rd, 2011, at a price of 9.99 $US, with a smaller but better subset of features.

That price tag was the result of an enormous amount of thinking (as usual, would say the people who know me… but I've much improved lately). I was about to earn a bit of money with something made with my own little hands. But I couldn't ask too much either, or I wouldn't have any clients (despite the very encouraging emails I got from all around the world during the beta stage of development). I thought the price should reflect that it was not « just a small app » (that is, it should be more than 5 $US), but its scope and quality couldn't put it – yet – in the category of pro apps (say, 20 $US or more). At that time, I followed closely what the guys at mekentosj.com were doing with their app Papers, dreaming I could have the same success story.

The price went up with time, but you will see below that the purchase rate was remarkably stable over the years. The iObserve community grew slowly, along with my own coding and modern software experience… and skills.

Other lesson (#4): if someone sends you a bug report, verify it and immediately admit it is a bug. Seemingly aggressive emails from people upset to find a bug in something they have paid for are perfectly understandable. But you will have a hard time fixing it if the guy stays upset while you rush to the store with a bugfix release. Admitting your mistake quickly lets you engage in a useful conversation with the reporter, and gives you a chance to promote new features, etc. In 6 years, I can remember only one condescending email, to which I answered appropriately. All the others were pretty much fantastic feedback.

April 2011 – December 2013: Crazy Versions, Desert Crossing and False Hopes

iObserve was on the Store. Now what? I spent the next two and a half difficult years hitting walls, and slowly mourning my past life. Missing traveling and landscapes a lot, among other things. In July 2011, I nonetheless had the great chance to participate in the Apple Worldwide Developers Conference (WWDC), the last one with Steve Jobs alive.

That's me observing the sun rising beside the Moscone West Center, where Apple's WWDC always takes place. I started waiting there at ~2:30 am, the keynote being at 11 am. I was the ~110th guy in the queue! That's pretty good. 🙂 #AppleFanboy

But after a year working for Motwin, I realized that I would never make any more progress there, and I had to move on (the company itself was not in good shape, and the management was truly awful). I started to apply to many places, with a strong will to come back to physics, somehow. I wanted to put much more value on my past experience as a physicist. I probably felt that my software experience wasn't broad enough, that my professional progress in software would be slow, and that I could earn a better position right away by focusing on physics rather than software (I felt I deserved it… which is a very bad attitude). Until then, I had never really understood that the rising buzzword of 'Data Scientist' was the way businesses and industries swallow former scientists. To me, reducing a scientist to his data-analytics… skills, or worse, his purely technical and software ones, was just plain wrong. Hence I never looked for such positions. See how an idealistic vision of the world can be fun? 🙂

Among others, I applied to Alstom, ESRF (they needed a mixed physicist/software guy! they never replied to my emails), CEA (multiple times), Mathworks, Movea, Sofradir (who didn't want to consider a Swiss guy because of national-defense projects; then I became French and applied again, but to no avail), Comsol, Cognizant, Wisekey, Xerox, etc. All failures. Once, I had 3 interviews within 10 days, and spent a total of 8 hours in interviews. Failed. Most companies don't give a fuck anyway, sometimes never even bothering to reply to emails.

I stumbled upon this article one day: Six ways industry failed to convince me. It couldn't be more true, or more complete. That's the best description of what happened to me too.

Resolution #2: if I ever need to hire people, I will actively set up a communication channel with every candidate to let her/him know her/his application status. I hate waiting. I hate not knowing, with no fixed dates. In such situations, silence from the employer can suck up all of your mental energy. And that's just the opposite of what we should look for in people (not lists of skills…) interested in a job.

In the meantime, iObserve was progressing a lot, but I was ridiculously stuck in the 1.0 versioning scheme. Hence the versions 1.0.1, 1.0.2, 1.0.3, up to 1.0.13… The reason was that my plans for the app were huge, and I considered the updates purely small ones compared to the beautifully enormous things coming! But version 1.0.5 brought Finding Charts (it should have been v1.1), and v1.0.7 brought Star Tracks Plots (with the possibility to import custom ones; it should have been v1.2). Not that bad!

In December 2012, I finally managed to quit and move to another startup (in Geneva, Switzerland, this time, where I actually come from), thanks again to iObserve, to make iOS software again. I got to do Objective-C/iOS, a bit of Java with GWT, some low-level C, JavaScript and Cordova plugin stuff, along with restaurant printer drivers. 6 months later, I was fired (the company wasn't doing so well, and anyway, the founders never really trusted me, and asked me to do only auxiliary stuff). 7 months before turning 40, I was unemployed.

Some people play music, do extreme sports, engage in associations, or prefer drugs, religion or alcohol. All this time, I had iObserve. To get through reality.

Hence, during this second semester of 2013 (unemployed, I had lots of time), I released iObserve 1.3, with Converters, Exoplanets and Small Bodies (Asteroids + Comets)! A big, big update. For 19.99 $US (the price was decided one day before release). Interestingly, this was when I finally found the right « identity » I wanted for my digital life: onekiloparsec (whose origin is to be found in this article and associated ones). That was it: new website, new Twitter account, new username everywhere. The previous name (Soft Tenebras Lux) was based on a word trick that only software engineers knowing Geneva's motto could possibly understand.

I kept going with updates while preparing iObserve for iPad to finally reach the Store as well. It was an enormous amount of work to unify the OS X and iOS APIs in order to use the exact same codebase for both apps. I wouldn't do it that way today. More experienced, you know… iObserve for iPad was finally released on December 3rd, 2013, with an arm-long list of bugfixes in v1.0.1 on January 2nd, 2014. The whole iObserve project officially reached about 80+ kloc (kilo-lines of code).

iObserve on iPad. An iPhone version was also prepared along the way, but it was too much for a single man.

In the meantime, from July to December 2013, I was in contact with people at the CEA (Commissariat à l'Énergie Atomique, one of the big science/technology institutes in France, which has a large campus in Grenoble) to… launch a startup! In a wonderful field I loved: optics and ultra-fast detectors. Yeah, back to physics again! I thought. No need to go into the details, but it never materialized either, mostly because the managers never wanted to take a risk they considered too large… Does it ring a bell? I told myself: AGAIN?!?! Am I that stupid guy who always rushes and bangs his head into risky walls?! What exactly was I doing wrong? Apart from being myself?…

Hence, at the end of December 2013, I accepted a job at Hortis.ch, working for the Radio Télévision Suisse (RTS, the Swiss national broadcaster), thanks… to iObserve for iPad! During the interview at RTS, I brought only my iPad. They particularly liked the way I had abstracted the connections to multiple web services, and the rigor with which the curves were computed and drawn. But I was about to turn 40. I was still doing iOS software and not physics. And I had to work 4 days a week away from my family (in Geneva). My name was Capitulation.

January 2014 – December 2015: New Friendship and New Openings

So that was it: I was 40 and I was doing iOS software. Why on Earth wasn't I doing other things, to diversify my portfolio and aim for other jobs?! Well, I guess today I can answer quite simply: confidence, or the lack thereof. And the lack of a personal project outside the Apple platform (to which I was tightly coupled). Hired as an « iOS expert » at Hortis/RTS, I remember having an attitude of making this clear to everyone… (a beautiful display of psychological denial of this lack of confidence; and you need some confidence to understand that being unemployed is an opportunity). Anyway, iObserve again proved useful for finding jobs. So why risk weakening my most important asset?

But one thing was clear: I would not make iOS software all my life. Even with the Apple momentum, once you've learned iOS 3, 4, 5, 6 and 7, iOS 8 comes as a surprisingly smaller event… I needed something bigger and more complex! One, and only one, obvious way out: startups and entrepreneurship!

Interestingly, what I was saying when I left academia was this: the mobile software industry is certainly a good place to make good money and earn a living, and even a chance to earn enough to buy me the time to continue my studies on my own. I just didn't realize what a journey it would really be, both in terms of effort and also of fun (yes, fun, sometimes), and that the money wouldn't be so voluminous, even if it is still much better than in academia.

Talking about money, here are the numbers for iObserve. Since the OS X app first hit the store (that is, 4 years and 9 months ago), it has been downloaded almost a thousand times (50% from the U.S., 30% from Europe), and not even a hundred times on iOS. This last number is a real disappointment, given the amount of work I put into it. I am pretty proud of the result, but the market is probably not large enough. Given the way it is written, with some very special handling of rotations and split views, it is not easily modernizable to the latest (iOS 9) APIs. Hence, I will only continue to support it as long as I can (say, iOS 10?), but not more. [Update: see the numbers since it became free.]

In total, I earned about 12 k$US (that is, fluctuating between 80 and 200 € every month). For that, I am very grateful to all my customers/users. The fact that iObserve was a paid app made this journey a lot more interesting. And thanks to all of them, among other small things, I've been able to buy some good wine, make nice gifts for my two boys, and offer a really good bike to my wife.

A snapshot of my iTunesConnect view of iObserve sales over time for both OS X and iOS. Price tags, some key release numbers as well as OS X versions are also labelled.

Startup and entrepreneurship? Okay. The reasoning was simple, and backed up by the real numbers above: the market of ‘pro astronomers’, even counting the serious amateurs, is too small to start a business on. I had ideas (and already some good code!) for other pro apps to increase the size of my portfolio. But still. These are truly complex apps. I could foresee a big price jump (say, up to 200-500 $US), but it would have to come with some serious features that would anyway take time to develop. Moreover, no one is making enough money alone with a general-public mobile app (even if it pays a lot more on iOS than on Android). I had no money to hire help, and I didn’t dare contact investors and banks, considering that the market is small and will remain so anyway. And all these developments at onekilopars.ec were already consuming all the personal time I had left.

So. Let’s look for something else! As a matter of fact, along with iObserve and all the software I was doing, I had a general-public idea I had been cooking in my head for a long time (ah, mental cooking…). It was about short stories, allowing people to create, branch and update stories of their own or from others. Back in November 2011 already, I presented this idea at a Startup Weekend in Grenoble. My idea was not selected; it was the first time I was pitching… But I met a very nice guy named Laurent, of basically the same age, also interested in entrepreneurship. We slowly became very good friends, talking about many things, taking a lot of pleasure in sharing ideas, discoveries, links, articles about technology, entrepreneurship, professional careers, wives and kids, etc. We finally decided to launch something based on that idea around stories and fictions.

Slowly, in about a year, with a very lean, todo-cards methodology (using Trello), we managed to learn the basics of Django (back to Python, yeah!) and how to deploy a real server (thanks to a great PaaS: Heroku), reaching the stage where I could consider learning fancy JavaScript stuff such as AngularJS. The result, as well as the feedback from friends, was mostly disappointing. It was called PicoLegends. And in spring 2015, we had to admit that it was probably not going anywhere. But the real value, for both of us, was elsewhere.

We learned that we really liked to work and explore together. You know, it is not the goal that matters most, but the journey… Moreover, we came out with a lot more assets: we knew how to deploy a multi-language client/server web app, and we knew what difficult questions we needed to ask ourselves and what our true motivations were, like every entrepreneur. Hence we decided to continue, on another subject! We just had to find that subject! And that part was also a lot of fun (Laurent is very good at knowing lots of different technology trends).

But then came arcsecond.io…

Last summer (2015), alongside the ongoing day job at RTS in Geneva and the startup explorations with Laurent, I was struggling with the name of an idea I had been carrying for quite some time, thanks to PicoLegends. As a matter of fact, RTS colleagues had convinced me a while back to write iObserve 2. It would be a totally new, super-pro version of iObserve, with Night Logs and a high price tag. Quite logically (experience talking…), it needed a true dedicated backend server, with a lightweight client consuming its well-formatted data. That is, the idea was to move to a server all that crazy data-request logic currently embedded in every iObserve client app, logic that differs for each source, whether it is SIMBAD, NASA ADS, JPL Horizons with its telnet/ASCII interface, or exoplanet.eu.

I wanted to give a name to that server, and I couldn’t find a good one. But the minute I found ‘arcsecond.io‘, everything became clear. I bought the domain name and started to code at the speed of light. I was very happy. And during my vacations I realized that arcsecond.io was not a simple project, a simple backend for iObserve. It could really be an entire cloud for astronomy, collecting all meaningful data and providing well-formatted APIs for anyone, including me, to consume this data and create new services. Hence the two feet on which it stands: ‘api.arcsecond.io‘ for the data itself, and ‘www.arcsecond.io‘ for an AngularJS-based web app. My regular job at RTS was taking (by choice) only 4 days a week. Hence I had one day a week, plus evenings and weekends, left to code arcsecond.io. My name was Software!
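To make the client/server split concrete, here is a minimal sketch of the idea, under assumptions: the ‘/objects/<name>/’ endpoint path and the JSON field names below are invented for illustration, not the actual arcsecond.io API. The point is only that the client becomes trivially thin: it builds a URL and parses well-formatted JSON.

```python
import json

# Minimal sketch of the lightweight-client idea. The endpoint path and
# response fields are hypothetical, not the real arcsecond.io API.

def object_url(base_url, name):
    """Build the URL a thin client would request for one object."""
    return f"{base_url}/objects/{name.replace(' ', '%20')}/"

def parse_object(payload):
    """Parse the well-formatted JSON the server sends back."""
    data = json.loads(payload)
    return {"name": data["name"], "ra": data["coordinates"]["right_ascension"]}

# The client no longer cares whether the data originally came from SIMBAD,
# NASA ADS or JPL Horizons: the server has already normalized it all.
url = object_url("https://api.arcsecond.io", "HD 5980")
```

All the messy per-source request logic lives once, on the server, instead of being duplicated in every client app.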

My private GitHub activity profile. The green squares indicate the amount of activity on each particular day, measured as the number of commits to the repositories. The 3-week totally blank area to the left corresponds to the vacations I took to bring my kids to Chile and the La Silla Observatory. A 55-day streak is not that bad…

But that Everest was too big for Laurent, my partner: too much knowledge and experience on my side, and not enough on his. And a not-so-easy business model anyway for arcsecond.io itself! We decided to continue in a dual-coaching mode. I keep building arcsecond.io and he provides feedback and advice. And on the other hand, he continues to explore other paths for entrepreneurship, while I give him my feedback.

December 2015: Connecting the dots…

You probably know the famous speech Steve Jobs gave when he received an honorary degree at Stanford. If you don’t, here it is. It’s a really good speech.

In December 2015, I was connecting the dots. Finally.

During a conversation at the Science Fest of the … CEA in Grenoble, I talked to a colleague of my wife’s about all the software I was making. And he told me that, with some colleagues and other people, he was about to launch a startup around some exciting new Virtual Reality technology. A few days later, while sending him an email with some useful links, I asked him whether they would need an experienced software guy in their startup. I finally met the co-founders – awesome and diverse people – and they were actually interested in my experience! All of it, somehow. That is, not as a list of skills… I couldn’t thank them enough for this, and for the opportunity they gave me.

When I met them, I talked about all my experience and the products I had made, which comprised basically none of those built during my regular jobs. Only my own projects. Of course, regular-job projects contributed a lot to my experience. But that’s small compared to the pride of talking about your own stuff. It makes a key difference in the impression you leave in people’s minds. And my experience was not only iObserve but also arcsecond.io (web frontend + modern data-based API backend!), my open-source plugins and SDKs, and even the old image-processing / scientific developments. And also something I forgot to mention: my 2 years of experience with the agile Scrum methodology at RTS in Geneva, where it is truly implemented throughout the multimedia department. By the way, I have been a certified Professional Scrum Developer since August 2015; you know, diplomas/certifications can still do some good…

Quite naturally, I jumped on board, and joined the startup.

So here is the end of that journey, and the start of a new one. I will now build teams of people and take care of all software development in a very exciting startup that will integrate an amazing technology stack and deliver a unique and fantastic VR experience! I couldn’t be more excited. And relieved.

One of my very best friends once told me that he likes (he lives by?) a quote from Churchill:

Success is the ability to go from one failure to another with no loss of enthusiasm.

When I read it, I thought: yeah, yeah, easy to say… But he was right. iObserve was my reserve of enthusiasm. My companion for never giving up.

My name is Cédric, also known as @onekiloparsec on the Internet. And I am happy to make iObserve, my beloved 6-year-old app built during countless night hours, free for everyone from now on. It will remain on the Mac App Store for a while, to give me time to prepare the transition to a custom distribution channel on my website. If you would nonetheless like to pay something for it, you can donate some Bitcoin to the following address.

1MJwZaYQDi8aYiDhrzCMmYciTVTgYiF5jE

At today’s exchange rate, 30 $US is about 0.07 Bitcoin (that’s for when I read this post again six years from now…).

What a journey. Can’t wait to start the new one. Thanks to everyone (and especially to my wife and my kids).

Covariations vs Correlations in BigData

Recently, I wrote about how #BigData and #BigScience differ, having almost opposite approaches to looking at data. Needless to say, I remain skeptical about the varying quality of what’s being said and written about data, big or not. As a matter of fact, my main concern is what one can infer, or pretend to infer, from that data. Data helps us think about the world, yes. Yet it isn’t the whole story. Reading posts on the Internet, with the sky-rocketing amount of new material about it, one must honestly ask oneself: is Data, especially since it became Big, an object of knowledge by itself?

In this post, I want to discuss the difference between covariations and correlations. In a context of data-driven decisions (a concept I came across in the two books I mentioned last time), failing to distinguish covariations from correlations might lead to unexpected consequences, to say the least. The least damaging one being, probably, to simply remain ignorant after all…

In my previous post mentioned above, I cited this sequence of tweets:

Here is the image of the original tweet:

The image of the original tweet.

Talking to strangers, and telling them they are wrong. What else is the Internet about?…


(xkcd: Duty Calls)

Anyway. As days passed, I couldn’t help but keep thinking about this « fitting » problem. I think I have a (natural? normal? scientist’s?) reflex saying that data doesn’t tell the whole story; it is just one means, among others, to climb the ladder and stand on the shoulders of giants. The existing story we build upon, with the help of data, is the knowledge and understanding we have of our world, and the history of the discoveries that led to its current state. And that knowledge is based on correlations. Correlations that were observed, checked, verified, and understood (if I may put it in computer-science terms, I would say these correlations were entirely ‘debugged’, since debugging is understanding).

But correlations and covariations don’t have the same meaning! Simply stated, a covariation is the observation that when one parameter varies, another one does as well, and vice versa. Covariations are (I love this rule from mathematics) necessary but not sufficient to establish correlations. Covariations are merely a hint that something is happening under the hood. Covariations can have various ‘shapes’ or, in other words, can be represented graphically by various figures. The shape of that figure is certainly an excellent hint about the underlying phenomenon, but it is not the explanation itself. Understanding, on the other hand, means giving a cause, or an explanation, to a covariation. While the study of covariations is full of lessons, it is usually not enough to reach an explanation. And it is not a matter of quantity. Correlations live in a different space. ‘Data-point fitting’ isn’t equal to understanding (obvious, isn’t it? Or not?). Stated simply, a correlation joins the corpus of knowledge, while a covariation joins the corpus of observations.
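A minimal numerical sketch of the point above, with made-up numbers: the Pearson coefficient quantifies a linear covariation, and nothing more. A value near 1 tells you the two series vary together; it says nothing about why.

```python
# Plain Pearson coefficient: a measure of linear covariation only.
# The two series below are invented for illustration (think ice-cream sales
# and drowning incidents, both driven by a hidden common cause: summer).

def pearson(xs, ys):
    """Pearson coefficient between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

ice_cream_sales = [10, 20, 30, 40, 50]
drownings = [1, 2, 3, 4, 5]
r = pearson(ice_cream_sales, drownings)  # close to 1.0: a perfect covariation
# ...yet the explanation (the season) appears nowhere in that number.
```

The coefficient belongs to the corpus of observations; only the causal account of the hidden common cause would belong to the corpus of knowledge.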

What amazes me most in this journey into BigData, as I navigate through it and the dozens of articles about it in every corner of the Internet, is – again – the very weak presence of words such as ‘understanding’, ‘knowledge’, ‘research’, ‘Nature’, etc. They are utterly dominated by ‘insights’, ‘obvious’, ‘noise’, ‘pattern discovery’, and also ‘revolutionary’, ‘potential’; words that belong a lot more to marketing than to, well, science. (This little game about the number of occurrences of words in BigData articles should prompt me one day to perform a semantic and quantitative analysis of them… with BigData tools, of course!)

Recently, I stumbled upon a truly excellent website that illustrates very well the general considerations outlined above. It is entitled A Visual Introduction to Machine Learning. (Machine Learning, for those who aren’t really immersed in BigData, is one of the key techniques for manipulating data. See the detailed Wikipedia entry about it.) The article is really well crafted (even if it doesn’t fully work in Safari – prefer Chrome or Firefox). Please, to follow what comes next in this post, read it (~10 min) and come back. I’ll wait.


In the meantime, here is a small visual interlude, with the first image of an exoplanet. Do you also see a large white-blueish dot and a small red dot? How do you know they are not only dots? And what about the fundamental process of crafting meaning by placing, in a spatially structured manner, variations of colours in a limited rectangular 2D space, also known as an ‘image’? How could this process even make sense to you? Isn’t an image already a graphical representation of a lot of data?

Knowing how the electromagnetic fields of light truly combine to form constructive fringes, which lead to a measurement of the spatial coherence along a line projected onto a plane, would already change forever your vision of what an image is.


Image Credits: E.S.O.


Okay, back to our business. If you have just read the article, you probably have an idea of where I am heading in this post.

The article beautifully exemplifies the use of a Machine Learning technique. In this particular example, it seemingly allows one to classify the members of a dataset into one of two categories: a home is either in New York or in San Francisco. We have 7 different types of data points, literally: ‘elevation’, ‘year built’, ‘bathrooms’, ‘bedrooms’, ‘price’, ‘square feet’, ‘price per sqft’.
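To make the mechanics concrete, here is a minimal sketch of the core step of such a tree-based classifier: learning a single threshold (a “stump”) on one feature that best separates the two cities. All the numbers below are invented for illustration; the article’s real tree recursively stacks many such splits across all seven features.

```python
# A decision "stump": brute-force the threshold on one feature (say,
# elevation) that minimizes misclassifications. Invented data, for
# illustration only.

def learn_stump(values, labels):
    """Predict 'SF' above the threshold, 'NY' at or below; return the
    threshold with the fewest misclassified points."""
    best_t, best_err = None, len(values) + 1
    for t in sorted(set(values)):
        err = sum(1 for v, lab in zip(values, labels)
                  if (lab == "SF") != (v > t))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

elevations = [3, 5, 8, 40, 55, 73]   # invented values, in meters
cities     = ["NY", "NY", "NY", "SF", "SF", "SF"]
threshold = learn_stump(elevations, cities)  # 8: splits this toy set perfectly
```

Note how mechanical this is: the stump finds where the values separate, and nothing more. It has no notion of why elevation should matter, which is exactly the covariation-versus-correlation gap discussed earlier.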

Before saying anything about it, the immediate question that should obviously have struck you as well is: why not simply obtain the geographical coordinates of these houses?!! Given the problem they set out to solve, that would be the immediate and logical question to raise. (We note that the goal seems to shift a little between the introduction – ‘distinguish homes in New York from homes in San Francisco‘ – and the first section – ‘determine whether a home is in San Francisco or in New York‘ – which is not really the same question. Anyway.)

But okay, that’s an example. And examples are often a little silly, for the sake of demonstration; they rarely demonstrate intelligence, but rather skills.

What this example beautifully illustrates is that machines are powerful, but not smart. And those who claim here and there that « BigData will revolutionise the way we think about man or the world » are probably seeking power rather than intelligence…

Here is a list of problematic points that the article doesn’t even gently touch on:

  • How were the data types chosen?
  • Are the data types relevant to the question? Is there any other relevant quantity that could help solve the problem? (okay, okay…)
  • Are the data types enough to answer the question?
  • How were these data points obtained? Measured? Is there any error associated with them?
  • Are there any statistical biases? Instrumental ones? Data isn’t just numbers, you know…
  • Were the data points all taken at the same time? How? By how many different people? Were there any outliers?
  • How do we know that the distributions of the points of each type can be compared? Are all these types meaningful to the question?
  • How do we know that all points have the same weight?
  • How do we know the problem is ‘solved’?
  • Actually, is the problem well- or ill-posed?

Okay, there are more than that, but that’s enough.

There is an obvious conclusion to all of this. But I am never sure I haven’t just missed something obvious myself. I would formulate a conclusion that is somewhat obvious, but if it is so obvious to other people too, why do we (I) never hear about it?

Conclusion: data analysis does not amount to data science, even less to science pure and simple. And when you see ‘exciting’ data-scientist positions in companies that list a number of technologies you have to master before applying, simply be aware that science is probably everything but these required technical skills.