Canal+ Developers Hub - An amazing website.
Jekyll feed, generated 2020-03-02T15:14:26+01:00 - https://developers.canal-plus.com/feed.xml
Canal Labs <canallabs@canal-plus.com> - http://www.canalplus.fr/

A Successful First CodingHub! (2017-03-13)
https://developers.canal-plus.com/blog/feedback-codinghub-gitc

<p>On February 28th, the very first CANAL+ CodingHub took place!</p>
<p>This one-evening event gave <+ DE DEV> the opportunity to welcome developers from outside CANAL+ to code and exchange ideas around the latest international contest launched by <a href="https://www.codingame.com/">CodinGame</a>: <a href="https://www.codingame.com/leaderboards/challenge/ghost-in-the-cell/global">“Ghost In The Cell”</a>.</p>
<p><a href="https://www.codingame.com/leaderboards/challenge/ghost-in-the-cell/global">Ghost In The Cell</a> was strongly inspired by <a href="http://www.galcon.com/g2/">Galcon</a>, a real-time strategy game in which the player commands a fleet of ships to capture enemy planets.</p>
<p><img src="https://developers.canal-plus.com/images/gitc_ide.jpg" alt="Ghost in the Cell in action" /></p>
<p>More than 3,500 developers went on to code an Artificial Intelligence (AI) able to play in real time and face the other AIs in merciless wars of position.</p>
<p>The CodingHubs are part of an effort to assert the “TECH” side of CANAL+ and to bring out talent both inside and outside the group.
This first edition let <+ DE DEV> test the concept and get a feel for how such events run.
After this success, more editions will be organized, with more fun, more code, more exchanges and, we hope… even more participants!</p>
<p>Follow us on <a href="https://twitter.com/plusdedev">Twitter</a> or <a href="https://www.facebook.com/plusdedev/">Facebook</a> to stay informed about our upcoming events!</p>
<p>Codingly yours,</p>
<p>Sébastien for <+ DE DEV></p>
<p><img src="https://developers.canal-plus.com/images/gitc_leaderboard.jpg" alt="Ghost in the Cell leaderboard" />
<img src="https://developers.canal-plus.com/images/gitc_laptops.jpg" alt="Ghost in the Cell participants" />
<img src="https://developers.canal-plus.com/images/gitc_room.jpg" alt="Ghost in the Cell room" /></p>

Sébastien Talasi <sebastien.talasi@canal-plus.com>

Meet our CodinGamers (2017-02-23)
https://developers.canal-plus.com/blog/meet-our-codingamers

<p>CANAL+ hosts a <a href="https://www.facebook.com/events/233028310478983/">CodingHub</a> on the 28th of February and our dev community is very excited about it!</p>
<p>This CodingHub will allow developers from outside the company to meet and work together on the next CodinGame contest: Ghost In The Cell.
About 20 developers will be there, half of them from CANAL+.</p>
<p>Most CANAL+ developers are going to compete in their first CodinGame contest!
Nevertheless… some are used to competing in hackathons or are solid CodinGamers.</p>
<p>Most will use JavaScript, but some will code in Perl, Python or PHP… No C++ or C# this time!</p>
<p>This event will be the opportunity to code for fun, share techniques and strategies, have a good time, compete… and, of course… eat pizza!</p>
<p>We have absolutely no idea about the content of the contest…
One thing is certain though: we’re already on the starting blocks. Are you?</p>
<p>CANAL+ CodinGamers: Athella, clemtoy, FlorentD, Rainphoenix, SackPhantom, Tetesoulo</p>

Sébastien Talasi <sebastien.talasi@canal-plus.com>

Feedback on React-Europe 2016 (2016-06-14)
https://developers.canal-plus.com/blog/feedback-react-europe

<p>At <a href="http://www.canalplus.fr/">Canal+</a> we love and use <a href="https://facebook.github.io/react/">React</a> in many of our services.</p>
<p>This year we had the chance to attend the <a href="https://www.react-europe.org/">React Europe</a> conference - THE European conference on React - which took place on June 2nd and 3rd in Paris and gathered a few hundred enthusiasts from all over the world. Present this year were - among others - <a href="https://twitter.com/vjeux">Christopher Chedeau</a>, <a href="https://twitter.com/_chenglou">Cheng Lou</a> and <a href="https://twitter.com/dan_abramov">Dan Abramov</a> from the React core team, <a href="https://twitter.com/brindelle">Bonnie Eisenman</a> from Twitter and <a href="https://twitter.com/jhusain">Jafar Husain</a> from Netflix - quite a line-up - who came to present the latest advances in the Facebook stack and give us a look at the main directions for the coming months and years.</p>
<p>The previous edition had been the occasion to explore the Flux architecture in depth and, notably, to witness the birth of <a href="http://redux.js.org/">Redux</a>, presented by Dan Abramov. The Facebook teams had also introduced two novelties: <a href="http://graphql.org/">GraphQL</a> and <a href="https://facebook.github.io/react-native/">React-Native</a>.</p>
<p>This year brought few discoveries, but Facebook confirmed that GraphQL and React-Native are its two current focuses - two topics that deeply change the way we design applications. And it looks like that will last.</p>
<h1 id="graphql">GraphQL</h1>
<p>GraphQL is <del>a language</del> <a href="http://facebook.github.io/graphql/">a specification</a> for querying data that sits between the interface and the server. In particular, it makes it possible to aggregate scattered data “intelligently” and format it as a JSON document whose structure suits the front end’s needs. We use it at Canal+ for exactly that, with very satisfying results (which will be the subject of an upcoming article).
In production at Facebook for several years, GraphQL is now implemented in a dozen languages (JavaScript, Java, Ruby, Go, .NET…).</p>
<p><img src="https://developers.canal-plus.com/images/reactue2016_graphql.png" alt="GraphQL" /></p>
<p>Facebook really seems to believe in this new client/server communication paradigm, and the impressive number of community projects around it (cache management, React integration, etc.) shows genuine enthusiasm.</p>
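To make the idea concrete, here is a minimal sketch of a GraphQL round trip in plain Python. The query fields and the mocked server response are purely illustrative (they are not Canal+’s actual schema, and a real client would send the query over HTTP):

```python
import json

# Hypothetical GraphQL query: the field names ("program", "title",
# "episodes") are illustrative, not a real Canal+ schema.
query = """
{
  program(id: 42) {
    title
    episodes(last: 2) { title duration }
  }
}
"""

# A GraphQL server answers with JSON shaped exactly like the query,
# so the front end gets its aggregated data in one round trip.
mock_response = json.loads("""
{
  "data": {
    "program": {
      "title": "Le Bureau des Legendes",
      "episodes": [
        {"title": "Episode 9", "duration": 52},
        {"title": "Episode 10", "duration": 54}
      ]
    }
  }
}
""")

# The front end reads exactly the structure it asked for:
titles = [e["title"] for e in mock_response["data"]["program"]["episodes"]]
print(titles)
```

The key property shown here is that the response mirrors the shape of the query, so the interface receives exactly the aggregation it asked for in a single request.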
<p>Jafar Husain took the opportunity to present <a href="https://github.com/Netflix/falcor">Falcor</a>, Netflix’s answer to the same problems. The philosophy is broadly the same, but the implementation feels more flexible and, personally, it won us over!</p>
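Falcor’s core idea - the client reads *paths* into one virtual JSON model instead of calling several REST endpoints - can be pictured with a toy resolver. This is a Python sketch of the concept only; Falcor itself is a JavaScript library and its real API differs:

```python
# Toy illustration of Falcor's path-based model: all data is exposed
# as one virtual JSON graph, and clients ask for paths into it.
# Illustrative data and function names; not Falcor's actual API.
model = {
    "videosById": {
        "12": {"title": "Top of the Lake", "rating": 5},
        "44": {"title": "The Young Pope", "rating": 4},
    }
}

def get(model, path):
    """Walk a path like ['videosById', '12', 'title'] through the model."""
    node = model
    for key in path:
        node = node[key]
    return node

print(get(model, ["videosById", "12", "title"]))
```

In the real library, the server decides how each path is resolved (database, microservice, cache), so the client never has to know which endpoint owns which piece of data.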
<p>GraphQL and Falcor thus challenge the REST architecture, which is no small thing! But an even deeper change seems to be taking shape with React-Native…</p>
<h1 id="react-native">React-Native</h1>
<p>React-Native was born from Facebook’s fierce desire to simplify and improve its mobile development. We remember the 180-degree turn the Californian company made four years ago when it abandoned HTML5 on mobile in favor of native applications, leaving <a href="http://techcrunch.com/2012/09/11/mark-zuckerberg-our-biggest-mistake-with-mobile-was-betting-too-much-on-html5/">a painful memory</a> for Mark Zuckerberg. That trauma is probably at the root of many internal experiments aimed at finding a way to build applications that run on several platforms in a consistent manner: the famous “Learn once, run everywhere”, strongly inspired by Sun Microsystems’ slogan.</p>
<p>After more than a year in open source, React-Native is well on its way to winning that bet, judging by the many improvements and new tools around the framework presented at this React Europe. Facebook is putting enormous energy into improving the experience and structuring development practices. Initially limited to iOS and later opened to Android, React-Native can now be ported to Tizen and Windows thanks to the <a href="http://techcrunch.com/2016/04/13/facebooks-react-native-open-source-project-gets-backing-from-microsoft-and-samsung/">work of Samsung and Microsoft</a>, and other efforts are under way to support OS X and… the web!</p>
<p><img src="https://developers.canal-plus.com/images/reactue2016_reactnative.png" alt="React-Native" /></p>
<p>This is where you grasp the scale of Facebook’s ambition: a framework that would let us build web and native interfaces for an unlimited number of platforms with a single development logic. Imagine the savings in development cost and time! This promise resonates strongly at Canal+, given our cross-platform products (TV, web, mobile, set-top box), where the search for optimizations is constant.</p>
<h1 id="en-conclusion">In conclusion</h1>
<p>Other talks inspired us too, like <a href="https://www.youtube.com/watch?v=mVVNJKv9esE">Cheng Lou’s presentation</a>, which invites us to reflect on the notion of abstraction in our applications.</p>
<p>This edition was an opportunity for Facebook to establish itself a little more as a leading player in the development world, and the absence of big announcements is ultimately good news: it shows the maturity of their solutions in a JavaScript world in perpetual revolution.</p>
<p>PS: the conference videos (in English) are already available on the React-Europe <a href="https://www.youtube.com/channel/UCorlLn2oZfgOJ-FUcF2eZ1A">YouTube channel</a>.</p>

Florent Duveau <florent.duveau@canal-plus.com>

Jenkins 2: Pipeline as Code (2016-04-14)
https://developers.canal-plus.com/blog/jenkins-2-pipeline-as-code

<p>Over the last month, we tested the newest (and still sometimes unstable) Jenkins version with a state-of-the-art development workflow in mind. It includes strong GitHub integration (i.e. with the Status API), Docker usage (for on-demand and versatile slaves) and user-friendly job configuration and bootstrapping (i.e. build steps should be part of the code and focus on reusability).</p>
<p><strong>Note: this blog post relates our Jenkins 2 experience as of the alpha and beta releases of the 2.0 version. Your mileage may vary with the recent RC and the future stable release.</strong></p>
<h2 id="jenkins-20">Jenkins 2.0</h2>
<p>Do you remember when Jenkins was the way to go for building your CI environment? Down the road, many other tools joined the CI/CD race, adding their own awesome features and helping redefine what Continuous Integration means.</p>
<p>Compared to, say, Travis-CI, Jenkins lacks some major features, like easy, built-in integrations with other CI/CD/code-quality tools, volatile, on-demand slave management, or the ability to describe your job in a simple, versioned text file.</p>
<p>This is where Jenkins 2 comes in, bringing “Pipeline as code”, a <a href="https://wiki.jenkins-ci.org/display/JENKINS/Plugin+Selection+for+the+Setup+Dialog">new setup experience</a> and other UI improvements, mainly to the job configuration pages. Sticking to the Jenkins philosophy, the new Jenkins version is fully compatible with older jobs and plugins. It is, as stated in the release announcement, a “drop-in replacement” for Jenkins 1.x, so you don’t have to worry about existing configuration.</p>
<p>The killer feature is the ability for a developer to provide the build steps in a text file (a “Jenkinsfile”), much like a .travis.yml file with a touch of Groovy. Take this example:</p>
<div class="language-groovy highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">node</span> <span class="o">{</span> <span class="c1">// The "node" directive tells Jenkins to run commands on the same slave.</span>
    <span class="n">checkout</span> <span class="n">scm</span>
    <span class="n">stage</span> <span class="s1">'test'</span>
    <span class="n">sh</span> <span class="s1">'make test'</span>
    <span class="n">stage</span> <span class="s1">'publish'</span>
    <span class="n">sh</span> <span class="s1">'make publish'</span>
<span class="o">}</span>
</code></pre></div></div>
<p>Simple, right? I’ve just declared two build stages, “test” and “publish”, both using the <code class="language-plaintext highlighter-rouge">sh</code> directive to launch my test/build scripts (with a Makefile under the hood, for brevity). These stages are stored in Jenkins: they are logged, timed, and replayable anytime, one by one or as a group.</p>
<p><img src="https://developers.canal-plus.com/images/jenkins2_stageview.png" alt="Stage View in Jenkins 2" /></p>
<p>The available documentation contains a <a href="https://github.com/jenkinsci/workflow-plugin/blob/master/TUTORIAL.md">great tutorial</a> to get started with the Groovy DSL used by Jenkins. Many plugin developers have already ported their code to be usable from a Jenkinsfile, and the <a href="https://github.com/jenkinsci/workflow-plugin/blob/master/COMPATIBILITY.md">list of Pipeline-compatible plugins</a> is growing quickly.</p>
<p>These directives can easily be extended with your own scripts (check out the <a href="https://github.com/jenkinsci/workflow-cps-global-lib-plugin/blob/master/README.md">dedicated docs</a> - more on that later).</p>
<p>The rest of this article focuses on configuration decisions we made to achieve our workflow objectives.</p>
<h2 id="light-efficient-github-workflow">Light, efficient GitHub workflow</h2>
<p>Ok, now for some hands-on time with this new version. After installing a few smart plugins like the <a href="https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+GitHub+Branch+Source+Plugin">CloudBees GitHub Branch Source Plugin</a>, we can create a “folder” project (a “Multibranch Pipeline Job”) based on any repository in our GitHub organization.</p>
<p><img src="https://developers.canal-plus.com/images/jenkins2_newjob.png" alt="Creating a Multibranch Pipeline job with the new UI" /></p>
<p>Note: Jenkins will need valid credentials to access your organization’s private repositories. I strongly suggest using a <a href="https://developer.github.com/guides/managing-deploy-keys/#machine-users">GitHub machine user</a> for the Jenkins/GitHub integration.</p>
<p>After configuring the plugin so that Jenkins uses the GitHub Commit Status API to check every commit, branch and PR of our repositories, it’s time to see it in action.</p>
<p><img src="https://developers.canal-plus.com/images/jenkins2_github.png" alt="Configuring the GitHub integration" /></p>
<p>Let’s create a branch to work on a new feature. Jenkins automagically detects the new branch and creates a job based on the <code class="language-plaintext highlighter-rouge">Jenkinsfile</code> it finds inside the tree. The same behavior applies to PRs created on GitHub. If I want to, I can edit the <code class="language-plaintext highlighter-rouge">Jenkinsfile</code> for this new branch only, without breaking the build for other branches.</p>
<p><img src="https://developers.canal-plus.com/images/jenkins2_folder.png" alt="A sample project folder in Jenkins 2" /></p>
<p>Additionally, if you delete the branch (e.g. after a successful merge), Jenkins will simply delete the branch-related job.</p>
<p>On the GitHub side, Jenkins now checks our work and tells us whether it’s good or not - like Travis-CI would. We’re close to our ideal CI workflow!</p>
<p><img src="https://developers.canal-plus.com/images/jenkins2_pr.png" alt="Jenkins tells me the PR is ok" /></p>
<h2 id="let-it-be-dockerized">Let it be Dockerized</h2>
<p>Now that we have good GitHub integration, let’s push our CI workflow further with Docker. Thanks to the <a href="https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Pipeline+Plugin">CloudBees Docker Pipeline Plugin</a> (and its great <a href="https://documentation.cloudbees.com/docs/cje-user-guide/docker-workflow.html">user guide</a>), we can easily tell Jenkins to use a Docker image (either pulled or built from a versioned Dockerfile) to handle a job. Let’s slightly change our first job definition:</p>
<div class="language-groovy highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">node</span><span class="o">(</span><span class="s1">'docker'</span><span class="o">)</span> <span class="o">{</span> <span class="c1">// I want a slave labeled "Docker" (i.e. with Docker installed)</span>
    <span class="n">docker</span><span class="o">.</span><span class="na">image</span><span class="o">(</span><span class="s1">'python:latest'</span><span class="o">).</span><span class="na">inside</span> <span class="o">{</span> <span class="c1">// Let's use the latest Python image</span>
        <span class="n">checkout</span> <span class="n">scm</span> <span class="c1">// Run the job normally...</span>
        <span class="n">stage</span> <span class="s1">'test'</span>
        <span class="n">sh</span> <span class="s1">'make test'</span>
        <span class="n">stage</span> <span class="s1">'publish'</span>
        <span class="n">sh</span> <span class="s1">'make publish'</span>
    <span class="o">}</span>
<span class="o">}</span>
</code></pre></div></div>
<p>What’s happening here? Well, <code class="language-plaintext highlighter-rouge">docker.image('python:latest').inside</code> tells Docker to instantiate a container from the <code class="language-plaintext highlighter-rouge">python:latest</code> image and mount the workspace inside it! Note that if all your nodes have Docker installed, you can omit the <code class="language-plaintext highlighter-rouge">node</code> directive, as the whole job will run in one container.</p>
<p>Going further, if you want to test your code against multiple versions of a Docker image, simply create a branch, change the version in the <code class="language-plaintext highlighter-rouge">Jenkinsfile</code>, and that’s it. You can even parameterize the version tag in Jenkins for run-time changes!</p>
<p>Note: you can apply this Docker logic to older jobs with the <a href="https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Custom+Build+Environment+Plugin">CloudBees Docker Custom Build Environment Plugin</a>.</p>
<h2 id="wrapping-it-up">Wrapping it up</h2>
<p>I definitely suggest you test this newest version of Jenkins. It makes Jenkins a real, state-of-the-art contender again in the CI/CD race. It’s your good ol’ tool, but on steroids.</p>
<p>We achieved a clean workflow with ease, in days rather than weeks. We rarely visit the Jenkins UI anymore, since all the logic lives in the repository. We gave developers back the power of CI: it’s our code, our build directives.</p>
<p>Have anything to say? Please tell us in the comments or <a href="https://twitter.com/plusdedev">tweet us</a>!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://jenkins.io/2.0">Jenkins 2.0 official page</a></li>
<li><a href="https://www.youtube.com/watch?v=kzRR8XR8hu4">the new setup experience in video</a></li>
<li><a href="http://fr.slideshare.net/andrewbayer/seven-habits-of-highly-effective-jenkins-users-2014-edition/12-Jenkins_User_Conference_San_Francisco">“Rules of Jenkins”</a></li>
<li><a href="https://www.docker.com/sites/default/files/UseCase/RA_CI%20with%20Docker_08.25.2015.pdf">slides from docker.com (2015/07)</a></li>
</ul>

Julien Tanay <julien.tanay@canal-plus.com> - http://julientanay.com

Meet Bob: conversation-driven development (2016-02-26)
https://developers.canal-plus.com/blog/meet-bob

<p>Many teams here at Canal+ use their own ChatOps software to handle repetitive tasks and/or notifications through their favorite chat app (from IRC to Slack). From Hubot to custom solutions, they vary with users’ affinities for programming languages (Hubot scripts are written in CoffeeScript, Lita’s are in Ruby) and chat platforms (some bots are not compatible with the XMPP protocol).</p>
<p>The idea of a bot spamming an IRC channel is not new. We (developers) are used to that good old funny quiz bot hanging around in our chatroom at the end of the day. What’s new here is that these bots are connected to real-life, developer-oriented stuff: think test tools, deployment orchestrators or nerdy webcomics.</p>
<p>For the last couple of months, I’ve been working on providing my team (the CANAL+ Innovation team) with some cool DevOps tools and practices. Last week, we adopted a new robotic gentleman into the family, loosely named Bob*. Let’s introduce him.</p>
<h2 id="the-urgent-need-for-automation">The urgent need for automation</h2>
<p>Today, major parts of our jobs tend to be automated. Many companies making their digital shift try to make conversations between coworkers easier. Most of the time, they start by trying to reduce the amount of email we send and receive every day (from dozens to hundreds, depending on what your job consists of on a daily basis). Software like <a href="https://slack.com/">Slack</a>, <a href="https://www.hipchat.com/">Hipchat</a> or <a href="https://discordapp.com/">Discord</a> is a good candidate for this job. People spend less time typing long emails and just “chat” with their colleagues.</p>
<p>At the same time, the DevOps folks are trying to automate the testing and deployment (among other things) of our applications. But this automation comes with a trade-off: you have to learn new tools (each with its own CLI or web UI), new dialects, etc. Why can’t we just use the tool everybody already knows?</p>
<p><strong>Here comes the ChatOps magic, bringing your shiny tools into the conversation</strong>. You will never again have to ask your favorite developer to “open your browser, go to the [devops_tool] web UI, click here, and here, and on that big green button” (true story). Let them chat with the tool!</p>
<p>From a developer’s point of view, it lets you stay focused on the important things: the code and the value it creates for the people (customers?) who use the underlying services. Code-centric companies like GitHub went “officially” into the ChatOps world <a href="https://github.com/blog/968-say-hello-to-hubot">five years ago or so</a> and use these little robotic dudes on a daily basis to tackle annoying, repetitive tasks. It’s not too late to offer your team a virtual, smart and (not-so-)polite friend.</p>
<h2 id="a-devops-best-friend--err">A DevOps’ best friend: Err</h2>
<p>Err was created by <a href="https://twitter.com/gbin">@gbin</a> and is maintained by a bunch of cool people around the world (come say hello on <a href="https://gitter.im/errbotio/errbot">gitter</a>!). Quoting the official website, it is “a chatbot, a daemon that connects to your favorite chat service and brings your tools into the conversation.”</p>
<p>Errbot is written in Python, and DevOps engineers are (often) native Python speakers, which makes it really easy to port large amounts of handcrafted scripts to Errbot “plugins”. The anatomy of a minimal Errbot plugin looks like this snippet:</p>
<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kn">from</span> <span class="nn">errbot</span> <span class="kn">import</span> <span class="n">BotPlugin</span><span class="p">,</span> <span class="n">botcmd</span>
<span class="k">class</span> <span class="nc">MeetBob</span><span class="p">(</span><span class="n">BotPlugin</span><span class="p">):</span>
    <span class="s">"""Bob is a polite plugin for Errbot"""</span>

    <span class="o">@</span><span class="n">botcmd</span>
    <span class="k">def</span> <span class="nf">hello</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">msg</span><span class="p">,</span> <span class="n">args</span><span class="p">):</span>
        <span class="s">"""Wish you a good day"""</span>
        <span class="k">return</span> <span class="s">"Good day, Sir!"</span>
</code></pre></div></div>
<p>See? Really simple! Errbot provides the <code class="language-plaintext highlighter-rouge">@botcmd</code> decorator for standard messaging functions, plus a bunch of other cool APIs like a scheduler for periodic tasks (polling, message broadcasting, …), an embedded webserver to handle custom webhooks (from GitHub, Jenkins, …) and <a href="http://errbot.io/features.html#core-features">much more</a>. For a more complete example, check out the <a href="https://github.com/Djiit/err-meetup">err-meetup</a> plugin I wrote to interact with the <a href="https://meetup.com">meetup.com</a> API, e.g.:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>me : !meetup next Paris-py-Python-Django-friends
BOB : No upcoming events.
me : ;-(
</code></pre></div></div>
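The bot-command pattern behind this transcript is easy to picture: a dispatcher maps the word after the <code>!</code> prefix to a handler function. Here is a toy sketch of that pattern in plain Python; it illustrates the routing idea only, not Errbot’s actual internals (Errbot wires this up for you via <code>@botcmd</code>):

```python
# Toy sketch of the bot-command pattern a ChatOps bot automates:
# map "!command args" to a handler function. Illustrative only.
def hello(args):
    return "Good day, Sir!"

def meetup_next(args):
    # A real plugin would query the meetup.com API here.
    return "No upcoming events."

COMMANDS = {"hello": hello, "meetup": meetup_next}

def dispatch(line):
    """Route a chat line to a handler, or ignore it."""
    if not line.startswith("!"):
        return None  # not addressed to the bot
    name, _, args = line[1:].partition(" ")
    handler = COMMANDS.get(name)
    return handler(args) if handler else "Unknown command."

print(dispatch("!meetup next Paris-py-Python-Django-friends"))
```

Everything else a real bot adds (connecting to the chat service, help text, access control) layers on top of this simple routing core.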
<p>Our very own Errbot (“Bob”) helps us be more productive by joining the conversation in our chatrooms, listening to everything we say and eventually triggering a Python function. You don’t need to bring your geeky terminal here: just ask Bob and he will politely execute your order or answer your question (about life, the universe and everything).</p>
<h2 id="making-working-with-jenkins-fun-again">Making working with Jenkins fun again</h2>
<p>Working with Jenkins is, most of the time, a pain in the *** for the average, code-focused developer who is not savvy with Jenkins’ old-fashioned UI. You can’t force people to use your shiny new tool if they don’t want to - even if you think it will automate part of their job, like testing and building their code in Jenkins’ case. ChatOps software allows them to, well, “chat” with Jenkins.</p>
<p>There was an old, unmaintained Errbot-to-Jenkins plugin, only compatible with Python 2. So I decided to fork it and make it usable with the latest versions of Python, Errbot and Jenkins, adding some cool features along the way. This was a test run of sorts, to see how easy it is to hack together a plugin for Errbot. Basically, it uses the great <code class="language-plaintext highlighter-rouge">python-jenkins</code> package available on PyPI to communicate with Jenkins, and the Errbot embedded webserver to handle incoming webhooks. You’ll find the complete source <a href="https://github.com/Djiit/err-jenkins">here</a>.</p>
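On the webhook side, the job essentially boils down to turning the JSON payload Jenkins posts into the one-line message Bob sends back to the chatroom. A minimal sketch; the payload field names below are an assumption modeled on the Jenkins Notification plugin’s JSON format, not necessarily what the plugin receives verbatim:

```python
import json

# Hypothetical Jenkins webhook payload (field names are an assumption
# modeled on the Jenkins Notification plugin's JSON format).
payload = json.loads("""
{
  "name": "myProject",
  "build": {"number": 42, "phase": "FINISHED", "status": "SUCCESS"}
}
""")

def format_build_message(payload):
    """Turn a webhook payload into the chat line the bot posts back."""
    build = payload["build"]
    cheer = "Yeah!" if build["status"] == "SUCCESS" else "Ouch."
    return "Build #{number} {status} for Job {name}! {cheer}".format(
        number=build["number"], status=build["status"],
        name=payload["name"], cheer=cheer)

print(format_build_message(payload))
```

The Errbot embedded webserver receives the POST, a function like this shapes the message, and the bot broadcasts it to the right chatroom.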
<p>Here is an example of our daily workflow for small projects:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>me : Bob, build myProject {'branch': 'feature-mysuperfeature'}
BOB : Your job should begin shortly: [link-to-myproject-build]
(...)
BOB : Build #42 SUCCESS for Job myProject! Yeah!
me : thx mate!
</code></pre></div></div>
<p>And voilà! Everybody can chat with Bob, in our chatrooms or in a one-to-one conversation over Slack or IRC. This is just a starting point: we can enable Bob to do many, many more things with plain Python scripts, and that sounds terribly fun (to me ;-) ).</p>
<p>Have any ideas or comments about Bob? Please tell us on <a href="https://twitter.com/plusdedev">Twitter</a>!</p>
<h2 id="bob-list-links">Bob, list links</h2>
<ul>
  <li>Errbot: <a href="http://errbot.io">http://errbot.io</a></li>
  <li>Plugins created by the Errbot community: <a href="https://github.com/errbotio/errbot/wiki">https://github.com/errbotio/errbot/wiki</a></li>
  <li>ChatOps at GitHub: <a href="https://www.youtube.com/watch?v=NST3u-GjjFw">https://www.youtube.com/watch?v=NST3u-GjjFw</a></li>
  <li>ChatOps everywhere: <a href="https://www.reddit.com/r/chatops/">https://www.reddit.com/r/chatops/</a></li>
</ul>
<hr />
<p><em>Julien Tanay <a href="mailto:julien.tanay@canal-plus.com">julien.tanay@canal-plus.com</a></em></p>
<p>* I admit it: he takes his name from Bob the Builder. Shame on me.</p>

Julien Tanay <julien.tanay@canal-plus.com> - http://julientanay.com

Install NGINX reverse proxy with GitHub’s OAuth2 (2015-11-07)
https://developers.canal-plus.com/blog/install-nginx-reverse-proxy-with-github-oauth2

<p>We at CANAL PLUS have many applications hosted on Amazon EC2. It is easy to set up, and you can test and trash your instances as many times as you want.</p>
<p>But it can also get a bit more complicated when you want these services to be used only by people in your organization. I was looking for an elegant way to restrict access to some internal dashboards and services without having to manage a new login/password dictionary.</p>
<p>As we use GitHub for our public and private repositories, we decided to set up a reverse proxy with nginx and GitHub’s OAuth2 authentication service.</p>
<p><img src="http://dandelion.github.io/slides/dandelion-0.10.0/assets/images/logo_github_small.gif" alt="" /></p>
<p>As I spent some time getting to a (finally) working configuration, I hope this article may help some of you.</p>
<p>I found a Go program, <a href="https://github.com/bitly/oauth2_proxy">oauth2_proxy</a>, that integrates with nginx and deals with the whole OAuth protocol for you. Tadaaa!</p>
<p>The principle is fairly simple. You set up an nginx reverse proxy that receives incoming requests. It internally forwards these requests to oauth2_proxy, which checks your GitHub credentials and then “redirects” the traffic to your internal (upstream) servers.</p>
<p><img src="https://cloud.githubusercontent.com/assets/45028/8027702/bd040b7a-0d6a-11e5-85b9-f8d953d04f39.png" alt="" /></p>
<p>As a result, you will have to log in with your GitHub credentials before you can access the services protected by oauth2_proxy.</p>
<p><img src="https://cloud.githubusercontent.com/assets/45028/4970624/7feb7dd8-6886-11e4-93e0-c9904af44ea8.png" alt="" /></p>
<blockquote>
<p>OAuth2 is a protocol that lets external apps request authorization to private details in a user’s GitHub account without getting their password. This is preferred over Basic Authentication because tokens can be limited to specific types of data, and can be revoked by users at any time.</p>
</blockquote>
<p>Source: <a href="https://developer.github.com/v3/oauth/">Github Oauth API</a>.</p>
<h2 id="register-a-github-application">Register a GitHub application</h2>
<p>A registered OAuth application is assigned a unique Client ID and Client Secret. The Client Secret should not be shared. You may create a personal access token for your own use or implement the web flow below to allow other users to authorize your application.</p>
<p>Go to the <a href="https://github.com/settings/applications/new">Register a new OAuth application</a> page and fill all needed fields.</p>
<p>You will be given a unique Client ID and Client Secret that will be used by the oauth2_proxy service.</p>
<blockquote>
<p>People who may want to have more details can check the <a href="https://developer.github.com/v3/oauth/">Github Oauth API</a>.</p>
</blockquote>
<h2 id="install-and-configure-nginx">Install and configure nginx</h2>
<p><img src="http://www.myiconfinder.com/uploads/iconsets/256-256-cf2ed3956a3a1484f83ed20d7e987f21.png" alt="" /></p>
<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>apt-get <span class="nb">install </span>nginx
</code></pre></div></div>
<p>Then edit the configuration file <code>/etc/nginx/sites-enabled/default</code> to set up proxy pass to oauth2_proxy.</p>
<pre>
server {
    listen 80;
    server_name your.company.com;

    location / {
        proxy_pass http://127.0.0.1:4180;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 1;
        proxy_send_timeout 30;
        proxy_read_timeout 30;
    }
}
</pre>
<p>This block sends incoming requests on port 80 to 127.0.0.1:4180, the default listening port of oauth2_proxy.</p>
<p>Once oauth2_proxy has authenticated the connection, it proxies the request to an “upstream server”.</p>
<p>In this config, my upstream server is a simple HTTP static file server listening on localhost:8090.
Files are served from the <code>/var/www</code> directory.</p>
<pre>
server {
    listen 8090;
    root /var/www;

    location / {
        try_files $uri $uri/ index.html index.php =404;
    }
}
</pre>
<p>Then restart nginx:</p>
<pre>
sudo service nginx restart
</pre>
<h2 id="install-and-configure-oauth2_proxy">Install and configure oauth2_proxy</h2>
<p>Download the <a href="https://github.com/bitly/oauth2_proxy/releases">prebuilt binary</a> (current release is v2.0.1), or build it with <code>go get github.com/bitly/oauth2_proxy</code>, which will put the binary in <code>$GOPATH/bin</code>.</p>
<p>Untar the archive, check the hash, and copy the oauth2_proxy executable to <code>/usr/bin</code>:</p>
<pre>
tar xzvf oauth2_proxy-2.0.1.linux-amd64.go1.4.2.tar.gz
md5sum oauth2_proxy
6be4b7734898081ed30558fff38b80cb oauth2_proxy
sudo cp oauth2_proxy-2.0.1.linux-amd64.go1.4.2/oauth2_proxy /usr/bin/
</pre>
<p>I use the following configuration to set GitHub as the OAuth2 provider, and to restrict access to members of the CANALPLUS GitHub organisation only. See the <a href="https://github.com/bitly/oauth2_proxy/blob/master/README.md">oauth2_proxy README</a> for more options:</p>
<pre>
oauth2_proxy -client-id=CLIENT_ID_PROVIDED_BY_GITHUB \
    -client-secret=SECRET_KEY_PROVIDED_BY_GITHUB \
    -provider=github \
    -email-domain=* \
    -upstream=http://127.0.0.1:8090 \
    -cookie-secret=secretsecret \
    -login-url=https://github.com/login/oauth/authorize \
    -github-org=yourcompany \
    -cookie-domain=your.company.com \
    -cookie-secure=false
</pre>
<blockquote>
<p>Note: you can also restrict access to a specific Github team under your organisation.</p>
</blockquote>
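<p>This is done with an extra flag on top of the organisation one; the team name below is of course a placeholder:</p>
<pre>
oauth2_proxy ... \
    -github-org=yourcompany \
    -github-team=yourteam
</pre>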
<h2 id="configure-oauth2_proxy-as-a-linux-service">Configure oauth2_proxy as a Linux service</h2>
<p>Create a new script in <code>/etc/init.d/oauth2_proxy</code> and make it executable (<code>sudo chmod +x /etc/init.d/oauth2_proxy</code>):</p>
<pre>
#!/bin/sh
### BEGIN INIT INFO
# Provides:          oauth2_proxy
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start daemon at boot time
# Description:       Enable service provided by daemon.
### END INIT INFO

cmd="oauth2_proxy -client-id=CLIENT_ID_PROVIDED_BY_GITHUB \
    -client-secret=SECRET_KEY_PROVIDED_BY_GITHUB \
    -provider=github \
    -email-domain=* \
    -upstream=http://127.0.0.1:8090 \
    -cookie-secret=secretsecret \
    -login-url=https://github.com/login/oauth/authorize \
    -github-org=yourcompany \
    -cookie-domain=your.company.com \
    -cookie-secure=false"

dir="/"
user=""
name=`basename $0`
pid_file="/var/run/$name.pid"
stdout_log="/var/log/$name.log"
stderr_log="/var/log/$name.err"

get_pid() {
    cat "$pid_file"
}

is_running() {
    [ -f "$pid_file" ] && ps `get_pid` > /dev/null 2>&1
}

case "$1" in
    start)
        if is_running; then
            echo "Already started"
        else
            echo "Starting $name"
            cd "$dir"
            echo $cmd
            if [ -z "$user" ]; then
                sudo $cmd >> "$stdout_log" 2>> "$stderr_log" &
            else
                sudo -u "$user" $cmd >> "$stdout_log" 2>> "$stderr_log" &
            fi
            echo $! > "$pid_file"
            if ! is_running; then
                echo "Unable to start, see $stdout_log and $stderr_log"
                exit 1
            fi
        fi
        ;;
    stop)
        if is_running; then
            echo -n "Stopping $name.."
            kill `get_pid`
            # {1..10} is a bashism; use a portable loop under /bin/sh
            for i in $(seq 1 10)
            do
                if ! is_running; then
                    break
                fi
                echo -n "."
                sleep 1
            done
            echo
            if is_running; then
                echo "Not stopped; may still be shutting down or shutdown may have failed"
                exit 1
            else
                echo "Stopped"
                if [ -f "$pid_file" ]; then
                    rm "$pid_file"
                fi
            fi
        else
            echo "Not running"
        fi
        ;;
    restart)
        $0 stop
        if is_running; then
            echo "Unable to stop, will not attempt to start"
            exit 1
        fi
        $0 start
        ;;
    status)
        if is_running; then
            echo "Running"
        else
            echo "Stopped"
            exit 1
        fi
        ;;
    *)
        echo "Usage: $0 {start|stop|restart|status}"
        exit 1
        ;;
esac

exit 0
</pre>
<p>Then make it start with Linux, and start the program:</p>
<pre>
sudo update-rc.d oauth2_proxy defaults 95 10
sudo service oauth2_proxy start
</pre>
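<p>To sanity-check the whole chain, query the service status and make an unauthenticated request against your domain; you should be redirected towards the GitHub sign-in flow rather than see your files:</p>
<pre>
sudo service oauth2_proxy status
curl -I http://your.company.com/
</pre>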
<h2 id="et-voila-">Et voilà !</h2>
<p>Files served under <code>/var/www</code> now require you to log in with your GitHub credentials. Only members of your organisation will be allowed to access the server.</p>Jean-Thierry Bonhommejean-thierry.bonhomme@canal-plus.comWe at CANAL PLUS have many applications hosted on Amazon EC2. It is easy to set up and you can test and trash your instances as many times as you want.Analysing the state of a set-top box using its video output2015-09-23T00:00:00+02:002015-09-23T00:00:00+02:00https://developers.canal-plus.com/blog/analysing-the-state-of-a-set-top-box-using-its-video-output<p>One of the challenges the software validation team faces is knowing the state of a given set-top box without having to look at its output. This is especially useful when automating tests, since we’ll be sending a command to the box and then checking its state to verify that the command was successfully executed.
A software called R7Valid was developed for this purpose. It runs multiple tests automatically on set-top boxes, using an infrared LED to emulate remote control commands, and then retrieves the state of the set-top box through external APIs.</p>
<center>
<img src="http://i.imgur.com/E5B8rZg.png" />
</center>
<p>The problem with this approach is that the set-top box only exposes a limited amount of information through its API. We can, for example, retrieve which view is displayed on the screen, or whether the video is paused, but we can’t know for sure what is actually being displayed on the screen.</p>
<p>To solve this, we developed a solution capable of telling us the state of the video being displayed at a given moment, using a few photoresistors, an Arduino and an Ethernet shield.</p>
<p>The photoresistors used were TinkerKit LDR sensors, plugged directly into the analog inputs of the Arduino. The output of these sensors varies from 0V to 5V depending on the intensity of the light they’re exposed to. We then fixed one of these sensors to a TV screen using a cardboard box.</p>
<center>
<img src="http://i.imgur.com/iQnBNlJ.jpg" height="250" />
<img src="http://i.imgur.com/abZRMTF.jpg" height="350" />
</center>
<p>We then developed a simple HTTP server and API running on the Arduino, so we could send and receive data from it, and started analyzing the signal received in different scenarios:</p>
<p><img src="http://i.imgur.com/3bYNGqm.png" alt="signal" /></p>
<p>As we can see from these signals, it’s not hard to tell these situations apart. The first signal shows changes in the intensity of the received light, and therefore that the image is moving. The second shows a fixed light intensity that doesn’t change: we are either seeing a freeze in the video or a menu being shown. The last case is a dark screen, which gives a constant low-intensity value on the sensor.</p>
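<p>The post does not detail the exact decision logic, but the three cases above can be sketched as a small classification function over a window of successive sensor readings. The thresholds here are purely illustrative, not the ones used by TVduino:</p>

```cpp
// Classify the screen state from n successive light-sensor readings
// (Arduino analogRead values, 0..1023). Thresholds are illustrative.
enum ScreenState { DARK, FROZEN, LIVE };

ScreenState classify(const int *samples, int n) {
    int minV = samples[0], maxV = samples[0];
    long sum = 0;
    for (int i = 0; i < n; i++) {
        if (samples[i] < minV) minV = samples[i];
        if (samples[i] > maxV) maxV = samples[i];
        sum += samples[i];
    }
    long mean = sum / n;
    if (maxV - minV > 50) return LIVE;   // intensity varies: moving image
    if (mean < 100)       return DARK;   // constant and low: dark screen
    return FROZEN;                       // constant but lit: freeze or menu
}
```

<p>With two sensors, the same function would simply run on both inputs, flagging a freeze only when both windows agree.</p>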
<p>Since part of the screen sometimes doesn’t move, a live signal could be misclassified as a freeze if that happened in the area under the cardboard. To mitigate this, we added another sensor covering a peripheral area of the screen, which has reduced classification errors.</p>
<p>This approach can be used for multiple purposes, such as checking that the set-top box has executed a command, checking that a live stream is showing on the TV, and even measuring the time needed to switch from one channel to another.
The next step is to integrate this solution into R7Valid to improve the automatic tests and build a more reliable tool for testing our software automatically.</p>
<p>You can also easily modify the source code to create your own REST server on an Arduino for any application where connectivity is needed. Below is the main file of the project; as you can see, it is really easy to create your own routes for the API.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>void getVideo(EthernetClient *client, char args[]){
    int v1 = analogRead(A4);
    int v2 = analogRead(A5);
    client->print("{\"video1\":");
    client->print(v1);
    client->print(",\"video2\":");
    client->print(v2);
    client->println("}");
}

void setup() {
    Serial.begin(9600);
    myServer = new restServer(mac, ip, gateway, subnet, 80);
    delay(1000);
    // create routes on our REST server
    myServer->addRoute("/video", GET, &getVideo);
    myServer->addRoute("/getlivestatus", POST, &getLiveStatus);
    myServer->addRoute("/status", POST, &stbState);
}

void loop() {
    myServer->serve();
}
</code></pre></div></div>
<p>To add new methods all you have to do is to create a new callback function:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>void newcallback(EthernetClient *client, char args[]){
    // TODO: your code here!
}
</code></pre></div></div>
<p>And then on the <em>setup()</em> method you can add a new route:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>myServer->addRoute("/newroute", GET, &newcallback);
</code></pre></div></div>
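<p>Once the sketch is running, querying the <code>/video</code> route returns the two raw sensor values as JSON. The IP address and values below are illustrative:</p>
<pre>
curl http://192.168.1.50/video
{"video1":512,"video2":498}
</pre>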
<p>The source code of the software and libraries are available on GitHub: <a href="https://github.com/canalplus/tvduino">TVduino</a></p>Filipe Caldasfcaldas@canal-plus.comDASHme - The rx-player companion2015-07-15T00:00:00+02:002015-07-15T00:00:00+02:00https://developers.canal-plus.com/blog/dashme-the-rx-player-companion<p>As you’ve probably seen, a few weeks ago we released an HTML5 player based on reactive programming: <a href="https://github.com/canalplus/rx-player">RX Player</a>.
This is quite nice if you want to play your own content on your website, assuming you have some. But if you don’t, well, it’s kind of useless.</p>
<p>This is where DashMe comes in handy.</p>
<p>The idea behind DashMe is to convert any H264-encoded video into DASH content. It generates a DASH manifest and MP4 chunks from almost any video format (AVI, MOV, MKV, Smooth Streaming, …) and exposes them through a simple REST interface. Combined with the RX Player, you can therefore stream and play your own content, converted and served by DashMe, directly on your website with practically no hassle.</p>
<p>This project started when we needed to test our player: we wanted to generate content for the RX Player without having to use GPAC and install a new HTTP server to serve it. It then evolved into an experimentation platform to play around with DASH.</p>
<p>Behind the curtain, DashMe uses <a href="https://www.ffmpeg.org/">FFMPEG</a> or <a href="https://libav.org/">LIBAV</a> with <a href="https://golang.org/">Golang</a>.
Golang is a new and interesting language, especially for its concurrency primitives and its complete standard library. For example, all the HTTP serving is done using only the standard library. The built-in concurrency primitives make it easy to spawn concurrent tasks and retrieve their results. Also, because it’s a compiled language, its performance is generally better than nodejs or python, and it links easily to C code and shared libraries. Thanks to this, we could integrate FFMPEG/LIBAV, benefit from their video format parsing, and easily extract the H264 samples before repacking them into fragmented MP4.
Nevertheless, it has a few drawbacks, some you can work around, some you can’t. For example, its standard library can be considered heavy, and there’s nothing you can do about that. It also has a really simple (dumb?) garbage collector; in our case, where we parse potentially large videos, this can be disastrous. Fortunately, with some tweaking in the code you can reduce this issue.</p>
<p>So why would you be interested in DashMe? I can think of a few reasons:</p>
<ul>
<li>To test or use the RX Player when you don’t have any content.</li>
<li>To discover Golang and what can be done with this new language.</li>
<li>To better understand the DASH format.</li>
</ul>
<p>Whatever your reason, don’t hesitate to <a href="https://github.com/canalplus/DashMe">take a look</a>.</p>Canal Labscanallabs@canal-plus.comhttp://www.canalplus.fr/Introducing rx-player2015-06-30T00:00:00+02:002015-06-30T00:00:00+02:00https://developers.canal-plus.com/blog/introducing-rx-player<p>A few months ago, as Google announced the end of Silverlight support in Chrome, CANAL+ started working on a new player, using the new HTML5 APIs (MSE, EME, …) to play video.</p>
<p>This player is now used by the CANAL+ users within <a href="http://live.mycanal.fr/tv/">myCANAL website</a>.</p>
<p>Today, we are excited to open source it and hope that it will help the community to play video contents on the web.</p>
<p>Building a streaming video player in JavaScript is a complex task, due to the numerous interactions with the outside world it has to deal with: the user seeking to a particular moment of a movie, changing the current channel, or network congestion. The video player is the centerpiece of our applications; it needs to adapt very quickly to any of these inputs and stay resilient to various errors.</p>
<p>Many current video player implementations rely on classical object-oriented hierarchy and imperative event callbacks with shared mutable objects to manage all these asynchronous tasks and states. We found this approach to be the wrong abstraction to handle the complexity of a video player.</p>
<p>Rx, on the contrary, provides elegant interfaces and operators to compose asynchronous tasks together by representing changing states as observable streams of values. It also comes with a <strong>cancellation</strong> contract, so that every asynchronous side effect can be properly disposed of when discarded by the system (this is still <a href="https://github.com/whatwg/fetch/issues/27">a controversial issue in the JS community</a>).</p>
<p>This allowed us to implement some nice features quite easily. For instance, because in the rx-player all asynchronous tasks are encapsulated in observable data-structures, we were able to add a transparent <a href="https://github.com/canalplus/canal-js-utils/blob/master/rx-ext.js#L73-L100">retry system</a> with a simple observable operator to declaratively handle any failure and replay the whole process.</p>
<p>Another example is the way we abstracted our transport layer into an observable pipeline, allowing us to support different types of streaming systems, each with its own asynchronous specificities. And because Rx is message-driven, this encapsulation allows us to isolate the transport I/O into a WebWorker without any effort, or to add offline support for any pipeline implementation.</p>
<p>We will write more about it in the coming weeks. For now, just have a look on <a href="https://github.com/canalplus/rx-player">github</a>.</p>Canal Labscanallabs@canal-plus.comhttp://www.canalplus.fr/