<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Monkey see, monkey do; occasionally monkey learn.]]></title><description><![CDATA[random posts from a semi-sentient simian (heroku)]]></description><link>https://gatsby.ghost.org/</link><image><url>https://gatsby.ghost.org/favicon.png</url><title>Monkey see, monkey do; occasionally monkey learn.</title><link>https://gatsby.ghost.org/</link></image><generator>Ghost 2.9</generator><lastBuildDate>Sat, 14 Nov 2020 00:31:50 GMT</lastBuildDate><atom:link href="https://gatsby.ghost.org/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Using Ghost as a headless CMS]]></title><description><![CDATA[In a previous post I described how I investigated hosting ghost for free using Heroku and how this will allow me to potentially pull data from Ghost and build it into a static site which I can also get hosted for free (or at least a lot cheaper than what I am currently paying). I did spend some time investigating various site scrapers that would pull the ghost content directly and build a static site and though I could get these (mostly) working locally I couldn't find anyway to run them simply/]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/using-ghost-as-a-headless-cms/</link><guid isPermaLink="false">Ghost__Post__5e116f3d44207c001e070b94</guid><category><![CDATA[ghost]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Sun, 05 Jan 2020 07:34:16 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1535271968495-080edd1ba35c?ixlib=rb-1.2.1&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=2000&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1535271968495-080edd1ba35c?ixlib=rb-1.2.1&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=2000&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Using Ghost as a headless CMS"/><p>In a previous post I described how I investigated hosting ghost for free using Heroku and how this will allow me to potentially pull data from Ghost and build it into a static site which I can also get hosted for free (or at least a lot cheaper than what I am currently paying). I did spend some time investigating various site scrapers that would pull the ghost content directly and build a static site and though I could get these (mostly) working locally I couldn't find anyway to run them simply/cheaply in the cloud so that I could automate the process and make the effort of getting the scraper work properly worth it.</p><p>Ghost however refers to itself as a headless CMS and so I thought that I'll look into what the community is doing in this space and this is how I came across Gatsby and the deployment/hosting platform called Netlify, as I had a few false starts I'll describe how I got it to work.</p><h2 id="installing-and-running-gatsby">Installing and running Gatsby</h2><p>First download the Gatsby CLI tool using npm and then prepare the starter project that is pre-configured to pull data from Ghost via its API. 
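</p><p>The starter pulls everything through Ghost's Content API; once you have the API key and URL from the custom integration described below, a quick way to sanity-check them outside of Gatsby is a script along these lines (the URL, key and version segment are placeholders for your own blog, not values from this post):</p><pre><code class="language-javascript">// Sketch only: fetch a handful of published posts from the Ghost Content API.
// Replace the URL and key with the values from your own custom integration;
// the /v3/ segment depends on the Ghost version you are running.
const fetch = require("node-fetch");

const apiUrl = "https://your-ghost-blog.example.com";
const contentApiKey = "YOUR_CONTENT_API_KEY";

fetch(`${apiUrl}/ghost/api/v3/content/posts/?key=${contentApiKey}&limit=5`)
  .then((res) => res.json())
  .then((data) => {
    // each post carries the title, slug, html, tags, etc. that Gatsby will build pages from
    data.posts.forEach((post) => console.log(post.title, post.slug));
  })
  .catch((err) => console.error("Content API request failed", err));</code></pre><p>If that request returns your posts as JSON then the same URL and key are the ones you will plug into the starter below.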
</p><pre><code class="language-cmd">npm install -g gatsby-cli
gatsby new gatsby-blog-ghost https://github.com/TryGhost/gatsby-starter-ghost</code></pre><p> This will create a folder called <code>gatsby-blog-ghost</code> and inside it will be, after a little configuration, all that is needed to build a static site from the Ghost platform. </p><ol><li>From the Ghost admin portal create a custom integration</li><li>Update the production entry in <code>.ghost.json</code> (found in the <code>gatsby-blog-ghost</code> folder) with the supplied API key and API URL</li><li>Test the settings work by running the following commands and navigating to the supplied URL e.g. http://localhost:9000</li></ol><pre><code>gatsby build
gatsby serve</code></pre><p>Work in progress...</p><p>Your feedback is appreciated.</p>]]></content:encoded></item><item><title><![CDATA[Hosting Ghost For Free]]></title><description><![CDATA[I've been running my blog on a paid Ghost [https://github.com/TryGhost/Ghost] platform for several years now but as I rarely blog I sort of chafe at the hosting cost, especially as this is now the only bit of hosting that is actually costing me. I've toyed with the idea of a static site and as there are a number of static site generators out there that support Ghost, I can potentially use one of these generators to push the site to an AWS S3 bucket for static web site hosting [https://docs.]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/hosting-ghost-for-free/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d765</guid><category><![CDATA[ghost]]></category><category><![CDATA[heroku]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Sat, 04 Jan 2020 10:02:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1421885568509-8d5319e54d89?ixlib=rb-1.2.1&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=2000&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1421885568509-8d5319e54d89?ixlib=rb-1.2.1&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=2000&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Hosting Ghost For Free"/><p>I've been running my blog on a paid <a href="https://github.com/TryGhost/Ghost">Ghost</a> platform for several years now but as I rarely blog I sort of chafe at the hosting cost, especially as this is now the only bit of hosting that is actually costing me. I've toyed with the idea of a static site and as there are a number of static site generators out there that support Ghost, I can potentially use one of these generators to push the site to an AWS S3 bucket for <a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html">static web site hosting</a>. </p><p>If I do go down this route it would still require me to self-host Ghost so that I can access the admin site when I need to, and here is the rub as these usually cost money. I also wanted the simplest hosting I could find and since I am planning on using a static site generator at some point then I don't need anything that is permanently online/available so I looked into what it would take to host Ghost on <a href="https://www.heroku.com/home">heroku</a>. </p><h2 id="getting-started">Getting started</h2><p>Well it turned out to be the simplest thing ever as this is virtually a 1-click <a href="https://elements.heroku.com/buttons/snathjr/ghost-on-heroku">deployment</a> and within minutes I had a Ghost (3.X) blog up and running on heroku. 
This particular installation defaults to using <a href="https://cloudinary.com/">Cloudinary</a> for image upload storage but we can reconfigure it to use S3 if we decide to. Once heroku had done its thing all I had to do was create the admin account and work out how to migrate the content. Luckily Ghost has an Export/Import content section in the "Labs" which deals with the majority of the blog content; unfortunately it doesn't have any mechanism to pull the images across so these had to be manually extracted and uploaded again. One thing I noticed was that during the uploading of images, the integration with Cloudinary did some optimisation on the images and the Ghost platform then referenced those optimised images instead; with the originals still available on Cloudinary but now marked with a <code>_o</code> suffix.</p><figure class="kg-card kg-image-card"><img src="https://res-3.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/Capture.png" class="kg-image" alt="Hosting Ghost For Free"/></figure><h2 id="custom-themes">Custom themes</h2><p>The default installation comes with a number of built-in themes which will work fine; however I've spent some time customising the Casper theme over the years so that I can have support for disqus (comments) and algolia (search) and I would rather like to keep those customisations. Unfortunately the heroku integration doesn't allow for persistent saving of an uploaded theme due to the ephemeral nature of the heroku filesystem, so as long as the heroku dyno stays up the uploaded theme will still work but as soon as it stops/restarts the uploaded theme is gone and the blog is broken. </p><p>There is however a way forward but it will require that we maintain our own repository with our custom theme(s) installed and push them to heroku. We can also use this method to keep our copy of ghost up to date should we so wish.</p><p>First I downloaded and installed the <a href="https://devcenter.heroku.com/articles/heroku-command-line">heroku CLI</a> and tested that the blog still worked</p><pre><code class="language-cmd">git clone https://github.com/snathjr/ghost-on-heroku
cd ghost-on-heroku
heroku login
heroku git:remote -a YOURAPPNAME
heroku info
git push heroku master</code></pre><p>I had to use the <code>--force</code> switch when pushing but I wasn't concerned as I knew I could always use the heroku portal to revert back to the last known good deployment should it get messed up.</p><p>Time to add the custom theme. </p><ol><li>I removed the following entries from the <code>.gitignore</code> file so I could actually add the theme to the repository.</li></ol><pre><code class="language-text">content/themes/*
!content/themes/.gitkeep</code></pre><p>2. Next I created a folder for our theme in the <code>/content/themes</code> folder e.g. <code>monkeysee</code> and then added the changes back to the repository and pushed them to heroku.</p><pre><code class="language-cmd">git add .
git commit -m "Important changes" git push heroku master</code></pre><p>Once the changes are pushed, heroku rebuilt the site and once it was completed we could see the custom theme was available for selection, because the theme is now baked in we are unable to delete it via the admin site.</p><p>The result of this little experiment can be found here <a href="https://monkey-see-monkey-do-blog.herokuapp.com/">https://monkey-see-monkey-do-blog.herokuapp.com/</a> and now allows me to now experiment with the next phase which is introducing a static site generator.</p><p>As always, your feedback is appreciated.</p>]]></content:encoded></item><item><title><![CDATA[What is this "waterfall" thing of which you speak?]]></title><description><![CDATA[> I'm going to describe my personal views about managing large software developments. I have had various assignments during the past nine years, mostly concerned with the development of software packages for spacecraft mission planning, commanding and post-flight analysis. In these assignments I have experienced different degrees of success with respect to arriving at an operational state, on-time, and within costs. I have become prejudiced by my experiences and I am going to relate some of thes]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/what-is-this-waterfall-thing-of-which-you-speak-off/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d764</guid><category><![CDATA[agile]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Sun, 17 Mar 2019 22:06:00 GMT</pubDate><media:content url="https://res-5.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/ShiFengWaterFall_002.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><blockquote> <img src="https://res-5.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/ShiFengWaterFall_002.jpg" alt="What is this "waterfall" thing of which you speak?"/><p>I'm going to describe my personal views about managing large software developments. I have had various assignments during the past nine years, mostly concerned with the development of software packages for spacecraft mission planning, commanding and post-flight analysis. In these assignments I have experienced different degrees of success with respect to arriving at an operational state, on-time, and within costs. I have become prejudiced by my experiences and I am going to relate some of these prejudices in this presentation.</p> </blockquote> <p>- Winston W. Royce (Managing the development of large software systems)</p> <!--kg-card-end: markdown--><p>The above quote is the start of the paper that to those of us who were taught any form of IT at university in the 80's and early 90's (and perhaps later) has become synonymous with the software development life cycle (SDLC) model known as "<a href="https://en.wikipedia.org/wiki/Waterfall_model">Waterfall</a>"; whether this association is correct or not I'll leave to the historians, or in this case <a href="https://en.wikipedia.org/wiki/Winston_W._Royce">Wikipedia</a>. 
It does seem to me though that anyone who has actually read the <a href="http://www-scf.usc.edu/~csci201/lectures/Lecture11/royce1970.pdf">paper</a> past page 2, would never make the assertion that a waterfall-like model was something Royce was recommending and what he was describing was actually something rather more familiar to what we try to do today.</p><p>Page 2 of the paper shows the classic waterfall model (or something very similar because Royce never used the term waterfall himself) that I recall having to memorise and recall for an exam.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://res-2.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/image.png" class="kg-image" alt="What is this "waterfall" thing of which you speak?"><figcaption>Waterfall SDLC</figcaption></img></figure><p>By the next few pages the model has gone iterative and by the time you reach the last page, you have the following, which I am going to call Royce's SDLC.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://res-2.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/image-1.png" class="kg-image" alt="What is this "waterfall" thing of which you speak?"><figcaption>Royce's SDLC</figcaption></img></figure><p>Now, if you are one to read the words and not just look at the pretty pictures you will see references to "Prototyping", "Testing" and "Involving the Customer" which I personally don't recall being mentioned during the course and to the current me it also feels a little "agile-y". There is also a long section on documentation (Step 2) of which Royce has very fond of but then he was also dealing with spaceflight and probably peoples' lives. It is interesting to read that even 40 years ago, developers and documentation didn't seem to mix well, actually I think it is people and documentation don't mix and something that we thankfully do less of nowadays and have instead replaced it with living documentation such as wikis and backlogs.</p><h3 id="has-anyone-actually-worked-on-a-waterfall-project">Has anyone actually worked on a waterfall project?</h3><p>I also had the chance recently to reflect over my resume, something that was long overdue, and that brought back a lot of memories about the projects I worked on, what practices we used then, and whether I would call those projects waterfall or something else. My first software development roles were at firms where hardware was part of the product and those engineers I worked with definitely iterated over their designs so it felt right to me that we did the same with software. All the big projects were somewhat iterative in nature, new features being added with regular releases. The first time I recall seeing a "big" specification was when I was on a UK government project in 2003-2005 (EU emissions trading, Kyoto protocol) but even then the development team was trying to use agile processes under the hood; not perfectly admittedly, we were just learning to do the agile-walk.</p><p>Some people claim to have done waterfall as I often get to see resumes with "skills: waterfall, agile, ..." etc but I do wonder if waterfall is an actual real-life process or if it is just used as a placeholder for when the SDLC is heavy-process, not-agile. I say a placeholder but I sometimes wonder if it is a straw man or even a bogeyman as it gives a target for those who dislike the not-agile world. 
I don't believe I have ever worked on a waterfall project myself, well not one that followed the flow that we would typically attribute to waterfall. I have however been on many-a-project whose process flow would look somewhat like the iterative cascade of Figure 10. I am still surprised that the waterfall model even came into being is it just feels unnatural and not how I, as a mostly self-taught developer, have ever thought would be a good idea. The cynic in me though feels it is an ideal process for the sort of company where change control and charging for changes is a big part of their operating/revenue model; it could be though that the charging model I am critical of was because of the single direction of the waterfall model and that backing up a step was so expensive.</p><p>As always, your feedback is appreciated.</p>]]></content:encoded></item><item><title><![CDATA[Debugging NuGet Packages]]></title><description><![CDATA[Have you ever noticed that now we have embraced the micro-services architecture of doing things, instead of one big monolithic repository of code we have it all broken into smaller repositories of single/reduced responsibility... To share these little bits of wisdom we package them up into little parcels that we can import into 100s of other projects distributed across our services. Then, something goes wrong and we can't debug what is going on inside these little gems and we wonder why we did ]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/debugging-nuget-packages/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d761</guid><category><![CDATA[microservices]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Sat, 16 Feb 2019 04:48:18 GMT</pubDate><media:content url="https://res-2.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/mistake-1966448_1280.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-2.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/mistake-1966448_1280.jpg" alt="Debugging NuGet Packages"/><p>Have you ever noticed that now we have embraced the micro-services architecture of doing things, instead of one big monolithic repository of code we have it all broken into smaller repositories of single/reduced responsibility...</p> <p>To share these little bits of wisdom we package them up into little parcels that we can import into 100s of other projects distributed across our services. Then, something goes wrong and we can't debug what is going on inside these little gems and we wonder why we did this and consider if a monolithic repository was all that bad after all.</p> <p>Yeah, me too.</p> <p>So what can we do about it? Well I've picked up a few tricks, forgotten a few, and relearned them again once more. So I am going to note them down now for my future-self.</p> <h2 id="generaldebugging">General debugging</h2> <p>Before we talk about debugging NuGet packages we need to talk about what we need to actually debug our code and how the debugger uses the various assets we have available to help us do that. 
The following items are pretty well common for most compiled languages on windows and linux platforms.</p> <ol> <li>Our source-code: this is the stuff we write and where we are going to make changes should we find issues;</li> <li>The compiled execution: this is the thing we build and execute, usually an assembly;</li> <li>A map-file, this allows us to identify where in our source-code the execution is currently at when debugging.</li> </ol> <p>For .NET this looks like the following</p> <table> <thead> <tr> <th>source-code</th> <th>execution</th> <th>map-file</th> </tr> </thead> <tbody> <tr> <td>C#, F#, VB.NET, ...</td> <td>DLL, EXE</td> <td>PDB, MDB</td> </tr> </tbody> </table> <p>Now when we are debugging locally we have all these things but when we download a NuGet package it is unusual to find a PDB in the package, probably due to size, and it's highly unlikely that we'll see the source code packaged.</p> <h2 id="debuggingnugetpackages">Debugging NuGet packages</h2> <p>If you're making your own NuGet packages then please consider generating and including the PDB files with your compiled assets. It will make your life so, so much easier because you'll have better quality stacktraces should an error occur and you can quickly locate where an issue happened, hopefully avoiding the need to debug in the first place.</p> <p>Having the location of the errors in your stacktrace saves so much time in identifying where to look for the error from which you can then trace the cause.</p> <pre><code>System.InvalidOperationException: L'opération n'est pas valide en raison de l'état actuel de l'objet. File "C:\projects\opencover\main\OpenCover.Framework\Communication\MessageHandler.cs", line 113, col 21, in StandardMessage Int32 StandardMessage(OpenCover.Framework.Communication.MSG_Type, OpenCover.Framework.Manager.IManagedCommunicationBlock, System.Action`2[System.Int32,OpenCover.Framework.Manager.IManagedCommunicationBlock], System.Action`1[OpenCover.Framework.Manager.M... File "C:\projects\opencover\main\OpenCover.Framework\Communication\CommunicationManager.cs", line 60, col 13, in HandleCommunicationBlock ... </code></pre> <p>Before you ask, yes, you can generate PDB files for release builds of your assemblies; also please consider including any intellisense files in your NuGet packages if you have generated them but only if they are suitable for public consumption.</p> <p>Even with the PDB files available you are not going to be able to debug very well, if at all, unless you have access to the source code. You may be lucky as you may already have it downloaded (or cloned) or you can pull it from the relevant source control system. Even then you will still need to make sure your source code matches the code base at the time the assembly was built and that can be a little tricky at times as you'll need to know when it was built and revert your code back to that point in-time.</p> <h2 id="usingresharper">Using ReSharper</h2> <p>If you are using .NET and have the right tooling there is a simpler way. When using <a href="https://visualstudio.microsoft.com/vs/community/">Visual Studio</a> with <a href="https://www.jetbrains.com/resharper/">ReSharper</a> it will simply allow you to debug any assembly without source code as long as the PDB file exists. You can either step right into the method you want to investigate and ReSharper will automatically create the source code for you by using the IL and creating equivalent source code. 
You can also use the <code>Go To Implementation</code> shortcut (usually Ctrl+F12) and start drilling into the source code of the 3rd party assembly placing breakpoints as you go before you start your debugging run.</p> <p>What happens when you don't have a PDB? Well, you can still use the <code>Go To Implementation</code> shortcut. However whilst debugging, when you try to create a breakpoint, it will initially fail but if the PDB can be located i.e. in a <a href="https://docs.microsoft.com/en-us/windows/desktop/debug/symbol-servers-and-symbol-stores">symbol store</a>, or it can be generated dynamically, it will prompt you to enable debugging and then you can continue debugging as before; generated PDBs are, by default, placed in <code>%USERPROFILE%\AppData\Local\Temp\SymbolCache</code>.</p> <p>The one disadvantage of this approach is that the source code you are debugging is not your actual source code but it is a close representation of what it could be i.e. source code that when compiled should produce the same <a href="https://en.wikipedia.org/wiki/Common_Intermediate_Language">IL</a>.</p> <p>I also understand not everyone can afford ReSharper but all is not lost because the above functionality is also available as part of <a href="https://www.jetbrains.com/decompiler/">DotPeek</a> which is free.</p> <h2 id="isthereanotherway">Is there another way?</h2> <p>There does appear to be a solution on the horizon (still in Beta at the time of writing but has strong support) that will allow debugging of packages and it is called <a href="https://github.com/dotnet/sourcelink">"Source Link"</a>, it can be used with most git repositories such as GitHub, Bitbucket and Azure Devops etc and what it does is add a special link into your nuget package to your git repository along with the appropriate <a href="https://git-scm.com/book/en/v2/Git-Internals-Git-Objects">SHA</a>, with this information and a supporting debugger (i.e. Visual Studio) can pull the appropriate source code resources. As I said it is still beta but if you are okay to update all your packages going forward then this is probably the future.</p> <p>Over time I can see that the latest packages on NuGet will start to use this method as it makes sense in that it resolves the issue of stepping through the actual code rather than a close representation and by using the SHA it focuses on the files as they were at the time. 
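</p> <p>For package authors who want to adopt it, enabling Source Link is mostly a small packaging change. The following is only a sketch of what that looks like for an SDK-style project hosted on GitHub (the package version is a placeholder for whatever the current, at the time still beta, release is; other hosts such as Bitbucket or Azure DevOps have their own Microsoft.SourceLink.* provider packages):</p> <pre><code><PropertyGroup>
  <!-- record the repository URL and commit in the package/PDB -->
  <PublishRepositoryUrl>true</PublishRepositoryUrl>
  <!-- embed any source files not tracked by git (e.g. generated code) -->
  <EmbedUntrackedSources>true</EmbedUntrackedSources>
  <!-- produce symbols alongside the package -->
  <IncludeSymbols>true</IncludeSymbols>
  <SymbolPackageFormat>snupkg</SymbolPackageFormat>
</PropertyGroup>
<ItemGroup>
  <!-- placeholder version; use the current release of the provider package -->
  <PackageReference Include="Microsoft.SourceLink.GitHub" Version="1.0.0" PrivateAssets="All" />
</ItemGroup>
</code></pre>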
<p>Here is a <a href="https://docs.microsoft.com/en-us/dotnet/standard/library-guidance/sourcelink">demo</a> of Source Link being used in <a href="https://www.newtonsoft.com/json">Newtonsoft.Json</a>.</p> <h2 id="whatifihavenoaccesstothesourcecode">What if I have no access to the source code?</h2> <p>You can still create a PDB for your assembly using tools that come with Visual Studio and that you can access from your developer command prompt e.g.</p> <pre><code>ildasm /out=target_assembly_source.il target_assembly.dll
ilasm target_assembly_source.il /dll /pdb
</code></pre> <p>Once you have your new DLL and associated PDB you will need to recompile using the newly rebuilt assembly; then you'll be able to step into the code, although you will be stepping through the generated IL code, so this isn't perfect and <a href="http://encyclopedia2.thefreedictionary.com/Your+Mileage+May+Vary">YMMV</a>.</p> <p><img src="https://res-3.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/DebugIL.png" alt="Debugging NuGet Packages"/></p> <p>As always, your feedback is appreciated.</p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Adding a tags page to a Ghost blog]]></title><description><![CDATA[An odd thing about Ghost is though it allows you to tag your posts it doesn't provide an out-of-the-box way to access all the tags as a single view. It is relatively easy to set one up yourself and give it a reasonably consistent look and feel to the main blog. To start you need to enable the host public API, you'll find this in the Labs section of your ghost admin. Then, you need to create a static page with a post url of tags. Now we have to create a special theme file called page-tags.h]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/adding-a-tags-page-to-ghost-blog/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d763</guid><category><![CDATA[ghost]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Mon, 11 Feb 2019 08:36:00 GMT</pubDate><media:content url="https://res-3.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/tags-1285373_1280-1.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-3.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/tags-1285373_1280-1.jpg" alt="Adding a tags page to a Ghost blog"/><p>An odd thing about Ghost is that, though it allows you to tag your posts, it doesn't provide an out-of-the-box way to access all the tags as a single view.</p> <p>It is relatively easy to set one up yourself and give it a look and feel reasonably consistent with the main blog.</p> <p>To start you need to enable the public API; you'll find this in the <em>Labs</em> section of your ghost admin. Then, you need to create a static page with a post url of <em>tags</em>.</p> <p><img src="https://res-5.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/tags-ghost.png" alt="Adding a tags page to a Ghost blog"/></p> <p>Now we have to create a special theme file called <code>page-tags.hbs</code> for that <em>tags</em> page. 
This page will be used to render content for this post instead of the default <code>post.hbs</code>.</p> <pre><code class="language-handlebars">{{!< default}} {{!-- The tag above means - insert everything in this file into the {body} of the default.hbs template --}} {{!-- The big featured header, it uses blog cover image as a BG if available --}} {{#post}} <header class="site-header outer {{#if feature_image}}" style="background-image: url({{feature_image}}){{else}}no-cover{{/if}}"> <div class="inner"> {{> "site-nav"}} <div class="site-header-content"> <h1 class="site-title">{{title}}</h1> </div> </div> </header> {{!-- The main content area --}} <main id="site-main" class="site-main outer"> <style scoped> .inner-page-tags { margin-top: inherit; } @media (min-width: 900px) { .inner-page-tags { margin-top: -8vw; } } </style> <div class="inner inner-page-tags"> <div class="post-feed"> {{#get 'tags' limit='all' include='count.posts' order='count.posts desc'}} {{#foreach tags}} {{!-- The tag below includes the markup for each tag - partials/tag-card.hbs --}} {{> "tag-card"}} {{/foreach}} {{/get}} </div> </div> </main> {{/post}} </code></pre> <p>In the partials folder create another theme file called <code>tag-card.hbs</code>, this file will be used to render each tag card so that it has a similar look and feel to the posts when viewed as a collection.</p> <pre><code class="language-handlebars"><article class="post-card {{post_class}}{{#unless feature_image}} no-image{{/unless}}"> {{#if feature_image}} <a class="post-card-image-link" href="{{url}}"> <div class="post-card-image" style="background-image: url({{feature_image}})"></div> </a> {{/if}} <div class="post-card-content"> <a class="post-card-content-link" href="{{url}}"> <header class="post-card-header"> <h2 class="post-card-title">{{name}}</h2> </header> <section class="post-card-excerpt"> <p>{{description}}</p> <p>A collection of {{plural count.posts empty='posts' singular='% post' plural='% posts'}}</p> </section> </a> </div> </article> </code></pre> <p>Finally add a link to your blog navigation so your visitors can find this new handy page.</p> <p><img src="https://res-3.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/tag2.png" alt="Adding a tags page to a Ghost blog"/></p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Not dead yet...]]></title><description><![CDATA[So it has been a while but I am glad to finally say I have released another version of OpenCover - 4.7.922. The release candidate has been out for a few weeks now with no issues reported so I decided to bite the bullet and get the latest release out; the releases, as always, can be found on Nuget [https://www.nuget.org/packages/OpenCover/], Chocolatey [https://chocolatey.org/packages/OpenCover/4.7.922] and Github [https://github.com/OpenCover/opencover/releases/tag/4.7.922]. 
I'd say most of ]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/not-dead-yet/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d760</guid><category><![CDATA[open cover]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Sun, 10 Feb 2019 08:26:49 GMT</pubDate><media:content url="https://res-1.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/not_dead_yet.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://res-1.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/not_dead_yet.jpg" alt="Not dead yet..."/><p>So it has been a while but I am glad to finally say I have released another version of OpenCover - 4.7.922. </p><p>The release candidate has been out for a few weeks now with no issues reported so I decided to bite the bullet and get the latest release out; the releases, as always, can be found on <a href="https://www.nuget.org/packages/OpenCover/">Nuget</a>, <a href="https://chocolatey.org/packages/OpenCover/4.7.922">Chocolatey</a> and <a href="https://github.com/OpenCover/opencover/releases/tag/4.7.922">Github</a><a>.</a></p><p>I'd say most of the major issues were addressed and OpenCover should have better support for assemblies built using .net core thanks to the wonderful <a href="https://github.com/jbevain/cecil">Mono.Cecil</a> library. I've also hooked in <a href="https://sentry.io">Sentry</a> to collect crash reports, we used to use <a href="https://drdump.com/crash-reporting-system">DrDump</a><a> </a>with 4.6.519 but I feel that Sentry will help us be more proactive in identifying and fixing any issues. If you do have any problems then please raise them on <a href="https://github.com/OpenCover/opencover/issues">issues page</a>. </p><p>There are a few defects/issues that I'll probably have a go at in the next few weeks/months but I can't see any major features being added in the near future, I will however look at and accept any pull-requests that adds value to the project so feel free to dive in.</p><p>I also recently found out that <a href="https://www.ndepend.com">NDepend</a> is also supporting OpenCover coverage files now, so that is a big win to the community that have been asking for that capability over the years.</p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">NDepend v2019.1 has been released with support for VisualStudio solution analysis, support for OpenCover and VS coverage binary format, 9 new security rule and more <a href="https://t.co/5cjhEHJgqz">https://t.co/5cjhEHJgqz</a> <a href="https://twitter.com/hashtag/ndepend?src=hash&ref_src=twsrc%5Etfw">#ndepend</a> <a href="https://twitter.com/hashtag/opencover?src=hash&ref_src=twsrc%5Etfw">#opencover</a> <a href="https://twitter.com/hashtag/visualstudio?src=hash&ref_src=twsrc%5Etfw">#visualstudio</a> #2019 <a href="https://t.co/SEiPmza2Ha">pic.twitter.com/SEiPmza2Ha</a></p>— ndepend (@ndepend) <a href="https://twitter.com/ndepend/status/1090929622303342592?ref_src=twsrc%5Etfw">January 31, 2019</a></blockquote> <script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"/> </figure><p>As always your feedback is appreciated.</p><p/>]]></content:encoded></item><item><title><![CDATA[A spike on `robust` JSON handling in .NET]]></title><description><![CDATA[When working with JSON formatted data in .NET I have always found it frustrating that I am losing something valuable if I want to use some form of contract to help reason about the code and improve 
understanding about the entities being worked upon. What did I just receive? The first issue I often come across is that after I have deserialized the data I simply can't tell the difference between null and undefined (or missing/absent) e.g. if we had the following entity public class User { str]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/introducing-hale-net/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d75e</guid><category><![CDATA[open source]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Tue, 20 Feb 2018 09:28:00 GMT</pubDate><media:content url="https://res-2.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/Comet_Hale_Bopp_Jyv-skyl-.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-2.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/Comet_Hale_Bopp_Jyv-skyl-.jpg" alt="A spike on `robust` JSON handling in .NET"/><p>When working with JSON formatted data in .NET I have always found it frustrating that I am losing something valuable if I want to use some form of contract to help reason about the code and improve understanding about the entities being worked upon.</p> <h6 id="whatdidijustreceive">What did I just receive?</h6> <p>The first issue I often come across is that after I have deserialized the data I simply can't tell the difference between <code>null</code> and <code>undefined</code> (or missing/absent) e.g. if we had the following entity</p> <pre><code class="language-csharp">public class User { string Name { get; set; } int? Age { get; set; } } </code></pre> <p>then if we deserialized either of the following JSON objects into it</p> <pre><code class="language-javascript">{ "Name" : "Arthur Dent", "Age" : null } </code></pre> <p>or,</p> <pre><code class="language-javascript">{ "Name" : "Arthur Dent" } </code></pre> <p>then in both cases, after deserialization, <code>Age</code> is <code>null</code>. Should we wish to serialize that entity we can either serialize <code>Age</code> as <code>null</code> or we can omit it, but, we can't do both i.e. we can never be 100% sure we are creating the same representation that was received and we could be adding or losing information that may have a detrimental effect on a downstream system that consumes the data.</p> <h6 id="idontknowaboutyouyet">I don't know about you yet!</h6> <p>Another issue I often encounter is if the JSON object received has more information than expected e.g.</p> <pre><code class="language-javascript">{ "Name" : "Arthur Dent", "Age" : 42, "Address" : "ZZ9 Plural Z Alpha" } </code></pre> <p>If we deserialize and serialize again, then we will now lose the <code>Address</code> field and this is something we probably want to avoid.</p> <p>In all these cases, there are ways to handle them but I find them tedious to implement and it probably leaves an ungodly mess behind for someone else to pick up.</p> <h5 id="couplingofservices">Coupling of services</h5> <p>In the last example, where an unknown field is potentially lost, this is often solved by ensuring all services have the same contract. When you add/remove fields from a contract we often use a shared package or file and this leads to laborious upgrading of each service or client that uses that contact and then we have to deal with the subsequent deployment and synchronization issues that then ensue. 
This just feels wrong as we should be developing (micro)services that can be independently deployed but now we have created a dependency between these services and have effectively created a <a href="https://www.infoq.com/news/2016/02/services-distributed-monolith">distributed monolith</a>. This problem is probably better explained in this Channel 9 talk "<a href="https://channel9.msdn.com/Blogs/Subscribe/DataContract-Coupling-in-Messaging">Data/Contract Coupling in Messaging</a>".</p> <h6 id="robustnessprinciple">Robustness Principle</h6> <p>We have better things to do with our time. What I needed was a way to take a typed contract e.g. using an interface that describes the entities I want to work with, that I can deserialize JSON objects into but be tolerant of what it reads whilst preserving the original JSON structure unless I have actually modified it; this sounds similar to the robustness principle.</p> <p>The Robustness Principle is also known as <a href="https://en.wikipedia.org/wiki/Robustness_principle">Postel's Law</a> and simply states:</p> <blockquote> <p>Be conservative in what you do, be liberal in what you accept from others</p> </blockquote> <p>In the world of services (and now micro-services I suppose) this is sometimes referred to as the <a href="https://martinfowler.com/bliki/TolerantReader.html">Tolerant Reader</a> where we need to be tolerant of what we read so that our contracts can simply evolve without holding us back.</p> <h4 id="introducinghalenet">Introducing Hale.NET</h4> <p>Now there was no way I am going to be able to create a better <a href="https://www.nuget.org/packages/Newtonsoft.Json">Json.NET</a> library for this purpose but I can use it to do the heavy lifting when handling JSON. I also feel I should try some <a href="https://en.wikipedia.org/wiki/Aspect-oriented_programming">AOP</a> techniques such as interceptors (e.g. <a href="http://www.castleproject.org/projects/dynamicproxy/">Castle Windsor Dynamic Proxy</a>) or weaving (e.g. <a href="https://github.com/Fody/Fody">Fody</a>) to remove some of the repetitiveness.</p> <h5 id="thespike">The spike</h5> <p>Since my spike is showing potential and that other people have expressed an interest I thought it would be better to share now rather than just tinker myself when time permits. The spike (definitely not production ready) can be found on <a href="https://github.com/sawilde/hale.net">Github</a> and I have released it under the MIT licence. It currently only handles the most basic of object structures (no arrays or hierarchies) but it does</p> <ol> <li>preserve the underlying data by using a <code>JObject</code> and an interceptor to handle the <code>get_</code> and <code>set_</code> of our properties.</li> <li>has some basic handling for dealing with <code>null</code> vs <code>undefined</code> using exceptions and some extension methods.</li> </ol> <pre><code class="language-csharp">Assert.Null(user.GetValueOrDefault(u => u.Age)); Assert.False(user.IsReferenced(u => u.Age)); </code></pre> <h5 id="nextsteps">Next steps</h5> <p>The next step is to handle hierarchies of objects and arrays because without the ability to do this, it is not going to be of any use whatsoever.</p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[A weekend at Random Hacks of Kindness]]></title><description><![CDATA[Random Hacks of Kindness [http://www.rhokaustralia.org/#rhok-home] (or RHoK for short) is a hacking event that I've been involved with on and off over the past four or five years. 
Since we have just had a recent RHoK event in Melbourne (Nov 2017) I thought I should write down what my own personal involvement was this time so that others may get a feel of what it is like to become involved in RHoK or other similar hacking events. RHoK is not a competitive hacking event with big prizes up for g]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/random-hack-of-kindness/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d75a</guid><category><![CDATA[rhok]]></category><category><![CDATA[volunteer]]></category><category><![CDATA[agile]]></category><category><![CDATA[cloud]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Mon, 04 Dec 2017 08:38:43 GMT</pubDate><media:content url="https://res-4.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/volunteer-1326758_1280.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-4.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/volunteer-1326758_1280.png" alt="A weekend at Random Hacks of Kindness"/><p><a href="http://www.rhokaustralia.org/#rhok-home">Random Hacks of Kindness</a> (or RHoK for short) is a hacking event that I've been involved with on and off over the past four or five years. Since we have just had a recent RHoK event in Melbourne (Nov 2017) I thought I should write down what my own personal involvement was this time so that others may get a feel of what it is like to become involved in RHoK or other similar hacking events.</p> <p><img src="https://res-5.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/randomhacks.png" alt="A weekend at Random Hacks of Kindness"/></p> <p>RHoK is not a competitive hacking event with big prizes up for grabs but instead aims to be a collaborative hacking event where some of the hackers will switch teams when they cease to be of use to their current team or feel they can provide more value on another team. Rather than trying to use some company's new API and do something cool with it we instead look at trying to solve real-world problems for the <a href="http://www.rhokaustralia.org/peeps/changemakers">changemakers</a> who usually come from a charity, community group or social-enterprise and who have a real-world problem that they would like help with i.e. usually idea-rich, cash-poor looking for a kick-start. Another feature of RHoK, is the emphasis on continuing involvement in the same project between RHoK events through what has become known as RHolls (Rock 'n' Roll - get it?) that take place every 4-6 weeks; and even using your own spare time if you have any to spare. There is no actual commitment to be involved with a project after a hack event has finished but it is really encouraged and it is up to each individual to decide on their continuing level of involvement.</p> <p>So who gets involved in RHoK? Well, it isn't just software developers or coders as these projects require people with a wide range of skills and backgrounds so that we can be effective and deliver the right thing. 
My friend Samara tweeted this picture</p> <blockquote class="twitter-tweet tw-align-center" data-lang="en"><p lang="en" dir="ltr">This is why you don’t put the computer scientist on hack weekend registration duty <a href="https://twitter.com/rhokmelb?ref_src=twsrc%5Etfw">@rhokmelb</a> 😉RHoKstar traits (v1.0) <a href="https://twitter.com/hashtag/hackathon?src=hash&ref_src=twsrc%5Etfw">#hackathon</a> <a href="https://twitter.com/hashtag/rhoksummer?src=hash&ref_src=twsrc%5Etfw">#rhoksummer</a> <a href="https://t.co/3NIwdTBwu3">pic.twitter.com/3NIwdTBwu3</a></p>— Samara (@EvilAngelPixie) <a href="https://twitter.com/EvilAngelPixie/status/934743513979273216?ref_src=twsrc%5Etfw">November 26, 2017</a></blockquote> <script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"/> <h3 id="thehackweekend">The hack weekend</h3> <p>Now that the stage has been set I'll continue to describe my involvement with the most recent Melbourne RHoK event.</p> <h4 id="day1">Day 1.</h4> <h6 id="thechangemakerpitches">The Changemaker Pitches</h6> <p>This was the final chance for each of worthy changemakers to make their cause known and to persuade any final non-committed hackers to join their team. In no particular order (mainly because I forgot) the changemakers on the day were.</p> <ul> <li><a href="https://www.berrystreet.org.au/">Berry Street</a> - Berry Street are back with their mission to help children in protection stay connected to their families and other important people in their lives. This changemaker, like several changemakers before them, are proof that you can get a team to engage in several RHoK and RHoLL events to get that project across the line.</li> <li><a href="https://gojanegive.org/">Go Jane Give</a> - Hoping that RHoK can help them extend the reach of the grassroots campaign that allows you to turn your talents and interests into a fundraiser for your favourite causes.</li> <li><a href="https://carerscouch.com.au">Carers Couch</a> - Needs help to provide online support to carers of people with cancer. Martina of Carers Couch has introduced a new term "cabbage salad" that we are sure is to replace the word "spaghetti" when dealing with awful/complex code.</li> <li>PollyannR - Wants to improve the way that regional creatives can get the funding they need.</li> <li><a href="https://www.startbroadband.com.au">Start Broadband</a> - Helping disadvantaged Aussie families get online.</li> <li><a href="https://www.shifra.io">Shifra</a> - Looking to use technology to increase/improve sexual and reproductive health resources for mobile populations, such as refugees.</li> <li><a href="https://www.caretocompare.com.au">Care to Compare</a> - An enterprise where consumers will get to compare and choose health products with 100% of the profits going to charity.</li> <li><a href="http://www.hitnet.com.au">Hitnet</a> - Co-creates and distributes health information and services to marginalised communities. They need help in maintaining and managing the content for their 70+ (and growing) information hubs that are provided to local communities across the country.</li> </ul> <h6 id="teamformingandstorming">Team forming and storming</h6> <p>Hackers who had already been to the information night held earlier in the month had already picked their cause and started to form around the respective changemakers, others quickly joined the teams that they felt they could contribute to. 
I joined up with Carers Couch as I had some previous interaction with them on the run-up to the hack event as I was asked to be their RHoK mentor and to help them prepare for the weekend. The team started off by introducing themselves and getting down to the task of determining what the needs were and what could be tackled in that weekend. As the team was large enough that there was even a possibility that several tasks could be tackled in the same weekend.</p> <p><img src="https://res-3.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/DPb53i3VAAAe9qm.jpg" alt="A weekend at Random Hacks of Kindness"/></p> <p>During this time another RHoK mentor came over and asked if we had any UI/UX people willing to jump ship to help another team that really needed one. Once we had a lull in our session I went over to the Hitnet team to get more details about what they needed and it looked they had already found a UI person and had now picked their goldilocks problem (not too big or too small, just right) and were stuck with another issue involving XML. After a quick discussion I realised I could probably contribute more to this team rather than to the initial Carers Couch team I was currently involved in, and so after a quick apology to the team, I moved tables.</p> <p>After another round of introductions, we started to carve up the problem into a potential solution and assign some tasks to ourselves.</p> <h6 id="theproblem">The problem</h6> <p>Dan Laush has written a wonderful <a href="https://medium.com/@dan.laush/people-focus-in-order-to-scale-rhok-melbourne-summer-2017-d19d8ca78d6e">write-up</a> of the project itself so I'll only summarise here.</p> <p>The Hitnet team need to update an XML configuration file for each hub on their network to control what content is to be displayed on each and they currently do this by hand for over 70+ hubs each time new content is supplied to them.</p> <p><img src="https://res-1.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/Screen-Shot-2017-11-26-at-2.11.02-PM-1.png" alt="A weekend at Random Hacks of Kindness"/></p> <h6 id="theapproach">The approach</h6> <p><img src="https://res-1.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/20171126_164145c.jpg" alt="A weekend at Random Hacks of Kindness"/></p> <p>The team identified we had just the right skills to split into three to tackle each part of the intended solution of our simple multi-tier architecture. As I had already decided to jump onto this team to help them tackle the XML issue I decided to concentrate on the worker process that would take XML files that were used for hub configuration and update them based on data pulled from an API that was to be built by other members of the team. By breaking up the project this way kept the concerns separate and allowed individual teams to largely work independently of each other, mocking the data where required until the other team(s) were ready to serve us our data.</p> <h6 id="theworkerprocess">The worker process</h6> <p>Hitnet were already using Google Cloud Platform (GCP) to host their servers and so it seemed only right that we put the worker there right next to the servers/files it will need to interact with. An added bonus is that I had never used GCP before so now was a chance for a learning experience as well; a big plus in my opinion.</p> <p>The worker process had the task of taking an XML configuration file for a hub and updating it with data retrieved from an API call. 
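</p> <p>To give a feel for the shape of that worker (this is only a sketch, not the code we ended up with on the day, and the bucket name, file name, API URL and XML structure are all placeholders rather than Hitnet's real configuration), an HTTP-triggered Function doing a single file would look something like this:</p> <pre><code class="language-javascript">// Sketch of the worker: read a hub's XML config from a storage bucket,
// ask an API which content modules the hub should show, update the XML
// and write it back. All names/URLs below are placeholders.
const { Storage } = require('@google-cloud/storage');
const fetch = require('node-fetch');
const xml2js = require('xml2js');

const bucket = new Storage().bucket('example-hub-configs');

exports.updateHubConfig = async (req, res) => {
  try {
    const file = bucket.file('hub-001.xml');
    const [contents] = await file.download();

    // parse the existing hub configuration
    const config = await xml2js.parseStringPromise(contents.toString());

    // fetch the modules this hub should display (placeholder API)
    const response = await fetch('https://example-api.invalid/hubs/hub-001/modules');
    const modules = await response.json();

    // replace the module list in the configuration (structure is illustrative)
    config.hub.modules = [{ module: modules.map((m) => ({ name: m.name })) }];

    // serialise back to XML and overwrite the file in the bucket
    await file.save(new xml2js.Builder().buildObject(config));

    res.status(200).send('hub configuration updated');
  } catch (err) {
    console.error(err);
    res.status(500).send(err.message);
  }
};</code></pre> <p>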
The data itself was the description of the modules, i.e. content, that are to be displayed on that hub. We quickly fleshed out a number of tasks needed to meet the objectives and check if the worker was a feasible choice.</p> <ol> <li>How easy is it to create a Function on GCP?</li> <li>How do we read and write files from/to storage?</li> <li>How will we manipulate the XML files?</li> <li>How will we make the API calls?</li> </ol> <p>Assumptions</p> <ol> <li>As we did not have direct access to Hitnet's GCP account (and since we are new to this it is right we should be kept well away) we made the assumption that the XML files were available via a GCP storage bucket.</li> </ol> <p>Creating a Function on GCP is really easy and in minutes, I had created an account and got access to $300 of free credits to use over the next year. <a href="https://nodejs.org/en/">Node.js</a> appears to be the default language for writing Functions on GCP and Google has also provided a number of <a href="https://cloud.google.com/functions/docs/tutorials/">examples</a>; these examples had pretty well answered how to do steps (1) and (2) which was a win. The worker process itself is triggered by a simple HTTP(s) request and it seems it could also be triggered by a message placed on a queue but we have no need for that at the moment. There was no mechanism for a timed trigger like with have with Azure Functions but we could use a <a href="https://cloud.google.com/solutions/reliable-task-scheduling-compute-engine">cron job</a> or a service like Zapier.</p> <p>Manipulating the XML, proved to be harder than it should have been and I eventually settled on using the <a href="https://www.npmjs.com/package/xml2js">xml2js</a> NPM package as it used the <a href="https://www.npmjs.com/package/xmlbuilder">xmlbuilder</a> NPM package that would allow us to save the manipulated data back to XML again. We finally got to the stage where we had the ability to pull, manipulate and push back a single XML file. Finished the day by adding code to iterate each file in the target storage bucket and update them.</p> <h6 id="endoftheday">End of the day</h6> <p>At the end of the day, each member of the Hitnet team had something successful to declare:<br> UI team - had a React app up and running and created their first forms.<br> API team - had spun up an API using nginx and a Postgres database.<br> Worker team - had a worker process that could do all the parts required for it.</br></br></br></p> <h4 id="day2">Day 2.</h4> <h6 id="cominginbytrain">Coming in by train</h6> <p>I had a look at the code I helped create the day before whilst heading back to the event and after all the hacking and tweaking the day before it was a mix of <a href="https://developer.mozilla.org/en-US/docs/Glossary/Callback_function">callbacks</a> and <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise">promises</a> and it felt dirty. However, this is not a difficult thing to resolve and so I used the time to clean up the code.</p> <h6 id="puttingitalltogether">Putting it all together</h6> <p>After the success of the day before we were all ready to connect everything up and this really put the pressure on the API team to create the APIs we needed. 
Whilst I waited for the API team to create the many endpoints needed by the front-end team I spent some time adding some error handling, improving the detail of the console logging and, generally tidying up the code.</p> <p><img src="https://res-2.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/DPg8NLiVoAAEye7c.jpg" alt="A weekend at Random Hacks of Kindness"/></p> <p>Once the API team were ready for me, we worked together to build the query that returns the required content modules for each hub. To consume the API I decided to use the <a href="https://www.npmjs.com/package/node-fetch">node-fetch</a> NPM package to make the API call and, because I dislike building URLs using string concatenation is this always seems to cause problems for someone later down the line, I decided to construct any URLs using the <a href="https://www.npmjs.com/package/url-assembler">url-assembler</a> NPM package.</p> <h6 id="themarketplace">The marketplace</h6> <p>At previous RHoK events we stopped at 3pm for presentations held in a lecture theatre but this year the organisers decided to use "The Marketplace". During the time allocated each team can spend time presenting and demoing to the other hackers and the judges and hopefully persuade those with remaining tokens to vote for them for the People's Choice awards (and yes you could vote for yourself if you wanted to but I don't know of anyone that did).</p> <h4 id="thefuture">The future</h4> <p>There are still things to do with this function to make it production ready and we hope to deal with these in the next few weeks through the RHoll events that we have already booked up.</p> <ul> <li>Security - we need some, not only to make sure only trusted users can use the new UI but to also make sure the API is not called incorrectly and to finally ensure the worker is not constantly triggered and use up our precious credits on GCP.</li> <li>Integration - we need to integrate with Hitnet's GCP and see if we can access their files via the cloud storage that we <a href="https://cloud.google.com/compute/docs/disks/gcs-buckets">assumed we could</a>.</li> <li>Process - improve the build, test, deployment process of the worker function. We are looking to hand this back to Hitnet at a later date so they can maintain this going forward.</li> </ul> <h4 id="summary">Summary</h4> <p>I definitely enjoyed the weekend and I definitely learnt something new, even better when it is doing something that would be of actual value to someone. The code for the worker (like everything produced at RHoK) can be found on their github account i.e. <a href="https://github.com/RHoKAustralia/hitnet-worker">hitnet-worker</a>.</p> <p>I believe everyone in the team tried/learnt something new, I saw this tweet from Liz who also joined the Hitnet team to work with Dan on the front-end.</p> <blockquote class="twitter-tweet tw-align-center" data-lang="en"><p lang="en" dir="ltr">Finally had a crack at building a usable React app at <a href="https://twitter.com/rhokmelb?ref_src=twsrc%5Etfw">@rhokmelb</a> for <a href="https://twitter.com/HITnet_au?ref_src=twsrc%5Etfw">@HITnet_au</a>. Also put on my UX/designer hat for it. So happy with what our team achieved! 
😊 <a href="https://t.co/CJc0O7JTUD">pic.twitter.com/CJc0O7JTUD</a></p>— Lizarrrrgh (´・ω・`) (@lnoogn) <a href="https://twitter.com/lnoogn/status/934693948517269505?ref_src=twsrc%5Etfw">November 26, 2017</a></blockquote> <script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"/> and my understanding is that Ankit and Peter, who made up the API team, were also in somewhat personally unexplored territory when building the API using nginx and Postgres. <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Can you give something back?]]></title><description><![CDATA[Why does it seem, to me at least, that the vast body of developers will actively use software that has been contributed through open source efforts but will do nothing to add to that wealth and instead continue to rely on the efforts of others? How can we try to change that behaviour and get more developers contributing in some way? I wrote about a similar topic [/how_do_we_get_users_out_of_open_source_welfare_/] some years ago and I referred to it as a type of welfare. Perhaps it was a bit har]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/open-source-needs-you/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d75b</guid><category><![CDATA[volunteer]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Wed, 17 May 2017 08:40:41 GMT</pubDate><media:content url="https://res-4.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/work-1208048_1280.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-4.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/work-1208048_1280.jpg" alt="Can you give something back?"/><p>Why does it seem, to me at least, that the vast body of developers will actively use software that has been contributed through open source efforts but will do nothing to add to that wealth and instead continue to rely on the efforts of others? How can we try to change that behaviour and get more developers contributing in some way?</p> <p>I wrote about a <a href="https://monkey-see-monkey-do-blog.herokuapp.com/how_do_we_get_users_out_of_open_source_welfare_/">similar topic</a> some years ago and I referred to it as a type of welfare. Perhaps it was a bit harsh because it tends to infer that these developers are taking advantage of those contributing, whereas instead they are just enjoying the fruits of what those people have contributed and, more importantly, freely given for such purposes. In a way it is a bit like art in the public gallery/domain, in that once the code/software has been written and released freely, anyone can use it without diminishing the pleasure of it for someone else.</p> <p>So why do some developers not contribute? 
I've tried to come up with some reasons that I think may be the case, based on conversations I have had, and added my responses.</p> <h6 id="itsallaboutthemoney">It's all about the money!</h6> <p><em>You believe that these projects make millions as suggested by this cartoon and you don't want to give your valuable time for free so someone else can make money?</em><br> <img src="https://res-4.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/ME_439_OpenSource.png" alt="Can you give something back?"><sup><a href="https://commons.wikimedia.org/wiki/File:ME_439_OpenSource.png">https://commons.wikimedia.org/wiki/File:ME_439_OpenSource.png</a></sup></img></br></p> <p>It is something to consider I suppose, and there are some projects where a number of service companies have spun up around supporting the project/product in the commercial space, but these projects are rare when considered against the number of open source projects out there; well known perhaps, but still rare.<br> Though we should consider that when we use an open source project in our own code we are saving millions for ourselves and our employers and clients, and that is nearly as good as making millions. According to the open hub network the popular <a href="https://www.nuget.org/packages/Newtonsoft.Json/">Json.NET package</a> has taken 33 person years of effort with an estimated <a href="https://www.openhub.net/p/JsonNET/estimated_cost">cost of $1.8m</a>. I personally believe the cost is much higher.</br></p> <h6 id="wheredoistart">Where do I start?</h6> <p><em>You do not know where to start or even if your contribution would be accepted?</em> I suppose it can be a little scary, but also a little exciting, when you submit that first pull request. Most projects have guidelines that help any new contributors to a project and it is unlikely that a submission is rejected outright. It may be pushed back a bit due to things like testing or coding style but that is all part of the learning process and the things that we do every day when we join new teams. There are a few initiatives out there that aim to get beginners involved in contributing to open source projects e.g. 
<a href="http://www.firsttimersonly.com/">first timers only</a> and though there are not many issues using <a href="https://github.com/search?q=label%3Afirst-timers-only&state=open&type=Issues">the 'first-timers-only' tag</a> at the moment, it is a start.</p> <p>I first started to contribute to projects that solve, or nearly solve, my own issues (I would say that is still where the majority of my own efforts go) and just dived in and gave it a go; if I didn't fix it, well, who was to know except my own ego.</p> <h6 id="willmybossesletme">Will my bosses let me?</h6> <p><em>You are unsure about your legal obligations when to comes to contributions to open source projects?</em> This is a tough one and right now I am going to use that irritating phrase that people use when they are about to talk something legal, I am not a lawyer (IANAL) but...</p> <p>If you have an employment contract read it over and look for relevant clauses - a lot of these contracts have clauses from a bygone era and just haven't changed to keep up with the time, talk it over with your bosses explain them the benefits of contributing e.g.</p> <ol> <li>Fixing an issue that affects your day to day work and would in turn make you more effective or the software you are developing for them better.</li> <li>Personal and professional development by working on software and concepts that are not normally part of your day to day work, improving those skills will have a beneficial impact on your work.</li> <li>You'll work on it in your own time, though you may need to get permission to use company resources e.g. laptop/software. You should try to get an agreement about when you use company time vs personal time i.e. if the issue you are addressing is directly related to work then you can use company time, treat it as part of the project and encourage others to contribute.</li> </ol> <p>As long as you are not contributing to a open source product that rivals your company's business, most employers I'd have thought, will agree to some level of contribution - some more pro-arguments for contributing can found in this <a href="https://blog.codeship.com/why-your-employees-should-be-contributing-to-open-source/">blog article</a>.</p> <h6 id="whataboutmyrights">What about my rights?</h6> <p>Some projects have a <a href="https://gist.github.com/sawilde/4820db0a6151a1144a0c">Community Licence Agreement</a> for contributions and these agreements are more about protecting the project and making sure that your contribution can be used without restriction for both the project and for yourself. Open source projects often don't have the dollars to fight any legal battles and so this way these projects protect themselves from any predatory legal action. It is highly unlikely you will be divulging any trade secrets through any contribution you make and it would/should be pretty obvious to yourself if you are treading the thin line. 
Just like when you use an open source project, or pull a package from nuget, you should look at the licence that comes with it to make sure that your usage complies with your business activities; again, IANAL, so you may want to have it checked out if you are unsure.</p> <h6 id="ijustdonthavethetime">I just don't have the time!</h6> <p>And this is probably the main crux of it all: it is tough finding time to learn outside of work, especially with family obligations and keeping up with those social connections that we do, contrary to popular opinion regarding software engineers, actually have.</p> <blockquote> <p>So this is my <strong>big</strong> ask: if you use open source projects in your work or personal projects, all I want from you is 4 hours a month.</p> </blockquote> <p><img src="https://res-3.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/1ozetn.jpg" alt="Can you give something back?"/></p> <p>Yes, less than 1 hour a week on average is all I am asking of you to try and contribute; hopefully you are doing more than this anyway for personal development and improving your <em>craft</em>. Perhaps integrate it as part of that development; want to learn a new programming language and hate just working through code samples? Find a project and just pitch in, it'll probably be more fun and rewarding than just following trite <a href="http://wiki.c2.com/?HelloWorldInManyProgrammingLanguages">"hello world" examples</a>.<br> Also, though not an argument you may want to use with your employers, it is useful to show your contributions to prospective future employers as it demonstrates a willingness to learn and grow that goes far beyond the stock "I have a Pluralsight subscription" (as good as it is, by the way, as long as you actually use it) responses during interviews. Not all of us have website portfolios to demonstrate our capability and it goes far beyond the code challenges that we are often presented with as we move from role to role, so consider it an investment in the future you.</br></p> <p>As always your thoughts are appreciated.</p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Aw shucks, thank you!]]></title><description><![CDATA[I just received this today and it really made my day and I thought I should share. > Hiya Shaun, > Just wanted to drop a quick email of emphatic thanks for your many years of work with OpenCover. I was tasked with the job just recently of introducing unit + service tests, along with things to ensure quality outcomes are being achieved with this (i.e., measurable outputs)... OpenCover was such a simple drop-in tool. 
> Having never used it, I was up and going in under 2 hrs, complete with HTM]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/aw-shucks-thankyou/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d759</guid><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Thu, 20 Apr 2017 07:00:00 GMT</pubDate><media:content url="https://res-4.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/thanks-1804597_1280.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-4.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/thanks-1804597_1280.jpg" alt="Aw shucks, thank you!"/><p>I just received this today and it really made my day and I thought I should share.</p> <blockquote> <p>Hiya Shaun,</p> </blockquote> <blockquote> <p>Just wanted to drop a quick email of emphatic thanks for your many years of work with OpenCover. I was tasked with the job just recently of introducing unit + service tests, along with things to ensure quality outcomes are being achieved with this (i.e., measurable outputs)... OpenCover was such a simple drop-in tool.</p> </blockquote> <blockquote> <p>Having never used it, I was up and going in under 2 hrs, complete with HTML reports, and a wrapping cmd file so it's about as idiot proof as I can get things.</p> </blockquote> <blockquote> <p>So... no reason to this email other than to make sure you know at least someone appreciates your hard work :-)</p> </blockquote> <blockquote> <p>Hope life is treating you well!</p> </blockquote> <blockquote> <p>Cheers,</p> </blockquote> <p><img src="https://res-1.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/0519986c195c198779774fd25e111030fe728b-wm-2.jpg" alt="Aw shucks, thank you!"/></p> <p>As I said that really made my day, I am glad that someone has found our work on OpenCover useful and not just at places I/we happen to <a href="https://blog.many-monkeys.com/happy_birthday_open_cover/">work at</a>. I must also give a <a href="https://www.urbandictionary.com/define.php?term=shout-out">shout out</a> to the other other <a href="https://github.com/OpenCover/opencover/graphs/contributors">contributors</a> of this project over the years and give a big thanks to Daniel who wrote the <a href="https://github.com/danielpalme/ReportGenerator">reporting generator</a> tool that makes the output from OpenCover look so good.</p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[The importance of canary builds.]]></title><description><![CDATA[So now that Visual Studio 2017 [https://www.visualstudio.com/downloads/] is officially out I thought I would use the long Easter weekend to upgrade the OpenCover project to this new version and tackle any issues that I normally encounter when this upgrade time comes around. However before I can start this upgrade process I first need to tackle the bit rot that has recently set in. 
> Bit rot (or software rot [https://en.wikipedia.org/wiki/Software_rot]) is when your code just starts failing due ]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/the-importance-of-canary-builds/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d758</guid><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Fri, 14 Apr 2017 02:46:31 GMT</pubDate><media:content url="https://res-5.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/canary-coal-mine-main.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-5.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/canary-coal-mine-main.jpg" alt="The importance of canary builds."/><p>So now that <a href="https://www.visualstudio.com/downloads/">Visual Studio 2017</a> is officially out I thought I would use the long Easter weekend to upgrade the OpenCover project to this new version and tackle any issues that I normally encounter when this upgrade time comes around. However before I can start this upgrade process I first need to tackle the bit rot that has recently set in.</p> <blockquote> <p>Bit rot (or <a href="https://en.wikipedia.org/wiki/Software_rot">software rot</a>) is when your code just starts failing due to lack of maintenance or a change in the build or execution environment.</p> </blockquote> <p>Fortunately I have a weekly canary build that I previously set-up that amongst other things will alert me when this happens (as well as inform me if I get any new <a href="https://blog.many-monkeys.com/improving-your-source-code/">code quality</a> issues), it started <a href="https://ci.appveyor.com/project/sawilde/opencover/build/4.6.648">bleating</a>,<br> or should that be tweeting, a few weeks back so I knew I had some issues to rectify but not the actual severity. So in the space of a few months (I decided to stop work on OpenCover over the summer as I needed a break and time to look at other technologies) the following things happened:</br></p> <ol> <li>AppVeyor upgraded its Visual Studio 2015 image to use a release version of .NET Core. This caused a number of issues that I couldn't repeat on my local build environment so I have temporarily resolved the issue by uninstalling it for my server builds.</li> <li>The local and server builds don't build the same flavour of the .NET Core application so I needed to update the tests to take that into account; I intend to look into this more after completing the upgrade.</li> <li>Badges were failing and so the landing page was looking a bit <a href="https://www.urbandictionary.com/define.php?term=daggy">daggy</a>.</li> <li>The build was failing and was not reporting it properly, this was easy to resolve and I was surprised it had escaped unnoticed for all this time.</li> </ol> <p>If it wasn't for the regular weekly build I would not have been aware of some of these issues and makes me glad that I configured that build to trigger regardless of whether I or anyone else had submitted code to the repository. 
Now that I have a working (and hopefully more reliable) build again, I am now ready to tackle the upgrade and more than likely I will have to <a href="https://blog.many-monkeys.com/death-by-a-thousand-cuts/">update the tooling</a> again.</p> <p><sub>Title Image: Getty, CC-BY</sub></p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Keep your development workspace tidy]]></title><description><![CDATA[A big issue I find when working in today's modern development environment is the ever increasing number of git repositories we now have to maintain. With each repository comes a number of branches that we create, push and pull as we progress the development of each service and over time they just start to build up like driftwood on a beach; not the pretty driftwood of photos and art-pieces but junk useless wood that have long served their purpose. Just recently I counted that in the space of a ]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/keep-your-development-workspace-tidy/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d757</guid><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Sat, 01 Apr 2017 03:08:22 GMT</pubDate><media:content url="https://res-2.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/hurricane-matthew-1769040_1280-1.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-2.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/hurricane-matthew-1769040_1280-1.jpg" alt="Keep your development workspace tidy"/><p>A big issue I find when working in today's modern development environment is the ever increasing number of git repositories we now have to maintain.</p> <p>With each repository comes a number of branches that we create, push and pull as we progress the development of each service and over time they just start to build up like driftwood on a beach; not the pretty driftwood of photos and art-pieces but junk useless wood that have long served their purpose. Just recently I counted that in the space of a couple of months I had pulled in over 40 repositories and most of those repositories had multiple branches and in one extreme case I had over 30 branches, all of the branches had either been merged back to master or I had abandoned e.g. spikes.</p> <p>Now I could have done the simplest thing and just removed each one individually as I went along or just devote sometime every so often and do the same but en-masse. Instead I decided to do neither and instead I wrote a script that I could execute from a bash shell and later refined it into an alias.</p> <pre><code class="language-language-bash">alias -p bfg='for d in */; { echo $d; cd $d; { git checkout -q master; git branch | egrep -v "(^\*)|(^\s+(master|dev|hotfix|qa))" | xargs --no-run-if-empty git branch -D ; }; cd ..; };' </code></pre> <p>I call it <a href="https://en.wikipedia.org/wiki/BFG_(weapon)">BFG</a> because it shows no mercy and I only run it when I just want to <s>tidy</s>blow things up.</p> <p>Let me describe how it works (and I am doing this as much for my sake as for yours.)</p> <ul> <li>For each subfolder under the current folder</li> <li>switch to the master branch <br/>e.g. <code>git checkout -q master</code></li> <li>list all the available branches <br/>e.g. <code>git branch ...</code></li> <li>filter out those in the 'keep' list <br/>e.g. <code>... 
| egrep -v "(^\*)|(^\s+(master|dev|hotfix|qa))" ...</code></li> <li>if any extra branches found delete them <br/>e.g. <code>... | xargs --no-run-if-empty git branch -D ;</code></li> </ul> <p>I now try to run this at least once a month, especially when we have delivered on a number of features, and I know that it is safe to do so. If I am feeling nervous about running the script I substitute the <code>-D</code> for <code>-d</code> when executing <code>git branch</code> as this will not remove a branch if it has not yet been merged with its upstream.</p> <p>Any questions or suggestions for improvements please post below.</p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Adding search to your [ghost] blog - part 2]]></title><description><![CDATA[Okay this is the next step in adding search functionality to my blog. The biggest let down of the original integration was that I need to remember to run the search updater every time I made a new, or just updated a, blog post, but what if I was writing the entry remotely or, more than likely, just plain forgot? At the end of the last post [https://blog.many-monkeys.com/adding-search-to-your-ghost-blog-2/] I mentioned the possibility of adding a IFTTT snippet (actually they call them applets) t]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/adding-search-to-your-ghost-blog-part-2/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d756</guid><category><![CDATA[ghost]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Sun, 26 Mar 2017 21:22:00 GMT</pubDate><media:content url="https://res-2.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/computer-1346046_1280.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-2.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/computer-1346046_1280.jpg" alt="Adding search to your [ghost] blog - part 2"/><p>Okay this is the next step in adding search functionality to my blog.</p> <p>The biggest let down of the original integration was that I need to remember to run the search updater every time I made a new, or just updated a, blog post, but what if I was writing the entry remotely or, more than likely, just plain forgot?<br> At the end of the <a href="https://blog.many-monkeys.com/adding-search-to-your-ghost-blog-2/">last post</a> I mentioned the possibility of adding a IFTTT snippet (actually they call them applets) to automate the process, so whilst the idea was still fresh I decided to investigate the idea.<br> Unfortunately I could not find a way for IFTTT to execute a node script when my blog changed but I could use it to send a web request to another service which could then execute a script when it received that request. For this second service I decided to use Azure functions because well I have some spare free credit per month and I might as well use it for something useful.</br></br></p> <h6 id="settinguptheazurefunction">Setting up the Azure function</h6> <p>Creating an Azure function is relatively easy (assuming you have an account) but if you decide to use another service, e.g. 
AWS Lambda, then the below may provide hints about how you may implement similar functionality.</p> <ul> <li>If you don't already have one, create a "Function App" (Compute).</li> <li>Once created, Create a simple app that uses Javascript (node) and could be triggered by an API.<br> [I also chose to set the authorization level as Function as I prefer some level of access-control.]</br></li> </ul> <p><img src="https://res-4.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/function.png" alt="Adding search to your [ghost] blog - part 2"/></p> <ul> <li>We need to pull in the <code>algolia-webcrawler</code> package, to do so upload the <code>package.json</code> file below and run <code>npm install</code> from the Console. [I upload all the files into the same folder as the function I am creating. I also find the <a href="https://blogs.msdn.microsoft.com/benjaminperkins/2014/03/24/using-kudu-with-windows-azure-web-sites/">Kudu console</a> works best for this as it displays the output/progress of the command.]</li> </ul> <pre><code class="language-javascript">{ "name": "rss-ghost-algoliasearch", "version": "1.0.0", "dependencies": { "algolia-webcrawler": "^1.0.3" } } </code></pre> <ul> <li> <p>Next, upload your configured <a href="https://github.com/DeuxHuitHuit/algolia-webcrawler/blob/master/config.json">config.json</a> file that is used to control the <code>algolia-webcrawler</code>.</p> </li> <li> <p>Now, replace the <code>index.js</code> file with the following snippet. [I did play about a bit here and finally decided that <code>spawn</code> was the method that worked best for me.]</p> </li> </ul> <pre><code class="language-javascript">var exec = require('child_process').spawn; module.exports = function (context, req) { context.log('HTTP trigger function processed a request.'); child = exec('node', [__dirname + '/node_modules/algolia-webcrawler', '--config', __dirname + '/config.json'], {}); child.stdout.on('data', function(data) { context.log(data.toString()); }); child.stderr.on('data', function(data) { context.log(data.toString()); }); child.on('close', function(code) { context.log('[END] code', code); }); res = { // status: 200, /* Defaults to 200 */ body: "" }; context.done(null, res); }; </code></pre> <ul> <li>Finally, run the application to test that it works. [You can see the output in the Logs section.]</li> </ul> <h6 id="settingupifttt">Setting up IFTTT</h6> <ul> <li>Goto IFTTT and create a New Applet</li> <li>For the "IF" choose RSS (new feed item) and enter the URL to the rss feed of the blog e.g. <a href="http://blog.many-monkeys.com/rss/">http://blog.many-monkeys.com/rss/</a></li> <li>For the "THAT" choose Maker WebHooks and enter the url to the function e.g. <a href="https://functions-many-monkeys.azurewebsites.net/api/UpdateBlogSearchIndex?code=K2mp">https://functions-many-monkeys.azurewebsites.net/api/UpdateBlogSearchIndex?code=K2mp</a>...</li> <li>Test the trigger works - this may take some time</li> </ul> <p>Assuming everything above works without issue, then whenever your blog updates the IFTTT applet will be triggered, usually within an hour - docs say 15 mins) and the search index on Algolia updated.</p> <h6 id="usingzapier">Using Zapier</h6> <p>I am not sure what is going on with IFTTT but I found I couldn't rely on it to run when I updated my posts so I tried a similar service called <a href="https://zapier.com">Zapier</a> instead. 
Creating a zap is just as easy as creating an applet, RSS->WebHook, and the free plan also states 5 (or 15 mins according to the free plan) between checks.</p> <p>To make Zapier work though I had to use the option "anything is different" for the RSS trigger, so the issue with IFTTT may be related to how Ghost generates the RSS feed that is being consumed by the trigger.</p> <p>I would appreciate any feedback and suggestions for further improvements.</p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Adding search to your [ghost] blog]]></title><description><![CDATA[Recently I wanted to find something in my own blog that I had written about and though I only have a small number of posts compared to some, it still took more time than it really should have. So I decided to look at adding some search functionality; because I am originally from Yorkshire it should be free, because I am lazy it should be relatively easy. A quick google search (the meta-irony is not lost on me) and a few dead ends, I ended up trying Algolia [https://www.algolia.com]. Now this lo]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/adding-search-to-your-ghost-blog-part-1/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d755</guid><category><![CDATA[ghost]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Fri, 24 Mar 2017 23:32:00 GMT</pubDate><media:content url="https://res-5.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/needle-1419606_1280.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-5.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/needle-1419606_1280.jpg" alt="Adding search to your [ghost] blog"/><p>Recently I wanted to find something in my own blog that I had written about and though I only have a small number of posts compared to some, it still took more time than it really should have.</p> <p>So I decided to look at adding some search functionality; because I am originally from Yorkshire it should be free, because I am lazy it should be relatively easy. A quick google search (the meta-irony is not lost on me) and a few dead ends, I ended up trying <a href="https://www.algolia.com">Algolia</a>. Now this looks like it is a really powerful search tool and I am nowhere near going to be using it to its fullest potential for this blog but the <a href="https://www.algolia.com/demos">demos</a> I have seen look amazing.</p> <p>Now it was so simple to get going I thought I should share what I did to actually integrate it to my site (and so add to the myriad of other tutorials on the subject). First you've probably already seen the search on the main index page but if not this is what it looks like</p> <p><img src="https://res-4.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/search1.png" alt="Adding search to your [ghost] blog"/></p> <p>and when searching</p> <p><img src="https://res-1.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/search2.png" alt="Adding search to your [ghost] blog"/></p> <p>okay it's not the prettiest set of results at the moment however it is functional and I can tweak it later.</p> <h6 id="gettingstarted">Getting Started</h6> <p>Once signed up and <s>going</s>skipping through the tutorial you arrive at a dashboard where you can create an application and choose a region. 
It is explained and if your user-base is largely based in a single region then this is really useful but otherwise I was a little confused so I just choose the place nearest to where the blog is hosted, which is probably the same region.</p> <p>Once the application was created I was stuck (did I mention there was a tutorial?), I expected to find an "index this site" button or feature but it seems Algolia is API based and no such options existed for mere mortals like me.</p> <h6 id="loadingthesearchindex">Loading the search index</h6> <p>Another quick investigation and I came across an npm package named <a href="https://www.npmjs.com/package/algolia-webcrawler">algolia-webcrawler</a> which can be given a sitemap.xml and crawl the site/blog based on its contents. I took the sample <a href="https://github.com/DeuxHuitHuit/algolia-webcrawler/blob/master/config.json">config.json</a> and modified it up to look a bit like so.</p> <pre><code class="language-language-javascript">{ "app": "blog.many-monkeys.com", "cred": { "appid": "<<app-id-goes-here>>", "apikey": "<<admin-api-key-goes-here>>" }, "oldentries" : 86400000, "index": { "name": "siteIndex", "settings": { "attributesToIndex": ["title", "unordered(description)", "unordered(text)"], "attributesForFaceting": ["lang"] } }, "sitemaps": [ {"url": "https://blog.many-monkeys.com/sitemap.xml", "lang": "en"}, {"url": "https://blog.many-monkeys.com/sitemap-posts.xml", "lang": "en"} ], "http": { "auth": "" }, "selectors": { "title": "title", "image": "meta[property=\"og:image\"]", "description": "meta[name=\"description\"]", "text": "h1, h2, h3, h4, h5, h6, p, li" }, "formatters": { "title": "-" }, "defaults": { }, "blacklist": [ ] } </code></pre> <p>For some reason, that I obviously haven't yet investigated, it didn't work with just the main <code>sitemap.xml</code> and so I added the additional <code>sitemap-*.xml</code> files that I thought were valuable at this stage. 
Once the crawler was executed with the above config, I found that I had an index that I could use for searching my blog.</p> <h6 id="addingsearchtotheblog">Adding search to the blog</h6> <p>I did find another <a href="https://blog.lorentzca.me/install-algolia-search-on-ghost/">tutorial</a> about adding Algolia to a Ghost blog but it was in Japanese and Google translate didn't help me much but it did have a plain HTML snippet at the bottom that I shamelessly took and chopped up as necessary to use with my blog; these changes can be seen in this <a href="https://github.com/sawilde/Casper/commit/4c4e6983c9d8e6daefb9bb5af9fa7ed44f40af0f">commit</a> on GitHub.</p> <p>The example didn't work out of the box as-is for me but it also didn't take long to get it running with a bit of debugging and tweaking.</p> <pre><code class="language-language-markup">{{!-- The main content area on the homepage --}} <main id="content" class="content" role="main"> <div style="text-align: center;"> <input type="search" placeholder="Search posts" id="search-input" /> </div> {{!-- The tag below includes the post loop - partials/loop.hbs --}} {{> "loop"}} <script src="https://cdn.jsdelivr.net/algoliasearch/3/algoliasearch.min.js"></script> <script src="https://cdn.jsdelivr.net/autocomplete.js/0/autocomplete.min.js"></script> <script> var client = algoliasearch('<<app-id-goes-here>>', '<<search-api-key-goes-here>>') var index = client.initIndex('siteIndex'); autocomplete('#search-input', { hint: false }, [ { source: autocomplete.sources.hits(index, { hitsPerPage: 7 }), templates: { suggestion: function(suggestion) { return suggestion._highlightResult.title.value.link(suggestion.url); } } } ]).on('autocomplete:selected', function(event, suggestion, dataset) { console.log(suggestion, dataset); }); </script> </main> </code></pre> <h6 id="pros">Pros</h6> <ol> <li>It works and is free, okay you have to drop the Algolia image on your site somewhere (I kept it in the search box where it made sense) but nothing really comes for free and it isn't that intrusive.</li> <li>Easy to setup and integrate, writing this post took longer than installing the search functionality.</li> <li>Multiple indexes - I can see these becoming useful if I start to tweak my integration so as to try different things out during development without affecting the current live-integration.</li> <li>There appears to be a strong <a href="https://community.algolia.com/">community</a> to get plenty of help/support; in fact the snippet I used above looks like it may have initially come from here.</li> </ol> <h6 id="cons">Cons</h6> <ol> <li>Could be simpler to get started with a built in webcrawler of some form from the dashboard itself - there could be some standard usage scenarios for blogs (they have a <a href="https://community.algolia.com/docsearch/">documentation indexer</a> already so perhaps they are heading down that path).</li> <li>You have to run the webcrawler manually each time you publish a new post to the blog, though perhaps a <a href="https://ifttt.com/">IFTTT</a> snippet might be possible using the generated RSS feed, a post for another day perhaps.</li> </ol> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[It's astounding, time is fleeting, madness takes its toll, ...]]></title><description><![CDATA[Okay, starting a blog post using a title from the first lines of the lyrics to "The Time Warp" does seem a bit unusual but with what I intend to cover in this it will all make sense, probably. Time... 
what is it really? I am not going into some weird arty-pseudo-psycho babble of the ephemeral nature of time [http://www.mindfullyalive.com/blog/2015/6/7/massive-art-installation-on-the-ephemeral-nature-of-time] but as a software developer I usually have to deal with time, and dates, in some form ]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/its-astounding-time-is-fleeting-madness-takes-its-toll/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d753</guid><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Fri, 20 Jan 2017 23:08:00 GMT</pubDate><media:content url="https://res-1.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/time-1961312_1280.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-1.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/time-1961312_1280.jpg" alt="It's astounding, time is fleeting, madness takes its toll, ..."/><p>Okay, starting a blog post using a title from the first lines of the lyrics to "The Time Warp" does seem a bit unusual but with what I intend to cover in this it will all make sense, probably.</p> <p>Time... what is it really? I am not going into some weird arty-pseudo-psycho babble of the <a href="http://www.mindfullyalive.com/blog/2015/6/7/massive-art-installation-on-the-ephemeral-nature-of-time">ephemeral nature of time</a> but as a software developer I usually have to deal with time, and dates, in some form everyday and I find myself having the same conversations with other developers about how to handle date and time in specific scenarios. What I have found is that even though we, as people, use time every day in our lives we really don't always understand it and even when we feel we have a grasp we often have to remind ourselves about what we are doing. Even now when someone refers to something like "midnight on the 20th", I have to query "Do you mean the midnight between the 19th and 20th or the 20th and 21st?"; people <a href="http://english.stackexchange.com/questions/6459/how-should-midnight-on-be-interpreted">usually mean the latter</a> but for computers it is always the former. Now, the observant amongst you may have noticed that I started off with 'time' but now I have started to use the terms 'date and time' together and that is very deliberate as time on its own is pretty useless for most cases without the date and together they create the <em>when</em>. And now I am going to add a third term to the mix and this is <em>where</em> because date and time we observe in the real-world is also dependant on where you are and I think anyone reading this will have experienced this at some point.</p> <h4 id="myrules">My rules</h4> <p>I am not going to say my grasp on the subject is perfect, as obvious from above, and to help developers like myself there are some wonderful libraries that really take the grunt work out of the handling of date and time but even with these tools we can still mess it up because we forget about what it is we are trying to handle. Personally, when it comes to development I have a few simple rules that pretty well cover 99% of all cases that I have to deal with when handling date and time that I would like to share.</p> <h5 id="rule1ifindoubtmakesureyouuseutc">Rule 1: If in doubt, make sure you use UTC...</h5> <p>... 
and make it someone else's problem to convert it; this is pretty well my default position on most occasions when I need to deal with time.</p> <blockquote> <p>UTC stands for <a href="https://www.timeanddate.com/worldclock/timezone/utc">Coordinated Universal Time</a> and if you are confused about the abbreviation, well, it was apparently chosen so that it <a href="https://www.timeanddate.com/time/utc-abbreviation.html">does not favour any one language</a>.</p> </blockquote> <p>But regardless of the abbreviation employed it is an awesome thing for us developers as it gives us a common baseline that can be utilised by the languages we use in our daily development; it is unlikely that you will find a modern language that does not support some way of getting the current date and time in UTC e.g.</p> <ul> <li><a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/now">JavaScript</a>: var when = Date.now();</li> <li><a href="https://msdn.microsoft.com/en-us/library/system.datetime.utcnow(v=vs.110).aspx">.NET</a>: var when = DateTime.UtcNow;</li> </ul> <p>UTC is great for logging and auditing purposes, and really anytime we want to store a time-stamp against a specific piece of data or an event. Even better for computers, as long as they are correctly sync'd to a time server, the UTC time you will get on each computer will be the same (I am ignoring things like clock-drift etc for this article but I acknowledge their existence and the issues they can cause). Finally, UTC isn't affected by DST and so you don't get odd behaviour when this local time change occurs.</p> <blockquote> <p>DST stands for <a href="https://www.timeanddate.com/time/dst/">Daylight Savings Time</a> and its application around the world is rather arbitrary, with <a href="https://en.wikipedia.org/wiki/Daylight_saving_time">some countries adopting it</a>, others ignoring it, and some adopting it, applying it all year round and then abandoning it. 
And in some cases like Australia it depends on the <a href="http://www.australia.gov.au/about-australia/facts-and-figures/time-zones-and-daylight-saving#Daylightsaving">state</a>.</p> </blockquote> <p>UTC is also great for storing past events because the past doesn't change and so converting it to local time for a viewer is always predictable; this is where those libraries really come in useful as they can also take care of that oft-forgotten DST offset that sometimes needs to be applied as well as the user's timezone (the <em>when</em> and the <em>where</em> aspects) when you need to present the time of the event back to the user.</p> <h6 id="learntoreadtime">Learn to read time</h6> <p>Most languages support the basics of dealing with dates and time but it is also good to make sure you understand how to interpret some common time representations.</p> <ul> <li>"2017-02-05 05:00:00" - 5am on 5th February 2017, however there is no timezone information and so it will, more often than not, be interpreted as local time by most languages.</li> <li>"2017-02-05 05:00:00Z" - 5am on 5th February 2017 UTC, the <em>Z</em> indicates what is known as <a href="https://www.timeanddate.com/worldclock/timezone/zulu">Zulu</a> time and is another, usually military/aviation, term for UTC.</li> <li>"2017-02-05 05:00:00+00:00" - is also 5am on 5th February 2017 UTC, this time we have provided the offset in hours and minutes from UTC that the time represents.</li> <li>"2017-02-05 05:00:00+11:00" - is also 5am on 5th February 2017 but this time in a timezone that is currently 11 hours ahead of UTC; this is equivalent to "2017-02-04 18:00:00+00:00".</li> <li>"2017-02-05 05:00:00-07:00" - is also 5am on 5th February 2017, in a timezone that is currently 7 hours behind UTC; this is equivalent to "2017-02-05 12:00:00+00:00".</li> <li>"2017-02-05T05:00:00+1100" - is also 5am on 5th February 2017, in a timezone that is currently 11 hours ahead of UTC; this is a common representation often seen, a T is used to separate the date and time and the offset is still hours and minutes but they are sometimes not separated by a colon (:); in this case however if the offset is missing then assume UTC<sup>(*)</sup>.</li> </ul> <blockquote> <p><sup>(*)</sup> “Your "common representation often seen" is <a href="https://en.wikipedia.org/wiki/ISO_8601">ISO 8601</a>. This is the universal standard. A pox on all those framework designers who bastardised it slightly because reasons...”<br> <br/>- I Shepherd</br></p> </blockquote> <p style="text-align: center;"><a href="https://xkcd.com/1179/"><img src="//imgs.xkcd.com/comics/iso_8601.png" title="ISO 8601 was published on 06/05/88 and most recently amended on 12/01/04." alt="It's astounding, time is fleeting, madness takes its toll, ..." srcset="//imgs.xkcd.com/comics/iso_8601_2x.png 2x"/><small>https://xkcd.com/1179/</small></a></p> <blockquote> <p>If you do see a time stamp with an offset e.g. +11:00 or -07:00, then to convert it back to UTC you subtract the hours and minutes from the time presented, as shown in the above examples; please note that you should treat the +/- sign as you would in any addition operation.</p> </blockquote> <h6 id="learntowriteandparsetime">Learn to write and parse time</h6> <p>Though reading time seems pretty straightforward for us humans, for computers it can be quite tricky because we humans are not sensible/consistent and some libraries don't default the way you would think they would e.g. 
<code>01/02/2017</code> - is this Jan 2nd or Feb 1st? Where you are from will probably determine what you say; if you are US American then you will probably go with the former but the rest of the world will more likely go with the latter (exceptions apply). But what about computer libraries, what will they do?</p> <pre><code class="language-javascript">// Wed Feb 01 2017
console.log(new Date(Date.parse('2017/2/1')).toString());
// Mon Jan 02 2017
console.log(new Date(Date.parse('1/2/2017')).toString());
// Invalid Date
console.log(new Date(Date.parse('13/2/2017')).toString());
</code></pre> <p>So we see that default parsing in JavaScript will assume that if you put a year first then it will be followed by month and then date, but it will assume US American format if you put the year at the end. So my rule of thumb here is to always format dates as year/month/day and, if I have to parse a date format that I didn't write myself, to ask about the format used because you may need to work just a little bit harder when you parse it.</p> <h5 id="rule2forreallifeeventsstorethetimezone">Rule 2: For real-life events, store the timezone...</h5> <p>... of <em>where</em> the event is expected to take place. This will then allow you to convert the stored time to another local time in another timezone. This is really important for events that are to happen in the future because the future is fluid and it is not unknown for some countries to change when they may trigger daylight savings with extremely short notice, but if you keep your computer patched with the updates then you should be okay i.e. if you are on a Microsoft OS then <a href="https://support.microsoft.com/en-au/kb/914387">KB914387</a> is the one to keep an eye on; you may find it quite interesting to see what changes have happened around the world.</p> <p>Now you may have noticed that I haven't said anything about whether the date and time of the event should be stored in UTC or local time and that is because it depends on your scenario.</p> <p><strong>Example 1 - TV Schedule.</strong> The "Local News at 6pm" will, more often than not, be scheduled at 6pm and this will happen day after day regardless of the DST in effect, even if DST usage was suddenly changed. In this scenario storing the local date and time along with the timezone is probably fine. The same would apply for any sort of events that stop and start in the same location e.g. meetings, restaurant bookings etc. You may even find that the timezone is largely unused if the event is just physical but if there is any electronic sharing of the booking to persons or systems outside the location then it is usually important to maintain the timezone component.</p> <p><strong>Example 2 - Travel.</strong> For events like travel, where the start time and end time could possibly be in different time zones e.g. air travel, storing the date and time in UTC along with the timezone(s) will probably be the best approach. The complexity involved in planning flights etc would not adapt quickly to a sudden change in DST usage and so the people affected may just find that the local time of the flight has changed but not as it relates to the rest of the world.</p> <h6 id="whichtimezonelist">Which timezone list?</h6> <p>When storing your timezone, try to store and use a library that will let you use the timezone list from the <a href="https://en.wikipedia.org/wiki/List_of_tz_database_time_zones">tz database</a>; this list usually breaks a timezone into a region and place e.g. 
Europe/London or Australia/Melbourne, and tends to have more cross adoption between languages than other, usually <a href="https://msdn.microsoft.com/en-us/library/gg154758.aspx">OS specific</a>, lists.</p> <p>This list is also more granular and deals with the oddballs of the world e.g. in Australia we have:</p> <ul> <li>NSW (+10:00), VIC (+10:00), SA (+09:30), ACT (+10:00) and TAS (+10:00) all of which use DST but QLD (+10:00), NT (+9:30) and WA (+08:00) currently do not.</li> <li>a town called Broken Hill (<a href="https://en.wikipedia.org/wiki/Yancowinna_County">Yancowinna</a>) that is in NSW (+10:00) but instead shares the same timezone as SA (+09:30) aka "Australia/Broken_Hill".</li> <li>a town in WA (+08:00) called <a href="https://en.wikipedia.org/wiki/Eucla,_Western_Australia">Eucla</a> (+08:45) which is in a timezone that has a +45 minutes offset to the rest of the state aka "Australia/Eucla".</li> <li>and (finally) an island off NSW (+10:00) called <a href="https://en.wikipedia.org/wiki/Lord_Howe_Island">Lord Howe Island</a> (+10:30) that utilises a +30 minute DST during summer.</li> </ul> <p>It's not just the wildlife that makes Australia one strange place.</p> <h6 id="libraries">Libraries</h6> <p>There are many libraries out there for your chosen language that will help you handle time, especially if your chosen language has limited support. I'll only list ones that I have used recently that have done the job asked of them (and support the above timezone list) rather than discuss the merits of each library.</p> <ul> <li>.NET: <a href="http://blog.nodatime.org/">NodaTime</a></li> <li>JavaScript: <a href="https://momentjs.com">moment.js</a> and <a href="https://momentjs.com/timezone/">moment-timezone.js</a></li> </ul> <h5 id="rule3avoidusinglocalnow">Rule 3: Avoid using "Local Now"...</h5> <p>... and instead use "UTC Now" whenever possible, especially when processing 2 or more times. "Local Now" is subject to the settings of the computer the code is executed on and when the wrong "now" is used the defects are really subtle and may not be easily repeatable on a developer's machine.<br> If you are in</br></p> <ul> <li><strong>Europe/Africa</strong>: then the odds are you will not notice the issues during your working day but your users may report odd things happening late at night.</li> <li><strong>North/South America</strong>: if anything is going to occur it will probably be during the afternoon when the weird stuff happens.</li> <li><strong>Asia/Australia</strong>: the system will probably have odd behaviour during the morning.</li> </ul> <p>The issues usually arise because developers have their machines running in local time but it is not unusual nowadays for servers in the cloud to be running UTC (with no DST applied), so instead of "Local Now" a developer should really use "UTC Now" and convert it to the actual timezone. Personally I would ban any local now usage from my code base and I would like Microsoft to make methods/functions like <code>DateTime.Now</code> obsolete.</p> <h6 id="testing">Testing</h6> <p>Testing your time manipulations and conversions can be very tricky, especially if you use the System API to get the current time in your tests. If you can, try and mock out the usage of such APIs; some libraries such as NodaTime supply one for you, and with JavaScript something like <a href="http://sinonjs.org/">Sinon.JS</a> may be your best bet. 
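</p> <p>As a quick illustration of that last point, a test can pin "now" to a known UTC instant using Sinon's fake timers; a minimal sketch (the chosen instant is arbitrary):</p> <pre><code class="language-javascript">// a minimal sketch: freeze "now" at a known UTC instant so the code under test
// does not depend on the machine clock or its timezone settings
var sinon = require('sinon');

// fake timers replace Date (and setTimeout etc.) from this point on
var clock = sinon.useFakeTimers(new Date('2017-02-05T05:00:00Z').getTime());

console.log(new Date().toISOString()); // 2017-02-05T05:00:00.000Z

// always restore the real timers once the test is done
clock.restore();
</code></pre> <p>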
You may also find it worthwhile that when you run your tests that you change the timezone of you development environment occasionally and check that your tests continue to work regardless of the timezone of the computer running the tests. You may also find that you need to check your test data with an external system so I recommend the <a href="https://www.timeanddate.com/worldclock/meeting.html">meeting planner</a> on <a href="http://www.timeanddate.com">http://www.timeanddate.com</a> as it allows you to enter your own times and see the results for multiple locations.</p> <p><img src="https://res-3.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/Capture-time.png" alt="It's astounding, time is fleeting, madness takes its toll, ..."/></p> <h6 id="finally">...finally</h6> <p>These are the rules I try to abide by but rules can be broken sometimes and there will always be special cases.</p> <p>If you have further suggestions about what has worked for you that you would care to share, or see or believe that I have made a grave error in my thinking then please comment below, I am always willing to discuss and learn.</p> <h6 id="otherresources">Other Resources</h6> <p><a href="http://infiniteundo.com/post/25326999628/falsehoods-programmers-believe-about-time">Falsehoods programmers believe about time</a> - <a href="http://infiniteundo.com">Infinite Undo!</a><br/><br> <a href="https://stackoverflow.com/questions/2292334/difference-between-utc-and-gmt-standard-time-in-net">GMT Standard Time - A Gotcha</a></br></p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Configuring Cloudflare Page Rules]]></title><description><![CDATA[I moved to Cloudflare [https://www.cloudflare.com/] sometime ago so that I could take advantage of their tools such as free DNS, SSL, analytics, caching etc. to support my ghost [https://ghost.org/] based blog. I wanted to host my blog on https://blog.many-monkeys.com but I didn't want to host http://www.many-monkeys.com nor http://many-monkeys.com anywhere and wanted to keep my options open i.e. if I wanted to use the domain for more than just a blog. I still wanted to support https://www... ]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/configuring-cloudflare-page-rules/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d754</guid><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Sat, 14 Jan 2017 05:31:20 GMT</pubDate><media:content url="https://res-5.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/sunburst-547223_1920.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-5.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/sunburst-547223_1920.jpg" alt="Configuring Cloudflare Page Rules"/><p>I moved to <a href="https://www.cloudflare.com/">Cloudflare</a> sometime ago so that I could take advantage of their tools such as free DNS, SSL, analytics, caching etc. to support my <a href="https://ghost.org/">ghost</a> based blog.</p> <p>I wanted to host my blog on <a href="https://blog.many-monkeys.com">https://blog.many-monkeys.com</a> but I didn't want to host <a href="http://www.many-monkeys.com">http://www.many-monkeys.com</a> nor <a href="http://many-monkeys.com">http://many-monkeys.com</a> anywhere and wanted to keep my options open i.e. if I wanted to use the domain for more than just a blog.</p> <p>I still wanted to support <a href="https://www">https://www</a>... 
as there are a number of old links out there that point to my old site/blog but I didn't want to pay hosting for a single page that would redirect someone to the blog page. When I read up on this I found some DNS providers have a URL record which is used to configure this sort of behaviour. However I couldn't find out how to configure this sort of record in Cloudflare and so I remained stuck on the single page hosting solution.</p> <p>The other day I was clicking around the Cloudflare site and I stumbled on the Page Rules and after a bit of trial an error I got it to behave the way I wanted and so I thought I would share.</p> <h6 id="1configurethepagerules">1. Configure the page rules</h6> <p><img src="https://res-5.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/Capture-PR.png" alt="Configuring Cloudflare Page Rules"/></p> <p>I decided to point both names at the blog with a temporary redirect as that will allow me to change it in the future.</p> <h6 id="2configurethedns">2. Configure the DNS</h6> <p><img src="https://res-2.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/Capture-DNS.png" alt="Configuring Cloudflare Page Rules"/></p> <p>I configured the DNS such that all the domains I wanted to support will route through Cloudflare (the orange cloud image) but instead of being forwarded onto the blog hosting site at ghost.io which would/does fail, the Page Rules above kick-in and do the redirect to blog.many-monkeys.com and all works as I wanted.</p> <p>By the way you get three free Page Rules and I only needed to use two, I wonder if I'll ever find a need for the third.</p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Only time will tell]]></title><description><![CDATA[Recently, I decided to review all my old articles that have been scattered across the internet over the past decade or two with the idea of either tidying them up or answering questions that I may have neglected etc. During this review I came across an old article from 2007 hosted on CodeProject [http://www.codeproject.com/Articles/18834/Create-custom-dialogs-for-use-in-your-Visual-Studi] that demonstrates a way of creating custom dialogs for Visual Studio setup projects and was not documented]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/only-time-will-tell/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d752</guid><category><![CDATA[codeproject]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Wed, 13 Jul 2016 04:06:17 GMT</pubDate><media:content url="https://res-4.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/time.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-4.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/time.jpg" alt="Only time will tell"/><p>Recently, I decided to review all my old articles that have been scattered across the internet over the past decade or two with the idea of either tidying them up or answering questions that I may have neglected etc.</p> <p>During this review I came across an old article from 2007 hosted on <a href="http://www.codeproject.com/Articles/18834/Create-custom-dialogs-for-use-in-your-Visual-Studi">CodeProject</a> that demonstrates a way of creating custom dialogs for Visual Studio setup projects and was not documented by Microsoft. 
Since I went through the pain of working it all out, I thought I should share what I'd found out in case anyone else was in the same boat and needed to do the same. The problem I now have with this particular article is that people are still asking questions about how to solve something using the technique I provided and all I want to do is shout "No, that was 2007 there is a better way now, there was even a better way then. Don't any of you read the background of why we did this? Please, just stop!". I say this because there <em>are</em> better ways of working with installers and I now feel guilty that I may have sent one too many people down a horrible path, but what do I do? I could delete the article but that would mean that those who have used the article would lose the information I had provided, so instead I decided to add a new section to the top of the article, a sort of a "Caveat Emptor" or "Here be dragons!"</p> <p><img src="https://res-1.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/stop.png" alt="Only time will tell"/></p> <p>I hope it works but only time will tell.</p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Death by a thousand cuts]]></title><description><![CDATA[Every year I decide to spend some time refreshing OpenCover i.e. upgrading to the latest tools such as Visual Studio, upgrading all the packages that OpenCover depends on etc. etc. and it is never a good time for me. I don't know why I do this to myself, I know it is going to hurt and it's always the tiny things that somehow take ages to remedy. However I need to do this so that I can uninstall old versions of Visual Studio before I move on to addressing some of the latest features and issues w]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/death-by-a-thousand-cuts/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d751</guid><category><![CDATA[open cover]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Sun, 22 May 2016 11:13:09 GMT</pubDate><media:content url="https://res-1.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/barbed-wire-235759_1920.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-1.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/barbed-wire-235759_1920.jpg" alt="Death by a thousand cuts"/><p>Every year I decide to spend some time refreshing OpenCover i.e. upgrading to the latest tools such as Visual Studio, upgrading all the packages that OpenCover depends on etc. etc. and it is never a good time for me.</p> <p>I don't know why I do this to myself, I know it is going to hurt and it's always the tiny things that somehow take ages to remedy. However I need to do this so that I can uninstall old versions of Visual Studio before I move on to addressing some of the latest features and issues within OpenCover.</p> <p>Thing is, as projects go that I have to deal with on a regular basis, OpenCover is one of the simplest, it has around 20 projects and there are around 30 external packages and the majority of those projects and packages are there for testing purposes. I am quite happy about the level of testing (it is not perfect - but when is it ever?) 
and the number of serious defects are quite low; as of time of writing, the only defects in the <a href="https://github.com/OpenCover/opencover/issues">issues</a> are the ones we raised ourselves from the error logs we received and an issue relating to support for the latest <a href="https://msdn.microsoft.com/en-us/library/hh549175.aspx">Fakes</a> that comes with Visual Studio 2015.</p> <p>I've decided this time that I am going to list the steps I go through over the next few days (or weeks depending on whatever time I can find) as I do this again for 2016.</p> <ol> <li>Upgrade the project from Visual Studio 2013 to Visual Studio 2015</li> </ol> <ul> <li>[Note: that I didn't go for <a href="https://blogs.msdn.microsoft.com/visualstudio/2016/03/30/visual-studio-15-preview/">Visual Studio "15"</a> because it isn't supported by <a href="https://github.com/appveyor/ci/issues/753">AppVeyor yet</a> and I am not a masochist.]</li> <li>Installer project failed to upgrade - upgrade to latest Wix in the 3.X series and try again.</li> <li>No errors during upgrade this time.</li> </ul> <ol start="2"> <li>Building</li> </ol> <ul> <li>Fix C++ issues due to <em>std::hash_map</em> no longer being <a href="https://msdn.microsoft.com/en-us/library/0d462wfh.aspx">supported</a>.</li> <li>Some C# projects no longer compile due to some Roslyn related intellisense error, what the ...? <ul> <li>[external package <a href="https://www.nuget.org/packages/Mono.Gendarme/">Mono.Gendarme</a> can no longer be found even though it is in the references, another developer also found this and reported something similar on our <a href="https://gitter.im/OpenCover/opencover">gitter channel</a>.]</li> <li>This makes no sense, remove it, add it again - nope, remove it and get ReSharper to add it - nope.</li> <li>Check the signing process [the package comes unsigned so I have to manually sign it; is it really that hard to add a strong name key everyone?] - no change, still works in Visual Studio 2013 though. Why me?</li> <li>Upgrade to latest ReSharper [thanks again for the freebie JetBrains.], wait....</li> <li>Still doesn't find the reference (yes, I know, it was wishful thinking.) It's there, I can see it, why don't you look harder Visual Studio - Roslyn - whatever...!</li> <li>... have few beers whilst going round in circles and my <a href="http://www.urbandictionary.com/define.php?term=google-fu">google-fu</a> is failing me.</li> <li>New tack, use new ReSharper to decompile library into code and pull the bits I want into a new file, compile, move on - <a href="https://github.com/mono/mono-tools/blob/077b798b9c7823e42dddd07a4f70cd4dc8ed00af/gendarme/MIT.X11">bite me</a>!</li> <li>Fix installers and other packaging steps due to assembly removal.</li> </ul> </li> </ul> <ol start="3"> <li>Testing <ul> <li>Seems changes to the compiler behaviour mean that some of the tests that use the output of the compiler to test how OpenCover works now fail, at least it is consistent in release and debug builds. Fix them up.</li> <li>SpecFlow integration tests fail on command line. Have to install new plugin that supports Visual Studio 2015 to run them in Visual Studio using ReSharper.</li> <li>Find I can't debug (use breakpoints that is) tests that have been built as Release even though full PDBs exist. Get latest updates of Visual Studio 2015 i.e. Service pack 2, wait... [I am sure I installed Visual Studio 2015 quicker than it took to service pack it.]. What have they done to the logo..? 
Never mind, let's just move on.</li> <li>Breakpoints are now working and, okay, fix the parsing [how did we get away with that for so long] and the tests are now passing again.</li> </ul> </li> <li>Building</li> </ol> <ul> <li>Build on the command line again... Success!</li> <li>Build <em>release</em> build on the command line... Success!</li> <li>Push to GitHub and run a test build on AppVeyor... Success!</li> </ul> <ol start="5"> <li>Upgrading the Packages</li> </ol> <ul> <li>First undo the hack from what now seems like weeks ago as I can now once again reference <em>Mono.Gendarme</em>; it seems the Service Pack fixed many things, I promise I am not bitter about this, much.</li> <li>I am now on familiar ground, so let's have a look at what packages need upgrading using Visual Studio, okay 9 updates, 8 of them are used only for testing and 6 of them have gone through major release number changes; I am worried about these because if there is going to be a breaking change then it'll be those 6; it's not showing the solution-only packages so I'll deal with them manually.</li> <li>First, upgrade the 3 packages with the potentially least impact i.e. no major version number changes. [By the way, the UI for Nuget package management is far, far better than before]. Build, test... Success!</li> <li>Now try the other packages one by one [this allows me to rollback or fix issues with just a single package at a time]. My biggest concerns are NUnit and SpecFlow as these are dependent on each other and my "spider-sense" is tingling, so I am going to leave these to last. <ul> <li><strong>Unity</strong> (used for testing only). Build, test... Success!</li> <li><strong>xUnit</strong> (used for testing only). Build, test... Fail. Update xunit.runners package, find I also need to update nuget.exe itself [at least that is just a one liner], only to find it is empty except for a readme file informing me to download another package. Download that package. Update scripts, build, test, ... Success!</li> <li><strong>Specflow</strong> (used for testing only). There are a number of packages all interlinked on versions and dependencies. Now running in a strange universe with two versions of NUnit packages in my solution; I know from past experience they will not play nicely with each other in the same folder so will need to take the leap on NUnit soon. Run tests using ReSharper, tests fail due to the execution folder no longer being the same, fix them up and try again... Success!</li> <li><strong>NUnit</strong> (used for testing only). I've heard some war stories about this upgrade... Well that went well, only one build issue (an obsolete method) and easily corrected. Build, test... Fail. It seems more than just obsoleted types have changed, though not sure if it is a NUnit thing or a ReSharper integration thing. Ah, more tests failing because the execution folder has changed, let's solve them first as we have a solution for that. And, now we are left with a change in how we handle fields used for <em>ValueSource</em>, easy to remedy (see the sketch below). Now I need to manually update the runner for the command line build. Build, test... Success! Because I've upgraded the test runner I'll push the build to AppVeyor again just in case something serious has changed.</li> <li><strong>log4net</strong> (used in multiple projects). Build, test... Success!</li> </ul> </li> </ul> <p>And we're done, well hopefully, nothing serious has been caught and we are now running on Visual Studio 2015 with the ability to, at last, start using the latest syntax available. 
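</p>
<p>For anyone hitting a similar <em>ValueSource</em> issue when moving to NUnit 3: the usual culprit is that NUnit 3 requires source members to be static. The snippet below is a minimal, made-up illustration of that rule rather than the actual OpenCover test code.</p>
<pre><code class="language-csharp">using NUnit.Framework;

public class ExampleTests
{
    // NUnit 2.x allowed an instance field as a ValueSource;
    // NUnit 3 requires the source member to be static.
    private static readonly int[] VisitCounts = { 0, 1, 100 };

    [Test]
    public void Handles_All_Visit_Counts([ValueSource("VisitCounts")] int count)
    {
        Assert.That(count, Is.GreaterThanOrEqualTo(0));
    }
}
</code></pre>
<p>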
Now this upgrade took about 3 days overall with a lot of elapsed time in between whilst I did other things (life) and waiting on stuff to install.</p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Improving your source code quality]]></title><description><![CDATA[One reason I work on Open Source projects such as OpenCover [https://github.com/OpenCover/opencover] is so that I can try things out, experiment if you wish, sometimes it's TDD techniques or a new mocking framework, and sometimes it's tooling; some of these experiments were successes and some were successful failures; my experiment in using SpecFlow for unit testing was interesting but I'll never do that again; my knowledge in what I can do in SpecFlow however has greatly improved. Tools help u]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/improving-your-source-code/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d750</guid><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Sat, 23 Jan 2016 00:24:07 GMT</pubDate><media:content url="https://res-5.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/premium-991221_1280.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-5.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/premium-991221_1280.jpg" alt="Improving your source code quality"/><p>One reason I work on Open Source projects such as <a href="https://github.com/OpenCover/opencover">OpenCover</a> is so that I can try things out, experiment if you wish, sometimes it's TDD techniques or a new mocking framework, and sometimes it's tooling; some of these experiments were successes and some were successful failures; my experiment in using SpecFlow for unit testing was <em>interesting</em> but I'll never do that again; my knowledge in what I can do in SpecFlow however has greatly improved.</p> <p>Tools help us produce better software, the better our tools <strong>and</strong> the more we know how to use those tools help us become better developers. Just because you have the best tools and splashed out many $$$ to access those tools, if you don't learn how to use them properly or at least the basics then you might as well save your money. Thing is, the lack of money in the Open Source world is our problem, we've devoted our time and usurped our work laptops to access those <a href="https://www.visualstudio.com/products/how-to-buy-vs">expensive IDEs</a> and additional tooling, but can we really tell our partners we have personally spent thousands on some software or infrastructure to help make better code we are giving away for free. Thankfully we .NET developers now have access to <a href="https://www.visualstudio.com/en-us/products/visual-studio-community-vs.aspx">Visual Studio Community Edition</a> and we may be able to get away with the odd $100 here and there but <em>Code Quality</em> tools are probably at the more expensive end of the scale and out of reach for most individuals. There are however a few good <em>Code Quality</em> tools available to us open source developers for free, in most cases, and they are relatively easy to set up.</p> <h5 id="coverity">Coverity</h5> <p><a href="https://scan.coverity.com/">Coverity</a> was the first quality metric tool we integrated into the OpenCover pipeline; it handles C#, C++ and many others. 
This tool was suggested some time ago by one of the OpenCover contributors and we addressed the more serious issues he found at the time but it took a little while before we got round to integrating it into the pipeline. Integration took about a day of tinkering locally and then building the scripts so that it would run on <a href="https://ci.appveyor.com/project/sawilde/opencover/build/4.6.424">AppVeyor</a>; I now ask myself why did we wait? AppVeyor had already preinstalled the Coverity package onto their images and added it to the path so it was quite a simple task. Once we had finally succeeded in uploading our first scan for analysis we got a clean <a href="https://scan.coverity.com/projects/opencover-opencover">dashboard</a> and the ability to configure code exclusions and manage the issues.</p> <p>As you can see, a number of issues were found and the team rallied around to fix them. Only a few defects were dismissed as false positives and only on one occasion did fixing one defect introduce another; we are after all only human. We now have it scheduled to run once a week to keep us honest.</p> <p><img src="https://res-4.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/coverity.png" alt="Improving your source code quality"/></p> <p>You can also run Coverity locally on your machine and upload to their site (open source projects only). I'll provide the steps I used and there are some <a href="https://www.nuget.org/packages/PublishCoverity/">nuget packages</a> about to help you if you wish to use them but I didn't really feel a need.</p> <p><strong>Installation steps:</strong></p> <ul> <li>create an account on Coverity and provide some project details for your project. You can't view the results for free anywhere other than the portal so you might as well do this.</li> <li><a href="https://scan.coverity.com/download">download</a> the package for your platform, <a href="http://www.thewindowsclub.com/fix-windows-blocked-access-file">unblock</a> and unpack.</li> </ul> <p><strong>Running:</strong></p> <ul> <li>this is very simple as you just use the supplied tool <code>cov-build.exe</code> to run your build; in our case that is <code>build.bat build-release-platforms-x64</code>, e.g. from the build script</li> </ul> <pre><code> <exec program="${coverity.exe}" commandline="--dir cov-int --encoding=UTF-8 build.bat build-release-platforms-x64" /> </code></pre> <p><strong>Viewing the results:</strong></p> <ul> <li>you will need to upload your results; you can use a nuget package for this but I used curl instead, e.g. from my build script</li> </ul> <pre><code> <exec program="${tools.folder}/7-Zip/7za.exe">
   <arg value="a" />
   <arg value="coverity.zip" />
   <arg value="cov-int" />
 </exec>
 <exec program="${curl.exe}" commandline='--form token=${coverity.token} --insecure --form email=${coverity.email} --form file=@coverity.zip --form version="${ci.buildNumber}" --form description="${ci.buildNumber}" https://scan.coverity.com/builds?project=OpenCover%2Fopencover' />
</code></pre> <ul> <li>wait... sometimes your code gets analysed really quickly and sometimes it doesn't, there are a few restrictions with open source projects such as frequency of submission and code size.</li> <li>play...</li> </ul> <h5 id="sonarqube">SonarQube</h5> <p><a href="http://www.sonarsource.com/">SonarSource</a> has an Open Source offering called <a href="http://www.sonarqube.org/">SonarQube</a> and even offers integration into their own online <a href="https://nemo.sonarqube.org/">dashboard</a>. 
This integration is not currently available to those who require a windows build platform so until they have implemented their push feature there is probably going to be some sort of hosting outlay to make your results publicly accessible.</p> <p><img src="https://res-4.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/sonar.png" alt="Improving your source code quality"/></p> <p>SonarQube is a bit more verbose/pedantic than Coverity and found 13 critical defects. All of these were OWASP related issues due to the <code>Console.WriteLine</code> statements but since OpenCover is a console application they will all be Resolved as 'won't-fix' or 'false-positive'; still trying to work out what is the best approach. In fact it would be easy to dismiss many of the issues found as some of them are a matter of style; SonarQube does allow you to review all the rules and change their usage depending on how the team wishes to treat each rule. In hindsight, before we started fixing the defects found by Coverity, it might have been better to get both products working to compare the output and see if they found the same issues</p> <p>One rule that I habitually turn off is the "Literal suffixes should be upper case" rule. This is a 'minor' rule that tries to insist that I write my decimals and doubles as <code>0M</code> and <code>0F</code> rather than my preferred <code>0m</code> and <code>0f</code>; I just think the latter is easier to read. They do at least provide a reason for each rule which in this case is, 'Using upper case literal suffixes removes the potential ambiguity between "1" (digit 1) and "l" (letter el) for declaring literals.' i.e. is it <code>0l</code> or <code>01</code>, which is fair enough for that case but to then blanket apply the rule is a bit excessive. There are some alternative developer fonts e.g. <a href="http://hivelogic.com/articles/top-10-programming-fonts">top-10-programming-fonts</a>, that you can use in your development environments that make it easier to distinguish the <code>1</code> and <code>l</code> if it is such an issue for you and you don't want to apply the rule like myself.</p> <p>I did find setting up SonarQube a lot trickier than Coverity as I also had to self-host, steps were available but just not in one place that I could find. For initial testing I just used the in-memory database but it has some caveats and I have since experimented with a MySql+SonarQube setup on windows and linux. 
The install steps I used for my initial windows hosted experiment follow.</p> <p><strong>Installation Steps:</strong></p> <ul> <li>install/update your <a href="http://www.oracle.com/technetwork/java/javase/downloads/jre8-downloads-2133155.html">Java Runtime Environment</a></li> <li>download, unblock, and unpack <a href="https://sonarsource.bintray.com/Distribution/sonarqube">sonarqube</a></li> <li>download, unblock, and unpack <a href="https://sonarsource.bintray.com/Distribution/sonar-csharp-plugin">sonar-csharp-plugin</a> and place in extensions</li> <li>download, unblock, and unpack <a href="https://github.com/SonarSource/sonar-msbuild-runner/releases">MSBuild.SonarQube.Runner</a></li> </ul> <p><strong>Running:</strong></p> <p>*change to match your setup</p> <ul> <li>run <code>sonarqube-5.3\bin\windows-x86-32\startsonar.bat</code></li> <li>open Visual Studio Developer Prompt 2013/5 and run the following</li> </ul> <pre><code>\Projects\sonarqube-runner\MSBuild.SonarQube.Runner.exe begin /k:"opencover" /n:"opencover" /v:"0.0.0.1"
msbuild main\OpenCover.sln /t:rebuild
\Projects\sonarqube-runner\MSBuild.SonarQube.Runner.exe end
</code></pre> <p><strong>Access Results:</strong></p> <ul> <li>Access the site on <a href="http://localhost:9000">http://localhost:9000</a> using admin/admin</li> <li>change the password of the setup if it is publicly accessible.</li> <li>play...</li> </ul> <p>We are still looking at how we host and integrate SonarQube into our pipeline and we may look into using the C++ community plugin.</p> <h5 id="resharper">ReSharper</h5> <p><a href="https://www.jetbrains.com/resharper/">ReSharper</a> is one of the best productivity tools about for a .NET developer IMO and it also has some built-in code quality rules. We've been using ReSharper for some time now and often try to get to the ReSharper "Green tick of approval" on the files; in doing so we probably preemptively reduced the number of Coverity and SonarQube issues detected when we ran those tools.</p> <h6 id="update25012016">Update (25/01/2016)</h6> <p>I eventually got round to integrating it into the build pipeline; I used a headless ubuntu vm and followed these <a href="http://dev.mamikon.net/installing-sonarqube-on-ubuntu/">instructions</a>; remembering of course to not use the same <em>default</em> password. The results of these efforts can be found <a href="https://http.cat/404">here</a>.</p> <h6 id="update17072016">Update (17/07/2016)</h6> <p>I have since migrated the information to <a href="https://sonarqube.com/overview?id=opencover">sonarqube.com</a>; or nemo as it used to be known.</p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[A line in the sand]]></title><description><![CDATA[Just recently I read Rework [http://amzn.to/1KD6kqx] again on my kindle as this book really resonated with me at the time and I thought it was about time I read it again; the book is from the guys at 37 Signals aimed at people starting a business. 
When I got to the section titled "Draw a line in the sand" I realised that this book is also appropriate to anyone who is thinking of creating/managing or getting involved in an open-source project/product; I am not talking about flinging up some source]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/a-line-in-the-sand/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d74f</guid><category><![CDATA[review]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Mon, 03 Aug 2015 10:01:09 GMT</pubDate><media:content url="https://res-1.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/6515079935_90bfac548f_b.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-1.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/6515079935_90bfac548f_b.jpg" alt="A line in the sand"/><p>Just recently I read <a href="http://amzn.to/1KD6kqx">Rework</a> again on my kindle as this book really resonated with me at the time and I thought it was about time I read it again; the book is from the guys at 37 Signals aimed at people starting a business. When I got to the section titled "Draw a line in the sand" I realised that this book is also appropriate to anyone who is thinking of creating/managing or getting involved in an open-source project/product; I am not talking about flinging up some source code or sending a pull-request and going ta-da, I mean potentially committing several hours per week over many months if not years.</p> <p>This particular section in the book talks about drawing a line under what your product does and doesn't do and asks what you are willing to leave out to keep the product true even if it means some of your customers don't get the features they want. We see this far too often with great commercial products that start to become bloated as they try to chase every possible sale with more and more features that fewer and fewer people use and with an open source project I think it is all too easy to fall into the trap of fulfilling each feature request because you may think "great, someone wants to use my project but only if I do X, Y and Z by next week." I've had this scenario many times over the years with <a href="https://github.com/OpenCover/opencover">OpenCover</a> and I have to decide whether or not to implement a feature (also do I have the time to do it and how will I support it) so I've had to <em>draw a line in the sand</em> many times using the following criterion: "do I need it?" It really is as simple as that, if I don't need it myself I am not going to spend my time on it. Now that doesn't mean I will reject code from other developers that meets their needs and enhances the product (especially since they have spent the time and effort to share their work, that would just be churlish) but I have to draw a line under what I am willing to commit to in order that I enjoy working on the project and not feel I am working for someone else, for free.</p> <p>One feature I've never implemented is creating a UI for OpenCover even though <a href="https://github.com/sawilde/partcover.net4">PartCover</a> (OpenCover's predecessor) has one, as I felt the UI provided by <a href="https://github.com/danielpalme/ReportGenerator">ReportGenerator</a> was awesome and did the job so perfectly that I felt anything I did implement would always be inferior. 
More recently another UI related feature that I didn't implement due to lack of personal need was "Visual Studio Integration"; I use <a href="https://www.jetbrains.com/resharper/">ReSharper</a> as a test runner and I don't actually look at the code coverage until I think I have all the right tests in place; this keeps me honest, I believe, in that I try to write tests for the code rather than tests for the coverage. However some members of the community wanted one and went and developed an extension themselves, actually there are two extensions currently in development, <a href="https://github.com/OpenCoverUI/OpenCover.UI">OpenCoverUI</a> and <a href="https://github.com/leemorris/Testify">Testify</a>, which I think is great.</p> <p>Another example is a feature that has sat on the backlog for over 2 years: "<a href="https://github.com/OpenCover/opencover/issues/144">Support Coverage of Windows Store Applications on Windows 8</a>". I know how to do it, well I have some guidance from <a href="http://blogs.msdn.com/b/davbr/archive/2013/01/09/writing-a-profiler-of-windows-store-apps.aspx">David Broman at Microsoft</a> so I know where to start, but I don't actually need it. When I write a windows store or phone app I put the majority of the code in an assembly separate from the UI and use unit testing to get the coverage. I don't care that much about coverage from integration testing as I find the results I get from unit testing far more valuable. Now I know people sometimes in the pursuit of 100% coverage want to get coverage results from integration testing and have some automated UI tests etc and OpenCover will <em>usually</em> support them. Hooking into windows services was implemented by someone who needed that capability, I tested and accepted their pull-request but I've never had to use the capability. IIS integration is tricky, I always use IISExpress myself if I need to go down this path, but the guidance was provided by someone who needed it themselves and so I shared it. One day I may need to implement this particular feature so I am not going to take it down, and if someone wants it beforehand the guidance is on the issue should they want to implement it themselves; though there has been no request for it either.</p> <p>I could list other sections that seemed related and applicable to open source projects e.g. "Be a curator" or "Don't be a hero", but frankly I think I would just be listing the entire book contents, best you get <a href="http://amzn.to/1KD6kqx">Rework</a> yourself and read it and then read it again; each time I read it I get something new from it.</p> <p><img src="https://res-3.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/rework.jpg" alt="A line in the sand"/></p> <p><em><sub>Image - <a href="http://www.flickr.com/photos/64588110@N00/6515079935">Mandala, Coney Beach 4</a> via <a href="http://photopin.com">photopin</a> <a href="https://creativecommons.org/licenses/by-nc-nd/2.0/">(license)</a></sub></em></p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Free stuff for Open Source .NET development]]></title><description><![CDATA[For the past few years that I've been working on OpenCover [https://github.com/OpenCover/opencover] I've had the opportunity to use a number of tools during its development, a few of those are commercial tools that have been made available for free to developers of Open Source projects or just to the project itself because I asked nicely. 
Some of those tools I still use and some just carried the project through part of its journey so I thought it would be nice to give those tools a shout o]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/free-stuff-for-opensource-net-development/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d74e</guid><category><![CDATA[open source]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Sat, 25 Jul 2015 06:58:58 GMT</pubDate><media:content url="https://res-3.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/spring-651836_1280.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-3.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/spring-651836_1280.jpg" alt="Free stuff for Open Source .NET development"/><p>For the past few years that I've been working on <a href="https://github.com/OpenCover/opencover">OpenCover</a> I've had the opportunity to use a number of tools during its development, a few of those are commercial tools that have been made available for free to developers of Open Source projects or just to the project itself because I asked nicely.</p> <p>Some of those tools I still use and some just carried the project through part of its journey so I thought it would be nice to give those tools a shout out and perhaps other developers of Open Source projects in this space may find them useful.</p> <h4 id="development">Development</h4> <p><strong><a href="https://www.visualstudio.com/en-us/products/visual-studio-community-vs.aspx">Visual Studio - Community Edition</a></strong> - When I first started developing OpenCover I had been previously supporting <a href="https://github.com/sawilde/partcover.net4">PartCover</a> which I had upgraded to support .NET4 so that we could use it for work. Then as I started planning/developing OpenCover I was lucky to have been given a free MSDN licence. Since those days however Microsoft have decided to release a Community Edition of Visual Studio so that we developers (subject to some <a href="https://www.visualstudio.com/support/legal/mt171547">restrictions</a>) can use the latest and greatest tooling and really keep ahead.</p> <p><strong><a href="https://github.com/">GitHub</a></strong> - Unless you've been living under a rock for the past few years you probably know what GitHub is; for those rock dwellers it is a source code repository for those using <a href="https://git-scm.com/">Git</a>.</p> <p><strong><a href="https://www.jetbrains.com/resharper/">Resharper</a></strong> - created by JetBrains and probably the mainstay of most .NET developer toolsets at work (after Visual Studio of course). If you have used ReSharper (or any of the JetBrains tools) you'll know just how much more productive you become when you use their tools.</p> <p><strong><a href="http://www.ndepend.com/">NDepend</a></strong> - was one of the earliest supporters of OpenCover. 
NDepend also consumes the coverage output from OpenCover (after converting it to the NCover format) and integrates that with its static analysis of your code to produce a wonderful dashboard of your code.</p> <p><img src="https://res-4.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/DependencyGraphSnapshot.png" alt="Free stuff for Open Source .NET development"/></p> <p><strong><a href="https://bitbucket.org/">BitBucket</a></strong> - similar to GitHub it is a source code repository but supports <a href="https://en.wikipedia.org/wiki/Mercurial">Mercurial</a> as well as Git.</p> <p><strong><a href="https://www.jetbrains.com/decompiler/">DotPeek</a></strong> - another great tool from JetBrains and is free. It integrates well with ReSharper but is missing some features such as show IL; when I need to go that low level I use <a href="https://msdn.microsoft.com/en-us/library/f7dy01k1(v=vs.110).aspx">ildasm</a> or more often than not I use <a href="http://ilspy.net/">ILSpy</a>.</p> <h4 id="continuousintegration">Continuous Integration</h4> <p><strong><a href="https://www.atlassian.com/software/bamboo">Bamboo</a></strong> - another tool provided by Atlassian, this was the first build system used to build OpenCover based on commits pushed up to GitHub. It wasn't totally free as I still had to pay the compute usage of the windows image I was running in AWS but it rarely went over $5 a month so I can't really complain. It was quite configurable and as I already had a command line build script it was easy to set up and integrate. It was lacking a few features at the time such as building on pull requests etc but it was useful as it meant that I could check that anything I pushed would build on another developer's machine.</p> <p><strong><a href="http://www.appveyor.com/">AppVeyor</a></strong> - is awesome, I can't rave enough about how great this CI system is for .NET developers. It's so easy to set up that I use it on some other .NET projects that I am involved with e.g. <a href="https://github.com/OpenCoverUI/OpenCover.UI">OpenCover.UI</a> and <a href="https://github.com/MYOB-Technology/AccountRight_Live_API_.Net_SDK">AccountRight Live API SDK</a>. It works with pull requests and the nicest thing is that it also integrates with Nuget/MyGet and GitHub so I automate the pushing of releases by just pushing the code to a specific branch.</p> <h4 id="reporting">Reporting</h4> <p><strong><a href="https://github.com/danielpalme/ReportGenerator">ReportGenerator</a></strong> - is a great tool for visualizing the output from OpenCover.</p> <p><strong><a href="https://coveralls.io/">Coveralls.io</a></strong> - I use Coveralls to display online the test coverage of my projects, yes I actually use OpenCover to get coverage statistics of OpenCover. The online nature means that any contributors of pull-requests can get that coverage feedback.</p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Moving blogging platform to Ghost]]></title><description><![CDATA[Welcome to my new blog. Like it? I hope you do as I know I prefer it to the the Blogger version I was originally using. Moving platforms had its ups and downs so I thought I would detail a bit of the journey. Getting started was quite easy, I decided to trial it with Ghost [https://ghost.org] where the nice guys there will do the import for you (of course they will, they want you to sign up with them). 
Most of the transformation went without a hitch but I have quite a few code samples and not a]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/moving-blogging-platform-to-ghost/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d74d</guid><category><![CDATA[ghost]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Sat, 18 Jul 2015 09:13:21 GMT</pubDate><media:content url="https://res-3.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/cemetery-395953_1280.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-3.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/cemetery-395953_1280.jpg" alt="Moving blogging platform to Ghost"/><p>Welcome to my new blog. Like it? I hope you do as I know I prefer it to the Blogger version I was originally using. Moving platforms had its ups and downs so I thought I would detail a bit of the journey.</p> <p>Getting started was quite easy, I decided to trial it with <a href="https://ghost.org">Ghost</a> where the nice guys there will do the import for you (of course they will, they want you to sign up with them). Most of the transformation went without a hitch but I have quite a few code samples and not all of them converted properly and so I had to do that work myself. Thankfully <a href="https://ghost.org">Ghost</a> uses <a href="https://guides.github.com/features/mastering-markdown/">Markdown</a> so with a bit of abuse of the <pre /> tag I managed to get some semblance of order to the posts. But it wasn't perfect as the code samples were rather plain and I rather liked the code-highlighter script I was using on Blogger. To get round this I needed to work out how to create my own theme.</p> <p>Creating your own theme sounds daunting, doesn't it? I know I was already in strange waters so I was initially hesitant to go down this path. I looked at using the <strong>Code Injection</strong> section but unless the highlighter code I was intending to use had a CDN or I had an alternate hosting location this was going to get messy and very quickly unwieldy. I also wanted to use my little purple monkey icon for the <em>favicon.ico</em> and I couldn't see how I could do that within the management portal. So, I had a deeper investigation into how themes are deployed into Ghost via the portal and quickly realised it was just a zip'd up folder of files. I actually liked the default <a href="https://github.com/TryGhost/Casper">Casper</a> theme so I used Github to fork the repo and then branched again to make my changes. I know it seems complex but I wanted a way to apply any fixes back (should I, in the unlikely event, find any) but more so that I could take fixes from the original and apply them to my branch and even cherry pick changes should I so wish. Eventually I suspect the changes I will probably make will make this a near impossible task but until then...</p> <p>So applying the <em>favicon.ico</em> change seemed the simplest thing to start with and with a little help from <a href="https://www.ghostforbeginners.com/how-to-add-a-favicon-to-your-ghost-blog/">Ghost For Beginners</a> (an excellent resource) I was on my way; you can see my change here on <a href="https://github.com/sawilde/Casper/commit/e5f154ccad5aa7f170c17d3dba958edcf7bec69b">github</a>. Emboldened I decided to try my next change which was to add a code highlighter script. 
It was suggested I try <a href="http://prismjs.com/download.html">Prism</a> and as it had most of the features I wanted I gave it go; see change here on <a href="https://github.com/sawilde/Casper/commit/3318169692fb146cf26947eb6fb6f5f95c351cee">github</a>.</p> <p>Both these changes were quite simple and testing them by zipping the folder and uploading via the portal was quite easy. It would be nice to be able to push my changes via Git (as all the cool kids like to do) but for now it would suffice.</p> <p>Now all I had to do was re-edit my posts and apply the appropriate markup and I was done.</p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Using GMock with Visual Studio CppUnitTestFramework]]></title><description><![CDATA[One of the things I have been a bit disappointed with myself during the development of OpenCover [https://github.com/OpenCover/opencover] is the lack of unit testing around the C++ code that makes up the profiler. I did toy with GTest [https://code.google.com/p/googletest/] and got some decent tests around the instrumentation engine but I was never able to actually test the profiler callbacks, also I found the lack of GTest integration with Visual Studio quite irritating; I know I have been spo]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/using_gmock_with_visual_studio_cpp_unit_test_framework/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d738</guid><category><![CDATA[gmock]]></category><category><![CDATA[tdd]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Fri, 03 Apr 2015 06:06:00 GMT</pubDate><media:content url="https://res-2.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/macbook-577758_1280.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-2.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/macbook-577758_1280.jpg" alt="Using GMock with Visual Studio CppUnitTestFramework"/><p>One of the things I have been a bit disappointed with myself during the development of <a href="https://github.com/OpenCover/opencover">OpenCover</a> is the lack of unit testing around the C++ code that makes up the profiler.</p> <p>I did toy with <a href="https://code.google.com/p/googletest/">GTest</a> and got some decent tests around the instrumentation engine but I was never able to actually test the profiler callbacks, also I found the lack of GTest integration with Visual Studio quite irritating; I know I have been spoilt by ReSharper. Recently however, during handling <a href="https://msdn.microsoft.com/en-us/library/hh549175.aspx">Fakes</a> through OpenCover, I had an opportunity to work out how to load the profiler using registry free loading and realised that perhaps such testing might be within my reach, what I was missing however was a mocking library and one that I could use with Visual Studio tooling.</p> <p>Frankly GMock was the only candidate, the commercial alternatives being out as this was for an OSS project, but the instructions all seemed to want to build a number of libraries (64/32 bit Debug/Release) that I would have to statically link to and maintain these builds should the source or build options change. 
I decided to try a different tack that wouldn't involve building libraries and it has worked out reasonably well, so I thought it would be worth commenting on here.</p> <h4 id="step1">Step 1</h4> <p>Get the latest GMock (1.7.0) library as a zip file and uncompress it somewhere within your repository.</p> <h4 id="step2">Step 2</h4> <p>From within Visual Studio update the Additional Include Directories to include the following paths</p> <pre><code class="language-bash">$(SolutionDir)lib\gmock-1.7.0
$(SolutionDir)lib\gmock-1.7.0\include
$(SolutionDir)lib\gmock-1.7.0\gtest
$(SolutionDir)lib\gmock-1.7.0\gtest\include
</code></pre> <h4 id="step3">Step 3</h4> <p>Add the following to your "stdafx.h"</p> <pre><code class="language-cpp">#include "gmock/gmock.h"
#include "gtest/gtest.h"
</code></pre> <h4 id="step4">Step 4</h4> <p>Add the following to your "stdafx.cpp"</p> <pre><code class="language-cpp">// The following lines pull in the real gmock *.cc files.
#include "src/gmock-cardinalities.cc"
#include "src/gmock-internal-utils.cc"
#include "src/gmock-matchers.cc"
#include "src/gmock-spec-builders.cc"
#include "src/gmock.cc"

// The following lines pull in the real gtest *.cc files.
#include "src/gtest.cc"
#include "src/gtest-death-test.cc"
#include "src/gtest-filepath.cc"
#include "src/gtest-port.cc"
#include "src/gtest-printers.cc"
#include "src/gtest-test-part.cc"
#include "src/gtest-typed-test.cc"
</code></pre> <h4 id="step5">Step 5</h4> <p>Now all you need to do is initialise GMock and you are ready; as I am using the CppUnitTestFramework I do the following.</p> <pre><code class="language-cpp">TEST_MODULE_INITIALIZE(ModuleInitialize)
{
    // enable google mock
    ::testing::GTEST_FLAG(throw_on_failure) = true;
    int argc = 0;
    TCHAR **argv = NULL;
    ::testing::InitGoogleMock(&amp;argc, argv);
}
</code></pre> <p>From here you just follow the GMock documentation and add some expectations etc.; you can, as I discovered, even mock COM objects and have expectations on them e.g.</p> <pre><code class="language-cpp">EXPECT_CALL(*profilerInfo, SetEventMask(EVENT_MASK_WHEN_FAKES))
    .Times(1)
    .WillRepeatedly(Return(S_OK));
</code></pre> <h4 id="bonusround">Bonus Round</h4> <p>There were a few little niggles however, the first of which is that if an expectation fails, the Visual Studio test runner takes a little too long to close down (I suspect this may be something on my machine related to DrWatson). The second was that if an expectation did fail I could only initially see the result using DebugView - ugh - however I found a solution at <a href="http://www.durwella.com/post/96457792632/extending-microsoft-cppunittestframework">http://www.durwella.com/post/96457792632/extending-microsoft-cppunittestframework</a> which involves using some extra macros; I added these to my "stdafx.h" and voila the results are now available in Visual Studio. Finally, I found the mocks were not very lightweight and in fact if I left them hooked in they caused performance issues; by replacing them with admittedly less useful stubs I could avoid this when necessary.</p> <p><strong>Update 20/2/2016</strong> The link to www.durwella.com has stopped working, but a copy of the article can be found on tumblr - <a href="http://durwella.tumblr.com/post/96457792632/extending-microsoft-cppunittestframework#96457792632">http://durwella.tumblr.com/post/96457792632/extending-microsoft-cppunittestframework#96457792632</a>. 
For completeness however I am posting the macros here as well with all attribution belonging to durwella.com</p> <pre><code class="language-cpp">#define _TEST_METHOD_EX_EXPANDER(_testMethod)\ _testMethod { try // Adds support for seeing std::exception in test output. Requires TEST_METHOD_EX_END after test. // Example: // TEST_METHOD_EX_BEGIN(MyFailingTest){ throw std::exception("What happened"); } TEST_METHOD_EX_END; #define TEST_METHOD_EX_BEGIN(_methodName) _TEST_METHOD_EX_EXPANDER(TEST_METHOD(_methodName)) // Use following test declared with TEST_METHOD_EX_BEGIN #define TEST_METHOD_EX_END\ catch (::std::exception& ex) \ { \ ::std::wstringstream ws; ws << "Unhandled Exception:" << ::std::endl << ex.what(); \ ::Microsoft::VisualStudio::CppUnitTestFramework::Assert::Fail(ws.str().c_str());\ } \ } </code></pre> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Happy Birthday OpenCover]]></title><description><![CDATA[Happy Birthday Today OpenCover [https://github.com/OpenCover/opencover] is 4 (four) years old, where has the time gone? In that time it has had over 60,000 nuget downloads [http://www.nuget.org/packages/opencover], been adopted by the SharpDevelop community as the coverage tool for their IDE, and, as I found out the other day, is also being used by the corefx team [https://github.com/dotnet/corefx] to supply coverage information [http://dotnet-ci.cloudapp.net/job/dotnet_corefx_coverage_windows/C]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/happy_birthday_open_cover/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d73b</guid><category><![CDATA[open cover]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Sat, 21 Feb 2015 21:33:00 GMT</pubDate><media:content url="https://res-1.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/clown-652241_1280.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h4 id="happybirthday">Happy Birthday</h4> <img src="https://res-1.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/clown-652241_1280.jpg" alt="Happy Birthday OpenCover"/><p>Today <a href="https://github.com/OpenCover/opencover">OpenCover</a> is 4 (four) years old, where has the time gone? In that time it has had over 60,000 <a href="http://www.nuget.org/packages/opencover">nuget downloads</a>, been adopted by the SharpDevelop community as the coverage tool for their IDE, and, as I found out the other day, is also being used by the <a href="https://github.com/dotnet/corefx">corefx team</a> to supply <a href="http://dotnet-ci.cloudapp.net/job/dotnet_corefx_coverage_windows/Code_Coverage_Report/">coverage information</a> on their tests.</p> <p><img src="https://res-1.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/yes.jpg" alt="Happy Birthday OpenCover"/></p> <p>Four years ago I started on OpenCover (<a href="https://github.com/OpenCover/opencover/commit/23ecce5026b5f609faad57bae3917d4248749316">first commit</a> - not very interesting but a stake in the ground) in order to create a code coverage tool for the .NET platform that could be used by anyone, but especially so that those of us in the open source community could have a tool available to us to help enhance our testing feedback; in the past we have seen some tools go commercial, some just vanish and others just abandoned. 
I also wanted to share some of the knowledge I had picked up in this area but no longer used in my day-to-day activities and to ensure it remains within the community by making it maintainable and available without restriction.</p> <p>It took nearly 6 months to get the first <a href="https://monkey-see-monkey-do-blog.herokuapp.com/2011/06/opencover-first-beta-release">beta release</a> and since that time we have added sequence and branch coverage, support for .NET 2 and .NET 4+, 32 and 64 bit support, and even Silverlight. Later features such as coverage by test and hooking into services and IIS support; not everything works as seamlessly as I would like but the community has either lived with it or improved it - which was the outcome I was seeking. Just recently we even added support for Microsoft.Fakes because some people wanted to use OpenCover for coverage with their tests that used Fakes rather than the coverage tool that they already had available; that was an interesting learning exercise, as well due to some very fortuitous googling.</p> <p>There even seems to be some movement to make a Mono version of OpenCover which was not something I saw coming but is also quite exciting, especially as Visual Studio now has support for Android and iPhone development, we knew about Xamarin/Mono but actual Visual Studio integration? Who 4 years ago would have seen that one coming ...?</p> <h4 id="highlights">Highlights</h4> <p>One of the highlights of the past few years was starting at my current place of work (MYOB) and then overhearing a conversation within the devops/build team who were discussing the coverage results of this free coverage tool they had found on github, imagine my delight when I realised it was OpenCover they were discussing, and, in mostly favourable terms; this was the first place I had seen OpenCover being used and it wasn't even introduced by me. I secretly implemented a <a href="https://github.com/OpenCover/opencover/issues/133">feature</a> in response to their comments.</p> <p>Another highlight is seeing that at least two Visual Studio integrations involving OpenCover are currently in play, both of these have been started independently, and though I am currently partly involved with one of them it will be interesting to see how they both progress.</p> <p>I'd like to thank everyone who has contributed to OpenCover either through direct contribution, suggestions, free stuff (more please) and just using it. Here's to another 4+ interesting years and I wonder what will happen to OpenCover in that time - suggestions?</p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Microservices... Where to Start?]]></title><description><![CDATA[Micro-services are becoming a "thing" now and are probably de-facto when someone begins a new project and are thinking about hosting in the cloud but where do you start when you have a brown field project. Now I don't have any hot answers or amazing insights here all I can do is describe what my first "micro-service" was and how it came into being. 
Over time the application was getting more use and the number of servers involved started to increase; we were using auto-scaling and the number of ]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/microservices_where_to_start_/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d73c</guid><category><![CDATA[microservices]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Wed, 29 Oct 2014 08:34:00 GMT</pubDate><media:content url="https://res-1.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/entrepreneur-593378_1280.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-1.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/entrepreneur-593378_1280.jpg" alt="Microservices... Where to Start?"/><p>Micro-services are becoming a "thing" now and are probably de-facto when someone begins a new project and is thinking about hosting in the cloud, but where do you start when you have a brownfield project? Now I don't have any hot answers or amazing insights here; all I can do is describe what my first "micro-service" was and how it came into being.</p> <p>Over time the application was getting more use and the number of servers involved started to increase; we were using auto-scaling and the number of servers increased in line with usage but wavered between 8 and 24 instances. This quite rightly caused some consternation so we tinkered with the number-of-cores settings for each instance and the thresholds for triggers to scale up and down but nothing seemed to alter the number of total cores being used. We actually have a hefty bit of logging and we can control the output through logging levels so we decided to change the logging to try and get more diagnostic information and this is when things got interesting. As this is a production system, getting hold of this log information was initially problematic and slow, so we had already started forwarding all the messages to <a href="https://www.splunkstorm.com/">SplunkStorm</a> using the available API and all was well (for over a year) and we were very impressed with how we could use that information for ad-hoc queries. However when we changed the logging levels the servers started scaling and we started to get database errors; unusual ones involving SQL connection issues rather than SQL query errors. We quickly reverted the changes and decided to try and replicate the problem in our CI/SIT environments.</p> <p>What we realized was that it was our own logging that was causing our performance issues and, even more awkwardly, was also responsible for the SQL connection issues as the logging to SplunkStorm via its API was using up the available TCPIP connections; this was even more pronounced when we changed the logging level. What we needed to do was refactor our logging such that we could get all our data into SplunkStorm (and Splunk as we were also in the process of migrating to SplunkStorm's big brother) with minimum impact to the actual production systems. Thankfully our logging framework used NLog, which we had wrapped in another entity for mocking purposes, so what we decided to do was write a new NLog target that would instead log to a queue (service-bus) and then have another service read messages from that queue and forward them to Splunk and SplunkStorm, and thus our first micro-service was born.</p> <p>The new NLog target took the log messages and batch-pushed them to the queue; then a microservice was written that monitors the queue, pulls messages off in batches, and pushes them to Splunk and SplunkStorm, also in batches.
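</p>
<p>A minimal sketch of the idea is below; the class name and queue abstraction are invented for illustration (our real target batched messages and used our actual service-bus client), but <code>TargetWithLayout</code>, the <code>Target</code> attribute and the <code>Write</code> override are the standard NLog extension points.</p>
<pre><code class="language-csharp">using NLog;
using NLog.Targets;

// Hypothetical queue abstraction - swap in your service-bus client of choice.
public interface ILogQueueClient
{
    void Send(string message);
}

[Target("ServiceBusQueue")]
public class ServiceBusQueueTarget : TargetWithLayout
{
    public ILogQueueClient Queue { get; set; }

    protected override void Write(LogEventInfo logEvent)
    {
        // Render the message using the configured layout and hand it to the queue;
        // a separate service drains the queue and forwards to Splunk/SplunkStorm in batches.
        var message = Layout.Render(logEvent);
        Queue.Send(message);
    }
}
</code></pre>
<p>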
The initial feasibility spike took 1/2 a day with the final implementation being ready and pushed into production the following week. Because we were using .NET we could also take advantage of multiple threads so we used thread pools to limit the number of active Splunk/SplunkStorm messages being sent in parallel. What we found after deployment was that we could scale back our main application servers to 4 instances with only a pair of single core services dealing with the logging aspect; we also noticed that the auto-scaling never reaches its old thresholds and the instance count has been stable ever since. Another advantage is that the queue can now be used by other services to push messages to Splunk, and they can even use the same NLog target in their projects to deal with all the complexities.</p> <p>I hope the above shows that your first micro-service does not have to be something elaborate but can instead deal with a mundane but quite essential task, and the benefits can be quite astounding.</p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Excluding code from coverage...]]></title><description><![CDATA[This may (no guarantees) turn into a series of posts on how to refactor your code for testing using simple examples. This particular example came from a request to add an "Exclude Lines from Coverage" feature to OpenCover [https://github.com/OpenCover/opencover]. Now there are many ways this could be achieved, none of which I had any appetite for as they were either too clunky and/or could make OpenCover very slow. I am also not a big fan of excluding anything from code coverage; though OpenCov]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/excluding_code_from_coverage_/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d739</guid><category><![CDATA[open cover]]></category><category><![CDATA[tdd]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Sun, 12 Oct 2014 20:14:00 GMT</pubDate><media:content url="https://res-4.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/binary-code-507786_1280.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-4.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/binary-code-507786_1280.jpg" alt="Excluding code from coverage..."/><p>This may (no guarantees) turn into a series of posts on how to refactor your code for testing using simple examples.</p> <p>This particular example came from a request to add an "Exclude Lines from Coverage" feature to <a href="https://github.com/OpenCover/opencover">OpenCover</a>. Now there are many ways this could be achieved, none of which I had any appetite for as they were either too clunky and/or could make OpenCover very slow. 
<p>I hope the above shows that your first micro-service does not have to be something elaborate; it can instead deal with a mundane but quite essential task, and the benefits can be quite astounding.</p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Excluding code from coverage...]]></title><description><![CDATA[This may (no guarantees) turn into a series of posts on how to refactor your code for testing using simple examples. This particular example came from a request to add an "Exclude Lines from Coverage" feature to OpenCover [https://github.com/OpenCover/opencover]. Now there are many ways this could be achieved, none of which I had any appetite for as they were either too clunky and/or could make OpenCover very slow. I am also not a big fan of excluding anything from code coverage; though OpenCov]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/excluding_code_from_coverage_/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d739</guid><category><![CDATA[open cover]]></category><category><![CDATA[tdd]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Sun, 12 Oct 2014 20:14:00 GMT</pubDate><media:content url="https://res-4.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/binary-code-507786_1280.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-4.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/binary-code-507786_1280.jpg" alt="Excluding code from coverage..."/><p>This may (no guarantees) turn into a series of posts on how to refactor your code for testing using simple examples.</p> <p>This particular example came from a request to add an "Exclude Lines from Coverage" feature to <a href="https://github.com/OpenCover/opencover">OpenCover</a>. Now there are many ways this could be achieved, none of which I had any appetite for as they were either too clunky and/or could make OpenCover very slow. I am also not a big fan of excluding anything from code coverage; though OpenCover has several exclude options I just thought that this was one step too far in order to achieve that 100% coverage value, as it could too easily be abused. Even if I did think the feature was useful, it still might not get implemented by me for several days, weeks or months.</p> <p>But sometimes there are other ways to cover your code without a big refactoring and mocking exercise, which can act as a deterrent to doing the right thing.</p> <p>In this case the user was using EntityFramework and wanted to exclude the code in the catch handlers because they couldn't force EntityFramework to crash on demand - this is quite a common problem in my experience. The user also knew that one approach was to push all that EntityFramework stuff out to another class and could then test their exception handling via mocks but didn't have the time/appetite to go down that path and thus wanted to exclude that code.</p> <p>I imagined that the user had code that looked something like this:</p> <pre><code class="language-csharp">public void SaveCustomers(ILogger logger) { CustomersEntities ctx = CustomersEntities.Context; try { // awesome stuff with EntityFramework ctx.SaveChanges(); } catch(Exception ex) { // do some awesome logging logger.Write(ex); throw; } } </code></pre> <p>and I could see why it would be hard (but not impossible) to test the exception handling. Now, instead of extracting out all the interactions with EntityFramework so that it is possible to throw an exception during testing, I suggested the following refactoring:</p> <pre><code class="language-csharp">internal void CallWrapper(Action doSomething, ILogger logger) { try { doSomething(); } catch(Exception ex) { // do some awesome logging logger.Write(ex); throw; } } </code></pre> <p>which I would then use like this:</p> <pre><code class="language-csharp">public void SaveCustomers(ILogger logger) { CustomersEntities ctx = CustomersEntities.Context; CallWrapper(() => { // awesome stuff with EntityFramework ctx.SaveChanges(); }, logger); } </code></pre> <p>My original tests should still pass as before and I now have a new method that I can test independently.</p>
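<p>For example, the exception path of <code>CallWrapper</code> can now be exercised directly by passing in a delegate that throws. The following is a hypothetical NUnit test, not code from the original request: <code>CustomerRepository</code> and <code>FakeLogger</code> are made up for the illustration, and the test assembly would need to be able to see the internal method (e.g. via <code>InternalsVisibleTo</code>):</p> <pre><code class="language-csharp">[Test]
public void CallWrapper_Logs_And_Rethrows_When_The_Action_Throws()
{
    var logger = new FakeLogger();
    var sut = new CustomerRepository(); // hypothetical class that contains CallWrapper

    // The wrapper should rethrow the original exception...
    Assert.Throws&lt;InvalidOperationException&gt;(
        () => sut.CallWrapper(() => { throw new InvalidOperationException(); }, logger));

    // ...and the exception should have been logged exactly once on the way through.
    Assert.AreEqual(1, logger.Written.Count);
}

// Hand-rolled fake; assumes ILogger exposes Write(Exception) as in the snippets above.
private class FakeLogger : ILogger
{
    public readonly List&lt;Exception&gt; Written = new List&lt;Exception&gt;();
    public void Write(Exception ex) { Written.Add(ex); }
}
</code></pre> <p>The EntityFramework-specific code stays untested by design, but the logging and rethrow behaviour now has coverage of its own.</p>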
"As the [original] accepted answer has pointed out your actual scenario reduce]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/a_simple_tdd_example/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d73a</guid><category><![CDATA[open cover]]></category><category><![CDATA[tdd]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Mon, 06 Oct 2014 01:39:00 GMT</pubDate><media:content url="https://res-3.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/hex-675576_1280.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-3.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/hex-675576_1280.jpg" alt="A simple TDD example"/><p>I recently posted a response to <a href="http://stackoverflow.com/a/26152423/189163">StackOverflow wrt TDD and Coverage</a> and I thought it would be worth re-posting the response here. The example is simple but hopefully shows how writing the right tests using TDD gives you a better suite of tests for your code than you would probably write if you wrote the tests after the code (which may have been re-factored as you developed).</p> <p>"As the [original] accepted answer has pointed out your actual scenario reduces to collection.Sum() however you will not be able to get away with this every time.</p> <p>If we use TDD to develop this (overkill I agree but easy to explain) we would [possibly] do the following (I am also using <a href="http://www.nunit.org/">NUnit</a> in this example out of preference).</p> <pre><code class="language-csharp">[Test] public void Sum_Is_Zero_When_No_Entries() { var bomManager = new BomManager(); Assert.AreEqual(0, bomManager.MethodToTest(new Collection&lt;int&gt;())); } </code></pre> <p>and then write the following code (note: we write the minimum to meet the current set of tests)</p> <pre><code class="language-csharp">public int MethodToTest(Collection&lt;int&gt; collection) { var sum = 0; return sum; } </code></pre> <p>We would then write a new test e.g.</p> <pre><code class="language-csharp">[Test] [TestCase(new[] { 0 }, 0)] public void Sum_Is_Calculated_Correctly_When_Entries_Supplied(int[] data, int expected) { var bomManager = new BomManager(); Assert.AreEqual(expected, bomManager.MethodToTest(new Collection&lt;int&gt;(data))); } </code></pre> <p>If we ran our tests they would all pass (green) so we need a new test(cases)</p> <pre><code class="language-csharp">[TestCase(new[] { 1 }, 1)] [TestCase(new[] { 1, 2, 3 }, 6)] </code></pre> <p>In order to satisfy those tests I would need to modify my code e.g.</p> <pre><code class="language-csharp">public int MethodToTest(Collection&lt;int&gt; collection) { var sum = 0; foreach (var value in collection) { sum += value; } return sum; } </code></pre> <p>Now all my tests work and if I run that through <a href="http://www.nuget.org/packages/opencover">OpenCover</a> I get 100% sequence and branch coverage - Hurrah!.... And I did so without using coverage as my control but writing the right tests to support my code.</p> <p>BUT there is a 'possible' defect... what if I pass in null? 
Time for a new test to investigate</p> <pre><code class="language-csharp">[Test] public void Sum_Is_Zero_When_Null_Collection() { var bomManager = new BomManager(); Assert.AreEqual(0, bomManager.MethodToTest(null)); } </code></pre> <p>The test fails so we need to update our code e.g.</p> <pre><code class="language-csharp">public int MethodToTest(Collection&lt;int&gt; collection) { var sum = 0; if (collection != null) { foreach (var value in collection) { sum += value; } } return sum; } </code></pre> <p>Now we have tests that support our code rather than tests that test our code i.e. our tests do not care about how we went about writing our code.</p> <p>Now we have a good set of tests so we can now safely refactor our code e.g.</p> <pre><code class="language-csharp">public int MethodToTest(IEnumerable&lt;int&gt; collection) { return (collection ?? new int[0]).Sum(); } </code></pre> <p>And I did so without affecting any of the existing tests."</p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[The API Journey for AccountRight Live]]></title><description><![CDATA[My talk at ALT.NET on designing the MYOB API. "Building an API for your product isn’t just about choosing your technology and planning your scaling capabilities when you unleash it upon the world. For nearly 2 years MYOB have been developing an API for our AccountRight Live product and we would like to share with you our journey into making an API that is used by our own products, such as PayDirect, and one that our developer partners can also use productively - “Integrating with the MYOBapi ha]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/the-api-journey-for-accountright-live/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d75f</guid><category><![CDATA[api]]></category><category><![CDATA[cloud]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Tue, 29 Jul 2014 08:00:00 GMT</pubDate><media:content url="https://res-1.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/Apisssss.gif" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-1.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/Apisssss.gif" alt="The API Journey for AccountRight Live"/><p>My talk at ALT.NET on designing the MYOB API.</p> <p>"Building an API for your product isn’t just about choosing your technology and planning your scaling capabilities when you unleash it upon the world. For nearly 2 years MYOB have been developing an API for our AccountRight Live product and we would like to share with you our journey into making an API that is used by our own products, such as PayDirect, and one that our developer partners can also use productively - “Integrating with the MYOBapi has been so quick for us. 
For example [when] MYOBapi let us know when the new endpoints were available for payroll we were able to complete our full depth integration over the space of 16 hours."</p> <p>The full talk can be found <a href="https://www.youtube.com/watch?v=nxEcjFG0tl4-Y">here</a> on YouTube.</p> <p><a href="https://www.youtube.com/watch?v=nxEcjFG0tl4-Y" title="Play Now"><img src="https://res-4.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/YT-Capture.jpg" alt="The API Journey for AccountRight Live"/></a></p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Customsing New Relic installation during Azure deployments]]></title><description><![CDATA[For about a year we've been running New Relic to monitor our WebRoles running on the Azure platform. Installing has been quite simple by following the instructions initially found on the New Relic [https://docs.newrelic.com/docs/dotnet/] site and is now available via Nuget [http://www.nuget.org/packages/NewRelicWindowsAzure]; however two things about this process have been irking me. First, I wanted to be able to distinguish the CI and Production deployments in the New Relic portal by making th]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/customsing_new_relic_installation_during_azure_deployments/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d73d</guid><category><![CDATA[cloud]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Thu, 03 Apr 2014 05:09:00 GMT</pubDate><media:content url="https://res-5.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/construction-652292_1280.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-5.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/construction-652292_1280.jpg" alt="Customsing New Relic installation during Azure deployments"/><p>For about a year we've been running New Relic to monitor our WebRoles running on the Azure platform. Installing has been quite simple by following the instructions initially found on the <a href="https://docs.newrelic.com/docs/dotnet/">New Relic</a> site and is now available via <a href="http://www.nuget.org/packages/NewRelicWindowsAzure">Nuget</a>; however two things about this process have been irking me.</p> <p>First, I wanted to be able to distinguish the CI and Production deployments in the New Relic portal by making them have different names, but the name as it appears in the New relic portal is controlled through a setting in the web.config and cannot be controlled though the Azure portal.</p> <p>Second, I wanted to be able to control the licence key we used for CI (free licence, limited functionality) and Production (expensive licence, full functionality) deployments, however the key is embedded in the newrelic.cmd and is applied when the New Relic agent is installed; this is not easy to change during/post deployment.</p> <p>The initial solution to both these problems involved producing two packages, one for the CI environment(s) and one for the Production environment. Instead of the normal Debug and Release build outputs, a 3rd target, Production, was used and the web.config was modified during the build process using a <a href="http://msdn.microsoft.com/en-us/library/dd465318(v=vs.100).aspx">transform</a> that changed the name to what was wanted. The licence key issue was resolved by have two newrelic.cmd items in the project and then packaging the required one with the appropriate build. 
This was not ideal but it worked in a fashion however the ProdOps guys were keen on having control over the name and licence key used in production.</p> <h4 id="changingtheapplicationname">Changing the Application name</h4> <p>New Relic gets the Application name from a setting in the web.config and so what is necessary is to read a setting in the Azure configuration and update the web.config. There are many ways to resolve this issue but the approach we took was based on the solution to an identical issue raised on GitHub. <br> For completeness I will however reiterate the steps below:</br></p> <ol> <li>In the ServiceDefinition.csdef file add a setting to the <ConfigurationSettings/> section</li> </ol> <pre><code class="language-language-markup"><ConfigurationSettings> <Setting name="NewRelicApplicationName" /> </ConfigurationSettings> </code></pre> <ol start="2"> <li>In the ServiceConfiguration file for your environment add a setting that will be used to set the Application name in New Relic</li> </ol> <pre><code class="language-language-markup"><ConfigurationSettings> <Setting name="NewRelicApplicationName" value="MyApplication" /> </ConfigurationSettings> </code></pre> <ol start="3"> <li>In the WebRole.cs file for your application amend your code with the following</li> </ol> <pre><code class="language-language-csharp"> public class WebRole : RoleEntryPoint { public override bool OnStart() { ConfigureNewRelic(); return base.OnStart(); } private static void ConfigureNewRelic() { if (RoleEnvironment.IsAvailable && !RoleEnvironment.IsEmulated) { string appName; try { appName = RoleEnvironment.GetConfigurationSettingValue("NewRelicApplicationName"); } catch (RoleEnvironmentException) { /*nothing we can do so just return*/ return; } if (string.IsNullOrWhiteSpace(appName)) return; using (var server = new ServerManager()) { // get the site's web configuration const string siteNameFromServiceModel = "Web"; var siteName = string.Format("{0}_{1}", RoleEnvironment.CurrentRoleInstance.Id, siteNameFromServiceModel); var siteConfig = server.Sites[siteName].GetWebConfiguration(); // get the appSettings section var appSettings = siteConfig.GetSection("appSettings").GetCollection(); AddConfigElement(appSettings, "NewRelic.AppName", appName); server.CommitChanges(); } } } private static void AddConfigElement(ConfigurationElementCollection appSettings, string key, string value) { if (appSettings.Any(t => t.GetAttributeValue("key").ToString() == key)) { appSettings.Remove(appSettings.First(t => t.GetAttributeValue("key").ToString() == key)); } ConfigurationElement addElement = appSettings.CreateElement("add"); addElement["key"] = key; addElement["value"] = value; appSettings.Add(addElement); } } </code></pre> <p>And that should be it.</p> <h4 id="changingthenewreliclicencekey">Changing the New Relic licence key</h4> <p>The New Relic licence key is applied when the New Relic agent is installed on the host so what we is needed is to read the Azure configuration when the newrelic.bat is executed as part of the Startup tasks (defined in the ServiceDefinition.csdef) and apply it when the agent is installed. There does not appear to be way of changing the licence key if your agents have already been installed other than reducing the number of instances to 0 and then scaling back up (I suggest you use the staging slot for this). 
In the ServiceDefinition.csdef file add a setting to the <ConfigurationSettings> section</p> <pre><code class="language-language-markup"><ConfigurationSettings> <Setting name="NewRelicLicenceKey"></Setting> </ConfigurationSettings> </code></pre> <p>and add a new Environment variable to the newrelic.cmd startup task that will be set by the new configuration setting</p> <pre><code class="language-language-markup"><task commandline="newrelic.cmd" executioncontext="elevated" tasktype="simple"> <environment> <variable name="EMULATED"> <roleinstancevalue xpath="/RoleEnvironment/Deployment/@emulated"></roleinstancevalue> </variable> <variable name="NewRelicLicence"> <!-- http://msdn.microsoft.com/en-us/library/windowsazure/hh404006.aspx --> <roleinstancevalue xpath="/RoleEnvironment/CurrentInstance/ConfigurationSettings/ConfigurationSetting[@name=&apos;NewRelicLicenceKey&apos;]/@value"> </roleinstancevalue> </variable> <variable name="IsWorkerRole" value="false"></variable> </environment> </task> </code></pre> <p>In the ServiceConfiguration file for your environment add a setting that will be used to set the Application name in New Relic</p> <pre><code class="language-language-markup"><configurationsettings> <setting name="NewRelicLicenceKey" value="<ADD YOUR KEY HERE>"></setting> </configurationsettings> </code></pre> <p>Edit your newrelic.cmd to use the Environment variable</p> <pre><code class="language-language-bash">:: Update with your license key SET LICENSE_KEY=%NewRelicLicenceKey% </code></pre> <p>Now you should be able to control the New Relic licence key during your deployment.</p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Book Review - Building Mobile Applications Using Kendo UI Mobile and ASP.NET Web API]]></title><description><![CDATA[I've written a book review on 'Building Mobile Applications Using Kendo UI Mobile and ASP.NET Web API' and posted it up on CodeProject [http://www.codeproject.com/Articles/696464/Building-Mobile-Applications-Using-Kendo-UI-Mobile] . Summary I liked this book and I took a lot from it that I am now using to build that sample application using KendoUI [http://www.kendoui.com/]. If you want to learn about ASP.NET Web API then this book isn't for you and you'll learn a lot more from the ASP.NET Web A]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/book_review_building_mobile_applications_using_kendo_ui_mobile_and_asp_net_web_api/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d73e</guid><category><![CDATA[api]]></category><category><![CDATA[review]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Sat, 14 Dec 2013 05:46:00 GMT</pubDate><media:content url="https://res-4.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/apple-256261_1280.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-4.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/apple-256261_1280.jpg" alt="Book Review - Building Mobile Applications Using Kendo UI Mobile and ASP.NET Web API"/><p>I've written a book review on 'Building Mobile Applications Using Kendo UI Mobile and ASP.NET Web API' and posted it up on <a href="http://www.codeproject.com/Articles/696464/Building-Mobile-Applications-Using-Kendo-UI-Mobile">CodeProject</a>. <strong>Summary</strong> I liked this book and I took a lot from it that I am now using to build that sample application using <a href="http://www.kendoui.com/">KendoUI</a>. 
If you want to learn about ASP.NET Web API then this book isn't for you and you'll learn a lot more from the <a href="http://www.asp.net/web-api">ASP.NET Web API</a> site.</p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Getting code coverage from your .NET testing using OpenCover.]]></title><description><![CDATA[Introduction OpenCover is a free, open-sourced [https://github.com/sawilde/opencover], code coverage tool for .NET 2.0 and above running on the .NET platform. It supports sequence coverage, branch coverage and has a cover by test facility. Though OpenCover is command line only, a rich HTML UI of the results can be visualized using ReportGenerator [http://reportgenerator.codeplex.com/]. We will aim to demonstrate how you can use this utility to get visibility into your testing coverage. Backgro]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/getting-code-coverage-from-your-net-testing-using-opencover/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d75c</guid><category><![CDATA[codeproject]]></category><category><![CDATA[open cover]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Sat, 02 Nov 2013 13:00:00 GMT</pubDate><media:content url="https://res-1.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/lego-1.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h3 id="introduction">Introduction</h3> <img src="https://res-1.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/lego-1.jpg" alt="Getting code coverage from your .NET testing using OpenCover."/><p>OpenCover is a free, <a href="https://github.com/sawilde/opencover">open-sourced</a>, code coverage tool for .NET 2.0 and above running on the .NET platform. It supports sequence coverage, branch coverage and has a cover by test facility. Though OpenCover is command line only, a rich HTML UI of the results can be visualized using <a href="http://reportgenerator.codeplex.com/">ReportGenerator</a>.</p> <p>We will aim to demonstrate how you can use this utility to get visibility into your testing coverage.</p> <h3 id="background">Background</h3> <p>OpenCover is currently the only actively developed and maintained open-sourced tool of it's type for the .NET platform but it was not the first, some of the others being:</p> <p>NCover - Probably the most well-known commercial tool for .NET code coverage started life as an <a href="http://sourceforge.net/projects/ncover/?source=directory">open-source</a> project.</p> <p>Coverage.eye - Originated at Microsoft and was available on gotdotnet, the repository/sample has since gone and no full mirrors appear to exist (The wayback machine only has the <a href="http://web.archive.org/web/20080122001923/http://www.gotdotnet.com/Community/UserSamples/Details.aspx?SampleGuid=881a36c6-6f45-4485-a94e-060130687151">text</a>).</p> <p>PartCover - This appears to have been created in response to NCover going commercial and though actively used it has limitations e.g. 32-bit only. 
The original <a href="http://sourceforge.net/projects/partcover/">repository</a> is no longer being maintained by its original developers and is now being maintained on <a href="https://github.com/sawilde/partcover.net4">github</a>.</p> <h3 id="usingopencover">Using OpenCover</h3> <h5 id="preparingforopencover">Preparing for OpenCover</h5> <p>The following steps detail how to download OpenCover (and other supporting tools) from NuGet.</p> <p>OpenCover is available as a zip or msi download via its bitbucket <a href="https://bitbucket.org/shaunwilde/opencover/downloads">mirror</a> but for the sake of this article we will use the OpenCover nuget package.</p> <ol> <li>Create a new solution</li> <li>For that solution "Enable NuGet Package Restore"</li> <li>Using "Manage NuGet Packages for Solution" add the following packages</li> </ol> <ul> <li>OpenCover</li> <li>ReportGenerator</li> <li>NUnit.Runners</li> </ul> <p>Once completed you should have a solution with a .nuget folder which contains a packages.config that looks something like the following</p> <pre><code class="language-language-xml"><?xml version="1.0" encoding="utf-8"?> <packages> <package id="NUnit.Runners" version="2.6.3" /> <package id="OpenCover" version="4.5.1923" /> <package id="ReportGenerator" version="1.9.1.0" /> </packages> </code></pre> <p>Next</p> <ol> <li>Create a C# project (class library) called Sample</li> <li>Create a C# project (class library) called Sample.Test and reference the Sample project</li> <li>Use NuGet to add the following package</li> </ol> <ul> <li>NUnit</li> </ul> <p>Now to add a simple class and method and a test that will exercise that method.</p> <p>In the sample project add the following class</p> <pre><code class="language-language-csharp">public class Target { public static void DoSomething() { try { Console.WriteLine("I ran!"); } catch (Exception ex) { Console.WriteLine(ex.Message); } } } </code></pre> <p>In the Sample.Test project add the following class</p> <pre><code class="language-language-csharp">using NUnit.Framework; [TestFixture] public class TargetTest { [Test] public void DoSomethingTest() { Target.DoSomething(); } } </code></pre> <h4 id="runningtestswithopencover">Running tests with OpenCover</h4> <p>OpenCover does not directly execute your tests but instead needs to execute another application that executes your tests in this case we are using NUnit.</p> <p>First lets create a batch file that we can execute our tests from the command line</p> <pre><code>..\..\..\packages\NUnit.Runners.2.6.3\tools\nunit-console.exe sample.test.dll /noshadow </code></pre> <p>This batch file can be added to the Sample.Test project (with Copy to Output Directory set to Copy Always).</p> <p>Now we can use OpenCover to execute this batch file, again we can add this as a a batch file to our Sample.Test project so that we can execute it on the command line</p> <pre><code>..\..\..\packages\OpenCover.4.5.1923\OpenCover.Console.exe -target:runtests.bat -register:user -filter:+[Sample]* </code></pre> <p>The arguments are:</p> <p><code>-target:</code> - used to indicate the target process for OpenCover to execute.</p> <p><code>-register:user</code> - used to register the COM objects that OpenCover uses to instrument your assemblies.</p> <p><code>-filter:</code> - used to control which assemblies OpenCover will gather coverage data for. The filter is capable of including and excluding assemblies and classes and is actually the same filter format that PartCover uses. 
The filter is one of the more complex features of OpenCover and more detail is provided with the documentation that is installed alongside OpenCover or in the Usage wiki.</p> <p>When executed, OpenCover will produce an XML file (default results.xml) that contains all the data related to that test run. To visualize that data we can use ReportGenerator to produce some rich HTML output. ReportGenrator can also be run on the command line</p> <pre><code>..\..\..\packages\ReportGenerator.1.9.1.0\reportgenerator.exe -reports:results.xml -targetdir:coverage </code></pre> <p>If we open the produced coverage output (coverage\index.htm) we can see the visualization of the coverage of our target code.<br> <img src="https://res-2.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/Capture-cpver.png" alt="Getting code coverage from your .NET testing using OpenCover."><br> As you can see it is quite simple to understand what code your tests actually cover.</br></img></br></p> <p>For a real-world example OpenCover is actually used to gather coverage on it's own tests and the results can be seen on the OpenCover <a href="https://opencover.atlassian.net/builds/browse/OPC-DEF-40/artifact/JOB1/coverage/index.htm">build pipeline</a>.</p> <hr> <p>NOTE: This article was originally published on <a href="https://www.codeproject.com/Articles/677691/Getting-code-coverage-from-your-NET-testing-using">CodeProject</a> Nov 3rd 2013</p> <!--kg-card-end: markdown--></hr>]]></content:encoded></item><item><title><![CDATA[Application Tracing]]></title><description><![CDATA[So OpenCover [https://github.com/sawilde/opencover]is as feature complete as I care to take it at the moment, I may do this one feature involving Windows Store applications [https://github.com/sawilde/opencover/issues/144] should I have a need for it, and I decided to not continue with OpenMutate [/2012/02/mutation-testing-use-for-re-jit] as I can't really find a need for it other than an exploratory investigation into reJIT. I do have one more itch to scratch when it comes to profilers and tha]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/application_tracing/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d744</guid><category><![CDATA[open source]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Sat, 14 Sep 2013 22:21:00 GMT</pubDate><media:content url="https://res-3.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/tire-tracks-583208_1280.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-3.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/tire-tracks-583208_1280.jpg" alt="Application Tracing"/><p>So <a href="https://github.com/sawilde/opencover">OpenCover</a> is as feature complete as I care to take it at the moment, I may do this one feature involving <a href="https://github.com/sawilde/opencover/issues/144">Windows Store applications</a> should I have a need for it, and I decided to not continue with <a href="https://monkey-see-monkey-do-blog.herokuapp.com/2012/02/mutation-testing-use-for-re-jit">OpenMutate</a> as I can't really find a need for it other than an exploratory investigation into reJIT.</p> <p>I do have one more itch to scratch when it comes to profilers and that is application tracing and this may allow me to play with other technologies which I'll list later. 
This itch started a few months back, perhaps 6+ months ago, when I was trying to integrate some commercial tracing applications to an application I was working on and they both died horribly and I started to look for alternatives and found nothing available. Now I could have started the project then but I decided to pester both vendors until they fixed the problem which they eventually did (within a week or two of each other or so it seemed to me) and I integrated one of the solutions and moved on... but the itch never went away.</p> <p>So what am I thinking... well a profiler (obviously) with 32/64 support (again given) and making obscene abuse of the <a href="http://msdn.microsoft.com/en-us/library/ms231874.aspx">COR_PRF_MONITOR_ENTERLEAVE</a> functionality. The problem here is I don't really know what people will want to track (hey that is why there are companies that do this thing with BAs and such like to decide on this) so in the first instance I'll go at tracing everything (which will probably be very slow) and go from there.</p> <p>This leads to the next problem, data, lots of it, lots and lots of it, and that data is going to need a home but a home I can then use to create reports at some point. For this I am thinking asynchronous, initially a queue and potentially an event source like data store like <a href="http://geteventstore.com/">EventStore</a> or <a href="https://github.com/NEventStore/NEventStore">NEventStore</a>. The advantage of an event source would allow the ability to regenerate the views once we know what they are, perhaps something along the lines of <a href="http://www.splunk.com/">Splunk</a> or <a href="https://www.splunkstorm.com/">SplunkStorm</a> would come into play.</p> <p>So a name... always the hardest part but thankfully we have the internet and online dictionaries so I've gone with <a href="https://github.com/sawilde/opendiscover">OpenDiscover</a>.</p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Creating a simple NodeJS app with Mongo]]></title><description><![CDATA[Okay, I woke up this morning (6am) with a need to create a simple reporting dashboard to display the coverage results from OpenCover when it dog-foods its own tests. Now that OpenCover has no **_reported _**bugs, I decided to use my spare time to investigate other technologies for a while. What I needed was simple 'online' storage to capture results from the build system and the ability to extract that data into charts. Normally I'd probably knock up a simple rails app because it is easy to do,]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/creating_a_simple_node_js_app_with_mongo/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d740</guid><category><![CDATA[mongo db]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Sat, 09 Mar 2013 00:16:00 GMT</pubDate><media:content url="https://res-5.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/mongoose-721120_1280.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-5.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/mongoose-721120_1280.jpg" alt="Creating a simple NodeJS app with Mongo"/><p>Okay, I woke up this morning (6am) with a need to create a simple reporting dashboard to display the coverage results from OpenCover when it dog-foods its own tests. 
Now that OpenCover has no <strong><em>reported</em></strong> bugs, I decided to use my spare time to investigate other technologies for a while.</p> <p>What I needed was simple 'online' storage to capture results from the build system and the ability to extract that data into charts. Normally I'd probably knock up a simple rails app because it is easy to do; however, I decided, probably due to the heat, to use the following:</p> <ul> <li><a href="http://nodejs.org/">node.js</a> - a technology I haven't used but have meant to for a while; I also like the JavaScript syntax better than ruby (it's a personal thing)</li> <li><a href="http://www.mongodb.org/">mongodb</a> - a database I am familiar with</li> <li><a href="https://developers.google.com/chart/">google charts</a> - free; as in beer.</li> <li><a href="http://www.heroku.com/">heroku </a>- free; well my intended usage will be.</li> </ul> <p>After a quick time-boxed search of the web about how to use node with mongodb and create a RESTful API, I settled on the following packages:</p> <ul> <li><a href="http://mongoosejs.com/">mongoose</a> - for interacting with the mongo database</li> <li><a href="http://mcavage.github.com/node-restify/">restify</a> - for creating a simple rest server</li> <li><a href="https://github.com/remy/nodemon">nodemon</a> - monitors changes in your app and restarts; sounds useful</li> </ul> <p>I'll assume other packages will be added to the mix as challenges present themselves. It's now 7am and time for breakfast and then the fun starts... And a few hours later we have a simple storage system hosted on heroku; all we need now is the charts. The repository can be found on github. I am sure it will evolve over time but it was very simple to get to this stage by leveraging the work of all those who have gone before.</p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[MongoDB, Mongoid, MapReduce and Embedded Documents.]]></title><description><![CDATA[I am using Mongoid [http://mongoid.org/en/mongoid/index.html]to store some data as documents in a MongoDB [http://www.mongodb.org/]database and then run some MapReduce [http://en.wikipedia.org/wiki/MapReduce]queries against the data. Now I have no trouble with mapping data from normal documents and an embedded document but I could not extract data from an embedded collection of documents i.e. class Foo include Mongoid::Document #fields field :custom_id, :type => String #relations ]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/mongo_db_mongoid_map_reduce_and_embedded_documents_/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d73f</guid><category><![CDATA[mongo db]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Sat, 25 Aug 2012 03:39:00 GMT</pubDate><media:content url="https://res-1.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/mongolia-695267_1280.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-1.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/mongolia-695267_1280.jpg" alt="MongoDB, Mongoid, MapReduce and Embedded Documents."/><p>I am using <a href="http://mongoid.org/en/mongoid/index.html">Mongoid </a>to store some data as documents in a <a href="http://www.mongodb.org/">MongoDB </a>database and then run some <a href="http://en.wikipedia.org/wiki/MapReduce">MapReduce </a>queries against the data. 
Now I have no trouble with mapping data from normal documents and an embedded document but I could not extract data from an embedded collection of documents i.e.</p> <pre><code class="language-language-ruby">class Foo include Mongoid::Document #fields field :custom_id, :type =&gt; String #relations embeds_many :bars end </code></pre> <pre><code class="language-language-javascript">class Bar include Mongoid::Document #fields field :custom_field, :type =&gt; String #relations embedded_in :Foo end </code></pre> <p>First it looks like that we need to run the <strong>map</strong> part of the MapReduce against the parent document and not the child i.e. <strong>Foo.map_reduce(...)</strong> will work find documents but <strong>Bar.map_reduce(...)</strong> does not, however that is not surprising as it is also not possible to count all <strong>Bar</strong> documents by doing <strong>Bar.all.count</strong> in the rails console.</p> <p>Now a MapReduce query in MongoDB is done as a pair of JavaScript scripts, the first does the map by _emit_ting a mini-document of data and the second that aggregates the data in some manner. So thinking I had a collection (array) my first attempt to map data from the embedded document was this:</p> <p>MAP:</p> <pre><code class="language-language-javascript">function() { if (this.bars == null) return; for (var bar in this.bars){ emit(bar.custom_field, { count: 1 }); } } </code></pre> <p>REDUCE:</p> <pre><code class="language-language-javascript">function(key, values) { var total = 0; for ( var i=0; i&lt; values.length; i++ ) { total += values[i].count; } return { count: total }; } </code></pre> <p>This produced an unusual result such that there was only a single aggregated document with a null key and the count was the total number of child documents (summed across all the parents).</p> <p>Now I could have just broken the child document out and not embedded it but I didn't want to break the model over something so trivial that must, in my eyes, be possible.</p> <p>After much googling and reading of forum posts, I couldn't find any samples. I eventually observed of some 'unusual' syntax on an unrelated topic which led me to rewrite the <strong>map</strong> script into this:</p> <pre><code class="language-language-javascript">function() { if (this.bars== null) return; for (var bar in this.bars){ emit(this.bars[bar].custom_field, { count: 1 }); } } </code></pre> <p>Which produced the expected results. Okay this was probably obvious to anyone who knows MongoDB+MapReduce well but it took me a while to find out and it still isn't that intuitive, though I think I now know why it is this way, so I thought I'd write it up as a bit of a reference.</p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[The "Pigs" and "Chickens" fable]]></title><description><![CDATA[I think anyone who is anyone who has heard of Agile and Scrum have heard of the Pigs and Chickens story and how it describes those who are committed to the delivery of the project, as "Pigs", and those who are just involved, as "Chickens"; if not click on the image below and learn more about it. 
[https://en.wikipedia.org/wiki/The_Chicken_and_the_Pig]However I was just recently re-reading "Death March [https://www.amazon.com/Death-March-2nd-Edward-Yourdon/dp/013143635X]" by Edward Yourdon (1st E]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/the_pigs_and_chickens_fable/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d74a</guid><category><![CDATA[agile]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Thu, 07 Jun 2012 22:19:00 GMT</pubDate><media:content url="https://res-3.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/suckling-pig-95896_1280.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-3.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/suckling-pig-95896_1280.jpg" alt="The "Pigs" and "Chickens" fable"/><p>I think anyone who is anyone who has heard of Agile and Scrum have heard of the Pigs and Chickens story and how it describes those who are committed to the delivery of the project, as "Pigs", and those who are just involved, as "Chickens"; if not click on the image below and learn more about it.</p> <a class="nolinkunderline" href="https://en.wikipedia.org/wiki/The_Chicken_and_the_Pig" target="_blank"> <img src="https://res-5.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/pigs-n-chickens.jpg" alt="The "Pigs" and "Chickens" fable"/></a> <p>However I was just recently re-reading "<a href="https://www.amazon.com/Death-March-2nd-Edward-Yourdon/dp/013143635X">Death March</a>" by Edward Yourdon (1st Edition) and I came across this response to the parable, in the context of commitment whilst on a death march.</p> <blockquote> <p>“I’m not sure you will find any old pigs in development perhaps more chickens. I think that kind of commitment continues until (inevitably?) you get into the first death march project – then there is a rude awakening. Either the pig realises what’s happening, this is the slaughterhouse! RUN!! Or the pig is making bacon…”<br> <br/>- Paul Mason (Death March).</br></p> </blockquote> <p>I just found it quite amusing and thought I should share...</p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Mutation Testing; a use for re-JIT?]]></title><description><![CDATA[Where to start... Mutation testing is described [http://en.wikipedia.org/wiki/Mutation_testing] as modifying a program in small amounts and then executing the original 'passing' tests that exercise that code and then watching them fail. It is a way of making sure your tests are actually testing what you believe they are testing. Setting the stage... So how can we do this with .NET? 
Well first we need to know what tests execute what code and we can use OpenCover [https://github.com/sawilde/openc]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/mutation_testing_a_use_for_re_jit_/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d743</guid><category><![CDATA[open cover]]></category><category><![CDATA[open source]]></category><category><![CDATA[github]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Fri, 03 Feb 2012 02:56:00 GMT</pubDate><media:content url="https://res-3.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/826548-two-headed-snake.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-3.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/826548-two-headed-snake.jpg" alt="Mutation Testing; a use for re-JIT?"/><p><strong>Where to start...</strong><br> Mutation testing is <a href="http://en.wikipedia.org/wiki/Mutation_testing">described</a> as modifying a program in small amounts and then executing the original 'passing' tests that exercise that code and then watching them fail. It is a way of making sure your tests are actually testing what you believe they are testing.</br></p> <p><strong>Setting the stage...</strong><br> So how can we do this with .NET? Well first we need to know what tests execute what code and we can use <a href="https://github.com/sawilde/opencover">OpenCover</a> for that when it is using it's tracking by test feature. With that feature we can see which tests execute which sequence points and also see what branches were exercised, it is this later information we can take advantage of when creating a mutation testing utility.</br></p> <p><strong>New toys to play with...</strong><br> Now this mutation tester is going to be working at the IL level and as such we could use the JIT (Just-in-time) compilation feature that is used with <a href="https://github.com/sawilde/opencover">OpenCover</a> (and <a href="https://github.com/sawilde/partcover.net4">PartCover</a>). However that would mean a complicated instrumentation that we would then have to control which path we would want to exercise, or we could have simpler instrumentation but that would require the process under test (e.g. nunit, mstest, ...) to be stop and started each time to allow new code to be exercised. With .NET 4.5 (in preview at the time of writing) there is a re-JIT compilation feature that we could use instead and this would allow us to use simple instrumentation without needing to stop and start the process under test. There are a number of <a href="http://blogs.msdn.com/b/davbr/archive/2011/10/10/rejit-limitations-in-net-4-5.aspx">limitations</a> of re-JIT but after reviewing them (several times) I don't think any are actually show stoppers.</br></p> <p>However to make the Re-JIT useful we need a way of executing a test or tests repeatedly without having to restart the application under test and this isn't possible with nunit and mstest. 
However it should be possible to use the test runners from <a href="http://github.com/continuoustests/AutoTest.Net">AutoTest.Net</a> if we host them directly or in a separate process that can be communicated with.</p> <p><strong>A plan...</strong><br> So the flow will be something like this (I wonder how will this will stand up to the test of time) I haven't looked at the latest profiler API in-depth but documentation on <a href="http://msdn.microsoft.com/en-us/library/hh362351(v=vs.110).aspx">MSDN</a>) and <a href="http://blogs.msdn.com/b/davbr/archive/2011/10/12/rejit-a-how-to-guide.aspx">David Broman's</a> Blog seem to indicate this should be possible.</br></p> <ul> <li>Run OpenCover to produce an XML file with a list of what tests exercised what branches</li> <li>For each branch point =>All it needs is a name...All of this will be hosted on GitHub under OpenMutate. Let the games begin....</li> </ul> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Unusual coverage in VB.NET]]></title><description><![CDATA[Recently a user posted on StackOverflow [http://stackoverflow.com/questions/8926063/code-coverage-why-is-end-marker-red-end-if-end-try] on why he was seeing unusual coverage results in VB.NET with MSTEST and Visual Studio. The the question already had answers that helped the questioner but I decided to delve a little deeper and find out why the solution proposed worked. The issue was that in his code sample the End Trywas not being shown as covered even though he had exercised the Try and the C]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/unusual_coverage_in_vb_net/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d745</guid><category><![CDATA[open cover]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Fri, 20 Jan 2012 20:51:00 GMT</pubDate><media:content url="https://res-3.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/car-271921_1280.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-3.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/car-271921_1280.jpg" alt="Unusual coverage in VB.NET"/><p>Recently a user posted on <a href="http://stackoverflow.com/questions/8926063/code-coverage-why-is-end-marker-red-end-if-end-try">StackOverflow</a> on why he was seeing unusual coverage results in VB.NET with MSTEST and Visual Studio. 
The the question already had answers that helped the questioner but I decided to delve a little deeper and find out why the solution proposed worked.</p> <p>The issue was that in his code sample the <strong>End Try</strong> was not being shown as covered even though he had exercised the Try and the Catch parts of his code.</p> <p>First I broke his sample down into something simpler and I have highlighted the offending line.</p> <pre><code class="language-vbnet">Function Method() As String Try Return "" Catch ex As Exception Return "" End Try End Function </code></pre> <p>In debug we can extract the following sequence points (I am, obviously, using <a href="https://github.com/opencover/opencover">OpenCover</a> for this.)</p> <pre><code class="language-html"><SequencePoints> <SequencePoint offset="0" ordinal="0" uspid="261" vc="0" ec="32" el="7" sc="5" sl="7"/> <SequencePoint offset="1" ordinal="1" uspid="262" vc="0" ec="12" el="8" sc="9" sl="8"/> <SequencePoint offset="2" ordinal="2" uspid="263" vc="0" ec="22" el="9" sc="13" sl="9"/> <SequencePoint offset="19" ordinal="3" uspid="264" vc="0" ec="30" el="10" sc="9" sl="10"/> <SequencePoint offset="20" ordinal="4" uspid="265" vc="0" ec="22" el="11" sc="13" sl="11"/> <SequencePoint offset="40" ordinal="5" uspid="266" vc="0" ec="16" el="12" sc="9" sl="12"/> <SequencePoint offset="41" ordinal="6" uspid="267" vc="0" ec="17" el="13" sc="5" sl="13"/> </SequencePoints> </code></pre> <p>(where sl = start line, el = end line, sc = start column, ec = end column and offset = IL offset in decimal)</p> <p>However these only make sense when you look at the IL...</p> <pre><code class="language-bash,line-numbers">.method public static string Method () cil managed { // Method begins at RVA 0x272c // Code size 43 (0x2b) .maxstack 2 .locals init ( [0] string Method, [1] class [mscorlib]System.Exception ex ) IL_0000: nop IL_0001: nop .try { IL_0002: ldstr "" IL_0007: stloc.0 IL_0008: leave.s IL_0029 IL_000a: leave.s IL_0028 } // end .try catch [mscorlib]System.Exception { IL_000c: dup IL_000d: call void [Microsoft.VisualBasic]Microsoft.VisualBasic.CompilerServices.ProjectData::SetProjectError(class [mscorlib]System.Exception) IL_0012: stloc.1 IL_0013: nop IL_0014: ldstr "" IL_0019: stloc.0 IL_001a: call void [Microsoft.VisualBasic]Microsoft.VisualBasic.CompilerServices.ProjectData::ClearProjectError() IL_001f: leave.s IL_0029 IL_0021: call void [Microsoft.VisualBasic]Microsoft.VisualBasic.CompilerServices.ProjectData::ClearProjectError() IL_0026: leave.s IL_0028 } // end handler IL_0028: nop IL_0029: ldloc.0 IL_002a: ret } // end of method Module1::Method </code></pre> <p>Now as you can see the End Try line that is causing concern would only be marked as hit (assuming they are using similar instrumentation to OpenCover) if the code reached IL instruction at offset 40 (IL_0028) however when one looks at the IL produced it is not possible to see how you would ever reach that instruction due to the odd IL produced (<a href="http://en.wikipedia.org/wiki/List_of_CIL_instructions"><strong>leave.s</strong></a> is a small jump like instruction that is used to exit try/catch/finally blocks) and if you follow the code you see that you will always reach a <strong>leave.s</strong> that jumps to IL_0029 first.</p> <p>In release the IL changes to something more like what I was expecting beforehand and it has no unusual extra IL...</p> <pre><code class="language-bash,line-numbers">.method public static string Method () cil managed { // Method begins at RVA 0x2274 // Code size 
30 (0x1e) .maxstack 2 .locals init ( [0] string Method, [1] class [mscorlib]System.Exception ex ) .try { IL_0000: ldstr "" IL_0005: stloc.0 IL_0006: leave.s IL_001c } // end .try catch [mscorlib]System.Exception { IL_0008: dup IL_0009: call void [Microsoft.VisualBasic]Microsoft.VisualBasic.CompilerServices.ProjectData::SetProjectError(class [mscorlib]System.Exception) IL_000e: stloc.1 IL_000f: ldstr "" IL_0014: stloc.0 IL_0015: call void [Microsoft.VisualBasic]Microsoft.VisualBasic.CompilerServices.ProjectData::ClearProjectError() IL_001a: leave.s IL_001c } // end handler IL_001c: ldloc.0 IL_001d: ret } // end of method Module1::Method </code></pre> <p>but so do the sequence points...</p> <pre><code class="language-html"><SequencePoints> <SequencePoint offset="0" ordinal="0" uspid="33" vc="0" ec="22" el="9" sc="13" sl="9"/> <SequencePoint offset="15" ordinal="1" uspid="34" vc="0" ec="22" el="11" sc="13" sl="11"/> <SequencePoint offset="28" ordinal="2" uspid="35" vc="0" ec="17" el="13" sc="5" sl="13"/> </SequencePoints> </code></pre> <p>So now one will never see your try/catch lines marked covered, so this is not helpful.</p> <p>So lets try changing your code as suggested and go back to debug (because that is where you will be running coverage from usually.)</p> <pre><code class="language-vbnet">Function Method2() As String Dim x As String Try x = "" Catch ex As Exception x = "" End Try Return x End Function </code></pre> <p>Again we look at the sequence points...</p> <pre><code class="language-html"><SequencePoints> <SequencePoint offset="0" ordinal="0" uspid="268" vc="0" ec="33" el="15" sc="5" sl="15"/> <SequencePoint offset="1" ordinal="1" uspid="269" vc="0" ec="12" el="17" sc="9" sl="17"/> <SequencePoint offset="2" ordinal="2" uspid="270" vc="0" ec="19" el="18" sc="13" sl="18"/> <SequencePoint offset="17" ordinal="3" uspid="271" vc="0" ec="30" el="19" sc="9" sl="19"/> <SequencePoint offset="18" ordinal="4" uspid="272" vc="0" ec="19" el="20" sc="13" sl="20"/> <SequencePoint offset="31" ordinal="5" uspid="273" vc="0" ec="16" el="21" sc="9" sl="21"/> <SequencePoint offset="32" ordinal="6" uspid="274" vc="0" ec="17" el="22" sc="9" sl="22"/> <SequencePoint offset="36" ordinal="7" uspid="275" vc="0" ec="17" el="23" sc="5" sl="23"/> </SequencePoints> </code></pre> <p>and the IL...</p> <pre><code class="language-bash,line-numbers">.method public static string Method2 () cil managed { // Method begins at RVA 0x282c // Code size 38 (0x26) .maxstack 2 .locals init ( [0] string Method2, [1] string x, [2] class [mscorlib]System.Exception ex ) IL_0000: nop IL_0001: nop .try { IL_0002: ldstr "" IL_0007: stloc.1 IL_0008: leave.s IL_001f } // end .try catch [mscorlib]System.Exception { IL_000a: dup IL_000b: call void [Microsoft.VisualBasic]Microsoft.VisualBasic.CompilerServices.ProjectData::SetProjectError(class [mscorlib]System.Exception) IL_0010: stloc.2 IL_0011: nop IL_0012: ldstr "" IL_0017: stloc.1 IL_0018: call void [Microsoft.VisualBasic]Microsoft.VisualBasic.CompilerServices.ProjectData::ClearProjectError() IL_001d: leave.s IL_001f } // end handler IL_001f: nop IL_0020: ldloc.1 IL_0021: stloc.0 IL_0022: br.s IL_0024 IL_0024: ldloc.0 IL_0025: ret } // end of method Module1::Method2 </code></pre> <p>So for the <strong>End Try</strong> to be covered we need line 21 to be hit and that is offset 31 (IL_001F) and as it can be seen both <strong>leave.s</strong> instructions jump to that point so now that line will be marked as covered.</p> <!--kg-card-end: 
markdown-->]]></content:encoded></item><item><title><![CDATA[Adding OpenCover to TeamCity]]></title><description>< couldn't be easier however if you need help follow these simple steps. 1. Download [https://github.com/sawilde/opencover/downloads]and install OpenCover 2. Download [http://reportgenerator.codeplex.com/]and install ReportGenerator (actually unzip) 3. Register the OpenCover profiler DLLs using the regsvr32 utility regsvr32 /s x86\OpenCover.Profiler.dll regsvr32]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/adding_open_cover_to_team_city/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d746</guid><category><![CDATA[open cover]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Sun, 02 Oct 2011 06:29:00 GMT</pubDate><media:content url="https://res-1.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/navy-blue-angels-571102_1280.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-1.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/navy-blue-angels-571102_1280.jpg" alt="Adding OpenCover to TeamCity"/><p>Adding OpenCover to the latest version of <a href="http://www.jetbrains.com/teamcity/">TeamCity</a> (6.5) couldn't be easier however if you need help follow these simple steps.</p> <ol> <li> <p><a href="https://github.com/sawilde/opencover/downloads">Download</a> and install OpenCover</p> </li> <li> <p><a href="http://reportgenerator.codeplex.com/">Download</a> and install ReportGenerator (actually unzip)</p> </li> <li> <p>Register the OpenCover profiler DLLs using the regsvr32 utility</p> <p>regsvr32 /s x86\OpenCover.Profiler.dll<br> regsvr32 /s x64\OpenCover.Profiler.dll</br></p> </li> <li> <p>Using TeamCity add a new Build Step to your configuration</p> </li> <li> <p>Choose C<strong>ommand Line</strong> as the runner type then choose <strong>Custom Script</strong> for the Run option.</p> </li> <li> <p>Now all is needed is to set up the command to run the profiler against your tests e.g. for OpenCover the working directory is set to** main\bin\debug** and so we have</p> <p>"%env.ProgramFiles(x86)%\opencover\opencover.console.exe" "-target:......\tools\NUnit-2.5.10.11092\bin\net-2.0\nunit-console-x86.exe" -targetargs:"OpenCover.Test.dll /noshadow" -filter:"+[Open*]* -[OpenCover.T*]*" "-output:......\opencovertests.xml"</p> <p>"%env.ProgramFiles(x86)%\ReportGenerator\bin\ReportGenerator.exe" ......\opencovertests.xml ......\coverage</p> </li> <li> <p>Finally setup the artifacts so that you can view the results in TeamCity e.g.</p> <p>%teamcity.build.workingDir%\opencovertests.xml<br> %teamcity.build.workingDir%\coverage\**\*.*</br></p> </li> </ol> <p>And there you have it, OpenCover running under TeamCity and visual reports provided by ReportGenerator. I am sure you will find ways to improve upon this for your own builds.</p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[The problem with sequence coverage. (part 2)]]></title><description><![CDATA[Previously I mentioned why just relying on sequence coverage is not a good idea as it is possible to have 100% sequence coverage but not 100% code coverage. However I only described a scenario that used a branch that had 2 paths i.e. the most common form of the conditional branches, but there is one other member of the conditional branch family that exists in IL and that is the switch instruction; this instruction can have many paths. 
This time I am using the code from the Newtonsoft.Json [http:]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/the_problem_with_sequence_coverage_part_2_/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d747</guid><category><![CDATA[open cover]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Sat, 27 Aug 2011 13:21:00 GMT</pubDate><media:content url="https://res-3.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/puzzle-587821_1280.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-3.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/puzzle-587821_1280.jpg" alt="The problem with sequence coverage. (part 2)"/><p>Previously I mentioned why just relying on sequence coverage is not a good idea as it is possible to have 100% sequence coverage but not 100% code coverage. However I only described a scenario that used a branch that had 2 paths i.e. the most common form of the conditional branches, but there is one other member of the conditional branch family that exists in IL and that is the switch instruction; this instruction can have many paths. This time I am using the code from the <a href="http://json.codeplex.com/">Newtonsoft.Json</a> library because a) it has tests and b) it is very well covered at 83% sequence coverage, but only 72% (by my calculations) branch coverage. The subject of this investigation is BsonReader::ReadType(BsonType); this method has a very large switch statement, one that is actually compiled to a switch instruction in IL, with a default and several <a href="http://en.wikipedia.org/wiki/Switch_statement">fall-throughs</a>; a fall-through is where two or more case statements call the same code. The method itself has 98% sequence coverage and 82% branch coverage; the only code that is uncovered is the handler for the <strong>default:</strong> path.<br> <img src="https://res-4.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/seq_2_code.png" alt="The problem with sequence coverage. (part 2)"/></br></p> <p>This is not unexpected as it is a handler for an Enum which should not be set to any value that is not part of the allowed values. Looking at the branch coverage report we have the following results (the switch instruction we are interested in is at IL offset 8).</p> <p><img src="https://res-5.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/seq_2_output.png" alt="The problem with sequence coverage. (part 2)"><br> Now the first path (0) is unvisited, but we knew that, so the next unvisited branch is #14 and the next is #17; luckily for us the enum in question that is used by the switch instruction is well defined.<br> <img src="https://res-5.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/seq_2_enum.png" alt="The problem with sequence coverage. (part 2)"><br> We can thus deduce that the method is never called during testing with the values Symbol and TimeStamp, but the code that they would call is covered; in fact we can see from the code that both these enum values are part of the switch/case statement and are part of fall-throughs. 
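</p> <p>To make the idea of a fall-through a little more concrete, here is a minimal sketch of my own (made-up names, not the actual Newtonsoft.Json code) of a Select Case over an enum where some values share a handler with other values:</p> <pre><code class="language-vbnet">' A simplified illustration, not the real BsonReader/BsonType code.
Imports System

Module FallThroughSketch

    Enum BsonType
        [String]
        Symbol
        [Date]
        TimeStamp
    End Enum

    ' Symbol and TimeStamp share handlers with other values (the fall-throughs),
    ' so the code they would run is covered even if tests never pass them in.
    Function Describe(ByVal value As BsonType) As String
        Select Case value
            Case BsonType.[String], BsonType.Symbol
                Return "text"
            Case BsonType.[Date], BsonType.TimeStamp
                Return "ticks"
            Case Else
                ' only reachable with a value outside the enum, so rarely covered
                Throw New ArgumentOutOfRangeException("value")
        End Select
    End Function

End Module</code></pre> <p>If the tests only ever pass in [String] and [Date] then every line above is visited, yet the switch paths for Symbol and TimeStamp (and the default) are not; that is exactly the sort of gap the branch coverage report surfaces.</p> <p>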
So again we see how branch coverage helps identify 'potential' issues and test candidates.</br></img></br></br></img></p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[The problem with sequence coverage.]]></title><description><![CDATA[Sequence coverage is probably the simplest coverage metric, the information is packaged in PDB files and can be read using tools likeMono.Cecil [http://www.mono-project.com/Cecil], but just because a method has 100% sequence coverage does not mean you have 100% code coverage. I'll use an example from OpenCover's own dogfood tests to demonstrate what I mean. Here is a method which shows that it has 100% coverage (sequence point that is). However I see an issue and that is that on line 101 the]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/the_problem_with_sequence_coverage_/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d748</guid><category><![CDATA[open cover]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Thu, 25 Aug 2011 13:23:00 GMT</pubDate><media:content url="https://res-3.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/puzzle-587821_1280.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-3.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/puzzle-587821_1280.jpg" alt="The problem with sequence coverage."/><p>Sequence coverage is probably the simplest coverage metric; the information is packaged in PDB files and can be read using tools like <a href="http://www.mono-project.com/Cecil">Mono.Cecil</a>, but just because a method has 100% sequence coverage does not mean you have 100% code coverage.</p> <p>I'll use an example from OpenCover's own dogfood tests to demonstrate what I mean. Here is a method which shows that it has 100% coverage (sequence point that is).</p> <p><img src="https://res-2.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/seq_1_code.png" alt="The problem with sequence coverage."/></p> <p>However I see an issue: on line 101 there is a condition, i.e. a branch, and yet if the visit count is 1 then there is no possibility that both paths for that branch could have been tested. We can therefore infer that even if we had 10000 visits there is no guarantee that every path would be covered even in such a simple method.</p> <p>Looking at the OpenCover results from which the coverage report was generated, we get</p> <p><img src="https://res-3.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/seq_1_output.png" alt="The problem with sequence coverage."/></p> <p>We can see that each sequence point has been visited once, however the branch coverage shows that only one of the paths for the condition we have identified had been visited (in this case it is the true path), which is good as that is what we deduced. So if you are using code coverage tools do NOT just rely on sequence point coverage alone to determine how well covered your code is. Luckily OpenCover now, as of 25th Aug 2011, supports branch coverage and ReportGenerator 1.2 displays most of the information to help you identify possible coverage mismatches.</p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[OpenCover Performance Impact (part 2)]]></title><description><![CDATA[I think I now have a handle on why I was getting the results I earlier reported i.e. 
OpenCover [https://github.com/sawilde/opencover] and PartCover [https://github.com/sawilde/partcover.net4] were not some magical performance boosters that added Go Faster stripes to your code. After a heads up by leppie and his investigations of using OpenCover on his IronScheme project I realised that I needed to spend some time on optimizing how I get data from the profiler and aggregate it into the report. I]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/open_cover_performance_impact_part_2_/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d741</guid><category><![CDATA[open cover]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Wed, 10 Aug 2011 09:01:00 GMT</pubDate><media:content url="https://res-2.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/highway-828985_1280.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-2.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/highway-828985_1280.jpg" alt="OpenCover Performance Impact (part 2)"/><p>I think I now have a handle on why I was getting the results I earlier reported i.e. <a href="https://github.com/sawilde/opencover">OpenCover</a> and <a href="https://github.com/sawilde/partcover.net4">PartCover</a> were not some magical performance boosters that added Go Faster stripes to your code.</p> <p>After a heads up by leppie and his investigations of using OpenCover on his IronScheme project I realised that I needed to spend some time on optimizing how I get data from the profiler and aggregate it into the report. In case you are wondering, the IronScheme test that took just shy of 1 minute to run on my machine took over 60 mins when running under the profiler. Ouch!</p> <h5 id="theproblem">The problem</h5> <p>First of all I should explain what sort of data OpenCover gathers (and why) and then I can describe what I did to improve performance. OpenCover records each visit to a sequence point and stores these visits into shared memory; I did it this way as I am hoping to be able to use the order of visits for some form of path coverage analysis at a later date. After 8000 visits it informs the host process that there is a block ready for processing. The host takes this block, makes a copy, releases the shared memory back to the profiler and then processes the data. After processing the data the host then waits for the next message. It was this latter stage that was the bottleneck: the host was spending so much time aggregating the data that the profiler was already ready with the next 8000 points.</p> <h5 id="aninterimsolution">An (interim) solution</h5> <p>I say interim solution as I am not finished with performance improvements yet but decided that what I had implemented so far was okay for release.<br> First I looked at how the results were being aggregated and noticed that a lot of the time was being spent looking up the sequence points so that the visit count could be updated, so I switched this to a list and mapped the visit count data to the model at the end of the profiling run. This helped but only by bringing the profiling run down to ~40 mins.</br></p> <p>I realised that I just had to get the data out of the way quickly and process it at a later date, so I added a processing thread and a ConcurrentQueue. 
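</p> <p>The shape of that change is roughly the sketch below (my own illustration with made-up names, not the actual OpenCover host code): the callback that is handed a block from shared memory only queues the copy and returns, and a dedicated background thread drains the queue and does the slow aggregation.</p> <pre><code class="language-vbnet">' A rough sketch of the idea, not the actual OpenCover host code.
Imports System
Imports System.Collections.Concurrent
Imports System.Threading

Module VisitDataPump

    Private ReadOnly PendingBlocks As ConcurrentQueue(Of Byte()) = New ConcurrentQueue(Of Byte())()
    Private ReadOnly Done As New ManualResetEventSlim(False)

    ' Called when the profiler signals a block is ready: queue the copy and
    ' return immediately so the shared memory can be handed back.
    Public Sub OnBlockReady(ByVal block As Byte())
        PendingBlocks.Enqueue(block)
    End Sub

    ' Runs on a background thread and does the slow aggregation work
    ' without holding up the profiler.
    Public Sub ProcessBlocks()
        Dim block As Byte() = Nothing
        While Not (Done.IsSet AndAlso PendingBlocks.IsEmpty)
            If PendingBlocks.TryDequeue(block) Then
                AggregateIntoModel(block)
            Else
                Thread.Sleep(1)
            End If
        End While
    End Sub

    Private Sub AggregateIntoModel(ByVal block As Byte())
        ' the expensive lookup/update work lives here, off the hot path
    End Sub

    Public Sub Complete()
        Done.Set()
    End Sub

End Module</code></pre> <p>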
This was an interesting turn of events: the target process now finished in 4 mins but the host took nearly 40 mins to process the data, the memory usage went up to 2.5GB and there was a backlog of 40K messages. Hmmm....</p> <p>After some toying, whilst looking for inspiration, I noticed that the marshaling of the structure (2 integers) was where most of the time was spent. I switched this to using BitConverter, which also meant that I could avoid the memory pinning required by the marshaling. Now the target process still ran in just under 4 mins but the backlog very rarely reached 20 messages and memory usage stayed at a comfortable level (<100MB). I decided this was enough for now and released a version of the profiler.</p> <h4 id="butwhatabouttheearlierresults">But what about the earlier results?</h4> <p>Those earlier results though are still cause for thought. Why should the OpenCover dogfood tests be faster but the IronScheme test be so much slower? Well the IronScheme tests were doing a lot of loops and were running parts of the code many 1000's of times whereas the dogfood tests were unit tests and the code was only being run several times before moving onto the next test fixture and next section of code. I am now thinking that the issue is due to the optimization that is normally performed by the JIT compiler but is turned off by the profiler: when running the tests (without the profiler) the JIT compiler spends time optimizing the code, but that time is not recovered as the code is not run enough times to get a net gain, compared to when the JIT compiler just compiles the non-optimised, modified code that the profiler produces.</p> <p>So in conclusion you may see some speed improvements if running tests where your code is only visited a few times but if you are doing intensive execution of code then don't be surprised if the performance is degraded.</p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[OpenCover Performance Impact]]></title><description><![CDATA[So how does OpenCover's profiling impact your testing. The best way is to get some figures so that you can judge for yourself. I decided to use OpenCover's own tests and use the timing value produced by Nunit itself; just like I'd expect any user who is trying to determine impact I suppose. 
I've also added the results from PartCover for comparison. Before I took any numbers I (warmed) the code by running the code several times beforehand. Nunit32Nunit32 (OpenCover)Nunit32 (PartCover)Nunit64Nun]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/open_cover_performance_impact/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d749</guid><category><![CDATA[open cover]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Sun, 24 Jul 2011 08:59:00 GMT</pubDate><media:content url="https://res-2.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/highway-828985_1280.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-2.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/highway-828985_1280.jpg" alt="OpenCover Performance Impact"/><p>So how does OpenCover's profiling impact your testing? The best way is to get some figures so that you can judge for yourself.</p> <p>I decided to use OpenCover's own tests and use the timing value produced by Nunit itself; just like I'd expect any user who is trying to determine impact I suppose. I've also added the results from PartCover for comparison. Before I took any numbers I warmed up the code by running it several times beforehand.</p> <table><tbody><tr><td/><td>Nunit32</td><td>Nunit32 (OpenCover)</td><td>Nunit32 (PartCover)</td><td>Nunit64</td><td>Nunit64 (OpenCover)</td></tr><tr><td/><td>2.643</td><td>2.691</td><td>2.639</td><td>4.544</td><td>3.807</td></tr><tr><td/><td>2.629</td><td>2.69</td><td>2.611</td><td>4.426</td><td>3.753</td></tr><tr><td/><td>2.642</td><td>2.638</td><td>2.612</td><td>4.46</td><td>4.036</td></tr><tr><td>Average</td><td>2.638</td><td>2.673</td><td>2.621</td><td>4.477</td><td>3.865</td></tr></tbody></table> <p>I don't know how to interpret these results as they don't make much sense: OpenCover seemed to add on average 1.3% to the total time (which I'd expect), whereas PartCover appears to make the code go faster by 0.64%. I can't explain why the results for 64 bit seem to show that OpenCover improves performance by 13.6%.</p> <p>I tried to come up with a number of reasons for the above but the results I keep getting are reasonably consistent, so I decided to post them anyway and perhaps someone else will be able to tell me what is happening.</p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Questions about open source and liability in the workplace]]></title><description><![CDATA[Last weekend I attended DDDSydney and one of the most interesting sessions was a panel session about Microsoft and Opensource (Open Source & Microsoft Ecosystem); though as these things go, it went quickly off(ish) topic as expected by the panelists whom I'll refer to as the crazy drupal girl and the 3 stooges (honestly no offence folks, it was highly entertaining). However it got me thinking about the number of projects where I come have across an unusual bit of open source software that has s]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/questions_about_open_source_and_liability_in_the_workplace/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d742</guid><category><![CDATA[open source]]></category><category><![CDATA[community]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Sat, 09 Jul 2011 02:15:00 GMT</pubDate><media:content url="https://res-1.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/hammer-719066_1280.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://res-1.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/hammer-719066_1280.jpg" alt="Questions about open source and liability in the workplace"/><p>Last weekend I attended DDDSydney and one of the most interesting sessions was a panel session about Microsoft and Opensource (Open Source & Microsoft Ecosystem); though as these things go, it went quickly off(ish) topic as expected by the panelists whom I'll refer to as the crazy drupal girl and the 3 stooges (honestly no offence folks, it was highly entertaining).</p> <p>However it got me thinking about the number of projects where I have come across an unusual bit of open source software that has some use (but has not found a niche or has since been surpassed) and I find that this was introduced by a developer as it was their pet open source project. Now the first question is "what is the liability under this scenario?"</p> <p>Did the developer ask first, as they should before using any open source software on a project? 
If so then the company accepted the situation, but what happens if they did not ask (or the company was not made aware): is the company still liable or is the developer? I assume it would be the company, as they should have some sort of oversight, but for small overworked teams, where process may not be as strong, this may get overlooked.</p> <p>The other issue is what happens if you introduce your pet open source software project and then you leave: who supports it? How do you separate the open source project needs and the day-job, when they are so intermingled? Does the remaining team support it, and do they have the skills? What happens if the parting was acrimonious in nature? If they, the team, then raised a legitimate issue would you fix it, or leave them to stew?</p> <p>I don't have answers to the above (I did title this "Questions about..."); the answer that can be applied universally to most of them, I suppose, is "it depends". Each situation will be different I suspect, but I think these types of questions should be asked by any company hoping to use open source software and by developers wishing to introduce it, whether they are contributors or not.</p> <p>Personally I have decided to NOT introduce the open source software I develop into my workplace; yes they could use it and find it useful but they can also afford commercial alternatives. If someone else suggested it, I'd have to make sure there was an agreement that, should an issue arise that affects them and they want it fixed quickly, then I may have to use 'work' time, i.e. no guarantees that it would be done that evening or even that week; after all it is supposed to be fun and not stressful.</p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[How do we get Users out of [open source] Welfare?]]></title><description><![CDATA[How do we get Users out of [open source] Welfare? Okay an odd title but something I've been thinking about for some time and I suppose is the source of much frustration I have been having whilst maintaining PartCover [https://github.com/sawilde/partcover.net4]; I am hoping to reverse the situation with OpenCover [https://github.com/sawilde/opencover]. 
Categorizing open source users First I'd like to explain that I like to roughly categorize people involved in open source like thus: Contribut]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/how_do_we_get_users_out_of_open_source_welfare_/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d74c</guid><category><![CDATA[open cover]]></category><category><![CDATA[open source]]></category><category><![CDATA[community]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Sat, 25 Jun 2011 07:45:00 GMT</pubDate><media:content url="https://res-2.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/coins-361488_1280.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h3 id="howdowegetusersoutofopensourcewelfare">How do we get Users out of [open source] Welfare?</h3> <img src="https://res-2.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/coins-361488_1280.jpg" alt="How do we get Users out of [open source] Welfare?"/><p>Okay an odd title but something I've been thinking about for some time and I suppose is the source of much frustration I have been having whilst maintaining <a href="https://github.com/sawilde/partcover.net4">PartCover</a>; I am hoping to reverse the situation with <a href="https://github.com/sawilde/opencover">OpenCover</a>.</p> <p><strong>Categorizing open source users</strong></p> <p>First I'd like to explain that I like to roughly categorize people involved in open source like thus:</p> <p><em>Contributors</em> - these are the guys and gals at the pit-face, developing software, writing documentation and generally striving to make an open source product better.</p> <p><em>Investors</em> - these individuals use open source software and help to make the product better via feedback and raising, and following up, issues (probably as it is in their interest to do so).</p> <p><em>Benefactors</em> - usually companies that give tools to open source developers or sponsor a project in other ways i.e. free licenses or free hosting e.g. NDepend, JetBrains and GitHub.</p> <p><em>Angels</em> - these people provide invaluable advice in just managing a open source project and may not be actively involved in the development itself but just keep you sane.</p> <p><em>Community</em> - Our main user base, users of open source but don't actively contribute back and hence why sometimes I refer to them as Welfare. Maybe in the case of the this group it is just a failure to engage, the product just works and they have no need to be involved outside of viewing forums and stackoverflow. But I feel that without the involvement of this group a lot of open source software, no matter how good, can fall by the wayside.</p> <p>But how do we get them involved? Well first we have to find them, in my case with PartCover as the project had been abandoned the users stopped raising issues on the SourceForge forums and tended to ask questions on other outlets such as StackOverflow, SharpDevelop or Gallio forums and mailing lists.</p> <p><strong>Finding the users</strong></p> <p>I scoured the internet and compiled a list of popular places that PartCover was mentioned or supported. 
I was surprised to find that PartCover was used or supported by SharpDevelop, TeamCity and TypeMock amongst others (and yet again I am surprised it was abandoned and not adopted by anyone sooner).</p> <p>StackOverflow seems to be the main place where people ask questions and to keep track of questions I have subscribed to an RSS feed for the partcover tag; and as soon as an opencover tag becomes available, or I get enough rep to create it, I'll subscribe to that.</p> <p>Twitter is also quite a common medium nowadays so I have also set up the following search filter "opencover OR partcover -rt -via" to see if anyone mentions either of the projects.</p> <p><strong>Engaging the users</strong></p> <p>Now that I have found the users, or the majority of them, I started notifying these lists, forums and projects that PartCover was alive again (and I have started to do the same to inform them about OpenCover). Hopefully bringing them back or at least notifying them that if they have really big issues there is somewhere to go.</p> <p><strong>Involving the Community users</strong></p> <p>This is the big ask and I don't have an answer. If the product works then they don't need to talk to the forums or declare their appreciation of a job well done. I think sites like ohloh are trying to redress the balance. Some OS projects have a donate button, but I am not sure we are doing open source for money; though some projects do eventually go commercial, anyone else can pick up the original code and develop it. Maybe the users don't know how to be involved; in the case of my OS projects they are quite specialised and the learning curve may be too much for some. But I don't think you have to just be involved in projects you use a lot.</p> <p><strong>Possible ways to get involved</strong></p> <p>If you are good at graphics why not offer to knock up some graphics for use on web-sites and in the application? [I am quite lucky that Daniel Palme added support for PartCover and OpenCover to his Report Generator tool and has done a much better job than I would ever do.]</p> <p>If you are good at installers, or even if you want to learn more about them, offer to manage them on behalf of the project. If there is a project you like, support them on forums like StackOverflow and help other users.</p> <p>Perhaps update the wikis and forums; sometimes the users know how a product works, or how it can be used, better than the developers.</p> <p>If your company uses a lot of open source, why not buy some licenses for useful software tools and donate them? Geeks love shiny new toys; quite a few vendors such as NDepend will donate licenses to open source projects.</p> <p>If you have an issue, try to help the developers as much as possible to resolve it by supplying as much information as you can and repeatable samples; remember the developers are international and doing this in their own time (as you probably know, trying to repeat a scenario from scant information is very frustrating). Maintain contact whilst it is being resolved and let them know when it is.</p> <p>Okay that's me done on the subject for now, suggestions anyone?</p> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[OpenCover First Beta Release]]></title><description><![CDATA[OpenCover First Beta Release Okay, the first post on a blog I created many, many months ago and still not got round to starting. Why the delay? Well, just been busy and not a lot to say; actually some would say I have too much to say it's just not publishable. 
But now I am happy to announce that the first release of OpenCover is now available on GitHub [https://github.com/sawilde/opencover/downloads]. "So what?" I hear you say, "we have NCover, dotCover and PartCover [and probably many others ]]></description><link>https://monkey-see-monkey-do-blog.herokuapp.com/open_cover_first_beta_release/</link><guid isPermaLink="false">Ghost__Post__5e0ff8a37e474c001ec3d74b</guid><category><![CDATA[open cover]]></category><category><![CDATA[open source]]></category><category><![CDATA[github]]></category><dc:creator><![CDATA[Shaun Wilde]]></dc:creator><pubDate>Sat, 18 Jun 2011 05:37:00 GMT</pubDate><media:content url="https://res-2.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/microphone-233717_1280.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h3 id="opencoverfirstbetarelease">OpenCover First Beta Release</h3> <img src="https://res-2.cloudinary.com/hxqubdrzs/image/upload/q_auto/v1/ghost-blog-images/microphone-233717_1280.jpg" alt="OpenCover First Beta Release"/><p>Okay, the first post on a blog I created many, many months ago and still not got round to starting. Why the delay? Well, just been busy and not a lot to say; actually some would say I have too much to say it's just not publishable.</p> <p>But now I am happy to announce that the first release of OpenCover is now available on <a href="https://github.com/sawilde/opencover/downloads">GitHub</a>.</p> <p>"So what?" I hear you say, "we have NCover, dotCover and PartCover [and probably many others with the word cover in the name,] do we need another code coverage tool?" Well, I think the answer is "Yes!" but before I say why a brief history.</p> <p>About a year ago I adopted PartCover when I found it lost and abandoned and only supporting .NET2; also PartCover has a large user base, SharpDevelop, Gallio, to name but two, and I felt it was a shame to just let it fall by the wayside. I had also done some work on CoverageEye (another OpenSource tool that was originally hosted on GotDotNet and has since vanished) whilst working for a client in the UK, so I felt I had a fighting chance to do the upgrade to .NET4; I don't know if my changes ever got uploaded to GotDotNet as I was not in charge of that.</p> <p>The adoption was far from easy for a number of reasons, one of which was I was surprised just how little C++ I could actually remember and it's changed a bit since I last used it in anger. Also due to lack of communication with the original developers meant that I was on my own in working out a) how it worked and b) just what the issues are (a lot of the reported issues had long since been abandoned by the reporters).</p> <p>At the beginning of the adoption I cloned the SourceForge repository to GitHub, git being the in-thing at the time, and after I was eventually admitted access to SourceForge I attempted to maintain both repositories. Due to the lack of permissions on SourceForge, no matter how many times I asked, I eventually abandoned SourceForge and kept all development to GitHub; I also updated the SourceForge repository with a lot of ReadMe posts to point to GitHub.</p> <p>So upgrading PartCover progressed and thankfully bloggers such as <a href="http://blogs.msdn.com/b/davbr/">David Broman</a> had already covered the subject matter about upgrading .NET2 profilers to .NET4 and things to look out for. 
That, it would turn out, was the easy bit.</p> <p>PartCover had 3 main issues (other than lack of .NET4 support):</p> <ol> <li>Memory usage</li> <li>64 bit support</li> <li>If the target crashed then you got no results.</li> </ol> <p>I'll tackle each of these in turn:</p> <ol> <li> <p>Memory - PartCover builds a model of each assembly/method/instrumented point in memory; though I managed to cut down memory usage by moving some of the data gathering to the profiler host, it wasn't enough - PartCover also added 10 IL instructions (23 bytes) for each sequence point identified + 4 bytes allocated memory for the counter.</p> </li> <li> <p>64 bit support - PartCover used a complex COM + Named Pipe RPC, which thankfully just worked but I couldn't work out how to upgrade it to 64 bit (a few other helpers have offered and then gone incommunicado; I can only assume the pain was too much).</p> </li> <li> <p>Crashing == no results - this was due to the profiler being shut down unexpectedly and the runtime not calling the <a href="http://msdn.microsoft.com/en-us/library/ms230217.aspx">::Shutdown</a> method, and as such all that data not being streamed to the host process; thankfully people were quite happy to fix crashing code so not a major issue but still an annoyance.</p> </li> </ol> <p>All of this would take major rework of substantial portions of the code and the thought was unbearable. I took a few stabs at bits and pieces but got nowhere.</p> <p>Thankfully I had received some good advice and, though I tried to apply it to PartCover, I realised the only way was to start again, taking what I had learned from the guys who wrote PartCover and some ideas I had come across from looking at other open source tools such as CoverageEye and Mono.Cecil.</p> <p><strong>OpenCover was born.</strong></p> <p>This time I created a simple COM object supporting the interfaces and then made sure I could compile it in both 32 and 64 bit from day one.</p> <p>I then decided to make the profiler as simple as possible, so that it is maintainable, and to move as much of the model handling as possible to the profiler host; thank heavens for <a href="https://github.com/jbevain/cecil">Mono.Cecil</a>. The only complex thing was deconstructing the IL and reassembling it after it had been instrumented. OpenCover only inserts 3 IL instructions (9/13 bytes depending on 32/64 bit) per instrumented point; it forces a call into the profiler assembly itself and this C++ code then records the 'hit'.</p> <p>Finally I decided I had to get the data out of the profiler and into the host as soon as possible. I toyed with WCF and WWSAPI but this also meant I had no XP support, but at least I could test other ideas. However if my target/profiler crashed I would lose the last packet of data; not drastic but not ideal. Eventually I bit the bullet and switched to using shared memory.</p> <p>The switch to shared memory has brought a number of benefits, one of which is the ability to handle a number of processes under the same profiling session, both 64 and 32 bit, and to aggregate the results as they all use the same shared memory. I have yet to work out how to set this up via configuration files but anyone wishing to experiment can do so by modifying the call to ProfilerManager::RunProcess in the OpenCover.Host::Program.</p> <p>So this is where we are now, OpenCover has been released (beta obviously) and as of the time of writing some people have actually downloaded it. 
I am now braced for the issues to come flooding/trickling in.</p> <p>Feel free to download and comment, raise issues on GitHub, get involved; Daniel Palme, he of <a href="http://reportgenerator.codeplex.com/">Report Generator fame</a>, is hopefully going to upgrade his tool to include OpenCover.</p> <!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>