<h1>State-Of-The-Art Natural Language Processing in .NET on the Edge</h1>
<p>Ian Bebbington, 2021-05-06</p>
<h2 id="tldr">TL;DR</h2>
<p>In this post I show how .NET can be used to run state-of-the-art Natural Language Processing (NLP) models on "the edge". I provide a simple means for downloading and converting 'transformer' models from <a href="https://huggingface.co/">HuggingFace</a> into models that can perform inference from managed .NET code on resource constrained devices. Finally I use <a href="https://platform.uno/">Uno Platform</a> to implement a cross-platform user-interface that allows real-time inference using these models.</p>
<h2 id="bitizen">Bitizen</h2>
<p>At <a href="https://www.bitizen.uk/">Bitizen</a> we are working to revitalize democracy by promoting understanding of - and engagement with - politics in the UK. As a first step towards this goal, we have built a platform which is able to ingest hundreds of forms of data from across the political landscape and present this data to users as meaningful information. Much of this data is unstructured text so we use state-of-the-art machine learning models to help us analyse, categorise and summarise the data in a manner which facilitates downstream processing (i.e. cataloging, searching, presentation, etc).</p>
<p>Given that we are a .NET shop and that most research around ML and AI takes place using either R or Python, we usually deploy models by containerizing them in their native environment accompanied by an HTTP API. This allows us to call the model from .NET and works beautifully in our containerized, event-driven architecture.</p>
<p>However, as we move towards promoting engagement, we wanted our smartphone app to be... well... smart. For example, while users were interacting with the app (i.e. contributing to a discussion, searching for additional information, etc) we wanted to be able to perform inferences similar to those we run on the backend on the device itself. Privacy and latency considerations meant calling a hosted endpoint wasn't really a great solution so we started looking round for alternatives.</p>
<p>This is what we came up with...</p>
<h3 id="a-quick-call-to-arms">A quick call-to-arms</h3>
<p>Bitizen is currently looking for a web-developer and/or designer to help improve our online presence and bring some of our app smarts to the web. If you have an interest in UK politics and like the idea of working with an intrepid, young, bootstrapped start-up, please do <a href="mailto:ian@bitizen.uk">drop us a line</a> as we'd love to hear from you.</p>
<h2 id="ml.net-vs-nlp">ML.NET vs NLP</h2>
<p>Microsoft has a fairly strong ML offering for .NET developers in <a href="https://dotnet.microsoft.com/apps/machinelearning-ai/ml-dotnet">ML.NET</a>. Indeed, I illustrated ML.NET's capabilities in a blog post last year titled <a href="https://ian.bebbs.co.uk/posts/MLinUWP">'State-of-the-art ML in UWP'</a> which used a recent (at the time) ML model to perform salient object detection and image segmentation; a process very much suited to ML.NET's strengths. Unfortunately the story around using ML.NET for NLP wasn't so strong and there were very few examples of how to use modern, <a href="https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)">'transformer'</a> based models from within ML.NET.</p>
<p>Until, that is, <a href="https://github.com/GerjanVlot">Gerjan Vlot</a> published his <a href="https://github.com/GerjanVlot/BERT-ML.NET">BERT-ML.NET repository on GitHub</a>. In an incredibly concise and simple implementation, he illustrated how a <a href="https://en.wikipedia.org/wiki/BERT_(language_model)">BERT based</a> ONNX model could be used within ML.NET to perform 'Question Answering' (aka machine comprehension) based inference. This was fantastic and exactly what we had been looking for... except... we didn't want to perform (just) 'Question Answering' based inference. BERT - and related transformers - can be used for a broad variety of tasks including (but certainly not limited to) sentiment analysis, text classification and named entity recognition.</p>
<p>Given that the other prepared models available in the <a href="https://github.com/onnx/models">ONNX Model Zoo</a> - from which Gerjan sourced his model - seemed fairly limited, we decided to go model hunting...</p>
<h2 id="hugging-face">Hugging Face</h2>
<p>If you've not been to <a href="https://huggingface.co/">Hugging Face</a> before, I would certainly recommend checking it out. Through the provision of excellent tooling and the formation of a vibrant, open community of users, Hugging Face have established themselves as the de-facto source for NLP models. On a single site you can explore, test and download models (with accompanying parameters and code) from a huge variety of sources (including Microsoft, Google and Elastic), pretrained (but with <a href="https://huggingface.co/transformers/training.html">fine-tuning recommended</a>) for a huge variety of use cases.</p>
<p>I decided that I wanted to initially try something that would give me quantifiable results (i.e. something more than just a probability) and knew that I wanted to try to run a model on an 'edge' (i.e. resource constrained) device. This meant finding an alternative to the BERT based models which are typically in excess of 400Mb.</p>
<p>Fortunately Hugging Face had me covered and, in short order, I had decided to use a <a href="https://huggingface.co/distilbert-base-uncased">DistilBERT</a> based model trained for Token Classification (aka Named Entity Recognition). After quickly experimenting with a few, I found <a href="https://huggingface.co/elastic/distilbert-base-cased-finetuned-conll03-english">a model by Elastic</a> that provided pretty good results and, at less than half the size of a comparable BERT model, seemed like it might be usable on an edge device.</p>
<h2 id="open-neural-network-exchange">Open Neural Network Exchange</h2>
<p>However, Hugging Face provides models for ease of consumption from its own toolkit which usually means they're made available in either PyTorch or Tensorflow based formats. ML.NET, on the other hand, is only able to load models in the Open Neural Network Exchange (ONNX) format. This meant I needed to convert the models before I could use them.</p>
<p>Yet again, Hugging Face came to the rescue through the provision of <a href="https://huggingface.co/transformers/serialization.html">an API</a> which allows export of their models to ONNX. Knowing I would likely want to use multiple models in this manner (and not wanting to install various versions of Python on my workstation) I decided to build a docker container which would run the conversion and save the converted ONNX model to a mapped location. This proved to be shockingly easy with the image built using just a single Dockerfile containing:</p>
<pre><code>FROM python:latest
RUN pip install tensorflow
RUN pip install torch
RUN pip install transformers
RUN pip install keras2onnx
RUN pip install onnxruntime
ENTRYPOINT [ "python", "/usr/local/lib/python3.9/site-packages/transformers/convert_graph_to_onnx.py" ]
</code></pre>
<p>This could then be run from PowerShell like this:</p>
<pre><code>docker run --rm -v ${PWD}/Output:/Output ibebbs/huggingfacetoonnx:latest --framework pt --opset 12 --pipeline ner --model elastic/distilbert-base-cased-finetuned-conll03-english /Output/elastic/distilbert-base-cased-finetuned-conll03-english.onnx
</code></pre>
<p>Whereupon the script will download the specified model (in this case 'elastic/distilbert-base-cased-finetuned-conll03-english') using the specified framework ('pt' for PyTorch, 'tf' for Tensorflow), convert it to ONNX (using opset 12) including layers for the specific pipeline (in this case 'ner' for named entity recognition) and finally write the converted model to the 'Output/elastic' subdirectory of the current folder.</p>
<p>Should you wish to use this docker image, it is available - accompanied by full usage instructions - on <a href="https://hub.docker.com/r/ibebbs/huggingfacetoonnx">Docker Hub</a> including a link to the <a href="https://github.com/ibebbs/HuggingFaceToOnnx">source repository</a>.</p>
<h2 id="netron">Netron</h2>
<p>After downloading and converting the model, we need to examine it to determine the shape of the input and output layers. This is very easily done with <a href="https://netron.app/">Netron</a>.</p>
<p>Shown below is (a small section of) the DistilBERT model. By clicking on the 'input_ids' node a side-pane is shown which includes all the information we need.</p>
<p><a data-fancybox="Netron" href="/Content/Unoonnx/Netron - Full.png"><img src="/Content/Unoonnx/Netron - Full.png" class="img-responsive" style="margin: auto; max-width:80%; margin-top: 6px; margin-bottom: 6px;" alt="Netron"/></a></p>
<p>As can be seen, Netron shows us the two inputs to the model: <code>input_ids</code> and <code>attention_mask</code>, both of which are two-dimensional arrays of <code>Int64</code> values. It also shows us the output from the model: <code>output_0</code>, a three-dimensional array of <code>float</code>.</p>
<h3 id="model-input">Model Input</h3>
<p>When using this model for inference, the <code>attention_mask</code> input is simply filled with 1s (each token receives equal attention) so we will not discuss this input any further. Equally, we will not be using multiple batches in this project, so the <code>batch</code> dimension can be ignored, leaving us with a single <code>sequence</code> dimension of values to fill for <code>input_ids</code>.</p>
<p>In this model, the size of the <code>sequence</code> dimension is not specified, illustrating that this model can accept dynamically sized input. As such, should we want to perform NLP on the sentence "Sarah lives in London and works for Acme Corporation", we might expect to provide something like this to the <code>input_ids</code> input:</p>
<table class="table">
<thead>
<tr>
<th style="text-align: right;">Sequence</th>
<th>Word</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td>Sarah</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td>lives</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td>in</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td>London</td>
</tr>
<tr>
<td style="text-align: right;">4</td>
<td>and</td>
</tr>
<tr>
<td style="text-align: right;">5</td>
<td>works</td>
</tr>
<tr>
<td style="text-align: right;">6</td>
<td>for</td>
</tr>
<tr>
<td style="text-align: right;">7</td>
<td>Acme</td>
</tr>
<tr>
<td style="text-align: right;">8</td>
<td>Corporation.</td>
</tr>
</tbody>
</table>
<p>But, as can be seen above, the model accepts integers, not strings, so we must first 'tokenize' the input using a vocabulary specific to this model. This is done by downloading the vocabulary for the model from Hugging Face (available <a href="https://huggingface.co/elastic/distilbert-base-cased-finetuned-conll03-english/blob/main/vocab.txt">here</a>) then using a specific tokenizer to convert the input text into a series of tokens in a format the model expects; for BERT based models, a <a href="https://machinelearnit.com/2018/08/19/wordpiece-tokenisation/">"WordPiece Tokenizer"</a> is used.</p>
<p>Fortunately for us, we're able to use the "WordPieceTokenizer" provided in Gerjan's <a href="https://github.com/GerjanVlot/BERT-ML.NET/blob/master/Microsoft.ML.Models.BERT/Tokenizers/WordPieceTokenizer.cs">BERT-ML.NET repository</a>. Running the above input through this tokenizer would give us the following <code>input_ids</code> value:</p>
<table class="table">
<thead>
<tr>
<th style="text-align: right;">Sequence</th>
<th style="text-align: right;">Token Id</th>
<th>Token</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: right;">101</td>
<td>[CLS]</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">21718</td>
<td>sa</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: right;">10659</td>
<td>##rah</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: right;">2491</td>
<td>lives</td>
</tr>
<tr>
<td style="text-align: right;">4</td>
<td style="text-align: right;">1107</td>
<td>in</td>
</tr>
<tr>
<td style="text-align: right;">5</td>
<td style="text-align: right;">25338</td>
<td>lo</td>
</tr>
<tr>
<td style="text-align: right;">6</td>
<td style="text-align: right;">17996</td>
<td>##ndon</td>
</tr>
<tr>
<td style="text-align: right;">7</td>
<td style="text-align: right;">1105</td>
<td>and</td>
</tr>
<tr>
<td style="text-align: right;">8</td>
<td style="text-align: right;">1759</td>
<td>works</td>
</tr>
<tr>
<td style="text-align: right;">9</td>
<td style="text-align: right;">1111</td>
<td>for</td>
</tr>
<tr>
<td style="text-align: right;">10</td>
<td style="text-align: right;">170</td>
<td>a</td>
</tr>
<tr>
<td style="text-align: right;">11</td>
<td style="text-align: right;">1665</td>
<td>##c</td>
</tr>
<tr>
<td style="text-align: right;">12</td>
<td style="text-align: right;">3263</td>
<td>##me</td>
</tr>
<tr>
<td style="text-align: right;">13</td>
<td style="text-align: right;">9715</td>
<td>corporation</td>
</tr>
<tr>
<td style="text-align: right;">14</td>
<td style="text-align: right;">119</td>
<td>.</td>
</tr>
<tr>
<td style="text-align: right;">15</td>
<td style="text-align: right;">102</td>
<td>[SEP]</td>
</tr>
</tbody>
</table>
<p>And it's these 'Token Ids' that are the input to our model.</p>
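<p>For illustration, the greedy, longest-match-first lookup that WordPiece performs can be sketched in a few lines of C#. Note this is a simplified sketch rather than the repository's implementation: it assumes a single, lower-cased word and an in-memory vocabulary, and collapses anything unmatchable to <code>[UNK]</code>.</p>
<pre><code>using System.Collections.Generic;

public static class WordPiece
{
    // Greedy longest-match-first tokenization against a vocabulary, as used
    // by BERT-style models. Continuation pieces are prefixed with "##".
    public static List&lt;string&gt; Tokenize(string word, ISet&lt;string&gt; vocab)
    {
        var tokens = new List&lt;string&gt;();
        int start = 0;
        while (start &lt; word.Length)
        {
            int end = word.Length;
            string match = null;
            while (end &gt; start)
            {
                var piece = word.Substring(start, end - start);
                if (start &gt; 0) piece = "##" + piece; // mark mid-word pieces
                if (vocab.Contains(piece)) { match = piece; break; }
                end--;
            }
            if (match == null) return new List&lt;string&gt; { "[UNK]" }; // unmatchable
            tokens.Add(match);
            start = end;
        }
        return tokens;
    }
}
</code></pre>
<p>Run against a vocabulary containing 'sa' and '##rah', the word 'sarah' splits into the same two pieces shown in the table above.</p>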
<h3 id="model-output">Model Output</h3>
<p>As we can see, the <code>output_0</code> layer consists of the same <code>batch</code> and <code>sequence</code> dimensions but adds an additional dimension with 9 elements. This additional dimension contains the probability of the token at <code>[batch,sequence]</code> belonging to a specific classification. The labels for each classification are provided by the model's <a href="https://huggingface.co/elastic/distilbert-base-cased-finetuned-conll03-english/blob/main/config.json">'config.json' file on Hugging Face</a> as shown below:</p>
<pre><code class="language-json">{
...
"id2label": {
"0": "O",
"1": "B-PER",
"2": "I-PER",
"3": "B-ORG",
"4": "I-ORG",
"5": "B-LOC",
"6": "I-LOC",
"7": "B-MISC",
"8": "I-MISC"
},
...
}
</code></pre>
<p>As can be seen, this model uses <a href="https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)"><code>Inside-outside-beginning</code> tagging</a> to delineate the beginning and inside of a specific classification from other classifications but, for the most part, we can just treat this as 5 classifications:</p>
<ol start="0">
<li>Other</li>
<li>Person</li>
<li>Organisation</li>
<li>Location</li>
<li>Misc</li>
</ol>
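<p>Decoding the output is then just an argmax over the 9 scores for each token. A minimal sketch, assuming a batch size of 1 and the flattened <code>float[]</code> that the model binding exposes (the helper name is my own):</p>
<pre><code>public static class NerOutput
{
    // Label order taken from the model's id2label map in config.json.
    public static readonly string[] Labels =
        { "O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC" };

    // output_0 is logically [sequence, 9] but arrives flattened, so token
    // i's scores start at i * 9. The predicted label for a token is simply
    // the classification with the highest score.
    public static string LabelFor(float[] output, int tokenIndex)
    {
        int offset = tokenIndex * Labels.Length;
        int best = 0;
        for (int i = 1; i &lt; Labels.Length; i++)
            if (output[offset + i] &gt; output[offset + best]) best = i;
        return Labels[best];
    }
}
</code></pre>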
<h2 id="bertonnx">BertONNX</h2>
<p>Armed with the model and an understanding of how to provide input/interpret output, I spiked out a quick .NET Core test project. Looking to simplify Gerjan's implementation even further, I ended up with an end-to-end, command-line based inference engine in just 7 classes (including Gerjan's WordPieceTokenizer along with a Hugging Face configuration deserializer).</p>
<p>Should you wish to take a look, the source for this spike can be found in my <a href="https://github.com/ibebbs/BertOnnx">BertOnnx repository on GitHub</a>.</p>
<p>By far the biggest headache was working out how to shape the input (<code>Feature</code>) and output (<code>Result</code>) types to match the expected model shapes. ML.NET uses <code>[ColumnName([name])]</code> and <code>[VectorType([x,y])]</code> property attributes to bind properties to the model but, given the model was capable of processing dynamically sized input, I wasn't sure what values to use for the <code>VectorType</code> attribute.</p>
<p>Initially I tried omitting shape information from the attribute (<code>[VectorType]</code>) whereupon the app unceremoniously crashed with the error "Variable length input columns not supported". A little searching revealed that this error meant exactly what it said and we couldn't use dynamically sized input with ML.NET!</p>
<p>So, instead I elected to try a different approach and pad all input to a specific size (256 elements). This gave me <code>Feature</code> and <code>Result</code> types that looked like this:</p>
<pre><code>public class Feature
{
[VectorType(1, 256)]
[ColumnName("input_ids")]
public long[] Tokens { get; set; }
[VectorType(1, 256)]
[ColumnName("attention_mask")]
public long[] Attention { get; set; }
}
</code></pre>
<pre><code>public class Result
{
[VectorType(1,256,9)]
[ColumnName("output_0")]
public float[] Output { get; set; }
}
</code></pre>
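<p>With the shapes fixed at 256 elements, populating a <code>Feature</code> is just a matter of right-sizing the arrays. A minimal sketch (helper names are my own, not from the repository; the prediction itself then follows ML.NET's usual <code>ApplyOnnxModel</code>/<code>CreatePredictionEngine</code> pattern):</p>
<pre><code>using System.Linq;

public static class FeatureBuilder
{
    public const int MaxSequence = 256;

    // Pad (or truncate) token ids to the fixed shape declared by
    // [VectorType(1, 256)] on Feature.
    public static long[] PadTokens(long[] tokenIds) =&gt;
        tokenIds.Concat(Enumerable.Repeat(0L, MaxSequence)).Take(MaxSequence).ToArray();

    // As discussed under 'Model Input', the attention mask is simply
    // filled with 1s.
    public static long[] AttentionMask() =&gt;
        Enumerable.Repeat(1L, MaxSequence).ToArray();
}
</code></pre>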
<p>After this it was fairly plain sailing and in short order I had this:</p>
<p><a data-fancybox="BertONNX - Full" href="/Content/Unoonnx/BertONNX - Full.gif"><img src="/Content/Unoonnx/BertONNX - Full.gif" class="img-responsive" style="margin: auto; max-width:80%; margin-top: 6px; margin-bottom: 6px;" alt="BertONNX"/></a></p>
<p>As you can see, the DistilBERT model correctly identifies 'Sarah' as a 'B-PER' (person), 'London' as a 'B-LOC' (location) and 'Acme' as a 'B-ORG' (organisation) in just 202ms. Perfect!</p>
<h2 id="quantization">Quantization</h2>
<p>While having a custom-built inference engine was pretty cool, I was a little concerned about memory consumption if I wanted to use the model on edge devices. Despite the DistilBERT model being significantly smaller than full BERT, memory consumption during inference hit around 1Gb. This would almost certainly be a stretch for many of the devices I'd like to run this model on.</p>
<p>Fortunately, ONNX has a little trick up its sleeve called <a href="https://www.onnxruntime.ai/docs/how-to/quantization.html">'Quantization'</a>.</p>
<p>Quoting <a href="https://medium.com/microsoftazure/faster-and-smaller-quantized-nlp-with-hugging-face-and-onnx-runtime-ec5525473bb7">this article</a> on the matter:</p>
<blockquote class="blockquote">
<p>Quantization approximates floating-point numbers with lower bit width numbers, dramatically reducing memory footprint and accelerating performance. Quantization can introduce accuracy loss since fewer bits limit the precision and range of values. However, researchers have extensively demonstrated that weights and activations can be represented using 8-bit integers (INT8) without incurring significant loss in accuracy.</p>
<p>Compared to FP32, INT8 representation reduces data storage and bandwidth by 4x, which also reduces energy consumed. In terms of inference performance, integer computation is more efficient than floating-point math.</p>
</blockquote>
<p>Incredibly, quantizing a model using Hugging Face's ONNX export is as simple as specifying a <code>--quantize</code> flag. This meant generating a quantized version of the model took no more effort than running the following command:</p>
<pre><code>docker run --rm -v ${PWD}/Output:/Output ibebbs/huggingfacetoonnx:latest --framework pt --opset 12 --pipeline ner --model elastic/distilbert-base-cased-finetuned-conll03-english --quantize /Output/quantized-distilbert-base-cased-finetuned-conll03-english/model.onnx
</code></pre>
<p>The quantized version of the model was just 64Mb (75% smaller) and, due to its input and output layers remaining unchanged, it was a drop-in replacement for the unquantized model. Running with the quantized version resulted in:</p>
<p><a data-fancybox="BertONNX - Quantized" href="/Content/Unoonnx/BertONNX - Quantized.gif"><img src="/Content/Unoonnx/BertONNX - Quantized.gif" class="img-responsive" style="margin: auto; max-width:80%; margin-top: 6px; margin-bottom: 6px;" alt="BertONNX"/></a></p>
<p>As you can see, the model loaded significantly faster and inference speed also got a boost. Best of all, memory consumption during inference was reduced to just 265Mb, definitely within the realms of possibility for an edge device.</p>
<p>Buoyed by this success, I pushed on to...</p>
<h2 id="unoonnx">UnoOnnx</h2>
<p>As per the initial driver for this exploration, I wanted an app on an edge device that would allow me to perform interactive inference. Knowing that <a href="https://platform.uno/">Uno Platform</a> could easily create apps that run across a variety of devices, I decided to whip up an app to do just this.</p>
<p>And so was born UnoOnnx ('Oo-noo-nx'?):</p>
<video class="img-responsive" style="margin: auto; width:66%; margin-top: 6px; margin-bottom: 6px;" controls>
<source src="/Content/Unoonnx/UnoOnnx - Windows.mp4" type="video/mp4"/>
Your browser does not support the video tag
</video>
<p>As you can see, the first inference is quite slow as it (lazily) loads the model but subsequent inferences are more than fast enough for an interactive app.</p>
<p>Then, with a little Uno Platform magic, I ran exactly the same code under Linux ('Oo-noo-nux'?):</p>
<video class="img-responsive" style="margin: auto; width:66%; margin-top: 6px; margin-bottom: 6px;" controls>
<source src="/Content/Unoonnx/UnoOnnx - Linux.mp4" type="video/mp4"/>
Your browser does not support the video tag
</video>
<p>(BTW, loading the model isn't usually that slow - my machine was busy doing something else while I recorded this video).</p>
<p>Pretty Neat!</p>
<p>As with BertOnnx, the source for UnoOnnx is <a href="https://github.com/ibebbs/UnoOnnx">available on GitHub</a> if you want to take a look.</p>
<h2 id="moving-forward">Moving Forward</h2>
<p>In a subsequent post - and assuming there's sufficient interest - I hope to illustrate how to run these models on mobile devices (i.e. Android & iOS). If this is of interest to you, please drop me a tweet and/or star the repositories above to let me know.</p>
<h2 id="conclusion">Conclusion</h2>
<p>As you can see, with the right toolset and a little bit of knowledge, it is fairly straightforward to use state-of-the-art machine learning models from .NET, even within the resource constrained environment of an 'edge' device. While some use-cases that depend on sequence length (i.e. sentiment analysis) might be tricky to implement effectively in ML.NET, many other uses (text generation/classification, machine comprehension, translation, etc) should follow much the same pattern.</p>
<p>However, working through the above has left me extremely concerned about Microsoft's strategy towards desktop (i.e. non-web) development. It seems to me that many of Microsoft's frameworks and SDKs for .NET desktop development are suffering from a distinct lack of resourcing/focus, meaning development is slow and the frameworks are getting left behind by other languages/platforms. For example, here is the commit chart of ML.NET compared to Hugging Face's native API:</p>
<table>
<tr>
<td>
<a data-fancybox="CommitComparison" href="/Content/Unoonnx/ML Commits.png"><img src="/Content/Unoonnx/ML Commits.png" class="img-responsive" style="margin: auto; max-width:80%; margin-top: 6px; margin-bottom: 6px;" alt="ML.NET"/></a>
</td>
<td>
<a data-fancybox="CommitComparison" href="/Content/Unoonnx/Huggingface Commits.png"><img src="/Content/Unoonnx/Huggingface Commits.png" class="img-responsive" style="margin: auto; max-width:80%; margin-top: 6px; margin-bottom: 6px;" alt="HuggingFace"/></a>
</td>
</tr>
</table>
<p>I think you'll agree, one of these projects looks significantly healthier than the other.</p>
<p>Furthermore, Microsoft's strategy/execution around UWP/WinUI/Project Reunion is an <strong>utter shambles</strong>. While I understand WinUI 3.0 is very new and Project Reunion still in preview, I honestly couldn't believe how poor the development experience was with these technologies.</p>
<p><em><strong>@Microsoft, were it not for Uno Platform providing at least a modicum of continuity through the disastrous landscape that is Windows UI development, I - and I believe many others - would have jumped ship to other UI platforms a long time ago.</strong></em>
<em><strong>Please step up your game here. Many of us who have stuck with Windows UI technologies despite its fragmented and frustrating history really are getting to the end of our tether.</strong></em></p>
<h2 id="finally">Finally</h2>
<p>If you're interested in deploying state-of-the-art machine learning models within .NET or using the Uno Platform to deliver cross-platform apps, then please feel free to drop me a line using any of the links below or from my <a href="https://ian.bebbs.co.uk/about">about page</a>. As a freelance software developer and .NET consultant I'm always interested in hearing from potential new clients or ideas for new collaborations.</p>
<h1>Using GMail To Send Email From A Custom Domain</h1>
<p>2021-01-04</p>
<h2 id="tldr">TL;DR</h2>
<p>Do you use a domain registrar that provides email forwarding facilities? Then read this post to see how to set up Gmail to ensure the correct "From" address when replying to emails sent to your domains.</p>
<h2 id="intro">Intro</h2>
<p>I am currently working with a couple of co-founders on an early stage start-up about which I hope to share more information shortly. Part of the prep work for this start-up has been registering a domain name and establishing communications channels.</p>
<p>To do this, I use a domain registrar that provides free email forwarding facilities. This is great as it allows you to receive email sent to '[person]@[yourdomain.com]' as part of your regular email account. However, if you subsequently reply to email you received this way, your reply will be "From" your regular email account, not '[person]@[yourdomain.com]'.</p>
<p>In this post I detail how to establish a completely free façade for email communication via your domain, using a personal GMail account to receive and reply to email using custom domain email addresses.</p>
<h2 id="prerequisites">Prerequisites</h2>
<p>Prior to following the steps below, you should ensure that you've enabled Two-Factor Authentication (2FA) on your GMail account. While it is possible to follow these steps without 2FA, this is not recommended as 2FA is basic good security practice.</p>
<p>You should also ensure you're able to receive email forwarded by your domain registrar in your GMail account. As each domain registrar will provide different mechanisms for setting up and maintaining email forwarding rules, it will not be covered as part of this post. So, before following the steps below, you should ensure you're able to send an email to '[person]@[yourdomain.com]' and receive it in your GMail account.</p>
<h2 id="app-password">App Password</h2>
<p>Before being able to set up "Send mail as" functionality, you'll need an 'App Password'. An 'App Password' allows software/authentication flows that aren't compatible with 2FA to successfully authenticate with Google Services and is essentially just a 'special' password that is a) highly complex - to prevent brute-force attacks, and b) can be revoked should it ever be compromised.</p>
<p>To set up an 'App Password', from your GMail inbox click your account icon in the top right hand corner, then click "Manage your Google Account" as shown below:</p>
<p><a data-fancybox="gmail" href="/Content/UsingGMailForCustomDomainEmail/ManageGMailAccount.png"><img src="/Content/UsingGMailForCustomDomainEmail/ManageGMailAccount.png" class="img-responsive" style="margin: auto; max-width:80%; margin-top: 6px; margin-bottom: 6px;" alt="Manage GMail Account"/></a></p>
<p>This will open a new tab for managing your Google Account. Select the "Security" category from the menu on the left (or across the top on smaller devices) and scroll down to the "Signing in to Google" section. Here, you should see that 2-Step verification has been turned on (if it isn't, turn it on now before continuing) and an "App passwords" option as shown below:</p>
<p><a data-fancybox="gmail" href="/Content/UsingGMailForCustomDomainEmail/SigningInToGoogle.png"><img src="/Content/UsingGMailForCustomDomainEmail/SigningInToGoogle.png" class="img-responsive" style="margin: auto; max-width:80%; margin-top: 6px; margin-bottom: 6px;" alt="Signing In To Google"/></a></p>
<p>Click the "App passwords" option to create a new App password. In the "Select app" drop-down select "Other (Custom name)" and enter a name for the app password (I tend to use the domain name). Once this is entered click the Generate button as shown below:</p>
<p><a data-fancybox="gmail" href="/Content/UsingGMailForCustomDomainEmail/AddAppPassword.png"><img src="/Content/UsingGMailForCustomDomainEmail/AddAppPassword.png" class="img-responsive" style="margin: auto; max-width:80%; margin-top: 6px; margin-bottom: 6px;" alt="Add App Password"/></a></p>
<p>Clicking "Generate" will display a dialog containing your new app password as shown below. Copy this password and keep it safe (I would recommend adding it to your password manager).</p>
<p><a data-fancybox="gmail" href="/Content/UsingGMailForCustomDomainEmail/NewAppPassword.png"><img src="/Content/UsingGMailForCustomDomainEmail/NewAppPassword.png" class="img-responsive" style="margin: auto; max-width:80%; margin-top: 6px; margin-bottom: 6px;" alt="New App Password"/></a></p>
<p>You can now close the "Google Account" tab and return to Gmail.</p>
<h2 id="send-mail-as">Send Mail As</h2>
<p>Back in GMail, click the 'cog' icon in the top right to display "Quick Settings" and, from there, click the "See all settings" button as shown below:</p>
<p><a data-fancybox="gmail" href="/Content/UsingGMailForCustomDomainEmail/SeeAllGmailSettings.png"><img src="/Content/UsingGMailForCustomDomainEmail/SeeAllGmailSettings.png" class="img-responsive" style="margin: auto; max-width:80%; margin-top: 6px; margin-bottom: 6px;" alt="See All Gmail Settings"/></a></p>
<p>This will take you to the Settings page where you should see an "Accounts and Import" category. Click this category to reveal the "Send mail as" options as shown below:</p>
<p><a data-fancybox="gmail" href="/Content/UsingGMailForCustomDomainEmail/SendMailAs.png"><img src="/Content/UsingGMailForCustomDomainEmail/SendMailAs.png" class="img-responsive" style="margin: auto; max-width:80%; margin-top: 6px; margin-bottom: 6px;" alt="Send Mail As"/></a></p>
<p>Click the "Add another email address" link which will display a new window allowing you to "Enter information about your other email address". In the "Email Address" text box, enter the email address of the domain account set up to forward email to your Gmail account as shown below:</p>
<p><a data-fancybox="gmail" href="/Content/UsingGMailForCustomDomainEmail/EnterEMailAddress.png"><img src="/Content/UsingGMailForCustomDomainEmail/EnterEMailAddress.png" class="img-responsive" style="margin: auto; max-width:80%; margin-top: 6px; margin-bottom: 6px;" alt="Enter EMail Address"/></a></p>
<p>Click "Next Step >>" to reveal the "Send emails through your SMTP server" page. This is where we get clever: instead of entering the details of our own SMTP server (which used to work until Google changed requirements a while back), we're going to send through GMail's own servers. In the 'SMTP Server' text box enter 'smtp.gmail.com', then in the 'Username' text box enter your GMail username (i.e. me@gmail.com). Finally, in the 'Password' box, enter the App Password we generated in the previous section. Once everything is entered, the dialog should look similar to this:</p>
<p><a data-fancybox="gmail" href="/Content/UsingGMailForCustomDomainEmail/SMTPServerConfiguration.png"><img src="/Content/UsingGMailForCustomDomainEmail/SMTPServerConfiguration.png" class="img-responsive" style="margin: auto; max-width:80%; margin-top: 6px; margin-bottom: 6px;" alt="SMTP Server Configuration"/></a></p>
<p>Click "Add Account >>" button which, if everything was entered correctly, should take you to the "Confirm verification and add your email address" screen. Here you are prompted for a confirmation code as shown below:</p>
<p><a data-fancybox="gmail" href="/Content/UsingGMailForCustomDomainEmail/VerifyEmailAddress.png"><img src="/Content/UsingGMailForCustomDomainEmail/VerifyEmailAddress.png" class="img-responsive" style="margin: auto; max-width:80%; margin-top: 6px; margin-bottom: 6px;" alt="Verify Email Address"/></a></p>
<p>Leaving the dialog window open, return to your GMail inbox where you should have received an email from "Gmail Team" titled "Gmail Confirmation - Send Mail as [person]@[yourdomain.com]". In this email you should see a confirmation code which you can copy and paste into the "Confirm verification and add your email address" dialog window.</p>
<p>If you don't receive this email then you should double-check your email forwarding rules to ensure the account you entered in the dialog above is configured to forward to your Gmail account. Also bear in mind that some domain registrars can take quite a while to forward email for new accounts (I've seen it take up to 24 hours) so, if you've confirmed that your settings are correct, you may simply need to be a little patient here.</p>
<p>Anyway, assuming you received the confirmation code email and pasted it into the "Enter and verify the confirmation code" text box, you should be able to click Verify, at which point the dialog will disappear. Congratulations, you're now able to send email using your domain email address!</p>
<h2 id="sending-email">Sending Email</h2>
<p>Now, when composing a new email, you're able to click the "From" name and select the email address you want the recipient to see. Furthermore, when you receive email sent to the domain email address, GMail will automatically use this address as the "From" account when replying.</p>
<p>Enjoy!</p>
<h1>Cross-Platform App Authentication with Azure AD B2C And The Uno Platform</h1>
<p><em>2020-11-11</em></p>
<h2 id="tldr">TL;DR</h2>
<p>In this post I comprehensively show how apps written using the <a href="https://platform.uno/">Uno Platform</a> can leverage <a href="https://azure.microsoft.com/en-us/services/active-directory/external-identities/b2c/">Azure AD B2C</a> & <a href="https://github.com/AzureAD/microsoft-authentication-library-for-dotnet">MSAL.Net</a> to provide Identity and Access Management services across platforms as diverse as Windows, Android, iOS and the web. As you will see, this combination of technologies provides extremely cheap, simple and flexible identity management functionality that runs from a single code base.</p>
<h2 id="intro">Intro</h2>
<p>Seamless identity management in client-facing apps is critically important to customer engagement yet extremely difficult to implement correctly. In recent years, numerous IDentity as a Service (IDaaS) providers have emerged to help developers address this challenge, yet somehow secure authentication and authorization remain among the most arduous parts of app development.</p>
<p>In this article I present a suite of technologies that can be leveraged to provide identity management in a simple and affordable yet flexible and scalable manner. I show how recent changes to these technologies allow you to leverage the most recent and secure authentication flows (i.e. <a href="https://romikoderbynew.com/2019/09/20/oauth-2-0-authorization-code-with-pkce-vs-implicit-grant/#:%7E:text=Does%20your%20Authorization%20Server%20supprot%20CORS%3F%20Can,your%20clients%20use%20modern%20browsers%20that%20support%20CORS%3F">"Authorization Code with PKCE" instead of "Implicit Grant"</a>) and I illustrate how this technology stack can be used to implement apps that run across all major platforms - including the web - without the need for the developer to maintain onerous platform-specific code.</p>
<p>Finally, much of this post is composed of information from - and links to - other articles from around the web. I have aggregated and annotated these posts below such that the reader is provided a comprehensive guide to using these technologies within a cross-platform Uno application. While I specifically discuss only the major platforms (UWP, Android, iOS and Web) the approaches used below should be pertinent to any platform supported by Uno.</p>
<h2 id="technologies">Technologies</h2>
<p>The suite of technologies used in this article is comprised of: <a href="https://platform.uno/">Uno Platform</a>, <a href="https://azure.microsoft.com/en-us/services/active-directory/external-identities/b2c/">Azure AD B2C</a> and <a href="https://github.com/AzureAD/microsoft-authentication-library-for-dotnet">MSAL.Net</a>. I provide a brief introduction to these technologies below before proceeding to show how they can be combined to provide a holistic cloud-based user-management solution.</p>
<h3 id="uno-platform">Uno Platform</h3>
<p>Regular readers of my blog will be well aware of the Uno Platform by now but, for new readers, the Uno Platform allows UWP apps to run <em>natively</em> on every major platform including desktop (Windows, Mac, Linux), mobile (Android & iOS) and the web (in pretty much any browser). It achieves this by implementing WinRT APIs on top of Xamarin (for desktop/mobile) and WASM (for the web) which allows the developer to write a single code-base which can be transparently shared across each of these platforms.</p>
<p>I have <a href="https://ian.bebbs.co.uk/tags/uno-platform">blogged about Uno Platform extensively over the past year</a> as, in my opinion, it represents the best platform for cross-platform UI development and empowers developers to utilise a <a href="https://ian.bebbs.co.uk/posts/UnoValue">"one-stack" solution architecture</a>. My consultancy - <a href="https://www.cogenity.com/">Cogenity</a> - specialises in providing support for, and bespoke development of, cross-platform applications written using the Uno Platform. Should you have any questions regarding this article or the Uno Platform in general, please feel free to <a href="https://www.cogenity.com/#three">drop us a line</a> - we love to hear about applications being built with Uno and help our clients deliver on the promise of this amazing technology.</p>
<h3 id="azure-ad-b2c">Azure AD B2C</h3>
<p>Azure Active Directory B2C (AAD B2C) is Microsoft's Azure based Identity and Access Management (IAM) offering for business-to-consumer (B2C) applications. Unlike regular <a href="https://azure.microsoft.com/en-us/services/active-directory/">Azure Active Directory</a> which is very much aimed at B2B and LoB applications, AAD B2C has been designed from the ground up to provide seamless IAM for customer-facing apps. As such it allows the developer to easily leverage advanced scenarios such as social login and multi-factor authentication while simultaneously providing the means to customise "every pixel of the registration and sign-in experience".</p>
<p>Amazingly, this service is offered at an incredibly low price-point. The first 50,000 monthly active users are free and each subsequent user costs just £0.002423 per month! This is easily enough to bootstrap an application and gain market traction before being faced with a significant bill for IDaaS and, in any event, these costs will almost certainly be less than the cost of writing and hosting a bespoke solution.</p>
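<p>To put that pricing in perspective, here's a quick back-of-the-envelope calculation using the figures quoted above:</p>

```csharp
using System;

// Sanity check of the pricing claim: the first 50,000 monthly active users
// (MAU) are free, and each user beyond that costs £0.002423 per month.
decimal MonthlyCost(int monthlyActiveUsers)
{
    const int freeTier = 50_000;
    const decimal perUserPerMonth = 0.002423m;
    return Math.Max(0, monthlyActiveUsers - freeTier) * perUserPerMonth;
}

Console.WriteLine(MonthlyCost(40_000));  // within the free tier: £0
Console.WriteLine(MonthlyCost(100_000)); // 50,000 chargeable users: £121.15
```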
<h3 id="msal.net">MSAL.Net</h3>
<p>Microsoft Authentication Library for .NET (MSAL.NET) is Microsoft's successor to Active Directory Authentication Library for .NET (ADAL.NET). It is part of the <a href="https://docs.microsoft.com/en-gb/azure/active-directory/develop/v2-overview">Microsoft Identity Platform for Developers</a> and represents current best practice for Azure AD authentication from .NET applications.</p>
<p>As we will see below, authentication with MSAL.NET is really very simple and works beautifully in cross-platform scenarios on the Uno Platform.</p>
<h2 id="getting-started-with-azure-ad-b2c">Getting started with Azure AD B2C</h2>
<p>So, with introductions out of the way, let's get started with Azure AD B2C by creating a new tenant. This is by far the most complicated part of the process and covering it in detail could easily balloon this post to an unmanageable size. Fortunately <a href="https://twitter.com/CodeMillMatt">Matthew Soucoup</a> has covered all the steps for creating an Azure AD B2C tenant on his <a href="https://codemilltech.com/">blog</a>. In the steps below I will be pointing you to Matt's blog posts, which I very much encourage you to read and follow before continuing.</p>
<h3 id="step-1-understanding-terminology">Step 1 - Understanding terminology</h3>
<p>One of the most confusing parts of authentication is understanding the various terminology. In his <a href="https://codemilltech.com/xamarin-authentication-with-azure-active-directory-b2c/">first post about Azure AD B2C</a> Matt digs into the various terminology you'll need to understand in order to correctly setup and use Azure AD B2C. If you're at all unsure about terms such as Tenant, Providers or Policies, I'd very much recommend a read of this post before continuing.</p>
<h3 id="step-2-creating-a-tenant">Step 2 - Creating A Tenant</h3>
<p>Now that we understand the terminology, we can go ahead and create an Azure AD B2C Tenant. Again, Matt covers this fantastically well in a <a href="https://codemilltech.com/creating-an-ad-b2c-tenant/">blog post</a>. He also covers the process in a <a href="https://www.youtube.com/watch?v=zfyHwD9sJJ4&feature=youtu.be">YouTube</a> video which helps convey some of the "tricky" behaviour of Azure Directories. Read or view either of these links and follow the steps therein. Once complete you should have a new "[tenant].onmicrosoft.com" directory with an Azure AD B2C service as shown here:</p>
<img src="/Content/UnoB2C/New B2C Tenant.png" class="img-responsive" style="margin: auto; max-width:80%; margin-top: 6px; margin-bottom: 6px;" alt="New B2C Tenant"/>
<h3 id="step-3-adding-a-policy">Step 3 - Adding A Policy</h3>
<p>While the Azure AD B2C Tenant provides the infrastructure for cloud-based IDaaS, policies dictate who can use this service and how. In order for users of your app to be able to register and/or log in, you need to create a "User flow" policy in your tenant. Matt covers this process in the "Creating A Policy" section of this <a href="https://codemilltech.com/adding-authentication-and-authorization-with-azure-ad-b2c/#creatingapolicy">blog post</a>; however, the post is slightly out of date as the Azure Portal has changed significantly since he authored it. I would suggest reading Matt's blog post so you understand the process, then following the screenshots shown below (tap to enlarge):</p>
<table>
<tr>
<td><a data-fancybox="addingapolicy" href="/Content/UnoB2C/New User Flow.png"><img src="/Content/UnoB2C/New User Flow.png" class="img-responsive" style="margin: auto; max-width:80%; margin-top: 6px; margin-bottom: 6px;" alt="New User Flow"/></a></td>
<td><a data-fancybox="addingapolicy" href="/Content/UnoB2C/Sign Up and sign in flow.png"><img src="/Content/UnoB2C/Sign Up and sign in flow.png" class="img-responsive" style="margin: auto; max-width:80%; margin-top: 6px; margin-bottom: 6px;" alt="Sign Up and sign in flow"/></a></td>
</tr>
<tr>
<td style="text-align: center"><h5>1. Create a new user flow</h5></td>
<td style="text-align: center"><h5>2. Select recommended sign up and sign in flow</h5></td>
</tr>
<tr>
<td><a data-fancybox="addingapolicy" href="/Content/UnoB2C/New User Flow Name.png"><img src="/Content/UnoB2C/New User Flow Name.png" class="img-responsive" style="margin: auto; max-width:80%; margin-top: 6px; margin-bottom: 6px;" alt="New User Flow Name"/></a></td>
<td><a id="newuserflowclaims" data-fancybox="addingapolicy" href="/Content/UnoB2C/New User Flow Claims.png"><img src="/Content/UnoB2C/New User Flow Claims.png" class="img-responsive" style="margin: auto; max-width:80%; margin-top: 6px; margin-bottom: 6px;" alt="New User Flow Claims"/></a></td>
</tr>
<tr>
<td style="text-align: center"><h5>3. Name the user flow</h5></td>
<td style="text-align: center"><h5>4. Select registration attributes and token claims</h5></td>
</tr>
</table>
<p>Make sure you take note of your sign-up and sign-in flow name as you'll need this later.</p>
<h3 id="step-4-add-app-registration">Step 4 - Add App Registration</h3>
<p>The last step is to add an app registration. This controls how your app is expected to interact with Azure AD B2C and the credentials it uses to do so. Again <a href="https://codemilltech.com/adding-authentication-and-authorization-with-azure-ad-b2c/#step2settinguptheazureadb2capplication">Matt has us covered</a> but, again, his descriptions and screenshots are a little out of date. Furthermore, we need to add a couple of "platforms" to the app registration in order to support the variety of operating systems and devices available to Uno applications.</p>
<p>The screenshots below show how to set up an app registration that leverages Authorization Code Flow with PKCE for UWP/WASM authentication and protocol activation for Android / iOS:</p>
<table>
<tr>
<td><a data-fancybox="addappregistration" href="/Content/UnoB2C/App Registrations.png"><img src="/Content/UnoB2C/App Registrations.png" class="img-responsive" style="margin: auto; max-width:80%; margin-top: 6px; margin-bottom: 6px;" alt="App Registrations"/></a></td>
<td><a data-fancybox="addappregistration" href="/Content/UnoB2C/New App Registration.png"><img src="/Content/UnoB2C/New App Registration.png" class="img-responsive" style="margin: auto; max-width:80%; margin-top: 6px; margin-bottom: 6px;" alt="Add App Registration"/></a></td>
</tr>
<tr>
<td style="text-align: center"><h5>1. Navigate to app registration</h5></td>
<td style="text-align: center"><h5>2. Add a new registration</h5></td>
</tr>
<tr>
<td><a data-fancybox="addappregistration" href="/Content/UnoB2C/Register an Application.png"><img src="/Content/UnoB2C/Register an Application.png" class="img-responsive" style="margin: auto; max-width:80%; margin-top: 6px; margin-bottom: 6px;" alt="Register the Application"/></a></td>
<td><a data-fancybox="addappregistration" href="/Content/UnoB2C/Note Application Id.png"><img src="/Content/UnoB2C/Note Application Id.png" class="img-responsive" style="margin: auto; max-width:80%; margin-top: 6px; margin-bottom: 6px;" alt="Note Application Id"/></a></td>
</tr>
<tr>
<td style="text-align: center"><h5>3. Name the application and change Redirect URI</h5></td>
<td style="text-align: center"><h5>4. Note the application id and click Redirect URIs</h5></td>
</tr>
<tr>
<td><a data-fancybox="addappregistration" href="/Content/UnoB2C/Confirm Authorization Code Flow with PKCE.png"><img src="/Content/UnoB2C/Confirm Authorization Code Flow with PKCE.png" class="img-responsive" style="margin: auto; max-width:80%; margin-top: 6px; margin-bottom: 6px;" alt="Confirm Authorization Code Flow with PKCE"/></a></td>
<td><a data-fancybox="addappregistration" href="/Content/UnoB2C/Add a platform.png"><img src="/Content/UnoB2C/Add a platform.png" class="img-responsive" style="margin: auto; max-width:80%; margin-top: 6px; margin-bottom: 6px;" alt="Add a platform"/></a></td>
</tr>
<tr>
<td style="text-align: center"><h5>5. Confirm Authorization Code Flow with PKCE</h5></td>
<td style="text-align: center"><h5>6. Click 'Add a platform'</h5></td>
</tr>
<tr>
<td><a data-fancybox="addappregistration" href="/Content/UnoB2C/Select Mobile and desktop applications.png"><img src="/Content/UnoB2C/Select Mobile and desktop applications.png" class="img-responsive" style="margin: auto; max-width:80%; margin-top: 6px; margin-bottom: 6px;" alt="Select Mobile and desktop applications"/></a></td>
<td><a data-fancybox="addappregistration" href="/Content/UnoB2C/Add msal redirect uri.png"><img src="/Content/UnoB2C/Add msal redirect uri.png" class="img-responsive" style="margin: auto; max-width:80%; margin-top: 6px; margin-bottom: 6px;" alt="Add msal redirect uri"/></a></td>
</tr>
<tr>
<td style="text-align: center"><h5>7. Select 'Mobile and desktop applications'</h5></td>
<td style="text-align: center"><h5>8. Add MSAL Redirect URI then click Configure</h5></td>
</tr>
</table>
<p>And there we go. We now have an Azure AD B2C tenant set up that is able to authenticate users using best practices across a variety of platforms. If everything is set up correctly, your tenant should look similar to this:</p>
<p><a data-fancybox="addappregistration" href="/Content/UnoB2C/SPA and desktop Redirect URIs.png"><img src="/Content/UnoB2C/SPA and desktop Redirect URIs.png" class="img-responsive" style="margin: auto; max-width:50%; margin-top: 6px; margin-bottom: 6px;" alt="SPA and desktop Redirect URIs"/></a></p>
<h2 id="create-an-uno-application">Create an Uno Application</h2>
<p>We'll now use Visual Studio to create a cross-platform Uno Platform application which is able to authenticate users using the Azure AD B2C tenant we set up above. If you're not sure how to create a new Uno Platform application then follow the steps <a href="https://platform.uno/docs/articles/getting-started-tutorial-1.html">here</a>. I'm going to name my project <code>UnoAuth</code>.</p>
<h3 id="install-dependencies">Install Dependencies</h3>
<p>We're going to need to install the following packages to all projects in the solution:</p>
<ol>
<li><a href="https://www.nuget.org/packages/Microsoft.Identity.Client">Microsoft.Identity.Client</a></li>
<li><a href="https://www.nuget.org/packages/Uno.UI.MSAL">Uno.UI.MSAL</a></li>
<li><a href="https://www.nuget.org/packages/System.IdentityModel.Tokens.Jwt">System.IdentityModel.Tokens.Jwt</a></li>
</ol>
<p>The easiest way to do this is via the "Manage Packages for Solution" dialog (accessed by right-clicking the solution in Solution Explorer) as shown here:</p>
<img src="/Content/UnoB2C/Manage Packages for Solution.png" class="img-responsive" style="margin: auto; max-width:66%; margin-top: 6px; margin-bottom: 6px;" alt="Manage Packages for Solution"/>
<h3 id="authentication-configuration">Authentication Configuration</h3>
<p>With the prerequisite dependencies installed we're going to provide the authentication settings required by Azure AD B2C. As some of these settings should be considered sensitive (e.g. the ClientId), we're going to use a partial class (<code>Authentication</code>) split between two files (<code>Authentication.cs</code> and <code>Authentication.Secrets.cs</code>) so that we can put access logic in one and sensitive values in the other. We can then ensure the second file doesn't get committed to source control (via <code>.gitignore</code>).</p>
<p>The <code>Authentication.cs</code> should look like this:</p>
<pre><code class="language-c#">using System.Collections.Generic;

namespace UnoAuth
{
    public static partial class Authentication
    {
        // TenantSecret should be provided in `Authentication.Secrets.cs` as part of the
        // partial class
        public static string Tenant => TenantSecret;

        // ClientIdSecret should be provided in `Authentication.Secrets.cs` as part of the
        // partial class
        public static string ClientId => ClientIdSecret;

        // PolicySecret should be provided in `Authentication.Secrets.cs` as part of the
        // partial class
        public static string Policy => PolicySecret;

        // RedirectUriSecret / RedirectUriSecretDesktop should be provided in
        // `Authentication.Secrets.cs` as part of the partial class
#if __ANDROID__ || __IOS__
        public static string RedirectUri => RedirectUriSecretDesktop;
#else
        public static string RedirectUri => RedirectUriSecret;
#endif

#if __IOS__
        // BundleNameSecret should be provided in `Authentication.Secrets.cs` as part of the
        // partial class
        public static string BundleName => BundleNameSecret;
#endif

        // ScopesSecret should be provided in `Authentication.Secrets.cs` as part of the
        // partial class
        public static IEnumerable<string> Scopes => ScopesSecret;

        public static string AuthorityBase => $"https://{Tenant}.b2clogin.com/tfp/{Tenant}.onmicrosoft.com/";
        public static string Authority => $"{AuthorityBase}{Policy}";

        public static string GivenNameClaimType => "given_name";
    }
}
</code></pre>
<p>Note the <code>#if ... #else ... #endif</code> compiler directives. These directives allow us to use <a href="https://platform.uno/docs/articles/platform-specific-csharp.html">platform-specific code</a> such that the correct redirect URI is used on each platform and platform-specific values are provided only on the platforms that require them.</p>
<p>Next, <code>Authentication.Secrets.cs</code> should look like this (but with the appropriate values):</p>
<pre><code class="language-c#">using System.Collections.Generic;

namespace UnoAuth
{
    public static partial class Authentication
    {
        // In this sample, this value will be "bebbsauthspike"
        private static readonly string TenantSecret = "[REPLACE THIS VALUE]";

        // This is the ClientId value from the app registration.
        // It will be in the form of "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
        private static readonly string ClientIdSecret = "[REPLACE THIS VALUE]";

        // In this sample, this value will be "B2C_1_signup-signin"
        private static readonly string PolicySecret = "[REPLACE THIS VALUE]";

        // In this sample, this value will be "http://localhost:5000"
        private static readonly string RedirectUriSecret = "[REPLACE THIS VALUE]";
        private static readonly string RedirectUriSecretDesktop = $"msal{ClientIdSecret}://auth";

        // In this sample, this value will be "com.companyname.UnoAuth"
        private static readonly string BundleNameSecret = "[REPLACE THIS VALUE]";

        // Note, we're currently only interested in authenticating, not defining any additional
        // scopes which a user may or may not have access to. As such, we only request access
        // to the `openid` scope.
        private static readonly IEnumerable<string> ScopesSecret = new[] { "https://graph.microsoft.com/openid" };
    }
}
</code></pre>
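<p>As an aside, given the sample values above (the "bebbsauthspike" tenant and "B2C_1_signup-signin" policy), the <code>Authority</code> that <code>Authentication.cs</code> computes resolves as sketched here:</p>

```csharp
using System;

// Sketch: the B2C authority URL produced by the string interpolation in
// Authentication.cs, using the sample tenant and policy names from this post.
string tenant = "bebbsauthspike";
string policy = "B2C_1_signup-signin";
string authorityBase = $"https://{tenant}.b2clogin.com/tfp/{tenant}.onmicrosoft.com/";
string authority = $"{authorityBase}{policy}";
Console.WriteLine(authority);
```

This is the value that will later be passed to <code>WithB2CAuthority</code> when building the MSAL client.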
<h3 id="create-the-ui">Create the UI</h3>
<p>Finally we're going to create the UI. Given our app will have three distinct states - Unauthenticated, Authenticating & Authenticated - we're going to use <a href="https://docs.microsoft.com/en-us/uwp/api/Windows.UI.Xaml.VisualState?view=winrt-19041">visual states</a> to directly reflect these states in the UI. So, in <code>MainPage.xaml</code>, update the Xaml to the following:</p>
<pre><code class="language-xaml"><Page
    x:Class="UnoAuth.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    mc:Ignorable="d">
    <Grid x:Name="StateGrid" Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
        <VisualStateManager.VisualStateGroups>
            <VisualStateGroup x:Name="AuthenticationStates">
                <VisualState x:Name="Unauthenticated"/>
                <VisualState x:Name="Authenticating">
                    <VisualState.Setters>
                        <Setter Target="AuthenticatingGrid.(UIElement.Visibility)" Value="Visible"/>
                        <Setter Target="AuthenticatedGrid.(UIElement.Visibility)" Value="Collapsed"/>
                        <Setter Target="UnauthenticatedGrid.(UIElement.Visibility)" Value="Collapsed"/>
                    </VisualState.Setters>
                </VisualState>
                <VisualState x:Name="Authenticated">
                    <VisualState.Setters>
                        <Setter Target="AuthenticatedGrid.(UIElement.Visibility)" Value="Visible"/>
                        <Setter Target="AuthenticatingGrid.(UIElement.Visibility)" Value="Collapsed"/>
                        <Setter Target="UnauthenticatedGrid.(UIElement.Visibility)" Value="Collapsed"/>
                    </VisualState.Setters>
                </VisualState>
            </VisualStateGroup>
        </VisualStateManager.VisualStateGroups>
        <Grid x:Name="UnauthenticatedGrid" Visibility="Visible" Background="#FF1D437C">
            <StackPanel HorizontalAlignment="Center" VerticalAlignment="Center">
                <TextBlock Text="Click 'Sign In' To Authenticate" TextWrapping="Wrap" HorizontalAlignment="Center" Style="{ThemeResource TitleTextBlockStyle}" Margin="32,32,32,32" Foreground="White"/>
                <Button x:Name="SignInButton" HorizontalAlignment="Center" Padding="32,16,32,16" Margin="32,32,32,32" Click="SignInButton_Click" Background="#FF412663">
                    <TextBlock Text="Sign In" TextWrapping="Wrap" Style="{ThemeResource SubtitleTextBlockStyle}" Foreground="White"/>
                </Button>
            </StackPanel>
        </Grid>
        <Grid x:Name="AuthenticatingGrid" Visibility="Collapsed" Background="#FFC07000">
            <StackPanel Orientation="Vertical" HorizontalAlignment="Center" VerticalAlignment="Center">
                <TextBlock HorizontalAlignment="Center" Text="Authenticating" Style="{ThemeResource TitleTextBlockStyle}" Margin="32" Foreground="White"/>
                <TextBlock HorizontalAlignment="Center" Text="One Sec..." Style="{ThemeResource SubtitleTextBlockStyle}" Margin="32" Foreground="White"/>
            </StackPanel>
        </Grid>
        <Grid x:Name="AuthenticatedGrid" Visibility="Collapsed" Background="#FF1F6900">
            <StackPanel Orientation="Vertical" HorizontalAlignment="Center" VerticalAlignment="Center">
                <TextBlock HorizontalAlignment="Center" Style="{ThemeResource TitleTextBlockStyle}" Margin="32" Foreground="White">
                    <Run Text="Hi "/><Run Text="{x:Bind Path=GivenName, Mode=OneWay}"/><Run Text="!"/>
                </TextBlock>
                <TextBlock HorizontalAlignment="Center" Text="How are you?" Style="{ThemeResource SubtitleTextBlockStyle}" Margin="32" Foreground="White"/>
                <Button x:Name="SignOutButton" HorizontalAlignment="Center" Padding="32,16,32,16" Margin="32,32,32,32" Click="SignOutButton_Click" Background="#FF412663">
                    <TextBlock Text="Sign Out" TextWrapping="Wrap" Style="{ThemeResource SubtitleTextBlockStyle}" Foreground="White"/>
                </Button>
            </StackPanel>
        </Grid>
    </Grid>
</Page>
</code></pre>
<p>Here you can see the three visual states named: <code>Unauthenticated</code>, <code>Authenticating</code> & <code>Authenticated</code>. In the <code>Unauthenticated</code> state the <code>UnauthenticatedGrid</code> will be visible while both the <code>AuthenticatingGrid</code> and <code>AuthenticatedGrid</code> will be collapsed. This pattern is repeated in the other states (<code>Authenticating</code> only showing <code>AuthenticatingGrid</code> & <code>Authenticated</code> only showing <code>AuthenticatedGrid</code>) such that only elements pertinent to the current state are displayed.</p>
<p>In the <code>UnauthenticatedGrid</code> we have a <code>SignInButton</code> whose <code>Click</code> event handler invokes the authentication process. While authentication is taking place, the <code>AuthenticatingGrid</code> will be shown, asking the user to wait. Finally, in the <code>AuthenticatedGrid</code> we have a <code>TextBlock</code> which will show the given name of the authenticated user and a <code>SignOutButton</code> which allows the user to sign out.</p>
<h3 id="implement-the-code">Implement the Code</h3>
<p>In the <code>MainPage.xaml.cs</code> code-behind file we implement the <code>SignInButton_Click</code> method to perform authentication using Azure AD B2C and the <code>SignOutButton_Click</code> method to remove the cached authentication tokens. Here's the code:</p>
<pre><code class="language-c#">using Microsoft.Identity.Client;
using System.Collections.Generic;
using System.IdentityModel.Tokens.Jwt;
using System.Linq;
using Uno.UI.MSAL;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

namespace UnoAuth
{
    [TemplateVisualState(GroupName = AuthenticationStatesGroupName, Name = UnauthenticatedStateName)]
    [TemplateVisualState(GroupName = AuthenticationStatesGroupName, Name = AuthenticatingStateName)]
    [TemplateVisualState(GroupName = AuthenticationStatesGroupName, Name = AuthenticatedStateName)]
    public sealed partial class MainPage : Page
    {
        private const string AuthenticationStatesGroupName = "AuthenticationStates";
        private const string UnauthenticatedStateName = "Unauthenticated";
        private const string AuthenticatingStateName = "Authenticating";
        private const string AuthenticatedStateName = "Authenticated";

        public static readonly DependencyProperty GivenNameProperty = DependencyProperty.Register("GivenName", typeof(string), typeof(MainPage), new PropertyMetadata(string.Empty));

        private readonly IPublicClientApplication _authenticationClient;

        public MainPage()
        {
            this.InitializeComponent();
            _authenticationClient = PublicClientApplicationBuilder
                .Create(Authentication.ClientId)
#if __IOS__
                .WithIosKeychainSecurityGroup(Authentication.BundleName)
#endif
                .WithB2CAuthority(Authentication.Authority)
                .WithRedirectUri(Authentication.RedirectUri)
                .WithUnoHelpers()
                .Build();
        }

        private void TransitionToAuthenticated(AuthenticationResult authResult)
        {
            var token = new JwtSecurityToken(authResult.IdToken);
            GivenName = token.Claims
                .Where(claim => Authentication.GivenNameClaimType.Equals(claim.Type))
                .Select(claim => claim.Value)
                .First();
            VisualStateManager.GoToState(this, AuthenticatedStateName, true);
        }

        private async void SignInButton_Click(object sender, RoutedEventArgs e)
        {
            VisualStateManager.GoToState(this, AuthenticatingStateName, true);
            try
            {
                var accounts = await _authenticationClient.GetAccountsAsync();
                var result = await _authenticationClient
                    .AcquireTokenSilent(Authentication.Scopes, accounts.FirstOrDefault())
                    .ExecuteAsync();
                TransitionToAuthenticated(result);
            }
            catch (MsalUiRequiredException)
            {
                try
                {
                    var result = await _authenticationClient
                        .AcquireTokenInteractive(Authentication.Scopes)
                        .WithPrompt(Prompt.ForceLogin)
                        .WithUnoHelpers()
                        .ExecuteAsync();
                    TransitionToAuthenticated(result);
                }
                catch
                {
                    // Something went wrong - let the user try again
                    VisualStateManager.GoToState(this, UnauthenticatedStateName, true);
                }
            }
        }

        private async void SignOutButton_Click(object sender, RoutedEventArgs e)
        {
            IEnumerable<IAccount> accounts = await _authenticationClient.GetAccountsAsync();
            while (accounts.Any())
            {
                await _authenticationClient.RemoveAsync(accounts.First());
                accounts = await _authenticationClient.GetAccountsAsync();
            }
            VisualStateManager.GoToState(this, UnauthenticatedStateName, true);
        }

        public string GivenName
        {
            get { return (string)GetValue(GivenNameProperty); }
            set { SetValue(GivenNameProperty, value); }
        }
    }
}
</code></pre>
<p>There's a lot here so let's break it down:</p>
<h4 id="publicclientapplicationbuilder">PublicClientApplicationBuilder</h4>
<pre><code class="language-c#">_authenticationClient = PublicClientApplicationBuilder
    .Create(Authentication.ClientId)
#if __IOS__
    .WithIosKeychainSecurityGroup(Authentication.BundleName)
#endif
    .WithB2CAuthority(Authentication.Authority)
    .WithRedirectUri(Authentication.RedirectUri)
    .WithUnoHelpers()
    .Build();
</code></pre>
<p>The <code>PublicClientApplicationBuilder</code> class is used to configure and build a <code>PublicClientApplication</code> instance. This class is used to:</p>
<blockquote class="blockquote">
<p>acquire tokens in desktop or mobile applications (Desktop / UWP / Xamarin.iOS / Xamarin.Android). Public client applications are not trusted to safely keep application secrets, and therefore they only access Web APIs in the name of the user only</p>
</blockquote>
<p>To rephrase: because apps that get installed on desktop or mobile devices can be relatively easily decompiled, they can't effectively keep secrets in the way apps that run on a remote machine (i.e. server-rendered web apps) can. As such, this class is able to invoke an authentication flow using only a ClientId and RedirectUri which, while sensitive, do not directly grant the app any authentication rights and are therefore not considered secret. Equally, when this application runs in a browser (via WASM) as an SPA, we need to ensure no secrets are held in the JavaScript VM instance as these can also be retrieved by malicious actors.</p>
<p>To build a <code>PublicClientApplication</code> instance we need to provide the <code>PublicClientApplicationBuilder</code> with the ClientId, Authority and RedirectUri values we encountered while <a href="#step-2-creating-a-tenant">creating our tenant</a>. For iOS we also need to provide the 'IosKeychainSecurityGroup' value, which we enclose in a compiler directive so that it is only used on that platform. We provide these values via the <code>Authentication</code> class, which reads the values from <code>Authentication.Secrets.cs</code>.</p>
<p>Of particular note here is the <code>.WithUnoHelpers()</code> line. This extension method provides a custom implementation of <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.identity.client.extensibility.icustomwebui?view=azure-dotnet"><code>ICustomWebUI</code></a> and <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.identity.client.imsalhttpclientfactory?view=azure-dotnet"><code>IMsalHttpClientFactory</code></a> to MSAL.NET which allows it to perform authentication in WASM in <em>exactly the same way</em> as it would for an app running on a desktop or mobile device. This is just fantastic and both the MSAL.NET team and Uno Platform deserve kudos for creating and exploiting hooks that allow this use-case to function with so little friction.</p>
<h4 id="acquiretokensilent">AcquireTokenSilent</h4>
<pre><code class="language-c#">var accounts = await _authenticationClient.GetAccountsAsync();
var result = await _authenticationClient
    .AcquireTokenSilent(Authentication.Scopes, accounts.FirstOrDefault())
    .ExecuteAsync();
</code></pre>
<p>When a user successfully authenticates with Azure AD B2C, they are provided with both an <a href="https://auth0.com/docs/tokens/access-tokens">access token</a> and a <a href="https://auth0.com/docs/tokens/refresh-tokens">refresh token</a>. Both tokens are stored in a local cache associated with the application. These cached tokens can be used across sessions to ensure a user isn't constantly being prompted to authenticate with a service.</p>
<p>As such, the first thing we endeavour to do when starting the authentication process is check whether the local cache holds a valid access token or refresh token (the latter being automatically exchanged for a new access token). If it does, then the user has already authenticated and we should use the current tokens to avoid prompting the user to authenticate a second time.</p>
<p>And this is what <code>AcquireTokenSilent</code> does. We first get the list of accounts in the token cache and (for simplicity) use the first account we find to check for the presence of a valid token. If one is found, authentication succeeds and no further action is required. If a valid token is not found, an <code>MsalUiRequiredException</code> is thrown, which we handle by performing authentication interactively.</p>
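<p>Putting the two halves together, the overall acquisition logic is typically a try/catch: attempt the silent flow first and fall back to the interactive flow only when MSAL signals that user interaction is required. The sketch below illustrates the pattern using the same <code>Authentication</code> values used throughout this post; the method name is mine rather than from the sample.</p>
<pre><code class="language-c#">private async Task<AuthenticationResult> AcquireTokenAsync()
{
    // Try the token cache first...
    var accounts = await _authenticationClient.GetAccountsAsync();
    try
    {
        return await _authenticationClient
            .AcquireTokenSilent(Authentication.Scopes, accounts.FirstOrDefault())
            .ExecuteAsync();
    }
    catch (MsalUiRequiredException)
    {
        // ...and only prompt the user when no valid cached token exists
        return await _authenticationClient
            .AcquireTokenInteractive(Authentication.Scopes)
            .WithPrompt(Prompt.ForceLogin)
            .WithUnoHelpers()
            .ExecuteAsync();
    }
}
</code></pre>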
<h4 id="acquiretokeninteractive">AcquireTokenInteractive</h4>
<pre><code class="language-c#">var result = await _authenticationClient
    .AcquireTokenInteractive(Authentication.Scopes)
    .WithPrompt(Prompt.ForceLogin)
    .WithUnoHelpers()
    .ExecuteAsync();
</code></pre>
<p>If no cached token was available, we need to prompt the user to authenticate interactively. This process involves opening a browser window and navigating to the authentication page for your Azure AD B2C tenant. Once authentication is complete, an authorization code is returned via the RedirectUri, which MSAL exchanges for access and refresh tokens that are then stored in the local cache.</p>
<p>Again, note the call to <code>.WithUnoHelpers()</code>. This call performs platform-dependent set-up so that the device is able to correctly display a browser and return control to the calling application once authentication is complete.</p>
<p>Finally, you may be wondering about the <code>.WithPrompt(Prompt.ForceLogin)</code> call. Currently MSAL.NET doesn't <a href="https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/issues/589">support a unified means to "sign out" of an account</a>. While you are able to remove cached tokens (see the "sign out" code below), doing so doesn't clear the cookies in the browser used to sign in to an account. This would result in a subsequent call to <code>AcquireTokenInteractive</code> simply logging the user in to the previously used account without prompting for credentials. To prevent this, <code>.WithPrompt(Prompt.ForceLogin)</code> ensures the user is prompted for credentials regardless of cookie state.</p>
<h4 id="transitiontoauthenticated">TransitionToAuthenticated</h4>
<pre><code class="language-c#">private void TransitionToAuthenticated(AuthenticationResult authResult)
{
    var token = new JwtSecurityToken(authResult.IdToken);
    GivenName = token.Claims
        .Where(claim => Authentication.GivenNameClaimType.Equals(claim.Type))
        .Select(claim => claim.Value)
        .First();
    VisualStateManager.GoToState(this, AuthenticatedStateName, true);
}
</code></pre>
<p>Once a user has been authenticated (either silently or interactively) we transition to the <code>Authenticated</code> state. Before doing so, however, we use the ID token returned from the authentication process to determine the name of the person who authenticated. As Azure AD B2C returns this as a <a href="https://auth0.com/docs/tokens/json-web-tokens">JSON Web Token (JWT)</a>, we use the <code>JwtSecurityToken</code> class from the <code>System.IdentityModel.Tokens.Jwt</code> package to parse it. The token will contain many claims, determined by the registration attributes and token claims <a href="#newuserflowclaims">we selected while setting up the policy for our Azure AD B2C tenant</a>.</p>
<p>In this instance, we're interested in the <code>given_name</code> claim so we enumerate through the claims and set the <code>GivenName</code> property to the value of the first claim of this type.</p>
<p>Finally we use the <code>VisualStateManager</code> to transition the UI to the <code>Authenticated</code> state which will greet the user by name.</p>
<h4 id="sign-out">Sign Out</h4>
<pre><code class="language-c#">IEnumerable<IAccount> accounts = await _authenticationClient.GetAccountsAsync();
while (accounts.Any())
{
    await _authenticationClient.RemoveAsync(accounts.First());
    accounts = await _authenticationClient.GetAccountsAsync();
}
</code></pre>
<p>If we're able to sign in then we need to be able to sign out. Unfortunately this process is not quite as slick as the fluent, async methods we used for sign-in and, as described above, does nothing to remove the browser cookies which can be used to transparently re-authenticate. This seems to be the subject of <a href="https://stackoverflow.com/questions/47517434/how-to-sign-out-from-azure-ad-2-0-msal-in-a-desktop-application">much</a> <a href="https://stackoverflow.com/questions/37792244/logout-does-not-work-when-using-microsoft-authentication-library-msal">confusion</a> on both StackOverflow and GitHub, where many of the <a href="https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/issues/589">associated</a> <a href="https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/issues/425">issues</a> have been closed without a satisfactory solution. Any mention of improving the sign-out experience even seems to have disappeared from the <a href="https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/projects/1">MSAL.NET project boards</a>.</p>
<p>Still, the <code>.WithPrompt(Prompt.ForceLogin)</code> workaround resolves the primary issue for now so we're able to just rely on the code above to remove cached tokens.</p>
<h3 id="android-changes">Android Changes</h3>
<p>In order for authentication to succeed on Android we need to modify both the <code>AndroidManifest.xml</code> and <code>MainActivity.cs</code> files.</p>
<h4 id="androidmanifest.xml">AndroidManifest.xml</h4>
<p>In the 'UnoAuth.Droid' project, expand 'Properties' to show the "AndroidManifest.xml" file. Double-click this file to edit it such that it looks similar to the following:</p>
<pre><code class="language-xml"><?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="UnoAuth" android:versionCode="1" android:versionName="1.0">
  <uses-sdk android:minSdkVersion="16" android:targetSdkVersion="29" />
  <application android:label="UnoAuth">
    <activity android:name="microsoft.identity.client.BrowserTabActivity">
      <intent-filter>
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
        <category android:name="android.intent.category.BROWSABLE" />
        <data android:scheme="msal[ClientId]" android:host="auth" />
      </intent-filter>
    </activity>
  </application>
</manifest>
</code></pre>
<p>Make sure you amend the <code>android:scheme</code> value to use the ClientId from your App Registration then save changes and close the file.</p>
<h4 id="mainactivity">MainActivity</h4>
<p>Open the 'MainActivity.cs' file and amend it to include the following:</p>
<pre><code class="language-c#">using Android.App;
using Android.Content;
using Android.Views;
using Microsoft.Identity.Client;

namespace UnoAuth.Droid
{
    [Activity(
        MainLauncher = true,
        ConfigurationChanges = global::Uno.UI.ActivityHelper.AllConfigChanges,
        WindowSoftInputMode = SoftInput.AdjustPan | SoftInput.StateHidden)]
    public class MainActivity : Windows.UI.Xaml.ApplicationActivity
    {
        protected override void OnActivityResult(int requestCode, Result resultCode, Intent data)
        {
            base.OnActivityResult(requestCode, resultCode, data);
            AuthenticationContinuationHelper.SetAuthenticationContinuationEventArgs(requestCode, resultCode, data);
        }
    }
}
</code></pre>
<h3 id="ios-changes">iOS Changes</h3>
<p>As with most things on iOS, the changes needed to make authentication work are a little trickier. We need to change the iOS project properties and both the 'Info.plist' and 'Entitlements.plist' files.</p>
<h4 id="project-properties">Project Properties</h4>
<p>Right-click the iOS project, open its properties, navigate to "iOS Bundle Signing" and select "Manual Provisioning". Next, under 'Additional Resources', make sure the 'Custom Entitlements' setting is set to "Entitlements.plist".</p>
<p>Your iOS Bundle Signing page should now look like this:</p>
<p><a data-fancybox="iosbundlesigning" href="/Content/UnoB2C/iOS Bundle Signing.png"><img src="/Content/UnoB2C/iOS Bundle Signing.png" class="img-responsive" style="margin: auto; max-width:66%; margin-top: 6px; margin-bottom: 6px;" alt="iOS Bundle Signing"/></a></p>
<h4 id="info.plist">Info.plist</h4>
<p>Right click on the 'Info.plist' file in the iOS project and select <code>View Code</code>. At the end of the root <code><dict></code> element add the <code>CFBundleURLTypes</code> key and value shown below (amending the <code>CFBundleURLSchemes</code> value to use the ClientId for your app registration):</p>
<pre><code class="language-xml"><?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>CFBundleDisplayName</key>
    <string>UnoAuth</string>
    <key>CFBundleIdentifier</key>
    <string>com.companyname.UnoAuth</string>
    ...
    <key>CFBundleURLTypes</key>
    <array>
        <dict>
            <key>CFBundleURLName</key>
            <string>MSAL</string>
            <key>CFBundleURLSchemes</key>
            <array>
                <string>msal[ClientID]</string>
            </array>
            <key>CFBundleTypeRole</key>
            <string>None</string>
        </dict>
    </array>
</dict>
</plist>
</code></pre>
<p>Finally copy the <code>CFBundleIdentifier</code> value (in this case <code>com.companyname.UnoAuth</code>) then save and close the file.</p>
<h4 id="entitlements.plist">Entitlements.plist</h4>
<p>Double click on the 'Entitlements.plist' file in the iOS project to open the visual editor. In the 'Entitlements' list select "Keychain" and then tick "Enable Keychain" in the 'Description' section. Finally, paste the bundle identifier you copied from 'Info.plist' into the 'Keychain Groups' text box so it looks like this:</p>
<p><a data-fancybox="entitlementsplist" href="/Content/UnoB2C/Entitlements plist.png"><img src="/Content/UnoB2C/Entitlements plist.png" class="img-responsive" style="margin: auto; max-width:66%; margin-top: 6px; margin-bottom: 6px;" alt="Entitlements plist"/></a></p>
<p>Then save the changes and close the file.</p>
<h2 id="testing">Testing</h2>
<p>Now, if everything is set up correctly, you should be able to use Azure AD B2C and MSAL.NET to authenticate users. Here is UnoAuth running on...</p>
<h3 id="uwp">UWP</h3>
<video class="img-responsive" style="margin: auto; width:80%; margin-top: 6px; margin-bottom: 6px;" controls>
<source src="/Content/UnoB2C/UWP Authentication.mp4" type="video/mp4"/>
Your browser does not support the video tag
</video>
<h3 id="wasm">WASM</h3>
<video class="img-responsive" style="margin: auto; width:80%; margin-top: 6px; margin-bottom: 6px;" controls>
<source src="/Content/UnoB2C/WASM Authentication.mp4" type="video/mp4"/>
Your browser does not support the video tag
</video>
<h3 id="android">Android</h3>
<video class="img-responsive" style="margin: auto; width:40%; margin-top: 6px; margin-bottom: 6px;" controls>
<source src="/Content/UnoB2C/Droid Authentication.mp4" type="video/mp4"/>
Your browser does not support the video tag
</video>
<h3 id="ios">iOS</h3>
<video class="img-responsive" style="margin: auto; width:40%; margin-top: 6px; margin-bottom: 6px;" controls>
<source src="/Content/UnoB2C/iOS Authentication.mp4" type="video/mp4"/>
Your browser does not support the video tag
</video>
<h2 id="conclusion">Conclusion</h2>
<p>As we can see, it is now possible to use Azure AD B2C and MSAL.NET to perform client-side authentication, across multiple platforms, using a single code-base. Furthermore, while a few platform-specific tweaks are required in a couple of the head projects, the code to perform authentication is concise, understandable and shared by all platforms.</p>
<p>While IAM remains a complicated subject (as attested by the length of this post!) I hope the above provides enough information for a reader to quickly get these technologies working together and move on to more engaging parts of their app.</p>
<h2 id="finally">Finally</h2>
<p>If you're interested in using the Uno Platform to deliver cross-platform apps, or have an upcoming project for which you'd like to evaluate Uno Platform's fit, then please feel free to drop me a line using any of the links below or from my <a href="https://ian.bebbs.co.uk/about">about page</a>. As a freelance software developer and remote contractor I'm always interested in hearing from potential new clients or about ideas for new collaborations.</p>
<p>In this post I comprehensively show how apps written using the <a href="https://platform.uno/">Uno Platform</a> can leverage <a href="https://azure.microsoft.com/en-us/services/active-directory/external-identities/b2c/">Azure AD B2C</a> & <a href="https://github.com/AzureAD/microsoft-authentication-library-for-dotnet">MSAL.Net</a> to provide Identity and Access Management services across platforms as diverse as Windows, Android, iOS and the web. As you will see, this combination of technologies provides extremely cheap, simple and flexible identity management functionality that runs from a single code base.</p>http://ian.bebbs.co.uk/posts/BlogMilestoneA Blogging Milestone2020-09-04T00:00:00Z<h2 id="tldr">TL;DR</h2>
<p>Today my blog hit a minor milestone: over 12,000 page views in the last 365 days. That's over a thousand page views a month! While a long way short of other notable tech bloggers (yes, I'm looking at you <a href="https://www.hanselman.com/blog/">Hanselman</a>), I think it's a pretty decent number, particularly when considering the somewhat limited audience for my very targeted content. In this post I provide insights into my "Top 20" posts and my plans for the coming months.</p>
<h2 id="top-20">Top 20</h2>
<h3 id="posts-by-weighted-page-view">Posts By Weighted Page View</h3>
<p>While looking over the blog statistics for the past year, I was very interested to understand which were my "top" posts. Initially I thought total page views for each post would be a good metric but, given older posts will naturally have accrued more hits, I decided to weight total page views by publication date. This gave me the following:</p>
<img src="/Content/BlogMilestone/Weighted Posts.png" class="img-responsive" style="margin: auto; max-width:90%; margin-top: 6px; margin-bottom: 6px;" alt="Weighted Posts"/>
<p>In this chart the outer ring is the Top 20 blog posts based on weighted total page views and the inner ring is the same 20 blog posts based on actual total page views.</p>
<p>The Top 20 are as follows:</p>
<table class="table">
<thead>
<tr>
<th>Post</th>
<th style="text-align: right;">Weighted Views</th>
<th style="text-align: right;">Total Views</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/UnoValue">On the incredible value proposition of .NET & the Uno Platform</a></td>
<td style="text-align: right;">152.19</td>
<td style="text-align: right;">739</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/LessReSTMoreHotChocolate">Less ReST, more Hot Chocolate</a></td>
<td style="text-align: right;">112.25</td>
<td style="text-align: right;">1741</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/UnoLinux">Running UWP on Linux With Uno</a></td>
<td style="text-align: right;">100.57</td>
<td style="text-align: right;">445</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/UnoPi">Running UWP on a Raspberry Pi Using Uno Platform</a></td>
<td style="text-align: right;">88.17</td>
<td style="text-align: right;">348</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/MLinUWP">State-of-the-art ML in UWP</a></td>
<td style="text-align: right;">87.87</td>
<td style="text-align: right;">299</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/BuildingDotNetCore3WithAzurePipelines">Building .NET Core 3.0 With Azure Pipelines</a></td>
<td style="text-align: right;">62.03</td>
<td style="text-align: right;">1138</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/UnoChat">Cross-Platform Real-Time Communication with Uno & SignalR</a></td>
<td style="text-align: right;">56.38</td>
<td style="text-align: right;">401</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/Uno">The Seven GUIs of Christmas</a></td>
<td style="text-align: right;">53.81</td>
<td style="text-align: right;">1028</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/AugmentingTheGenericHost">Augmenting the .NET Core 3.0 Generic Host</a></td>
<td style="text-align: right;">47.2</td>
<td style="text-align: right;">806</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/COduo-Part4">Many platforms, one world - Part 4</a></td>
<td style="text-align: right;">31.36</td>
<td style="text-align: right;">340</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/LightweightRuntimeCompositionForGenericHost">Light-weight run-time composition for the .NET Core 3.0 Generic Host</a></td>
<td style="text-align: right;">28.24</td>
<td style="text-align: right;">488</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/UnoWasmDocker">Uno WebAssembly Containerization</a></td>
<td style="text-align: right;">25.47</td>
<td style="text-align: right;">220</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/NetworkBootingManyRaspberryPis">Network Booting Many Raspberry Pis</a></td>
<td style="text-align: right;">22.61</td>
<td style="text-align: right;">355</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/COduo-Part1">Many platforms, one world - Part 1</a></td>
<td style="text-align: right;">19.54</td>
<td style="text-align: right;">230</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/COduo-Part3">Many platforms, one world - Part 3</a></td>
<td style="text-align: right;">12.12</td>
<td style="text-align: right;">138</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/ReactiveStateMachines">Reactive State Machines</a></td>
<td style="text-align: right;">11.83</td>
<td style="text-align: right;">442</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/UnoWithSwagger">Giving Uno Some Swagger</a></td>
<td style="text-align: right;">11.63</td>
<td style="text-align: right;">105</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/UsingHyperlinkInMVVM">Using a Hyperlink in MVVM</a></td>
<td style="text-align: right;">11.44</td>
<td style="text-align: right;">440</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/COduo-Part2">Many platforms, one world - Part 2</a></td>
<td style="text-align: right;">10.69</td>
<td style="text-align: right;">124</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/Codewars">A Kata for Katas</a></td>
<td style="text-align: right;">10.47</td>
<td style="text-align: right;">98</td>
</tr>
</tbody>
</table>
<p>This Top 20 by weighted total page views accounts for almost 90% of the page views this year and, interestingly, aligns pretty well with my (hypothetical) favourite blog post list.</p>
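<p>For the curious, the date weighting can be sketched roughly as follows. This is a plausible reconstruction rather than the exact formula behind the table above: the idea is simply to discount each post's total views by its age so that older posts don't dominate.</p>
<pre><code class="language-c#">record Post(string Title, DateTime Published, int TotalViews);

// Discount total views by the post's age in years (illustrative only -
// the exact weighting used for the table above isn't specified)
static double WeightedViews(Post post, DateTime now) =>
    post.TotalViews / Math.Max(1.0, (now - post.Published).TotalDays / 365.25);

// Top 20 by weighted views:
// posts.OrderByDescending(p => WeightedViews(p, DateTime.UtcNow)).Take(20)
</code></pre>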
<h3 id="posts-by-weighted-engagement">Posts By Weighted Engagement</h3>
<p>Next I was interested to see whether there was a close correlation between page views and "engagement". In this instance I deemed engagement to be based on the average time spent reading a given page but, given readers will naturally take longer to read lengthier posts, I weighted time spent on the page by word count (excluding posts with fewer than 500 words). Here's what I ended up with:</p>
<table class="table">
<thead>
<tr>
<th>Post</th>
<th style="text-align: right;">Word Count</th>
<th style="text-align: right;">Average Reading Time</th>
<th style="text-align: right;">Weighted Engagement</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/HomeNetworkMonitoring-PartIII">Home Network Monitoring - Part III</a></td>
<td style="text-align: right;">849</td>
<td style="text-align: right;">00:08:08</td>
<td style="text-align: right;">16748</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/ReactiveReadModels">Reactive ReadModels</a></td>
<td style="text-align: right;">1069</td>
<td style="text-align: right;">00:09:02</td>
<td style="text-align: right;">16577</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/UsingSVGInUWP">The absolute easiest way to use SVG icons in UWP apps</a></td>
<td style="text-align: right;">663</td>
<td style="text-align: right;">00:05:59</td>
<td style="text-align: right;">13942</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/DartInVisualStudioCode">Dart web development with Visual Studio Code</a></td>
<td style="text-align: right;">1538</td>
<td style="text-align: right;">00:07:25</td>
<td style="text-align: right;">11347</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/Nano2Docker">Nano2Docker</a></td>
<td style="text-align: right;">1870</td>
<td style="text-align: right;">00:07:33</td>
<td style="text-align: right;">10476</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/TechAdventuresInSustainability-PartI">Tech Adventures in Sustainability</a></td>
<td style="text-align: right;">1364</td>
<td style="text-align: right;">00:05:54</td>
<td style="text-align: right;">9585</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/Codewars">A Kata for Katas</a></td>
<td style="text-align: right;">1979</td>
<td style="text-align: right;">00:07:05</td>
<td style="text-align: right;">9554</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/HomeNetworkMonitoring-PartII">Home Network Monitoring - Part II</a></td>
<td style="text-align: right;">1384</td>
<td style="text-align: right;">00:05:19</td>
<td style="text-align: right;">8575</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/ASmartHome-Part2">A SmartHome... NoT - Part II</a></td>
<td style="text-align: right;">1883</td>
<td style="text-align: right;">00:05:52</td>
<td style="text-align: right;">8112</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/CqrsEsMvvmRxEfSqlUwpPcl">CQRS/ES & MVVM using RX, EF & SQL in UWP & PCL</a></td>
<td style="text-align: right;">1081</td>
<td style="text-align: right;">00:04:24</td>
<td style="text-align: right;">8030</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/UsingATouchOverlayInPortrainOnRaspbian">Using A Touch Overlay, In Portrait, On Raspbian Buster</a></td>
<td style="text-align: right;">620</td>
<td style="text-align: right;">00:03:16</td>
<td style="text-align: right;">7872</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/FluentNamespacing">Fluent Namespacing</a></td>
<td style="text-align: right;">1165</td>
<td style="text-align: right;">00:04:14</td>
<td style="text-align: right;">7442</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/ReactiveBehaviors">Reactive Behaviors</a></td>
<td style="text-align: right;">926</td>
<td style="text-align: right;">00:03:43</td>
<td style="text-align: right;">7328</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/UnoPi">Running UWP on a Raspberry Pi Using Uno Platform</a></td>
<td style="text-align: right;">1808</td>
<td style="text-align: right;">00:04:56</td>
<td style="text-align: right;">6961</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/COduo-Part1">Many platforms, one world - Part 1</a></td>
<td style="text-align: right;">2037</td>
<td style="text-align: right;">00:05:05</td>
<td style="text-align: right;">6769</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/CombiningUwpSpeechSynthesizerWithAudioGraph">Combining the UWP SpeechSynthesizer and AudioGraph APIs</a></td>
<td style="text-align: right;">965</td>
<td style="text-align: right;">00:03:21</td>
<td style="text-align: right;">6470</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/HomeNetworkMonitoring-PartI">Home Network Monitoring - Part I</a></td>
<td style="text-align: right;">2183</td>
<td style="text-align: right;">00:04:58</td>
<td style="text-align: right;">6378</td>
</tr>
<tr>
<td><a href="https://ian.bebbs.co.uk/posts/ASmartHome-Part1">A SmartHome... NoT - Part I</a></td>
<td style="text-align: right;">2187</td>
<td style="text-align: right;">00:04:21</td>
<td style="text-align: right;">5581</td>
</tr>
</tbody>
</table>
<p>Interestingly, this bears little resemblance to the "Top 20 Posts By Weighted Page View" and contains many of my older posts. My guess is that people have come across these posts by actively searching for related keywords. As such, they're likely to have spent longer reading the page in depth, or interactively following the steps therein, as the content relates directly to what they were searching for. In contrast, I imagine a large number of posts in the "Top 20 Posts By Weighted Page View" list are encountered via social media (Twitter) or news aggregators (The Morning Brew, Dotnet Kicks, Dew Drop, etc) and, as such, are read out of idle curiosity rather than specific interest, which might explain the lower "engagement".</p>
<h3 id="tags-by-total-time">Tags By Total Time</h3>
<p>Lastly I was interested to see just how much of the internet's time I've occupied with my myriad ramblings. The chart below shows the total time the internet has spent reading articles on my blog by tag (calculated as <code>average time on page * total views</code>):</p>
<img src="/Content/BlogMilestone/Tags By Total Time.png" class="img-responsive" style="margin: auto; max-width:90%; margin-top: 6px; margin-bottom: 6px;" alt="Tags By Total Time"/>
<p>Somewhat amazingly, over 6 person-days (!!!) have been spent reading articles on my blog about XAML, and over 5 person-days reading articles about <a href="https://platform.uno/">Uno Platform</a>. That's pretty cool - although I very much hope this time has helped developers achieve goals rather than just killing time at work.</p>
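<p>The per-tag totals follow directly from the formula above. A minimal sketch (with illustrative figures, not the exact analytics data):</p>
<pre><code class="language-c#">// Total attention for a tag = average time on page * total views
static TimeSpan TotalTimeOnPage(int totalViews, TimeSpan averageTimeOnPage) =>
    TimeSpan.FromSeconds(totalViews * averageTimeOnPage.TotalSeconds);

// e.g. ~3,000 views at a little over three minutes each works out
// to roughly six days of combined reading time
var xamlTotal = TotalTimeOnPage(3018, TimeSpan.FromSeconds(184));
</code></pre>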
<p>Anyway, here's the full Top 20 (apologies if some of the tag links don't work, I've been a bit inconsistent with casing):</p>
<table class="table">
<thead>
<tr>
<th style="text-align: left;">Tag</th>
<th style="text-align: right;">Total Views</th>
<th style="text-align: right;">Average Time On Page</th>
<th style="text-align: right;">Total Time On Page</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;"><a href="https://ian.bebbs.co.uk/tags/xaml">xaml</a></td>
<td style="text-align: right;">3018</td>
<td style="text-align: right;">00:03:04.4648148</td>
<td style="text-align: right;">6.00:27:37</td>
</tr>
<tr>
<td style="text-align: left;"><a href="https://ian.bebbs.co.uk/tags/uwp">uwp</a></td>
<td style="text-align: right;">3784</td>
<td style="text-align: right;">00:02:18.4156249</td>
<td style="text-align: right;">5.20:28:43</td>
</tr>
<tr>
<td style="text-align: left;"><a href="https://ian.bebbs.co.uk/tags/uno-platform">uno platform</a></td>
<td style="text-align: right;">4271</td>
<td style="text-align: right;">00:02:05.8527777</td>
<td style="text-align: right;">5.13:24:03</td>
</tr>
<tr>
<td style="text-align: left;"><a href="https://ian.bebbs.co.uk/tags/net-core">.net core</a></td>
<td style="text-align: right;">6184</td>
<td style="text-align: right;">00:01:36.6097803</td>
<td style="text-align: right;">4.10:16:13</td>
</tr>
<tr>
<td style="text-align: left;"><a href="https://ian.bebbs.co.uk/tags/android">android</a></td>
<td style="text-align: right;">2414</td>
<td style="text-align: right;">00:02:01.3119047</td>
<td style="text-align: right;">3.00:51:41</td>
</tr>
<tr>
<td style="text-align: left;"><a href="https://ian.bebbs.co.uk/tags/raspberry-pi">raspberry pi</a></td>
<td style="text-align: right;">1263</td>
<td style="text-align: right;">00:02:18.9000000</td>
<td style="text-align: right;">2.13:32:44</td>
</tr>
<tr>
<td style="text-align: left;"><a href="https://ian.bebbs.co.uk/tags/mvvm">mvvm</a></td>
<td style="text-align: right;">440</td>
<td style="text-align: right;">00:07:32</td>
<td style="text-align: right;">2.07:14:40</td>
</tr>
<tr>
<td style="text-align: left;"><a href="https://ian.bebbs.co.uk/tags/rx">rx</a></td>
<td style="text-align: right;">684</td>
<td style="text-align: right;">00:03:11.2500000</td>
<td style="text-align: right;">2.05:01:45</td>
</tr>
<tr>
<td style="text-align: left;"><a href="https://ian.bebbs.co.uk/tags/ios">ios</a></td>
<td style="text-align: right;">1233</td>
<td style="text-align: right;">00:02:16.9366666</td>
<td style="text-align: right;">1.23:31:10</td>
</tr>
<tr>
<td style="text-align: left;"><a href="https://ian.bebbs.co.uk/tags/reactive">reactive</a></td>
<td style="text-align: right;">522</td>
<td style="text-align: right;">00:02:45</td>
<td style="text-align: right;">1.21:19:07</td>
</tr>
<tr>
<td style="text-align: left;"><a href="https://ian.bebbs.co.uk/tags/patterns">patterns</a></td>
<td style="text-align: right;">481</td>
<td style="text-align: right;">00:03:57.6666666</td>
<td style="text-align: right;">1.17:36:22</td>
</tr>
<tr>
<td style="text-align: left;"><a href="https://ian.bebbs.co.uk/tags/state-machines">state machines</a></td>
<td style="text-align: right;">442</td>
<td style="text-align: right;">00:05:24</td>
<td style="text-align: right;">1.15:46:48</td>
</tr>
<tr>
<td style="text-align: left;"><a href="https://ian.bebbs.co.uk/tags/linux">linux</a></td>
<td style="text-align: right;">793</td>
<td style="text-align: right;">00:02:57.9000000</td>
<td style="text-align: right;">1.12:00:19</td>
</tr>
<tr>
<td style="text-align: left;"><a href="https://ian.bebbs.co.uk/tags/rest">rest</a></td>
<td style="text-align: right;">1852</td>
<td style="text-align: right;">00:01:28.7192982</td>
<td style="text-align: right;">1.10:04:10</td>
</tr>
<tr>
<td style="text-align: left;"><a href="https://ian.bebbs.co.uk/tags/nswag">nswag</a></td>
<td style="text-align: right;">1846</td>
<td style="text-align: right;">00:02:02.5789473</td>
<td style="text-align: right;">1.10:02:04</td>
</tr>
<tr>
<td style="text-align: left;"><a href="https://ian.bebbs.co.uk/tags/surface">surface</a></td>
<td style="text-align: right;">832</td>
<td style="text-align: right;">00:02:12.0208333</td>
<td style="text-align: right;">1.06:04:33</td>
</tr>
<tr>
<td style="text-align: left;"><a href="https://ian.bebbs.co.uk/tags/graphql">graphql</a></td>
<td style="text-align: right;">1741</td>
<td style="text-align: right;">00:00:59.1578947</td>
<td style="text-align: right;">1.04:36:34</td>
</tr>
<tr>
<td style="text-align: left;"><a href="https://ian.bebbs.co.uk/tags/dual">dual</a></td>
<td style="text-align: right;">492</td>
<td style="text-align: right;">00:02:43.2500000</td>
<td style="text-align: right;">1.02:27:20</td>
</tr>
<tr>
<td style="text-align: left;"><a href="https://ian.bebbs.co.uk/tags/gui">gui</a></td>
<td style="text-align: right;">1181</td>
<td style="text-align: right;">00:01:22.2500000</td>
<td style="text-align: right;">1.01:20:31</td>
</tr>
<tr>
<td style="text-align: left;"><a href="https://ian.bebbs.co.uk/tags/webassembly">webassembly</a></td>
<td style="text-align: right;">726</td>
<td style="text-align: right;">00:02:06.6166666</td>
<td style="text-align: right;">1.01:08:42</td>
</tr>
</tbody>
</table>
<h2 id="still-being-brave">Still Being Brave</h2>
<p>Just under a year ago I wrote <a href="https://ian.bebbs.co.uk/posts/BeBraveLikeBATMan.html">"Be Brave. Like BAT, man!"</a> about my transition to using the <a href="https://brave.com/beb095">Brave browser</a> and signing up to the <a href="https://publishers.basicattentiontoken.org/">Brave Rewards Creators Program</a>. I'm pleased to say that I'm still using Brave as my default browser on Android (and as an additional browser on PC) and have yet to find a website that doesn't work correctly (or, at least, no worse than it does in Chrome) despite having lots of ads/trackers blocked.</p>
<p>Furthermore the Brave Rewards Creators Program has proven to be <em>relatively</em> lucrative. Here's my current Uphold wallet:</p>
<img src="/Content/BlogMilestone/Uphold.png" class="img-responsive" style="margin: auto; max-width:33%; margin-top: 6px; margin-bottom: 6px;" alt="Uphold"/>
<p>While earnings of £50 (it was actually > £60 just a few days back) certainly aren't going to see me retire any time soon, it's still a decent amount for doing <em>nothing</em>. Indeed, I'd have written these blog posts anyway so any earnings from them are a bonus. Moreover, it's got me into the "crypto game" - effectively for free - and I'm very much enjoying speculating on the relative rise and fall of BAT and BTC using just my Brave Rewards earnings.</p>
<p>Should you be a privacy-conscious person - <a href="https://locusmag.com/2016/09/cory-doctorowthe-privacy-wars-are-about-to-get-a-whole-lot-worse/">and you really ought to be</a> - then I'd definitely recommend dumping Chrome ASAP and moving to a browser not created by a company that makes most of its money from selling data about you. Firefox is a great choice for PC, especially with their <a href="https://addons.mozilla.org/en-US/firefox/addon/multi-account-containers/">Multi-Account Containers</a> extension. On mobile, though, I would definitely recommend Brave due to its additional privacy features which save you time and money.</p>
<p>If you'd like to try Brave browser, then please use my referral link <a href="https://brave.com/beb095">here</a> as it'll net me a few additional BAT.</p>
<h2 id="the-future">The Future</h2>
<p>Writing this blog has been - and continues to be - a fantastic experience. Not only has it put me in contact with loads of brilliant people but it has also fundamentally improved my understanding of many of the technologies I've written about. Or, to quote one of my personal heroes:</p>
<blockquote class="twitter-tweet tw-align-center"><p lang="en" dir="ltr">If you want to master something, teach it. The more you teach, the better you learn. Teaching is a powerful tool to learning. - Richard Feynman <a href="https://twitter.com/hashtag/Math?src=hash&ref_src=twsrc%5Etfw">#Math</a> <a href="https://twitter.com/hashtag/STEM?src=hash&ref_src=twsrc%5Etfw">#STEM</a> <a href="https://t.co/xY3AdtW5EL">pic.twitter.com/xY3AdtW5EL</a></p>— Math Meaning 🧠🚀 (@MathMeaning) <a href="https://twitter.com/MathMeaning/status/1299006155268685824?ref_src=twsrc%5Etfw">August 27, 2020</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
<p>Moving forwards, my hope is to publish at least a couple of new posts each month. I've got loads of interesting projects afoot which should afford me the opportunity to expand on some of the technologies I've already covered (e.g. <a href="https://ian.bebbs.co.uk/tags/uno-platform">Uno platform</a>, <a href="https://ian.bebbs.co.uk/tags/ML">ML</a>, etc) and a whole bunch of new technologies I'm currently interested in (Azure AD B2C authentication and RDF stores/SPARQL queries, to name just a couple). I'm also considering migrating my blog from <a href="https://wyam.io/">Wyam</a> to <a href="https://statiq.dev/">Statiq</a> (both written by the amazing <a href="https://twitter.com/daveaglick">Dave Glick</a>) and will certainly write about the migration if/when it happens.</p>
<h2 id="finally">Finally...</h2>
<p>... a huge thank-you to my readers! I sincerely hope you've all enjoyed reading my blog as much as I've enjoyed writing it. If there's something in particular you'd like me to write about (either expanding on a previous post or something new you feel I might be interested in) then just drop me a line using the links below or from my <a href="https://ian.bebbs.co.uk/about">about page</a>; I'm always happy to make new acquaintances and always interested in new collaborations.</p>
<p>Today my blog hit a minor milestone: over 12,000 page views in the last 365 days. That's over a thousand page views a month! While a long way short of other notable tech bloggers (yes, I'm looking at you <a href="https://www.hanselman.com/blog/">Hanselman</a>), I think it's a pretty decent number, particularly when considering the somewhat limited audience for my very targeted content. In this post I provide insights into my "Top 20" posts and my plans for the coming months.</p>
http://ian.bebbs.co.uk/posts/MLinUWPState-of-the-art ML in UWP2020-08-24T00:00:00Z<h2 id="tldr">TL;DR</h2>
<p>In this post I show how to use a state-of-the-art machine learning model to implement Salient Object Detection and Image Segmentation. I then show how this model can be used to provide local inference capabilities entirely within a UWP app.</p>
<h2 id="intro">Intro</h2>
<p>A while ago I found myself prototyping a UI in which I wanted to show portrait images of people. However, I wanted to remove the background from these portrait images so that they appeared integrated into the UI rather than layered on top of it. For example, something like this Premier League Player of the Month card:</p>
<img src="/Content/MLinUWP/Pukki.png" class="img-responsive" style="margin: auto; max-width:50%; margin-top: 6px; margin-bottom: 6px;" alt="Premier League Player of the Month card"/>
<p>Looking around I came across <a href="https://www.remove.bg/">this website</a> which purported to use "sophisticated AI technology to detect foreground layers and separate them from the background". Intrigued I gave it a go and was shocked at how good the results were. Here's an image of my little girl (endeavouring to learn how to go cross-eyed) followed by the image produced by <a href="https://www.remove.bg/">Remove.bg</a>:</p>
<table>
<tr>
<td><img src="/Content/MLinUWP/CrossEyed-Original.png" class="img-responsive" style="margin: auto; max-width:50%; margin-top: 6px; margin-bottom: 6px;" alt="Original"/></td>
<td><img src="/Content/MLinUWP/CrossEyed-removebg-preview.png" class="img-responsive" style="margin: auto; max-width:50%; margin-top: 6px; margin-bottom: 6px;" alt="Background Removed"/></td>
</tr>
<tr>
<td style="text-align: center">Original</td>
<td style="text-align: center">Background Removed</td>
</tr>
</table>
<p>Very cool and easily integrated using their API.</p>
<p>Unfortunately my use-case required background removal from user supplied content and the project costing probably wouldn't extend to paying for (potentially) thousands of calls a month.</p>
<p>So, like any good hacker, I hit the books to learn how this "sophisticated AI" worked...</p>
<h2 id="salient-object-detection-image-segmentation">Salient Object Detection & Image Segmentation</h2>
<p>A thoroughly enjoyable couple of hours' study commenced, whereupon I learned of the wonders of <a href="https://paperswithcode.com/task/salient-object-detection">Salient Object Detection</a> and <a href="https://towardsdatascience.com/image-segmentation-in-2020-756b77fa88fc">Image Segmentation</a>.</p>
<p>During this research I happened upon <a href="https://github.com/NathanUA/U-2-Net">U²-Net</a>, a very recently published (May 2020) "deep network architecture" for salient object detection. The repository provides everything needed to start using the model, including all weights and even sample code for inference. Moreover, this model had already been used to great effect in the "AR Cut & Paste" demo shown below (<a href="https://twitter.com/cyrildiagne/status/1256916982764646402">link here for Firefox users (like me) who don't see the tweet embedded correctly</a>):</p>
<blockquote class="twitter-tweet tw-align-center"><p lang="en" dir="ltr">4/10 - Cut & paste your surroundings to Photoshop<br><br>Code: <a href="https://t.co/cVddH3u3ik">https://t.co/cVddH3u3ik</a><br><br>Book: <a href="https://twitter.com/HOLOmagazine?ref_src=twsrc%5Etfw">@HOLOmagazine</a><br>Garment: SS17 by <a href="https://twitter.com/thekarentopacio?ref_src=twsrc%5Etfw">@thekarentopacio</a> <br>Type: Sainte Colombe by <a href="https://twitter.com/MinetYoann?ref_src=twsrc%5Etfw">@MinetYoann</a> <a href="https://twitter.com/ProductionType?ref_src=twsrc%5Etfw">@ProductionType</a><br>Technical Insights: ↓<a href="https://twitter.com/hashtag/ML?src=hash&ref_src=twsrc%5Etfw">#ML</a> <a href="https://twitter.com/hashtag/AR?src=hash&ref_src=twsrc%5Etfw">#AR</a> <a href="https://twitter.com/hashtag/AI?src=hash&ref_src=twsrc%5Etfw">#AI</a> <a href="https://twitter.com/hashtag/AIUX?src=hash&ref_src=twsrc%5Etfw">#AIUX</a> <a href="https://twitter.com/hashtag/Adobe?src=hash&ref_src=twsrc%5Etfw">#Adobe</a> <a href="https://twitter.com/hashtag/Photoshop?src=hash&ref_src=twsrc%5Etfw">#Photoshop</a> <a href="https://t.co/LkTBe0t0rF">pic.twitter.com/LkTBe0t0rF</a></p>— Cyril Diagne (@cyrildiagne) <a href="https://twitter.com/cyrildiagne/status/1256916982764646402?ref_src=twsrc%5Etfw">May 3, 2020</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
<p>I was inspired. I wanted that tech in my product. But how?</p>
<h2 id="the-easy-way">The "Easy" Way</h2>
<p>In this day and age we, as developers, are somewhat spoiled. Once you understand enough about what it is you need to be able to ask the right questions, you can almost guarantee that someone out there has already posted the answers. This is very much why I endeavour to document my continued learnings on this blog; a sort of "pay-it-forward" thank-you for all the myriad blogs and SO answers I've benefitted from across the years.</p>
<p>Anyway, once I knew I wanted to use U²-Net, it didn't take me long to find <a href="https://hub.docker.com/r/luukio/u2net-bg-removal">a docker image</a> which provided an HTTP endpoint for performing U²-Net inference on a supplied image, returning that image with its background removed. Perfecto!</p>
<p>Unfortunately, trying to run this docker image resulted in an error. Looking at the <a href="https://github.com/ideo/bg-removal-with-u2net/blob/master/Dockerfile">Dockerfile</a> in the associated <a href="https://github.com/ideo/bg-removal-with-u2net">Github repository</a> explained why: the <a href="https://pytorch.org/">PyTorch</a> image on which this docker image was based expected CUDA hardware to be available. As I was running on Windows, with the docker container running within WSL2 (<a href="https://docs.microsoft.com/en-us/windows/win32/direct3d12/gpu-cuda-in-wsl">and didn't want to go back to running a Fast Ring build</a>), this docker image was of little direct use.</p>
<p>However, given the Dockerfile provided a good breakdown of all the software required to get U²-Net running, it wasn't rocket science (but perhaps artificial brain surgery?) to write a new Dockerfile which limited PyTorch to CPU-only inference. Spinning this up provided me with a local (and free!) endpoint which could take my original sample image and return one with the background removed, as shown below:</p>
<table>
<tr>
<td><img src="/Content/MLinUWP/CrossEyed-Original.png" class="img-responsive" style="margin: auto; max-width:50%; margin-top: 6px; margin-bottom: 6px;" alt="Original"/></td>
<td><img src="/Content/MLinUWP/CrossEyed-u2net-local.png" class="img-responsive" style="margin: auto; max-width:50%; margin-top: 6px; margin-bottom: 6px;" alt="Background Removed"/></td>
</tr>
<tr>
<td style="text-align: center">Original</td>
<td style="text-align: center">U²-Net Result</td>
</tr>
</table>
<p>For some this would be enough and, accordingly, I pushed the <a href="https://hub.docker.com/r/ibebbs/u2net-http">docker image</a> and <a href="https://github.com/ibebbs/U2Net-cpu-HTTP">associated repository</a> for others to use (please star them should you find them useful/helpful).</p>
<p>But... why pay for docker instance hosting when my users could perform the inference on their own machines from within a UWP app?</p>
<h2 id="the-hard-way">The "Hard" Way</h2>
<p>For those that are unaware, Windows actually ships with strong support for machine learning in UWP via the <a href="https://docs.microsoft.com/en-us/uwp/api/windows.ai.machinelearning?view=winrt-19041">Windows.AI.MachineLearning</a> namespace. Using the types provided here, a developer is able to load and perform inference with ONNX (Open Neural Network eXchange) models (up to version 1.4, opset 9) in a (relatively) straightforward manner.</p>
<p>However, in accordance with their strategy of decoupling core technologies from releases of the OS, Microsoft have recently shifted development toward the <a href="https://github.com/Microsoft/onnxruntime">open-source</a> <a href="https://www.nuget.org/packages/Microsoft.AI.MachineLearning">Microsoft.AI.MachineLearning</a> nuget package. This package can be installed on any recent build of Windows (I believe back to build 18362) and provides compatibility with the very latest ONNX models (version 1.7, opset 12).</p>
<p>Given that PyTorch (the ML framework used for U²-Net) has strong support for exporting to ONNX, my challenge was clear:</p>
<ol>
<li>Export a fully weighted U²-Net model from PyTorch to ONNX.</li>
<li>Use the Microsoft.AI.MachineLearning package to load the ONNX model.</li>
<li>Write code to process a source image into U²-Net's input tensor.</li>
<li>Use the ONNX model to perform inference on the input image.</li>
<li>Write code to process a result image using U²-Net's output tensor as an alpha channel.</li>
<li>Test</li>
</ol>
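<p>Of these steps, numbers 3 and 5 involve the most byte bashing. As a taster, step 5 - using the output tensor as an alpha mask on the source image - can be sketched in a few lines of pure Python (a hypothetical illustration only; the app performs the equivalent work in C#, and the names here are mine):</p>
<pre><code class="language-python">WIDTH = HEIGHT = 320

def apply_mask_as_alpha(rgba, mask):
    """rgba: interleaved RGBA bytes (4 per pixel); mask: WIDTH*HEIGHT floats in roughly 0..1."""
    result = bytearray(rgba)
    for i, value in enumerate(mask):
        clamped = min(max(value, 0.0), 1.0)     # network output can stray slightly outside 0..1
        result[i * 4 + 3] = int(clamped * 255)  # overwrite this pixel's alpha byte
    return result

# A zero mask makes every pixel fully transparent; a ones mask leaves the image opaque.
opaque = bytearray([255] * (WIDTH * HEIGHT * 4))
cut_out = apply_mask_as_alpha(opaque, [0.0] * (WIDTH * HEIGHT))
</code></pre>
<p>Note that the colour bytes are left untouched; only the alpha channel changes, which is what lets a backdrop show through wherever the model considers a pixel to be background.</p>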
<p>Now, while none of these tasks are super-difficult, you will need to be fairly analytical as they involve interpreting Python code (along with lots of Python packages) and byte bashing pixel data to/from 4 dimensional arrays.</p>
<h3 id="exporting-from-pytorch-to-onnx">Exporting from PyTorch to ONNX</h3>
<p>Given we already have a docker image that has everything needed to perform inference using U²-Net, I am going to use this image to export the ONNX model. This can be achieved by running the docker image and overriding the entry-point such that we get access to a command prompt; like so:</p>
<pre><code>docker run -it --entrypoint /bin/bash ibebbs/u2net-http
</code></pre>
<p>Once we have access to a command prompt within the container, we can use Python interactively to load and export the ONNX model. So, from the container's command prompt, start Python (in the <code>U-2-Net</code> directory) by running:</p>
<pre><code>cd U-2-Net
python3
</code></pre>
<p>This will land you at the Python command prompt <code>>>></code> from which we can follow the steps in <a href="https://github.com/ibebbs/U2Net-cpu-HTTP/blob/master/u2net.py"><code>u2net.py</code></a> to load the model as shown below (many of these imports are unnecessary but it was just easier to include them all):</p>
<pre><code>import sys
sys.path.insert(0, 'U-2-Net')
from skimage import io, transform
import torch
import torchvision
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
from torch.utils.data import Dataset, DataLoader
import numpy as np
from PIL import Image
from data_loader import RescaleT
from data_loader import ToTensorLab
from model import U2NET
model_dir = './saved_models/u2net/u2net.pth'
net = U2NET(3, 1)
net.load_state_dict(torch.load(model_dir, map_location=torch.device('cpu')))
</code></pre>
<p>At this point we have the <code>net</code> variable loaded with the U²-Net architecture and weights from "u2net.pth". Now we need to export this variable as an ONNX model.</p>
<p>Fortunately, PyTorch has some excellent documentation for exporting ONNX (for example <a href="https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html?highlight=onnx">here</a> and <a href="https://pytorch.org/docs/master/onnx.html">here</a>) which made exporting the model fairly trivial:</p>
<pre><code>import torch.onnx
dummy_input = torch.randn(1, 3, 320, 320, device='cpu')
input_names = [ "input" ]
output_names = [ "o0", "o1", "o2", "o3", "o4", "o5", "o6" ]
torch.onnx.export(net, dummy_input, "u2net.onnx", export_params=True, opset_version=12, input_names=input_names, output_names=output_names)
</code></pre>
<p>Here we create a random dummy input, name the input and output tensors and then export the model to ONNX using the latest operator set (<code>opset_version=12</code>).</p>
<p>This will take a few seconds and you might see a few warnings about various functions having been deprecated but, once complete, if you exit interactive Python (using <code>exit()</code>) and return to the container's command prompt, you should be able to see a "u2net.onnx" file in the directory as shown below:</p>
<pre><code>>>> exit()
root@88fa6881c8ea:/app/U-2-Net# dir
LICENSE __pycache__ figures saved_models u2net.onnx u2net_train.py
README.md data_loader.py model test_data u2net_test.py
</code></pre>
<p>You now need to extract the "u2net.onnx" file from the container. There are many ways to do this; for me the easiest was to use <a href="https://en.wikipedia.org/wiki/Secure_copy_protocol">"secure copy"</a> to transfer the file to my machine, but do whatever is easiest for you.</p>
<h3 id="load-the-onnx-model-from-a-uwp-app">Load the ONNX model from a UWP app</h3>
<p>With the "u2net.onnx" model in hand, we're now going to use the <code>Microsoft.AI.MachineLearning</code> package to load the model in preparation for running inference.</p>
<p>Before we create the UWP app though, we're going to install the <a href="https://marketplace.visualstudio.com/items?itemName=WinML.mlgenv2">"Windows Machine Learning Code Generator"</a> extension which automatically scaffolds code for interacting with an ONNX model and makes getting started with ML super-easy. So, start VS and install the extension before continuing to the next step (if you're not using VS or would prefer not to install the extension, you can simply copy the file generated in the next step from <a href="https://github.com/ibebbs/UwpMl/blob/master/UwpMl/u2net.cs">here</a>).</p>
<p>Now, from Visual Studio and with the extension installed, create a new UWP project - I named mine "UwpMl" - and add the "u2net.onnx" model to the Assets folder. As you do so, you should see that a "u2net.cs" file is also added to the project thanks to the "Windows Machine Learning Code Generator" extension. Opening this file should show class definitions similar to the following:</p>
<pre><code class="language-c#">// This file was automatically generated by VS extension Windows Machine Learning Code Generator v3
// from model file u2net.onnx
// Warning: This file may get overwritten if you add add an onnx file with the same name
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Windows.Media;
using Windows.Storage;
using Windows.Storage.Streams;
using Windows.AI.MachineLearning;

namespace UwpMl
{
    public sealed class u2netInput
    {
        public TensorFloat input; // shape(1,3,320,320)
    }

    public sealed class u2netOutput
    {
        public TensorFloat o0; // shape(1,1,320,320)
        public TensorFloat o1; // shape(1,1,320,320)
        public TensorFloat o2; // shape(1,1,320,320)
        public TensorFloat o3; // shape(1,1,320,320)
        public TensorFloat o4; // shape(1,1,320,320)
        public TensorFloat o5; // shape(1,1,320,320)
        public TensorFloat o6; // shape(1,1,320,320)
    }

    public sealed class u2netModel
    {
        private LearningModel model;
        private LearningModelSession session;
        private LearningModelBinding binding;

        public static async Task<u2netModel> CreateFromStreamAsync(IRandomAccessStreamReference stream)
        {
            u2netModel learningModel = new u2netModel();
            learningModel.model = await LearningModel.LoadFromStreamAsync(stream);
            learningModel.session = new LearningModelSession(learningModel.model);
            learningModel.binding = new LearningModelBinding(learningModel.session);
            return learningModel;
        }

        public async Task<u2netOutput> EvaluateAsync(u2netInput input)
        {
            binding.Bind("input", input.input);
            var result = await session.EvaluateAsync(binding, "0");
            var output = new u2netOutput();
            output.o0 = result.Outputs["o0"] as TensorFloat;
            output.o1 = result.Outputs["o1"] as TensorFloat;
            output.o2 = result.Outputs["o2"] as TensorFloat;
            output.o3 = result.Outputs["o3"] as TensorFloat;
            output.o4 = result.Outputs["o4"] as TensorFloat;
            output.o5 = result.Outputs["o5"] as TensorFloat;
            output.o6 = result.Outputs["o6"] as TensorFloat;
            return output;
        }
    }
}
</code></pre>
<p>Well, there you go. By just adding the "onnx" file to the project, we now have a "u2netModel" which is able to load the model (<code>CreateFromStreamAsync</code>) and use it to perform inference (<code>EvaluateAsync</code>).</p>
<p>However, we should note the <code>using Windows.AI.MachineLearning;</code> line. As discussed earlier, the "Windows.AI.MachineLearning" namespace is included as part of recent builds of Windows and, while it allows us to use ONNX models without any additional packages, it only supports ONNX models up to version 1.4 (opset 9). Given we exported the ONNX model for U²-Net using opset 12, we need to migrate to the more recent "Microsoft.AI.MachineLearning" package. Fortunately this is very straightforward and simply involves installing the <a href="https://www.nuget.org/packages/Microsoft.AI.MachineLearning/">"Microsoft.AI.MachineLearning" nuget package</a> into the project then changing the above <code>using</code> clause to <code>using Microsoft.AI.MachineLearning;</code>. Everything else remains the same.</p>
<p>Next we'll implement a UI which will allow us to load the image on which we want to perform inference and which will display both input and output images. For simplicity, we'll eschew MVVM and use the code-behind file for "MainPage" to implement this functionality.</p>
<p>So, in "MainPage.xaml", add the following:</p>
<pre><code class="language-xaml"><Page
    x:Class="UwpMl.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:local="using:UwpMl"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    mc:Ignorable="d"
    Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition Height="0.6*"/>
            <RowDefinition Height="0.3*"/>
            <RowDefinition Height="Auto"/>
        </Grid.RowDefinitions>
        <Grid.ColumnDefinitions>
            <ColumnDefinition Width="0.5*"/>
            <ColumnDefinition Width="0.5*"/>
        </Grid.ColumnDefinitions>
        <Image Grid.Column="0" Source="/Assets/Checkerboard.png" Width="320" Height="320" Stretch="UniformToFill" HorizontalAlignment="Center" VerticalAlignment="Center" />
        <Image Grid.Column="0" x:Name="sourceImage" Stretch="None" HorizontalAlignment="Center" VerticalAlignment="Center" />
        <Image Grid.Column="1" Source="/Assets/Checkerboard.png" Width="320" Height="320" Stretch="UniformToFill" HorizontalAlignment="Center" VerticalAlignment="Center" />
        <Image Grid.Column="1" x:Name="targetImage" Stretch="None" HorizontalAlignment="Center" VerticalAlignment="Center" />
        <ScrollViewer Grid.Row="1" Grid.ColumnSpan="2" HorizontalScrollMode="Auto" HorizontalScrollBarVisibility="Auto" VerticalScrollMode="Disabled" VerticalScrollBarVisibility="Hidden">
            <StackPanel Orientation="Horizontal">
                <Image x:Name="o6" Grid.Row="6" Stretch="Uniform" HorizontalAlignment="Center" VerticalAlignment="Center" />
                <Image x:Name="o5" Grid.Row="5" Stretch="Uniform" HorizontalAlignment="Center" VerticalAlignment="Center" />
                <Image x:Name="o4" Grid.Row="4" Stretch="Uniform" HorizontalAlignment="Center" VerticalAlignment="Center" />
                <Image x:Name="o3" Grid.Row="3" Stretch="Uniform" HorizontalAlignment="Center" VerticalAlignment="Center" />
                <Image x:Name="o2" Grid.Row="2" Stretch="Uniform" HorizontalAlignment="Center" VerticalAlignment="Center" />
                <Image x:Name="o1" Grid.Row="1" Stretch="Uniform" HorizontalAlignment="Center" VerticalAlignment="Center" />
            </StackPanel>
        </ScrollViewer>
        <StackPanel Grid.Row="2" Orientation="Horizontal" HorizontalAlignment="Center" Margin="4" Grid.ColumnSpan="2">
            <Button Content="Go!" Padding="32,16" Margin="4" Click="Button_Click"/>
        </StackPanel>
    </Grid>
</Page>
</code></pre>
<p>Here you'll see that we add an <code>Image</code> named "sourceImage" which is used to display the input image and another <code>Image</code> named "targetImage" which is used to display the output. Behind these images I add additional <code>Image</code> elements which display a checkerboard pattern; this is to demonstrate opacity in the target image and is completely optional but should you wish to display these you can find the "Checkerboard.png" file <a href="https://github.com/ibebbs/UwpMl/blob/master/UwpMl/Assets/Checkerboard.png">here</a>.</p>
<p>Underneath these images I add a horizontally oriented <code>StackPanel</code> containing further <code>Image</code> elements. These are used to display the intermediate results of the U²-Net architecture which I found very useful for debugging but again is completely optional as it has no bearing on the final output.</p>
<p>Finally, in the bottom row of the UI we have a <code>StackPanel</code> containing a singular <code>Button</code> displaying the content "Go!". This button will be used to load and display an image, perform inference and, finally, display the output image. We'll use the "Click" event to invoke this functionality in the "MainPage.xaml.cs" file as shown below:</p>
<pre><code class="language-c#">using Microsoft.AI.MachineLearning;
using System;
using System.Threading.Tasks;
using Windows.Graphics.Imaging;
using Windows.Storage;
using Windows.Storage.Streams;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Media.Imaging;

namespace UwpMl
{
    /// <summary>
    /// An empty page that can be used on its own or navigated to within a Frame.
    /// </summary>
    public sealed partial class MainPage : Page
    {
        public MainPage()
        {
            this.InitializeComponent();
        }

        private async void Button_Click(object sender, RoutedEventArgs e)
        {
            // Use Picker to get file
            var file = await GetImageFile();

            SoftwareBitmap softwareBitmap;
            byte[] bytes;

            // Load image & scale to tensor input dimensions
            using (IRandomAccessStream stream = await file.OpenAsync(FileAccessMode.Read))
            {
                bytes = await GetImageAsByteArrayAsync(stream, 320, 320, BitmapPixelFormat.Rgba8);
                softwareBitmap = await GetImageAsSoftwareBitmapAsync(stream, 320, 320, BitmapPixelFormat.Bgra8);
            }

            // Display source image
            var source = new SoftwareBitmapSource();
            await source.SetBitmapAsync(softwareBitmap);
            sourceImage.Source = source;

            // Convert rgba-rgba-...-rgba to bb...b-rr...r-gg...g as colour weighted tensor (0..1)
            var input = TensorFloat.CreateFromIterable(new long[] { 1, 3, 320, 320 }, TensorBrg(bytes));

            // Load model & perform inference
            StorageFile modelFile = await StorageFile.GetFileFromApplicationUriAsync(new Uri($"ms-appx:///Assets/u2net.onnx"));
            u2netModel model = await u2netModel.CreateFromStreamAsync(modelFile);
            u2netOutput output = await model.EvaluateAsync(new u2netInput { input = input });

            // Display intermediate results
            await ToImage(output.o6, o6);
            await ToImage(output.o5, o5);
            await ToImage(output.o4, o4);
            await ToImage(output.o3, o3);
            await ToImage(output.o2, o2);
            await ToImage(output.o1, o1);

            // Display final result using the tensor as alpha mask on source image
            await ToImage(bytes, output.o0, targetImage);
        }
    }
}
</code></pre>
<p>As you can see, quality has been traded for clarity here to ensure the flow of how an image is retrieved and passed to a <code>u2netmodel</code> instance is clear. Pasting this code into "MainPage.xaml.cs" will give you a bunch of red squigglies indicating undefined methods which we'll implement next, starting with the easy bits:</p>
<h4 id="getimagefile">GetImageFile</h4>
<pre><code class="language-c#">private async Task<StorageFile> GetImageFile()
{
    var picker = new Windows.Storage.Pickers.FileOpenPicker();
    picker.ViewMode = Windows.Storage.Pickers.PickerViewMode.Thumbnail;
    picker.SuggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.PicturesLibrary;
    picker.FileTypeFilter.Add(".jpg");
    picker.FileTypeFilter.Add(".jpeg");
    picker.FileTypeFilter.Add(".png");

    var file = await picker.PickSingleFileAsync();

    return file;
}
</code></pre>
<p>This code uses a file picker to allow the user to select the source image.</p>
<h4 id="getimageassoftwarebitmapasync">GetImageAsSoftwareBitmapAsync</h4>
<pre><code class="language-c#">private async Task<SoftwareBitmap> GetImageAsSoftwareBitmapAsync(IRandomAccessStream stream, uint width, uint height, BitmapPixelFormat pixelFormat)
{
    BitmapDecoder decoder = await BitmapDecoder.CreateAsync(stream);

    var transform = new BitmapTransform() { ScaledWidth = width, ScaledHeight = height, InterpolationMode = BitmapInterpolationMode.NearestNeighbor };

    var softwareBitmap = await decoder.GetSoftwareBitmapAsync(pixelFormat, BitmapAlphaMode.Premultiplied, transform, ExifOrientationMode.IgnoreExifOrientation, ColorManagementMode.DoNotColorManage);

    return softwareBitmap;
}
</code></pre>
<p>This code loads an image from the specified <code>IRandomAccessStream</code> and uses a <code>BitmapTransform</code> and a <code>BitmapPixelFormat</code> to transform the source image to the desired size and pixel format for displaying in the UI. Finally it returns a <code>SoftwareBitmap</code> which can be conveniently displayed.</p>
<h4 id="getimageasbytearrayasync">GetImageAsByteArrayAsync</h4>
<pre><code class="language-c#">private async Task<byte[]> GetImageAsByteArrayAsync(IRandomAccessStream stream, uint width, uint height, BitmapPixelFormat pixelFormat)
{
    BitmapDecoder decoder = await BitmapDecoder.CreateAsync(stream);

    var transform = new BitmapTransform() { ScaledWidth = width, ScaledHeight = height, InterpolationMode = BitmapInterpolationMode.NearestNeighbor };

    var data = await decoder.GetPixelDataAsync(pixelFormat, BitmapAlphaMode.Premultiplied, transform, ExifOrientationMode.IgnoreExifOrientation, ColorManagementMode.DoNotColorManage);

    return data.DetachPixelData();
}
</code></pre>
<p>This code loads an image from the specified <code>IRandomAccessStream</code> and uses a <code>BitmapTransform</code> and a <code>BitmapPixelFormat</code> to transform the source image to the desired size and pixel format for convenient translation into our ONNX model. Finally it returns a <code>byte[]</code> representing the transformed image.</p>
<p>Now for the tricky bits...</p>
<h3 id="transform-the-source-image-into-u2-nets-input-tensor">Transform the source image into U²-Net's input tensor.</h3>
<p>To perform inference, the input image needs to be translated into a "Tensor". Don't let the terminology scare you here: a "tensor" is simply a multi-dimensional array of floating point numbers with a defined "shape" (i.e. the size of each dimension). We can see the desired "shape" of the input tensor by looking at the <code>u2netInput</code> class, which contains the following:</p>
<pre><code class="language-c#">public sealed class u2netInput
{
public TensorFloat input; // shape(1,3,320,320)
}
</code></pre>
<p>In case it's not apparent, the sizes of these dimensions relate to the number of values per pixel (3 - red, green & blue) along with the height (320 pixels) and width (320 pixels) of the source image. We needn't worry about the initial dimension which - in this instance - is simply the "batch" dimension and will always have a size of 1.</p>
<p>Now, while translating our input image into this tensor, it's important to ensure we provide the tensor values in the format/order the underlying model expects them. Specifically here we must:</p>
<ol>
<li>Provide multiple greyscale images<br />
Given the shape of this input tensor - (3, 320, 320) - we can see the model is expecting 3 greyscale images, sized 320x320 apiece, with each greyscale image calculated from one of the input image's colour channels. Furthermore, careful examination of <a href="https://github.com/NathanUA/U-2-Net/blob/0b27f5cc958bac88825b1001f8245f663faeb1b8/data_loader.py#L218"><code>data_loader.py</code></a> shows that the model expects these images in blue, red, green order. This means that our (scaled) input image needs to be translated such that the index [1,1,1] - which would ordinarily return the red component of the top-left pixel - returns the blue component of that pixel instead, while the index [2,1,1] - which would ordinarily return the green component of the top-left pixel - instead returns its red component. And so on and so forth.</li>
<li>"Normalize" pixel values<br />
Pixels in our input image are in the Rgba8 format (as shown in the call to <code>GetImageAsByteArrayAsync</code>), meaning each pixel is composed of 4 channels (red, green, blue and alpha) and each channel is represented by a single byte ranging in value from 0 to 255. Each of these pixel values needs to be scaled to a value between 0 and 1 and then "normalized" by subtracting a channel-specific mean and dividing by a <a href="https://github.com/NathanUA/U-2-Net/blob/0b27f5cc958bac88825b1001f8245f663faeb1b8/data_loader.py#L212">channel-specific divisor</a>.</li>
</ol>
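<p>To make the index gymnastics above concrete, here's a small sketch. These helpers are illustrative only and aren't part of the sample project; the mean and divisor constants are those used in <code>data_loader.py</code>:</p>
<pre><code class="language-c#">// Illustrative only - not part of the sample project.
// Tensor channel order is blue, red, green; in the source Rgba8 byte
// stream those channels sit at byte offsets 2, 0 and 1 respectively.
static readonly int[] SourceOffset = { 2, 0, 1 };
static readonly double[] Mean = { 0.406, 0.485, 0.456 };
static readonly double[] Divisor = { 0.225, 0.229, 0.224 };

// Flat index of (channel c, row y, column x) within the row-major
// (1, 3, 320, 320) tensor.
static int TensorIndex(int c, int y, int x) => (c * 320 + y) * 320 + x;

// Normalised tensor value for channel c at pixel (y, x).
static float TensorValue(byte[] rgba, int c, int y, int x)
{
    byte raw = rgba[(y * 320 + x) * 4 + SourceOffset[c]];
    return (float)(((raw / 255.0) - Mean[c]) / Divisor[c]);
}
</code></pre>
<p>Walking <code>c</code>, <code>y</code> and <code>x</code> over their ranges in that order produces exactly the value sequence the method below emits.</p>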
<p>I implement these considerations in the <code>TensorBrg</code> method as shown here:</p>
<pre><code class="language-c#">public IEnumerable<float> TensorBrg(byte[] bytes)
{
// Original in rgb (0,1,2), we want brg(2,0,1)
// Return the blue channel
for (int i = 2; i < bytes.Length; i += 4)
{
var b = Convert.ToSingle(((bytes[i] / 255.0) - 0.406) / 0.225);
yield return b;
}
// Return the red channel
for (int i = 0; i < bytes.Length; i += 4)
{
var r = Convert.ToSingle(((bytes[i] / 255.0) - 0.485) / 0.229);
yield return r;
}
// Return the green channel
for (int i = 1; i < bytes.Length; i += 4)
{
var g = Convert.ToSingle(((bytes[i] / 255.0) - 0.456) / 0.224);
yield return g;
}
}
</code></pre>
<p>This method uses the <code>yield return</code> statement to return the result of the mapping as an <code>IEnumerable&lt;float&gt;</code>, thereby avoiding the need for an intermediate buffer. It is used as follows:</p>
<pre><code class="language-c#">// Convert rgba-rgba-...-rgba to bb...b-rr...r-gg...g as a normalised tensor
TensorFloat input = TensorFloat.CreateFromIterable(new long[] { 1, 3, 320, 320 }, TensorBrg(bytes));
</code></pre>
<h3 id="perform-inference">Perform inference</h3>
<p>Now that we have a tensor of the expected shape containing the expected values, we can use our ONNX model to perform inference. This is - thanks to the "Windows Machine Learning Code Generator" - extremely easy, as shown below:</p>
<pre><code class="language-c#">// Load model & perform inference
StorageFile modelFile = await StorageFile.GetFileFromApplicationUriAsync(new Uri($"ms-appx:///Assets/u2net.onnx"));
u2netModel model = await u2netModel.CreateFromStreamAsync(modelFile);
u2netOutput output = await model.EvaluateAsync(new u2netInput { input = input });
</code></pre>
<p>And that's it. We have successfully performed inference with a state-of-the-art machine learning model via ONNX. Just one small thing left... interpreting the results.</p>
<h3 id="transform-the-source-image-into-target-image-using-u2-nets-output-tensor-as-an-alpha-channel">Transform the source image into target image using U²-Net's output tensor as an alpha channel.</h3>
<p>We have two methods left to implement: <code>ToImage</code> & <code>ToBlendedImage</code>. The first takes an output tensor and converts it to a greyscale image. It is applied to the "intermediate" output tensors to show the progression towards the result, really just for debugging purposes or out of interest. The code is shown here:</p>
<pre><code class="language-c#">private async Task ToImage(TensorFloat tensorFloat, Image image)
{
var pixels = tensorFloat
.GetAsVectorView()
.SelectMany(
f =>
{
byte v = Convert.ToByte(f * 255);
return new byte[] { v, v, v, 255 };
})
.ToArray();
var writeableBitmap = new WriteableBitmap(320, 320);
// Open a stream to copy the image contents to the WriteableBitmap's pixel buffer
using (Stream stream = writeableBitmap.PixelBuffer.AsStream())
{
await stream.WriteAsync(pixels, 0, pixels.Length);
}
var dest = SoftwareBitmap.CreateCopyFromBuffer(writeableBitmap.PixelBuffer, BitmapPixelFormat.Bgra8, 320, 320, BitmapAlphaMode.Premultiplied);
var destSource = new SoftwareBitmapSource();
await destSource.SetBitmapAsync(dest);
image.Source = destSource;
}
</code></pre>
<p>Conversely, <code>ToBlendedImage</code> composes our desired output image by using the final output tensor of the U²-Net model as both a mask and an alpha channel for the input image. This is shown below:</p>
<pre><code class="language-c#">private IEnumerable<byte> ApplyTensorAsMask(byte[] data, TensorFloat tensorFloat, float cutoff)
{
var tensorData = tensorFloat.GetAsVectorView().ToArray();
for (int i = 0; i < data.Length; i += 4)
{
var alpha = Math.Clamp(tensorData[i / 4], 0, 1);
if (alpha > cutoff)
{
yield return Convert.ToByte(data[i + 2] * alpha);
yield return Convert.ToByte(data[i + 1] * alpha);
yield return Convert.ToByte(data[i + 0] * alpha);
yield return Convert.ToByte(alpha * 255);
}
else
{
yield return 0;
yield return 0;
yield return 0;
yield return 0;
}
}
}
private async Task ToBlendedImage(byte[] data, TensorFloat tensorFloat, Image target)
{
var image = ApplyTensorAsMask(data, tensorFloat, 0.0f).ToArray();
var writeableBitmap = new WriteableBitmap(320, 320);
// Open a stream to copy the image contents to the WriteableBitmap's pixel buffer
using (Stream stream = writeableBitmap.PixelBuffer.AsStream())
{
await stream.WriteAsync(image, 0, image.Length);
}
var dest = SoftwareBitmap.CreateCopyFromBuffer(writeableBitmap.PixelBuffer, BitmapPixelFormat.Bgra8, 320, 320, BitmapAlphaMode.Premultiplied);
var destSource = new SoftwareBitmapSource();
await destSource.SetBitmapAsync(dest);
target.Source = destSource;
}
</code></pre>
<p>With these methods implemented, there should be no more squigglies in our <code>MainPage.xaml.cs</code> and we should be able to successfully compile and run the project.</p>
<h3 id="testing">Testing</h3>
<p>Run the project and click the "Go!" button. While you are free to use any source image you like to test the code above, I would suggest that, when prompted for a source image, you use one of the <a href="https://github.com/NathanUA/U-2-Net/tree/master/test_data/test_images">test images provided by U²-Net</a>; in the screenshot below I've used <a href="https://github.com/NathanUA/U-2-Net/blob/master/test_data/test_images/bike.jpg">bike.jpg</a>.</p>
<p>After selecting the image, it will be scaled and displayed in the UI before inference is performed and the output images are displayed. It should only take a few seconds for the output image to appear and, of this time, inference via the ONNX model should - depending on your hardware - account for less than a second. This shows that there is significant potential for optimisation in the preparation of the input tensor and the processing of the output tensor but, given <a href="https://stackify.com/premature-optimization-evil/">premature optimisation is the root of all evil</a>, I didn't attempt to optimise these processes and instead just focused on getting the solution running.</p>
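<p>Should optimisation ever become necessary, one obvious candidate - offered as an unmeasured sketch only - is to note that each channel byte can only take 256 values, so the normalised results can be precomputed into per-channel lookup tables rather than performing the arithmetic for every pixel:</p>
<pre><code class="language-c#">// Sketch only: per-channel lookup tables of the 256 possible
// normalised values, built once and indexed per pixel thereafter.
using System.Linq;

static float[] BuildLut(double mean, double divisor) =>
    Enumerable.Range(0, 256)
        .Select(v => (float)(((v / 255.0) - mean) / divisor))
        .ToArray();

static readonly float[] BlueLut = BuildLut(0.406, 0.225);

// ...inside TensorBrg, the blue loop's body then becomes:
// yield return BlueLut[bytes[i]];
</code></pre>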
<p>Anyway, once processing is complete, you should see something similar to this:</p>
<img src="/Content/MLinUWP/UWP Background Removal.png" class="img-responsive" style="margin: auto; max-width:80%; margin-top: 6px; margin-bottom: 6px;" alt="UWP Background Removal"/>
<p>Nice! Let's compare it to the docker produced image of my girl above:</p>
<img src="/Content/MLinUWP/CrossEyed-u2net-UWP.png" class="img-responsive" style="margin: auto; max-width:80%; margin-top: 6px; margin-bottom: 6px;" alt="CrossEyed Background Removal from UWP"/>
<p>That is pretty good: just as fast as the Docker solution and no internet connection required. Sweet!</p>
<h1 id="bonus">Bonus</h1>
<p>Now we're able to remove backgrounds using a state-of-the-art machine learning model both in and out of process, let's revisit the "Premier League Player of the Month" to see if we can easily create one of our own. Quickly combining the following XAML with a processed image of my boy gives us:</p>
<pre><code class="language-xaml"><Viewbox>
<Canvas xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" x:Name="Layer_3_0" Width="640.089" Height="896.125" Canvas.Left="0" Canvas.Top="0">
<Path Width="594.812" Height="887.356" Canvas.Left="0.00104256" Canvas.Top="6.10352e-005" Stretch="Fill" Fill="#FF3C9A24" Data="..."/>
<controls:DropShadowPanel BlurRadius="50.0" ShadowOpacity="0.80" OffsetX="114.0" OffsetY="0.0" Color="#AF000000">
<Image Source="Assets/Poopi.png" Width="640" Height="547" />
</controls:DropShadowPanel>
<Rectangle Width="595" Height="50" Canvas.Top="494" >
<Rectangle.Fill>
<LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
<GradientStop Color="#00000000"/>
<GradientStop Color="#7F000000" Offset="1"/>
</LinearGradientBrush>
</Rectangle.Fill>
</Rectangle>
<Canvas x:Name="Layer_4" Width="640.089" Height="855" Canvas.Left="0" Canvas.Top="91">
<Path Width="868.328" Height="171.341" Canvas.Left="-138.987" Canvas.Top="437.023" Stretch="Fill" Fill="#FFD7533E" Data="..."/>
</Canvas>
<TextBlock Text="POOPI" Canvas.Left="29" Canvas.Top="553" Height="119" Width="532" FontFamily="Impact" FontSize="96" Foreground="White" TextAlignment="Center" />
</Canvas>
</Viewbox>
</code></pre>
<img src="/Content/MLinUWP/Poopi.png" class="img-responsive" style="margin: auto; max-width:50%; margin-top: 6px; margin-bottom: 6px;" alt="Poopi"/>
<p>Yup, that works.</p>
<p>(Sorry son but, after what happened Sunday, you deserve it ;0)</p>
<h2 id="conclusion">Conclusion</h2>
<p>As you can see, using state-of-the-art machine learning models from UWP is fairly straightforward and certainly no more complicated than using them from Python. UWP - via "Microsoft.AI.MachineLearning" - has excellent support for the very latest versions of ONNX and, given most mainstream machine learning frameworks can export to ONNX, allows UWP developers to easily leverage the entire vista of modern machine learning algorithms for their purposes (resources permitting).</p>
<p>The source code for this article can be found in my <a href="https://github.com/ibebbs/UwpMl">UwpMl repository</a> on GitHub; please star it if you find it helpful or informative. Should you have any questions or comments, please feel free to drop me a line using any of the links below or from my <a href="https://ian.bebbs.co.uk/about">about page</a>.</p>
<p>In this post I show how to use a state-of-the-art machine learning model to implement Salient Object Detection and Image Segmentation. I then show how this model can be used to provide local inference capabilities entirely within a UWP app.</p>
<h1 id="unopi"><a href="http://ian.bebbs.co.uk/posts/UnoPi">Running UWP on a Raspberry Pi Using Uno Platform</a> (2020-08-20)</h1>
<h2 id="tldr">TL;DR</h2>
<p>A few days ago I showed how the recent <a href="https://platform.uno/blog/announncing-uno-platform-3-0-linux-support-fluent-material-and-more/">v3 release</a> of the <a href="https://platform.uno/">Uno Platform</a> allowed you to run UWP apps on Linux. This was fantastic but really only half the story I wanted to tell. What I really wanted to do was see if I could get an app written in my favourite UI framework running on my favourite SBC; to wit, UWP on the <a href="https://www.raspberrypi.org/">Raspberry Pi</a>. In this post I show how, yet again, the Uno team have made this not only possible but startlingly easy and shockingly powerful.</p>
<h2 id="intro">Intro</h2>
<p>In my last post <a href="https://ian.bebbs.co.uk/posts/UnoLinux">"Running UWP on Linux With Uno"</a>, I used the Uno Platform to write a UWP app which could run on Linux under WSL2. This was a great proof-of-concept and showed that, despite only being in preview, Uno's support for UWP under Linux was <a href="https://ian.bebbs.co.uk/posts/UnoLinux#bonus">more than skin deep</a>. However, running UWP in Linux on the desktop, while cool, wasn't my primary motivator here. No, what I really wanted to do was run a UWP app on a <a href="https://www.raspberrypi.org/">Raspberry Pi</a>.</p>
<p>Now, those who have worked in the UWP space for a while probably know that you've been able to run UWP on a Pi for some time via <a href="https://docs.microsoft.com/en-us/windows/iot-core/windows-iot-core">Windows 10 IoT Core</a>. Unfortunately Windows 10 IoT Core seems destined for the same fate as many cool Microsoft technologies, namely <a href="https://reddwarf.fandom.com/wiki/Silicon_Heaven">silicon heaven</a>. The last release of Windows 10 IoT Core was back in 2018 and, despite <a href="https://www.raspberrypi.org/products/raspberry-pi-4-model-b/">a new and incredibly powerful Raspberry Pi coming to market</a>, there are no signs of a compatible Windows 10 IoT Core release coming any time soon.</p>
<p>Furthermore, the choice to use Windows 10 IoT Core on a Raspberry Pi was a costly one. While you got to run a UWP app, you did so at the expense of huge swathes of other software, open-source libraries, educational material and community support which are available for the Raspberry Pi when running a Linux variant. Indeed, while .NET now enjoys excellent support for interfacing with electronic devices via the <a href="https://github.com/dotnet/iot">"dotnet/iot"</a> library, when Windows 10 IoT Core was first released, just toggling a GPIO pin was a somewhat tricky proposition.</p>
<p>As such I run <a href="https://en.wikipedia.org/wiki/Raspberry_Pi_OS">Raspberry Pi OS</a> (formerly Raspbian) on almost all of the (<a href="https://discord.com/channels/372137812037730304/550416151172087808/743474362002440302">embarrassingly large number of</a>) Pis I own. This has led to my development on the Pi mainly targeting console apps via .NET Core's support for Linux.</p>
<p>But no more...</p>
<h2 id="uwp-on-raspberry-pi-os">UWP on Raspberry Pi OS</h2>
<p>The Uno team have made compiling a UWP app for the Raspberry Pi almost embarrassingly easy. Assuming you have a Windows PC with <a href="https://dotnet.microsoft.com/download/visual-studio-sdks">.NET Core SDK v3.1</a> and the <a href="https://www.nuget.org/packages/Uno.ProjectTemplates.Dotnet">pre-release Uno Project Templates</a> installed, and assuming you have a Raspberry Pi running 32-bit Raspberry Pi OS (and which has SSH & GTK correctly configured), then Uno's basic "Hello world" app can be run on the Pi by simply doing the following (note the change in prompt towards the bottom as we shift from executing commands on Windows to executing them remotely on the Pi):</p>
<pre><code>PS> mkdir UnoHelloWorld
PS> cd UnoHelloWorld
PS> dotnet new unoapp
PS> cd UnoHelloWorld.Skia.Gtk
PS> dotnet build
PS> dotnet publish --runtime linux-arm -c Release --self-contained
PS> scp -rp bin\Release\netcoreapp3.1\linux-arm\publish pi@[RPI IP ADDRESS]:~/UnoHelloWorld
PS> ssh pi@[RPI IP ADDRESS]
pi@raspberrypi:~ $ cd UnoHelloWorld
pi@raspberrypi:~/UnoHelloWorld $ chmod +x UnoHelloWorld.Skia.Gtk
pi@raspberrypi:~/UnoHelloWorld $ export DISPLAY=:0
pi@raspberrypi:~/UnoHelloWorld $ ./UnoHelloWorld.Skia.Gtk
</code></pre>
<p>If everything was setup correctly, you should see something like this on the Raspberry Pi screen:</p>
<img src="/Content/UnoPi/UnoHelloWorld on Raspberry Pi.png" class="img-responsive" style="margin: auto; max-width:80%; margin-top: 6px; margin-bottom: 6px;" alt="UnoHelloWorld on Raspberry Pi"/>
<p>A UWP app, running under Raspberry Pi OS on a Raspberry Pi 3B+. As I said, almost embarrassingly easy!</p>
<h2 id="performance">Performance</h2>
<p>So, after showing that we could run a UWP app on the Pi, I was interested to compare the performance of an app running on the Pi with one running on my PC. I then remembered that during <a href="https://unoconf.com/">UnoConf</a> there had been a discussion of <a href="https://github.com/unoplatform/uno.dopesbench">Dopes Bench</a>. Unfortunately the code in this repo didn't (at the time of writing) contain Skia backend projects so, following the process above, I quickly knocked up a new Uno project and simply copied the "MainPage.*" and "Random2.cs" files from the Dopes Bench project into it (cue amazement that <em>exactly</em> the same code runs on Windows, Mac, Android, iOS, Web and, now, Linux).</p>
<p>I then compiled and ran the test on my PC (DopeTestUno.UWP / release build) and the Raspberry Pi 3B+ (DopeTestUno.Skia.GTK / release build). Here are the results:</p>
<table>
<tr>
<td>
<img src="/Content/UnoPi/DopeTestUno on PC.png" class="img-responsive" style="margin: auto; max-width:80%; margin-top: 6px; margin-bottom: 6px;" alt="DopeTestUno on PC"/>
</td>
<td>
<img src="/Content/UnoPi/DopeTestUno on Raspberry Pi.png" class="img-responsive" style="margin: auto; max-width:80%; margin-top: 6px; margin-bottom: 6px;" alt="DopeTestUno on Raspberry Pi"/>
</td>
</tr>
<tr>
<td style="text-align: center; font-weight: bold">
9217.42 Dopes/s
</td>
<td style="text-align: center; font-weight: bold">
401.05 Dopes/s
</td>
</tr>
<tr>
<td style="text-align: center">
Dell Precision T7910<br/>
6 Core (12 Thread) Xeon E5-2620v3 @ 2.4GHz<br/>
32 GB RAM<br/>
NVidia GeForce GTX 980<br/>
</td>
<td style="text-align: center">
Raspberry Pi 3B+<br/>
4 Core BCM2837B0 A53 (ARMv8) 64-bit @ 1.4GHz<br/>
1 GB RAM<br/>
Broadcom Videocore-IV<br/>
</td>
</tr>
</table>
<p>Well, given the difference in spec between the PC and the Pi, it's not surprising that there's a large difference in "Dopes" but is 401.05 dopes good or bad? Furthermore what does this mean for real world performance of an app?</p>
<p>No idea, guess we're going to have to build a "real world" app...</p>
<h2 id="unopify-uno-pi-fy">Unopify ("Uno-Pi-fy"):</h2>
<br/>
<img src="/Content/UnoPi/Unopify on Raspberry Pi.png" class="img-responsive" style="margin: auto; max-width:80%; margin-top: 6px; margin-bottom: 6px;" alt="Unopify on Raspberry Pi"/>
<p>Unopify is a UWP Spotify client written using the fantastic <a href="https://www.nuget.org/packages/SpotifyApi.NetCore/3.5.0?_src=template">SpotifyApi.NetCore</a> library along with the usual complement of supporting libraries including <a href="https://www.nuget.org/packages/Microsoft.Extensions.DependencyInjection/3.1.7?_src=template">Microsoft.Extensions.DependencyInjection</a>, <a href="https://www.nuget.org/packages/System.Net.Http/4.3.4?_src=template">System.Net.Http</a>, <a href="https://www.nuget.org/packages/System.Reactive/">System.Reactive</a> and, of course, my faithful <a href="https://www.nuget.org/packages/MVx.Observable/2.0.0?_src=template">MVx.Observable</a>.</p>
<p>While only a proof-of-concept which took just a few hours to write, it already demonstrates a significant amount of functionality such as:</p>
<ul>
<li>Frictionless support for .NET Standard 2.0 libraries</li>
<li>Functional, Reactive, MVVM</li>
<li>Navigation (the app moves from an "Authenticating" view to a "Home" view)</li>
<li>Visual States & Visual State Triggers</li>
<li>Full layout capabilities (uses auto, proportional and explicit sizing of elements)</li>
<li>Image fetching, display and scaling (the image URIs retrieved from Spotify web calls are directly bound to each Image's <code>Source</code> property)</li>
<li>Opacity (a semi-transparent white rectangle is laid over the background image)</li>
<li>Lookless controls (via the previous, play, next buttons)</li>
<li>Command binding and dispatch (via the previous, play, next buttons)</li>
<li>XAML drawing primitives (via <code>Ellipse</code> and <code>Path</code> elements in the previous, play and next buttons)</li>
<li>... and loads more</li>
</ul>
<p>In fact, about the only thing I wasn't able to get working was the web-based OAuth2 authentication flow. This wasn't particularly surprising given that this flow needs to invoke and interact with a system browser, so I simply worked around this (temporary) limitation by using another UWP app to do the authentication and sharing the access token with Unopify via SignalR (there were probably better ways to do this but I had the SignalR code to hand).</p>
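<p>For the curious, the receiving side of such a token relay amounts to very little code. The sketch below is hypothetical - the hub URL, method name and <code>UseSpotifyToken</code> helper are made up for illustration - and uses the Microsoft.AspNetCore.SignalR.Client package:</p>
<pre><code class="language-c#">// Hypothetical sketch - URL, method name and helper are illustrative.
using Microsoft.AspNetCore.SignalR.Client;

var connection = new HubConnectionBuilder()
    .WithUrl("https://my-auth-relay.example.com/tokens")
    .Build();

// A companion app performs the OAuth2 flow and broadcasts the access
// token via the hub; this app simply listens for it.
connection.On<string>("AccessToken", token => UseSpotifyToken(token));

await connection.StartAsync();
</code></pre>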
<p>There were a couple of minor issues - <code>UIElement.Opacity</code> doesn't seem to work and an inline control template for the button seemed to cause the button to disappear - but nothing that couldn't be easily worked around. In short, writing a UWP app that worked on the Raspberry Pi under Linux was no more difficult than writing a UWP app that runs on a phone under Android or iOS (which itself is a minor miracle!).</p>
<p>Below you can see a video of Unopify running on a Raspberry Pi 3B+. In it I'm using Spotify Web Player on the PC to control a Spotify Connect amp while running Unopify on the Pi, which you can <em>just</em> see below the TV. At start-up, Unopify requests an authentication token from SignalR then starts polling the Spotify Web API for player state, using the responses to update the UI. Finally, the previous/next/play/pause buttons within Unopify directly call the Spotify Web API which causes the amp to play, pause or change track accordingly.</p>
<p>(Apologies for the poor quality but a direct screen capture wasn't an option as I wanted to include audio from the amp.)</p>
<video class="img-responsive" style="margin: auto; width:90%; margin-top: 6px; margin-bottom: 6px;" controls>
<source src="/Content/UnoPi/Unopify on Raspberry Pi.mp4" type="video/mp4"/>
Your browser does not support the video tag
</video>
<p>As you can see, while start-up was a little slow (exacerbated by there currently being no splash-screen) the running app is completely usable. Furthermore, while the app is very raw (as I said, it only took a few hours) many of the rough edges (e.g. the delay between showing the track name and album image, and the play/pause button glitch caused by the 1 second polling interval) could easily be smoothed with a few simple changes.</p>
<p>To put this in perspective, this is a <em>preview</em> build of a UWP app running on a two-year-old Raspberry Pi which has 1 GB of RAM, an 80Mb/s capable SD card for a hard drive and costs just £35!</p>
<p>I have a(nother!) 4 GB Raspberry Pi 4B+ on order and will update this post with performance metrics and "real world" experience once it arrives.</p>
<p>Finally, the code for Unopify can be found on <a href="https://github.com/ibebbs/Unopify">Github</a>. Should you wish to run it, you will need to deploy the <code>Unopify.AuthRelay</code> service (for which a free-tier AppService on Azure works well) and implement partial methods on the "Secrets.cs" files in a couple of projects (appropriate exceptions will be thrown if you fail to do this).</p>
<h2 id="conclusion">Conclusion</h2>
<p>Uno Platform have once again significantly expanded the vista for UWP (definitely no pun intended) and left me almost dizzy with new possibilities. By supporting Linux on low-power devices, the Uno team has propelled UWP beyond desktop, mobile and web applications into the realm of <em>appliances</em>. Want UWP on your fridge? Sure! Watch? No problem. A graphical, touch-driven interface for your thermostat? You got it.</p>
<p><strong>UWP is now a truly <em>Universal Platform</em> and your "write-once" code really can "run anywhere".</strong></p>
<p>From a commercial perspective, the recent <a href="https://all3dp.com/1/single-board-computer-raspberry-pi-alternative/">deluge of single-board computers</a> and their <a href="https://www.nvidia.com/en-gb/autonomous-machines/embedded-systems/jetson-nano/">rapidly advancing capabilities</a> provides this technology with immense value. Leveraging the power of UWP and the .NET ecosystem on everything from embedded devices to mobile phones allows businesses to benefit from the incredible cost savings and RoI value proposition of a <a href="https://ian.bebbs.co.uk/posts/UnoValue">"one stack"</a> approach. With little to no training, your .NET developers are now able to deliver on the promise of <a href="https://www.webopedia.com/TERM/A/ambient-computing.html">ambient computing</a>, efficiently supporting every use-case on every device "from edge to cloud".</p>
<p>Wow.</p>
<h2 id="finally">Finally</h2>
<p>If you're interested in using the Uno Platform to deliver cross-platform apps or have an upcoming project for which you'd like evaluate Uno Platform's fit, then please feel free to drop me a line using any of the links below or from my <a href="https://ian.bebbs.co.uk/about">about page</a>. As a freelance software developer and remote contractor I'm always interested in hearing from potential new clients or ideas for new collaborations.</p>
<h1 id="unolinux"><a href="http://ian.bebbs.co.uk/posts/UnoLinux">Running UWP on Linux With Uno</a> (2020-08-16)</h1>
<h2 id="tldr">TL;DR</h2>
<p>At <a href="https://unoconf.com/">UnoConf 2020</a>, the <a href="https://platform.uno/">Uno Platform</a> team wowed attendees with the announcement of preliminary support for Linux. This had been something I had very much been hoping for and, first chance I got, I just had to give it a try. In this article I share how I went about getting set up for building and running a UWP app on Linux using the Uno Platform.</p>
<h2 id="intro">Intro</h2>
<p>UnoConf 2020 was held on 12th August 2020. If you weren't "there" then I would highly encourage you to <a href="https://unoconf.com/">check out the recording</a>. Over the course of several hours, (virtual) attendees were treated to numerous demos and descriptions of the Uno Platform, its use and (rather a lot on) its history.</p>
<p>However, <a href="https://youtu.be/nbqe9uHWT_c?t=7931">the session by CEO Francois Tanguay and CTO Jérôme Laban</a> was where the Uno team really notched up the shock and awe. After numerous amazing announcements such as the release of Uno Platform 3.0 with out-of-the-box support for both Fluent <em>and</em> Material design aesthetics, and several "anything you can do [Flutter] we can do better/faster" demos, they then proceeded to blow everyone's minds by showing a UWP app running on a Raspberry Pi under Linux.</p>
<p>Yes, you read that correctly: Uno Platform now lets you run your UWP apps <em>on Linux</em>. Just let that sink in for a second.</p>
<p>Back? Good.</p>
<p>Well, I just had to give this a shot. Despite a lack of documentation, it turned out to be a fairly simple process once some preliminary software had been installed, and the results were honestly better than I could have hoped for (hint: keep reading to the end for a bonus section).</p>
<p>Below I've outlined all the steps you need to follow to get an Uno app running under Linux.</p>
<h2 id="setup">Setup</h2>
<p>Before being able to build or run a UWP app under Linux, you'll first need to get the following set up:</p>
<h3 id="windows-terminal">Windows Terminal</h3>
<p>We will be firing lots of commands into both Windows and Linux shells. While you don't technically <em>need</em> Windows Terminal, having a single app which is able to interact with both shells will really make your life easier. If you don't have it and want to install it, you can find it in the Windows Store, <a href="https://www.microsoft.com/en-gb/p/windows-terminal/9n0dx20hk701?activetab=pivot:overviewtab">here</a>.</p>
<p>In the following steps I endeavour to highlight which shell to use for each command by putting the OS in bold (i.e. <strong>Windows</strong> or <strong>Ubuntu</strong>).</p>
<h3 id="net-core-sdk-v3.1.100">.NET Core SDK v3.1.100</h3>
<p>Compiling an Uno Platform project for Linux requires a fairly recent version of the .NET Core toolchain so make sure you have SDK version 3.1.100 or higher installed. The latest version can be found <a href="https://dotnet.microsoft.com/download/visual-studio-sdks">here</a>.</p>
<p>Once you have an up to date version of the .NET Core SDK, install the latest pre-release version of the Uno ProjectTemplates. You can find latest version of the packages <a href="https://www.nuget.org/packages/Uno.ProjectTemplates.Dotnet">here</a> which at the time of writing is version <code>3.1.0-dev.39</code>.</p>
<p>From the <strong>Windows</strong> command line, install the templates using the following command:</p>
<pre><code>dotnet new --install Uno.ProjectTemplates.Dotnet::3.1.0-dev.39
</code></pre>
<h3 id="wsl2-ubuntu">WSL2 + Ubuntu</h3>
<p>We're going to be running our app in a Linux distribution running under WSL2. To do this you'll need to be running Windows 10 version 2004 (Build 19041) or higher and have WSL2 installed by following the instructions <a href="https://docs.microsoft.com/en-us/windows/wsl/install-win10">here</a>.</p>
<p>Once you have WSL2 installed, you'll need to install a Linux distribution. This can be done from the Windows Store; I used Ubuntu 20.04 LTS.</p>
<h3 id="x-window-server">X Window Server</h3>
<p>In order to run graphic apps from WSL2, we need to install an X Window Server on our Windows 10 host machine. There are a surprising number of alternatives here including <a href="https://sourceforge.net/projects/xming/">Xming</a> and <a href="https://sourceforge.net/projects/vcxsrv/">VcXsrv</a> but I went for <a href="https://www.microsoft.com/en-gb/p/x410/9nlp712zmn9q?activetab=pivot:overviewtab">X410</a> because it was in the Windows Store, super-easy to use and - until the end of August - heavily discounted to just £8.39.</p>
<p>After installing X410, all you need to do is run the app, right-click on its system tray icon and click "Allow Public Networks" as shown below:</p>
<img src="/Content/UnoLinux/X410 Allow Public Networks.png" class="img-responsive" style="margin: auto; max-width:90%; margin-top: 6px; margin-bottom: 6px;" alt="X410 Allow Public Networks"/>
<p>With that done, leave it running so we can...</p>
<h2 id="test">Test</h2>
<p>Before digging into building our own app, we're going to test our "WSL2 + X Window Server" combo. So from a command prompt inside your <strong>Ubuntu</strong> distribution run the following commands:</p>
<pre><code>sudo apt-get update
sudo apt-get install vim-gtk
</code></pre>
<p>This installs the graphical version of Vim, which we can use to check that we're able to run a graphical Linux app from Windows (it just sounds weird, doesn't it!).</p>
<p>Next find the IP address of the WSL adapter on your Windows 10 host machine. This can be done using <code>ipconfig</code> from the <strong>Windows</strong> command line as shown below:</p>
<img src="/Content/UnoLinux/Windows 10 WSL Adapter IP Address.png" class="img-responsive" style="margin: auto; max-width:90%; margin-top: 6px; margin-bottom: 6px;" alt="Windows 10 WSL Adapter IP Address"/>
<p>With this in hand, enter the following on the <strong>Ubuntu</strong> command prompt (told you that you'd want Windows Terminal!) substituting your IP address appropriately:</p>
<pre><code>export DISPLAY=[IP ADDRESS]:0
</code></pre>
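<p>If you'd rather not look up the adapter address by hand: under WSL2's default networking, the Windows host's IP is also written into the auto-generated <code>/etc/resolv.conf</code> as the nameserver, so the export can be scripted. This is a convenience of my own, not part of the steps above; the snippet demonstrates the extraction against a sample line so you can see what it does:</p>

```shell
# Derive DISPLAY from the host IP that WSL2 writes into /etc/resolv.conf
# (assumes default WSL2 networking; adjust if you use a custom resolv.conf).
# Demonstrated here against a sample line so the extraction is visible:
sample="nameserver 172.28.176.1"
display="$(printf '%s\n' "$sample" | awk '/^nameserver/ {print $2; exit}'):0"
printf '%s\n' "$display"

# Against the real file, the same idea becomes:
#   export DISPLAY="$(awk '/^nameserver/ {print $2; exit}' /etc/resolv.conf):0"
```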
<p>Finally, start vim-gtk by running the following command, again from the <strong>Ubuntu</strong> prompt:</p>
<pre><code>gvim
</code></pre>
<p>If everything is set up correctly, you should see the following window appear:</p>
<img src="/Content/UnoLinux/Gvim on Windows 10.png" class="img-responsive" style="margin: auto; max-width:60%; margin-top: 6px; margin-bottom: 6px;" alt="Gvim on Windows 10"/>
<p>If the window doesn't appear try following the troubleshooting section <a href="https://github.com/cascadium/wsl-windows-toolbar-launcher/blob/master/README.md#troubleshooting">here</a>.</p>
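<p>One quick check worth doing before deeper troubleshooting: an X display <code>:N</code> is served over TCP port <code>6000 + N</code>, so for <code>DISPLAY=[IP ADDRESS]:0</code> the X server must be reachable on port 6000 from inside WSL2. The sketch below computes the port; the commented <code>nc</code> probe is a suggestion to run with your own Windows host IP:</p>

```shell
# X display :N maps to TCP port 6000+N, so display :0 means port 6000.
display_number=0
port=$((6000 + display_number))
echo "X server for display :${display_number} should listen on port ${port}"

# Probe it from the Ubuntu prompt (substitute your Windows host IP):
#   nc -vz [IP ADDRESS] "$port"
```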
<h2 id="build">Build</h2>
<p>Still with me? Aces. Now let's use Uno to create a UWP app which will run in Ubuntu!</p>
<p>From the <strong>Windows</strong> command prompt, navigate to the directory where you want to create the new solution and type the following:</p>
<pre><code>dotnet new unoapp -o UnoLinux -w=false -wasm=false -ios=false -android=false -macos=false -sw=false
</code></pre>
<p>This will create a new folder named UnoLinux inside of which will be an Uno Solution containing just the <code>UWP</code> and <code>Skia.Gtk</code> head projects.</p>
<p>Next, still on the <strong>Windows</strong> command line, navigate to the <code>UnoLinux.Skia.Gtk</code> project and build it using:</p>
<pre><code>cd .\UnoLinux\UnoLinux.Skia.Gtk\
dotnet build
</code></pre>
<p>You will probably see a few warnings but as long as you see "Build Succeeded" you should be golden. My build output looks as follows:</p>
<img src="/Content/UnoLinux/UnoLinux Build Output.png" class="img-responsive" style="margin: auto; max-width:90%; margin-top: 6px; margin-bottom: 6px;" alt="UnoLinux Build Output"/>
<p>Finally we want to build the project for the Linux runtime and publish it as a self-contained executable. This is done using the following:</p>
<pre><code>dotnet publish --runtime linux-x64 -c Release --self-contained
</code></pre>
<p>You're likely to see the same warnings again here but there shouldn't be any errors.</p>
<p>Once this command completes you should be able to navigate to <code>.\bin\Release\netcoreapp3.1\linux-x64\</code> where you will find a <code>UnoLinux.Skia.Gtk</code> file.</p>
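<p>Before switching over to Ubuntu, you can optionally confirm the publish really produced a native Linux launcher. The sketch below (run from the <code>UnoLinux.Skia.Gtk</code> project folder; the path matches the publish output above) uses the standard <code>file</code> utility, which should report an ELF 64-bit executable for a correct <code>linux-x64</code> publish:</p>

```shell
# Check that the published launcher is a Linux (ELF) binary rather than a
# Windows executable; path as produced by the publish step above.
published=bin/Release/netcoreapp3.1/linux-x64/UnoLinux.Skia.Gtk
if [ -f "$published" ]; then
  file "$published"   # expect something like "ELF 64-bit LSB ... executable"
else
  echo "publish output not found - re-run dotnet publish"
fi
```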
<p>Got it? Brills!</p>
<h2 id="running-uwp-on-ubuntu">Running UWP on Ubuntu</h2>
<p>Back at the <strong>Ubuntu</strong> command prompt, navigate to the <code>bin\Release</code> directory above. In WSL all your Windows drives should be mounted under <code>/mnt/</code> so you should be able to run a command similar to the following (with [DRIVE LETTER] and [PATH] replaced):</p>
<pre><code>cd /mnt/[DRIVE LETTER]/[PATH]/UnoLinux/UnoLinux.Skia.Gtk/bin/Release/netcoreapp3.1/linux-x64
</code></pre>
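<p>Rather than working out the <code>/mnt/...</code> path by hand, WSL's built-in <code>wslpath</code> utility will convert a Windows path for you (<code>wslpath -u '[WINDOWS PATH]'</code>). The equivalent transform is sketched below with GNU <code>sed</code> so the mapping is explicit; the Windows path used is purely illustrative:</p>

```shell
# How WSL maps a Windows path to /mnt/...: flip backslashes to forward
# slashes, then replace the drive letter "C:" with "/mnt/c" (GNU sed;
# the path below is illustrative).
win_path='C:\src\UnoLinux\UnoLinux.Skia.Gtk\bin\Release\netcoreapp3.1\linux-x64'
wsl_path=$(printf '%s\n' "$win_path" | sed -e 's|\\|/|g' -e 's|^\([A-Za-z]\):|/mnt/\L\1|')
printf '%s\n' "$wsl_path"

# Inside WSL itself, `wslpath -u "$win_path"` performs the same conversion.
```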
<p>Finally - and here comes the magic - run the <code>UnoLinux.Skia.Gtk</code> app from the <strong>Ubuntu</strong> command line using:</p>
<pre><code>./UnoLinux.Skia.Gtk
</code></pre>
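<p>If the launch fails with "Permission denied", the execute bit may be missing from the published file; adding it with <code>chmod</code> fixes that. This is a general troubleshooting tip rather than a step from the walkthrough, so it's demonstrated on a throwaway file; run the same <code>chmod +x</code> against <code>UnoLinux.Skia.Gtk</code> if you hit the error:</p>

```shell
# "Permission denied" when launching usually means the execute bit is missing;
# chmod +x adds it. Shown on a temp file; apply to ./UnoLinux.Skia.Gtk as needed.
tmpbin=$(mktemp)
chmod +x "$tmpbin"
[ -x "$tmpbin" ] && echo "execute bit set"
```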
<p>Again, if everything has gone smoothly, the following window should pop-up after a few seconds:</p>
<img src="/Content/UnoLinux/UnoLinux Hello World.png" class="img-responsive" style="margin: auto; width:60%; margin-top: 6px; margin-bottom: 6px;" alt="UnoLinux Hello World"/>
<p>Now sit back and let this sink in for a second.</p>
<p>This is a Universal <em>Windows</em> Platform application, rendering to a Skia backend, within a GTK host, running in Ubuntu, running under Windows. Given we're this many layers down, perhaps I might be able to plant an idea:</p>
<blockquote class="blockquote">
<p>UWP... <em>all... the... things</em>!</p>
</blockquote>
<h2 id="bonus">Bonus</h2>
<p>Now, thanks to the Uno team, all the above was fairly painless and I still had some time left so.... I decided to see how far I could push UWP under Linux.</p>
<p>Using the <a href="https://github.com/ibebbs/UnoChat">UnoChat sample</a> I created for my <a href="https://ian.bebbs.co.uk/posts/UnoChat">"Cross-Platform Real-Time Communication with Uno & SignalR"</a> post, I updated all the Uno packages to the latest pre-release versions. I then copied the <code>UnoLinux.Skia.Gtk</code> project folder from the <code>UnoLinux</code> solution we just created and added it to the <code>UnoChat</code> solution. Some monkeying ensued to re-namespace everything and correct project and package references but, without too much effort, I got everything building.</p>
<p>With bated breath, I tried this:</p>
<video class="img-responsive" style="margin: auto; width:90%; margin-top: 6px; margin-bottom: 6px;" controls autoplay loop>
<source src="/Content/UnoLinux/UnoChat for Linux.mp4" type="video/mp4"/>
Your browser does not support the video tag
</video>
<p>Woah! Without modifying <em>any</em> of the shared code, the <em>exact same application</em> compiled, ran <em><strong>and functioned</strong></em> under Linux.</p>
<p>That's not to say there weren't problems. There seemed to be an issue with <code>TextBox</code> controls which prevented me from sending messages from the Linux head and the fonts didn't quite match across the platforms. But, given this is just a preliminary release, the fact that the app could receive messages from a remote SignalR server then template and render the messages correctly on Linux is just stunning.</p>
<p>Absolutely. Stunning.</p>
<h2 id="code">Code</h2>
<p>The code for this sample is available in the "Prerelease" branch of my UnoChat repository <a href="https://github.com/ibebbs/UnoChat/tree/Prerelease">here</a>. Once Linux support is a little more mature, I'll update the more advanced "Uno.ChatSignalR" app I prepared for UnoConf which can be found <a href="https://github.com/unoplatform/Uno.Samples/tree/master/UI/ChatSignalR">here</a>.</p>
<h2 id="and-lastly">And lastly...</h2>
<p>If you're interested in using the Uno Platform to deliver cross-platform apps or have an upcoming project for which you'd like evaluate Uno Platform's fit, then please feel free to drop me a line to discuss your project/ideas using any of the links below or from my <a href="https://ian.bebbs.co.uk/about">about page</a>. As a freelance software developer and remote contractor I'm always interested in hearing from potential new clients or ideas for new collaborations.</p>
<p>At <a href="https://unoconf.com/">UnoConf 2020</a>, the <a href="https://platform.uno/">Uno Platform</a> team wowed attendees with the announcement of preliminary support for Linux. This had been something I had very much been hoping for and, first chance I got, I just had to give it a try. In this article I share how I went about getting set up for building and running a UWP app on Linux using the Uno Platform.</p>
<h1><a href="http://ian.bebbs.co.uk/posts/UnoValue">On the incredible value proposition of .NET & the Uno Platform</a> (2020-08-12)</h1>
<h2 id="tldr">TL;DR</h2>
<p>My company recently released <a href="https://www.zenue.uk/">Zenue</a>: a solution for small businesses in the UK hospitality industry struggling to meet post-lockdown governmental guidance. In just 48 hours, a team of two were able to progress from initial concept to app-store submission having delivered on all the fundamental use-cases. In this post I show how this was made possible by .NET and the <a href="https://platform.uno/">Uno Platform</a> and why you should be considering these technologies for your next project.</p>
<h2 id="intro">Intro</h2>
<p>Ostensibly <a href="https://www.cogenity.com/">Cogenity</a> operates as a consultancy but, whenever opportunity arises, we like to "dog food" the services we provide to our clients by undertaking a project for ourselves. Such an opportunity arose a few weeks back when, following the easing of lockdown in the UK, we were contacted by numerous small businesses in the hospitality industry (e.g. cafes, bars and pubs) stating they were unable to find a means of effectively meeting government guidance for recording visitors to their premises.</p>
<p>During an idle "water-cooler" conversation, we realised there was an interesting challenge here: These businesses were of limited means and couldn't invest in the significant infrastructure required to support many forms of advanced visitor tracking capabilities. Furthermore, the data these businesses were being asked to collect and keep safe represented a legislative tight-rope for them and a potential privacy nightmare for their customers. The "water-cooler" conversation was soon superseded by a "tree-house" brain-storming session (yes, weather permitting we do occasionally meet in our on-site tree-house) and we decided to take a stab at providing a solution.</p>
<p>After just 48 hours work (which included writing a web-site and privacy policy), we had a solution in place and had submitted an MVP of the Zenue app to both the Google and Apple app-stores.</p>
<p>We were super-happy with the project not just because it delivered on all the fundamental use-cases nor because it did so using a service model which cost virtually nothing to stand-up and which will scale in a cost-effective manner. No, we found the biggest positive of this project to be the tech stack we had decided to use and how it allowed us to minimize cross-platform development effort while all but eliminating code duplication.</p>
<p>In short, the delivery of the constituent apps - along with their supporting services - on this timescale was made possible by eschewing a "full stack" approach and opting for "one stack" comprised of just two fundamental technologies: .NET and the Uno Platform.</p>
<p>Here's why you should be considering these technologies for your next project.</p>
<h2 id="a-note-on-terminology">A note on terminology</h2>
<p>As with many such things in our industry, the term "full stack" has become extremely overloaded. Initially coined to describe "an individual who has a good understanding of the technologies used to implement different layers and components in a solution", it could be applied to <em>any</em> developer comfortable with <em>any</em> set of technologies capable of implementing an end-to-end solution. In recent years however this definition has become increasingly narrow to the point where - should you see it in a job advert or on a résumé - it will almost certainly have been used to mean "a developer who can write an HTML+CSS+JS web-app along with an arbitrary set of back-end technologies".</p>
<p>This has led to frustrations in many camps, not least of which being desktop and mobile app developers who choose not to employ web-related technologies in their craft. These developers feel equally entitled to the term "full stack" but are met with confusion when advocating themselves as such to companies and recruitment consultants alike.</p>
<p>In this post I will be employing a measure of creative license with the terms "full stack" and "one stack", and while some of these uses may lead a reader to think "Hang on a second...", I would argue that my uses are no more disingenuous than the limited one described above. To avoid confusion however, I would like to advocate for a definition of the terms along the following lines:</p>
<ul>
<li><strong>Full stack</strong>: A heterogeneous set of technologies requiring significant and non-transferable training/experience in each.</li>
<li><strong>One stack</strong>: A homogeneous or complementary set of technologies within which training/experience is cumulative or easily transferred.</li>
</ul>
<p>So with that out of the way...</p>
<h2 id="full-stack">Full Stack</h2>
<p>If we were to have adopted a conventional "full stack" approach and employed (alleged) "best-of-breed" technologies for this project, we might have ended up with something along the lines of this:</p>
<img src="/Content/UnoValue/FullStack.png" class="img-responsive" style="margin: auto; width:80%; margin-top: 6px; margin-bottom: 6px;" alt="Full Stack"/>
<p>Here we have:</p>
<ol>
<li>A mobile app for visitors to clients' premises written in Dart using the Flutter framework.</li>
<li>A web app for staff at clients' premises written in JavaScript using the React framework.</li>
<li>Internal desktop apps written in C# using the WinUI framework.</li>
<li>Back-end services for each of the applications written in C# using Azure Functions.</li>
</ol>
<p>As you can see, to deliver an MVP using this stack we would have had to employ 3 different languages on 4 different frameworks. Each of these languages/frameworks requires significant experience/training before a developer can be considered proficient in its use, and each would require its own DevOps pipeline to build and deploy the associated app/services. All this can vastly increase the time it takes to stand up a solution of this nature, complicating versioned deployments, increasing code duplication and, ultimately, slowing iteration.</p>
<p>(And let's not even think about the time/effort/complexity for a single developer to install and maintain the plethora of tooling required to develop on all of these stacks simultaneously! <Shudder>)</p>
<p>Now, admittedly we <em>could</em> have considered simplifying this stack by employing one of the following:</p>
<ol>
<li>Use ReactNative for the Mobile App<br />
This would have reduced the language count by one but could have introduced significant cognitive dissonance given that, despite the name, React and ReactNative are <em>very</em> different frameworks.</li>
<li>Use Flutter for the Web app<br />
Yes, this might have been possible but web development in Flutter is very much in its infancy and, at the time of writing, requires a "beta" channel version of Flutter which might have destabilized native app development.</li>
<li>Use Xamarin for the Mobile & Internal Apps<br />
This would have reduced the language count <em>and</em> the framework count but would have still left us needing to maintain a separate development stack for the web app.</li>
</ol>
<p>Regardless, as you can see, developing and supporting a solution that operates across these myriad platforms can be <em>extremely</em> time consuming and can make solution-wide micro-iterations all but impossible. It was certainly a non-starter for us as we had very tight deadlines for getting this done and very limited budget for maintenance moving forward.</p>
<h2 id="one-stack">One Stack</h2>
<p>So, if "Full stack" wasn't an option, how were we going to deliver this solution? Well, that was easy: the Uno Platform.</p>
<p>For those that are unaware, the Uno Platform provides the means to broaden the reach of Universal Windows Platform (aka UWP, WinRT, WinUI, etc) applications beyond... well, beyond the Windows Platform. By standing on the shoulders of two amazing technologies - <a href="https://www.mono-project.com/">Mono</a> & <a href="https://dotnet.microsoft.com/apps/xamarin">Xamarin</a> - the Uno Platform is able to expose huge (and ever increasing) swathes of the UWP/WinRT API surface to other platforms. This allows apps written for UWP to be run - often without modification - on platforms as diverse as iOS, Android and the Web (via WebAssembly).</p>
<p>As regular readers of my blog will know, I am a huge fan of the Uno Platform and have been raving about its abilities for <a href="https://ian.bebbs.co.uk/posts/Uno">quite some time now</a>. Suggesting that we use Uno for this project was an easy sell given it allowed us to write code in C# & XAML (which we know and love), run the same code across every platform required by this solution (<a href="https://platform.uno/blog/announcing-uno-platform-2-4-macos-support-and-windows-calculator-on-macos/">and then some!</a>) and didn't require any platform specific DevOps effort/management. In fact, short of installing the Uno Platform Solution templates from within Visual Studio, using the Uno Platform didn't even require any changes to existing development environments!</p>
<p>Effectively, using the Uno Platform allowed us to move from "full stack" to "one stack", as shown below:</p>
<img src="/Content/UnoValue/OneStack.png" class="img-responsive" style="margin: auto; width:80%; margin-top: 6px; margin-bottom: 6px;" alt="One Stack"/>
<p>Not to labour the point but here we have:</p>
<ol>
<li>A mobile app for visitors to client's premises written in C# using the Uno Platform.</li>
<li>A web app for staff at client's premises written in C# using the Uno Platform.</li>
<li>Internal desktop apps written in C# using the Uno Platform.</li>
<li>Back-end services for each of the applications written in C# using Azure Functions.</li>
</ol>
<p>All told, this amounted to one language and two frameworks - Uno Platform client-side, Azure Functions service-side. This simplicity made the project incredibly quick to iterate on and, due to the absolute lack of friction between platforms, a whole lot of fun to deliver. Furthermore we were able to spend 99% of our time delivering use-case functionality instead of setting up multiple tech-stack development environments/workflows and worrying about how best to make them interoperate.</p>
<h2 id="why-should-i-consider-a-one-stack-approach">Why should I consider a "One Stack" approach?</h2>
<p>Well, for us there were innumerable benefits to a "One Stack" approach which, in our experience, would be beneficial to many small teams employing rapid iteration (or "lean start-up") practices. These included:</p>
<h3 id="single-language-tooling-solution">Single Language, Tooling & Solution</h3>
<p>All the apps and services for this delivery were implemented in C#, using Visual Studio, and contained within a single solution. This allowed us to leverage the advanced code-refactoring capabilities of Visual Studio to quickly change solution structure and/or APIs without fear of missing required changes in other projects/languages.</p>
<h3 id="net-standard-2.0-libraries">.NET Standard 2.0 Libraries</h3>
<p>Due to the incredible efforts of the <a href="https://www.mono-project.com/docs/about-mono/maintainers/">Mono team</a>, the Uno Platform is able to use pretty much any fully-managed .NET Standard 2.0 library. This not only allowed us to share solution artefacts across projects and platforms but also to use "off-the-shelf" nuget packages on platforms they had almost certainly not been designed for. (For an example of this, see my post on <a href="https://ian.bebbs.co.uk/posts/UnoChat">"Cross-Platform Real-Time Communication with Uno & SignalR"</a> where I use the <a href="https://www.nuget.org/packages/Microsoft.AspNetCore.SignalR.Client/">"Microsoft.AspNetCore.SignalR.Client"</a> nuget package from Android, iOS <em>and</em> WebAssembly).</p>
<h3 id="xamarin-libraries">Xamarin libraries</h3>
<p>When we did need to leverage platform specific capabilities, Uno Platform allowed us to directly reference and call Xamarin libraries without needing any form of marshalling. For example, Zenue uses the <a href="https://github.com/Redth/ZXing.Net.Mobile"><code>ZXing.Net.Mobile</code></a> library - ostensibly designed for Xamarin.Forms - to implement QR Code scanning (which is likely to be the subject of a future blog post).</p>
<h3 id="advanced-common-controls-and-visual-states">Advanced "Common" Controls and Visual States</h3>
<p>Using Uno Platform, "write-once" controls really can "run-anywhere". Furthermore, by leveraging the incredible capabilities of XAML ("look-less" controls, visual states, storyboarded animations, etc) these controls can be templated, styled and reused across a broad range of projects and platforms.</p>
<h3 id="integrated-debugging-hot-reload">Integrated debugging & Hot Reload</h3>
<p>Regardless of platform (UWP, Android, iOS, Web, etc) or deployment target (physical device, simulator, browser, etc), Uno Platform has strong support for integrated debugging and hot reloading of changes. This is a real game changer for the rapid development of cross-platform apps and once you get used to it, going back to a development environment/platform that doesn't allow you to break at arbitrary points in your (non-minified/transpiled) code or to visualize a change without restarting your app, simply feels like a return to the dark ages.</p>
<h3 id="end-to-end-qa-automated-testing">End-to-end QA & automated testing</h3>
<p>As all projects in the solution could be started and debugged from within a single IDE, we were easily able to spin-up and run the solution off-line. This allowed us to run end-to-end QA testing and quickly resolve issues in the integration of system components. We then augmented these QA tests via automated UI tests which covered every component of the solution (and which we found to be particularly good for creating the myriad application screenshots required by the various app stores).</p>
<h2 id="conclusion-future">Conclusion & Future</h2>
<p>As I hope is evident from the above, we found employing a "One Stack" approach for the Zenue project to be a huge win. Reducing our development tech-stack to just a single language and two frameworks resulted in productivity gains that we're still enjoying as we continue to iterate on the app and its capabilities.</p>
<h3 id="uno-platform">Uno Platform</h3>
<p>At Cogenity we believe the Uno Platform has a very exciting future. While already an incredibly capable and mature framework for delivering cross-platform apps, the Uno team will shortly be releasing v3.0 (perhaps later at <a href="https://platform.uno/blog/unoconf-2020-virtual-free-aug-12-2020-save-the-date/">UnoConf 2020</a>) which promises many new capabilities and myriad improvements.</p>
<p>If you're interested in using the Uno Platform in your next project, Cogenity is able to offer consulting/development services which can ensure your project not only gets off to a flying start but also reaches a successful conclusion! Just drop us an <a href="mailto:contact@cogenity.com">email</a> or use our contact form <a href="https://www.cogenity.com/#three">here</a> and we'll get back to you asap.</p>
<h3 id="zenue">Zenue</h3>
<p>We're currently engaged with a number of local businesses interested in using Zenue but would welcome the opportunity to partner with businesses across the UK. While helping assuage the fallout from the current pandemic is very much Zenue's raison d'être, we're already looking to expand its reach beyond visitor tracking. Our aim is to help small businesses leverage the innumerable use-cases modern technologies could provide but which have, so far, remained out of reach for all but large chains.</p>
<p>If your business needs help meeting current government guidance for visitor tracking or you have an idea for a killer feature you feel Zenue might be able to provide, please feel free to contact us via <a href="https://twitter.com/UkZenue">Twitter</a>, <a href="https://www.facebook.com/ZenueUK">Facebook</a> or <a href="mailto:contact@zenue.uk">email</a>.</p>
<h1><a href="http://ian.bebbs.co.uk/posts/UnoChat">Cross-Platform Real-Time Communication with Uno & SignalR</a> (2020-07-16)</h1>
<h2 id="tldr">TL;DR</h2>
<p>In this article we will see how to use <a href="https://platform.uno/">Uno Platform</a> and <a href="https://docs.microsoft.com/en-us/aspnet/signalr/overview/getting-started/introduction-to-signalr">SignalR</a> to create applications that run on all major platforms - PC, Mac, Android, iOS <em><strong>and</strong></em> Web - and are capable of receiving real-time updates from a SignalR service. As you will see, these two technologies work incredibly well together, providing an elegant solution to a use-case which, just a few years ago, would have been fiendishly difficult.</p>
<p>All the source code for this post can be found in my <a href="https://github.com/ibebbs/UnoChat">UnoChat</a> repository on GitHub.</p>
<h2 id="intro">Intro</h2>
<p>A project I'm working on features a web-dashboard. Being a XAML fan, using Uno to create a web-assembly app was a complete no-brainer... until I realised that I wanted the dashboard to feature real-time updates. Thanks to SignalR (and a host of subsequent technologies), real-time updates are nothing new for traditional web apps, but how would these technologies fare when used with Uno/WebAssembly?</p>
<p>Hunting around, I had seen that a couple of people had tried this approach a while back but had hit various stumbling blocks along the way. As far as I could tell, no-one had yet managed to get a working solution which, as you might imagine, was a tad worrying.</p>
<p>Regardless, I knew that Uno and its compilation to WebAssembly (driven by the amazing work undertaken by the <a href="https://github.com/mono/mono">Mono team</a>) had progressed massively in the last year so I figured I'd give it a go to see if I could get any further. I was very much prepared for a bit of a slog here, expecting to have to dig into the internals of both Uno and SignalR in order to get it working. But I was <em>not</em> prepared for what actually happened:</p>
<p>It... just... worked.</p>
<p>First time.</p>
<p>With no kludges, no work-arounds, no untoward platform-specific code and no conditional compilation.</p>
<p>It really did <strong>Just Work™</strong>.</p>
<p>The following post shows how you can use these amazing technologies together to deliver real-time updates to a cross-platform app.</p>
<h2 id="ingredients">Ingredients</h2>
<p>To cook up this sumptuous little dish, you will need a copy of Visual Studio 2019 with the following installed:</p>
<ol>
<li>"ASP.NET and Web development" workload</li>
<li>"Azure development" workload</li>
<li>"Universal Windows Platform development" workload</li>
<li>"Mobile development with .NET" workload</li>
<li>".NET Core cross-platform development" toolset</li>
<li>The <a href="https://marketplace.visualstudio.com/items?itemName=nventivecorp.uno-platform-addin">Uno Platform Solution Templates</a>.</li>
</ol>
<p>You will also need a (<a href="https://azure.microsoft.com/en-us/free/">free</a>) Azure account to which we'll publish our SignalR service so it's easily available to our client platforms.</p>
<h2 id="objective">Objective</h2>
<p>We're going to be building a version of the <a href="https://docs.microsoft.com/en-us/aspnet/core/tutorials/signalr?view=aspnetcore-3.1&tabs=visual-studio">"Get started with ASP.NET Core SignalR"</a> sample app but, instead of an "HTML+js" client, we're going to be using Uno to write an app which can be compiled and run natively across multiple platforms <em>including</em> the web.</p>
<p>When we're finished, we'll have a solution which looks like this:</p>
<pre><code>UnoChat
|- UnoChat.Service
|- UnoChat.Client.Console
|- UnoChat.Client.App
|- UnoChat.Client.App.Droid
|- UnoChat.Client.App.iOS
|- UnoChat.Client.App.macOS
|- UnoChat.Client.App.UWP
|- UnoChat.Client.App.Wasm
|- UnoChat.Client.App.Shared
</code></pre>
<p>We'll dive into each of these projects individually below with the occasional "F5" to test our progress.</p>
<p>Right, let's go!</p>
<h2 id="service-console-client-testing-deployment">Service, Console Client, Testing & Deployment</h2>
<h3 id="unochat.service">UnoChat.Service</h3>
<p>We'll first create the SignalR service. As this is covered quite extensively in the <a href="https://docs.microsoft.com/en-us/aspnet/core/tutorials/signalr?view=aspnetcore-3.1&tabs=visual-studio">sample app</a> we're basing this article on, I'm going to shoot through it pretty quickly, only covering notable differences and the code we should end up with.</p>
<p>Ok, start up Visual Studio 2019 and create a new project as shown here:</p>
<table style="margin: auto; width:100%;">
<tr>
<td><img src="/Content/UnoChat/CreateNewASPNetCoreWebApplicationI.png" class="img-responsive" style="margin: auto; width:90%; margin-top: 6px; margin-bottom: 6px;" alt="Create New ASP Net Core Web Application - Step 1"/></td>
<td><img src="/Content/UnoChat/CreateNewASPNetCoreWebApplicationII.png" class="img-responsive" style="margin: auto; width:90%; margin-top: 6px; margin-bottom: 6px;" alt="Create New ASP Net Core Web Application - Step 2"/></td>
</tr>
</table>
<p>Now, follow the steps in the <a href="https://docs.microsoft.com/en-us/aspnet/core/tutorials/signalr?view=aspnetcore-3.1&tabs=visual-studio#create-a-signalr-hub">"Create a SignalR hub"</a> section of the sample app to create a <code>ChatHub : Hub</code> class in a <code>Hubs</code> folder within the <code>UnoChat.Service</code> project.</p>
<p>We'll continue with the configuration described in the <a href="https://docs.microsoft.com/en-us/aspnet/core/tutorials/signalr?view=aspnetcore-3.1&tabs=visual-studio#configure-signalr">"Configure SignalR"</a> section, but we'll also add a <a href="https://docs.microsoft.com/en-us/aspnet/core/security/cors?view=aspnetcore-3.1">CORS policy</a> so that we're able to connect to it from a locally hosted Wasm app. The final configuration of the <code>Startup</code> class in the <code>UnoChat.Service</code> project should look like this:</p>
<pre><code class="language-c#">using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

namespace UnoChat.Service
{
    public class Startup
    {
        public Startup(IConfiguration configuration)
        {
            Configuration = configuration;
        }

        public IConfiguration Configuration { get; }

        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddRazorPages();
            services.AddSignalR();
            services.AddCors(o => o.AddPolicy(
                "CorsPolicy",
                builder => builder
                    .AllowAnyOrigin()
                    .AllowAnyMethod()
                    .AllowAnyHeader()
                )
            );
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }
            else
            {
                app.UseExceptionHandler("/Error");
                // The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
                app.UseHsts();
            }

            app.UseHttpsRedirection();
            app.UseStaticFiles();

            app.UseRouting();
            app.UseCors("CorsPolicy");

            app.UseAuthorization();

            app.UseEndpoints(endpoints =>
            {
                endpoints.MapRazorPages();
                endpoints.MapHub<Hubs.ChatHub>("/chathub");
            });
        }
    }
}
</code></pre>
<p>And that's it. Seriously, that's all we need to create a service which is able to provide real-time communication services to a whole host of client applications... I know, right!</p>
<h3 id="unochat.client.console">UnoChat.Client.Console</h3>
<p>Next we'll create a quick console application to test our SignalR service prior to diving into a cross platform solution with Uno.</p>
<blockquote class="blockquote">
<p>BTW, the console app - by virtue of being .NET Core - is also cross-platform and will run on... well... <a href="https://docs.microsoft.com/en-us/dotnet/core/">pretty much anything</a>.</p>
</blockquote>
<p>So, right click on the "UnoChat" solution and add a new "Console App (.NET Core)" project as shown here:</p>
<img src="/Content/UnoChat/CreateNewNetCoreConsoleApp.png" class="img-responsive" style="margin: auto; width:60%; margin-top: 6px; margin-bottom: 6px;" alt="Create New Net Core Console App"/>
<p>Then, add the <a href="https://www.nuget.org/packages/Microsoft.AspNetCore.SignalR.Client/"><code>Microsoft.AspNetCore.SignalR.Client</code> nuget package</a> to the <code>UnoChat.Client.Console</code> project as shown here:</p>
<img src="/Content/UnoChat/InstallMicrosoftAspNetCoreSignalRClientInUnoChatClientConsole.png" class="img-responsive" style="margin: auto; width:60%; margin-top: 6px; margin-bottom: 6px;" alt="Install Microsoft Asp Net Core SignalR Client In UnoChat Client Console"/>
<p>Finally, replace the code in <code>Program.cs</code> with the following:</p>
<pre><code class="language-c#">using Microsoft.AspNetCore.SignalR.Client;
using System.Threading.Tasks;
namespace UnoChat.Client.Console
{
using Console = System.Console;
class Program
{
static async Task Main(string[] args)
{
Console.WriteLine("Hi! Err... who are you?");
var name = Console.ReadLine();
Console.WriteLine($"Ok {name} one second, we're going to connect to the SignalR server...");
var connection = new HubConnectionBuilder()
.WithUrl("http://localhost:61877/ChatHub")
.WithAutomaticReconnect()
.Build();
connection.On<string, string>("ReceiveMessage", (user, message) => Console.WriteLine($"{user}: {message}"));
await connection.StartAsync();
Console.WriteLine($"Aaaaaand we're connected. Enter a message and hit return to send it to other connected clients...");
while (true)
{
var message = Console.ReadLine();
await connection.InvokeAsync("SendMessage", name, message);
}
}
}
}
</code></pre>
<blockquote class="blockquote">
<p>Note: You'll need to ensure the port number in the <code>.WithUrl("http://localhost:61877/ChatHub")</code> line is correct. You can find the port number your <code>UnoChat.Service</code> is set to use by right clicking on the <code>UnoChat.Service</code> project, selecting <code>Properties</code>, navigating to the <code>Debug</code> tab and examining the <code>App URL</code> setting in the <code>Web Server Settings</code> section as shown here:
<img src="/Content/UnoChat/UnoChatServiceDebugSettings.png" class="img-responsive" style="margin: auto; width:60%; margin-top: 6px; margin-bottom: 6px;" alt="UnoChat Service Debug Settings"/></p>
</blockquote>
<h3 id="local-testing">Local Testing</h3>
<p>Now we can test our service with our console client. Right click on the <code>UnoChat.Service</code> project and select <code>Debug->Start New Instance</code>. After a few seconds of compilation, a browser window should open and show something similar to this:</p>
<img src="/Content/UnoChat/UnoChatServiceDebugBrowser.png" class="img-responsive" style="margin: auto; width:60%; margin-top: 6px; margin-bottom: 6px;" alt="UnoChat Service Debug Browser"/>
<p>With that running, go back to Visual Studio and right click on the <code>UnoChat.Client.Console</code> project and again select <code>Debug->Start New Instance</code>. A console window should appear asking who you are. Enter a name, hit return and wait for the app to tell you that it has connected. At this point you can send messages to the <code>UnoChat.Service</code> which should be echoed back to the console window prefixed with your name as shown here:</p>
<video class="img-responsive" style="margin: auto; width:60%; margin-top: 6px; margin-bottom: 6px;" controls autoplay loop>
<source src="/Content/UnoChat/UnoChatClientConsoleDebug.mp4" type="video/mp4"/>
Your browser does not support the video tag
</video>
<p>To show we're not simply echoing these messages locally, right click on the <code>UnoChat.Client.Console</code> project again and start a second instance using <code>Debug->Start New Instance</code>. In this window enter a different name and wait for the connection. Now when you send a message, you'll see it in both console windows as shown here:</p>
<img src="/Content/UnoChat/UnoChatClientConsoleTwoInstances.png" class="img-responsive" style="margin: auto; width:80%; margin-top: 6px; margin-bottom: 6px;" alt="UnoChat Client Console Two Instances"/>
<p>Pretty neat huh!</p>
<h3 id="deployment">Deployment</h3>
<p>With our SignalR service running nicely, let's deploy it to Azure by right clicking on the <code>UnoChat.Service</code> project and selecting <code>Publish...</code>. I'm not going to cover this process in too much detail as it's <a href="https://docs.microsoft.com/en-US/visualstudio/deployment/quickstart-deploy-to-azure?view=vs-2019">thoroughly documented elsewhere</a> but, if you've not done this before, the following screenshots should help you through:</p>
<table style="margin: auto; width:100%;">
<tr>
<td><img src="/Content/UnoChat/PublishUnoChatServiceToAzureI.png" class="img-responsive" style="margin: auto; width:90%; margin-top: 6px; margin-bottom: 6px;" alt="Publish UnoChat Service To Azure - Step 1"/></td>
<td><img src="/Content/UnoChat/PublishUnoChatServiceToAzureII.png" class="img-responsive" style="margin: auto; width:90%; margin-top: 6px; margin-bottom: 6px;" alt="Publish UnoChat Service To Azure - Step 2"/></td>
<td><img src="/Content/UnoChat/PublishUnoChatServiceToAzureIII.png" class="img-responsive" style="margin: auto; width:90%; margin-top: 6px; margin-bottom: 6px;" alt="Publish UnoChat Service To Azure - Step 3"/></td>
</tr>
<tr>
<td><img src="/Content/UnoChat/PublishUnoChatServiceToAzureIV.png" class="img-responsive" style="margin: auto; width:90%; margin-top: 6px; margin-bottom: 6px;" alt="Publish UnoChat Service To Azure - Step 4"/></td>
<td><img src="/Content/UnoChat/PublishUnoChatServiceToAzureV.png" class="img-responsive" style="margin: auto; width:90%; margin-top: 6px; margin-bottom: 6px;" alt="Publish UnoChat Service To Azure - Step 5"/></td>
<td><img src="/Content/UnoChat/PublishUnoChatServiceToAzureVI.png" class="img-responsive" style="margin: auto; width:90%; margin-top: 6px; margin-bottom: 6px;" alt="Publish UnoChat Service To Azure - Step 6"/></td>
</tr>
<tr>
<td><img src="/Content/UnoChat/PublishUnoChatServiceToAzureVII.png" class="img-responsive" style="margin: auto; width:90%; margin-top: 6px; margin-bottom: 6px;" alt="Publish UnoChat Service To Azure - Step 7"/></td>
<td><img src="/Content/UnoChat/PublishUnoChatServiceToAzureVIII.png" class="img-responsive" style="margin: auto; width:90%; margin-top: 6px; margin-bottom: 6px;" alt="Publish UnoChat Service To Azure - Step 8"/></td>
<td><img src="/Content/UnoChat/PublishUnoChatServiceToAzureIX.png" class="img-responsive" style="margin: auto; width:90%; margin-top: 6px; margin-bottom: 6px;" alt="Publish UnoChat Service To Azure - Step 9"/></td>
<td></td>
</tr>
</table>
<p>Still with me? Great.</p>
<p>Let's test our deployed SignalR service by updating the <code>.WithUrl("http://localhost:61877/ChatHub")</code> line in our <code>UnoChat.Client.Console</code> app to match the deployed service as shown in the last screenshot above; for me it's <code>.WithUrl("https://unochatservice20200716114254.azurewebsites.net/ChatHub")</code>. Once done, you should be able to start the <code>UnoChat.Client.Console</code> app and send/receive messages to/from your deployed SignalR service.</p>
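Rather than hand-editing the URL every time you switch between the local and deployed services, you could select it at compile time. A minimal sketch, assuming a helper class of my own naming (the port and Azure host name are just the example values from this post; yours will differ):

```csharp
// Sketch only: choose the hub URL by build configuration so Debug builds talk
// to the local service while Release builds talk to the deployed one.
// "HubLocation" is a hypothetical helper, not part of the original project.
public static class HubLocation
{
#if DEBUG
    public const string Url = "http://localhost:61877/ChatHub";
#else
    public const string Url = "https://unochatservice20200716114254.azurewebsites.net/ChatHub";
#endif
}
```

You would then build the connection with <code>.WithUrl(HubLocation.Url)</code> and the correct endpoint follows from the build configuration.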
<p>Now for the magic...</p>
<h2 id="uno-client">Uno Client</h2>
<h3 id="creating-preparing-an-uno-project">Creating & Preparing An Uno Project</h3>
<p>Back in Visual Studio, right click the <code>UnoChat</code> solution and <code>Add->New Project...</code>. Select <code>Cross-Platform App (Uno Platform)</code> from the <code>Add a new project</code> dialog and click Next. Name it <code>UnoChat.Client</code> on the <code>Configure your new project</code> dialog and finally click <code>Create</code>:</p>
<table style="margin: auto; width:100%;">
<tr>
<td><img src="/Content/UnoChat/CreateNewCrossPlatformAppI.png" class="img-responsive" style="margin: auto; width:90%; margin-top: 6px; margin-bottom: 6px;" alt="Create New Cross Platform App - Step 1"/></td>
<td><img src="/Content/UnoChat/CreateNewCrossPlatformAppII.png" class="img-responsive" style="margin: auto; width:90%; margin-top: 6px; margin-bottom: 6px;" alt="Create New Cross Platform App - Step 2"/></td>
</tr>
</table>
<p>To help keep my solution organised, I like to group all the Uno "head" projects in a solution folder. This is shown below but don't feel obliged to follow suit:</p>
<img src="/Content/UnoChat/UnoChatClientSolutionFolder.png" class="img-responsive" style="margin: auto; width:40%; margin-top: 6px; margin-bottom: 6px;" alt="UnoChat Client Solution Folder"/>
<p>Now, the first thing to do here is to get our dependencies in order using the following steps:</p>
<ol>
<li>Upgrade all <code>Uno.*</code> packages to the latest non-prerelease versions (at the time of writing this means <code>Uno.UI</code> & <code>Uno.UI.RemoteControl</code> to v2.4.4, and <code>Uno.Wasm.Bootstrap</code> & <code>Uno.Wasm.Bootstrap.DevServer</code> to v1.3.0)</li>
<li>Install the <a href="https://www.nuget.org/packages/Microsoft.AspNetCore.SignalR.Client/"><code>Microsoft.AspNetCore.SignalR.Client</code> nuget package</a> to all the head projects (UnoChat.Client.Droid, UnoChat.Client.iOS, etc, etc).</li>
<li>Install the <a href="https://www.nuget.org/packages/MVx.Observable/"><code>MVx.Observable</code> nuget package</a> to all the head projects (UnoChat.Client.Droid, UnoChat.Client.iOS, etc, etc).</li>
</ol>
<blockquote class="blockquote">
<p>Quick tip: Use the <code>Manage NuGet Packages for Solution...</code> option from the solution's right-click menu to get this done much faster than modifying individual projects.</p>
</blockquote>
<p>Lastly, use the <code>Properties</code> window to change the <code>Root namespace</code> value for the <code>UnoChat.Client.Shared</code> project from <code>UnoChat.Client.Shared</code> to just <code>UnoChat.Client</code> (I'm kind of hoping this change makes it into the Uno templates at some point).</p>
<h3 id="mvx.observable">MVx.Observable</h3>
<p>I like to implement UI/UX flows using behavioural, declarative and functional paradigms. I wrote <code>MVx.Observable</code> to be a "(mostly) unopinionated, light-weight alternative to ReactiveUI provided as a library <em>not a framework</em>" and have written about it extensively <a href="https://ian.bebbs.co.uk/posts/ReactiveBehaviors">here</a> and <a href="https://ian.bebbs.co.uk/posts/Uno#four-important-words">here</a>.</p>
<p>You don't <em>need</em> to use <code>MVx.Observable</code> to implement the functionality present in this project but I'd encourage you to at least give it a try as, like ReactiveUI, these patterns really can help manage UI state, ensure UX flows are testable and keep discrete behaviours... well, discrete.</p>
<p><code>MVx.Observable</code> uses <a href="https://www.nuget.org/packages/System.Reactive/"><code>System.Reactive</code></a> to embody its behaviours in a reactive manner and, as these behaviours interact with the UI, we need to use <code>IScheduler</code> instances to ensure we update the UI from the correct thread. This is somewhat complicated by the fact that we're writing a cross-platform app in which each platform uses a different <code>IScheduler</code> implementation to marshal updates to the appropriate platform threads. Fortunately, this complexity is easily tamed through the use of <a href="https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/partial-classes-and-methods">Partial Classes and Methods</a>.</p>
<p>In the <code>UnoChat.Client.Shared</code> project, add a <code>Schedulers.cs</code> file with the following content:</p>
<pre><code class="language-c#">using System;
using System.Reactive.Concurrency;
using System.Threading;
namespace UnoChat.Client
{
public static partial class Schedulers
{
static partial void OverrideDispatchScheduler(ref IScheduler scheduler);
private static readonly Lazy<IScheduler> DispatcherScheduler = new Lazy<IScheduler>(
() =>
{
IScheduler scheduler = null;
OverrideDispatchScheduler(ref scheduler);
return scheduler == null
? new SynchronizationContextScheduler(SynchronizationContext.Current)
: scheduler;
}
);
public static IScheduler Dispatcher => DispatcherScheduler.Value;
public static IScheduler Default => Scheduler.Default;
}
}
</code></pre>
<p>Then, in the UWP head, override the <code>OverrideDispatchScheduler</code> method to provide the correct scheduler for the platform by adding a <code>Schedulers.cs</code> file to the head project with the following content:</p>
<pre><code class="language-c#">using System.Reactive.Concurrency;
using Windows.UI.Xaml;
namespace UnoChat.Client
{
public static partial class Schedulers
{
static partial void OverrideDispatchScheduler(ref IScheduler scheduler)
{
scheduler = new CoreDispatcherScheduler(Window.Current.Dispatcher);
}
}
}
</code></pre>
<p>Now we can safely use the scheduler in our solution knowing that we're able to easily marshal operations to and from the UI thread.</p>
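To make that concrete, here is an illustrative pipeline (not code from the project) showing the division of labour the two schedulers give us; <code>connectionStates</code> and <code>SendButtonEnabled</code> are hypothetical names I've invented for the example:

```csharp
// Illustrative only: 'connectionStates' (an IObservable<HubConnectionState>)
// and 'SendButtonEnabled' are hypothetical names, not part of the project.
IDisposable subscription = connectionStates
    .ObserveOn(Schedulers.Default)                  // project the state off the UI thread
    .Select(state => state == HubConnectionState.Connected)
    .DistinctUntilChanged()
    .ObserveOn(Schedulers.Dispatcher)               // hop back to the UI thread...
    .Subscribe(isConnected => SendButtonEnabled = isConnected); // ...before touching UI state
```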
<h3 id="viewmodel">ViewModel</h3>
<p>Leveraging <code>MVx.Observable</code>, we're now going to create a <a href="https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93viewmodel">ViewModel</a> to manage all the interaction with SignalR, ensuring we don't need any logic in the view's code-behind.</p>
<p>Create a new <code>ViewModel</code> class in the <code>UnoChat.Client.Shared</code> project containing the following code:</p>
<pre><code class="language-c#">using Microsoft.AspNetCore.SignalR.Client;
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
using System.Reactive.Disposables;
using System.Reactive.Linq;
using System.Windows.Input;
using Uno.Extensions;
namespace UnoChat.Client
{
public class ViewModel : INotifyPropertyChanged
{
private readonly MVx.Observable.Property<string> _name;
private readonly MVx.Observable.Property<HubConnectionState> _state;
private readonly MVx.Observable.Command _connect;
private readonly MVx.Observable.Property<string> _lastMessageReceived;
private readonly MVx.Observable.Property<IEnumerable<string>> _allMessages;
private readonly MVx.Observable.Property<string> _messageToSend;
private readonly MVx.Observable.Property<bool> _messageToSendIsEnabled;
private readonly MVx.Observable.Command _sendMessage;
private readonly HubConnection _connection;
public event PropertyChangedEventHandler PropertyChanged;
private static string DefaultName => typeof(ViewModel)
.Assembly
.GetName()
.Name
.Split('.')
.Last();
public ViewModel()
{
_name = new MVx.Observable.Property<string>(DefaultName, nameof(Name), args => PropertyChanged?.Invoke(this, args));
_state = new MVx.Observable.Property<HubConnectionState>(HubConnectionState.Disconnected, nameof(State), args => PropertyChanged?.Invoke(this, args));
_connect = new MVx.Observable.Command();
_lastMessageReceived = new MVx.Observable.Property<string>(nameof(LastMessageReceived), args => PropertyChanged?.Invoke(this, args));
_allMessages = new MVx.Observable.Property<IEnumerable<string>>(Enumerable.Empty<string>(), nameof(AllMessages), args => PropertyChanged?.Invoke(this, args));
_messageToSend = new MVx.Observable.Property<string>(nameof(MessageToSend), args => PropertyChanged?.Invoke(this, args));
_messageToSendIsEnabled = new MVx.Observable.Property<bool>(false, nameof(MessageToSendIsEnabled), args => PropertyChanged?.Invoke(this, args));
_sendMessage = new MVx.Observable.Command();
_connection = new HubConnectionBuilder()
.WithUrl("https://unochatservice20200716114254.azurewebsites.net/ChatHub")
.WithAutomaticReconnect()
.Build();
}
private IDisposable ShouldEnableConnectWhenNotConnected()
{
return _state
.Select(state => state == HubConnectionState.Disconnected)
.ObserveOn(Schedulers.Dispatcher)
.Subscribe(_connect);
}
private IDisposable ShouldEnableMessageToSendWhenConnected()
{
return _state
.Select(state => state == HubConnectionState.Connected)
.Subscribe(_messageToSendIsEnabled);
}
private IDisposable ShouldConnectToServiceWhenConnectInvoked()
{
return _connect
.SelectMany(_ => Observable
.StartAsync(async () =>
{
await _connection.StartAsync();
return _connection.State;
}))
.ObserveOn(Schedulers.Dispatcher)
.Subscribe(_state);
}
private IDisposable ShouldDisconnectFromServiceWhenDisposed()
{
return Disposable.Create(() => _ = _connection.StopAsync());
}
private IDisposable ShouldListenForNewMessagesFromTheService()
{
return Observable
.Create<string>(
observer =>
{
Action<string, string> onReceiveMessage =
(user, message) => observer.OnNext($"{user}: {message}");
return _connection.On("ReceiveMessage", onReceiveMessage);
})
.ObserveOn(Schedulers.Dispatcher)
.Subscribe(_lastMessageReceived);
}
private IDisposable ShouldAddNewMessagesToAllMessages()
{
return _lastMessageReceived
.Where(message => !string.IsNullOrWhiteSpace(message))
.WithLatestFrom(_allMessages, (message, messages) => messages.Concat(message).ToArray())
.Subscribe(_allMessages);
}
private IDisposable ShouldEnableSendMessageWhenConnectedAndBothNameAndMessageToSendAreNotEmpty()
{
return Observable
.CombineLatest(_state, _name, _messageToSend, (state, name, message) => state == HubConnectionState.Connected && !(string.IsNullOrWhiteSpace(name) || string.IsNullOrWhiteSpace(message)))
.Subscribe(_sendMessage);
}
private IDisposable ShouldSendMessageToServiceThenClearSentMessage(IObservable<object> messageToSendBoxReturn)
{
var namedMessage = Observable
.CombineLatest(_name, _messageToSend, (name, message) => (Name: name, Message: message));
return Observable.Merge(_sendMessage, messageToSendBoxReturn)
.WithLatestFrom(namedMessage, (_, tuple) => tuple)
.Where(tuple => !string.IsNullOrEmpty(tuple.Message))
.SelectMany(tuple => Observable
.StartAsync(() => _connection.InvokeAsync("SendMessage", tuple.Name, tuple.Message)))
.Select(_ => string.Empty)
.ObserveOn(Schedulers.Dispatcher)
.Subscribe(_messageToSend);
}
public IDisposable Activate(IObservable<object> messageToSendBoxReturn)
{
return new CompositeDisposable(
ShouldEnableConnectWhenNotConnected(),
ShouldEnableMessageToSendWhenConnected(),
ShouldConnectToServiceWhenConnectInvoked(),
ShouldDisconnectFromServiceWhenDisposed(),
ShouldListenForNewMessagesFromTheService(),
ShouldAddNewMessagesToAllMessages(),
ShouldEnableSendMessageWhenConnectedAndBothNameAndMessageToSendAreNotEmpty(),
ShouldSendMessageToServiceThenClearSentMessage(messageToSendBoxReturn)
);
}
public string Name
{
get => _name.Get();
set => _name.Set(value);
}
public HubConnectionState State => _state.Get();
public string LastMessageReceived => _lastMessageReceived.Get();
public IEnumerable<string> AllMessages => _allMessages.Get();
public string MessageToSend
{
get => _messageToSend.Get();
set => _messageToSend.Set(value);
}
public bool MessageToSendIsEnabled => _messageToSendIsEnabled.Get();
public ICommand Connect => _connect;
public ICommand SendMessage => _sendMessage;
}
}
</code></pre>
<p>While this code is fairly lengthy, it implements a large number of behaviours: asynchronous connection management and message handling for SignalR, along with enabling and disabling controls based on the current state of the UI and/or connection. All these behaviours are separated into discrete methods, allowing them to be easily modified, supplemented or removed by simply changing, adding or removing an appropriately named "ShouldXXXX" method.</p>
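A further payoff of this structure is testability: each behaviour is just a subscription between observables, so it can be exercised without any UI at all. A hypothetical sketch, assuming an xUnit-style test project (the test name and any accessibility arrangements, e.g. <code>InternalsVisibleTo</code>, are my assumptions, not part of the original code):

```csharp
// Hypothetical test sketch - the test framework and names are assumptions.
[Fact]
public void MessageToSendIsDisabledBeforeConnecting()
{
    var viewModel = new ViewModel();

    // Activate the behaviours with a Never observable standing in for the
    // "user pressed return" event, since no TextBox exists in a test.
    using var behaviours = viewModel.Activate(Observable.Never<object>());

    // ShouldEnableMessageToSendWhenConnected only flips this to true once the
    // hub reports HubConnectionState.Connected, so it starts out false.
    Assert.False(viewModel.MessageToSendIsEnabled);
}
```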
<h3 id="view">View</h3>
<p>With the ViewModel in place and taking care of all the fundamental logic for the application, we now need to use it from the view. In the code-behind file <code>MainPage.xaml.cs</code>, add the following code:</p>
<pre><code class="language-c#">using System;
using System.Reactive.Linq;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Input;
using Windows.UI.Xaml.Navigation;
// The Blank Page item template is documented at http://go.microsoft.com/fwlink/?LinkId=402352&clcid=0x409
namespace UnoChat.Client
{
/// <summary>
/// An empty page that can be used on its own or navigated to within a Frame.
/// </summary>
public sealed partial class MainPage : Page
{
private readonly ViewModel _viewModel;
private IDisposable _behaviours;
public MainPage()
{
this.InitializeComponent();
_viewModel = new ViewModel();
DataContext = _viewModel;
}
protected override void OnNavigatedTo(NavigationEventArgs e)
{
base.OnNavigatedTo(e);
var messageToSendReturn = Observable
.FromEvent<KeyEventHandler, KeyRoutedEventArgs>(
handler => (s, k) => handler(k),
handler => MessageToSendTextBox.KeyUp += handler,
handler => MessageToSendTextBox.KeyUp -= handler)
.Where(k => k.Key == Windows.System.VirtualKey.Enter);
_behaviours = _viewModel.Activate(messageToSendReturn);
}
protected override void OnNavigatingFrom(NavigatingCancelEventArgs e)
{
base.OnNavigatingFrom(e);
if (_behaviours != null)
{
_behaviours.Dispose();
_behaviours = null;
}
}
}
}
</code></pre>
<p>In this code we instantiate the ViewModel, set it as the View's <code>DataContext</code> and call its <code>Activate</code> method when the user navigates to this view. Note how we pass an observable to the <code>Activate</code> method which will emit a value when the user hits return in the <code>MessageToSendTextBox</code>. This allows us to receive feedback from the UI without compromising View/ViewModel segregation.</p>
<p>The <code>Activate</code> method returns an <code>IDisposable</code> which, when disposed, will tear down all the associated behaviours, unsubscribe from events and release resources. Accordingly, we dispose of this <code>IDisposable</code> when the user navigates away from this view, thereby correctly managing the lifetime of the ViewModel's resources.</p>
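The mechanics behind this teardown are worth spelling out: <code>CompositeDisposable</code> simply disposes every behaviour subscription it was given in a single call. A small sketch of that contract (this uses <code>Disposable.Create</code> from the <code>System.Reactive</code> package; the counter is mine, for illustration only):

```csharp
using System;
using System.Reactive.Disposables;

// Each "ShouldXXXX" method returns an IDisposable; Activate bundles them into
// one CompositeDisposable so a single Dispose() releases every behaviour.
var tornDown = 0;
var behaviours = new CompositeDisposable(
    Disposable.Create(() => tornDown++),   // e.g. ShouldListenForNewMessagesFromTheService
    Disposable.Create(() => tornDown++));  // e.g. ShouldDisconnectFromServiceWhenDisposed

behaviours.Dispose();                      // what OnNavigatingFrom triggers
// tornDown is now 2 - every behaviour was released by the one call.
```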
<p>Finally, let's implement our UI by editing <code>MainPage.xaml</code> to the following:</p>
<pre><code class="language-xml"><Page
x:Class="UnoChat.Client.MainPage"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="using:UnoChat.Client"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
mc:Ignorable="d">
<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
<Grid.RowDefinitions>
<RowDefinition Height="Auto"/>
<RowDefinition Height="*"/>
<RowDefinition Height="Auto"/>
</Grid.RowDefinitions>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="Auto"/>
<ColumnDefinition Width="*"/>
<ColumnDefinition Width="Auto"/>
</Grid.ColumnDefinitions>
<TextBlock Text="Name:" Style="{StaticResource BaseTextBlockStyle}" Grid.Row="0" Grid.Column="0" Margin="4" VerticalAlignment="Center" />
<TextBox Text="{Binding Path=Name, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}" Grid.Row="0" Grid.Column="1" Margin="4"/>
<Button Command="{Binding Path=Connect}" Content="Connect" Grid.Row="0" Grid.Column="2" Margin="4" Padding="16,4" HorizontalAlignment="Stretch"/>
<ItemsControl ItemsSource="{Binding Path=AllMessages}" Grid.Row="1" Grid.ColumnSpan="3" Margin="4" />
<TextBlock Text="Message:" Style="{StaticResource BaseTextBlockStyle}" Grid.Row="2" Grid.Column="0" Margin="4" VerticalAlignment="Center"/>
<TextBox x:Name="MessageToSendTextBox" Text="{Binding Path=MessageToSend, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}" IsEnabled="{Binding Path=MessageToSendIsEnabled}" Grid.Row="2" Grid.Column="1" Margin="4"/>
<Button Command="{Binding Path=SendMessage}" Content="Send" Grid.Row="2" Grid.Column="2" Margin="4" Padding="16,4" HorizontalAlignment="Stretch"/>
</Grid>
</Page>
</code></pre>
<p>And that's that. Let's give it a go!</p>
<h3 id="testing">Testing</h3>
<p>Set the <code>UnoChat.Client.UWP</code> project as the "Startup Project" and hit F5. After a short compilation cycle you should see the following:</p>
<img src="/Content/UnoChat/UnoChatClientUWPRunningI.png" class="img-responsive" style="margin: auto; width:40%; margin-top: 6px; margin-bottom: 6px;" alt="UnoChat Client UWP Running - 1"/>
<p>Click "Connect" and, after a short pause you should see the "Message" textbox become enabled. Enter some text in the "Message" textbox and click the "Send" button (or hit enter) and the message should be sent to SignalR. SignalR will then publish this message to all connected clients which, given we are one of the connected clients, will result in the message being sent back to us and being displayed in ItemsControl in the middle of the window as shown below:</p>
<img src="/Content/UnoChat/UnoChatClientUWPRunningII.png" class="img-responsive" style="margin: auto; width:40%; margin-top: 6px; margin-bottom: 6px;" alt="UnoChat Client UWP Running - 2"/>
<p>But having one client on one platform is no fun!</p>
<p>Let's kick off the Android head project by right clicking on it and selecting <code>Debug->Start New Instance</code>. After another short compilation, an Android emulator should start and our app should be deployed to it and run. Clicking "Connect" then (after connection is complete) entering a message in the "Message" text box and hitting "Send" should result in the following:</p>
<img src="/Content/UnoChat/UnoChatRunningOnUwpAndAndroid.png" class="img-responsive" style="margin: auto; width:40%; margin-top: 6px; margin-bottom: 6px;" alt="UnoChat Running On Uwp And Android"/>
<p>Nice, two platforms for the price of one!</p>
<p>Now (and you'll need to be <a href="https://docs.microsoft.com/en-us/xamarin/ios/get-started/installation/windows/connecting-to-mac/">paired to a Mac for this</a>), right click the iOS head project and select <code>Debug->Start New Instance</code>. Once the iOS simulator has started, follow the same steps as with the other head projects and you'll see the following:</p>
<img src="/Content/UnoChat/UnoChatRunningOnUwpAndroidAndiOS.png" class="img-responsive" style="margin: auto; width:40%; margin-top: 6px; margin-bottom: 6px;" alt="UnoChat Running On Uwp, Android and iOS"/>
<p>That's three for three.</p>
<p>Now, right click on the WASM head and select <code>Debug->Start New Instance</code>. After a short compilation you'll see a browser window appear and.... get stuck at the splash screen.</p>
<p>Booo! So close...</p>
<h3 id="getting-wasm-linked">Getting WASM Linked</h3>
<p>Opening the browser's "Developer Tools" with F12 we see this on the Console:</p>
<img src="/Content/UnoChat/UnoChatWasmLinkerIssues.png" class="img-responsive" style="margin: auto; width:40%; margin-top: 6px; margin-bottom: 6px;" alt="UnoChat Wasm Link Issues"/>
<p>Well, we've seen these "A suitable constructor ... could not be found" exceptions <a href="https://ian.bebbs.co.uk/posts/UnoWithSwagger#this-missing-links">before</a>. They're due to the assemblies containing the associated types being omitted by the Mono linker. By looking up which assemblies the various types belong to we're able to explicitly instruct the linker to include the assemblies by modifying the <code>LinkerConfig.xml</code> file in the WASM head project.</p>
<p>After a few "start -> fail -> find type -> amend config" iterations I ended up with the following in my <code>LinkerConfig.xml</code> file:</p>
<pre><code class="language-xml"><linker>
<assembly fullname="UnoChat.Client.Wasm" />
<assembly fullname="Uno.UI" />
<assembly fullname="Microsoft.AspNetCore.Http.Connections.Client"/>
<assembly fullname="Microsoft.Extensions.Options"/>
<assembly fullname="Microsoft.AspNetCore.SignalR.Client"/>
<assembly fullname="Microsoft.AspNetCore.SignalR.Client.Core"/>
<assembly fullname="Microsoft.AspNetCore.SignalR.Protocols.Json"/>
<assembly fullname="System.Core">
<!-- This is required by Json.NET and any Expression.Compile caller -->
<type fullname="System.Linq.Expressions*" />
</assembly>
</linker>
</code></pre>
<p>And, with this in place, starting the WASM head resulted in:</p>
<img src="/Content/UnoChat/UnoChatWasmRunning.png" class="img-responsive" style="margin: auto; width:40%; margin-top: 6px; margin-bottom: 6px;" alt="UnoChat Wasm Running"/>
<p>ooOOoo... exciting!! Could it be??</p>
<video class="img-responsive" style="margin: auto; width:90%; margin-top: 6px; margin-bottom: 6px;" controls autoplay loop>
<source src="/Content/UnoChat/UnoChatAllWingsCheckin.mp4" type="video/mp4"/>
Your browser does not support the video tag
</video>
<p>Yes, yes it could.</p>
<p>Here we have <code>UnoChat.Client.Console</code> (Red Leader), <code>UnoChat.Client.UWP</code> (Red 3), <code>UnoChat.Client.Wasm</code> (Red 6), <code>UnoChat.Client.Droid</code> (Red 5) and <code>UnoChat.Client.iOS</code> (Red Buttons) all connected to the SignalR service and all receiving real-time updates.</p>
<p>Nice.</p>
<h2 id="conclusion">Conclusion</h2>
<p>And there we go. In about an hour (hey I stopped for lunch) we have an app which is capable of receiving real-time updates and which runs on pretty much every OS - either natively or through the browser. Moreover, the code to deliver this somewhat epic feat is short, concise, maintainable and - most importantly - 99% shared amongst the various project heads.</p>
<p>The Uno platform really has matured amazingly well since I first blogged about it as part of last December's <a href="https://ian.bebbs.co.uk/posts/Uno">Third Annual C# Advent</a>. Back then I found that it worked... mostly... but not all the head projects functioned correctly and it required a whole host of kludges to get them all running from a shared codebase. Just over seven months later the change is incredible: you can now expect things to work "out-of-the-box" and any minor difference or issue on a given platform to be easy to work around.</p>
<p>I can't wait to see what the Uno Platform has in store for us at <a href="https://platform.uno/blog/unoconf-2020-virtual-free-aug-12-2020-save-the-date/">UnoConf 2020</a> (personally I'm hoping for Uno on Linux and - most importantly - Raspberry Pis). Hope to see you all there on August 12th!</p>
<h2 id="lastly">Lastly...</h2>
<p>If you're interested in using the Uno Platform to deliver cross-platform apps or have an upcoming project for which you'd like evaluate Uno Platform's fit, then please feel free to drop me a line to discuss your project/ideas using any of the links below or from my <a href="https://ian.bebbs.co.uk/about">about page</a>. As a freelance software developer and remote contractor I'm always interested in hearing from potential new clients or about potential new collaborations.</p>
<p>In this article we will see how to use <a href="https://platform.uno/">Uno Platform</a> and <a href="https://docs.microsoft.com/en-us/aspnet/signalr/overview/getting-started/introduction-to-signalr">SignalR</a> to create applications that run on all major platforms - PC, Mac, Android, iOS <em><strong>and</strong></em> Web - and are capable of receiving real-time updates from a SignalR service. As you will see, these two technologies work incredibly well together, providing an elegant solution to a use-case which, just a few years ago, would have been fiendishly difficult.</p>
http://ian.bebbs.co.uk/posts/UnoWasmDocker
Uno WebAssembly Containerization
2020-06-22T00:00:00Z
<h2 id="intro">Intro</h2>
<p>In this post I will show how to build and run <a href="https://platform.uno/">Uno Platform</a> WebAssembly projects within a Docker container. While I'm pretty familiar with containerization technologies, Uno's transformation of C#/XAML to WebAssembly via Mono was - and, to a certain extent, still is - a bit of a mystery. As such, this post will very much be an exploration of the technical underpinnings of these technologies.</p>
<h2 id="deployment">Deployment</h2>
<p>While most project heads in an Uno solution are native applications that get deployed to and run on a device (usually from an App store), the WebAssembly (Wasm) head is different. Artifacts from the compilation of the Wasm project need to be deployed to a server which is capable of serving them to a browser when requested.</p>
<p>This deployment is typically achieved by publishing the Wasm artifacts to IIS hosted on a [virtual] server, <a href="https://nicksnettravels.builttoroam.com/post-2019-03-20-publishing-uno-webassembly-wasm-to-azure-app-service-aspx/">deploying an Azure App-Service</a> replete with all artifacts or simply copying the artifacts to a <a href="https://nicksnettravels.builttoroam.com/post-2019-03-20-deploying-uno-wasm-using-blob-storage-aspx/">file store capable of responding to HTTP requests</a>.</p>
<p>However there is another modern deployment mechanism that has, in recent years, come to dominate the DevOps landscape: Containerization.</p>
<h2 id="add-docker-support">Add -> Docker Support...</h2>
<p>Visual Studio has had first-class support for creating and running Docker containers for quite a while now, and the integration is very mature. In most instances, containerizing a project has become as simple as right-clicking on the project, selecting "Add -> Docker Support..." and following the resulting dialogs. Unfortunately, despite being a web project which Visual Studio knows how to "Publish...", right-clicking on an Uno Wasm project does not present the option to add Docker Support.</p>
<p>Given I was working on a solution with a containerized ReST API (along with <a href="https://ian.bebbs.co.uk/posts/UnoWithSwagger">NSwag-generated .NET Standard client project</a>) I really wanted to be able to containerize the Wasm app too so it could form part of a <a href="https://docs.docker.com/compose/">composed deployment</a>.</p>
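<p>For illustration, a composed deployment pairing the Wasm front-end with a containerized API might look something like the following <code>docker-compose.yml</code>. Note that the service names and image tags here are hypothetical placeholders, not part of the actual solution:</p>
<pre><code class="language-yaml">version: "3.8"
services:
  # Hypothetical ReST API image - substitute your own containerized API
  api:
    image: example/rest-api:latest
    ports:
      - "5001:80"
  # The containerized Uno Wasm app this post sets out to build
  wasm:
    image: example/uno-wasm-app:latest
    ports:
      - "5000:80"
    depends_on:
      - api
</code></pre>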
<p>To simplify the process of working this out, I created a new Uno Platform project named <code>ContaineredUnoWasm</code> using version 2.4.0.0 of the Uno Platform templates and worked to containerize that. This project (completed with working containerized builds) can be found <a href="https://github.com/ibebbs/ContaineredUnoWasm">here</a>.</p>
<h2 id="building-in-a-container">Building in a container</h2>
<h3 id="imitation-is-the-sincerest-form-of-flattery">Imitation is the sincerest form of flattery</h3>
<p>Since I was going to containerize the deployment of the Wasm project, I decided to go the whole hog and allow for <a href="https://docs.docker.com/develop/develop-images/multistage-build/">multi-stage containerized builds</a>, much like those created natively by Visual Studio. Without really thinking too much about it, my first - somewhat naïve - approach was simply to copy and customise a <code>Dockerfile</code> created by Visual Studio; in effect, something like this:</p>
<pre><code class="language-dockerfile">FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 80
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj", "ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj"]
RUN dotnet restore "ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj"
COPY ["ContaineredUnoWasm/ContaineredUnoWasm.Shared/", "ContaineredUnoWasm/ContaineredUnoWasm.Shared/"]
COPY ["ContaineredUnoWasm/ContaineredUnoWasm.Wasm", "ContaineredUnoWasm/ContaineredUnoWasm.Wasm/"]
RUN dotnet build "ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "ContaineredUnoWasm.Wasm.dll"]
</code></pre>
<p>We can then try building a container image from this <code>Dockerfile</code> by executing the following command (from the root of our repo):</p>
<pre><code>$> docker build -f .\ContaineredUnoWasm\ContaineredUnoWasm.Wasm\Dockerfile .
</code></pre>
<p>Which fails with the somewhat confusing error:</p>
<pre><code>Step 10/16 : RUN dotnet build "ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj" -c Release -o /app/build
...
Build FAILED.
...
Downloading mono-wasm-502dca36d36 to /tmp/mono-wasm-502dca36d36.zip
/root/.nuget/packages/uno.wasm.bootstrap/1.2.0/build/Uno.Wasm.Bootstrap.targets(124,5): error : System.ComponentModel.Win32Exception (2): No such file or directory [/src/ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj]
</code></pre>
<p>Oooh-kay. We know that file exists; it's the very file we asked Docker to build!</p>
<h3 id="jaylee-to-the-rescue">Jaylee to the rescue!</h3>
<p>Casting around for ideas as to what might be causing this issue, I came across <a href="https://jaylee.org/archive/2019/03/21/azure-devops-wasm-build-container.html">this post</a> from the one and only <a href="https://twitter.com/jlaban">Jérôme Laban</a> which shows how to build the Wasm project in Azure Devops using a container... which all sounded rather promising.</p>
<p>Reading this post shows that Jérôme is using a custom container image - <code>nventive/wasm-build:1.0-bionic</code> - within which the Wasm project is built. Looking this image up on <a href="https://hub.docker.com/r/nventive/wasm-build">Docker Hub</a> and then navigating to the <a href="https://github.com/nventive/docker">source repository</a> allows us to see what bizarre wizardry Jérôme is using to build the Wasm project in a container. And here it is:</p>
<pre><code class="language-dockerfile">FROM mcr.microsoft.com/dotnet/core/sdk:2.2.105-bionic
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
RUN echo "deb https://download.mono-project.com/repo/ubuntu stable-bionic main" | tee /etc/apt/sources.list.d/mono-official-stable.list
RUN apt-get update
# Install mono, msbuild and dependencies
RUN apt-get -y install sudo unzip python mono-devel msbuild libc6 ninja-build
# Setup for GitVersion 4.x
RUN sudo apt-get install -y libgit2-dev libgit2-26 && \
ln -s /usr/lib/x86_64-linux-gnu/libgit2.so /lib/x86_64-linux-gnu/libgit2-15e1193.so
# Install node and puppeteer dependencies
RUN curl -sL https://deb.nodesource.com/setup_11.x | sudo -E bash - && \
sudo apt install -y nodejs gconf-service libasound2 libatk1.0-0 libc6 libcairo2 libcups2 \
libdbus-1-3 libexpat1 libfontconfig1 libgcc1 libgconf-2-4 libgdk-pixbuf2.0-0 libglib2.0-0 \
libgtk-3-0 libnspr4 libpango-1.0-0 libpangocairo-1.0-0 libstdc++6 libx11-6 libx11-xcb1 libxcb1 \
libxcomposite1 libxcursor1 libxdamage1 libxext6 libxfixes3 libxi6 libxrandr2 libxrender1 libxss1 \
libxtst6 ca-certificates fonts-liberation libappindicator1 libnss3 lsb-release xdg-utils wget
# Install and activate emscripten
RUN git clone https://github.com/juj/emsdk.git && \
sudo chmod 777 /emsdk && \
cd emsdk && \
./emsdk install sdk-1.38.28-64bit && \
./emsdk install sdk-1.38.30-64bit && \
./emsdk install sdk-1.38.31-64bit && \
./emsdk install sdk-1.38.34-64bit && \
./emsdk install latest && \
./emsdk activate sdk-1.38.31-64bit && \
sudo chmod -R 777 /emsdk
</code></pre>
<p>No wonder our Docker build was failing: look at all the additional dependencies needed to build the Wasm project.</p>
<p>You know, contrary to intuition, the more I learn about how the C#->WebAssembly transformation works, the <em>more</em> it seems like magic.</p>
<h3 id="meanwhile-back-in-the-dockerfile">Meanwhile, back in the Dockerfile</h3>
<p>Right, so armed with Jérôme's magical mystical container image of power, we'll try rewriting our Dockerfile as follows:</p>
<pre><code class="language-dockerfile">FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 80
FROM nventive/wasm-build:latest AS build
WORKDIR /src
COPY ["ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj", "ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj"]
RUN dotnet restore "ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj"
COPY ["ContaineredUnoWasm/ContaineredUnoWasm.Shared/", "ContaineredUnoWasm/ContaineredUnoWasm.Shared/"]
COPY ["ContaineredUnoWasm/ContaineredUnoWasm.Wasm", "ContaineredUnoWasm/ContaineredUnoWasm.Wasm/"]
RUN dotnet build "ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "ContaineredUnoWasm.Wasm.dll"]
</code></pre>
<p>Looks promising, let's give it a shot. Running this:</p>
<pre><code>$> docker build -f .\ContaineredUnoWasm\ContaineredUnoWasm.Wasm\Dockerfile .
</code></pre>
<p>Produces this:</p>
<pre><code>Step 8/14 : RUN dotnet build "ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj" -c Release -o /app/build
---> Running in 1f7ff2159acc
Microsoft (R) Build Engine version 15.9.20+g88f5fadfbe for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.
Restoring packages for /src/ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj...
Installing Microsoft.Extensions.DependencyInjection.Abstractions 1.1.0.
Installing System.Runtime.CompilerServices.Unsafe 4.3.0.
Installing Microsoft.Extensions.Logging 1.1.1.
Installing System.ValueTuple 4.4.0.
Installing CommonServiceLocator 2.0.5.
Installing System.Buffers 4.4.0.
Installing System.Runtime.CompilerServices.Unsafe 4.5.2.
Installing Microsoft.Extensions.Primitives 1.1.0.
Installing Uno.SourceGenerationTasks 2.0.6.
Installing Uno.Core 2.0.0.
Installing System.Memory 4.5.2.
Installing Uno.Core.Build 2.0.0.
Installing System.Runtime.InteropServices.WindowsRuntime 4.3.0.
Installing Microsoft.Extensions.Configuration.Abstractions 1.1.1.
Installing Microsoft.Extensions.Logging.Abstractions 1.1.1.
Installing Uno.UI 2.4.0.
Installing Microsoft.Extensions.Logging.Console 1.1.1.
Installing Microsoft.Extensions.Logging.Filter 1.1.1.
Installing Uno.Wasm.Bootstrap 1.2.0.
Installing Uno.Wasm.Bootstrap.DevServer 1.2.0.
Generating MSBuild file /src/ContaineredUnoWasm/ContaineredUnoWasm.Wasm/obj/ContaineredUnoWasm.Wasm.csproj.nuget.g.props.
Generating MSBuild file /src/ContaineredUnoWasm/ContaineredUnoWasm.Wasm/obj/ContaineredUnoWasm.Wasm.csproj.nuget.g.targets.
Restore completed in 27.69 sec for /src/ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj.
It was not possible to find any compatible framework version
The specified framework 'Microsoft.NETCore.App', version '3.0.0' was not found.
- Check application dependencies and target a framework version installed at:
/usr/share/dotnet/
- Installing .NET Core prerequisites might help resolve this problem:
https://go.microsoft.com/fwlink/?LinkID=798306&clcid=0x409
- The .NET Core framework and SDK can be installed from:
https://aka.ms/dotnet-download
- The following versions are installed:
2.2.3 at [/usr/share/dotnet/shared/Microsoft.NETCore.App]
/root/.nuget/packages/uno.sourcegenerationtasks/2.0.6/build/netstandard1.0/Uno.SourceGenerationTasks.targets(127,2): error : Generation failed, error code 150 [/src/ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj]
Build FAILED.
</code></pre>
<p>Oh, interesting stuff. This is an error we can work with and, if I'm not mistaken, the <code>nventive/wasm-build</code> Dockerfile started with <code>FROM mcr.microsoft.com/dotnet/core/sdk:2.2.105-bionic</code>. Perhaps this Dockerfile is just a little out of date?</p>
<h3 id="updating-wasm-build">Updating Wasm-Build</h3>
<br/>
<h4 id="edit">- - - - EDIT - - - -</h4>
<p>At this point I forked, cloned, updated and built a new version of the <a href="https://github.com/nventive/docker"><code>nventive/docker</code></a> image. However, towards the end of authoring this blog post, a hunch caused me to search for "docker" in the "unoplatform" organisation on Github. On the last page of "Code" hits, I found a link to this <a href="https://github.com/unoplatform/Uno.Wasm.Bootstrap/blob/master/Readme.md"><code>Readme.md</code></a> in which, about half way down, is a reference to another container image <a href="https://hub.docker.com/r/unoplatform/wasm-build"><code>unoplatform/wasm-build</code></a>.</p>
<p>This image was completely up to date and meant I no longer had to build a custom version so the rest of this (somewhat painful) section has been removed. Conversely, I decided to leave the slight misdirection in the section above as I thought it provided quite an insight into the complexities involved in building the Wasm output.</p>
<h4 id="end-edit">- - - - END EDIT - - - -</h4>
<br/>
<h3 id="one-step-back-two-steps-forward">One step back, two steps forward</h3>
<p>Ok, having found a new <code>wasm-build</code> image, let's try integrating it into the <code>Dockerfile</code> for building <code>ContaineredUnoWasm</code> as follows:</p>
<pre><code class="language-dockerfile">FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 80
FROM unoplatform/wasm-build:latest AS build
WORKDIR /src
COPY ["ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj", "ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj"]
RUN dotnet restore "ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj"
COPY ["ContaineredUnoWasm/ContaineredUnoWasm.Shared/", "ContaineredUnoWasm/ContaineredUnoWasm.Shared/"]
COPY ["ContaineredUnoWasm/ContaineredUnoWasm.Wasm", "ContaineredUnoWasm/ContaineredUnoWasm.Wasm/"]
RUN dotnet build "ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "ContaineredUnoWasm.Wasm.dll"]
</code></pre>
<p>Which is built using the command:</p>
<pre><code>$> docker build -f .\ContaineredUnoWasm\ContaineredUnoWasm.Wasm\Dockerfile . -t ibebbs/containeredunowasm:latest
</code></pre>
<p>And results in:</p>
<pre><code>Successfully built c2c4c48cf33a
Successfully tagged ibebbs/containeredunowasm:latest
</code></pre>
<p>Huzzah, it worked!</p>
<img src="/Content/UnoWasmDocker/Brilliant.jpg" class="img-responsive" style="margin: auto; width:60%; margin-top: 6px; margin-bottom: 6px;" alt="BRILLIANT!!!!">
<br/>
<h2 id="running-from-a-container">Running from a container</h2>
<p>Now, unless I'm very much mistaken, running our containerized Wasm app should be as simple as starting the container and navigating to the exposed port in a browser. As such, let's run the command:</p>
<pre><code>$> docker run -p 5000:80 ibebbs/containeredunowasm:latest
</code></pre>
<p>Which results in:</p>
<pre><code>It was not possible to find any installed .NET Core SDKs
Did you mean to run .NET Core SDK commands? Install a .NET Core SDK from:
https://aka.ms/dotnet-download
</code></pre>
<p>Errr... no I didn't, I wanted to run my app service. Let's see what's going on here by opening an interactive shell within the container and listing the contents:</p>
<pre><code>docker run -it --entrypoint /bin/bash ibebbs/containeredunowasm:latest
root@52acb17752f2:/app# dir
AppManifest.js Uno.UI.css
Assets Uno.UI.dll
CommonServiceLocator.dll Uno.UI.js
Fonts.css Uno.Xaml.dll
Microsoft.Extensions.Configuration.Abstractions.dll Uno.dll
Microsoft.Extensions.DependencyInjection.Abstractions.dll _compressed_br
Microsoft.Extensions.Logging.Abstractions.dll _compressed_gz
Microsoft.Extensions.Logging.Console.dll corebindings.o
Microsoft.Extensions.Logging.Filter.dll dotnet.js
Microsoft.Extensions.Logging.dll dotnet.wasm
Microsoft.Extensions.Primitives.dll driver.o
Properties index.html
System.Buffers.dll jquery-pep.js
System.Collections.Immutable.dll managed-4aa732b23652301ed854d2dd646ce71b0b0b5e3f
System.ComponentModel.dll mono-config.js
System.Linq.dll normalize.css
System.Memory.dll refs
System.Numerics.Vectors.dll require.js
System.Reflection.Emit.ILGeneration.dll runtime.js
System.Reflection.Emit.Lightweight.dll server.py
System.Runtime.CompilerServices.Unsafe.dll service-worker.js
System.Runtime.InteropServices.WindowsRuntime.dll setImmediate.js
System.Threading.dll uno-bootstrap.css
Uno.Core.dll uno-bootstrap.js
Uno.Foundation.dll uno-config.js
Uno.UI.Toolkit.dll web.config
Uno.UI.Wasm.dll zlib-helper.o
</code></pre>
<p>What? No <code>ContaineredUnoWasm.Wasm.dll</code> but an "index.html"? This looks suspiciously like a...</p>
<img src="/Content/UnoWasmDocker/MindBlown.gif" class="img-responsive" style="margin: auto; margin-top: 6px; margin-bottom: 6px;" alt="Mind Blown">
<p>Yup, <em><strong>very</strong></em> much mistaken. The result of compiling the Wasm project isn't a hosted service but is ... of course ... just static content which must be hosted by another service. There's really nothing for the container to run and, as such, we're going to need a web server to serve this content.</p>
<h3 id="kestrel-vs-nginx">Kestrel vs Nginx</h3>
<p>Now, this being a .net solution and me being a .net fanboi, I really wanted to serve this content from a .net webserver. I was thinking it should be possible to find an off-the-[docker hub]-shelf image of a Kestrel webserver which was/could be configured to serve static content. But no, despite a significant search, it appeared that, if I wanted to serve the content from Kestrel, I'd have to add, build and containerize a bespoke web project.</p>
<p>So instead I decided to use <a href="https://hub.docker.com/_/nginx/">nginx</a> which is configured to <a href="https://github.com/docker-library/docs/tree/master/nginx#hosting-some-simple-static-content">serve static content</a> out-of-the-box.</p>
<p>To use nginx, I amended the <code>Dockerfile</code> for <code>ContaineredUnoWasm</code> to this:</p>
<pre><code class="language-dockerfile">FROM unoplatform/wasm-build:latest AS build
WORKDIR /src
COPY ["ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj", "ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj"]
RUN dotnet restore "ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj"
COPY ["ContaineredUnoWasm/ContaineredUnoWasm.Shared/", "ContaineredUnoWasm/ContaineredUnoWasm.Shared/"]
COPY ["ContaineredUnoWasm/ContaineredUnoWasm.Wasm/", "ContaineredUnoWasm/ContaineredUnoWasm.Wasm/"]
RUN dotnet build "ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj" -c Release -o /app/publish
FROM nginx:alpine
EXPOSE 80
COPY --from=publish /app/publish /usr/share/nginx/html
</code></pre>
<p>Then rebuilt and started the image:</p>
<pre><code>$> docker build -f .\ContaineredUnoWasm\ContaineredUnoWasm.Wasm\Dockerfile . -t ibebbs/containeredunowasm:latest
...
Successfully built 4c1fc5cb9efd
Successfully tagged ibebbs/containeredunowasm:latest
$> docker run -p 5000:80 ibebbs/containeredunowasm:latest
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
</code></pre>
<p>Looks promising. Let's hit <code>http://localhost:5000</code> with a browser:</p>
<img src="/Content/UnoWasmDocker/UnoLogo.png" class="img-responsive" style="margin: auto; width:80%; margin-top: 6px; margin-bottom: 6px;" alt="Uno Logo">
<p>Well, that's better. Let's see if we can resolve the "Incorrect response MIME type" issue by telling nginx about the MIME types to serve. This is done by adding a <code>mime.types</code> file to our project containing:</p>
<pre><code>types {
text/html html htm shtml;
text/css css;
text/javascript js;
application/wasm wasm;
application/octet-stream dll clr;
application/json json;
application/font-woff woff woff2;
}
</code></pre>
<p>And adding it to our <code>nginx</code> container by amending the <code>Dockerfile</code> for <code>ContaineredUnoWasm</code> as shown here:</p>
<pre><code class="language-dockerfile">FROM unoplatform/wasm-build:latest AS build
WORKDIR /src
COPY ["ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj", "ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj"]
RUN dotnet restore "ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj"
COPY ["ContaineredUnoWasm/ContaineredUnoWasm.Shared/", "ContaineredUnoWasm/ContaineredUnoWasm.Shared/"]
COPY ["ContaineredUnoWasm/ContaineredUnoWasm.Wasm/", "ContaineredUnoWasm/ContaineredUnoWasm.Wasm/"]
RUN dotnet build "ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj" -c Release -o /app/publish
FROM nginx:alpine
EXPOSE 80
COPY --from=publish /app/publish /usr/share/nginx/html
COPY ["ContaineredUnoWasm/ContaineredUnoWasm.Wasm/mime.types", "/etc/nginx/mime.types" ]
</code></pre>
<p>Now, rebuilding the image, running the container and hitting the endpoint in a browser gives us:</p>
<img src="/Content/UnoWasmDocker/HelloWorld.png" class="img-responsive" style="margin: auto; width:60%; margin-top: 6px; margin-bottom: 6px;" alt="Hello World!">
<p>Wahoo!</p>
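<p>As a quick sanity check that our extension-to-MIME mappings behave as intended, the same table can be mirrored with Python's <code>mimetypes</code> module. This is purely illustrative and not part of the build; nginx itself reads only the <code>mime.types</code> file:</p>
<pre><code class="language-python">import mimetypes

# Mirror the custom entries from our mime.types file. The .wasm and .dll
# mappings are the ones the Mono/Uno bootstrapper is picky about.
mimetypes.add_type("application/wasm", ".wasm")
mimetypes.add_type("application/octet-stream", ".dll")

# Check what would be served for a few representative files
for name in ("index.html", "dotnet.wasm", "Uno.UI.dll"):
    guessed, _ = mimetypes.guess_type(name)
    print(f"{name} -> {guessed}")
</code></pre>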
<h3 id="a-little-house-keeping">A little house keeping</h3>
<p>Of course, needing to add a "mime.types" file with this exact content to any Wasm project we want to containerize is a pain, so instead let's make a container image that already includes this file. In a fork of the <a href="https://github.com/unoplatform/docker"><code>unoplatform/docker</code></a> repository, I'll add a new folder called <code>wasm-serve</code> within which I'll add the <code>mime.types</code> file from above and the following <code>Dockerfile</code>:</p>
<pre><code class="language-dockerfile">FROM nginx:alpine
COPY ["mime.types", "/etc/nginx/mime.types" ]
</code></pre>
<p>Building this image with the command <code>docker build . -t ibebbs/wasm-serve:latest</code> allows me to modify the <code>Dockerfile</code> for <code>ContaineredUnoWasm</code> as follows:</p>
<pre><code class="language-dockerfile">FROM unoplatform/wasm-build:latest AS build
WORKDIR /src
COPY ["ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj", "ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj"]
RUN dotnet restore "ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj"
COPY ["ContaineredUnoWasm/ContaineredUnoWasm.Shared/", "ContaineredUnoWasm/ContaineredUnoWasm.Shared/"]
COPY ["ContaineredUnoWasm/ContaineredUnoWasm.Wasm/", "ContaineredUnoWasm/ContaineredUnoWasm.Wasm/"]
RUN dotnet build "ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "ContaineredUnoWasm/ContaineredUnoWasm.Wasm/ContaineredUnoWasm.Wasm.csproj" -c Release -o /app/publish
FROM ibebbs/wasm-serve:latest
EXPOSE 80
COPY --from=publish /app/publish /usr/share/nginx/html
</code></pre>
<p>Which, when rebuilt and run, still results in the running app.</p>
<p>Done and done.</p>
<h3 id="future-improvements">Future improvements</h3>
<p>At the moment the docker build needs to download the "mono-wasm SDK" on each build. It would be much better to download the mono-wasm SDK in an earlier step of the Dockerfile so that it is cached and doesn't need to be downloaded each time. I'm looking into doing this and will provide an update as and when I've worked out how to do it.</p>
<p><a href="https://balintpogatsa.github.io/2019/05/05/webassembly-mono-aot-example.html">This article</a> suggests downloading the SDK from <a href="https://jenkins.mono-project.com/job/test-mono-mainline-wasm/label=ubuntu-1804-amd64/lastSuccessfulBuild/Azure/">here</a> but the build output from Visual Studio suggests the SDK is coming from a blob owned by Uno (<a href="https://unowasmbootstrap.blob.core.windows.net/runtime/mono-wasm-###########.zip">https://unowasmbootstrap.blob.core.windows.net/runtime/mono-wasm-###########.zip</a>). Either way, once it's downloaded, setting the <code>WASM_SDK</code> to the path of the unzipped sdk should ensure the sdk doesn't need to be downloaded during build.</p>
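<p>Based on the above, such a pre-cached layer might look something like the following sketch. This is untested conjecture rather than a verified recipe, and the SDK URL/version placeholder must be replaced with the one reported in your own build output:</p>
<pre><code class="language-dockerfile">FROM unoplatform/wasm-build:latest AS build

# Hypothetical: fetch and unpack the mono-wasm SDK in an early layer so
# Docker's layer cache avoids re-downloading it on every build. Replace
# &lt;version&gt; with the hash shown in your build output.
ARG MONO_WASM_SDK_URL=https://unowasmbootstrap.blob.core.windows.net/runtime/mono-wasm-&lt;version&gt;.zip
RUN curl -sL "$MONO_WASM_SDK_URL" -o /tmp/mono-wasm.zip &amp;&amp; \
    unzip -q /tmp/mono-wasm.zip -d /opt/mono-wasm

# Point the Uno bootstrapper at the unpacked SDK
ENV WASM_SDK=/opt/mono-wasm

# ... remainder of the build stages as before ...
</code></pre>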
<p>I'm also still having problems building/running more advanced projects within a container. I'm hitting a weird error (<code>Object doesn't support property or method '_coreDispatcherCallback'</code>) when running the resulting WASM in a browser. I'm still banging my head against this one, so any advice would be gratefully received.</p>
<h4 id="edit-1">- - - - EDIT - - - -</h4>
<p>And as if by magic, Jérôme nails the answer in one fell swoop:</p>
<blockquote class="twitter-tweet tw-align-center"><p lang="en" dir="ltr">Nice article!! dotnet publish definitely needs a bit of work (the support is very recent), too many files are in the output folder. For the `_coreDispatcherCallback` error, it's generally a mismatch between Uno.UI and Uno.Wasm.Bootstrap. Recent builds don't have that issue.</p>— Jérôme Laban (@jlaban) <a href="https://twitter.com/jlaban/status/1275046276233605122?ref_src=twsrc%5Etfw">June 22, 2020</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
<br/>
<h4 id="end-edit-1">- - - - END EDIT - - - -</h4>
<h2 id="conclusion">Conclusion</h2>
<p>It really was a bit of a roller-coaster of a journey to get a containerized build / image of an Uno Wasm project running.</p>
<p>One thing it really highlighted to me is the amazing impact of open-source code. Any time an issue was encountered, I was able to find the <em>exact code/file</em> causing the problem and thereby find a solution. For someone who cut his teeth programming before the internet made resolving every programming issue as simple as a quick jaunt to StackOverflow, and when source-code was something you had to pay through the nose for, working in and with OSS for the past several years has been a real eye-opener (if you'll excuse the somewhat strained facial idioms).</p>
<p>However, we also discovered that this transparency can be a double-edged sword. The .NET ecosystem - and especially Uno - is moving forward at an incredible cadence, with new releases causing older versions to be deprecated and documentation to become dated. As we saw above, this can throw serious curve-balls at anyone endeavouring to stray from the beaten path or get a better understanding of how things work.</p>
<p>Anyway, should you have any suggestions for, or questions about, anything above please feel free to drop me a line using any of the links below or from my <a href="https://ian.bebbs.co.uk/about">about page</a>.</p>
http://ian.bebbs.co.uk/posts/UnoWithSwaggerGiving Uno Some Swagger2020-06-15T00:00:00Z<h2 id="tldr">TL;DR</h2>
<p>A few days ago, <a href="https://twitter.com/thenickrandolph">Nick Randolph</a> published an excellent blog post about <a href="https://nicksnettravels.builttoroam.com/consuming-swagger/">"Consuming REST API with Swagger / OpenAPI in Xamarin and Uno Applications"</a>. I read this article with great interest (and perhaps a touch of chagrin) as I was mid-way through writing a very similar article myself. While I found this post to be as detailed and pragmatic as <a href="https://nicksnettravels.builttoroam.com/uno-crossplatform-template/">Nick's always are</a>, I feel he missed a few key elements about consuming strongly-typed ReST clients in Uno, particularly when it comes to consuming them from within a browser via the WebAssembly (WASM) project. In this post I will cover these additional points such that the reader is able to consume ReST endpoints, in the same manner, from all Uno head projects.</p>
<h2 id="intro">Intro</h2>
<p>This article will now very much be a continuation of Nick's. If you haven't read Nick's post, I would encourage you to <a href="https://nicksnettravels.builttoroam.com/consuming-swagger/">do so now</a> so that you understand many of the approaches used here. Much like Nick, I will be using a ReST endpoint created for an earlier blog post, namely the "Cheeze.Store" API written for my <a href="https://ian.bebbs.co.uk/posts/LessReSTMoreHotChocolate">"Less ReST, More HotChocolate"</a> post.</p>
<p>All source code for this post can be found in my <a href="https://github.com/ibebbs/UnoWithSwagger">UnoWithSwagger</a> repo on Github.</p>
<h2 id="typed-clients">Typed Clients</h2>
<p>In contrast to Nick's post, I will not be using <code>dotnet openapi</code> to generate Typed Clients for my API but will instead continue to use the <a href="https://www.nuget.org/packages/NSwag.MSBuild/"><code>NSwag.MSBuild</code></a> package. This is for two reasons:</p>
<ol>
<li>Typed Client generation using <code>NSwag.MSBuild</code> uses an NSwag configuration file. This configuration file provides much greater control over the generated code than is currently possible with <code>dotnet openapi</code></li>
<li>Once configured, <code>NSwag.MSBuild</code> is able to generate Typed Clients directly from the ReST service's source code instead of needing a <code>swagger.json</code> file. This saves a significant amount of time when you're writing a .NET ReST service as you don't need to start the service to update the client side code, thereby removing friction and allowing you to rapidly iterate the API.</li>
</ol>
<p>If you're interested in using <code>NSwag.MSBuild</code> to generate your Typed Clients then I cover the process quite thoroughly <a href="https://ian.bebbs.co.uk/posts/LessReSTMoreHotChocolate#generating-typed-clients">here</a>.</p>
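<p>For orientation, wiring <code>NSwag.MSBuild</code> into a client project typically amounts to a build target along these lines; the <code>nswag.json</code> file name and the exact <code>NSwagExe</code> property vary by configuration and NSwag version, so treat this as a sketch rather than the project's actual configuration:</p>
<pre><code class="language-xml">&lt;!-- In the client project's .csproj; assumes an nswag.json configuration file --&gt;
&lt;Target Name="GenerateTypedClient" BeforeTargets="BeforeBuild"&gt;
  &lt;Exec Command="$(NSwagExe_Core31) run nswag.json" /&gt;
&lt;/Target&gt;
</code></pre>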
<p>Furthermore, rather than "newing up" a <code>swaggerClient</code> manually, I will be using the <a href="https://www.nuget.org/packages/Microsoft.Extensions.DependencyInjection/3.1.5"><code>Microsoft.Extensions.DependencyInjection</code></a> and <a href="https://www.nuget.org/packages/Microsoft.Extensions.Http/3.1.5"><code>Microsoft.Extensions.Http</code></a> packages to inject a correctly configured Typed Client into my view-model. This, I believe, is where Typed Clients really shine as this approach completely abstracts the source of the data such that the Typed Clients appear to just be another client side dependency.</p>
<p>Here is the <code>Services</code> class I use for service registration:</p>
<pre><code class="language-c#">public partial class Services
{
public static readonly Services Instance = new Services();
private readonly ServiceCollection _serviceCollection;
private readonly Lazy<IServiceProvider> _serviceProvider;
private Services()
{
_serviceCollection = new ServiceCollection();
_serviceProvider = new Lazy<IServiceProvider>(() => _serviceCollection.BuildServiceProvider());
}
private void RegisterGlobalServices(IServiceCollection services, ILogger logger)
{
services.AddHttpClient<Store.Client.IStoreClient, Store.Client.StoreClient>(
httpClient => httpClient.BaseAddress = new Uri("http://localhost:5000")
);
services.AddSingleton<ISchedulers, Schedulers>();
services.AddTransient<ViewModel>();
}
public void PerformRegistration(ILogger logger)
{
if (_serviceProvider.IsValueCreated) throw new InvalidOperationException("You cannot register services after the service provider has been created");
RegisterGlobalServices(_serviceCollection, logger);
}
public IServiceProvider Provider => _serviceProvider.Value;
}
</code></pre>
<p>Which is initialized from <code>App.xaml.cs</code> as shown here:</p>
<pre><code class="language-c#">sealed partial class App : Application
{
private readonly ILogger<App> _logger;
public App()
{
ConfigureFilters(global::Uno.Extensions.LogExtensionPoint.AmbientLoggerFactory);
_logger = global::Uno.Extensions.LogExtensionPoint.AmbientLoggerFactory.CreateLogger<App>();
Platform.Services.Instance.PerformRegistration(_logger);
this.InitializeComponent();
this.Suspending += OnSuspending;
}
...
}
</code></pre>
<p>And used (naively) from within the view to instantiate the ViewModel, which then acts as the view's data context:</p>
<pre><code class="language-c#">public sealed partial class MainPage : Page
{
private readonly ViewModel _viewModel;
private IDisposable _behaviours;
public MainPage()
{
this.InitializeComponent();
_viewModel = Platform.Services.Instance.Provider.GetRequiredService<ViewModel>();
DataContext = _viewModel;
}
protected override void OnNavigatedTo(NavigationEventArgs e)
{
base.OnNavigatedTo(e);
_behaviours = _viewModel.Activate();
}
protected override void OnNavigatedFrom(NavigationEventArgs e)
{
base.OnNavigatedFrom(e);
if (_behaviours != null)
{
_behaviours.Dispose();
_behaviours = null;
}
}
}
</code></pre>
<p>Finally, here is the <code>ViewModel</code> implementation showing the use of the <code>IStoreClient</code> Typed Client:</p>
<pre><code class="language-c#">public class ViewModel : INotifyPropertyChanged
{
private readonly IStoreClient _storeClient;
private readonly Platform.ISchedulers _schedulers;
private readonly ILogger<ViewModel> _logger;
private readonly MVx.Observable.Command _loadCheese;
private readonly MVx.Observable.Property<IEnumerable<Store.Client.Cheese>> _cheeses;
public event PropertyChangedEventHandler PropertyChanged;
public ViewModel(IStoreClient storeClient, Platform.ISchedulers schedulers)
{
_storeClient = storeClient;
_schedulers = schedulers;
_logger = global::Uno.Extensions.LogExtensionPoint.AmbientLoggerFactory.CreateLogger<ViewModel>();
_loadCheese = new MVx.Observable.Command(true);
_cheeses = new MVx.Observable.Property<IEnumerable<Store.Client.Cheese>>(Enumerable.Empty<Store.Client.Cheese>(), nameof(Cheeses), args => PropertyChanged?.Invoke(this, args));
}
private IDisposable ShouldLoadCheeseWhenLoadCheeseInvoked()
{
return _loadCheese
.Do(_ => _logger.LogInformation("Loading Cheeses!"))
.SelectMany(_ => _storeClient.GetAsync())
.ObserveOn(_schedulers.Dispatcher)
.Subscribe(_cheeses);
}
public IDisposable Activate()
{
return new CompositeDisposable(
ShouldLoadCheeseWhenLoadCheeseInvoked()
);
}
public ICommand LoadCheese => _loadCheese;
public IEnumerable<Store.Client.Cheese> Cheeses => _cheeses.Get();
}
</code></pre>
<p>Note: This ViewModel uses my <a href="https://www.nuget.org/packages/MVx.Observable/"><code>MVx.Observable</code></a> package:</p>
<blockquote class="blockquote">
<p>Functional, Declarative and Reactive Extensions for MVVM & MVC patterns</p>
<p>A (mostly) unopinionated, light-weight alternative to ReactiveUI provided as a library <em>not a framework</em>.</p>
</blockquote>
<h2 id="this-missing-links">The Missing Links</h2>
<p>Now, regardless of how you've generated your Typed Clients, you will have added a reference to the client library to each of the head projects in your Uno solution. With the above code in place, you should be able to start the UWP head, click the "Load Cheeze!" button and see this:</p>
<img src="/Content/UnoWithSwagger/UWPHead.png" class="img-responsive" style="margin: auto; width:60%; margin-top: 6px; margin-bottom: 6px;" alt="UWP Head Running">
<p>However, starting the WASM head will result in the browser only showing the app's splash screen. If you bring up your browser's "developer tools" window (I use Chrome and Edge interchangeably) and view the console output you should see something like the following:</p>
<img src="/Content/UnoWithSwagger/WASMLinkerIssue.png" class="img-responsive" style="margin: auto; width:60%; margin-top: 6px; margin-bottom: 6px;" alt="WASM DefaultHttpClientFactory could not be located">
<p>This error is due to the way the Mono linker determines the assemblies and types that should - or shouldn't - be included in the WASM output. By default, only statically referenced types (i.e. those we're directly using in our code) will be included and downloaded into the browser when starting the app. As we don't directly reference "Microsoft.Extensions.Http.DefaultHttpClientFactory" this type isn't available to the app and therefore the DI container isn't able to instantiate it.</p>
<p>To resolve this, we need to explicitly instruct the Mono linker to include the types we need. This can be done by modifying the <code>LinkerConfig.xml</code> file (within the WASM head project) to the following:</p>
<pre><code class="language-xml"><linker>
<assembly fullname="Cheeze.App.Wasm" />
<assembly fullname="Uno.UI" />
<assembly fullname="Newtonsoft.Json" />
<assembly fullname="System.ComponentModel.Annotations"/>
<assembly fullname="Microsoft.Extensions.Http"/>
<assembly fullname="Microsoft.Extensions.Options"/>
<assembly fullname="Cheeze.Store.Client" />
<assembly fullname="System.Core">
<!-- This is required by JSon.NET and any expression.Compile caller -->
<type fullname="System.Linq.Expressions*" />
</assembly>
</linker>
</code></pre>
<p>With this done, we should now be able to start the Cheeze App within the browser:</p>
<img src="/Content/UnoWithSwagger/WASMRunningNoData.png" class="img-responsive" style="margin: auto; width:60%; margin-top: 6px; margin-bottom: 6px;" alt="Cheeze.App running in browser">
<br/>
<h2 id="close-but-no-handler">Close, but no handler!</h2>
<p>With Cheeze.App running in the browser, if we click the "Load Cheeze!" button now we should get... wait for it....</p>
<p>Nope, nothing.</p>
<p>Back to the browser's debugging tool's Console output and we're likely to see something along the lines of "Operation is not supported on this platform". This is due to the fact that, while running in the browser, the WASM head uses the browser to make HTTP calls. In order to do this, the <code>HttpClient</code> used by the Typed Client implementation needs to be configured to use the <code>WasmHttpHandler</code> as described <a href="https://platform.uno/docs/articles/faq.html#is-it-possible-to-make-http-web-requests-using-the-wasm-target">here</a>.</p>
<blockquote class="blockquote">
<p>Note: Somewhat confusingly, I hit this error consistently while originally writing the Cheeze.App but, after implementing the changes below and then backing them out so I could write this post, I could not for the life of me get the error to occur again. I imagine it's something cached or not rebuilt but this does mean that I'm unable to share screenshots showing this error. Apologies.</p>
</blockquote>
<p>Fortunately, getting <code>HttpClient</code> to use the <code>WasmHttpHandler</code> can be done completely transparently to the Typed Client by adding some additional configuration to our dependency injection setup. Shown below is the refactored <code>Services.cs</code> class.</p>
<pre><code class="language-c#">public partial class Services
{
public static readonly Services Instance = new Services();
private readonly ServiceCollection _serviceCollection;
private readonly Lazy<IServiceProvider> _serviceProvider;
private Services()
{
_serviceCollection = new ServiceCollection();
_serviceProvider = new Lazy<IServiceProvider>(() => _serviceCollection.BuildServiceProvider());
}
partial void GetHttpMessageHandler(ref HttpMessageHandler handler);
private HttpMessageHandler PrimaryHttpMessageHandler()
{
HttpMessageHandler handler = null;
GetHttpMessageHandler(ref handler);
handler ??= new HttpClientHandler();
return handler;
}
private void RegisterGlobalServices(IServiceCollection services, ILogger logger)
{
services
.AddHttpClient<Store.Client.IStoreClient, Store.Client.StoreClient>(
httpClient => httpClient.BaseAddress = new Uri("http://localhost:5000"))
.ConfigurePrimaryHttpMessageHandler(PrimaryHttpMessageHandler);
services.AddSingleton<ISchedulers, Schedulers>();
services.AddTransient<ViewModel>();
}
public void PerformRegistration(ILogger logger)
{
if (_serviceProvider.IsValueCreated) throw new InvalidOperationException("You cannot register services after the service provider has been created");
RegisterGlobalServices(_serviceCollection, logger);
}
public IServiceProvider Provider => _serviceProvider.Value;
}
</code></pre>
<p>Note the addition of the <code>.ConfigurePrimaryHttpMessageHandler(PrimaryHttpMessageHandler)</code> call and the <code>GetHttpMessageHandler</code> partial method. The code here ensures that <code>HttpClientHandler</code> is used as the default but allows this to be overridden by providing an implementation of <code>GetHttpMessageHandler</code> in platform-specific code. Accordingly, a partial implementation of the <code>Services.cs</code> class is added to the <code>WASM</code> head project as follows:</p>
<pre><code class="language-c#">public partial class Services
{
partial void GetHttpMessageHandler(ref HttpMessageHandler handler)
{
handler = new Uno.UI.Wasm.WasmHttpHandler();
}
}
</code></pre>
<p>Now when the implementation of <code>IStoreClient</code> is injected into the <code>ViewModel</code> it will be using an <code>HttpClient</code> instance which is configured to use <code>WasmHttpHandler</code>. Nice.</p>
<h2 id="cors-blimey">COR[s] BLIMEY!</h2>
<p><em><strong>Now</strong></em> when we start the WASM head and click the "Load Cheeze!" button we get... #$@&%*! ... <em><strong>still</strong></em> nothing.</p>
<p>Again, back to the browser's Console output and we'll see the culprit:</p>
<img src="/Content/UnoWithSwagger/WASMCorsIssue.png" class="img-responsive" style="margin: auto; width:60%; margin-top: 6px; margin-bottom: 6px;" alt="Still No Data">
<p>Remember how I said earlier that "the WASM head uses the browser to make HTTP calls"? Yup? Well, this makes our requests subject to <a href="https://en.wikipedia.org/wiki/Cross-origin_resource_sharing">CORS</a>. As the <code>GET</code> request emanating from our Cheeze.App is deemed to come from another origin (by virtue of running on a different port) and our service doesn't declare that origin as allowed, the browser blocks the response and everything disappears in a puff of console output.</p>
<p>To resolve this issue, we need to change the service (Cheeze.Store) through the addition of a CORS policy, as shown below:</p>
<pre><code class="language-c#">public class Startup
{
...
// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
...
services.AddCors(o => o.AddPolicy(
"CorsPolicy",
builder =>
{
builder.AllowAnyOrigin()
.AllowAnyMethod()
.AllowAnyHeader();
})
);
...
}
// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
...
app.UseCors("CorsPolicy");
...
}
}
</code></pre>
<p>Note: The policy shown here is for debugging only and shouldn't be used verbatim in production!</p>
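<p>As a sketch of what a tighter policy might look like, the debug-only <code>AllowAnyOrigin()</code> call can be replaced with an explicit origin list (the origin below is a placeholder, not a real deployment value):</p>
<pre><code class="language-c#">// Hypothetical production-leaning policy: only the named origin may call the API.
// Replace "https://app.example.com" with the origin your WASM head is served from.
services.AddCors(o => o.AddPolicy(
    "CorsPolicy",
    builder =>
    {
        builder.WithOrigins("https://app.example.com")
               .AllowAnyMethod()
               .AllowAnyHeader();
    })
);
</code></pre>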
<h1 id="finally">Finally!</h1>
<p>With all this in place and rebuilt, clicking the "Load Cheeze!" button in the browser finally gives us:</p>
<img src="/Content/UnoWithSwagger/WASMWithData.png" class="img-responsive" style="margin: auto; width:60%; margin-top: 6px; margin-bottom: 6px;" alt="WASM with data">
<p>YAY!</p>
<p>Now, personally, I feel it's worth taking a moment here to reflect on this. With just a minor change in client side code (~12 loc) we're able to run <em>exactly</em> the same app both on the desktop <strong>and</strong> in the browser. I mean, look at it:</p>
<img src="/Content/UnoWithSwagger/SideBySide.png" class="img-responsive" style="margin: auto; width:95%; margin-top: 6px; margin-bottom: 6px;" alt="Side By Side">
<p>With no effort and just a couple of minor exceptions (font weight in UWP - left, and a scroll bar in the browser - right) the UI is pixel-perfect across two platforms that really couldn't be more dissimilar! I've said it before and I'll say it again, the <a href="https://platform.uno/">Uno Platform</a> team deserve massive kudos for providing a framework that allows developers to leverage existing skills (not to mention one of the best UI frameworks) to deliver apps across four (no, wait, <a href="https://platform.uno/blog/announcing-uno-platform-2-4-macos-support-and-windows-calculator-on-macos/">FIVE!</a>) disparate platforms.</p>
<h2 id="wrapping-up">Wrapping Up</h2>
<p>While implementing WASM heads for Uno solutions, I've found the following helps smooth the process:</p>
<ul>
<li>Enable WASM debugging by add <code>inspectUri</code> to <code>properties/launchSettings.json</code> as shown <a href="https://platform.uno/blog/debugging-uno-platform-webassembly-apps-in-visual-studio-2019/">here</a></li>
<li>Use Microsoft Edge to find errors (its Console output seems to have more info) but Chrome to hit breakpoints</li>
<li>Create loggers via <code>global::Uno.Extensions.LogExtensionPoint.AmbientLoggerFactory.CreateLogger<T>()</code>. Uno uses an old version of <code>Microsoft.Extensions.Logging</code> so injecting an <code>ILogger<T></code> instance into a class doesn't (seem to) work for browser console output and certainly can't be used while registering services.</li>
</ul>
<p>And that's it. I hope you've found this helpful. Should you like or use any of the code in this article please star the <a href="https://github.com/ibebbs/UnoWithSwagger">repository</a> and, if you have any questions or comments, please feel free to drop me a line using any of the links below or from my <a href="https://ian.bebbs.co.uk/about">about page</a>.</p>
<h2 id="oh-and.fine-cheese">Oh, and... Fine Cheese</h2>
<p>Some content in the "Cheeze" app/repository has been borrowed - thus far without permission - from <a href="https://www.finecheese.co.uk/">The Fine Cheese Co</a> website. While I'm not affiliated with this company in any way - I just happen to like both cheese and their website - if you should end up ordering from them as a result of reading this article, please let them know so they don't force me to change all the screen shots above. Thanks.</p>
<p>A few days ago, <a href="https://twitter.com/thenickrandolph">Nick Randolph</a> published an excellent blog post about <a href="https://nicksnettravels.builttoroam.com/consuming-swagger/">"Consuming REST API with Swagger / OpenAPI in Xamarin and Uno Applications"</a>. I read this article with great interest (and perhaps a touch of chagrin) as I was mid-way through writing a very similar article myself. While I found this post to be as detailed and pragmatic as <a href="https://nicksnettravels.builttoroam.com/uno-crossplatform-template/">Nick's always are</a>, I feel he missed a few key elements about consuming strongly-typed ReST clients in Uno, particularly when it comes to consuming them from within a browser via the WebAssembly (WASM) project. In this post I will cover these additional points such that the reader is able to consume ReST endpoints, in the same manner, from all Uno head projects.</p>
http://ian.bebbs.co.uk/posts/Codewars
A Kata for Katas
2020-06-09T00:00:00Z
<h2 id="tldr">TL;DR</h2>
<p>Azure Functions and Azure Blob Storage provide an incredibly quick, easy and cheap way of adding dynamic content to a static website. In this post I show how I used this combo to add a list of completed "code kata" to my blog's sidebar.</p>
<h2 id="intro">Intro</h2>
<blockquote class="blockquote">
<p>A code kata is an exercise in programming which helps programmers hone their skills through practice and repetition.</p>
</blockquote>
<p>There are many ways of practising code katas and many sites that provide code katas for you to practise with. I use <a href="https://www.codewars.com">Codewars</a>.</p>
<p>While completing a kata yesterday, I thought it would be good to show the katas I'm completing on my blog. A quick search revealed that Codewars has an <a href="https://dev.codewars.com/">API</a> for retrieving profile and kata information and <a href="https://dev.codewars.com/#webhooks">webhooks</a> for notifying external services when this information changes. A workable solution for getting kata information on my blog quickly came to mind and I simply couldn't resist taking time out to implement it.</p>
<p>Just a few - very enjoyable - hours later, I had this:</p>
<img src="/Content/Codewars/Homepage.png" class="img-responsive" style="margin: auto; margin-top: 6px; margin-bottom: 6px;" alt="Homepage with Codewars"/>
<p>Here's how I did it.</p>
<h2 id="static-serverless">Static & Serverless</h2>
<p>My blog is written in <a href="https://en.wikipedia.org/wiki/Markdown">Markdown</a> and uses <a href="https://wyam.io/">Wyam.io</a> to translate the markdown (plus other content) into a static site which is hosted on <a href="https://pages.github.com/">Github Pages</a>. All content is source controlled and the process of adding a new blog post is very smooth.</p>
<p>As such I didn't really want to add any complexity to the process by trying to regenerate the site when I complete a kata. This meant I needed to a) embed an external page within my blog, and b) write a service which would generate this page whenever I complete a kata. Furthermore, given the relative infrequency with which I undertake code katas, I didn't want a service running 24/7. This meant going <a href="https://en.wikipedia.org/wiki/Serverless_computing">serverless</a>.</p>
<p>These requirements led to this architecture:</p>
<img src="/Content/Codewars/Architecture.png" class="img-responsive" style="margin: auto; width:90%; margin-top: 6px; margin-bottom: 6px; margin-top: -20px;" alt="Architecture"/>
<p>Which can be read as follows:</p>
<ol>
<li>When a Kata is submitted to Codewars ...</li>
<li>... a webhook is used to call the Http Trigger of our Azure Function.</li>
<li>The Azure Function queries the Codewars API for the data it needs to generate an HTML page.</li>
<li>The generated page is saved to Azure Blob Storage in a container which is configured to allow "Public read access for blobs only".</li>
<li>The homepage for my blog is modified to include an <code><embed/></code> element pointing to the generated page meaning ...</li>
<li>... visitors to my blog now receive both the content from GitHub Pages and the new page from Azure Blob Storage.</li>
</ol>
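<p>Step 5 above amounts to adding a single element to the blog's homepage; something along these lines, where the storage account name is a placeholder for your own (the container/blob path matches the binding used later in this post):</p>
<pre><code class="language-html"><!-- Hypothetical blob URL - substitute your own storage account name -->
<embed src="https://mystorageaccount.blob.core.windows.net/blog/codewars.html" type="text/html" style="width: 100%;" />
</code></pre>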
<h2 id="implementation">Implementation</h2>
<h3 id="azure-function">Azure Function</h3>
<p>If you're using Visual Studio 2019, writing this kind of Azure Function is an absolute doddle:</p>
<ol>
<li>Create a new project and select the "Azure Functions" template.</li>
</ol>
<img src="/Content/Codewars/CreateAzureFunctionsProject.png" class="img-responsive" style="margin: auto; width:50%; margin-top: 6px; margin-bottom: 6px;" alt="Create an Azure Functions project"/>
<ol start="2">
<li>Name the project - in this example I've used the name "Blog.FunctionsExample"</li>
<li>In the "Create a new Azure Functions application" dialog, ensure you've selected:
<ol type="a">
<li>"Azure Functions v3 (.NET Core)" (the latest Azure Blob Storage packages don't play so nice with older versions)</li>
<li>"Http trigger"</li>
<li>"Storage Emulator" for the "Storage account (AzureWebJobsStorage)"</li>
<li>"Function" for Authorization level</li>
</ol>
</li>
</ol>
<img src="/Content/Codewars/HttpTrigger.png" class="img-responsive" style="margin: auto; width:50%; margin-top: 6px; margin-bottom: 6px;" alt="Create a new Azure Functions application"/>
<ol start="4">
<li>Clicking the "Create" button should result in a new project which looks something like this:</li>
</ol>
<img src="/Content/Codewars/CreatedFunctionsSourceCode.png" class="img-responsive" style="margin: auto; width:50%; margin-top: 6px; margin-bottom: 6px;" alt="Create Function source code"/>
<p>Now, here comes the magic part: Hit F5.</p>
<img src="/Content/Codewars/FunctionDebugging.png" class="img-responsive" style="margin: auto; width:50%; margin-top: 6px; margin-bottom: 6px;" alt="Integrated Function Debugging"/>
<p>If everything is set up correctly (you may get prompted to install a few packages), running the Functions app should have started the <a href="https://docs.microsoft.com/en-us/azure/storage/common/storage-use-emulator">"Azure Storage Emulator"</a> and then spun up your function within the <a href="https://docs.microsoft.com/en-us/azure/azure-functions/functions-run-local?tabs=windows%2Ccsharp%2Cbash">"Azure Functions Core Tools"</a> debugging host. Yup, this is a fully local debug environment for Azure Functions <em>including Azure Storage emulation</em>. Wow.</p>
<p>Once started, the debugging host should provide you an HTTP endpoint from which you can trigger your function; in the screenshot above it's <code>Function1: [GET,POST] http://localhost:7071/api/Function1</code>. Simply GET this URL from a browser (or <a href="https://www.postman.com/">Postman</a>, or <a href="https://docs.microsoft.com/en-us/windows/wsl/install-win10">curl</a>) and your function will run, returning the <code>responseMessage</code>.</p>
<p>You are completely free to use breakpoints or any other means of interactive debugging which effectively makes writing a cloud hosted and serverless Azure Functions app no more difficult than a basic console app.</p>
<p>Now all we need to do is flesh out the function.</p>
<h3 id="writing-to-azure-blob-storage">Writing to Azure Blob Storage</h3>
<p>First we want to make our function output an HTML page to Azure Blob Storage. While there are many ways to interact with Azure Blob Storage from within an Azure Function, by far the easiest is to lean on Azure Functions' built-in <a href="https://jhaleyfiles2016.blob.core.windows.net/public/Azure%20WebJobs%20SDK%20Cheat%20Sheet%202014.pdf">bindings</a>. To do this we first need to add the <a href="https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage/"><code>Microsoft.Azure.WebJobs.Extensions.Storage</code></a> nuget package to our project. Then we add a new parameter to our function - (<code>CloudBlockBlob output</code> below) - with attributes - (<code>[Blob()]</code> below) - that detail how to bind this parameter. Finally we can save our generated content to the blob as shown here:</p>
<pre><code class="language-c#">[FunctionName("Function1")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
[Blob("output/content.html", FileAccess.Write, Connection = "AzureWebJobsStorage")] CloudBlockBlob output,
ILogger log)
{
log.LogInformation("C# HTTP trigger function processed a request.");
string name = req.Query["name"];
string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
dynamic data = JsonConvert.DeserializeObject(requestBody);
name = name ?? data?.name;
string responseMessage = string.IsNullOrEmpty(name)
? "<html><body>This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.</body></html>"
: $"<html><body>Hello, {name}. This HTTP triggered function executed successfully.</body></html>";
output.Properties.ContentType = "text/html";
await output.UploadTextAsync(responseMessage);
return new NoContentResult();
}
</code></pre>
<p>If we trigger this function now we should see a new container - <code>output</code> - added to the Storage Emulator containing a single file: <code>content.html</code>.</p>
<blockquote class="blockquote">
<p>Quick tip: If you're doing anything with any form of Azure Storage, do yourself a favour and download the <a href="https://azure.microsoft.com/en-us/features/storage-explorer/">"Azure Storage Explorer"</a>. This app provides a very easy to use GUI over many forms of Azure Storage hosted in the cloud or locally. While it won't match a CLI for repetitive tasks, during development this app can really help you see what files are ending up where and with which characteristics.</p>
</blockquote>
<p>Note that in most cases it would be sufficient to bind the <code>Blob</code> attributed parameter to a simple <code>Stream</code> type. However, this would result in files written to Azure Blob Storage having a <code>Content-Type</code> of <code>application/octet-stream</code> which would not be displayed correctly (or at all!) by most browsers when encountering this type of file within an <code><embed/></code> tag. As such we elect to bind to a <code>CloudBlockBlob</code> type which allows us to set the <code>Content-Type</code> directly.</p>
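<p>For comparison, the simpler <code>Stream</code> binding described above would look something like this (a sketch only; note there is no way to set <code>Content-Type</code> here, which is exactly why we avoid it):</p>
<pre><code class="language-c#">[FunctionName("Function1")]
public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
    // Binding to Stream writes the blob with Content-Type "application/octet-stream"
    [Blob("output/content.html", FileAccess.Write, Connection = "AzureWebJobsStorage")] Stream output,
    ILogger log)
{
    var bytes = System.Text.Encoding.UTF8.GetBytes("<html><body>Hello</body></html>");
    await output.WriteAsync(bytes, 0, bytes.Length);
    return new NoContentResult();
}
</code></pre>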
<h2 id="collecting-and-aggregating-kata-information">Collecting and aggregating Kata information</h2>
<p>Great, so now we have a function which, when triggered, will write an HTML document to Azure Blob Storage. Now, we need to start working on filling out the HTML document with the information we're interested in. The first step here is to collect this information from Codewars which involves HTTP calls to three endpoints - none of which require authentication:</p>
<ol>
<li>The <a href="https://dev.codewars.com/#get-user">Profile</a> endpoint - to get my current honor and rank information</li>
<li>The <a href="https://dev.codewars.com/#get-user:-completed-challenges">Completed Challenges</a> endpoint - to get the katas I have completed</li>
<li>Repeated calls to the <a href="https://dev.codewars.com/#get-code-challenge">Code Challenge</a> endpoint - to get information for the last X katas I have completed ('X' will be specified in config)</li>
</ol>
<p>For each endpoint, I first craft an example request in Postman, copy the JSON returned from the endpoint invocation and employ Visual Studio's insanely useful <a href="https://dailydotnettips.com/did-you-know-you-can-automatically-create-classes-from-json-or-xml-in-visual-studio/">"Paste JSON as classes"</a> to create DTOs which I can deserialize into. This makes calls to each endpoint as simple as doing this:</p>
<pre><code class="language-c#">private static async Task<Profile.Rootobject> Profile(HttpClient client)
{
var completedResponse = await client.GetAsync("https://www.codewars.com/api/v1/users/ibebbs/");
using (var stream = await completedResponse.Content.ReadAsStreamAsync())
{
return await JsonSerializer.DeserializeAsync<Profile.Rootobject>(stream);
}
}
</code></pre>
<p>Note that all IO <em>has</em> to be async. Calling the synchronous version of any of the methods above will result in an exception stating <a href="https://stackoverflow.com/a/60755952/628821">"Synchronous operations are disallowed"</a>. This slightly complicates the retrieval of completed code challenges: each challenge needs to be fetched and projected into a DTO asynchronously, and these asynchronous operations need to be performed a specific number of times.</p>
<p>My go-to approach for dealing with collections in a functional manner - LINQ - can't handle asynchronous operations but, fortunately, a recent addition in C# 8 - <a href="https://docs.microsoft.com/en-us/dotnet/api/system.collections.generic.iasyncenumerable-1?view=dotnet-plat-ext-3.1">IAsyncEnumerable</a> - can. Coupled with <a href="https://www.nuget.org/packages/System.Linq.Async"><code>System.Linq.Async</code></a>, I can write "LINQ style" projections over asynchronous operations, as shown below:</p>
<pre><code class="language-c#">private static async Task<IEnumerable<Completion>> Completions(HttpClient client, int numberOfCompletionstoInclude)
{
var completed = await Completed(client);
var result = await completed.data
.ToAsyncEnumerable()
.SelectAwait(d => AsCompletion(d, client))
.Take(numberOfCompletionstoInclude)
.ToArrayAsync();
return result;
}
</code></pre>
<p>Finally all the collected information is projected into a <code>Model</code> class for use in the next step.</p>
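<p>The <code>Model</code> class itself isn't shown in this post but, judging from the bindings used in the Razor template in the next section, its shape must be roughly as follows (this is my reconstruction, not the actual implementation):</p>
<pre><code class="language-c#">// Hypothetical shape inferred from @Model.Honor, @Model.TotalCompleted,
// @Model.Completions and the @item.* properties used in codewars.cshtml
public class Model
{
    public int Honor { get; set; }
    public int TotalCompleted { get; set; }
    public IEnumerable<Completion> Completions { get; set; }
}

public class Completion
{
    public DateTime Date { get; set; }
    public string Name { get; set; }
    public string Uri { get; set; }
    public string Language { get; set; }  // e.g. "csharp" or "fsharp"
    public string Colour { get; set; }    // e.g. "white", "yellow", "blue", "purple"
    public string Ktu { get; set; }       // rank shown in the hex badge
    public IEnumerable<string> Tags { get; set; }
}
</code></pre>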
<h2 id="generating-an-html-page">Generating an HTML page</h2>
<p>To create the HTML page containing all the kata information in an appropriate layout I use (a prerelease version of) <a href="https://www.nuget.org/packages/RazorLight/2.0.0-beta7">RazorLight</a>. This allows me to template the desired output using "cshtml" (a.k.a. <a href="https://docs.microsoft.com/en-us/aspnet/core/mvc/views/razor?view=aspnetcore-3.1">Razor pages</a>) and bind values from the <code>Model</code> into appropriate places within the template. Here's the <code>cshtml</code> file:</p>
<pre><code class="language-html">@model Blog.Codewars.Generator.Model
<!DOCTYPE html>
<html lang="en" xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta charset="utf-8" />
<title>Codewars!</title>
<link rel="stylesheet" href="https://www.codewars.com/assets/application-776f7eebc122613f70443dfee33518104673ba7dced96422ca993601702f6456.css">
<style>
table td {
border-bottom: none;
padding-top: 2px;
padding-bottom: 2px;
line-height: normal;
padding: 0px;
padding-right: 10px;
}
td.fitwidth {
width: 1px;
white-space: nowrap;
}
div.minitag {
line-height: normal;
font-size: 9px;
margin: 0px
}
.tight {
line-height: normal;
margin-top: 4px;
margin-bottom: 0px;
}
.tight-last {
line-height: normal;
margin-top: 4px;
margin-bottom: 8px;
}
.tagrow {
margin-bottom: 8px;
margin-top: -2px
}
</style>
</head>
<body style="background-color: white;padding-top: 0px">
<h2 class="tight">Honor: @Model.Honor</h2>
<h3 class="tight-last">Showing @Model.Completions.Count() of @Model.TotalCompleted completed kata</h3>
<table style="width: 100%;background-color: white;">
<tbody>
@foreach (var item in @Model.Completions)
{
<tr>
<td class="fitwidth" style="border-bottom: none;">
<p class="tight">@item.Date.ToString("yyyy-MM-dd")</p>
</td>
<td>
<a href="@item.Uri" target="_blank"><h5 class="tight">@item.Name</h5></a>
</td>
<td class="fitwidth" rowspan="2" >
@if (@item.Language == "csharp")
{
<img src="https://ian.bebbs.co.uk/Content/csharp.png" style="max-width: 32px; margin-top: -6px" />
}
else
{
<img src="https://ian.bebbs.co.uk/Content/fsharp.png" style="max-width: 32px; margin-top: -6px" />
}
</td>
<td class="fitwidth" rowspan="2">
@{
switch (item.Colour)
{
case "white":
<div class="small-hex is-extra-wide is-inline mr-15px is-white-rank"><div class="inner-small-hex is-extra-wide "><span>@item.Ktu</span></div></div>
break;
case "yellow":
<div class="small-hex is-extra-wide is-inline mr-15px is-yellow-rank"><div class="inner-small-hex is-extra-wide "><span>@item.Ktu</span></div></div>
break;
case "blue":
<div class="small-hex is-extra-wide is-inline mr-15px is-blue-rank"><div class="inner-small-hex is-extra-wide "><span>@item.Ktu</span></div></div>
break;
case "purple":
<div class="small-hex is-extra-wide is-inline mr-15px is-purple-rank"><div class="inner-small-hex is-extra-wide "><span>@item.Ktu</span></div></div>
break;
}
}
</td>
</tr>
<tr class="bottom-margin">
<td colspan="2">
<div class="mt-15px tagrow">
@foreach (var tag in @item.Tags)
{
<div class="keyword-tag minitag tight">@tag</div>
}
</div>
</td>
</tr>
}
</tbody>
</table>
</body>
</html>
</code></pre>
<p>Please excuse the crappy CSS. I <em>still</em> maintain CSS is a terrible way to style UI elements - particularly when compared to the elegance of <a href="https://platform.uno/">XAML</a>.</p>
<p>Anyway, this <code>codewars.cshtml</code> file is added to the project as an "Embedded Resource" and used as follows:</p>
<pre><code class="language-c#">public static class Implementation
{
public static async Task<string> GenerateBlogPage(int numberOfCompletionstoInclude)
{
var engine = new RazorLightEngineBuilder()
.SetOperatingAssembly(Assembly.GetExecutingAssembly())
.UseEmbeddedResourcesProject(typeof(Implementation))
.UseMemoryCachingProvider()
.Build();
var model = await Source.Create(numberOfCompletionstoInclude);
string result = await engine.CompileRenderAsync("codewars", model);
return result;
}
}
</code></pre>
<h2 id="securing-the-function">Securing the Function</h2>
<p>While this function will be exposed publicly, we don't want just anyone to be able to invoke it as this would directly cost us money. By using the "'Function' Authorization Level" when we created the function, we ensured that the function can be invoked only if an appropriate "code" value is passed in the URL, but this is still just <a href="https://en.wikipedia.org/wiki/Security_through_obscurity">"security through obscurity"</a> which we should look to bolster further. As we'd like to ensure only Codewars can invoke this function (or at least cause the page to be regenerated) we can provide a "secret" to Codewars which they pass back to us - and we can check for - when the function is invoked.</p>
<p>Furthermore, Codewars will call this function for a variety of reasons, not just when I complete a kata. As generating the page is a relatively costly process (in terms of resources at least), we want to ensure this happens only when required. We therefore flesh out the function as follows:</p>
<pre><code class="language-c#">private static bool IsCodeWars(HttpRequest request)
{
return request.Headers.TryGetValue("X-Webhook-Secret", out var values) && values.Contains(Settings.CodewarsSecret);
}
private static async Task<bool> IsMyHonorChange(HttpRequest request, ILogger log)
{
using (StreamReader reader = new StreamReader(request.Body))
{
var body = await reader.ReadToEndAsync();
log.LogInformation($"Body: '{body}'");
return body.Contains("honor_changed") && body.Contains(Settings.MyCodewarsId);
}
}
[FunctionName("WebHook")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Function, "post", Route = null)] HttpRequest request,
[Blob("blog/codewars.html", FileAccess.Write, Connection = "AzureWebJobsStorage")] CloudBlockBlob output,
ILogger log)
{
log.LogInformation("C# HTTP trigger function processed a request.");
if (IsCodeWars(request))
{
if (await IsMyHonorChange(request, log))
{
var content = await Generator.Implementation.GenerateBlogPage(Settings.NumberOfCompletionstoInclude);
output.Properties.ContentType = "text/html";
await output.UploadTextAsync(content);
return new NoContentResult();
}
else
{
return new StatusCodeResult(304);
}
}
else
{
return new UnauthorizedResult();
}
}
</code></pre>
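<p>One refinement worth considering: <code>IsMyHonorChange</code> relies on substring matching, which would also fire if "honor_changed" happened to appear anywhere else in the payload. A more robust variant could parse the body as JSON. The sketch below assumes the event type arrives in an <code>action</code> property - that property name is an assumption for illustration, so verify it against the actual Codewars webhook payload before using it:</p>
<pre><code class="language-c#">// Sketch: parse the webhook body as JSON rather than substring matching.
// ASSUMPTION: the "action" property name is illustrative - check the real
// Codewars webhook payload shape before relying on it.
public static class WebhookChecks
{
    public static bool IsHonorChange(string body, string myCodewarsId)
    {
        try
        {
            using var json = JsonDocument.Parse(body);
            return json.RootElement.TryGetProperty("action", out var action)
                && action.GetString() == "honor_changed"
                && body.Contains(myCodewarsId);
        }
        catch (JsonException)
        {
            return false; // malformed body - certainly not a Codewars event
        }
    }
}
</code></pre>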
<p>Note that <code>Settings</code> is a façade for retrieving configuration values as shown here:</p>
<pre><code class="language-c#">public static class Settings
{
public static string CodewarsSecret => Environment.GetEnvironmentVariable("CodewarsSecret");
public static string MyCodewarsId => Environment.GetEnvironmentVariable("MyCodewarsId");
public static int NumberOfCompletionstoInclude => Int32.Parse(Environment.GetEnvironmentVariable("NumberOfCompletionstoInclude"));
}
</code></pre>
<h2 id="deployment">Deployment</h2>
<p>Finally we need to get the function deployed and connected to Codewars. As this isn't something that is going to change regularly, deployment of the function to Azure is performed with a "right click -> publish" from within Visual Studio. Once deployed, the function URL and <code>CodewarsSecret</code> value are copied from the Azure portal and added to my Codewars Account Settings page as shown below:</p>
<img src="/Content/Codewars/FunctionURL.png" class="img-responsive" style="margin: auto; width:50%; margin-top: 6px; margin-bottom: 6px;" alt="Function URL from Azure Portal"/>
<br/>
<img src="/Content/Codewars/CodewarsWebhook.png" class="img-responsive" style="margin: auto; width:50%; margin-top: 6px; margin-bottom: 6px;" alt="Codewars Webhook settings"/>
<p>Once saved, completing a kata automatically generates a new page in Azure Blob Storage which then appears on my blog. Nice!</p>
<h2 id="conclusion">Conclusion</h2>
<p>Sometimes I'm amazed at how fast and inexpensive it has become to assemble solutions to problems that, just a few years ago, would have been a major undertaking and cost a significant amount to run. Indeed, this solution took just a few hours from concept to deployment and costs...</p>
<img src="/Content/Codewars/AzureCost.png" class="img-responsive" style="margin: auto; width:50%; margin-top: 6px; margin-bottom: 6px;" alt="Azure Cost Analysis"/>
<p>... yup, less than a penny a month to run!</p>
<p>As developers we truly are spoiled by the tooling provided to us by Visual Studio and the hosting options available in Azure. While I'm fairly proficient in a variety of other languages and frameworks, I always find myself back in VS because it makes everything just so damn easy!</p>
<p>Anyway, the source code for this project can be found in my <a href="https://github.com/ibebbs/Blog.Codewars">"Blog.Codewars"</a> repository on Github. Please star it if you find it - or this blog post - useful.</p>
<p>Azure Functions and Azure Blob Storage provide an incredibly quick, easy and cheap way of adding dynamic content to a static website. In this post I show how I used this combo to add a list of completed "code kata" to my blog's sidebar.</p>http://ian.bebbs.co.uk/posts/COduo-Part4Many platforms, one world - Part 42020-05-10T00:00:00Z<h2 id="intro">Intro</h2>
<p>This is part 4 of my series on using the Uno Platform to write CO<sub><em>duo</em></sub>, a highly graphical cross-platform app, able to target both single and dual-screen devices. In this post I show how CO<sub><em>duo</em></sub> uses the TwoPaneView to provide a single, adaptive UI which functions across multiple form-factors, screens and orientations. I then detail how to set up an Uno Platform solution such that you're able to use (one of the myriad implementations of) the TwoPaneView in your apps.</p>
<p>For an introduction to CO<sub><em>duo</em></sub> or to find further posts in this series, please use the links below:</p>
<ul>
<li><a href="./COduo-Part1">Part 1 - Background</a></li>
<li><a href="./COduo-Part2">Part 2 - Infrastructure</a></li>
<li><a href="./COduo-Part3">Part 3 - Client Architecture</a></li>
<li><a href="./COduo-Part4">Part 4 - Using the TwoPaneView</a></li>
<li>Part 5 - Implementing the interactive UK Map</li>
<li>Part 6 - Charts on the Uno Platform</li>
<li>Part 7 - Windows, Win10X and releasing to the Microsoft Store</li>
<li>Part 8 - Android and releasing to the Google Play Store</li>
<li>Part 9 - iOS and releasing to the Apple App Store</li>
</ul>
<h2 id="the-twopaneview">The TwoPaneView</h2>
<p>Windows Dev Center describes the <a href="https://docs.microsoft.com/en-us/uwp/api/microsoft.ui.xaml.controls.twopaneview?view=winui-2.3">TwoPaneView</a> as:</p>
<blockquote class="blockquote">
<p>a layout control that helps you manage the display of apps that have 2 distinct areas of content, like a master/detail view.<br />
While it works on all Windows devices, the TwoPaneView control is designed to help you take full advantage of dual-screen devices automatically, with no special coding needed. On a dual-screen device, the two-pane view ensures that the user interface (UI) is split cleanly when it spans the gap between screens, so that your content is presented on either side of the gap.</p>
</blockquote>
<p>As outlined above, the central tenet of the TwoPaneView is that, by separating the UI of your app into two parts, your app can automatically capitalize on the additional screen real-estate offered by dual-screen devices. While splitting a UI into 2 distinct areas may seem odd, Microsoft offer several examples of how this can be achieved in their <a href="https://docs.microsoft.com/en-us/dual-screen/introduction">"Introduction to dual-screen devices"</a> article, a summary of which can be seen in the image below:</p>
<img src="https://docs.microsoft.com/en-us/dual-screen/images/dual-screen-app-patterns.png" class="img-responsive" style="margin: auto; width:60%; margin-top: 6px; margin-bottom: 6px;" alt="Dual-screen app patterns"/>
<p>What Microsoft do not make clear, though, is how good this approach is for providing a reactive UI on <em>single screen devices</em>. By splitting your UI in this way, it can be composed into a variety of layouts to automatically fit the myriad different screen resolutions and aspect ratios provided by devices ranging from PCs and tablets to mobile phones and IoT devices (and the various orientations thereof). For example, below I show common screen sizes, layouts and orientations which are natively catered for by the TwoPaneView:</p>
<img src="/Content/CODuo/TwoPaneViewLayouts.png" class="img-responsive" style="margin: auto; width:60%; margin-top: 6px; margin-bottom: 6px;" alt="TwoPaneView on Single Screen"/>
<p>Note that the two panes do not need to be the same size and scroll-bars are introduced if either of the panes causes the layout to exceed the screen bounds.</p>
<p>Now this kind of reactive UI is nothing new, but historically it would have had to be handled manually; usually (in the XAML world) through the use of <a href="https://blog.mzikmund.com/2017/02/visualstatemanager-pitfalls/">Visual States and Adaptive Triggers</a>. But with the TwoPaneView this is all taken care of for you, while providing the added benefit of also allowing these panes to intelligently span across <em>screens</em>. Pretty neat, huh?</p>
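<p>For comparison, here's roughly what a single manual breakpoint looks like with Visual States and Adaptive Triggers - a minimal sketch (the <code>MinWindowWidth</code> value and element names are purely illustrative), and you'd need one of these for every layout you wanted to support:</p>
<pre><code class="language-xaml"><Grid>
    <VisualStateManager.VisualStateGroups>
        <VisualStateGroup>
            <!-- Applied when the window is at least 720 epx wide -->
            <VisualState x:Name="Wide">
                <VisualState.StateTriggers>
                    <AdaptiveTrigger MinWindowWidth="720" />
                </VisualState.StateTriggers>
                <VisualState.Setters>
                    <Setter Target="DetailPane.Visibility" Value="Visible" />
                </VisualState.Setters>
            </VisualState>
            <!-- Narrow is the default state: the detail pane stays hidden -->
            <VisualState x:Name="Narrow" />
        </VisualStateGroup>
    </VisualStateManager.VisualStateGroups>
    <Border x:Name="DetailPane" Visibility="Collapsed" />
</Grid>
</code></pre>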
<p>Microsoft provide a fairly comprehensive guide to using the TwoPaneView <a href="https://docs.microsoft.com/en-us/windows/uwp/design/controls-and-patterns/two-pane-view">here</a> but there are numerous additional tips for using the control - particularly on multiple screens - that could easily warrant an entire blog post. Here though I would like to refocus on how you can start using the control in an Uno project which, unfortunately, isn't as straightforward as it ought to be.</p>
<h2 id="three-implementations-of-two-panes">Three Implementations of Two Panes</h2>
<p>At the time of writing, there are <em>three</em> implementations of the TwoPaneView control:</p>
<ol>
<li>The <a href="https://docs.microsoft.com/en-us/uwp/api/windows.ui.xaml.controls.twopaneview">Windows 10 SDK version</a>, released as part of the <a href="https://developer.microsoft.com/en-US/windows/downloads/windows-10-sdk/">v10.0.18362.0 SDK</a></li>
<li>The <a href="https://docs.microsoft.com/en-us/uwp/api/microsoft.ui.xaml.controls.twopaneview?view=winui-2.3">WinUI version</a> released as part of the <a href="https://www.nuget.org/packages/Microsoft.UI.Xaml/2.1.190405004">WinUI 2.1</a> nuget package</li>
<li>The <a href="https://platform.uno/blog/surface-duo-winui-twopaneview-implementation-via-uno-platform/">Uno version</a> released as part of the <a href="https://www.nuget.org/packages/Uno.UI/2.1.37">Uno.UI 2.1</a> nuget package</li>
</ol>
<p>Getting an Uno Platform solution to correctly use the desired implementations has been the cause of more than a little confusion (<a href="https://stackoverflow.com/questions/60931965/twopaneview-with-uno-platform">not least of which from me</a>), so here I will cover the various combinations that allow you to use the TwoPaneView in a cross-platform code-base.</p>
<h3 id="uno-windows-10-sdk">Uno + Windows 10 SDK</h3>
<p>If your UWP head project is targeting platform 1903 or later, then the easiest way to use the TwoPaneView is to mix the Uno.UI and Windows 10 SDK implementations of the control. To do this, first ensure:</p>
<ol>
<li>That all head projects except UWP have Uno.UI version 2.1 or later installed</li>
<li>The UWP head project is targeting platform version 1903 or later</li>
</ol>
<p>With these pre-requisites, the following XAML will compile and run successfully across all heads:</p>
<pre><code class="language-xaml"><Page
x:Class="UnoWithWinUI.MainPage"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="using:UnoWithWinUI"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
mc:Ignorable="d">
<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
<TwoPaneView Pane1Length="0.3*" Pane2Length="0.7*" Background="Yellow" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" MinWideModeWidth="100">
<TwoPaneView.Pane1>
<Border>
<Rectangle Fill="LightBlue" />
</Border>
</TwoPaneView.Pane1>
<TwoPaneView.Pane2>
<Border>
<Rectangle Fill="LightGreen"/>
</Border>
</TwoPaneView.Pane2>
</TwoPaneView>
</Grid>
</Page>
</code></pre>
<h3 id="uno-winui">Uno + WinUI</h3>
<p>WinUI is <a href="https://microsoft.github.io/microsoft-ui-xaml/">"The Future of Windows Development"</a> and, accordingly, the Uno platform has committed to <a href="https://www.idiwork.com/unoplatform-winui-what-to-expect/">"put WinUI on every platform possible"</a>. As such, if you're looking to start a new cross-platform project, you should probably be looking to use controls from the WinUI package (not the Windows 10 SDK) where possible.</p>
<p>Unfortunately, this isn't as simple as one might hope. Uno currently only implements a small subset of the controls available in WinUI and, as the namespaces between these controls are different, you will need to limit yourself to only using controls from WinUI that have also been implemented in Uno if you want to maintain a single code-base for your cross-platform project (at the time of writing Uno.UI has implemented just the <a href="https://github.com/unoplatform/uno/tree/e61a1da0df49d2d93e32d71e2801fd84689bb007/src/Uno.UI/Microsoft/UI/Xaml/Controls/NumberBox">NumberBox</a> and the <a href="https://github.com/unoplatform/uno/tree/e61a1da0df49d2d93e32d71e2801fd84689bb007/src/Uno.UI/Microsoft/UI/Xaml/Controls/TwoPaneView">TwoPaneView</a> controls).</p>
<p>The following steps describe how to get an Uno solution setup such that you can correctly use a WinUI control - in this instance the TwoPaneView - without resorting to head project specific views:</p>
<ol>
<li>Ensure that all head projects except UWP have Uno.UI version 2.1 or later installed</li>
<li>Install the WinUI nuget package (version 2.1 or later) into the UWP head project</li>
<li>Add the required WinUI XAML resources to <code>App.xaml</code> in the Shared project as shown here:
<pre><code class="language-xaml"><Application.Resources>
<ResourceDictionary>
<ResourceDictionary.MergedDictionaries>
<XamlControlsResources xmlns="using:Microsoft.UI.Xaml.Controls" />
</ResourceDictionary.MergedDictionaries>
</ResourceDictionary>
</Application.Resources>
</code></pre>
</li>
<li>Add the <code>xmlns:winui="using:Microsoft.UI.Xaml.Controls"</code> namespace to the XAML page in which you wish to use the TwoPaneView control.</li>
<li>Add the TwoPaneView to the XAML page.</li>
</ol>
<pre><code class="language-xaml"><Page
x:Class="UnoWithWinUI.MainPage"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="using:UnoWithWinUI"
xmlns:winui="using:Microsoft.UI.Xaml.Controls"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
mc:Ignorable="d">
<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
<winui:TwoPaneView Pane1Length="0.3*" Pane2Length="0.7*" Background="Yellow" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" MinWideModeWidth="100">
<winui:TwoPaneView.Pane1>
<Border>
<Rectangle Fill="LightBlue" />
</Border>
</winui:TwoPaneView.Pane1>
<winui:TwoPaneView.Pane2>
<Border>
<Rectangle Fill="LightGreen"/>
</Border>
</winui:TwoPaneView.Pane2>
</winui:TwoPaneView>
</Grid>
</Page>
</code></pre>
<p>At this point all project heads should compile and run successfully. If all goes well, you should see something akin to the following on each platform:</p>
<img src="/Content/CODuo/UnoTwoPaneViewOnAndroid.png" class="img-responsive" style="margin: auto; height:320px; margin-top: 6px; margin-bottom: 6px;" alt="Uno TwoPaneView on Android"/>
<h2 id="two-pains-with-the-twopaneview">Two pains with the TwoPaneView</h2>
<p>While developing CO<sub><em>duo</em></sub> I found that the TwoPaneView exhibited two curious behaviours that I had not expected. Firstly, the control would continue to use proportional sizing of the panes even when the control was being used across multiple screens and, secondly, it wrapped each pane's content in a scroll viewer which made it difficult to correctly design an "adaptive" UI.</p>
<p>I spent an age trying to work out why the control was behaving this way and potential methods to get it to work the way I expected. Finally I ended up writing a custom control which "just worked" and moved on with trying to deliver some more functional aspects of the app.</p>
<p>Sometime later, while discussing this issue with the Uno Platform team, I decided to recreate the issues I had experienced in a new solution. Yet, when I came to demonstrate the issues - this time on the Windows 10X Emulator - the TwoPaneView worked perfectly. Looking at the associated code I confirmed that it had not changed, yet I was no longer seeing either of the behaviours I had previously experienced... until I tried running the project back on the Surface Duo emulator.</p>
<p>Bingo.</p>
<p>It turned out that, while the WinUI implementation of the TwoPaneView worked exactly as I had originally expected, the Uno recreation of the control didn't exhibit the same behaviour. I <a href="https://github.com/unoplatform/uno/issues/2816">created an issue</a> in the Uno Platform github repository and will revert to using the TwoPaneView when they - or I - have time to resolve the issue.</p>
<h2 id="using-the-twopaneview-in-coduo">Using the TwoPaneView in CO<sub><em>duo</em></sub></h2>
<p>CO<sub><em>duo</em></sub> uses the TwoPaneView in the "root" view. This root view is displayed in the UWP Window's <code>Frame</code> and never changes. To support navigation and layout changes CO<sub><em>duo</em></sub> employs a <a href="https://ian.bebbs.co.uk/posts/ReactiveStateMachines">Reactive State Machine</a> which dictates the content that should be displayed within each pane of the TwoPaneView. This is done by reacting to mode changes in the TwoPaneView (i.e. SinglePane, Tall, Wide) and emitting <code>Event.LayoutChanged</code> events, all communicated between the view and state machine via the <code>Event.Bus</code>. These events are received by the <code>Root.ViewModel</code> which coordinates updating the TwoPaneView control in the <code>Root.View</code> by directly setting the content of each pane.</p>
<p>To illustrate this here is the code from the <a href="https://github.com/ibebbs/CODuo/blob/master/src/CODuo/CODuo.Shared/Home/State.cs"><code>Home.State</code></a> which reacts to layout changes:</p>
<pre><code class="language-c#">var viewModel = _viewModelFactory.Create<IViewModel>();
var layouts = Observable
.Merge(
_eventBus.GetEvent<Event.LayoutModeResponse>().Select(@event => @event.Mode),
_eventBus.GetEvent<Event.LayoutModeChanged>().Select(@event => @event.Mode))
.ObserveOn(_schedulers.Dispatcher)
.Select(mode => AsLayout(viewModel, mode))
.Select(AsEvent)
.Subscribe(_eventBus.Publish);
</code></pre>
<p>And the code from the <a href="https://github.com/ibebbs/CODuo/blob/master/src/CODuo/CODuo.Shared/Root/ViewModel.cs"><code>Root.ViewModel</code></a> which applies the layout:</p>
<pre><code class="language-c#">return _eventBus.GetEvent<Event.LayoutChanged>()
.WithLatestFrom(_view, (@event, view) => (@event.Layout, View: view))
.Where(tuple => tuple.View != null)
.ObserveOn(_schedulers.Dispatcher)
.Subscribe(tuple => tuple.View.PerformLayout(tuple.Layout));
</code></pre>
<p>And the code from the <a href="https://github.com/ibebbs/CODuo/blob/master/src/CODuo/CODuo.Shared/Root/View.xaml.cs"><code>Root.View</code></a> which updates the TwoPaneView (currently my custom <code>DualPaneView</code> control due to the issues described above):</p>
<pre><code class="language-c#">public void PerformLayout(Layout layout)
{
dualPaneView.Pane1 = layout.Pane1Content as UIElement;
dualPaneView.Pane2 = layout.Pane2Content as UIElement;
}
</code></pre>
<h2 id="part-5">Part 5</h2>
<p>In <a href="./COduo-Part5">Part 5</a> I will outline how I implemented the interactive map of the UK. I believe the approaches used for this control leverage some of the incredible power of UWP - and the Uno Platform - to "build modern, seamless UIs that feel natural to use on every <del>Windows</del> device."</p>
<h2 id="finally">Finally</h2>
<p>I hope you enjoy this series and that it goes some way to demonstrating the massive potential presented by the Uno Platform for delivering cross-platform experiences without having to invest in additional staff training or bifurcate your development efforts.</p>
<p>If you or your company are interested in building apps that can leverage the dual screen capabilities of new devices such as the Surface Duo and Surface Neo, or are keen to understand how a single code-base can deliver apps to <em>every platform from mobile phones to web sites</em>, then please feel free to drop me a line using any of the links below or from my <a href="https://ian.bebbs.co.uk/about">about page</a>. I am actively seeking new clients in this space and would be happy to discuss any ideas you have or projects you're planning.</p>
http://ian.bebbs.co.uk/posts/TechAdventuresInSustainability-PartIITech Adventures in Sustainability2020-05-02T00:00:00Z<h2 id="background">Background</h2>
<p>I'm taking a quick break from my <a href="https://ian.bebbs.co.uk/tags/uno-platform">"Many Platforms, one world" blog series</a> to reprise an old - but related - series on using technology to promote sustainability. In <a href="http://ian.bebbs.co.uk/posts/TechAdventuresInSustainability-PartI">Part 1 of this series</a> I showed how my family uses <a href="https://github.com/ibebbs/SolarEdge.Monitor">SolarEdge.Monitor</a> to extract, persist and visualize the energy being produced by our solar panels. In this post I aim to show how I use the data produced by SolarEdge.Monitor to automatically optimize our electricity usage.</p>
<h2 id="maximizing-self-consumption-minimizing-imported-energy">Maximizing Self-Consumption / Minimizing Imported Energy</h2>
<p>The data collected by SolarEdge.Monitor shows the best times to turn on electrical appliances around the house like the washing machine and dish washer. Unfortunately, over the summer we still end up with generation/consumption patterns that look like this:</p>
<img src="../Content/TechAdventuresInSustainability-PartII/SolarGenerationAndImportExport.png" class="img-responsive" style="margin: auto; width:80%; margin-top: 6px; margin-bottom: 6px;" alt="Generation And Consumption">
<p>As you can see, on sunny days, we generate way more electricity than we consume and overnight we consume more electricity than we would like.</p>
<p>The obvious answer here would be to supplement our solar power system with a battery which would be charged with the excess energy we generate during the day and discharged during the evening. We have discussed this at length and, unfortunately, we still don't feel the cost / RoI balance is there to justify buying a solar battery at this time.</p>
<h2 id="working-smarter">Working Smarter</h2>
<p>So if the obvious answer isn't the right answer (yet), what can we do? Well, when we thought about this problem, we realised that there are a lot of electrical appliances/devices that are used during the day but that sit in standby overnight sipping energy. If we turned these off rather than leaving them on standby we could potentially cut our overnight consumption.</p>
<p>Furthermore, from a sustainability point of view, we have a number of devices around the house that use normal domestic batteries ('AA' or 'AAA'). If we moved to using rechargeable batteries and charged them with excess solar energy then we could further maximise our self-consumption and reduce the number of batteries we buy.</p>
<p>Again these are fairly obvious actions to take but extremely onerous to perform on a daily basis. If only there were something that could detect when we're exporting energy and turn these devices on or, conversely, when we're importing energy and turn these devices off.</p>
<h2 id="powerfull">PowerFull</h2>
<p>And so I wrote <a href="https://github.com/ibebbs/PowerFull">PowerFull</a>.</p>
<blockquote class="blockquote">
<p>An open-source .NET Core utility for automatically controlling device power via MQTT.</p>
</blockquote>
<p>Once supplied with MQTT and device information, PowerFull is able to monitor the levels of electricity being imported or exported and turn devices on or off appropriately.</p>
<p>As with SolarEdge.Monitor, PowerFull is a .NET Core application which can natively be <a href="https://hub.docker.com/r/ibebbs/powerfull">containerized</a> and composed with other applications. A full description of how to configure and run PowerFull is available in both the <a href="https://github.com/ibebbs/PowerFull">source code repository</a> and the <a href="https://hub.docker.com/r/ibebbs/powerfull">Docker Hub</a> pages.</p>
<h2 id="sonoff-mini">Sonoff Mini</h2>
<p>My first application of PowerFull was to use excess solar energy to charge rechargeable batteries. I already had a decent 12-way battery charger (<a href="https://www.ikea.com/gb/en/p/storhoegen-battery-charger-with-storage-white-40303651/">courtesy of IKEA</a>) which I decided to control with one of these:</p>
<img src="../Content/TechAdventuresInSustainability-PartII/Sonoff Mini.jpg" class="img-responsive" style="margin: auto; width:40%; margin-top: 6px; margin-bottom: 6px;" alt="Sonoff Mini">
<p>A Sonoff Mini.</p>
<p>Now using Sonoff to control devices is nothing new and people have been flashing custom firmware - most often <a href="https://github.com/arendst/Tasmota">Tasmota</a> - to these devices for years. The Sonoff Mini however makes flashing custom firmware easier than ever with a factory supplied "DIY Mode". You see, historically, if you wanted to flash a Sonoff device with new firmware, you'd need an FTDI module to transfer the new firmware to the Sonoff device. With "DIY Mode" it's as simple as connecting a jumper and using a specific tool to flash new firmware over WiFi. A guide to flashing the Sonoff Mini can be found <a href="https://www.youtube.com/watch?v=9fkYBWvwn4A">here</a>.</p>
<p>Furthermore, while PowerFull can be configured to work with a variety of devices via configurable MQTT messages, it is provided with a "Theme" that pre-sets all configuration values such that they are compatible with Tasmota.</p>
<h2 id="operation">Operation</h2>
<p>The diagram and associated notes below show how PowerFull interacts with SolarEdge.Monitor and the Sonoff Mini to control power states:</p>
<img src="../Content/TechAdventuresInSustainability-PartII/PowerFullOperation.png" class="img-responsive" style="margin: auto; width:60%; margin-top: 6px; margin-bottom: 6px;" alt="PowerFull Operation Flow">
<br/>
<ol>
<li>PowerFull begins in the <strong>Starting</strong> state in which it connects to the MQTT broker and subscribes to required topics (<code>Device.PowerOffRequestTopic</code>, <code>Device.PowerOnRequestTopic</code>, <code>Device.PowerStateRequestTopic</code>, <code>Device.PowerStateResponseTopic</code>, <code>Messaging.PowerReadingTopic</code>)</li>
<li>PowerFull transitions to the <strong>Initializing</strong> state</li>
<li>PowerFull requests the power state (by sending <code>Device.PowerStateRequestPayload</code> on <code>Device.PowerStateRequestTopic</code>) from all devices concurrently, waiting up to 10 seconds for each device's response (on the <code>Device.PowerStateResponseTopic</code>).</li>
<li>Broker forwards '<null>' on topic <code>cmnd/%deviceId%/POWER</code> to Sonoff device.</li>
<li>Sonoff responds by publishing current power state ("ON" or "OFF") on topic "stat/%deviceId%/POWER"</li>
<li>PowerFull uses <code>Device.PowerStateResponseOnPayloadRegex</code> and <code>Device.PowerStateResponseOffPayloadRegex</code> to determine the state of each device. Any device that doesn't respond within 10 seconds, or for which the response is not matched by either power state regex, is left in an 'Unknown' state and no further interaction is performed.</li>
<li>PowerFull transitions to the <strong>Running</strong> state.</li>
<li>SolarEdge.Monitor publishes regular power reading message to the "home/solar/meter1readings" topic.</li>
<li>Power reading messages are received by PowerFull's subscription to the <code>Messaging.PowerReadingTopic</code> and the current power reading is extracted from the payload of the message via the <code>Messaging.PowerReadingPayloadValueRegex</code>.</li>
<li>Power reading is averaged across <code>Service.AveragePowerReadingAcrossMinutes</code> minutes and if it is:</li>
</ol>
<ol type="a">
<li>above the <code>Service.ThresholdToTurnOnDeviceWatts</code> value then the <code>Device.PowerOnRequestPayload</code> is sent to the <code>Device.PowerOnRequestTopic</code> for the next device to be turned on; or</li>
<li>below the <code>Service.ThresholdToTurnOffDeviceWatts</code> value then the <code>Device.PowerOffRequestPayload</code> is sent to the <code>Device.PowerOffRequestTopic</code> for the next device to be turned off</li>
</ol>
<ol start="11">
<li>Broker forwards the payload "ON" or "OFF" on topic "cmnd/%deviceId%/POWER" to the Sonoff device, which turns its output on or off respectively.</li>
</ol>
<p>Steps 8-11 repeat until the service encounters a fault or is halted, at which point:</p>
<ol start="12">
<li>PowerFull transitions to a <strong>Faulted</strong> state where all subscriptions and resources are disposed</li>
<li>PowerFull transitions to the <strong>Stopped</strong> state where no further processing occurs</li>
</ol>
<p>* All terms in <code>Code Format</code> represent PowerFull configuration values. Configuration values can be specified on the command-line or via environment variables. You can see an example of the latter in the following section.</p>
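<p>In essence, step 10 boils down to a simple hysteresis check. Here's a minimal sketch of that decision (using hypothetical names - this is an illustration, not PowerFull's actual implementation):</p>
<pre><code class="language-c#">public enum PowerAction { None, TurnOnNextDevice, TurnOffNextDevice }

public static class ThresholdSketch
{
    // Average the recent readings and compare against the two thresholds;
    // readings between the thresholds leave device states unchanged.
    public static PowerAction Decide(
        IEnumerable<double> recentReadingsWatts,
        double thresholdToTurnOnDeviceWatts,
        double thresholdToTurnOffDeviceWatts)
    {
        var average = recentReadingsWatts.Average();

        if (average > thresholdToTurnOnDeviceWatts) return PowerAction.TurnOnNextDevice;
        if (average < thresholdToTurnOffDeviceWatts) return PowerAction.TurnOffNextDevice;
        return PowerAction.None;
    }
}
</code></pre>
<p>The gap between the two thresholds prevents devices being rapidly toggled on and off as readings fluctuate around a single cut-off.</p>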
<h2 id="docker">Docker</h2>
<p>As shown in <a href="http://ian.bebbs.co.uk/posts/TechAdventuresInSustainability-PartI">Part 1</a>, I use <a href="https://docs.docker.com/compose/">Docker Compose</a> to run my Smart Home infrastructure. Adding PowerFull was simply a case of adding a new service to my <code>docker-compose.yml</code> file as shown below:</p>
<pre><code class="language-yml">version: "3.2"
services:
# https://hub.docker.com/_/eclipse-mosquitto
mqtt:
image: eclipse-mosquitto
ports:
- "1883:1883"
- "9001:9001"
solaredgemonitor:
image: ibebbs/solaredge.monitor
environment:
- Solaredge:Monitor:Inverter:Address=192.168.2.23
- Solaredge:Monitor:Inverter:Port=502
- Solaredge:Monitor:MQTT:Address=mqtt
- Solaredge:Monitor:MQTT:Port=1883
- Solaredge:Monitor:MQTT:ClientId=InverterMonitor
- Solaredge:Monitor:MQTT:Topic=home/solar/inverter
- Solaredge:Monitor:Service:PollingIntervalSeconds=10
- Solaredge:Monitor:Service:ModelsToRead=inverter,meter1readings
depends_on:
- mqtt
powerfull:
image: ibebbs/powerfull
environment:
- PowerFull:Service:Devices=sonoff-battery
- PowerFull:Messaging:Broker=mqtt
- PowerFull:Messaging:PowerReadingTopic=home/solar/meter1readings
- PowerFull:Messaging:PowerReadingPayloadValueRegex=^{.+"RealPower":{"Total":(?<RealPower>-?\d+(\.\d+)).+}
- PowerFull:Device:Theme=Tasmota
depends_on:
- mqtt
</code></pre>
<h2 id="conclusion">Conclusion</h2>
<p>By adding rules to <a href="https://nodered.org/">NodeRed</a> to interpret power-on and power-off messages (see <a href="http://ian.bebbs.co.uk/posts/TechAdventuresInSustainability-PartI">Part 1</a> for an explanation of how NodeRed is used), we're able to monitor the effectiveness of this solution:</p>
<img src="../Content/TechAdventuresInSustainability-PartII/DevicePowerState.png" class="img-responsive" style="margin: auto; width:60%; margin-top: 6px; margin-bottom: 6px;" alt="Device Power State">
<p>As you can see, over the last 30 days PowerFull has allowed me to harness 9 days' (~216 hours) worth of electricity that would otherwise have been exported to the grid. Pretty cool!</p>
<p>If you have any questions or comments about PowerFull please feel free to drop me a line using the links below or from my <a href="https://ian.bebbs.co.uk/about">about page</a>.</p>
http://ian.bebbs.co.uk/posts/COduo-Part3Many platforms, one world - Part 32020-04-28T00:00:00Z<h2 id="intro">Intro</h2>
<p>This is part 3 of my series on using the Uno Platform to write a cross-platform app, able to target both single and dual-screen devices. In this post I cover the architecture of the CO<sub><em>duo</em></sub> app with the aim of providing an understanding of how its primary components interoperate to provide a robust and testable experience across multiple platforms and screen configurations.</p>
<p>For an introduction to CO<sub><em>duo</em></sub> or to find further posts in this series, please use the links below:</p>
<ul>
<li><a href="./COduo-Part1">Part 1 - Background</a></li>
<li><a href="./COduo-Part2">Part 2 - Infrastructure</a></li>
<li><a href="./COduo-Part3">Part 3 - Client Architecture</a></li>
<li><a href="./COduo-Part4">Part 4 - Using the TwoPaneView</a></li>
<li>Part 5 - Implementing the interactive UK Map</li>
<li>Part 6 - Charts on the Uno Platform</li>
<li>Part 7 - Windows, Win10X and releasing to the Microsoft Store</li>
<li>Part 8 - Android and releasing to the Google Play Store</li>
<li>Part 9 - iOS and releasing to the Apple App Store</li>
</ul>
<h2 id="architecture">Architecture</h2>
<p>In part 2 I presented the following diagram and discussed the service-side infrastructure components.</p>
<img src="/Content/CODuo/Infrastructure.png" class="img-responsive" style="margin: auto; width:60%; margin-top: 6px; margin-bottom: 6px;" alt="Infrastructure.png"/>
<p>This post will focus on the architectural components of the app. Again, this isn't specifically about how the Uno Platform was used to implement the app, so I will endeavour to keep these discussions at a high level. However, in order to understand how the app functions, I think it's important to understand its various components and interactions.</p>
<p>To do this we should first outline some of the conventions and libraries used within CO<sub><em>duo</em></sub> in order to facilitate further discussion around the actual implementation.</p>
<h2 id="conventions-libraries">Conventions & Libraries</h2>
<h3 id="fluent-namespacing">Fluent Namespacing</h3>
<p>CO<sub><em>duo</em></sub> employs "Fluent Namespacing", an introduction to which can be found in my blog post <a href="https://ian.bebbs.co.uk/posts/FluentNamespacing">here</a>. To summarise, Fluent Namespacing promotes the practice of grouping classes by functional domain, <em>not</em> functional pattern.</p>
<p>For example, the Application State Machine is a class named <code>Machine</code> in the <code>Application.State</code> namespace; therefore having a full-name of <code>Application.State.Machine</code>. This is in contrast to a conventional grouping of classes by functional pattern where - for example - there would typically be a class named <code>ApplicationStateMachine</code> in the <code>StateMachines</code> namespace.</p>
<p>While this might initially take some getting used to, as you examine the source code for CO<sub><em>duo</em></sub> you should hopefully see how this approach simplifies class names, eases navigation and promotes good practices.</p>
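<p>As a quick illustration (the types below are hypothetical, not taken from the CO<sub><em>duo</em></sub> source), the convention looks like this:</p>
<pre><code class="language-c#">// Illustrative only: classes are grouped by functional domain
// (Application.State), not by functional pattern (a StateMachines namespace)
namespace Application.State
{
    // The application state machine; referenced as State.Machine from within
    // the Application namespace, or Application.State.Machine from outside it
    public class Machine
    {
    }
}
</code></pre>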
<h3 id="reactive-extensions-mvvm-mvx.observable">Reactive Extensions, MVVM & MVx.Observable</h3>
<p>The <a href="https://github.com/dotnet/reactive">Reactive Extensions</a> (Rx) library is used throughout CO<sub><em>duo</em></sub> to implement many different types of component from <a href="#state-lifetime-management">State Machines</a> to the <a href="#communication">Event Bus</a>. One area where Rx shines particularly brightly however is as a means to write <a href="https://en.wikipedia.org/wiki/Reactive_programming">functional, declarative and reactive user interfaces</a>.</p>
<p>CO<sub><em>duo</em></sub> does just this by implementing <a href="https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93viewmodel">MVVM style ViewModels</a> as collections of <a href="https://ian.bebbs.co.uk/posts/ReactiveBehaviors">Reactive Behaviours</a> via the <a href="https://www.nuget.org/packages/MVx.Observable/">MVx.Observable</a> library.</p>
<p>If you are unfamiliar with Rx I would certainly suggest taking the time to learn about it. Not only will you understand more of how CO<sub><em>duo</em></sub> hangs together but, once you "get" it, I almost guarantee you'll start to see programming problems in a different light. Lee Campbell has a great introduction to Rx on his aptly named website <a href="http://introtorx.com/">"IntroToRx.com"</a>.</p>
<h2 id="implementation">Implementation</h2>
<h3 id="shared">99% Shared</h3>
<p>As can be seen from the diagram above, while CO<sub><em>duo</em></sub> comprises many "head" projects (i.e. UWP, Android, iOS, etc), all application code - except for a very small "Platform Services" layer - is shared across all platforms. This includes all state and application-lifetime management, navigation, data access and View/ViewModel implementations. I believe this is quite an achievement and speaks volumes about the potential for the Uno Platform to lower TCO when implementing and maintaining a cross-platform solution.</p>
<p>The "Platform Services" layer comprises a couple of interface implementations in each head project which provide platform-specific functionality. For example, the use of <a href="https://github.com/dotnet/reactive">Reactive Extensions</a> requires <a href="http://introtorx.com/Content/v1.0.10621.0/15_SchedulingAndThreading.html">IScheduler</a> implementations for correctly marshalling events to and from the platform's "UI thread". The implementation of (and access to) the correct IScheduler implementation is different for each platform so each head project contains an implementation of the <a href="https://github.com/ibebbs/CODuo/blob/master/src/CODuo/CODuo.Shared/Platform/ISchedulers.cs"><code>Platform.ISchedulers</code></a> interface. Shown below is the <a href="https://github.com/ibebbs/CODuo/blob/master/src/CODuo/CODuo.Droid/Platform/Schedulers.cs"><code>Platform.ISchedulers</code></a> implementation for Android:</p>
<pre><code class="language-c#">public class Schedulers : ISchedulers
{
    private static readonly Lazy<IScheduler> DispatchScheduler = new Lazy<IScheduler>(() => new SynchronizationContextScheduler(SynchronizationContext.Current));

    public IScheduler Default => Scheduler.Default;

    public IScheduler Dispatcher => DispatchScheduler.Value;
}
</code></pre>
<h3 id="views-view-models">Views & View Models</h3>
<p>As mentioned above, CO<sub><em>duo</em></sub> employs the MVVM pattern to separate GUI and business logic. Each View uses data-binding to declaratively bind information provided by the ViewModel to the various controls presented in the UI. The ViewModel uses <a href="https://ian.bebbs.co.uk/posts/ReactiveBehaviors">Reactive Behaviours</a> and <a href="https://www.nuget.org/packages/MVx.Observable/">MVx.Observable</a> properties to react to user interactions and changes in application state.</p>
<p>The example below shows how the current value for "Tonnes Of CO<sub>2</sub> per hour" is implemented in the <code>Home.ViewModel</code>:</p>
<pre><code class="language-c#">public class ViewModel : IViewModel, INotifyPropertyChanged
{
    private readonly Data.IProvider _dataProvider;
    private readonly Platform.ISchedulers _schedulers;
    ...
    private readonly MVx.Observable.Property<int> _selectedRegion;
    private readonly MVx.Observable.Property<Common.Period> _currentPeriod;
    private readonly MVx.Observable.Property<double> _tonnesOfCO2PerHour;

    public event PropertyChangedEventHandler PropertyChanged;

    public ViewModel(Data.IProvider dataProvider, Platform.ISchedulers schedulers)
    {
        _dataProvider = dataProvider;
        _schedulers = schedulers;
        ...
        _currentPeriod = new MVx.Observable.Property<Common.Period>(nameof(CurrentPeriod), args => PropertyChanged?.Invoke(this, args));
        _selectedRegion = new MVx.Observable.Property<int>(0, nameof(SelectedRegion), args => PropertyChanged?.Invoke(this, args));
        _tonnesOfCO2PerHour = new MVx.Observable.Property<double>(nameof(TonnesOfCO2PerHour), args => PropertyChanged?.Invoke(this, args));
        ...
    }

    private IDisposable ShouldRefreshTonnesOfCO2PerHourWhenPeriodOrSelectedRegionChanges()
    {
        return Observable
            // When the current value of either `_currentPeriod` or `_selectedRegion` changes ...
            .CombineLatest(_currentPeriod, _selectedRegion, (period, regionId) => period?.Regions
                // ... retrieve the data for the selected region from the current period ...
                .Where(region => region.RegionId == regionId)
                // ... and use this data to calculate Tonnes Of CO2 Per Hour
                .Select(region => (region.Estimated.TotalMW * MegaWattsToKiloWatts * region.Estimated.GramsOfCO2PerkWh) / GramsInAMetricTonne ?? 0.0)
                // ... returning the first value or 0
                .FirstOrDefault() ?? 0.0)
            // ... then move onto the UI thread
            .ObserveOn(_schedulers.Dispatcher)
            // ... and update the _tonnesOfCO2PerHour value with the value
            // calculated above causing the PropertyChanged event to be
            // raised for the `TonnesOfCO2PerHour` property
            .Subscribe(_tonnesOfCO2PerHour);
    }

    public IDisposable Activate()
    {
        return new CompositeDisposable(
            ...
            ShouldRefreshTonnesOfCO2PerHourWhenPeriodOrSelectedRegionChanges()
            ...
        );
    }

    ...

    public Common.Period CurrentPeriod
    {
        get { return _currentPeriod.Get(); }
    }

    public double TonnesOfCO2PerHour
    {
        get { return _tonnesOfCO2PerHour.Get(); }
    }

    public int SelectedRegion
    {
        get { return _selectedRegion.Get(); }
        set { _selectedRegion.Set(value); }
    }

    ...
}
</code></pre>
<p>As you can see, all source data and logic for implementing this behaviour is wrapped into a single, appropriately named method called 'ShouldRefreshTonnesOfCO2PerHourWhenPeriodOrSelectedRegionChanges'. While the code in this method should be comprehensible to anyone fluent with LINQ extension-method syntax, it has been annotated for clarity.</p>
<p>This pattern is repeated for each behaviour the ViewModel is required to implement.</p>
<h3 id="application-navigation-state">Application & Navigation State</h3>
<p>Similar to how we employ MVVM to separate view and business logic, I find it beneficial to separate view logic from application/navigation logic, which are all too often conflated. Doing this brings benefits similar to those of adopting MVVM in the view layer (i.e. simplified logic, enhanced testability, etc) to the application layer.</p>
<p>As such, application state and navigation state are managed by dedicated <a href="https://en.wikipedia.org/wiki/Finite-state_machine">state machines</a>. These are implemented as <a href="https://ian.bebbs.co.uk/posts/ReactiveStateMachines">Reactive State Machines</a> and designed to mirror lifetime and navigation states in the app. Stateful application and navigation data is passed between states via a mutable <code>Application.Aggregate.Root</code>.</p>
<p>These approaches allow the app to elegantly manage lifetime events such as the app being suspended / resumed and to transparently restore navigation state and data.</p>
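<p>To sketch the shape of the pattern (this assumes the System.Reactive package and is a deliberate simplification, not CO<sub><em>duo</em></sub>'s actual implementation), a reactive state machine can be reduced to states which each emit their successor:</p>
<pre><code class="language-c#">// Sketch only: requires the System.Reactive NuGet package
using System;
using System.Reactive.Disposables;
using System.Reactive.Linq;

public interface IState
{
    // Runs the state; the returned observable emits the next state to enter
    IObservable<IState> Enter();
}

public static class StateMachine
{
    // Subscribes to the initial state and, each time a state emits its
    // successor, disposes the previous subscription and enters the new state
    public static IDisposable Run(IState initial)
    {
        var subscription = new SerialDisposable();
        void Enter(IState state) =>
            subscription.Disposable = state.Enter().Take(1).Subscribe(Enter);
        Enter(initial);
        return subscription;
    }
}
</code></pre>
<p>Disposing the returned <code>IDisposable</code> tears down whichever state is currently active, which is what makes suspend/resume handling straightforward.</p>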
<p>Here is CO<sub><em>duo</em></sub>'s current state diagram:</p>
<img src="/Content/CODuo/StateChart.png" class="img-responsive" style="margin: auto; margin-top: 6px; margin-bottom: 6px;" alt="COduo State Diagram"/>
<h3 id="communication">Communication</h3>
<p>All communication between disparate app components (for example between the state-machine and a view model) occurs via events published through an Event Bus. This promotes decoupling by ensuring that the component that raises an event requires no knowledge of a component which might consume the event, and vice versa.</p>
<h3 id="data">Data</h3>
<p>Data for the application is retrieved and deserialized by the <code>Data.Provider</code>. The <code>Data.Provider</code> sets up an Rx subscription to acquire new data every 15 minutes or whenever a <code>Data.Requested</code> event is received from the event bus. This data is exposed to the rest of the application as an <code>IObservable<></code> which has been designed to immediately return the current value whenever a new consumer subscribes.</p>
<p>The <code>Data.Provider</code> starts fetching data when the <code>Activate</code> method is called and will continue to fetch data - regardless of whether there are currently any subscribers - until the <code>IDisposable</code> result of the <code>Activate</code> method is disposed. This ensures data is immediately available to ViewModels when they need it (i.e. after navigation) and allows data acquisition to be correctly managed through Suspend/Resume transitions.</p>
<h2 id="source-code">Source Code</h2>
<p>You can find the source code for CO<sub><em>duo</em></sub> in my <a href="https://github.com/ibebbs/CODuo">Github repository</a>. Should you like or use it, please take the time to "star" the repository; it's a small gesture which really fuels developers' enthusiasm for projects such as these.</p>
<h2 id="part-4">Part 4</h2>
<p>Now we understand how the application hangs together, in <a href="./COduo-Part4">Part 4</a> I will detail how to set up an Uno Platform solution so that you're able to use a <code>TwoPaneView</code> control, and how the <code>TwoPaneView</code> control is used within CO<sub><em>duo</em></sub>.</p>
<h2 id="finally">Finally</h2>
<p>I hope you enjoy this series and that it goes some way to demonstrating the massive potential presented by the Uno Platform for delivering cross-platform experiences without having to invest in additional staff training nor bifurcating your development efforts.</p>
<p>If you or your company are interested in building apps that can leverage the dual screen capabilities of new devices such as the Surface Duo and Surface Neo, or are keen to understand how a single code-base can deliver apps to <em>every platform from mobile phones to web sites</em>, then please feel free to drop me a line using any of the links below or from my <a href="https://ian.bebbs.co.uk/about">about page</a>. I am actively seeking new clients in this space and would be happy to discuss any ideas you have or projects you're planning.</p>
http://ian.bebbs.co.uk/posts/COduo-Part2Many platforms, one world - Part 22020-04-23T00:00:00Z<h2 id="intro">Intro</h2>
<p>This is part 2 of my series on using the Uno Platform to write a cross-platform app, able to target both single and dual-screen devices. In this post I cover the infrastructure used to collate and aggregate the data used by CO<sub><em>duo</em></sub> as a prelude to a deeper dive into the implementation of the app itself which I will cover in later posts.</p>
<p>Here are links to all the posts I have written - or intend to write - for this series:</p>
<ul>
<li><a href="./COduo-Part1">Part 1 - Background</a></li>
<li><a href="./COduo-Part2">Part 2 - Infrastructure</a></li>
<li><a href="./COduo-Part3">Part 3 - Client Architecture</a></li>
<li><a href="./COduo-Part4">Part 4 - Using the TwoPaneView</a></li>
<li>Part 5 - Implementing the interactive UK Map</li>
<li>Part 6 - Charts on the Uno Platform</li>
<li>Part 7 - Windows, Win10X and releasing to the Microsoft Store</li>
<li>Part 8 - Android and releasing to the Google Play Store</li>
<li>Part 9 - iOS and releasing to the Apple App Store</li>
</ul>
<h2 id="infrastructure">Infrastructure</h2>
<p>While considering how to implement CO<sub><em>duo</em></sub>, I needed to ensure the app could retrieve all the data it required quickly, efficiently, securely and - most importantly - cheaply. As such, I decided to introduce service infrastructure that would perform all the required data collation, aggregation and serialization such that the app merely had to retrieve a single file from a known URI.</p>
<p>Here is a mile-high view of the infrastructure used to operate CO<sub><em>duo</em></sub> and the architecture of the app's various components:</p>
<img src="/Content/CODuo/Infrastructure.png" class="img-responsive" style="margin: auto; width:60%; margin-top: 6px; margin-bottom: 6px;" alt="Infrastructure.png"/>
<p>As the primary focus of this series of posts is the Uno Platform I won't be digging into the service-side components too deeply but I feel it's important to show how the infrastructure delivers on the requirements above in order to understand how this simplifies the app's implementation.</p>
<h2 id="serverless">Server[less]</h2>
<p>Fundamentally, the infrastructure is provided by two timer-triggered <a href="https://azure.microsoft.com/en-us/services/functions/">Azure Functions</a>: "Weather Collection" and "Energy Aggregation". These 'serverless' functions collate, process and store all the data required by the app, greatly simplifying client data access.</p>
<p>Here's the (<a href="https://docs.microsoft.com/en-us/azure/azure-monitor/app/app-map?tabs=net">Application Insights</a> generated) application map:</p>
<img src="/Content/CODuo/AzureFunctionsApplicationMap.png" class="img-responsive" style="margin: auto; width:60%; margin-top: 6px; margin-bottom: 6px;" alt="Azure Functions Application Map"/>
<h3 id="weather-collection">Weather Collection</h3>
<pre><code class="language-c#">[FunctionName("WeatherV1")]
public static async Task Weather(
    [TimerTrigger(WeatherNormal)] TimerInfo timer,
    [CosmosDB(databaseName: CosmosDatabase, collectionName: WeatherCollection, ConnectionStringSetting = CosmosConnectionStringKey)] IAsyncCollector<Weather.Common.Document> documentsOut,
    ILogger log)
</code></pre>
<p>The Weather Collection function is triggered every hour and retrieves data from the <a href="https://metoffice.apiconnect.ibmcloud.com/metoffice/production/">Met Office Weather Data Hub</a>. It collects 48 hours' worth of forecast data for each of 14 locations around the UK (one city in each of the 14 <a href="https://www.ovoenergy.com/guides/energy-guides/dno.html">Distribution Network Operator regions</a>) then transposes this to generate weather data for each hour containing the forecast in each region.</p>
<p>This was done for many reasons but mostly to provide numerous small, easily indexed documents that can be cheaply written to, read from and updated within Cosmos DB. This has worked well: each hour, documents are saved to a <a href="https://azure.microsoft.com/en-us/updates/azure-cosmos-db-free-tier-is-now-available/">free tier CosmosDB container</a>.</p>
<p>Persisting these documents does occasionally exceed the free tier's 400 RU/s quota, which means writes to Cosmos need to be retried until they succeed. While the retries are transparently handled by the SDK, they cause the function to run longer than it otherwise would and, as such, I will probably modify the function to only persist 24 hours' worth of forecast data in the next version.</p>
<h3 id="energy-aggregator">Energy Aggregator</h3>
<pre><code class="language-c#">[FunctionName("EnergyV1")]
public static async Task Energy(
    [TimerTrigger(EnergyNormal)] TimerInfo timer,
    [CosmosDB(
        databaseName: CosmosDatabase,
        collectionName: WeatherCollection,
        ConnectionStringSetting = CosmosConnectionStringKey)] DocumentClient client,
    [Blob(EnergyOutputFile, FileAccess.Write, Connection = EnergyStorage)] Stream blob,
    ILogger log)
</code></pre>
<p>The Energy Aggregation function runs every 15 minutes and requests electricity generation and composition information from a few different APIs, most notably Elexon's <a href="https://www.elexon.co.uk/knowledgebase/what-is-bmreports-com/">Balancing Mechanism Reporting Service</a>. This is collated with weather data generated by the Weather Collection function then aggregated and serialized into a JSON document easily consumed by the CO<sub><em>duo</em></sub> client application.</p>
<p>The serialized document is then persisted in a publicly accessible <a href="https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-storage-tiers?tabs=azure-portal">'Hot' Azure Blob</a> meaning the client application can retrieve it with a single, unauthenticated HTTPS request.</p>
<h3 id="conclusion">Conclusion</h3>
<p>At current levels (and using the CosmosDB free tier) it costs less than £1 per month to run this infrastructure, with only small increases (due to bandwidth costs) as application usage scales. As such, I feel it satisfies CO<sub><em>duo</em></sub>'s requirements very neatly. Furthermore, Visual Studio's impressive tooling for developing and testing Azure Functions locally (including local emulators of all storage) streamlines the delivery of features and regression testing of changes such that I've been able to iterate on this project extremely quickly.</p>
<h3 id="more-information">More information</h3>
<p>I've deliberately kept this post at a "mile-high" level as the series is focused on the use of the Uno Platform to deliver a cross platform application. However, if you're keen to understand more of how these service-side components operate then drop me a line (contact links at the bottom of the page) and, if enough people are interested, I'll write a blog post detailing these approaches further.</p>
<h2 id="part-3">Part 3</h2>
<p><a href="./COduo-Part3">Part 3</a> will examine the architecture of CO<sub><em>duo</em></sub> with the aim of providing an understanding of how its primary components interoperate to provide a robust and testable experience across multiple platforms and dual-screens.</p>
<h2 id="finally">Finally</h2>
<p>I hope you enjoy this series and that it goes some way to demonstrating the massive potential presented by the Uno Platform for delivering cross-platform experiences without having to invest in additional staff training nor bifurcating your development efforts.</p>
<p>If you or your company are interested in building apps that can leverage the dual screen capabilities of new devices such as the Surface Duo and Surface Neo, or are keen to understand how a single code-base can deliver apps to <em>every platform from mobile phones to web sites</em>, then please feel free to drop me a line using any of the links below or from my <a href="https://ian.bebbs.co.uk/about">about page</a>. I am actively seeking new clients in this space and would be happy to discuss any ideas you have or projects you're planning.</p>
http://ian.bebbs.co.uk/posts/COduo-Part1Many platforms, one world - Part 12020-04-19T00:00:00Z<h2 id="tldr">TL;DR</h2>
<p>This is part 1 of a series of posts in which I chronicle how the Uno Platform was used to write an app which runs natively on all major platforms and naturally on modern dual-screen devices (such as the forthcoming Surface Neo and Surface Duo). I will endeavour to detail how the Uno Platform makes it possible to achieve "99% shared code" across operating system and form-factor, all without having to leave the comfort of basic C# nor needing to learn a new dialect of XAML. And finally, through the app, I hope to provide the means to better understand - and help mitigate - the impact our energy usage is having on the environment.</p>
<h2 id="part-1">Part 1</h2>
<p>In this post I cover the app's conceptualization, why I chose to implement it using the Uno Platform, how you can get the app for your device and where you can examine its source code. Later posts detail the various conundrums of designing, implementing and deploying an app targeting multiple platforms using the Uno Platform.</p>
<p>Below is a (preliminary) list of posts I intend to write. It will be updated as each post is completed and published:</p>
<ul>
<li><a href="./COduo-Part1">Part 1 - Background</a></li>
<li><a href="./COduo-Part2">Part 2 - Infrastructure</a></li>
<li><a href="./COduo-Part3">Part 3 - Client Architecture</a></li>
<li><a href="./COduo-Part4">Part 4 - Using the TwoPaneView</a></li>
<li>Part 5 - Implementing the interactive UK Map</li>
<li>Part 6 - Charts on the Uno Platform</li>
<li>Part 7 - Windows, Win10X and releasing to the Microsoft Store</li>
<li>Part 8 - Android and releasing to the Google Play Store</li>
<li>Part 9 - iOS and releasing to the Apple App Store</li>
</ul>
<h2 id="background">Background</h2>
<p>Back in January I wrote a <a href="https://ian.bebbs.co.uk/posts/UnoDuoHey">blog post</a> showing how the <a href="https://platform.uno/">Uno Platform</a> could be used to write native, cross-platform apps that can leverage the unique UX opportunities afforded by dual and multi-screen devices such as the forthcoming Surface Duo and Surface Neo. This article was received well and the Uno Platform team dropped me a line after reading it suggesting that, if I could develop the PoC into a "real app", they'd feature it on their <a href="https://platform.uno/showcases/">showcases page</a>. This seemed like a great idea but, as I was in the middle of a project at the time and couldn't immediately think of an app I wanted to write, I thanked them and left it there...</p>
<p>Until, that is, I read that <a href="https://blogs.microsoft.com/blog/2020/01/16/microsoft-will-be-carbon-negative-by-2030/">Microsoft had committed to going carbon negative by 2030</a>. As regular readers of my blog will know, I have a penchant for <a href="https://ian.bebbs.co.uk/posts/TechAdventuresInSustainability-PartI">using technology to help promote sustainable living</a> and thought an app combining this with Microsoft's current focus on dual-screen devices could be the showcase app the Uno Platform team were looking for.</p>
<p>And so it was that CO<sub><em>duo</em></sub> came to be:</p>
<img src="/Content/CODuo/RunningOnSurface.png" class="img-responsive" style="margin: auto; width:80%; margin-top: 6px; margin-bottom: 6px;" alt="Running On Surface.png"/>
<blockquote class="blockquote">
<p>An early version of CO<sub><em>duo</em></sub> running on Surface Pro (Windows 10), Surface Duo (Android 10) and Surface Neo (Windows 10X)</p>
</blockquote>
<h2 id="so-what-is-coduo">So what is CO<sub><em>duo</em></sub>?</h2>
<p>CO<sub><em>duo</em></sub> is an app which presents data about electricity generation and carbon emissions across the UK in a user-friendly way.</p>
<p>With CO<sub><em>duo</em></sub> I wanted to not only increase people's awareness of the impact their energy usage was having on the environment - particularly the CO<sub>2</sub> emissions - but also empower them to change their energy usage in ways which might help mitigate this impact. In short, my design goals could be summarised with the following two user stories:</p>
<blockquote class="blockquote">
<p>"As a domestic user of electricity, I need to understand the impact my energy usage has on the environment so that I am incentivized to change this usage"</p>
</blockquote>
<blockquote class="blockquote">
<p>"As a domestic user of electricity, I need to understand how I can change my energy usage so that its impact on the environment is minimized".</p>
</blockquote>
<p>I started this project by searching for appropriate sources of data and was pleased to find that, for the UK at least, there were numerous free - and extremely detailed - public APIs available. I then spent some time prototyping a data visualisation that could show the carbon intensity of current and forecast energy generation and illustrate when might be best to use energy-intensive appliances (i.e. washing machines, dish washers, tumble dryers, etc).</p>
<p>Using Syncfusion's Essential Studio I got the below working in a UWP app in a single evening:</p>
<img src="/Content/CODuo/Prototype.png" class="img-responsive" style="margin: auto; width:80%; margin-top: 6px; margin-bottom: 6px;" alt="Prototype of COduo"/>
<p>"Great", I thought, "Now to make it run across every platform, on every screen and in every configuration. How difficult can it be?".</p>
<p>Well, let's find out.</p>
<h2 id="why-the-uno-platform">Why the Uno Platform?</h2>
<p>This is my third blog post about the Uno Platform. The first two - <a href="https://ian.bebbs.co.uk/posts/Uno">The Seven GUIs of Christmas</a> about Uno's cross-platform capabilities & <a href="https://ian.bebbs.co.uk/posts/UnoDuoHey">Uno, Duo, Hey!</a> about Uno's dual-screen capabilities - showed a platform that had incredible potential and which was rapidly maturing to the point where it could deliver on this potential for "real world" apps.</p>
<p>Given I wanted to write an app that would work natively on both the Surface Duo - which runs Android - and Surface Neo - which runs Windows 10X - the Uno Platform was an obvious choice as it would reduce my technology stack from this:
<img src="/Content/CODuo/MultipleApps.png" class="img-responsive" style="margin: auto; width:50%; margin-top: 6px; margin-bottom: 6px;" alt="Multiple Apps"/></p>
<p>To this:
<img src="/Content/CODuo/UnoAllTheThings.png" class="img-responsive" style="margin: auto; width:50%; margin-top: 6px; margin-bottom: 6px;" alt="Uno All The Things!"/></p>
<p>As we will see in the following series, this choice really has paid dividends. In fact, it has been so successful that I feel I must issue a correction:</p>
<p>My first blog post about the Uno Platform - <a href="https://ian.bebbs.co.uk/posts/Uno">The Seven GUIs of Christmas</a> - contained the following:</p>
<pre><code>The Uno platform is, somewhat amazingly, able to display (almost) the exact same XAML page across multiple platforms (or 'heads' to use Uno parlance) with a very high degree of fidelity. This is quite an achievement and the team at nventive are rightly proud of this capability.
However, from the perspective of someone looking to write large applications on this platform, I don't believe this facility is particularly important nor - to a certain extent - even desirable. You see, in my experience, it is often the case that each platform and/or form-factor requires such different UI and/or UX that trying to shoe-horn everything into a single XAML page results in a page that is difficult, if not impossible, to maintain.
</code></pre>
<p>In contrast to this statement, CO<sub><em>duo</em></sub> has been written with a single code-base - from infrastructure through to view-models <em>and</em> views - shared across all devices. This has led to neither code bloat nor maintainability issues due, in most part, to the Uno Platform's faithful reproduction of a few key UWP tenets.</p>
<p>To explain: Whereas previously I had been used to writing cross-platform apps using multiple different display technologies, the Uno Platform is just UWP and UWP was designed to be... well... universal. Out of the gate, UWP ran on everything from desktop PCs and tablets through to mobile phones and IoT devices. It successfully abstracted away many technical difficulties of designing for multiple platforms ensuring the developer was able to write a single "adaptive" UI which would then be able to capitalize on the display surface(s) available.</p>
<p>Uno have very successfully reproduced this capability across multiple disparate platforms and while not quite pixel-perfect - as we will see in future posts - it is close enough that any differences can be smoothed over with a little creative design.</p>
<h2 id="which-platforms-does-coduo-run-on">Which platforms does CO<sub><em>duo</em></sub> run on?</h2>
<p>CO<sub><em>duo</em></sub> currently runs on the following platforms:</p>
<ul>
<li>Via UWP
<ul>
<li>Windows 10 PC</li>
<li>Windows 10 Tablet</li>
<li>Windows 10 Mobile/Phone</li>
<li>Windows 10 IoT</li>
<li>XBox One</li>
<li>Hololens</li>
<li>Surface Hub</li>
<li>Windows 10X PC/Tablet (i.e. Surface Neo)</li>
</ul>
</li>
<li>Via Android (Oreo - version 8 - or above)
<ul>
<li>Android Phone</li>
<li>Android Tablet</li>
<li>Android TV</li>
<li>Dual-Screen Android Devices (i.e. Surface Duo)</li>
</ul>
</li>
</ul>
<p>Furthermore, CO<sub><em>duo</em></sub> will be updated to run on the following platforms when time and resources allow:</p>
<ul>
<li>Via iOS
<ul>
<li>iPhone</li>
<li>iPad</li>
</ul>
</li>
<li>Via WebAssembly
<ul>
<li>Any <a href="https://en.wikipedia.org/wiki/WebAssembly">WebAssembly compatible browser</a></li>
</ul>
</li>
</ul>
<h2 id="where-can-i-get-coduo">Where can I get CO<sub><em>duo</em></sub>?</h2>
<p>Beta versions of CO<sub><em>duo</em></sub> are currently available in the following app stores:</p>
<ul>
<li><a href="https://www.microsoft.com/en-gb/p/coduo/9php2cf3z997">Microsoft Store</a> - for PC, Tablet, XBox, Hololens and Surface Hub</li>
<li><a href="https://play.google.com/store/apps/details?id=solutions.onecog.coduo">Google Play</a> - for Android Phone, Tablet, and TV.</li>
<li>Apple App Store - Coming soon</li>
</ul>
<p>As promised, the Uno Platform Team have also featured CO<sub><em>duo</em></sub> on their <a href="https://platform.uno/showcases/">showcases page</a> and as part of their introduction to using the <a href="https://platform.uno/surface-duo-neo/">Uno Platform for Surface Duo and Surface Neo</a>.</p>
<h2 id="will-you-be-open-sourcing-coduo">Will you be open-sourcing CO<sub><em>duo</em></sub>?</h2>
<p>Yes, <em>mostly</em>. In addition to detailing lots of the design and implementation considerations that went into writing CO<sub><em>duo</em></sub> in various posts for this series, the code for CO<sub><em>duo</em></sub> has been published under a "shared source" license; specifically <a href="https://www.gnu.org/licenses/gpl-3.0.en.html">GPLv3</a> with the <a href="https://commonsclause.com/">"Commons Clause"</a>.</p>
<blockquote class="blockquote">
<p>The Commons Clause is a license condition drafted by Heather Meeker that applies a narrow, minimal-form commercial restriction on top of an existing open source license to transition the project to a source-availability licensing scheme. The combined text replaces the existing license, allowing all permissions of the original license to remain except the ability to "Sell" the software as defined in the text.</p>
</blockquote>
<p>It is my hope that transitioning the CO<sub><em>duo</em></sub> project to a "source-available" license scheme will allow others to understand how to use the Uno Platform to develop a cross-platform app without me having to worry about numerous clones of the app appearing in various app stores laden with ads.</p>
<p>The source code for CO<sub><em>duo</em></sub> can be found in my <a href="https://github.com/ibebbs/CODuo">COduo repository on Github</a>. If you like or use the source-code, please take the time to "star" the repository; it's a small gesture which really fuels developers' enthusiasm for projects such as these.</p>
<h2 id="when-will-new-posts-about-coduo-be-made-available">When will new posts about CO<sub><em>duo</em></sub> be made available?</h2>
<p>I intend to write/release a new post in the series every few days.</p>
<p>My plan is for parts 2 and 3 to provide a high level overview of the service-side infrastructure and client app architecture respectively. These posts won't specifically discuss the Uno Platform but will instead provide insight into how the project was designed and how this design simplifies the development of a cross-platform app such as CO<sub><em>duo</em></sub>.</p>
<p>After parts 2 and 3 I will be diving into the various considerations of using the Uno Platform to deliver a cross-platform app. Part 4 will detail how to use the TwoPaneView to develop an app that runs natively on dual-screen devices and part 5 onwards will discuss how other UI components were implemented.</p>
<p>I will round out the series by highlighting platform differences you need to be aware of while using the Uno Platform and my experience of deploying CO<sub><em>duo</em></sub> to each of the various app stores.</p>
<h2 id="finally">Finally</h2>
<p>I hope you enjoy this series and that it goes some way to demonstrating the massive potential presented by the Uno Platform for delivering cross-platform experiences without having to invest in additional staff training or bifurcate your development efforts.</p>
<p>If you or your company are interested in building apps that can leverage the dual screen capabilities of new devices such as the Surface Duo and Surface Neo, or are keen to understand how a single code-base can deliver apps to <em>every platform from mobile phones to web sites</em>, then please feel free to drop me a line using any of the links below or from my <a href="https://ian.bebbs.co.uk/about">about page</a>. I am actively seeking new clients in this space and would be happy to discuss any ideas you have or projects you're planning.</p>
<p>This is part 1 of a series of posts in which I chronicle how the Uno Platform was used to write an app which runs natively on all major platforms and naturally on modern dual-screen devices (such as the forthcoming Surface Neo and Surface Duo). I will endeavour to detail how the Uno Platform makes it possible to achieve "99% shared code" across operating system and form-factor, all without having to leave the comfort of basic C# or needing to learn a new dialect of XAML. And finally, through the app, I hope to provide the means to better understand - and help mitigate - the impact our energy usage is having on the environment.</p>http://ian.bebbs.co.uk/posts/UnoDuoHeyUno, Duo, Hey!2020-01-24T00:00:00Z<h2 id="intro">Intro</h2>
<p>Last December I wrote <a href="https://ian.bebbs.co.uk/posts/Uno">a blog post</a> called "The Seven GUIs of Christmas" as part of the <a href="https://crosscuttingconcerns.com/The-Third-Annual-csharp-Advent">Third Annual C# Advent</a> series. This post showed the use of the <a href="https://platform.uno/">Uno Platform</a> to write cross-platform apps in UWP. One of the major drivers behind this blog post was a desire to write apps for Microsoft's <a href="https://news.microsoft.com/october-2-2019/">recently announced Surface Neo and Surface Duo devices</a> which run Windows 10X and Android respectively. Well, a couple of days ago, Microsoft finally released a <a href="https://blogs.windows.com/windowsdeveloper/2020/01/22/announcing-dual-screen-preview-sdks-and-microsoft-365-developer-day/">preview SDK for the Surface Duo</a> which included an Android Emulator with a preview Surface Duo image. Today I finally got a chance to see whether the Uno Platform really could deliver on these new form-factors.</p>
<h2 id="installing-the-emulator">Installing the Emulator</h2>
<p>If, like me, you don't have Android Studio installed and/or you want to install the Surface Duo SDK in a non-standard location (my super-speedy Intel Optane 900P C:\ drive is getting a little crowded!), you're going to face issues running the emulator. This is mostly due to the <code>run.bat</code> file used to launch the emulator not looking in the correct location for the Android SDK and not supporting installation of the Surface Duo SDK in a path that contains spaces.</p>
<p>If you're encountering issues launching the emulator, navigate to the <code>artifacts</code> directory within the Surface Duo SDK installation directory and edit the <code>run.bat</code> file to the following:</p>
<pre><code class="language-cmd">@echo off
rem ##### ENSURE THE SDK LOCATION BELOW IS CORRECT: #######
set ANDROID_SDK_LOCATION=C:\Program Files (x86)\Android\android-sdk
rem ############ DO NOT Modify below this line ############
set DIRNAME=%~dp0
if "%DIRNAME%" == "" set DIRNAME=.\
echo %DIRNAME%
rem Check if emulator is installed
set EMULATOR=%ANDROID_SDK_LOCATION%\emulator\emulator.exe
echo "%EMULATOR%"
if exist "%EMULATOR%" (
set ANDROID_PRODUCT_OUT=%DIRNAME%
"%EMULATOR%" -verbose -accel auto %* -sysdir "%DIRNAME%\bin" -kernel "%DIRNAME%\bin\kernel-ranchu" -datadir "%DIRNAME%\bin\data" -initdata "%DIRNAME%\bin\userdata.img" -vendor "%DIRNAME%\bin\vendor-qemu.img" -system "%DIRNAME%\bin\system-qemu.img" -initdata "%DIRNAME%\bin\userdata.img" -data "%DIRNAME%\bin\userdata.img"
) else (
echo "Can't find emulator executable, make sure it's installed"
)
</code></pre>
<p>TBH, the changes are mostly just encapsulating paths within quotes but hopefully this'll save you a little time.</p>
<p>Hopefully now, when you launch the emulator, you'll be greeted by this:</p>
<img src="/Content/UnoDuoHey/DuoEmulator.png" class="img-responsive" style="margin: auto; width:50%; margin-top: 6px; margin-bottom: 6px;" alt="Android Duo Emulator"/>
<p>Hmm... dual screens!</p>
<h2 id="getting-started">Getting Started</h2>
<p>Microsoft have done a great job of helping developers get started on this platform by supplying some great <a href="https://docs.microsoft.com/en-gb/dual-screen/android/">code-snippets and samples</a> in both <a href="https://github.com/microsoft/surface-duo-sdk-samples">Java</a> and <a href="https://github.com/microsoft/surface-duo-sdk-xamarin-samples">C# (using the Xamarin platform)</a>. Furthermore, the emulator "just works" with the Visual Studio IDE such that, once running, it appears as a standard deployment target allowing you to quickly get apps running within the Surface Duo image.</p>
<img src="/Content/UnoDuoHey/VisualStudioTargettingDuoEmulator.png" class="img-responsive" style="margin: auto; width:50%; margin-top: 6px; margin-bottom: 6px;" alt="VisualStudio Targetting The Duo Emulator"/>
<h2 id="cross-platform-dual-screen">Cross-Platform Dual-Screen</h2>
<p>My first priority with Uno was to make sure I could correctly interpret when the app was running on a single screen or across both screens. To do this, I took a look at the Xamarin samples and quickly saw that they used a <code>ScreenHelper</code> class to collate information on the current state of the app. This class is provided as part of the (very new - just two days old at time of writing!) <a href="https://www.nuget.org/packages/Xamarin.DuoSdk/0.0.3.2">Xamarin.DuoSdk nuget package</a>. Fortunately, when running on Android (or iOS), Uno runs on top of Xamarin meaning I could just add a reference to this package from the <code>Droid</code> head project of my Uno solution and start using this class right away.</p>
<p>The main functions of the <code>ScreenHelper</code> class were abstracted behind an <code>IDeviceHelper</code> interface so that each head project could provide a platform-specific implementation, with a small shim written around the <code>ScreenHelper</code> class to satisfy this interface. Finally, to provide responsiveness to changes, I again used my <a href="https://www.nuget.org/packages/MVx.Observable/">MVx.Observable nuget package</a> to dynamically call <code>IDeviceHelper</code> members and update properties on a view model whenever the application changed modes.</p>
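<p>As a rough sketch of this abstraction (note: the <code>IDeviceHelper</code> members and the <code>ScreenHelper</code> call shown here are illustrative, not an exact reproduction of the app's code or of the Xamarin.DuoSdk API):</p>
<pre><code class="language-cs">// Platform-agnostic abstraction consumed by the shared view models.
public interface IDeviceHelper
{
    bool IsDualScreenDevice { get; }
    bool IsSpanned { get; }
}

// Shim implemented in the Droid head project around the
// Xamarin.DuoSdk ScreenHelper class (member names may differ).
public class DuoDeviceHelper : IDeviceHelper
{
    private readonly ScreenHelper _screenHelper;

    public DuoDeviceHelper(ScreenHelper screenHelper)
    {
        _screenHelper = screenHelper;
    }

    public bool IsDualScreenDevice => true;

    public bool IsSpanned => _screenHelper.IsDualMode;
}
</code></pre>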
<p>In very short order, I had this working:</p>
<iframe width="560" height="315" style="margin: auto; width:50%; margin-top: 6px; margin-bottom: 6px;" src="https://www.youtube.com/embed/MBPo9GvnX-Q" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
<p>Just to highlight: this is completely standard UWP / C# code, running <em>unchanged</em> on a dual-screen Android device.</p>
<p>A few comments / caveats:</p>
<ol>
<li>The loading time of the Uno app in the emulator was due to the app being run as a debug build directly from the Visual Studio IDE and is not indicative of Uno Platform app start times.</li>
<li>The app disappearing when switching between screens or between single and dual screen modes is not due to the Uno Platform; this happens with apps that come as part of the Duo image.</li>
<li>Occasionally, when switching between single and dual screen modes, the app will just disappear. Again, this is nothing to do with the Uno Platform and happens with apps that come as part of the Duo image.</li>
</ol>
<h2 id="conclusion">Conclusion</h2>
<h3 id="surface-duo">Surface Duo</h3>
<p>While the Surface Duo Android Emulator image is undoubtedly rough around the edges (it is, after all, a preview) it manages to provide a tantalising taste of what using dual-screen devices could be like. Indeed, just running the Contacts and Calendar apps side-by-side boggles the mind with possible interactions between the two. Furthermore Microsoft have, in relatively short order, delivered a preview SDK from which it is possible to start developing new dual-screen apps or enhance existing apps to take advantage of a second screen. Exciting times!</p>
<h3 id="uno-platform">Uno Platform</h3>
<p>Per my experience while writing "The Seven GUIs of Christmas" post, the Uno Platform has continued to perform admirably and shows great promise for writing apps that will run natively across platforms <strong>and</strong> on dual screens. The only issue I had with Uno while writing the app above was the use of a "Shared Project" to share the Xaml/ViewModel between the various head projects. This approach (which <a href="https://ian.bebbs.co.uk/posts/Uno#six-points-opining">I recommended against</a> in my previous post) resulted in Visual Studio stubbornly refusing to show the Xaml editor and countless errors being shown in the error window despite everything compiling and running fine.</p>
<h3 id="code">Code</h3>
<p>All code for this post can be found in my <a href="https://github.com/ibebbs/UnoDuoHey">UnoDuoHey repository</a> on Github.</p>
<h2 id="lastly">Lastly...</h2>
<p>I am currently eager to find potential new clients interested in using the Uno Platform to deliver cross-platform apps and those looking to capitalise on the amazing potential of dual-screen devices in particular. If this sounds like you or your company, please feel free to drop me a line to discuss your project/ideas using any of the links below or from my <a href="https://ian.bebbs.co.uk/about">about page</a>.</p>
http://ian.bebbs.co.uk/posts/LessReSTMoreHotChocolateLess ReST, more Hot Chocolate2020-01-08T00:00:00Z<h2 id="intro">Intro</h2>
<p>A project I'm working on requires a microservice-like evaluation environment. A brief google revealed very little that would suffice, so I decided to quickly knock up my own. At the same time, I thought it would be a great opportunity to evaluate <a href="https://hotchocolate.io/">Hot Chocolate</a> by <a href="https://chillicream.com/">Chilli Cream</a>; a relative newcomer to the (very sparse) GraphQL for .NET scene. In this post I'll also be using <a href="https://github.com/RicoSuter/NSwag">NSwag</a> to generate <a href="https://www.openapis.org/">OpenAPI documents</a> and <a href="https://docs.microsoft.com/en-us/dotnet/architecture/microservices/implement-resilient-applications/use-httpclientfactory-to-implement-resilient-http-requests#how-to-use-typed-clients-with-httpclientfactory">Typed Clients</a> for downstream services and, finally, I will be containerizing the microservices using <a href="https://www.docker.com/">Docker</a> and employing <a href="https://docs.docker.com/compose/">Docker Compose</a> to run and test them.</p>
<h2 id="contents">Contents</h2>
<ol>
<li><a href="#requirements">Requirements</a>
<ol>
<li><a href="#solution">Solution</a></li>
<li><a href="#environment">Environment</a></li>
</ol>
</li>
<li><a href="#about-hot-chocolate">About Hot Chocolate</a></li>
<li><a href="#solution-structure">Solution Structure</a></li>
<li><a href="#rest-services">ReST Services</a>
<ol>
<li><a href="#cheeze.store">Cheeze.Store</a></li>
<li><a href="#cheeze.inventory">Cheeze.Inventory</a></li>
<li><a href="#providing-swagger-endpoints">Providing Swagger Endpoints</a></li>
<li><a href="#generating-typed-clients">Generating Typed Clients</a></li>
</ol>
</li>
<li><a href="#graphql-service">GraphQL Service</a>
<ol>
<li><a href="#object-model">Object Model</a></li>
<li><a href="#schema-resolvers">Schema & Resolvers</a></li>
<li><a href="#service-binding-configuration">Service Binding & Configuration</a></li>
</ol>
</li>
<li><a href="#containerization">Containerization</a>
<ol>
<li><a href="#docker-support">Docker Support</a></li>
<li><a href="#container-orchestration-support">Container Orchestration Support</a></li>
</ol>
</li>
<li><a href="#testing">Testing</a></li>
<li><a href="#conclusion">Conclusion</a></li>
</ol>
<h2 id="requirements">Requirements</h2>
<h3 id="solution">Solution</h3>
<p>The requirements for the test environment were pretty simple:</p>
<ul>
<li>A .NET Core web service which, when called, fetches and collates data from two other .NET Core web services. As (conditionally) aggregating data from multiple sources is one of GraphQL's primary use cases, I decided a GraphQL endpoint would make for a great entry point into this flow.</li>
<li>Avoid any tight coupling between the GraphQL endpoint and the underlying web-services yet provide strong compile-time guarantees of cohesion with these services.</li>
<li>A simple build/deployment/debug loop.</li>
<li>Embrace 'modern' methodologies; for example asynchronous controller actions and <a href="https://docs.microsoft.com/en-us/dotnet/csharp/nullable-references">Nullable Reference Types</a></li>
</ul>
<h3 id="environment">Environment</h3>
<p>To follow the steps below you will need:</p>
<ul>
<li><a href="https://dotnet.microsoft.com/download/dotnet-core/3.1">.Net Core 3.1 SDK</a></li>
<li>Powershell (I'd recommend the new <a href="https://www.microsoft.com/en-us/p/windows-terminal-preview/9n0dx20hk701?activetab=pivot:overviewtab">Windows Terminal</a>)</li>
<li>A text editor (<a href="https://code.visualstudio.com/Download">VSCode</a> perhaps?)</li>
<li><a href="https://docs.docker.com/docker-for-windows/">Docker for Windows</a></li>
</ul>
<h2 id="about-hot-chocolate">About Hot Chocolate</h2>
<p>I have only just started using Hot Chocolate but really like it. It allows code-first schema modelling using basic POCO classes, leaving all the GraphQL magic to be implemented using a neat fluent syntax rooted in a <a href="https://hotchocolate.io/docs/schema"><code>SchemaBuilder</code></a> class. While this post is most certainly aimed at GraphQL beginners, you may glean some additional information about Hot Chocolate from their <a href="https://hotchocolate.io/docs/introduction.html">"Quick Start"</a> or by watching <a href="https://www.youtube.com/watch?v=Lr6qyoAT8k4">any</a> <a href="https://www.youtube.com/watch?v=2QLhcqFYRpg">one</a> of the <a href="https://www.youtube.com/watch?v=q-5MUqLAEFs">many talks</a> by Michael Staib on its use.</p>
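<p>For the uninitiated, the code-first style looks roughly like this (a sketch in the Hot Chocolate 10.x style; the <code>Query</code> class and its field are purely illustrative):</p>
<pre><code class="language-cs">// A plain POCO whose public members become fields on the schema.
public class Query
{
    public string Hello() => "world";
}

// The schema is assembled via the fluent SchemaBuilder API.
var schema = SchemaBuilder.New()
    .AddQueryType<Query>()
    .Create();
</code></pre>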
<p>Now, if you do watch/have seen any of the videos here, you will notice that <a href="https://hotchocolate.io/docs/stitching">Schema Stitching</a> is mentioned numerous times. In fact, in a couple of videos it is discussed specifically in relation to "stitching" ReST services into a GraphQL schema (along with other GraphQL schemas). This sounded fantastic and was certainly a desired use case when I started using Hot Chocolate. Unfortunately, there is zero documentation or guidance on how this can be achieved at the current time so the project that follows uses basic <a href="https://hotchocolate.io/docs/resolvers">resolvers</a> to fetch data from ReST services and AutoMapper to map between schemas.</p>
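<p>To give a flavour of the resolver-based approach (a sketch only - the <code>IInventoryClient</code> typed client and the field name are illustrative, and the descriptor API shown follows Hot Chocolate 10.x conventions which may differ in your version):</p>
<pre><code class="language-cs">// Adds an "available" field to the Cheese type which is resolved
// by calling the Inventory ReST service via a typed client
// registered with the DI container.
public class CheeseType : ObjectType<Cheese>
{
    protected override void Configure(IObjectTypeDescriptor<Cheese> descriptor)
    {
        descriptor.Field("available")
            .Type<IntType>()
            .Resolver(async context =>
            {
                var client = context.Service<IInventoryClient>();
                return await client.GetAsync(context.Parent<Cheese>().Id);
            });
    }
}
</code></pre>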
<p>Before getting set up, be sure to install Hot Chocolate's template into the dotnet CLI as follows:</p>
<pre><code class="language-powershell">dotnet new -i HotChocolate.Templates.Server
</code></pre>
<h2 id="solution-structure">Solution Structure</h2>
<p>Here's how I set up my solution:</p>
<pre><code class="language-powershell"># Create directories and initialize git
mkdir Cheeze
cd Cheeze
git init
mkdir src
cd src
# Create projects and remove superfluous files
dotnet new graphql -n Cheeze.Graph
dotnet new webapi -n Cheeze.Store
dotnet new classlib -n Cheeze.Store.Client
rm .\Cheeze.Store.Client\Class1.cs
dotnet new webapi -n Cheeze.Inventory
dotnet new classlib -n Cheeze.Inventory.Client
rm .\Cheeze.Inventory.Client\Class1.cs
# Create solution for ease of use from VS
dotnet new sln -n Cheeze
dotnet sln add .\Cheeze.Graph\Cheeze.Graph.csproj
dotnet sln add .\Cheeze.Store\Cheeze.Store.csproj
dotnet sln add .\Cheeze.Store.Client\Cheeze.Store.Client.csproj
dotnet sln add .\Cheeze.Inventory\Cheeze.Inventory.csproj
dotnet sln add .\Cheeze.Inventory.Client\Cheeze.Inventory.Client.csproj
# Add project references
dotnet add .\Cheeze.Graph\Cheeze.Graph.csproj reference .\Cheeze.Store.Client\Cheeze.Store.Client.csproj
dotnet add .\Cheeze.Graph\Cheeze.Graph.csproj reference .\Cheeze.Inventory.Client\Cheeze.Inventory.Client.csproj
</code></pre>
<p>Unfortunately, if we do a <code>dotnet build</code> now we'll see a couple of errors due to <a href="https://github.com/ChilliCream/hotchocolate/issues/1329">a bug</a> in the Hot Chocolate server template which fails to add the HotChocolate namespace to the list of using statements in <code>Startup.cs</code>. This can be resolved with the following command:</p>
<pre><code class="language-powershell">@(Get-Content .\Cheeze.Graph\Startup.cs)[0..2] + "using HotChocolate;" + @(Get-Content .\Cheeze.Graph\Startup.cs)[3..44] | Set-Content .\Cheeze.Graph\Startup.cs
</code></pre>
<p>Everything should now build correctly.</p>
<h2 id="rest-services">ReST Services</h2>
<p>We'll start by building out our ReST services.</p>
<blockquote class="blockquote">
<p>Note: These ReST services simply return static (and somewhat bare) data as that's all they need to be for my test environment. As such, there is no persistence layer implemented and much of the schema for each type is unused.</p>
</blockquote>
<p>The two services are as follows:</p>
<h3 id="cheeze.store">Cheeze.Store</h3>
<p>This web API will provide a full list of all cheeses available through the store along with descriptions and image URLs. It will (for simplicity) have a single endpoint which allows a consumer to retrieve all available cheeses.</p>
<p>To set this up, do the following:</p>
<ol>
<li>Delete the <code>Controllers</code> folder - We're a microservice and will be providing a single endpoint so there's no need for plurality here.</li>
<li>Delete <code>WeatherForecast.cs</code></li>
<li>Add the following files:
<ol>
<li><p>Controller.cs</p>
<pre><code class="language-cs">using System;
using System.Collections.Generic;
using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
namespace Cheeze.Store
{
[Route("api/store")]
public class Controller : Microsoft.AspNetCore.Mvc.Controller
{
[HttpGet]
[ProducesResponseType(typeof(IEnumerable<Cheese>), (int)HttpStatusCode.OK)]
public Task<ActionResult<IEnumerable<Cheese>>> Get()
{
var result = new[]
{
new Cheese
{
Id = Guid.Parse("1468841a-5fbe-41c5-83b3-ab136b7ae70c"),
Name = "API Cheese"
}
};
return Task.FromResult<ActionResult<IEnumerable<Cheese>>>(Ok(result));
}
}
}
</code></pre>
</li>
<li><p>Cheese.cs</p>
<pre><code class="language-cs">using System;
using System.ComponentModel.DataAnnotations;
namespace Cheeze.Store
{
public class Cheese
{
public Guid Id { get; set; }
public Uri? Uri { get; set; }
[Required]
public string Name { get; set; } = string.Empty;
public string Description { get; set; } = string.Empty;
public decimal Price { get; set; }
}
}
</code></pre>
</li>
</ol>
</li>
</ol>
<h3 id="cheeze.inventory">Cheeze.Inventory</h3>
<p>This web API provides up-to-date inventory information for cheeses available through the store. It will have two endpoints which allow a consumer to get the availability of a specific cheese or a list of cheeses by id.</p>
<p>To set this up, do the following:</p>
<ol>
<li>Delete the <code>Controllers</code> folder - same as above</li>
<li>Delete <code>WeatherForecast.cs</code></li>
<li>Add the following files:
<ol>
<li><p>Controller.cs</p>
<pre><code class="language-cs">using Microsoft.AspNetCore.Mvc;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Threading.Tasks;
namespace Cheeze.Inventory
{
[Route("api/inventory")]
public class Controller : Microsoft.AspNetCore.Mvc.Controller
{
private static readonly Random Random = new Random();
[HttpGet("{id}")]
[ProducesResponseType(typeof(uint), (int)HttpStatusCode.OK)]
public Task<ActionResult<uint>> Get(Guid id)
{
return Task.FromResult<ActionResult<uint>>(Ok((uint)Random.Next(10)));
}
[HttpPost]
[ProducesResponseType(typeof(IEnumerable<Available>), (int)HttpStatusCode.OK)]
public Task<ActionResult<IEnumerable<Available>>> Post([FromBody] Request request)
{
var available = request.Ids
.Select(id => new Available { Id = id, Quantity = (uint)Random.Next(10) })
.ToArray();
return Task.FromResult<ActionResult<IEnumerable<Available>>>(Ok(available));
}
}
}
</code></pre>
</li>
<li><p>Request.cs</p>
<pre><code class="language-cs">using System;
using System.Collections.Generic;
using System.Linq;
namespace Cheeze.Inventory
{
public class Request
{
public IEnumerable<Guid> Ids { get; set; } = Enumerable.Empty<Guid>();
}
}
</code></pre>
</li>
<li><p>Available.cs</p>
<pre><code class="language-cs">using System;
namespace Cheeze.Inventory
{
public class Available
{
public Guid Id { get; set; }
public uint Quantity { get; set; }
}
}
</code></pre>
</li>
</ol>
</li>
</ol>
<h3 id="providing-swagger-endpoints">Providing Swagger Endpoints</h3>
<p>Both ReST services will provide a Swagger endpoint to facilitate their use. We're using <a href="https://github.com/RicoSuter/NSwag">'NSwag'</a> to generate these endpoints for each project as follows:</p>
<ol>
<li>Add the required packages to each project:</li>
</ol>
<pre><code class="language-powershell">dotnet add .\Cheeze.Store\Cheeze.Store.csproj package NSwag.AspNetCore
dotnet add .\Cheeze.Inventory\Cheeze.Inventory.csproj package NSwag.AspNetCore
</code></pre>
<ol start="2">
<li><p>In the <code>Startup.ConfigureServices</code> method, register the required Swagger services:</p>
<pre><code class="language-cs">public void ConfigureServices(IServiceCollection services)
{
services.AddControllers();
// Register the Swagger services
services.AddOpenApiDocument();
}
</code></pre>
</li>
<li><p>In the <code>Startup.Configure</code> method, enable the middleware for serving the generated Swagger specification and the Swagger UI:</p>
<pre><code class="language-cs">public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
// Remove HTTP->HTTPS redirection for simplified hosting in Docker
//app.UseHttpsRedirection();
app.UseRouting();
// Register the Swagger generator and the Swagger UI middlewares
app.UseOpenApi();
app.UseSwaggerUi3();
app.UseAuthorization();
app.UseEndpoints(endpoints =>
{
endpoints.MapControllers();
});
}
</code></pre>
</li>
</ol>
<p>Build the solution to restore all dependencies:</p>
<pre><code class="language-powershell">dotnet build
</code></pre>
<p>If you now build and run either project you should now be able to navigate to the swagger endpoint UI. For example:</p>
<pre><code class="language-powershell">dotnet run --project .\Cheeze.Store\Cheeze.Store.csproj
start "microsoft-edge:http://localhost:5000/swagger"
</code></pre>
<h3 id="generating-typed-clients">Generating Typed Clients</h3>
<p>We're now going to use NSwag's MSBuild package to generate a <a href="https://docs.microsoft.com/en-us/dotnet/architecture/microservices/implement-resilient-applications/use-httpclientfactory-to-implement-resilient-http-requests#how-to-use-typed-clients-with-httpclientfactory">Typed Client</a> for each project at build time. To do this:</p>
<ol>
<li><p>Install the required packages</p>
<pre><code class="language-powershell">dotnet add .\Cheeze.Store\Cheeze.Store.csproj package NSwag.MSBuild
dotnet add .\Cheeze.Inventory\Cheeze.Inventory.csproj package NSwag.MSBuild
</code></pre>
</li>
<li><p>Build project to restore packages</p>
</li>
<li><p>Edit the project file to enable Nullable Reference Types and include all assemblies on build:</p>
<pre><code class="language-csproj"><Project Sdk="Microsoft.NET.Sdk.Web">
<PropertyGroup>
<TargetFramework>netcoreapp3.1</TargetFramework>
<Nullable>enable</Nullable> <!-- Add this line -->
<CopyLocalLockFileAssemblies>true</CopyLocalLockFileAssemblies> <!-- And this line -->
</PropertyGroup>
...
</Project>
</code></pre>
</li>
<li><p>Generate an NSwag configuration file</p>
<p>Building the solution after adding the <code>NSwag.MSBuild</code> package should have added the NSwag tools to your nuget package cache (usually in the following directory: <code>%userprofile%\.nuget\packages\nswag.msbuild\13.2.0\tools\NetCore31</code>). Using these build tools we can generate the required configuration file for each project with the following command:</p>
<pre><code class="language-powershell">cd .\Cheeze.Inventory
~\.nuget\packages\nswag.msbuild\13.2.0\tools\NetCore31\dotnet-nswag.exe new
cd ..\Cheeze.Store
~\.nuget\packages\nswag.msbuild\13.2.0\tools\NetCore31\dotnet-nswag.exe new
cd ..
</code></pre>
<p>Now we need to replace sections of the generated configuration file with populated values. In each of the files, do the following:</p>
<ol>
<li><p>Set the <code>runtime</code> version:</p>
<pre><code class="language-json">{
"runtime": "NetCore31",
...
}
</code></pre>
</li>
<li><p>Modify the <code>documentGenerator</code> section to generate an OpenAPI document from the generated web assembly. Do this by replacing the <code>documentGenerator</code> section with the following (being sure to set the <code>controllerNames</code>, <code>defaultUrlTemplate</code> and <code>assemblyPaths</code> values correctly for each project):</p>
<pre><code class="language-json">{
...
"documentGenerator": {
"webApiToOpenApi": {
"controllerNames": [
"Cheeze.Store.Controller"
],
"isAspNetCore": true,
"resolveJsonOptions": false,
"defaultUrlTemplate": "api/store",
"addMissingPathParameters": false,
"includedVersions": null,
"defaultPropertyNameHandling": "Default",
"defaultReferenceTypeNullHandling": "Null",
"defaultDictionaryValueReferenceTypeNullHandling": "NotNull",
"defaultResponseReferenceTypeNullHandling": "NotNull",
"defaultEnumHandling": "Integer",
"flattenInheritanceHierarchy": false,
"generateKnownTypes": true,
"generateEnumMappingDescription": false,
"generateXmlObjects": false,
"generateAbstractProperties": false,
"generateAbstractSchemas": true,
"ignoreObsoleteProperties": false,
"allowReferencesWithProperties": false,
"excludedTypeNames": [],
"serviceHost": null,
"serviceBasePath": null,
"serviceSchemes": [],
"infoTitle": "My Title",
"infoDescription": null,
"infoVersion": "1.0.0",
"documentTemplate": null,
"documentProcessorTypes": [],
"operationProcessorTypes": [],
"typeNameGeneratorType": null,
"schemaNameGeneratorType": null,
"contractResolverType": null,
"serializerSettingsType": null,
"useDocumentProvider": true,
"documentName": "v1",
"aspNetCoreEnvironment": null,
"createWebHostBuilderMethod": null,
"startupType": null,
"allowNullableBodyParameters": true,
"output": null,
"outputType": "Swagger2",
"assemblyPaths": [
"bin/$(Configuration)/netcoreapp3.1/Cheeze.Store.dll"
],
"assemblyConfig": null,
"referencePaths": [],
"useNuGetCache": true
}
},
...
}
</code></pre>
</li>
<li><p>Remove the <code>openApiToTypeScriptClient</code> and <code>openApiToCSharpController</code> sections from within the <code>codeGenerators</code> section of each file.</p>
</li>
<li><p>Modify the <code>openApiToCSharpClient</code> section to generate C# typed clients from the OpenAPI document. Do this by replacing the <code>openApiToCSharpClient</code> section with the following (making sure to set <code>className</code>, <code>namespace</code> and <code>output</code> to the correct values for each project):</p>
<pre><code class="language-json">{
...
"codeGenerators": {
"openApiToCSharpClient": {
"clientBaseClass": null,
"configurationClass": null,
"generateClientClasses": true,
"generateClientInterfaces": true,
"injectHttpClient": true,
"disposeHttpClient": true,
"protectedMethods": [],
"generateExceptionClasses": true,
"exceptionClass": "ApiException",
"wrapDtoExceptions": true,
"useHttpClientCreationMethod": false,
"httpClientType": "System.Net.Http.HttpClient",
"useHttpRequestMessageCreationMethod": false,
"useBaseUrl": false,
"generateBaseUrlProperty": false,
"generateSyncMethods": false,
"exposeJsonSerializerSettings": false,
"clientClassAccessModifier": "public",
"typeAccessModifier": "public",
"generateContractsOutput": false,
"contractsNamespace": null,
"contractsOutputFilePath": null,
"parameterDateTimeFormat": "s",
"parameterDateFormat": "yyyy-MM-dd",
"generateUpdateJsonSerializerSettingsMethod": true,
"serializeTypeInformation": false,
"queryNullValue": "",
"className": "StoreClient",
"operationGenerationMode": "MultipleClientsFromOperationId",
"additionalNamespaceUsages": [],
"additionalContractNamespaceUsages": [],
"generateOptionalParameters": false,
"generateJsonMethods": false,
"enforceFlagEnums": false,
"parameterArrayType": "System.Collections.Generic.IEnumerable",
"parameterDictionaryType": "System.Collections.Generic.IDictionary",
"responseArrayType": "System.Collections.Generic.ICollection",
"responseDictionaryType": "System.Collections.Generic.IDictionary",
"wrapResponses": false,
"wrapResponseMethods": [],
"generateResponseClasses": true,
"responseClass": "SwaggerResponse",
"namespace": "Cheeze.Store.Client",
"requiredPropertiesMustBeDefined": true,
"dateType": "System.DateTimeOffset",
"jsonConverters": null,
"anyType": "object",
"dateTimeType": "System.DateTimeOffset",
"timeType": "System.TimeSpan",
"timeSpanType": "System.TimeSpan",
"arrayType": "System.Collections.Generic.ICollection",
"arrayInstanceType": "System.Collections.ObjectModel.Collection",
"dictionaryType": "System.Collections.Generic.IDictionary",
"dictionaryInstanceType": "System.Collections.Generic.Dictionary",
"arrayBaseType": "System.Collections.ObjectModel.Collection",
"dictionaryBaseType": "System.Collections.Generic.Dictionary",
"classStyle": "Poco",
"generateDefaultValues": true,
"generateDataAnnotations": true,
"excludedTypeNames": [],
"excludedParameterNames": [],
"handleReferences": false,
"generateImmutableArrayProperties": false,
"generateImmutableDictionaryProperties": false,
"jsonSerializerSettingsTransformationMethod": null,
"inlineNamedArrays": false,
"inlineNamedDictionaries": false,
"inlineNamedTuples": true,
"inlineNamedAny": false,
"generateDtoTypes": true,
"generateOptionalPropertiesAsNullable": false,
"templateDirectory": null,
"typeNameGeneratorType": null,
"propertyNameGeneratorType": null,
"enumNameGeneratorType": null,
"serviceHost": null,
"serviceSchemes": null,
"output": "$(Target)/StoreClient.Generated.cs"
}
}
}
</code></pre>
</li>
</ol>
</li>
<li><p>Edit the project file to use the configuration file to generate the typed client for each project (replacing <code>[PROJECT_NAME]</code> with <code>Cheeze.Store.Client</code> or <code>Cheeze.Inventory.Client</code> as appropriate):</p>
<pre><code class="language-csproj"><Project Sdk="Microsoft.NET.Sdk.Web">
...
<Target Name="NSwag" AfterTargets="Build">
<Copy SourceFiles="@(ReferencePath)" DestinationFolder="$(OutDir)References" />
<Exec Condition="'$(NSwag)'=='true'" Command="$(NSwagExe_Core31) run nswag.json /variables:Configuration=$(Configuration),OutDir=$(OutDir),Target=$(SolutionDir)[PROJECT_NAME]" />
<RemoveDir Directories="$(OutDir)References" />
</Target>
...
</Project>
</code></pre>
</li>
<li><p>Add a <code>build.ps1</code> file to the <code>src</code> directory containing:</p>
<pre><code class="language-powershell">$solutionDir = Get-Location
dotnet build .\Cheeze.Store\Cheeze.Store.csproj /p:NSwag=true /p:SolutionDir=$solutionDir
dotnet build .\Cheeze.Inventory\Cheeze.Inventory.csproj /p:NSwag=true /p:SolutionDir=$solutionDir
dotnet build .\Cheeze.Store.Client\Cheeze.Store.Client.csproj
dotnet build .\Cheeze.Inventory.Client\Cheeze.Inventory.Client.csproj
dotnet build .\Cheeze.Graph\Cheeze.Graph.csproj
</code></pre>
<p>The build script is required to ensure projects are built in the correct order and that we don't try to regenerate our typed clients while containerizing our projects (see below).</p>
</li>
<li><p>Add the <code>Newtonsoft.Json</code> and <code>System.ComponentModel.Annotations</code> packages to the client projects:</p>
<pre><code class="language-powershell">dotnet add .\Cheeze.Store.Client\Cheeze.Store.Client.csproj package Newtonsoft.Json
dotnet add .\Cheeze.Store.Client\Cheeze.Store.Client.csproj package System.ComponentModel.Annotations
dotnet add .\Cheeze.Inventory.Client\Cheeze.Inventory.Client.csproj package Newtonsoft.Json
dotnet add .\Cheeze.Inventory.Client\Cheeze.Inventory.Client.csproj package System.ComponentModel.Annotations
</code></pre>
</li>
<li><p>Build!</p>
<pre><code class="language-powershell">.\build.ps1
</code></pre>
<p>If all the above is correct, we should have a successful build and see that <code>StoreClient.Generated.cs</code> and <code>InventoryClient.Generated.cs</code> appear in the <code>Cheeze.Store.Client</code> and <code>Cheeze.Inventory.Client</code> directories respectively.</p>
</li>
</ol>
<h2 id="graphql-service">GraphQL Service</h2>
<p>Finally we can get around to implementing our GraphQL service. We'll undertake the following steps to get this service running as expected:</p>
<ol>
<li>Create an object model of our DTOs and Graph Query as POCO objects</li>
<li>Build a GraphQL schema from these objects using the SchemaBuilder</li>
<li>Configure the .NET Core host to correctly run the GraphQL Service</li>
</ol>
<p>First, however, as we're not currently able to use Schema Stitching, we need to perform mapping between the <code>Cheeze.Store</code> and <code>Cheeze.Graph</code> schemas ourselves. To facilitate this we're going to use <a href="https://automapper.org/">AutoMapper</a>, so we need to add the package to <code>Cheeze.Graph</code> using:</p>
<pre><code class="language-powershell">dotnet add .\Cheeze.Graph\Cheeze.Graph.csproj package AutoMapper
</code></pre>
<h3 id="object-model">Object Model</h3>
<p>Add a <code>Cheese.cs</code> to <code>Cheeze.Graph</code> with the following content:</p>
<pre><code class="language-cs">using System;
namespace Cheeze.Graph
{
public class Cheese
{
public Guid Id { get; set; }
public Uri? Uri { get; set; }
public string Name { get; set; } = string.Empty;
public string Description { get; set; } = string.Empty;
public decimal Price { get; set; }
public int Available { get; set; }
}
}
</code></pre>
<p>There are two things to note here:</p>
<p>Firstly, the Cheese type is very similar - <strong>but not identical</strong> - to the Cheese type declared in <code>Cheeze.Store</code>. Crucially this Cheese type has an Available property which is not in the data provided by <code>Cheeze.Store</code> and instead will be populated by dependent calls to <code>Cheeze.Inventory</code>.</p>
<p>Secondly, this type does not implement any behaviour; it merely declares the shape (i.e. schema) of the data that can be provided by this service. All GraphQL functionality is provided via the SchemaBuilder and associated Resolvers, as seen below.</p>
<h3 id="schema-resolvers">Schema & Resolvers</h3>
<p>Add a <code>Schema.cs</code> file to <code>Cheeze.Graph</code> with the following content:</p>
<pre><code class="language-cs">using AutoMapper;
using HotChocolate;
using HotChocolate.Resolvers;
using HotChocolate.Types;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
namespace Cheeze.Graph
{
public static class Schema
{
private static readonly IMapper Mapper;
static Schema()
{
var mapping = new MapperConfiguration(
configuration =>
{
configuration.CreateMap<Cheeze.Store.Client.Cheese, Cheese>()
.ForMember(cheese => cheese.Available, options => options.Ignore());
}
);
Mapper = mapping.CreateMapper();
}
private static async Task<IReadOnlyDictionary<Guid, int>> FetchInventory(this Cheeze.Inventory.Client.IInventoryClient inventoryClient, IReadOnlyList<Guid> cheeses)
{
var response = await inventoryClient.PostAsync(new Cheeze.Inventory.Client.Request { Ids = cheeses.ToArray() });
return cheeses
.GroupJoin(response, id => id, available => available.Id, (id, available) => (Id: id, Available: available.Select(a => a.Quantity).FirstOrDefault()))
.ToDictionary(tuple => tuple.Id, tuple => tuple.Available);
}
private static async Task<int> ResolveInventory(this IResolverContext context)
{
var dataLoader = context.BatchDataLoader<Guid, int>(
"availableById",
context.Service<Cheeze.Inventory.Client.IInventoryClient>().FetchInventory);
return await dataLoader.LoadAsync(context.Parent<Cheese>().Id, context.RequestAborted);
}
private static async Task<IEnumerable<Cheese>> ResolveCheeses(this IResolverContext context)
{
var results = await context.Service<Cheeze.Store.Client.IStoreClient>().GetAsync();
return results.Select(source => Mapper.Map<Cheeze.Store.Client.Cheese, Cheese>(source));
}
public static ISchemaBuilder Build()
{
return SchemaBuilder.New()
.AddQueryType(
typeDescriptor => typeDescriptor
.Field("Cheese")
.Resolver(context => context.ResolveCheeses()))
.AddObjectType<Cheese>(
cheese => cheese
.Field(f => f.Available)
.Resolver(context => context.ResolveInventory()))
.ModifyOptions(o => o.RemoveUnreachableTypes = true);
}
}
}
</code></pre>
<p>Amazingly, this single class implements <strong>all</strong> the functionality needed to provide a GraphQL-compliant endpoint in ~70 SLoC. There is rather a lot going on though, so let's break it down, starting with the public static method <code>Build</code>.</p>
<p>The <code>Build</code> method uses (and returns) a <code>SchemaBuilder</code> to define the schema that will be presented through the GraphQL endpoint. This comprises two main elements: the <code>QueryType</code> - provided by the <code>.AddQueryType()</code> fluent method - and the <code>Cheese</code> object type - provided by the <code>.AddObjectType<Cheese>()</code> fluent method. We'll dig into each of these here.</p>
<p>The <code>AddQueryType</code> defines the types of queries that can be executed by this GraphQL endpoint in a purely code-first manner. The code above adds a field <code>Cheese</code> which, when used in the query, uses the <code>ResolveCheeses()</code> extension method to provide data for the query. The <code>ResolveCheeses()</code> extension method uses the <code>IResolverContext</code> to retrieve the typed client for the <code>Cheeze.Store</code> ReST endpoint and calls the <code>GetAsync()</code> method on it. Finally, AutoMapper is used to map between the <code>Cheeze.Store.Client.Cheese</code> and <code>Cheeze.Graph.Cheese</code> types, specifically ignoring the <code>Available</code> property of <code>Cheeze.Graph.Cheese</code>.</p>
<p>Similarly, the <code>AddObjectType<Cheese></code> method intercepts objects of type <code>Cheese</code> and uses the <code>ResolveInventory()</code> extension method to populate the <code>Available</code> property. This time, however, a <code>BatchDataLoader</code> is used from within the extension method to neatly avoid the <a href="https://itnext.io/what-is-the-n-1-problem-in-graphql-dd4921cb3c1a">N+1 problem</a>.</p>
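<p>The effect of that batching can be illustrated with a short, language-agnostic sketch (Python here; the ids and stock levels are purely hypothetical): rather than issuing one inventory call per cheese, the loader gathers the keys requested during a query and resolves them with a single batched call, defaulting to zero for any id missing from the batch response - mirroring the <code>GroupJoin</code>/<code>FirstOrDefault</code> logic in <code>FetchInventory</code>.</p>

```python
# Sketch of the batching behaviour a BatchDataLoader provides; the
# stock levels and ids below are hypothetical illustration data.

def fetch_inventory(ids):
    """Stands in for a single HTTP call to the inventory service."""
    stock = {"brie": 4, "stilton": 9}  # hypothetical batch response
    # Ids missing from the response default to 0, as FirstOrDefault()
    # does over an empty group join.
    return {i: stock.get(i, 0) for i in ids}

class BatchLoader:
    def __init__(self, fetch):
        self.fetch = fetch
        self.calls = 0  # counts round-trips to the backing service

    def load_many(self, keys):
        # All keys requested during the query are resolved together,
        # turning N per-item calls into one batched call (no N+1).
        self.calls += 1
        return self.fetch(sorted(set(keys)))

loader = BatchLoader(fetch_inventory)
available = loader.load_many(["brie", "stilton", "wensleydale"])
# One round-trip resolved all three keys; the unknown id fell back to 0.
```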
<h3 id="service-binding-configuration">Service Binding & Configuration</h3>
<p>Finally we need to bind required service and configuration types so, again in <code>Cheeze.Graph</code> add the following:</p>
<ol>
<li><p>A <code>Configuration.cs</code> file in an <code>Inventory</code> folder containing:</p>
<pre><code class="language-cs">using System;
namespace Cheeze.Graph.Inventory
{
public class Configuration
{
public Uri BaseAddress { get; set; } = new Uri("https://inventory");
}
}
</code></pre>
</li>
<li><p>A <code>Configuration.cs</code> file in a <code>Store</code> folder containing:</p>
<pre><code class="language-cs">using System;
namespace Cheeze.Graph.Store
{
public class Configuration
{
public Uri BaseAddress { get; set; } = new Uri("https://store");
}
}
</code></pre>
</li>
<li><p>In <code>Program.cs</code> refactor <code>CreateWebHostBuilder</code> method to the following:</p>
<pre><code class="language-cs">public static IWebHostBuilder CreateWebHostBuilder(string[] args)
{
return WebHost
.CreateDefaultBuilder(args)
.ConfigureAppConfiguration((hostingContext, config) => config.AddEnvironmentVariables("Cheeze:Graph:"))
.ConfigureServices(
(hostContext, services) =>
{
services.AddOptions<Store.Configuration>().Bind(hostContext.Configuration.GetSection("Store"));
services.AddOptions<Inventory.Configuration>().Bind(hostContext.Configuration.GetSection("Inventory"));
})
.UseStartup<Startup>();
}
</code></pre>
<p>And add the two required usings:</p>
<pre><code class="language-cs">using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
</code></pre>
<p>Here we're adding configuration from Environment Variables (prefixed with <code>Cheeze:Graph</code>) to our application and binding this configuration to the types added above.</p>
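<p>As a concrete illustration of this convention (a sketch, not .NET's actual implementation): the prefix is stripped from each matching variable and, because <code>:</code> isn't always usable in environment variable names, <code>__</code> is treated as the section separator. The variable names below are the ones we'll set in <code>docker-compose.yml</code> later:</p>

```python
# Sketch of the .NET environment-variable configuration convention:
# "__" stands in for the ":" section separator, and only variables
# matching the prefix are bound. Not the actual framework code.

def to_config_key(variable, prefix="Cheeze:Graph:"):
    key = variable.replace("__", ":")
    if not key.startswith(prefix):
        return None  # ignored by this configuration source
    return key[len(prefix):]

store_key = to_config_key("Cheeze__Graph__Store__BaseAddress")
# store_key == "Store:BaseAddress", i.e. the BaseAddress property of
# the section bound via GetSection("Store") above.
```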
</li>
<li><p>In <code>Startup.cs</code> refactor the <code>ConfigureServices</code> method to the following:</p>
<pre><code class="language-cs">public void ConfigureServices(IServiceCollection services)
{
services.AddHttpClient<Cheeze.Store.Client.IStoreClient, Cheeze.Store.Client.StoreClient>(
(serviceProvider, httpClient) => httpClient.BaseAddress = serviceProvider.GetRequiredService<IOptions<Store.Configuration>>().Value.BaseAddress
);
services.AddHttpClient<Cheeze.Inventory.Client.IInventoryClient, Cheeze.Inventory.Client.InventoryClient>(
(serviceProvider, httpClient) => httpClient.BaseAddress = serviceProvider.GetRequiredService<IOptions<Inventory.Configuration>>().Value.BaseAddress
);
// this enables you to use DataLoader in your resolvers.
services.AddDataLoaderRegistry();
// Add GraphQL Services
services.AddGraphQL(Schema.Build());
}
</code></pre>
<p>And again add the required using:</p>
<pre><code class="language-cs">using Microsoft.Extensions.Options;
</code></pre>
<p>Here we're binding the typed clients for <code>Cheeze.Store</code> and <code>Cheeze.Inventory</code> and ensuring they're configured with the appropriate base addresses. Finally we're using the <code>Schema.Build()</code> method to provide the GraphQL schema to the <code>services.AddGraphQL()</code> method.</p>
</li>
</ol>
<p>And that - as they say - is that. If we run our build script now, we should find everything builds successfully.</p>
<h2 id="containerization">Containerization</h2>
<p>Now, rather than configuring and spinning up all the services manually, we'll simplify our debug/deploy loop by containerizing our services and using <a href="https://docs.docker.com/compose/">Docker Compose</a> to do the job for us. From Visual Studio this would be a simple case of using the "Add > Docker Support" and "Add > Container Orchestration Support" options from the "Solution Explorer", as described <a href="https://docs.microsoft.com/en-us/aspnet/core/host-and-deploy/docker/visual-studio-tools-for-docker?view=aspnetcore-3.1">here</a>. However, as we've so far done pretty much everything from the command-line, let's continue in that vein.</p>
<h3 id="docker-support">Docker Support</h3>
<p>First we'll add Docker support to each of the top-level projects by using the standard multi-stage Dockerfile template. I was unable to find an official source for this template so I uploaded a version to my <a href="https://github.com/ibebbs/DotnetCliDocker">DotnetCliDocker</a> repository, which we'll be using here.</p>
<pre><code class="language-powershell">@(Invoke-WebRequest "https://raw.githubusercontent.com/ibebbs/DotnetCliDocker/master/Dockerfile3_1" | Select-Object -ExpandProperty Content) -replace "\(ProjectName\)","Cheeze.Graph" | Set-Content .\Cheeze.Graph\Dockerfile
@(Invoke-WebRequest "https://raw.githubusercontent.com/ibebbs/DotnetCliDocker/master/Dockerfile3_1" | Select-Object -ExpandProperty Content) -replace "\(ProjectName\)","Cheeze.Store" | Set-Content .\Cheeze.Store\Dockerfile
@(Invoke-WebRequest "https://raw.githubusercontent.com/ibebbs/DotnetCliDocker/master/Dockerfile3_1" | Select-Object -ExpandProperty Content) -replace "\(ProjectName\)","Cheeze.Inventory" | Set-Content .\Cheeze.Inventory\Dockerfile
</code></pre>
<h3 id="container-orchestration-support">Container Orchestration Support</h3>
<p>Now let's add a couple of files so that we can use Docker Compose to run our microservice environment:</p>
<ol>
<li><p>Add a <code>docker-compose.yml</code> file to the <code>src</code> directory containing:</p>
<pre><code class="language-yaml">version: '3.4'
services:
cheeze.store:
image: ${DOCKER_REGISTRY-}cheezestore
build:
context: .
dockerfile: Cheeze.Store/Dockerfile
cheeze.inventory:
image: ${DOCKER_REGISTRY-}cheezeinventory
build:
context: .
dockerfile: Cheeze.Inventory/Dockerfile
cheeze.graph:
image: ${DOCKER_REGISTRY-}cheezegraph
build:
context: .
dockerfile: Cheeze.Graph/Dockerfile
ports:
- "8081:80"
environment:
- Cheeze__Graph__Store__BaseAddress=http://cheeze.store
- Cheeze__Graph__Inventory__BaseAddress=http://cheeze.inventory
depends_on:
- cheeze.store
- cheeze.inventory
</code></pre>
</li>
<li><p>Add a <code>.dockerignore</code> to the <code>src</code> directory by running:</p>
<pre><code class="language-powershell">@(Invoke-WebRequest "https://raw.githubusercontent.com/ibebbs/DotnetCliDocker/master/.dockerignore") | Set-Content .\.dockerignore
</code></pre>
</li>
<li><p>Build and run our containers</p>
<pre><code class="language-powershell">docker-compose build
</code></pre>
<p>This might take some time but should result in a successful build, after which you can run the containers using:</p>
<pre><code class="language-powershell">docker-compose up
</code></pre>
</li>
</ol>
<h2 id="testing">Testing</h2>
<p>With our composed containers running, open up a browser and navigate to <code>http://localhost:8081/playground</code>. You should see something like the following:</p>
<img src="/Content/LessReSTMoreHotChocolate/Playground.png" class="img-responsive" style="margin: auto; width:600px; margin-top: 6px; margin-bottom: 6px;" alt="GraphQL Playground">
<p>The two tabs on the right hand side of the screen - "Docs" & "Schema" - allow you to examine the GraphQL endpoint to determine the queries you can execute and the content the service is able to receive. As we've got very little data in our services, we'll just use a basic query to retrieve the data we've defined. In the left pane of the playground (underneath "# Write your query or mutation here") enter the following:</p>
<pre><code class="language-graphql">{
Cheese {
id,
name,
available
}
}
</code></pre>
<blockquote class="blockquote">
<p>Note: As you're typing this, you should see that auto-complete is available and extremely quick.</p>
</blockquote>
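<p>The playground is just a convenience; the same query can be issued by any HTTP client as a POST of a JSON document with a <code>query</code> field to the service's GraphQL endpoint. A sketch of building that payload (port 8081 comes from <code>docker-compose.yml</code>; the endpoint path depends on how the GraphQL middleware is configured, so the URL in the comment is an assumption):</p>

```python
import json

# A GraphQL request over HTTP is a JSON document with a "query" field.
query = """
{
  Cheese {
    id,
    name,
    available
  }
}
"""

payload = json.dumps({"query": query})

# Sent, for example, with curl (URL assumed from this post's setup):
#   curl -X POST http://localhost:8081/ \
#        -H "Content-Type: application/json" \
#        -d "$payload"
```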
<p>Finally, once the query is complete, click the "Play" button in the centre of the screen. If everything has compiled and built correctly, you should see the following in the right hand pane:</p>
<pre><code>{
"data": {
"Cheese": [
{
"id": "1468841a-5fbe-41c5-83b3-ab136b7ae70c",
"name": "API Cheese",
"available": 9
}
]
}
}
</code></pre>
<p>And there we go. We've successfully used GraphQL to integrate and intelligently query two independent ReST services. Nice!</p>
<h2 id="conclusion">Conclusion</h2>
<p>If you're hitting up against some of the limitations of ReST - particularly for mobile client applications - I would very much recommend taking a look at GraphQL, and <a href="https://chillicream.com/">ChilliCream's</a> Hot Chocolate library in particular. Hot Chocolate makes setting up a GraphQL endpoint incredibly easy, and its code-first capabilities allow you to concentrate on modelling a domain that works for you and your customers rather than on the GraphQL framework.</p>
<p>Hot Chocolate is under <strong>very</strong> heavy development, with fantastic new features being added at an amazing cadence (hopefully ReST based Schema Stitching will bubble to the top of ChilliCream's priority list soon). Furthermore, support for this library is excellent; in point of fact, while authoring this article I posted a question in their Slack workspace only to have it answered by Michael Staib himself just moments later, culminating in a discussion that lasted the better part of an hour.</p>
<p>ChilliCream also have a <a href="https://www.nuget.org/packages/StrawberryShake/11.0.0-preview.75">client-side library</a> for GraphQL called <a href="https://chillicream.com/blog/2019/11/25/strawberry-shake_2">"Strawberry Shake"</a>. While currently in alpha, it looks extremely promising for creating strongly-typed GraphQL clients as it will - apparently - provide <a href="https://grpc.io/blog/grpc-dotnet-build/">"protobuf style"</a> code generation for the client directly from a GraphQL service's schema.</p>
<p>Lastly, if you are authoring ReST endpoints, I would very much recommend considering <a href="https://docs.microsoft.com/en-us/aspnet/core/tutorials/getting-started-with-nswag?view=aspnetcore-3.1&tabs=visual-studio">NSwag</a> over <a href="https://docs.microsoft.com/en-us/aspnet/core/tutorials/getting-started-with-swashbuckle?view=aspnetcore-3.1&tabs=visual-studio">Swashbuckle</a>. For me, NSwag's integration is a bit nicer than Swashbuckle and has a greater focus on the OpenAPI toolchain. Furthermore NSwag's tooling is first class allowing you to generate OpenAPI documents and/or client side libraries (in a number of languages) using a variety of tools, not least of which being the MSBuild target we used here.</p>
<p>All code from this post can be found in my <a href="https://github.com/ibebbs/Cheeze">"Cheeze"</a> repository on GitHub.</p>
<p>As always, if you have any questions or comments on the above or would like to discuss any point further, please don't hesitate to contact me using any of the links below or from my <a href="https://ian.bebbs.co.uk/about">about page</a>.</p>
<p>A project I'm working on requires a microservice-like evaluation environment. A brief google revealed very little that would suffice so I decided to quickly knock up my own. At the same time, I thought it would be a great opportunity to evaluate <a href="https://hotchocolate.io/">Hot Chocolate</a> by <a href="https://chillicream.com/">Chilli Cream</a>; a relative newcomer to the (very sparse) GraphQL for .NET scene. In this post I'll also be using <a href="https://github.com/RicoSuter/NSwag">NSwag</a> to generate <a href="https://www.openapis.org/">OpenAPI documents</a> and <a href="https://docs.microsoft.com/en-us/dotnet/architecture/microservices/implement-resilient-applications/use-httpclientfactory-to-implement-resilient-http-requests#how-to-use-typed-clients-with-httpclientfactory">Typed Clients</a> for downstream services and, finally, I will be containerizing the microservices using <a href="https://www.docker.com/">Docker</a> and employing <a href="https://docs.docker.com/compose/">Docker Compose</a> to run and test them.</p>
<h1 id="network-booting-many-raspberry-pis"><a href="http://ian.bebbs.co.uk/posts/NetworkBootingManyRaspberryPis">Network Booting Many Raspberry Pis</a></h1>
<p><em>2020-01-02</em></p>
<h2 id="intro">Intro</h2>
<p>This is just a short post - mostly for my own benefit - on how to network boot multiple Raspberry Pis from an x86 Linux Server. While this has been covered <a href="https://hackaday.com/2019/11/11/network-booting-the-pi-4/">many</a> <a href="https://hackaday.com/2018/10/08/hack-my-house-running-raspberry-pi-without-an-sd-card/">times</a> in <a href="https://www.raspberrypi.org/documentation/hardware/raspberrypi/bootmodes/net_tutorial.md">other</a> <a href="https://github.com/raspberrypi/rpi-eeprom/blob/master/firmware/raspberry_pi4_network_boot_beta.md">posts</a> none of them worked for me "out of the box". Here's what does.</p>
<h2 id="infrastructure">Infrastructure</h2>
<p>I will be using the following components</p>
<ul>
<li>Hyper-V Virtual Machine running Raspberry Pi Desktop (aka Debian Buster with Raspberry Pi Desktop) downloaded from <a href="https://www.raspberrypi.org/downloads/raspberry-pi-desktop/">here</a> as the network boot server.</li>
<li>Multiple Raspberry Pi 3B+ (the non-plus Raspberry Pi 3B requires <a href="https://www.raspberrypi.org/documentation/hardware/raspberrypi/bootmodes/net_tutorial.md">additional steps</a>) as network boot clients</li>
</ul>
<h2 id="requirements">Requirements</h2>
<p>The <a href="https://www.raspberrypi.org/documentation/hardware/raspberrypi/bootmodes/net_tutorial.md">official Raspberry Pi Network Boot instructions</a> assume you're using a Raspberry Pi as the network boot server and can therefore "copy" a Raspbian installation from an SD Card that has been installed on the network boot client Raspberry Pi. As I want to use a Linux server - running in a virtualised environment no less - I will be using additional steps from Hackaday's excellent article on <a href="https://hackaday.com/2019/11/11/network-booting-the-pi-4/">Network Booting The Pi 4</a>.</p>
<p>Additionally, while each of the network boot client Raspberry Pis will be running Raspbian Buster Lite, they will be used for different purposes so each must run a unique Raspbian installation.</p>
<h2 id="steps">Steps</h2>
<h3 id="network-boot-server">Network Boot Server</h3>
<ol>
<li><p>Create a virtual machine and install Debian Buster with Raspberry Pi Desktop. I will not cover instructions for doing this here as there are many virtualisation engines and the instructions for each would be different; suffice to say I used a Gen 1 Hyper-V instance on Windows Server 2016 with 4 virtual cores, 8GB of RAM and 64GB of disk space. Furthermore, after installation, I enabled SSH and used SSH to execute the following.</p>
</li>
<li><p>Install required software using the following command:</p>
<pre><code class="language-bash">sudo apt-get install unzip kpartx dnsmasq nfs-kernel-server
</code></pre>
</li>
<li><p>Make a directory to contain the first network boot client image:</p>
<pre><code>sudo mkdir -p /nfs/raspi1
</code></pre>
</li>
<li><p>Download and unzip the latest Raspbian Buster Lite image:</p>
<pre><code class="language-bash">wget https://downloads.raspberrypi.org/raspbian_lite/images/raspbian_lite-2019-09-30/2019-09-26-raspbian-buster-lite.zip
unzip 2019-09-26-raspbian-buster-lite.zip
</code></pre>
</li>
<li><p>Mount the Raspbian Buster Lite image to known locations:</p>
<pre><code class="language-bash">sudo kpartx -a -v 2019-09-26-raspbian-buster-lite.img
mkdir rootmnt
mkdir bootmnt
sudo mount /dev/mapper/loop0p2 rootmnt/
sudo mount /dev/mapper/loop0p1 bootmnt/
</code></pre>
</li>
<li><p>Copy the Raspbian Buster Lite image to the network boot client image directory created above:</p>
<pre><code class="language-bash">sudo cp -a rootmnt/* /nfs/raspi1/
sudo cp -a bootmnt/* /nfs/raspi1/boot/
</code></pre>
</li>
<li><p>Ensure the network boot client image doesn't attempt to look for filesystems on the SD Card:</p>
<pre><code class="language-bash">sudo sed -i /UUID/d /nfs/raspi1/etc/fstab
</code></pre>
</li>
<li><p>Replace the boot command in the network boot client image to boot from a network share. Ensure you replace [IP Address] with the IP address of your network boot server (note the <code>modprobe.blacklist</code> is required to successfully boot the Raspberry Pi 3B+ as described <a href="https://raspberrypi.stackexchange.com/a/105886">here</a>):</p>
<pre><code class="language-bash">echo "console=serial0,115200 console=tty root=/dev/nfs nfsroot=[IP Address]:/nfs/raspi1,vers=3 rw ip=dhcp rootwait elevator=deadline modprobe.blacklist=bcm2835_v4l2" | sudo tee /nfs/raspi1/boot/cmdline.txt
</code></pre>
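<p>Each additional network boot client will need its own copy of this file pointing at its own NFS share, so the substitution is worth scripting. A sketch of generating the contents per client (the server IP and share name are illustrative values, not requirements):</p>

```python
# Sketch: generate per-client cmdline.txt contents matching the echo
# command above. The IP address and share name are illustrative.
TEMPLATE = ("console=serial0,115200 console=tty root=/dev/nfs "
            "nfsroot={ip}:/nfs/{share},vers=3 rw ip=dhcp rootwait "
            "elevator=deadline modprobe.blacklist=bcm2835_v4l2")

def cmdline(ip, share):
    """Build the boot command line for one network boot client."""
    return TEMPLATE.format(ip=ip, share=share)

line = cmdline("192.168.1.102", "raspi1")
# line contains "nfsroot=192.168.1.102:/nfs/raspi1,vers=3"
```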
</li>
<li><p>Enable SSH in the network boot client image:</p>
<pre><code class="language-bash">sudo touch /nfs/raspi1/boot/ssh
</code></pre>
</li>
<li><p>Create a network share containing the network boot client image:</p>
<pre><code class="language-bash">echo "/nfs/raspi1 *(rw,sync,no_subtree_check,no_root_squash)" | sudo tee -a /etc/exports
</code></pre>
</li>
<li><p>Create a TFTP folder containing the boot code for all network boot clients:</p>
<pre><code class="language-bash">sudo mkdir /tftpboot
sudo cp /nfs/raspi1/boot/bootcode.bin /tftpboot/bootcode.bin
sudo chmod 777 /tftpboot
</code></pre>
</li>
<li><p>Enable and restart <code>rpcbind</code> and <code>nfs-kernel-server</code> services:</p>
<pre><code class="language-bash">sudo systemctl enable rpcbind
sudo systemctl enable nfs-kernel-server
sudo systemctl restart rpcbind
sudo systemctl restart nfs-kernel-server
</code></pre>
</li>
<li><p>Reconfigure <code>dnsmasq</code> to serve TFTP files only to Raspberry Pi instances, as described in the Hackaday article mentioned above:</p>
<blockquote class="blockquote">
<p>We need to add our settings to the dnsmasq config file, which is where most of the magic happens. Let’s talk about that “proxy” setting. What we’re asking dnsmasq to do is watch for DHCP requests, and rather than respond to those requests directly, wait for the primary DHCP server to assign an IP address. If dnsmasq sees a request for PXE information, it will send additional information to inform the PXE-capable device of the PXE server information. The upside is that this approach lets us support PXE booting without modifying the primary DHCP server.</p>
</blockquote>
<p>Be sure to replace [Broadcast Address] with the broadcast address for your network (use <code>ip address | grep brd</code> to find it):</p>
<pre><code class="language-bash">echo 'dhcp-range=[Broadcast Address],proxy' | sudo tee -a /etc/dnsmasq.conf
echo 'log-dhcp' | sudo tee -a /etc/dnsmasq.conf
echo 'enable-tftp' | sudo tee -a /etc/dnsmasq.conf
echo 'tftp-root=/tftpboot' | sudo tee -a /etc/dnsmasq.conf
echo 'pxe-service=0,"Raspberry Pi Boot"' | sudo tee -a /etc/dnsmasq.conf
</code></pre>
</li>
<li><p>Enable and restart the <code>dnsmasq</code> service:</p>
<pre><code class="language-bash">sudo systemctl enable dnsmasq
sudo systemctl restart dnsmasq
</code></pre>
</li>
<li><p>Find the serial number of the first network boot client:</p>
<ol>
<li><p>Tail <code>daemon.log</code> to watch for requests from the network boot client:</p>
<pre><code class="language-bash">sudo tail -f /var/log/daemon.log
</code></pre>
</li>
<li><p>Plug in a network cable and power cable to the first network boot client. After 10-30 seconds you should see output like this in the daemon.log:</p>
<blockquote class="blockquote">
<p>dnsmasq-dhcp[9460]: 653460281 available DHCP subnet: 192.168.1.255/255.255.255.0<br />
dnsmasq-dhcp[9460]: 653460281 vendor class: PXEClient:Arch:00000:UNDI:002001<br />
dnsmasq-dhcp[9460]: 653460281 PXE(eth0) b8:27:eb:ec:46:57 proxy<br />
dnsmasq-dhcp[9460]: 653460281 tags: eth0<br />
dnsmasq-dhcp[9460]: 653460281 broadcast response<br />
dnsmasq-dhcp[9460]: 653460281 sent size: 1 option: 53 message-type 2<br />
dnsmasq-dhcp[9460]: 653460281 sent size: 4 option: 54 server-identifier 192.168.1.102<br />
dnsmasq-dhcp[9460]: 653460281 sent size: 9 option: 60 vendor-class 50:58:45:43:6c:69:65:6e:74<br />
dnsmasq-dhcp[9460]: 653460281 sent size: 17 option: 97 client-machine-id 00:44:44:44:44:44:44:44:44:44:44:44:44:44...<br />
dnsmasq-dhcp[9460]: 653460281 sent size: 32 option: 43 vendor-encap 06:01:03:0a:04:00:50:58:45:09:14:00:00:11...<br />
dnsmasq-tftp[9460]: file /tftpboot/bootsig.bin not found<br />
dnsmasq-tftp[9460]: sent /tftpboot/bootcode.bin to 192.168.1.112<br />
dnsmasq-dhcp[9460]: 653460281 available DHCP subnet: 192.168.1.255/255.255.255.0<br />
dnsmasq-dhcp[9460]: 653460281 vendor class: PXEClient:Arch:00000:UNDI:002001<br />
dnsmasq-dhcp[9460]: 653460281 PXE(eth0) b8:27:eb:ec:46:57 proxy<br />
dnsmasq-dhcp[9460]: 653460281 tags: eth0<br />
dnsmasq-dhcp[9460]: 653460281 broadcast response<br />
dnsmasq-dhcp[9460]: 653460281 sent size: 1 option: 53 message-type 2<br />
dnsmasq-dhcp[9460]: 653460281 sent size: 4 option: 54 server-identifier 192.168.1.102<br />
dnsmasq-dhcp[9460]: 653460281 sent size: 9 option: 60 vendor-class 50:58:45:43:6c:69:65:6e:74<br />
dnsmasq-dhcp[9460]: 653460281 sent size: 17 option: 97 client-machine-id 00:57:46:ec:fe:57:46:ec:fe:57:46:ec:fe:57...<br />
dnsmasq-dhcp[9460]: 653460281 sent size: 32 option: 43 vendor-encap 06:01:03:0a:04:00:50:58:45:09:14:00:00:11...<br />
dnsmasq-tftp[9460]: file /tftpboot/feec4657/start.elf not found<br />
dnsmasq-tftp[9460]: file /tftpboot/autoboot.txt not found<br />
dnsmasq-tftp[9460]: file /tftpboot/config.txt not found<br />
dnsmasq-tftp[9460]: file /tftpboot/recovery.elf not found<br />
dnsmasq-tftp[9460]: file /tftpboot/start.elf not found<br />
dnsmasq-tftp[9460]: file /tftpboot/fixup.dat not found</p>
</blockquote>
<p>This shows that the first network boot client has successfully made requests to the TFTP service on the network boot server.</p>
</li>
<li><p>Notice the <code>dnsmasq-tftp[9460]: file /tftpboot/feec4657/start.elf not found</code> line. The 'feec4657' part of the path is the serial number of the network boot client (it will obviously be different for you) and is what allows you to serve different boot images to different devices.</p>
</li>
</ol>
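<p>If you have several Pis, picking the serial number out of the log by eye gets tedious. Here's a small sketch of a helper (my own, not part of dnsmasq) that does the grep for you, assuming the log format shown above:</p>
<pre><code class="language-bash"># Hypothetical helper: extract the client serial number from the
# "start.elf not found" line in daemon.log
extract_serial() {
  grep -oE '/tftpboot/[0-9a-f]{8}/start\.elf' "$1" | head -n1 | cut -d/ -f3
}

# e.g. extract_serial /var/log/daemon.log  ->  feec4657
</code></pre>
<p>On a Pi that is already booted you can also read the serial directly: it is the last eight hex digits of the <code>Serial</code> line in <code>/proc/cpuinfo</code>.</p>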
</li>
<li><p>Create a directory for the first network boot client in the <code>/tftpboot</code> directory (remembering to replace <code>[SerialNumber]</code> with the value you found above):</p>
<pre><code class="language-bash">sudo mkdir /tftpboot/[SerialNumber]
</code></pre>
</li>
<li><p>Copy the boot directory from the <code>/nfs/raspi1</code> directory to the new directory in <code>/tftpboot</code>:</p>
<pre><code class="language-bash">sudo cp -a /nfs/raspi1/boot/* /tftpboot/[SerialNumber]/
</code></pre>
</li>
<li><p>Reconnect the power to the network boot client and it should now boot successfully. If you use <code>sudo tail -f /var/log/daemon.log</code> again you should see something like the following:</p>
<blockquote class="blockquote">
<p>dnsmasq-dhcp[9460]: 653460281 vendor class: PXEClient:Arch:00000:UNDI:002001<br />
dnsmasq-dhcp[9460]: 653460281 PXE(eth0) b8:27:eb:ec:46:57 proxy<br />
dnsmasq-dhcp[9460]: 653460281 tags: eth0<br />
dnsmasq-dhcp[9460]: 653460281 broadcast response<br />
dnsmasq-dhcp[9460]: 653460281 sent size: 1 option: 53 message-type 2<br />
dnsmasq-dhcp[9460]: 653460281 sent size: 4 option: 54 server-identifier 192.168.1.102<br />
dnsmasq-dhcp[9460]: 653460281 sent size: 9 option: 60 vendor-class 50:58:45:43:6c:69:65:6e:74<br />
dnsmasq-dhcp[9460]: 653460281 sent size: 17 option: 97 client-machine-id 00:44:44:44:44:44:44:44:44:44:44:44:44:44...<br />
dnsmasq-dhcp[9460]: 653460281 sent size: 32 option: 43 vendor-encap 06:01:03:0a:04:00:50:58:45:09:14:00:00:11...<br />
dnsmasq-tftp[9460]: file /tftpboot/bootsig.bin not found<br />
dnsmasq-tftp[9460]: sent /tftpboot/bootcode.bin to 192.168.1.112<br />
dnsmasq-dhcp[9460]: 653460281 available DHCP subnet: 192.168.1.255/255.255.255.0<br />
dnsmasq-dhcp[9460]: 653460281 vendor class: PXEClient:Arch:00000:UNDI:002001<br />
dnsmasq-dhcp[9460]: 653460281 PXE(eth0) b8:27:eb:ec:46:57 proxy<br />
dnsmasq-dhcp[9460]: 653460281 tags: eth0<br />
dnsmasq-dhcp[9460]: 653460281 broadcast response<br />
dnsmasq-dhcp[9460]: 653460281 sent size: 1 option: 53 message-type 2<br />
dnsmasq-dhcp[9460]: 653460281 sent size: 4 option: 54 server-identifier 192.168.1.102<br />
dnsmasq-dhcp[9460]: 653460281 sent size: 9 option: 60 vendor-class 50:58:45:43:6c:69:65:6e:74<br />
dnsmasq-dhcp[9460]: 653460281 sent size: 17 option: 97 client-machine-id 00:57:46:ec:fe:57:46:ec:fe:57:46:ec:fe:57...<br />
dnsmasq-dhcp[9460]: 653460281 sent size: 32 option: 43 vendor-encap 06:01:03:0a:04:00:50:58:45:09:14:00:00:11...<br />
dnsmasq-tftp[9460]: file /tftpboot/feec4657/autoboot.txt not found<br />
dnsmasq-tftp[9460]: error 0 Early terminate received from 192.168.1.112<br />
dnsmasq-tftp[9460]: failed sending /tftpboot/feec4657/start.elf to 192.168.1.112<br />
dnsmasq-tftp[9460]: sent /tftpboot/feec4657/config.txt to 192.168.1.112<br />
dnsmasq-tftp[9460]: file /tftpboot/feec4657/recovery.elf not found<br />
dnsmasq-tftp[9460]: sent /tftpboot/feec4657/start.elf to 192.168.1.112<br />
dnsmasq-tftp[9460]: sent /tftpboot/feec4657/fixup.dat to 192.168.1.112<br />
dnsmasq-tftp[9460]: file /tftpboot/feec4657/recovery.elf not found<br />
dnsmasq-tftp[9460]: sent /tftpboot/feec4657/config.txt to 192.168.1.112<br />
dnsmasq-tftp[9460]: file /tftpboot/feec4657/dt-blob.bin not found<br />
dnsmasq-tftp[9460]: file /tftpboot/feec4657/recovery.elf not found<br />
dnsmasq-tftp[9460]: sent /tftpboot/feec4657/config.txt to 192.168.1.112<br />
dnsmasq-tftp[9460]: file /tftpboot/feec4657/bootcfg.txt not found<br />
dnsmasq-tftp[9460]: sent /tftpboot/feec4657/cmdline.txt to 192.168.1.112<br />
dnsmasq-tftp[9460]: sent /tftpboot/feec4657/bcm2710-rpi-3-b.dtb to 192.168.1.112<br />
dnsmasq-tftp[9460]: sent /tftpboot/feec4657/config.txt to 192.168.1.112<br />
dnsmasq-tftp[9460]: file /tftpboot/feec4657/recovery8.img not found<br />
dnsmasq-tftp[9460]: file /tftpboot/feec4657/recovery8-32.img not found<br />
dnsmasq-tftp[9460]: file /tftpboot/feec4657/recovery7.img not found<br />
dnsmasq-tftp[9460]: file /tftpboot/feec4657/recovery.img not found<br />
dnsmasq-tftp[9460]: file /tftpboot/feec4657/kernel8-32.img not found<br />
dnsmasq-tftp[9460]: error 0 Early terminate received from 192.168.1.112<br />
dnsmasq-tftp[9460]: failed sending /tftpboot/feec4657/kernel8.img to 192.168.1.112<br />
dnsmasq-tftp[9460]: error 0 Early terminate received from 192.168.1.112<br />
dnsmasq-tftp[9460]: failed sending /tftpboot/feec4657/kernel7.img to 192.168.1.112<br />
dnsmasq-tftp[9460]: file /tftpboot/feec4657/armstub8-32.bin not found<br />
dnsmasq-tftp[9460]: sent /tftpboot/feec4657/kernel7.img to 192.168.1.112<br />
dnsmasq-dhcp[9460]: 1754635714 available DHCP subnet: 192.168.1.255/255.255.255.0<br />
dnsmasq-dhcp[9460]: 1754635714 available DHCP subnet: 192.168.1.255/255.255.255.0<br />
rpc.mountd[26471]: authenticated mount request from 192.168.1.112:843 for /nfs/raspi1 (/nfs/raspi1)</p>
</blockquote>
<p>Here we can see the following:</p>
<ul>
<li><code>sent /tftpboot/bootcode.bin to 192.168.1.112</code> -> We successfully sent the <code>bootcode.bin</code> to the network boot client</li>
<li><code>sent /tftpboot/feec4657/[FILENAME] to 192.168.1.112</code> -> We successfully sent boot files from the device specific <code>/tftpboot</code> directory to the network boot client</li>
<li><code>authenticated mount request from 192.168.1.112:843 for /nfs/raspi1 (/nfs/raspi1)</code> -> the network boot client mounted to the system drive from the nfs share.</li>
</ul>
</li>
<li><p>You should now be able to ssh into the network boot client using the following command (replacing <code>[IP Address]</code> with the address you saw in the log):</p>
<pre><code class="language-bash">ssh pi@[IP Address]
</code></pre>
<p>Log in with the default password of 'raspberry'.</p>
</li>
</ol>
<h2 id="additional-network-boot-clients">Additional Network Boot Clients</h2>
<p>To add additional network boot clients, simply repeat steps 3, 6-10 and 15-18, replacing all instances of <code>raspi1</code> with a new name.</p>
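<p>The per-client TFTP setup (make the serial-numbered directory, copy the boot files into it) can also be wrapped in a small function. This is my own sketch, assuming the <code>/nfs</code> and <code>/tftpboot</code> layout used throughout this post (roots are overridable variables so it's easy to dry-run):</p>
<pre><code class="language-bash"># Hypothetical helper: prepare the TFTP boot directory for one client
TFTP_ROOT=${TFTP_ROOT:-/tftpboot}
NFS_ROOT=${NFS_ROOT:-/nfs}

setup_tftp_client() {
  local name=$1 serial=$2
  mkdir -p "$TFTP_ROOT/$serial"
  # copy the contents of the client's boot directory into its TFTP directory
  cp -a "$NFS_ROOT/$name/boot/." "$TFTP_ROOT/$serial/"
}

# e.g. (as root): setup_tftp_client raspi2 0123abcd
</code></pre>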
<h2 id="enjoy">Enjoy</h2>
<p>This is just a short post - mostly for my own benefit - on how to network boot multiple Raspberry Pis from an x86 Linux Server. While this has been covered <a href="https://hackaday.com/2019/11/11/network-booting-the-pi-4/">many</a> <a href="https://hackaday.com/2018/10/08/hack-my-house-running-raspberry-pi-without-an-sd-card/">times</a> in <a href="https://www.raspberrypi.org/documentation/hardware/raspberrypi/bootmodes/net_tutorial.md">other</a> <a href="https://github.com/raspberrypi/rpi-eeprom/blob/master/firmware/raspberry_pi4_network_boot_beta.md">posts</a> none of them worked for me "out of the box". Here's what does.</p>