## Blog: Ramón Soto Mathiesen

### Argue for robustness

Many of us who work with F# on a daily basis claim that we can build more robust and bulletproof applications with fewer lines of code than we would need in the C's (C, C++, C#, …). So how do we achieve this?

I will try to explain this in a less theoretical way so people don't get lost in translation. In addition, I will provide the usual foo/bar examples as well as a basic real-world example.

Let’s start by defining a couple of functions:
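The original snippet is not reproduced here, so the following is a minimal sketch of how such functions could look; the exact bodies of foo, bar and foobar are assumptions, but they follow the try/with pattern described below:

```fsharp
open System

// Generic, thread-safe logging: Console.WriteLine is thread-safe,
// and sprintf "%A" can print a value of any type
let log x (ex : exn) =
    Console.WriteLine (sprintf "Input: %A - Exception: %A" x ex.Message)

// Illustrative bodies (assumptions); the point is the try/with shape
let foo (x : int) =
    try Some (x * 2) with ex -> log x ex; None

let bar (x : int) =
    try Some (x + 1) with ex -> log x ex; None

// 2 / x throws a DivideByZeroException when x = 0
let foobar (x : int) =
    try Some (2 / x) with ex -> log x ex; None
```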

We can all agree that the functions look pretty robust, right? The main operation is performed inside a try/with statement; for the C's, think of it as a try/catch statement. Now, if the operation fails (2/0 is possible in foobar), the log function will be called with the input parameter x and the exception ex. What may seem a bit strange is that both branches of the try/with finish in Some/None. This is one of the powerful features of F#: Some/None is a union type between the type and no value. In other words, either you have a value of the given type, Some of 'a, or you don't have any value at all, None. If you are familiar with ML-like languages, you will have seen this as datatype 'a option = NONE | SOME of 'a, in an almost identical form in OCaml as type 'a option = None | Some of 'a (you might be able to argue that F# is the .NET version of OCaml), and finally as data Maybe a = Just a | Nothing in Haskell.

Remark: Just for correctness, the log function is implemented with the Console.WriteLine method, which is thread-safe, in combination with sprintf/"%A" to make it generic.

### Robustness but verbosity

Now that we have the robust functions, let's combine a couple of them together, as we do when we write code:
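A naive attempt could look like this sketch (assuming the foo/bar/foobar functions from before):

```fsharp
// This does not compile: foo returns an int option,
// but bar expects a plain int
2 |> foo |> bar |> foobar
```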

We can see that we get a type error, as the function bar takes an int as input and not an int option. Let's re-write the code in a correct way:
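A correct, if verbose, version could be sketched like this, unwrapping each option by hand:

```fsharp
let result =
    match foo 2 with
    | None   -> None
    | Some x ->
        match bar x with
        | None   -> None
        | Some y -> foobar y
```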

I think it's easy to argue for robustness and correctness, but you might think: "Less code, you say?". And you are right; this kind of implementation would be really annoying to write for every single function you pipe the result to.

The more theoretical approach to simplifying the code while still maintaining correctness would be to implement the Maybe Monad (monads are called computation expressions in F#):
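A minimal Maybe computation expression could be sketched like this:

```fsharp
type MaybeBuilder () =
    // Bind unwraps the option and feeds the value to the rest of
    // the computation; None short-circuits the whole expression
    member this.Bind (m, f) =
        match m with
        | Some v -> f v
        | None   -> None
    member this.Return v = Some v

let maybe = MaybeBuilder ()
```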

Where we can use the monad to write the previous code as:
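With a maybe builder in scope, the pipeline could be written as:

```fsharp
let result =
    maybe {
        let! x = foo 2
        let! y = bar x
        let! z = foobar y
        return z
    }
```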

By using the monad we don't have to write function | Some v -> some_function v | None -> None each time we pipe the value, but it's still somewhat annoying to have to write all the temporary variables x, y, z in order to get the final result. The ideal scenario would be to write the following code:
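That is, the plain pipeline:

```fsharp
2 |> foo |> bar |> foobar // does not type-check
```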

But this is not possible, as we need to bind the functions together. Actually, that is what let! does: the let! keyword is just syntactic sugar for calling the Bind method.

Remark: The Maybe Monad can be implemented in less verbose code by using the built-in Option.bind function:
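A sketch of the same builder delegating to Option.bind:

```fsharp
type MaybeBuilder () =
    // Option.bind applies f when the option is Some, and
    // propagates None otherwise
    member this.Bind (m, f) = Option.bind f m
    member this.Return v = Some v

let maybe = MaybeBuilder ()
```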

### Infix operator to the rescue (>>=)

So how do we get as close as possible to 2 |> foo |> bar |> foobar without compromising on correctness and robustness? Well, the answer is quite simple.

What we need to do is to introduce the following infix operator:
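A sketch of the operator, which is just Bind in infix form:

```fsharp
// m >>= f applies f to the wrapped value, or short-circuits on None
let (>>=) m f =
    match m with
    | Some v -> f v
    | None   -> None
```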

Now we can combine functions together in the following manner:
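For example, with the functions defined earlier:

```fsharp
foo 2 >>= bar >>= foobar
```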

Which is pretty close to what we wanted to achieve, 2 |> foo |> bar |> foobar, right?

Another thing to keep in mind when using bound functions is to think of bind as analogous to short-circuit evaluation (SCE). SCE denotes the semantics of some Boolean operators in some programming languages, in which the second argument is executed or evaluated only if the first argument does not suffice to determine the value of the expression. For example: when the first argument of the AND function evaluates to false, the overall value must be false; and when the first argument of the OR function evaluates to true, the overall value must be true. Binding functions is more or less the same: the output of the first function is bound to the input of the second. If the first function returns None, then the second is never called and None is returned for the whole expression. Let's see this in an example using foobar and 0 as input:
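Something along these lines, where the very first call fails:

```fsharp
foobar 0 >>= foobar >>= foobar
// foobar 0 divides by zero, logs the exception and returns None;
// the two remaining calls are never evaluated
```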

After the first foobar throws an exception and returns None, none of the following foobar functions are evaluated. Cool, right?

### Another infix operator to the rescue (|=)

As in real life, you might want to get the value out of the type and use it in other frameworks that don't support Some/None. What you can do is something like:
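For example, a sketch that just prints the value if present:

```fsharp
match foo 2 >>= bar with
| Some v -> printfn "%d" v
| None   -> () // nothing to do
```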

or
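or a sketch that fails hard when there is no value:

```fsharp
let value =
    match foo 2 >>= bar with
    | Some v -> v
    | None   -> failwith "no value"
```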

This will limit your code to unit = () or to throwing an exception, which would be OK if it's encapsulated in a try/with statement. But sometimes you will just want to be able to assign a value that means no change in the final result of the computation. For example: 0 in a sum of integers, 1 in a product of integers, an empty list in a concatenation, and so on. To achieve this I usually implement the following infix operator:
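A sketch of the operator, returning a supplied default when there is no value:

```fsharp
// m |= d unwraps the value, or falls back to the default d
let (|=) m d =
    match m with
    | Some v -> v
    | None   -> d
```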

This now allows us to use the value as the given type, and if there is no value, use the specified default value:
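For example (the exact expressions are illustrative):

```fsharp
// if a pipeline fails, it counts as 0, the neutral element of addition
let sum = ((foo 2 >>= bar) |= 0) + (foobar 0 |= 0)
```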

Remark: As with the Maybe Monad, this infix operator can also be implemented in less verbose code by using the built-in Option.fold function:
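A sketch using Option.fold, where the folder simply keeps the wrapped value and the initial state acts as the default:

```fsharp
let (|=) m d = Option.fold (fun _ v -> v) d m
```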

### So let’s use the infix operators on a basic real world example

Now that we have the recipe for creating correct and robust one-liner functions, let's define two functions for this example. The first will return Some array of the even numbers from an arbitrary array. The second will return Some array of the top 10 biggest numbers from an arbitrary array.
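The original definitions aren't shown here, so this is a sketch of how the two functions could look, following the same try/with pattern (log is the function from earlier):

```fsharp
let evens (xs : int []) =
    try Some (xs |> Array.filter (fun x -> x % 2 = 0))
    with ex -> log xs ex; None

// Fragile on purpose: the slice assumes at least 10 elements
let top10 (xs : int []) =
    try Some (xs |> Array.sortBy (fun x -> - x) |> fun ys -> ys.[.. 9])
    with ex -> log xs ex; None
```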

For the first function it's easy to argue that it will never break: if the array doesn't contain any even numbers, Some empty array will be returned. But the second function assumes that it can always return Some sub-array of size 10. What will happen when the input array is smaller than that? Let's execute the code:
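Along these lines, combining the two functions with the infix operators from before:

```fsharp
([| 1 .. 2000 |] |> evens >>= top10) |= [||] |> printfn "%A"
([| 1 .. 10 |]   |> evens >>= top10) |= [||] |> printfn "%A"
```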

We can see that the first evaluation returns an array of the ten even numbers from 2000 down to 1982, while the second returns an empty array and logs the out-of-bounds exception to the console.

Remark: Please never write code like this; it's always more desirable to check the size of the array than to get an out-of-bounds exception. This was just to make a point about bulletproof functions, and thereby applications, by using F#.

### Conclusion

Well, now that I have given you the recipe for creating small, robust and bulletproof functions, or Lego blocks as I call them, that can easily be tested for correctness and robustness, it's your turn to create your own blocks, combine them into bigger blocks and make robust applications. Happy coding, and remember to have fun.

### Where to go from here

Finally, if you want to get a deeper understanding of what is happening here, please spend an hour of your life watching this amazing video:

I've been employed @ Delegate A/S for about a year. In this short period I have created some tools for our CRM developers/consultants in order to make working with Microsoft Dynamics CRM smoother. One of these tools is DAXIF#, which is defined as: a set of tools that, in combination with other MS tools, makes it easier to work with CRM/xRM on a daily basis (also for developers who are not familiar with the platform).

The interface is through F# script files that can be executed from a command prompt or directly from Visual Studio (the best IDE for F# scripts):

The main reason to use F# to create this set of tools is, as usual, the same sales pitch we give again and again and again: error-free projects with a smaller code base, where there is only a need for one programming language (no .bat files or PowerShell, …), and where big data, external data sources, parallelism, concurrency and asynchronous processing are trivial to use:

One of the things I learned from this project was that I could actually make F#-scripted and self-documented unit tests that can be executed without having to build the final .DLL:

DAXIF# is proprietary, so you will need a license to use it. We don't provide licenses to other CRM partners/competitors.

Stay tuned for the upcoming website and NuGet package.

• Link to slides from MF#K (English): Slides

• Link to slides from CRM Partner Community (Danish): Slides

• Geek alert: A few references to Dota 2 might appear in the code:

or in the project structure:

I tried to implement the bitonic sorter I wrote about in my master's thesis. The result is the following code:
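The original listing isn't reproduced here, but a minimal sequential sketch of a bitonic sorter (array length must be a power of two) could look like this; the thesis version targeted parallel execution, which this sketch does not attempt:

```fsharp
// Merge a bitonic sequence into a sorted one in the given direction
let rec merge up (a : int []) =
    if a.Length <= 1 then a
    else
        let half = a.Length / 2
        let a = Array.copy a
        for i in 0 .. half - 1 do
            // compare-exchange across the two halves
            if (a.[i] > a.[i + half]) = up then
                let t = a.[i]
                a.[i] <- a.[i + half]
                a.[i + half] <- t
        Array.append (merge up a.[.. half - 1]) (merge up a.[half ..])

// Sort the first half ascending and the second descending,
// producing a bitonic sequence, then merge
let rec bitonicSort up (a : int []) =
    if a.Length <= 1 then a
    else
        let half = a.Length / 2
        merge up (Array.append (bitonicSort true  a.[.. half - 1])
                               (bitonicSort false a.[half ..]))

bitonicSort true [| 3; 7; 4; 8; 6; 2; 1; 5 |]
```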

It still lacks speed, even with the use of the built-in libraries Array.Parallel or Async.Parallel / Async.RunSynchronously (fork/join), but it was fun to write, as usual.

REMARK: It's much more readable than the code I wrote back in the day …

Last June I was in Madrid for the TechEd conference. The main focus was the cloud. Microsoft has actually done a really good job and the platform is very mature. I'm not going to lie: I would prefer to host everything in the cloud rather than doing it on-premises. A few PowerShell scripts and voilà, you've got yourself the desired environments. And with the instance slider, you've got the number of instances you might need for a specific period. Try to do something similar with your on-premises infrastructure. Another awesome feature is that from now on you only pay for the environments while they are running. This means that DEV and TEST can be shut down while they are not being used:

Lucian Wischik gave three talks regarding async arriving in C# 5.0 (no callbacks needed). Hmmmm, I wonder where we have seen this before; who said F#?

Another really interesting talk was by David Starr, regarding brownfield development. We have all seen this huge amount of spaghetti code, right?

But how do we actually ensure that we don't get to this point? And how do we avoid methods growing to become huge? I think the main problem is that we use a toolbox that actually allows this to happen, mostly because of its verbosity.

… well, the answer isn't that difficult. Even though Dustin Campbell gave a good talk, Microsoft really needs to understand that they are not going to catch businessmen's attention by showing how good F# is at solving Project Euler problems. What Microsoft needs to do is show, on one of their platforms, how using F# provides a cleaner and more robust way to make quality software, and we might be able to help out on this one; stay tuned:

Finally, not everything in Madrid had to be hard work; there was also time for some pleasure:

As it has been a while since I went to TechEd, and because I have to give a small talk for the rest of the Delegate A/S employees, I needed to get the PowerPoints and some videos. I was a bit bored, and because I love F#, I decided to make a small file crawler. One thing I noticed while creating the app is how simple it is to convert from a sequential to a parallel app: just convert the sequence to an array and then apply parallelism, as simple as that. The only issue I found while converting the app to run in parallel is that printfn is not thread-safe, so after a few changes to Console.WriteLine and sprintf, voilà, it's 100% parallel. This is one of the strong sides of F#: like any other .NET language, it has access to the whole Framework API.
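The crawler code itself isn't shown, but the sequential-to-parallel change can be sketched like this; fetch is a hypothetical stand-in for the actual download logic:

```fsharp
open System.Net

// Illustrative helper: download one URL and return its length
let fetch (url : string) =
    use wc = new WebClient ()
    wc.DownloadString url |> String.length

// Sequential version
let crawl urls = urls |> Seq.map fetch

// Parallel version: same algorithm, Seq.map becomes Array.Parallel.map
let crawlParallel urls =
    urls |> Seq.toArray |> Array.Parallel.map fetch
```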

Remark: There is no need to actually change the algorithm, so it is still as readable as it was before. As stated above, change three lines and voilà, the app runs in parallel …

The crawler is called from a terminal like this:

And will save the files and write the following output:

p.s.: It wouldn’t be that difficult to convert the code above to a generic website file crawler …

I recently participated in Microsoft's Dynamics CRM Training Blitz Day, on the Technical Overview for Application Consultants, Presales Consultants and Developers track. My first impression was …

… and it's actually the way it should be. As it is now, there are a lot of code monkeys who really don't understand the complexity of the systems they are developing, and I know because I've been that monkey on several occasions. But it's not always the monkey's fault. What you have to understand is that what people have been working on for several years, a developer has to learn and master in a very short period in order to implement it in code, normally during the pre-analysis/design phase of a project. On other occasions it's the experts in the subject who are not able to communicate what they want to the developers in an understandable language, like, for example, plain English.

Back in the days when I was studying at the Computer Science Department at Copenhagen University, two fellow students, Joakim Ahnfelt-Rønne and Jørgen Thorlund Haahr, and I wrote a bachelor thesis on the subject Klient/server-applikation til kvalitetskontrol af tandbehandlinger (client/server application for quality control of dental treatments). We tried to create a generic application that would allow subject-matter experts to define business processes from an administration interface without any kind of code. These processes would be defined with a set of rules that would be enforced whenever their peers/colleagues executed the defined process, on both the client side and the server side. It's been about 5 years from that initial prototype, a very limited piece of software but theoretically correct and usable in real life, to what we have nowadays in the form of CRM2013, and I can't avoid getting a little smile on my face thinking that at least one of the big software companies is doing things the right way, or at least trying.

The agenda for the Training Blitz Day looked promising, with buzzwords such as: New UI, Process Agility, Mobile Client, Yammer Integration, Exchange Sync, Business Rules, Client Extensibility

… and it didn't disappoint me, even though it was 4 hours with very short breaks, on top of my previous 8 hours at work.

They started by giving a simple introduction to the new UI. The left bar, which took up about 20% of the screen, is now placed at the top. This new bar is always visible, and you can access all the different sections of the CRM system at any time.

Another major change is that Forms are now a combination of several entities, thanks to Business Process Flows. Given the many devices that are now able to connect to CRM, the Form visibility will adjust automatically based on the size of the screen. Also worth mentioning is that Forms are now a single page, where the previous iFrames are replaced with div HTML tags that are loaded asynchronously.

Finally, as Microsoft already pointed out, the very heavy-loading Ribbons are gone for good. They are replaced with the command bar, which is enabled for touch screens and always visible.

As mentioned at the beginning of this blog post, Business Process Flows are pushed to a whole new level: you are no longer locked to a single process. While you are working on an opportunity, for example, you can choose to run the cross-sale process if you already know the customer, instead of having to go through the standard lead-to-opportunity process.

The processes are easily defined with the well-known interface, with some minor changes.

The best part is that these Business Processes will work for all your interfaces: CRM web interface, Mobile applications, Outlook, Custom apps

One of the awaited moments was the presentation of the Mobile Apps, at the moment only available for Microsoft and Apple phones and tablets. The applications are built on HTML5, but with a native wrapper in order to get access to specific hardware features. By using HTML5 it's easier to provide new functionality without having to deploy new applications to the different app stores.

Note: There will be no offline client; what Microsoft is pushing is always-online, and when you aren't online, you will have previously downloaded data cached. As with the Xbox One, they might have to rethink that one again, or at least we will need developers to do that part.

The XML used to save the visual representations of Forms and Dashboards will be reused in the Mobile Apps. This will make it easier to re-use already implemented functionality. Some limits have been introduced in order to provide a fluent user experience.

A separate phone app has also been made that integrates with contacts in order to make phone calls directly from the CRM app.

Another feature of the new CRM is that the old, memory-heavy Outlook client

will be replaced by several processes that operate separately, avoiding the OS memory limit.

Another awaited feature is server-side synchronization: no more e-mail routers, and no more requiring the Outlook client to be running on the user's PC in order to send a couple of messages.

From now on, these tasks will be done between the Exchange and CRM servers.

The flow of updating an item from a phone is done without Outlook even being used.

So how difficult is it going to be to upgrade from CRM2011 to CRM2013? Well, it's going to be really easy, if your solution complies with the SDK. On the CRM Online version, Microsoft will take care of everything. On-premises there are two options:

• Best-practice: Just take a backup of the current tenant and then import it into your new CRM2013 setup.

• Alternatively: Upgrade the current CRM2011 with CRM2013 and choose to update all current tenants or wait to do it later from the Deploy Manager.

One of the performance improvements of the upgrade is that the two tables per entity will be merged into a single table; they were separated into two tables because of previous SQL limitations on tables.

In order to implement Business Process Flows, there will be almost no need for code written by developers, which is usually difficult to maintain across different developers. Instead, a declarative/visual language/interface will be used.

Once a rule is defined, it will work everywhere.

The subject-matter experts will be able to define the processes from a very simple but powerful interface.

The processes will be deployed with the solution packages, and they now also support export/import of labels for use with several languages.

The only headache will once again be the unsupported solutions created by some "Partners", even though Microsoft keeps saying, time and time again: don't do it.

What the final customer has to understand is that by allowing this to happen, it gets more difficult to upgrade smoothly, or even to install the latest update rollups containing not only new functionality but also fixes to known bugs. In the end, the time/money used to correct these problems, which shouldn't be there in the first place, is paid for by the final customer and not by the "Partner". Just think about that for a moment …

Due to the new UI, the Client SDK is expanded to support the new functionality.

Remark: Some CRM2011 functionality will be @deprecated in the new release. You can see the differences between the two SDK visualizations here: Xrm.Page Object Model

Another must-have for any business-critical application out there is built-in autosave functionality.

Remark: Plug-ins will trigger every time auto-save is called, so please re-think the logic of how plug-ins are implemented. The best practice recommended by Microsoft is, as always, to limit the fields an update plug-in is triggered on.

Sitemaps, and customizations to them, will still be done through the XML, as we are used to.

Another AWESOME feature, in combination with Business Process Flows, is that workflows can now be run synchronously, like plug-ins (both pre- and post-operation). This will also limit the amount of code needed to implement plug-ins.

Real-life scenarios where a set of events needs to be triggered, combining state between these events, will now also be possible without writing code. I wish they had used a bit more time on this matter, as it will be a game changer compared to other CRM providers.

CRM2013 will also provide OData access to mobile/custom apps that use the OData interface through the Windows Azure Active Authentication library.

There will also be support for a phone/SMS login mechanism for highly secured organizations.

As a CRM architect, it is very easy to understand why more and more organizations are choosing the CRM Framework (xRM) as the backbone of their systems. Just look at the next picture: by adding a simple entity to your solution, you automatically get all of this, just by doing a few mouse clicks and adding some text. Impressive, right?

Finally, I would like to say that I'm very excited and pleasantly surprised by the outcome of the new CRM2013 release. I'm really looking forward to working with it, and as I'm writing this blog post, we have just received an e-mail from Microsoft notifying us that the Delegate A/S CRM Online solution will be upgraded in December this year, without us having to do anything from our side. The cloud is no longer the future, but the present.