Wednesday 27 June 2012

HTML5: The end of native apps

The future of web programming is moving towards a common, cross-platform language with HTML5. But will software developers meet CIO demands for new web technologies?
HTML5 is an emerging standard for developing cross-platform web and mobile apps. Many IT chiefs favour web and mobile applications using HTML5 and developers are under pressure to build up-to-date apps using the latest cross-platform web technologies.
How should developers use HTML5 and balance the pros and cons of using emerging web technologies? Is it time to cast off the shackles of native apps for good?
Dan Zambonini, co-founder of web app company Apparus and author of A Practical Guide to Web App Success, says CIOs can no longer count on known timeframes, budgets or even features as a rapidly evolving environment demands project managers exercise greater flexibility.
Zambonini believes back-end software, such as the MongoDB and CouchDB databases and the ZeroMQ messaging library, is helping to make app platforms more generalised and less supplier-specific.
Iterative development for web app success
An iterative process relies on our ability to successfully focus on something for a short period of time and takes into account our inability to accurately visualise and predict how theory becomes reality. By taking short, iterative steps, we can focus on creating brilliance one move at a time, and can evolve our app as we get a better feel for the features that succeed and those that don’t turn out as we hoped.
The iterative process happens on two levels. At the higher level, new app features are developed incrementally. For each release these approximate stages are followed: some research, then interface design, coding, a working release and marketing. Check the customer reactions, learn and repeat.
On the lower level, each of the stages is a mini set of iterations in itself, punctuated by testing that informs us whether or not further cycles are required. This holds especially true at the interface and development stages, where a skeleton design or chunk of code can gradually be refined with more detail as it undergoes testing.
If your app development project has a deadline (and unless you’re working on an informal side project, it will), each high-level iteration should be allotted a specific number of days, to ensure a number of full iterations and the learning that comes from them, over the lifetime of the project.
Each iteration should last a fixed length of time, so that your team can develop a rhythm; you will quickly adapt to the recurring deadlines and become adept at estimating how much functionality can be produced in each.
The exact length of an iteration can range from a week to a month. Your team will need to decide on the best length for them based on a number of factors:
  • The complexity of the app: An iteration needs to be long enough for a team to sometimes produce fairly advanced features. Even if these are only developed to a minimum quality or prototype level, they may take weeks.  Similarly, an iteration should not be so short that most of it is spent on planning, testing and deployment, with little time for the actual development.
  • Customer expectations: If you’re developing an app for a client, they may influence how often they expect to see movement and change. This is not necessarily a negative factor if the customer can participate more easily in shaping the development of the application to meet their expectations.
  • Team pressure and rhythm: A deadline needs to positively pressure the team into productivity without being unrealistic and causing the team to opt out of the process.
To decide on the deliverables for an iteration, a risk-driven approach is superior to choosing the low-hanging, easy features first. When taking this approach, you should first develop the high-priority/high-risk features, followed by high-priority/low-risk and, finally, low-priority/low-risk. Low-priority/high-risk features should be avoided altogether until the app is a proven success.
Extract from A Practical Guide to Web App Success by Dan Zambonini, published by Five Simple Steps.
“Technologies like these really help developers to embrace change, knowing that project managers don’t have to re-write a lot of code or create complex database migration scripts just to change or add a few fields,” says Zambonini.
“On the front end there are lots of useful frameworks, such as the dynamic stylesheet language Less and the Blueprint CSS framework, that help new apps get a first iteration up and running more quickly, so you can collect those invaluable first few pieces of user feedback as soon as possible,” he says.
Zambonini believes the real technologies driving change on the web are browsers.
“Since the start of 2012, we’ve just reached a point where the major browsers are now auto-updating, which is a huge boon for app developers and should give organisations the confidence to use HTML5, CSS3 and JavaScript features,” he says.
A survey of 1,200 developers by Evans Data showed 75% are using HTML5 for app development. On average, developers also rated HTML5 20% higher in importance to their development cycle than Microsoft’s Silverlight or Adobe’s Flash.
Some firms, such as Royal Bank of Scotland (RBS) and the Financial Times (FT), have started using HTML5 web apps rather than creating native, platform-specific apps.
In April this year, the FT’s HTML5 web app reached two million users after launching in June 2011. The media group has since dropped its native iPad app.
Steve Pinches, FT group product head for emerging technologies, says the web app removes the problems of building multiple platform-specific native apps, speeding up development and reducing costs.
Pinches says: “We believe that, in many cases, native apps are simply a bridging solution while web technologies catch up and are able to provide the rich user experience demanded on new platforms.
“As these improve we expect to see more HTML5 apps and fewer native apps, but there is always likely to be a market for native apps for specific brands or when deeper integration with the hardware or super fast performance are required,” he adds.
While web app development allows preferred tools, which are not specific to a supplier, to be used, challenges remain.
Pinches says tools and documentation are still lacking for mobile web app development. WebKit-based HTML5 browsers use a device’s graphics hardware to improve graphics and video display, but this can cause issues such as screen flickering.
But organisations are using HTML5 for mobile apps despite the remaining challenges.
Cardiff University deployed a cross-platform HTML5 mobile app in September 2011 to accompany its native iPhone app, which provides information and services for students.
Eileen Brandreth, director of university IT at Cardiff University, says: “The Cardiff University app is a great way to widen access to some of the university’s key information, tools and services at a time when growth in smartphone use is increasing rapidly in the university community.”
Developers at digital agency Box UK used the jQuery Mobile framework to develop the university’s app.
Gavin Davies, principal software developer at Box UK, says: “The final specification of HTML5 is unlikely to be set in stone until at least 2014, but we’re already seeing a number of benefits, including powerful audio and video support, useful client-side storage capabilities and highly advanced accessibility. As an open format interoperable between mobile and web, there’s also less reliance on closed source proprietary plug-ins.
“A significant saving can be made in developer time to implement many features; while proprietary technologies like Silverlight or Flash require obscure, baroque tags or fancy JavaScript plug-ins for media elements such as video, HTML5 enables it to be added in the mark-up just like any other standard elements, while retaining semantic meaning.”
Legacy limitations
However, Davies believes corporate organisations are more limited, as they tend to have older versions of Internet Explorer installed, which have poor support for HTML5.
“This means that inelegant work-arounds are sometimes required to provide features,” he says.
Research firm Forrester recommends using HTML5test.com, Modernizr and the HTML5 Boilerplate to target differences in browser support and to identify features that work across platforms.
Forrester also believes native apps are still preferable in the face of such restrictions.
“Gaps in hardware acceleration, inconsistent support for on-device instruments, and incomplete user interface control frameworks still make creating the most engaging device-centric apps a bridge too far for HTML5 at this point.
“That will change over time, but for the foreseeable future, we still see a place in the app internet for native apps and middleware tools that cross-compile to device-optimised code,” writes Forrester in a report titled Embracing The Open Web: Web Technologies You Need To Engage Your Customers, And Much More.
“We all need to work together to stress the web as a platform and to push over the few remaining hurdles”
Despite its limitations, industry leaders see HTML5 as a common language with the potential to completely overcome native operating system (OS) restrictions.
In a keynote speech at the International CTIA Wireless Association 2012 conference in New Orleans, Gary Kovacs, CEO of Mozilla, called for the mobile ecosystem to stop developing proprietary platforms.
He says: “We need to move beyond the silos of native operating systems and hybrid apps on proprietary platforms to device-agnostic platforms that run the full, standards-compliant and open web.
“We all need to work together to stress the web as a platform, to push over a few remaining hurdles like graphics and video and native device API access and work together on the common language, HTML5.”
Users demand that software developers think about how the next generation of apps will transcend differences between OS platforms and browsers, to create a seamless and responsive experience.
Whether that happens with HTML5 or a future technology that does it better, the development process will need to be flexible to cater for new and emerging technologies - and the common language industry leaders are pushing for.

Wednesday 13 June 2012

Turning the Cloud into a Supercomputer: Windows Azure and High Performance Compute goes mass market

Introduction

When you’ve been a proponent of the Microsoft technology stack for many years you tend to see a lot of technologies come and go. Some stick around, evolve and end up changing the nature of the enterprise; others fall by the wayside as the market decides it doesn’t need them. Eleven years ago one of us (Richard Conway) spoke at PDC 2001 on .NET My Services, which was by all accounts ahead of its time but didn’t offer the open flexibility we’ve come to expect through evolving web standards. That’s because at the time the standards didn’t exist. Nowadays it’s a brave new world, and we have the luxury of application building blocks such as WCF, which embody those evolved standards.
Windows Azure is a PaaS (Platform as a Service). The word Windows is highly relevant here, as this is a next generation of Windows, offering an alternative programming and deployment model for applications. It is not a one-size-fits-all solution, and there are many hurdles left before the “cloud” becomes mainstream. Fears and suspicions have to be allayed, and that will only come with time, a good record of performance and diverse use cases.
This article is about one aspect of Windows Azure which is not fully understood and in our view changes the nature of how companies and individuals will do business. As hosting costs fall and scalability becomes achievable to all new market entrants, intellect and time to market are just as likely to create the next Facebook as capital. This article discusses, with examples, how the cloud can be turned into a vast calculation matrix by applying parallel computational capabilities through the provision of on demand resources.
Elastacloud has spent the past year building applications with the Windows Azure HPC Scheduler SDK. We’ve presented on the topic across the country, both virtually and in person, and we’ll be presenting more at an upcoming conference we’re running, sponsored by Microsoft and others, on the 22nd June. HPC has traditionally been a high-value infrastructure product, and Microsoft’s offering has been to enable HPC clusters across an enterprise. When the HPC team began their analysis of the cloud they made it an extension of the on-premise model, with the idea of “bursting to the cloud”. Within the last 12 months HPC has become a wholly cloud-deployable product, which lets services and applications across a range of business lines leverage the sheer power and elasticity of the cloud to perform very mathematically intensive operations across a large number of nodes, without the need for any on-premise infrastructure. The upshot is that HPC now falls within the purview of developers rather than infrastructure teams. So here are a few lessons to get you started!

Simple Architectural Patterns

HPC is used for mathematically intensive workloads across various lines of business. For example, Monte Carlo simulations are a perfect candidate for an HPC workload. These are “embarrassingly parallel” applications which can be spread over multiple compute nodes, so that sometimes billions of calculations can be executed and the results aggregated when they’re complete. Other compute-intensive workloads which lend themselves well to HPC include frame rendering, encoding, transcoding, compressing, encrypting, statistical operations, engineering research, medical checks, and image and video pattern matching, to name but a few.
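To make “embarrassingly parallel” concrete, here is a minimal single-machine sketch (our own illustration, not taken from the HPC SDK samples) of a Monte Carlo estimate of pi: every sample is independent, and only the final aggregation needs coordination. On an HPC cluster each partition would become a task on a compute node rather than a thread on one machine.

using System;
using System.Threading;
using System.Threading.Tasks;

class MonteCarloPi
{
    static void Main()
    {
        const int partitions = 8;              // would map to tasks on compute nodes in a cluster
        const long samplesPerPartition = 1000000;
        long insideCircle = 0;

        Parallel.For(0, partitions, partition =>
        {
            var random = new Random(Guid.NewGuid().GetHashCode());
            long localHits = 0;

            // Each partition runs completely independently of the others.
            for (long i = 0; i < samplesPerPartition; i++)
            {
                double x = random.NextDouble();
                double y = random.NextDouble();
                if (x * x + y * y <= 1.0) localHits++;
            }

            // The only shared step: aggregate the partial results.
            Interlocked.Add(ref insideCircle, localHits);
        });

        Console.WriteLine("Estimated pi: {0}", 4.0 * insideCircle / (partitions * samplesPerPartition));
    }
}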
Fig 1 – Using a Parametric Sweep
The simplest pattern is also the most powerful: the parametric sweep. It allows us to prepare our data in advance and use that data to drive a meaningful workload, such as a pricing calculation. Each iteration grabs the next available unlocked record from the database or storage and executes. The results can be left in the same database, or in another, for a client to collect asynchronously. Whilst this is both quick and powerful, it’s not very flexible.
Fig 2 – Makeup of a Parametric Sweep application
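As a rough sketch of how a sweep like this might be submitted programmatically, the snippet below uses the Microsoft.Hpc.Scheduler API; the head node address and the PriceCalc.exe pricing executable are placeholders of our own, not part of the SDK samples.

using Microsoft.Hpc.Scheduler;
using Microsoft.Hpc.Scheduler.Properties;

class SweepSubmitter
{
    static void Main()
    {
        // Connect to the cluster's head node (placeholder address).
        IScheduler scheduler = new Scheduler();
        scheduler.Connect("myhpcazureservice.cloudapp.net");

        // A single job containing one parametric sweep task; the scheduler
        // expands it into an instance per value of the sweep index.
        ISchedulerJob job = scheduler.CreateJob();
        ISchedulerTask task = job.CreateTask();
        task.Type = TaskType.ParametricSweep;
        task.StartValue = 1;
        task.EndValue = 1000;
        task.IncrementValue = 1;

        // The * is substituted with the current sweep index, so each instance
        // picks up a different record to price (hypothetical executable).
        task.CommandLine = @"PriceCalc.exe *";

        job.AddTask(task);

        // Use the cluster credentials set when the cluster was deployed.
        scheduler.SubmitJob(job, "elastacloud", "AC@mplexPass3");
    }
}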
The good news is that the HPC team has enabled SOA in their Azure implementation! Developers can make familiar use of WCF for embarrassingly parallel workloads by creating services and registering them with the HPC runtime. HPC hosts the WCF service and forwards all calls through its host process, so developers can essentially build WCF services with published interfaces and access them through asynchronous proxies created using svcutil or via Visual Studio.
Fig 3 shows the makeup of a SOA application in HPC, which is far more flexible: it takes a familiar, developer-centric application model and parallel-enables it. The most important difference is the asynchronous messaging between client and service: the client calls methods on the service proxy and gets results back as each individual task completes. This means data can be returned to the client rather than the client having to fetch and index it.
Fig 3 – Makeup of a SOA application
Another type of application is an MPI application. This is beyond the scope of this article; suffice it to say that these types of application put the “High Performance” in HPC. In principle they are written in C and can provide a much richer execution experience using the MS-MPI runtime. These applications perform best on reserved hardware where cores have access to shared memory and are located on the same switch, which Windows Azure does not currently guarantee.

Building an HPC application

An HPC “cluster” is made up of the following:
  • Head node
  • Compute nodes
  • Web front end
  • Broker nodes
The “head node” receives all input and external requests and dispatches the applications deployed on the compute nodes. It “schedules” “jobs”, which are comprised of many “tasks”, across the cluster. It is aware of the cluster’s behaviour at all times through various monitoring mechanisms, which allow it to work out which compute nodes it should be scheduling to.
The web front end is used to view job scheduling and to determine whether jobs have succeeded or failed. It is a secure ASP.NET Web Forms application which shows all relevant information about the cluster and the jobs it is running. For newcomers to HPC this is the best insight into how the cluster functions and the information it takes to run a job. For example, when a SOA application executes it simply loads a certain type of job template with the required environment variables, so it is fair to say that, at a fundamental level, it is still a job and a set of underlying tasks.
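The same information the portal shows can also be read programmatically. The short sketch below is our own illustration, assuming a scheduler connection like the one in the parametric sweep sketch above; it simply walks a job and its underlying tasks.

using System;
using Microsoft.Hpc.Scheduler;

class JobInspector
{
    static void Inspect(IScheduler scheduler, int jobId)
    {
        // A SOA run is, underneath, still a job made up of tasks.
        ISchedulerJob job = scheduler.OpenJob(jobId);
        Console.WriteLine("Job {0} is {1}", job.Id, job.State);

        // Expand parametric tasks so every instance is listed.
        foreach (ISchedulerTask task in job.GetTaskList(null, null, true))
        {
            Console.WriteLine("  Task {0}: {1}", task.TaskId, task.State);
        }
    }
}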
Fig 4 – Showing the “physical” makeup of a cluster
To illustrate how powerful, yet easy, building a SOA service for HPC is, we’ll look at a modified example from the Windows Azure Platform Training Kit.
To begin, ensure you have the latest version of the Windows Azure SDK installed, which includes the Windows Azure HPC Scheduler SDK.
Next, create a new WCF project called DivisibleByTwoService and add the following interface:
using System.ServiceModel;

[ServiceContract]
public interface IDivisibleByTwoService
{
    [OperationContract]
    int Mod2(int num);
}
Followed by this implementation:

[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
public class DivisibleByTwoService : IDivisibleByTwoService
{
    public int Mod2(int num)
    {
        return num % 2;
    }
}
In order to make HPC aware of the service we need to create some very specific configuration, naming the configuration file in our project DivisibleByTwo.config.
We won’t look at the serviceModel bindings here, but we will look at the configuration of the broker:
<microsoft.Hpc.Broker>
  <monitor messageThrottleStartThreshold="4096"
           messageThrottleStopThreshold="3072"
           loadSamplingInterval="1000"
           allocationAdjustInterval="30000"
           clientIdleTimeout="300000"
           sessionIdleTimeout="300000"
           statusUpdateInterval="15000"
           clientBrokerHeartbeatInterval="20000"
           clientBrokerHeartbeatRetryCount="3" />
  <services>
    <brokerServiceAddresses>
      <add baseAddress="net.tcp://localhost:9091/Broker"/>
      <add baseAddress="http://localhost/Broker"/>
      <add baseAddress="https://localhost/Broker"/>
    </brokerServiceAddresses>
  </services>
  <loadBalancing messageResendLimit="3"
                 serviceRequestPrefetchCount="1"
                 serviceOperationTimeout="86400000"
                 endpointNotFoundRetryPeriod="300000"/>
</microsoft.Hpc.Broker>
Broker nodes allow requests to be forwarded to the service. It should be evident that there are three types of transport, NetTcp, Http and Https, and the bindings that we haven’t shown here reflect these.

Consuming an application

Once all of this is ready, we create a new console application to consume our new service. Adding a Service Reference to the project generates a service proxy for us, and once that is done we can reference the proxy classes directly in the client code.
First, we need to add references to the Microsoft.Hpc.Scheduler and Microsoft.Hpc.Scheduler.Session assemblies, along with the respective using statements.
Our client will send 100 requests, and for each number the service will send back its remainder after division by two.
To begin, we need to create a Session by referencing the endpoint address of the head node in the cloud and the service name. We supply the username and password that we will set when we deploy the cluster, and use the WebAPI transport scheme, which sends batched requests to the head node as SOAP/XML.
SessionStartInfo info = new SessionStartInfo("myhpcazureservice.cloudapp.net", "DivisibleByTwoService");
info.TransportScheme = TransportScheme.WebAPI;
info.Username = "elastacloud";
info.Password = "AC@mplexPass3";
 
The CreateSession method will be used to set up a session for sending requests to the cluster.
 
int count = 0;

using (Session session = Session.CreateSession(info))
{
    AutoResetEvent done = new AutoResetEvent(false);

    using (BrokerClient<IDivisibleByTwoService> client = new BrokerClient<IDivisibleByTwoService>(session))
    {
        // The response handler fires asynchronously as each task completes.
        client.SetResponseHandler<Mod2Response>((response) =>
        {
            try
            {
                int ud = response.GetUserData<int>();
                int reply = response.Result.Mod2Result;
            }
            catch (SessionException ex)
            {
                Console.WriteLine("SessionException while getting responses in callback: {0}", ex.Message);
            }

            if (Interlocked.Increment(ref count) == 100)
            {
                done.Set();
            }
        });

        // Queue 100 requests; nothing is sent until EndRequests is called.
        for (int i = 0; i < 100; i++)
        {
            client.SendRequest<Mod2Request>(new Mod2Request(i), i);
        }

        client.EndRequests();
        done.WaitOne();
    }

    session.Close();
}
For ease of reading, most of the logging has been stripped out of the code. The asynchronous WCF client code should be self-explanatory. Looking at the for loop, we can see 100 requests being queued by the client but not actually sent on the wire until EndRequests is called. When all of the responses have been received in the handler, the WaitHandle is released, execution continues and the Session is closed. The two members that need special attention are GetUserData<>, which returns the iteration number for the request (or any state data passed in as the second parameter to SendRequest), and the Result.Mod2Result property, which returns the integer result from the service.

Deploying an Application

The Windows Azure HPC Scheduler SDK provides a simple developer tool called AppConfigure which can be used to deploy the packaged HPC software.
Fig 5 – A typical HPC deployment
The Windows Azure HPC Scheduler SDK comes with the correct .cscfg and .csdef configuration for the various types of node, and an ASP.NET website, reachable at https://{mydomain}.cloudapp.net/portal, which requires the login credentials to proceed.
AppConfigure generates a management certificate which can be uploaded to a subscription to perform the deployment. It will create a storage account and database which the scheduler will use to manage and monitor all activity on the deployed cluster.
The deployment for HPC is fairly complicated and so takes a reasonable amount of time to complete; expect to wait up to 40 minutes. The underlying mechanism is simply the Service Management API, plus several executables supplied within the Scheduler SDK which do things such as populate databases and Table Storage.
Fig 6 – The AppConfigure application

Summary

The Windows Azure HPC Scheduler represents a shift in the emphasis of the cloud’s value proposition. The cloud can no longer be thought of simply as a place to deploy applications; it is also a platform for building parallel applications that span multiple nodes to execute tasks in record time.

About Elastacloud

Elastacloud is a consultancy focused wholly on Windows Azure. They have years of experience migrating applications to Windows Azure, including open source, heavy messaging and other stubborn applications. For the last year they have been dedicated to HPC on Azure and have released their first HPC product, the Big Compute Marketplace, to a limited audience. Contact them at @elastacloud and http://www.elastacloud.com/. They are founders of the UK Windows Azure User Group and are running a Microsoft-sponsored conference on the 22nd June with Scott Guthrie as a keynote speaker.
Richard Conway and Andy Cross
Elastacloud
http://blog.elastacloud.com/
Richard is a Director of Elastacloud, a Microsoft partner providing cloud consultancy and product development for Windows Azure and HPC Server, and co-founder of the UK Windows Azure User Group. Andy is Head of Product Development at Elastacloud and co-founder of the UK Windows Azure User Group.

Thursday 7 June 2012

Webinar: Microsoft SQL Server 2012: Potential Game Changer?

DSP, UK Launch Partner for SQL Server 2012

Are you looking at your current SQL estate and wondering how to improve business and cut costs through innovative IT? 

Are you uncertain about the right choice of SQL Server, and what significance the arrival of SQL Server 2012 might have for your bottom line?

Register for this webinar to learn:
  • How a pro-active SQL strategy can help you realise efficiencies and have a positive impact on the overall strategic goals of your business.
  • How you can demonstrate a higher and measurable ROI by reducing the overall cost of ownership of your SQL estate.
  • About the specific enhancements of this version and associated price changes.

http://it.dspmanagedservices.co.uk/webinar--sql-connection/