About Me


Senior Technical Talent Advisor and Community Developer - AI, Machine Learning and Data Science.

Director & Co-Founder @ www.Abso-Fashion-Lutely.co.uk
Director & Founder of Data Science & Big Data Analytics IT Job Board http://datasciencebigdataanalytics.jobboard.io/

Roles:

Data Scientist, Statistician, Insight Analyst, Chief Data Scientist, Data Architect, Data Engineer, Data Analyst, Statistics Consultant, Research Engineer, Quantitative Analyst, Developer, Engineer, Pre-Sales / Post-Sales Engineer, Sales Engineer, Software Engineer, Systems Engineer, Technical Evangelist, Client Services Engineer, Architect (Cloud, Solutions & Enterprise), Cloud & Big Data Analytics, Microsoft Azure and consulting roles.

The above is not an exhaustive list and I work on a diverse range of roles and technologies. Contact me: 0044 788 135 1363

I am currently studying with Udacity to further my career in Data Science, Big Data Analytics, Programming, Algorithms, Artificial Intelligence, Computer Science, Statistics, Physics, Psychology & Visualizing Algebra.

I specialise in the following:

• Talent management, including expanding our internal team by vetting, meeting & technically testing candidates
• Providing advice around legal, accounting and general recruitment to the community
• Utilising the latest social media strategies for recruitment (Twitter, GitHub, etc.)
• Setting up interviews, negotiating extensions, offers & contracts
• Developing new relationships and bringing in new business
• Client & account management
• Attending and arranging technical conferences to better my understanding of the technologies and markets I specialise in

I own & run the following groups on LinkedIn:
• ASP.Net MVC 3, MVC 4 & MVC5 Ninjas
• Cloud & Big data
• Microsoft Azure Ninjas
• Data Scientist & Analytics UK
• HTML5 Ninjas
• Java Blackbelt
• Hadoop Experts UK & EMEA

Tuesday, 29 November 2011

Microsoft must hit 4 Azure milestones ASAP

Next year could be a very big year for Windows Azure, Microsoft’s nearly two-year-old Platform-as-a-Service (PaaS). And it had better be, given the manpower and dollars Microsoft has poured into an effort that has not been as widely adopted as the company must have hoped. (Publicly, company execs are very happy with Azure’s traction, of course.)



The ambitious Azure platform launched in February 2010 with a lot of fanfare and expectations. A year later, Microsoft claimed 31,000 subscribers, although it was unclear how many of them paid. But since then Azure has lost steam, and unlike Amazon, Heroku, VMware and other big names in the cloud, it just doesn’t get much love from the web developers it needs to woo.

All that can change if Microsoft delivers on promises to open up deployment options for Azure and to offer more bite-sized chunks of Azure services in Infrastructure-as-a-Service (IaaS) form. If it does that well and in a reasonable amount of time, Microsoft – which has always been chronically late – will be tardy to the party but will get there in time to make an impression. After all, we are still relatively early in the cloud era. Here’s what Microsoft needs to do for Azure in the next few months:
Azure must run outside Microsoft data centers.
The company has to make Azure available outside its own four walls. Right now, if you want to run apps on Windows Azure, they run in Microsoft data centers (or, since August, in Fujitsu data centers). Fujitsu also recently launched a hybrid option that allows it to run brand-new apps in Azure and connect them to legacy apps on its other cloud platform. We’re still waiting for Hewlett-Packard and Dell Azure implementations. (HP could announce this at its HP Discover show this week.) Down the road, all this work by hardware makers should result in an “Azure Appliance” architecture that would enable other data centers to run the PaaS.
Microsoft must offer VM Roles and Server AppV for IaaS fans.
Microsoft needs to offer more bare-bones chunks of Azure services, akin to Amazon’s EC2. That’s why Microsoft needs to get VM Roles into production mode as soon as possible. VM Roles, in beta for a year, allows organizations to host their own virtual machines in the Azure cloud. Also coming is Microsoft Server App-V, which should make it easier for businesses to move server-side applications from on-premises servers to Azure. “VM roles and Server AppV are the two IaaS components that Microsoft has not yet pushed into production. It still seems that Microsoft has not really focused on the IaaS aspect of Azure,” said Rob Sanfilippo, research vice president with Directions on Microsoft. Microsoft needs that focus now. As Microsoft adds IaaS capabilities to Azure, Amazon is also adding PaaS perks like Elastic Beanstalk to its portfolio, so these companies are on a collision course.
System Center 2012 has to ease app migration and management.
Microsoft needs to make it easier for customers wanting to run private and public cloud implementations to manage both centrally. That’s the promise of Microsoft System Center 2012, due in the first half of 2012. With this release, customers that now must use System Center Virtual Machine Manager for private cloud and the Windows Azure interface for Azure will be able to manage both from the proverbial “one pane of glass.” That’s a good thing, but not good enough. “It’s nice that System Center will be able to monitor stuff in both places, but what we need is to be able to run stuff in either place,” said a .NET developer who tried Azure but moved to AWS.
Microsoft must eat its own Azure “dog food.”
Right now, precious few Microsoft applications run on Azure — there even seems to be confusion at Microsoft about this. One executive said that Xbox Live, parts of CRM Online and Office 365 run on Azure, only to be contradicted by a spokesperson who came back to say none of them actually do. Bing Games, however, do run on Azure. No matter: this is a schedule issue, as almost all these applications predate Azure. The next release of the Dynamics NAV ERP application will be the first Microsoft business application to run fully on Azure. There is no official due date, but Dynamics partners expect it next year. Three other Dynamics ERP products will follow. Directionally, Azure is where all Microsoft apps are going. “Our goal, of course, is that everything will be running on Azure,” said Amy Barzdukas, general manager of Microsoft’s server and tools unit.
In summary: it’s not too late for Azure, but …
Microsoft has to get these things done — and soon — to counter AWS momentum and also that of rival PaaS offerings like Salesforce.com’s Heroku and Red Hat’s OpenShift, which draw new-age, non-.NET-oriented web developers. Recent Gartner research shows PaaS will be a hot segment going forward. The researcher expects PaaS revenue to hit $707 million by the end of this year, up from $512 million in 2010, and to reach $1.8 billion in 2015. That’s good growth, but there will be more of the aforementioned players fighting for that pie. This is going to get good, as younger cloud-era rivals are fighting to make Microsoft — the on-premises software giant — irrelevant in this new arena. But one thing rivals in other eras have learned: it’s idiotic to underestimate this company.

Wednesday, 23 November 2011

Embedding RavenDB into an ASP.NET MVC 3 Application

Attention to the NoSQL movement is growing within the Microsoft .NET Framework community as we continue to hear of companies sharing their implementation experiences of it in applications that we know and use. With this heightened awareness comes the curiosity to dig in and identify how a NoSQL data store could provide benefits or other potential solutions to the software that developers are currently crafting. But where do you start, and how hard is the learning curve? Maybe an even more relevant concern: How much time and effort are required to fire up a new data storage solution and start writing code against it? After all, you have the setup process of SQL Server for a new application down to a science, right?
Word has reached the .NET community on the wings of a raven about a new option for a NoSQL-type data-layer implementation. RavenDB (ravendb.net) is a document database designed for the .NET/Windows platform, packaged with everything you need to start working with a nonrelational data store. RavenDB stores documents as schema-less JSON. A RESTful API exists for direct interaction with the data store, but the real advantage lies within the .NET client API that comes bundled with the install. It implements the Unit of Work pattern and leverages LINQ syntax to work with documents and queries. If you’ve worked with an object-relational mapper (ORM)—such as the Entity Framework (EF) or NHibernate—or consumed a WCF Data Service, you’ll feel right at home with the API architecture for working with documents in RavenDB.
The learning curve for getting up and running with an instance of RavenDB is short and sweet. In fact, the piece that may require the most planning is the licensing strategy (but even that’s minimal). RavenDB offers an open source license for projects that are also open source, but a commercial license is required for closed source commercial projects. Details of the license and the pricing can be found at ravendb.net/licensing. The site states that free licensing is available for startup companies or those looking to use it in a noncommercial, closed source project. Either way, it’s worthwhile to quickly review the options to understand the long-term implementation potential before any prototyping or sandbox development.

RavenDB Embedded and MVC

RavenDB can be run in three different modes:
  1. As a Windows service
  2. As an IIS application
  3. Embedded in a .NET application
The first two have a fairly simple setup process, but come with some implementation strategy overhead. The third option, embedded, is extremely easy to get up and running. In fact, there’s a NuGet package available for it. A call to the following command in the Package Manager Console in Visual Studio 2010 (or a search for the term “ravendb” in the Manage NuGet Packages dialog) will deliver all of the references needed to start working with the embedded version of RavenDB:
Install-Package RavenDB-Embedded
Details of the package can be found on the NuGet gallery site at bit.ly/ns64W1.
Adding the embedded version of RavenDB to an ASP.NET MVC 3 application is as simple as adding the package via NuGet and giving the data store files a directory location. Because ASP.NET applications have a known data directory in the framework named App_Data, and most hosting companies provide read/write access to that directory with little or no configuration required, it’s a good place to store the data files. When RavenDB creates its file storage, it builds a handful of directories and files in the directory path provided to it. It won’t create a top-level directory to store everything. Knowing that, it’s worthwhile to add the ASP.NET folder named App_Data via the Project context menu in Visual Studio 2010 and then create a subdirectory in the App_Data directory for the RavenDB data (see Figure 1).
Figure 1 App_Data Directory Structure
A document data store is schema-less by nature, hence there’s no need to create an instance of a database or set up any tables. Once the first call to initialize the data store is made in code, the files required to maintain the data state will be created.
Working with the RavenDB Client API to interface with the data store requires creating and initializing an instance of an object that implements the Raven.Client.IDocumentStore interface. The API has two classes, DocumentStore and EmbeddableDocumentStore, that implement the interface and can be used depending on the mode in which RavenDB is running. There should only be one instance per data store during the lifecycle of an application. I can create a class to manage a single connection to my document store that will let me access the instance of the IDocumentStore object via a static property and have a static method to initialize the instance (see Figure 2).
Figure 2 Class for DocumentStore
public class DataDocumentStore
{
  private static IDocumentStore instance;
 
  public static IDocumentStore Instance
  {
    get
    {
      if(instance == null)
        throw new InvalidOperationException(
          "IDocumentStore has not been initialized.");
      return instance;
    }
  }
 
  public static IDocumentStore Initialize()
  {
    instance = new EmbeddableDocumentStore { ConnectionStringName = "RavenDB" };
    instance.Conventions.IdentityPartsSeparator = "-";
    instance.Initialize();
    return instance;
  }
}
The static property getter checks a private static backing field for a null object and, if null, it throws an InvalidOperationException. I throw an exception here, rather than calling the Initialize method, to keep the code thread-safe. If the Instance property were allowed to make that call and the application relied upon referencing the property to do the initialization, then there would be a chance that more than one user could hit the application at the same time, resulting in simultaneous calls to the Initialize method. Within the Initialize method logic, I create a new instance of the Raven.Client.Embedded.EmbeddableDocumentStore and set the ConnectionStringName property to the name of a connection string that was added to the web.config file by the install of the RavenDB NuGet package. In the web.config, I set the value of the connection string to a syntax that RavenDB understands in order to configure it to use the embedded local version of the data store. I also map the file directory to the Database directory I created in the App_Data directory of the MVC project:
<connectionStrings>
  <add name="RavenDB" connectionString="DataDir = ~\App_Data\Database" />
</connectionStrings>
The IDocumentStore interface contains all of the methods for working with the data store. I return and store the EmbeddableDocumentStore object as an instance of the interface type IDocumentStore so I have the flexibility of changing the instantiation of the EmbeddableDocumentStore object to the server version (DocumentStore) if I want to move away from the embedded version. This way, all of my logic code that will handle my document object management will be decoupled from the knowledge of the mode in which RavenDB is running.
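As an aside to the thread-safety point above, here's a minimal alternative sketch (assuming .NET 4's Lazy<T>; the class name LazyDocumentStoreHolder is mine) that gets lazy, thread-safe initialization without throwing from the getter:

```csharp
using System;
using Raven.Client;
using Raven.Client.Embedded;

// Hypothetical alternative to the DataDocumentStore class in Figure 2.
// Lazy<T> is thread-safe by default, so the first caller to touch Instance
// triggers exactly one initialization, with no explicit startup call needed.
public static class LazyDocumentStoreHolder
{
  private static readonly Lazy<IDocumentStore> store =
    new Lazy<IDocumentStore>(() =>
    {
      var documentStore = new EmbeddableDocumentStore
      {
        ConnectionStringName = "RavenDB"
      };
      documentStore.Conventions.IdentityPartsSeparator = "-";
      documentStore.Initialize();
      return documentStore;
    });

  public static IDocumentStore Instance
  {
    get { return store.Value; }
  }
}
```

The trade-off is that initialization errors surface on first use rather than at startup, which is one reason to prefer the explicit Initialize call from Application_Start used in this article.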
RavenDB will create document ID keys in a REST-like format by default. An “Item” object would get a key in the format “items/104.” The object model name is converted to lowercase and is pluralized, and a unique tracking identity number is appended after a forward slash with each new document creation. This can be problematic in an MVC application, as the forward slash will cause a new route parameter to be parsed. The RavenDB Client API provides a way to change the forward slash by setting the IdentityPartsSeparator value. In my DataDocumentStore.Initialize method, I’m setting the IdentityPartsSeparator value to a dash before I call the Initialize method on the EmbeddableDocumentStore object, to avoid the routing issue.
Adding a call to the DataDocumentStore.Initialize static method from the Application_Start method in the Global.asax.cs file of my MVC application will establish the IDocumentStore instance at the first run of the application, which looks like this:
protected void Application_Start()
{
  AreaRegistration.RegisterAllAreas();
  RegisterGlobalFilters(GlobalFilters.Filters);
  RegisterRoutes(RouteTable.Routes);
 
  DataDocumentStore.Initialize();
}
From here I can make use of the IDocumentStore object with a static call to the DataDocumentStore.Instance property to work on document objects from my embedded data store within my MVC application.

RavenDB Objects

To get a better understanding of RavenDB in action, I’ll create a prototype application to store and manage bookmarks. RavenDB is designed to work with Plain Old CLR Objects (POCOs), so there’s no need to add property attributes to guide serialization. Creating a class to represent a bookmark is pretty straightforward. Figure 3 shows the Bookmark class.
Figure 3 Bookmark Class
public class Bookmark
{
  public string Id { get; set; }
  public string Title { get; set; }
  public string Url { get; set; }
  public string Description { get; set; }
  public List<string> Tags { get; set; }
  public DateTime DateCreated { get; set; }
 
  public Bookmark()
  {
    this.Tags = new List<string>();
  }
}
RavenDB will serialize the object data into a JSON structure when it goes to store the document. The well-known “Id” named property will be used to handle the document ID key. RavenDB will create that value—provided the Id property is empty or null when making the call to create the new document—and will store it in a @metadata element for the document (which is used to handle the document key at the data-store level). When requesting a document, the RavenDB Client API code will set the document ID key to the Id property when it loads the document object.
The JSON serialization of a sample Bookmark document is represented in the following structure:
{
  "Title": "The RavenDB site",
  "Url": "http://www.ravendb.net",
  "Description": "A test bookmark",
  "Tags": ["mvc","ravendb"],
  "DateCreated": "2011-08-04T00:50:40.3207693Z"
}
The Bookmark class is primed to work well with the document store, but the Tags property is going to pose a challenge in the UI layer. I’d like to let the user enter a list of tags separated by commas in a single text box input field and have the MVC model binder map all of the data fields without any logic code seeping into my views or controller actions. I can tackle this by using a custom model binder for mapping a form field named “TagsAsString” to the Bookmark.Tags field. First, I create the custom model binder class (see Figure 4).
Figure 4 BookmarkModelBinder.cs
public class BookmarkModelBinder : DefaultModelBinder
{
  protected override void OnModelUpdated(ControllerContext controllerContext,
    ModelBindingContext bindingContext)
  {
    var form = controllerContext.HttpContext.Request.Form;
    var tagsAsString = form["TagsAsString"];
    var bookmark = bindingContext.Model as Bookmark;
    bookmark.Tags = string.IsNullOrEmpty(tagsAsString)
      ? new List<string>()
      : tagsAsString.Split(',').Select(i => i.Trim()).ToList();
  }
}
Then I update the Global.asax.cs file to add the BookmarkModelBinder to the model binders at application startup:
protected void Application_Start()
{
  AreaRegistration.RegisterAllAreas();
  RegisterGlobalFilters(GlobalFilters.Filters);
  RegisterRoutes(RouteTable.Routes);
 
  ModelBinders.Binders.Add(typeof(Bookmark), new BookmarkModelBinder());
  DataDocumentStore.Initialize();
}
To handle populating an HTML text box with the current tags in the model, I’ll add an extension method to convert a List<string> object to a comma-separated string:
public static string ToCommaSeparatedString(this List<string> list)
{
  return list == null ? string.Empty : string.Join(", ", list);
}

Unit of Work

The RavenDB Client API is based on the Unit of Work pattern. To work on documents from the document store, a new session needs to be opened; work needs to be done and saved; and the session needs to close. The session handles change tracking and operates in a manner that’s similar to a data context in the EF. Here’s an example of creating a new document:
using (var session = documentStore.OpenSession())
{
  session.Store(bookmark);
  session.SaveChanges();
}
It’s optimal to have the session live throughout the HTTP request so it can track changes, use the first-level cache and so on. I’ll create a base controller that will use the DocumentDataStore.Instance to open a new session on action executing, and on action executed will save changes and then dispose of the session object (see Figure 5). This allows me to do all of the work desired during the execution of my action code with a single open session instance.
Figure 5 BaseDocumentStoreController
public class BaseDocumentStoreController : Controller
{
  public IDocumentSession DocumentSession { get; set; }
 
  protected override void OnActionExecuting(ActionExecutingContext filterContext)
  {
    if (filterContext.IsChildAction)
      return;
    this.DocumentSession = DataDocumentStore.Instance.OpenSession();
    base.OnActionExecuting(filterContext);
  }
 
  protected override void OnActionExecuted(ActionExecutedContext filterContext)
  {
    if (filterContext.IsChildAction)
      return;
    if (this.DocumentSession != null)
    {
      if (filterContext.Exception == null)
        this.DocumentSession.SaveChanges();
      this.DocumentSession.Dispose();
    }
    base.OnActionExecuted(filterContext);
  }
}

MVC Controller and View Implementation

The BookmarksController actions will work directly with the IDocumentSession object from the base class and manage all of the Create, Read, Update and Delete (CRUD) operations for the documents. Figure 6 shows the code for the bookmarks controller.
Figure 6 BookmarksController Class
public class BookmarksController : BaseDocumentStoreController
{
  public ViewResult Index()
  {
    var model = this.DocumentSession.Query<Bookmark>()
      .OrderByDescending(i => i.DateCreated)
      .ToList();
    return View(model);
  }
 
  public ViewResult Details(string id)
  {
    var model = this.DocumentSession.Load<Bookmark>(id);
    return View(model);
  }
 
  public ActionResult Create()
  {
    var model = new Bookmark();
    return View(model);
  }
 
  [HttpPost]
  public ActionResult Create(Bookmark bookmark)
  {
    bookmark.DateCreated = DateTime.UtcNow;
    this.DocumentSession.Store(bookmark);
    return RedirectToAction("Index");
  }
   
  public ActionResult Edit(string id)
  {
    var model = this.DocumentSession.Load<Bookmark>(id);
    return View(model);
  }
 
  [HttpPost]
  public ActionResult Edit(Bookmark bookmark)
  {
    this.DocumentSession.Store(bookmark);
    return RedirectToAction("Index");
  }
 
  public ActionResult Delete(string id)
  {
    var model = this.DocumentSession.Load<Bookmark>(id);
    return View(model);
  }
 
  [HttpPost, ActionName("Delete")]
  public ActionResult DeleteConfirmed(string id)
  {
    this.DocumentSession.Advanced.DatabaseCommands.Delete(id, null);
    return RedirectToAction("Index");
  }
}
The IDocumentSession.Query<T> method in the Index action returns a result object that implements the IEnumerable interface, so I can use the OrderByDescending LINQ expression to sort the items and call the ToList method to capture the data to my return object. The IDocumentSession.Load method in the Details action takes in a document ID key value and de-serializes the matching document to an object of type Bookmark.
The Create method with the HttpPost verb attribute sets the DateCreated property on the bookmark item and calls the IDocumentSession.Store method off of the session object to add a new document record to the document store. The Edit method with the HttpPost verb can call the IDocumentSession.Store method as well, because the Bookmark object will have the Id value already set. RavenDB will recognize that Id and update the existing document with the matching key instead of creating a new one. The DeleteConfirmed action calls a Delete method off of the IDocumentSession.Advanced.DatabaseCommands object, which provides a way to delete a document by key without having to load the object first. I don’t need to call the IDocumentSession.SaveChanges method from within any of these actions, because I have the base controller making that call on action executed.
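Because the session tracks loaded documents, the edit post could also be written without calling Store at all. Here's a sketch of that variant (the action signature is mine), relying on the SaveChanges call already made in the base controller:

```csharp
// Alternative HttpPost edit action: load the tracked document, copy the
// posted values onto it and let the session's change tracking do the rest.
// SaveChanges is still invoked by BaseDocumentStoreController.OnActionExecuted.
[HttpPost]
public ActionResult Edit(string id, Bookmark posted)
{
  var bookmark = this.DocumentSession.Load<Bookmark>(id);
  bookmark.Title = posted.Title;
  bookmark.Url = posted.Url;
  bookmark.Description = posted.Description;
  bookmark.Tags = posted.Tags;
  return RedirectToAction("Index");
}
```

This avoids round-tripping the DateCreated value through a hidden form field, because the stored document's untouched properties are preserved.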
All of the views are pretty straightforward. They can be strongly typed to the Bookmark class in the Create, Edit and Delete markups, and to a list of bookmarks in the Index markup. Each view can directly reference the model properties for display and input fields. The one place where I’ll need to vary on object property reference is with the input field for the tags. I’ll use the ToCommaSeparatedString extension method in the Create and Edit views with the following code:
@Html.TextBox("TagsAsString", Model.Tags.ToCommaSeparatedString())
This will allow the user to input and edit the tags associated with the bookmark in a comma-delimited format within a single text box.

Searching Objects

With all of my CRUD operations in place, I can turn my attention to adding one last bit of functionality: the ability to filter the bookmark list by tags. In addition to implementing the IEnumerable interface, the return object from the IDocumentSession.Query method also implements the IOrderedQueryable and IQueryable interfaces from the .NET Framework. This allows me to use LINQ to filter and sort my queries. For example, here’s a query of the bookmarks created in the past five days:
var bookmarks = session.Query<Bookmark>()
  .Where(i => i.DateCreated >= DateTime.UtcNow.AddDays(-5))
  .OrderByDescending(i => i.DateCreated)
  .ToList();
Here’s one to page through the full list of bookmarks:
var bookmarks = session.Query<Bookmark>()
  .OrderByDescending(i => i.DateCreated)
  .Skip(pageCount * (pageNumber - 1))
  .Take(pageCount)
  .ToList();
RavenDB will build dynamic indexes based on the execution of these queries that will persist for “some amount of time” before being disposed of. When a similar query is rerun with the same parameter structure, the temporary dynamic index will be used. If the index is used enough within a given period, the index will be made permanent. These will persist beyond the application lifecycle.
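When a query is known to be hot, the index can also be declared up front rather than waiting for RavenDB to promote the dynamic one. A sketch, assuming the Raven.Client.Indexes namespace from the same client package (the index class name is mine):

```csharp
using System.Linq;
using Raven.Client.Indexes;

// Hypothetical static index for tag lookups: each bookmark fans out to one
// index entry per tag, so tag-filter queries are served by a permanent
// index from the first request.
public class Bookmarks_ByTag : AbstractIndexCreationTask<Bookmark>
{
  public Bookmarks_ByTag()
  {
    Map = bookmarks => from bookmark in bookmarks
                       from tag in bookmark.Tags
                       select new { Tag = tag };
  }
}
```

Registering it once at startup (for example, IndexCreation.CreateIndexes(typeof(Bookmarks_ByTag).Assembly, DataDocumentStore.Instance) after the store is initialized) creates the index if it doesn't already exist.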
I can add the following action method to my BookmarksController class to handle getting bookmarks by tag:
public ViewResult Tag(string tag)
{
  var model = new BookmarksByTagViewModel { Tag = tag };
  model.Bookmarks = this.DocumentSession.Query<Bookmark>()
    .Where(i => i.Tags.Any(t => t == tag))
    .OrderByDescending(i => i.DateCreated)
    .ToList();
  return View(model);
}
I expect this action to be hit on a regular basis by users of my application. If that’s indeed the case, this dynamic query will get turned into a permanent index by RavenDB with no additional work needed on my part.

A Raven Sent to Awaken Us

With the emergence of RavenDB, the .NET community appears to finally have a NoSQL document store-type solution catered toward it, allowing Microsoft-centric shops and developers to glide through the nonrelational world that so many other frameworks and languages have been navigating for the past few years. Nevermore shall we hear the cries of a lack of nonrelational love for the Microsoft stack. RavenDB is making it easy for .NET developers to start playing and prototyping with a nonrelational data store by bundling the install with a clean client API that mimics data-management techniques that developers are already employing. While the perennial argument between relational and nonrelational surely won’t die out, the ease of trying out something “new” should help lead to a better understanding of how and where a nonrelational solution can fit within an application architecture.

Thursday, 17 November 2011

Now Available! Updated Windows Azure SDK & Windows Azure HPC Scheduler SDK

Now available: a new version of the Windows Azure SDK, a new Windows Azure HPC Scheduler SDK, and an updated Windows Azure Platform Training Kit. Whether you are already using Windows Azure or looking for the right moment to get started, these updates make it easier than ever to build applications on Windows Azure.

Highlights:
  • Windows Azure SDK (November 2011)—Multiple updates to the Windows Azure Tools for Visual Studio 2010 that simplify development, deployment, and management on Windows Azure. The full Windows Azure SDK can be downloaded via the Web Platform Installer here.
  • Windows Azure HPC Scheduler SDK—Works in conjunction with the Windows Azure SDK and includes modules and features to author high-performance computing (HPC) applications that use large amounts of compute resources in parallel to complete work. The SDK is available for download here.
  • Windows Azure Platform Training Kit—Includes hands-on labs, demos, and presentations to help you learn how to build applications that use Windows Azure. Compatible with the new Windows Azure SDK and Windows Azure Tools for Visual Studio 2010. The training kit can be downloaded here.
Here are the details:
The Windows Azure SDK for .NET includes the following new features:
  • Windows Azure Tools for Visual Studio 2010
    • Streamlined publishing: This makes connecting your environment to Windows Azure much easier by providing a publish settings file for your account.  This allows you to configure all aspects of deployments, such as Remote Desktop (RDP), without ever leaving Visual Studio.  Simply use the Visual Studio publishing wizard to download the publish settings and import them into Visual Studio.  By default, publish will make use of in-place deployment upgrades for significantly faster application updates.
    • Multiple profiles: Your publish settings, build config, and cloud config choices will be stored in one or more publish profile MSBuild files. This makes it easy for you and your team to quickly change all of your environment settings. 
    • Team Build: The Windows Azure Tools for Visual Studio 2010 now offer MSBuild command-line support to package your application and pass in properties.  Additionally, they can be installed on a lighter-weight build machine without the requirement of Visual Studio being installed.
    • In-Place Updates: Visual Studio now allows you to make improved in-place updates to deployed services in Windows Azure. For more details visit http://blogs.msdn.com/b/windowsazure/archive/2011/10/19/announcing-improved-in-place-updates.aspx
    • Enhanced Publishing Wizard: An overhaul of the publishing experience for signing in, configuring the deployment, and reviewing the summary of changes
    • Automatic Credential Management Configuration: No longer need to manually create or manage a cert
    • Multiple Subscription Deployment Management: Makes it easier to use multiple Windows Azure subscriptions by selecting the subscription you want to use when publishing within Visual Studio.
    • Hosted Service Creation: Create new hosted services within Visual Studio, without having to visit the Windows Azure Portal.
    • Storage Accounts: Create and configure appropriate storage accounts within Visual Studio (no longer need to do this manually)
    • Remote Desktop Workflow: Enable by clicking a checkbox and providing a username/password – no need to create or upload a cert
    • Deployment Configurations: Manage multiple deployment environment configurations
    • Azure Activity Log: More information about the publish and virtual machine initialization status
For more information on Windows Azure Tools for Visual Studio 2010, see What’s New in the Windows Azure Tools.
  • Windows Azure Libraries for .NET 1.6
    • Service Bus & Caching: Service Bus and caching client libraries from the previous Windows Azure AppFabric SDK have now been updated and incorporated into the Windows Azure Libraries for .NET to simplify the development experience.
    • Queues:
      • Support for UpdateMessage method (for updating queue message contents and invisibility timeout)
      • New overload for AddMessage that provides the ability to make a message invisible until a future time
      • The size limit of a message is raised from 8KB to 64KB
      • Get/Set Service Settings for setting the analytics service settings
  • Windows Azure Emulator
    • Performance improvements to compute & storage emulators.
Click here to download the Windows Azure SDK via the Web Platform Installer.
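The queue changes are easiest to see in code. The following is a hedged sketch against the 1.6 storage client running on the local storage emulator; the method shapes are as I understand this release, so consult the SDK reference before relying on exact signatures:

```csharp
using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Illustrative only: exercises the November 2011 queue additions.
class QueueDemo
{
  static void Main()
  {
    var account = CloudStorageAccount.DevelopmentStorageAccount;
    var queue = account.CreateCloudQueueClient().GetQueueReference("work");
    queue.CreateIfNotExist();

    // Messages can now be up to 64KB (previously 8KB), and the AddMessage
    // overload can keep a message invisible until a future time.
    queue.AddMessage(new CloudQueueMessage("process-order"),
      TimeSpan.FromHours(1),       // time-to-live
      TimeSpan.FromSeconds(30));   // initially invisible for 30 seconds

    // UpdateMessage can patch a retrieved message's content and/or extend
    // its invisibility timeout in place, instead of delete-and-re-add.
    // (SetMessageContent is my reading of the content-update call.)
    queue.AddMessage(new CloudQueueMessage("retry-me"));
    CloudQueueMessage message = queue.GetMessage();
    message.SetMessageContent("retry-me: attempt 2");
    queue.UpdateMessage(message, TimeSpan.FromMinutes(5),
      MessageUpdateFields.Content | MessageUpdateFields.Visibility);
  }
}
```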
Windows Azure HPC Scheduler SDK
Working in conjunction with the Windows Azure SDK for .NET, the Windows Azure HPC Scheduler SDK contains modules and features that developers can use to create compute-intensive, parallel applications that can scale in Windows Azure. The Windows Azure HPC Scheduler SDK enables developers to define a Windows Azure deployment that includes: 
  • Job scheduling and resource management
  • Web-based job submission
  • Parallel runtimes with support for MPI applications and WCF services
  • Persistent state management of job queue and resource configuration

Monday, 14 November 2011

12 Cool HTML5 Geolocation Ideas

Knowing the location of your users can help boost the quality of your Web site and the speed of your service. In the past, users had to actively input their location and submit it to a site, either by typing it, using a long drop-down list, or clicking a map. Now, with the HTML5 Geolocation API, finding your users (with their permission) is easier than ever. Figure 1 shows a Web site using geolocation to determine the location of a user, represented in latitude and longitude. The numbers can easily be translated into something more understandable, such as the street name or city.

Showing a User’s Location with the Help of Geolocation
Figure 1 Showing a User’s Location with the Help of Geolocation

Imagine how useful your site could be if it provided online timetables for all public transportation in a particular city. Using geolocation, the site could recommend optimal travel routes to get people where they’re going as quickly as possible. Desktop users could get their start location sorted by proximity to their computer. Mobile users trying to get home after a night out could quickly find the closest bus stop within walking distance. These possibilities and more are just an API away.

Scenarios for Using the Geolocation API

Here are 12 simple scenarios that illustrate how a Web site can accommodate users and customize their experience by taking their location into account. Some of them might seem obvious, but the small things often make the biggest differences.
  1. Public transportation sites can list nearby bus stops and metro locations.
  2. Late night out? Taxi or car service Web sites can find where you are, even if you don’t know yourself.
  3. Shopping sites can immediately provide estimates for shipping costs.
  4. Travel agencies can provide better vacation tips for current location and season.
  5. Content sites can more accurately determine the language and dialect of search queries.
  6. Real estate sites can present average house prices in a particular area, a handy tool when you’re driving around to check out a neighborhood or visit open houses.
  7. Movie theater sites can promote films playing nearby.
  8. Online games can blend reality into the game play by giving users missions to accomplish in the real world.
  9. News sites can include customized local headlines and weather on their front page.
  10. Online stores can indicate whether products are in stock at local retailers.
  11. Sports and entertainment ticket sales sites can promote upcoming games and shows nearby.
  12. Job postings can automatically include potential commute times.

How Geolocation Works

Technically speaking, a PC or a mobile device has several ways to find out its own location (hopefully, in the same place as the user).
  • GPS is the most accurate way to determine positioning, but it’s less energy efficient than other options and sometimes requires a lengthy startup time.
  • A-GPS (assisted GPS) uses triangulation between mobile phone towers and public masts to determine location. Although not as precise as GPS, A-GPS is sufficient for many scenarios.
  • Mobile devices that support Wi-Fi access points can use hotspots to determine the user’s location.
  • Stationary computers without wireless devices can obtain rough location information using known IP address ranges.
When it comes to sharing the physical location of users, privacy is a serious concern. According to the Geolocation API, “user agents must not send location information to Web sites without the express permission of the user.” In other words, a user must always opt in to share location information with a Web site. Figure 2 shows a typical message requesting a user’s permission. For more information about ensuring security with the Geolocation API, see Security and privacy considerations.
Sample User Permission Request
Figure 2 Sample User Permission Request

Three Simple Functions

Are you ready to incorporate geolocation into your Web site? You need to learn only three simple functions to master the entire API, which resides within the geolocation object, an attribute of the Navigator object. (Learn more about the geolocation object here.)
The getCurrentPosition function gets the user location one time. It takes two arguments in the form of callbacks: one for a successful location query and one for a failed location query. The success callback takes a Position object as an argument. It optionally takes a third argument in the form of a PositionOptions object.
navigator.geolocation.getCurrentPosition(locationSuccess, locationFail);

function locationSuccess(position) {
    var latitude = position.coords.latitude;
    var longitude = position.coords.longitude;
}

function locationFail() {
    alert('Oops, could not find you.');
}
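The optional third argument, a PositionOptions object, lets you tune how the fix is obtained. The option names below come from the W3C Geolocation specification; the formatCoords helper is an illustrative addition, not part of the API:

```javascript
// Illustrative helper: turn a Position-like object into a readable string.
function formatCoords(position) {
    var c = position.coords;
    return c.latitude.toFixed(4) + ', ' + c.longitude.toFixed(4) +
        ' (±' + Math.round(c.accuracy) + ' m)';
}

// Only call the Geolocation API when it is available (i.e. in a browser).
if (typeof navigator !== 'undefined' && navigator.geolocation) {
    navigator.geolocation.getCurrentPosition(
        function (position) { alert(formatCoords(position)); },
        function () { alert('Could not determine your location.'); },
        {
            enableHighAccuracy: true, // prefer GPS when available
            timeout: 10000,           // give up after 10 seconds
            maximumAge: 60000         // accept a cached fix up to 1 minute old
        }
    );
}
```

Setting enableHighAccuracy to true can drain the battery faster on mobile devices, so only request it when your scenario genuinely needs GPS-level precision.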
The Position object contains the properties shown in Figure 3.
Figure 3 Properties of the Position Object

Property                  Value            Unit
coords.latitude           double           degrees
coords.longitude          double           degrees
coords.altitude           double or null   meters
coords.accuracy           double           meters
coords.altitudeAccuracy   double or null   meters
coords.heading            double or null   degrees clockwise
coords.speed              double or null   meters/second
timestamp                 DOMTimeStamp     like the Date object

The watchPosition function keeps polling for the user’s position and returns an associated ID. The device determines the rate of updates and invokes the success callback whenever the location changes.
The clearWatch function stops polling for the user’s position. It takes the ID returned by watchPosition as an argument.
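A common pattern is to watch the position until the fix is accurate enough, then stop to save battery. A minimal sketch; the 50-meter "good enough" threshold and the tracker object are illustrative choices, not part of the API:

```javascript
// Keep the latest position in a small tracker so the page can decide
// when to stop watching (an illustrative policy, not part of the spec).
var tracker = {
    latest: null,
    update: function (position) {
        this.latest = position;
        return position.coords.accuracy <= 50; // accurate to within 50 m?
    }
};

var watchId = null;
if (typeof navigator !== 'undefined' && navigator.geolocation) {
    watchId = navigator.geolocation.watchPosition(function (position) {
        if (tracker.update(position)) {
            // Accurate enough: stop polling to save battery.
            navigator.geolocation.clearWatch(watchId);
        }
    }, function () {
        alert('Location updates are unavailable.');
    });
}
```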

Presenting Location Data: Geodetic or Civic

There are two ways of presenting location data to the user: geodetic and civic. The geodetic way of describing position refers directly to latitude and longitude. The civic representation of location data is a more human-readable and understandable format.
Each parameter has both a geodetic representation and a civic representation, as illustrated in Figure 4.
Figure 4 Examples of Geodetic and Civic Data

Attribute     Geodetic      Civic
Position      59.3, 18.6    Stockholm
Elevation     10 meters     4th floor
Heading       234 degrees   To the city center
Speed         5 km/h        Walking
Orientation   45 degrees    North-East

When using the Geolocation API, you get geodetic data back from the functions. Presenting location data as raw numbers is rarely friendly or useful. Online services, such as Bing Maps and Yahoo! GeoPlanet, can help you translate between the two presentation modes.

Browser Support

Browser / Platform   Minimum Version
Internet Explorer    9+
Firefox              3.5+
Chrome               5+
Opera                10.6+
Safari               5+
iPhone               3+
Android              2+
Windows Phone        7.5+
Figure 5 Browsers that support the HTML5 Geolocation API
Even though geolocation works in all the major browsers (Figure 5), you still have to take into account the scenarios in which location can’t be provided. For example, a user might be running an older browser or have hardware that doesn’t include positioning devices, or simply might not want to automatically share location information. The location detected could even be incorrect. In such situations, you should always include an alternative or a fallback method so users can enter or change their location manually.
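One way to structure that fallback is a small decision function plus feature detection; the 'manualLocation' element id below is hypothetical, standing in for whatever manual-entry form your page provides:

```javascript
// Decide which input method to offer. Returns 'automatic' when the
// browser supports geolocation and the user has not opted out,
// otherwise 'manual' (show a text box or map picker instead).
function chooseLocationSource(geolocationSupported, userOptedOut) {
    if (!geolocationSupported || userOptedOut) {
        return 'manual';
    }
    return 'automatic';
}

// In the browser, reveal the manual-entry form when needed.
// 'manualLocation' is a hypothetical element id for this sketch.
if (typeof document !== 'undefined') {
    var mode = chooseLocationSource(
        typeof navigator !== 'undefined' && !!navigator.geolocation,
        false);
    if (mode === 'manual') {
        document.getElementById('manualLocation').style.display = 'block';
    }
}
```

Even when automatic detection succeeds, it is worth keeping the manual control visible so users can correct an inaccurate fix.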

Geolocation in Action

Copy and paste the example code in Figure 6 and save it as an HTML file. Open it in your favorite browser and follow the two-step instructions on the Web site to see the Geolocation API draw a blue circle around your current location.
Figure 6 Using the Geolocation API
<!doctype html>
<html lang="en">
<head>
    <title>Geolocation demo</title>
    <meta charset="utf-8" />
</head>
<body>
    <h1>Geolocation demo</h1>
    <p>
        Find out approximately where you are.
    </p>
    <p>
        Step 1: <button onclick="GetMap()">Show map</button>
    </p>
    <p>
        Step 2: When prompted, allow your location to be shared to see Geolocation in action
    </p>
    <div id="mapDiv" style="position: relative; width: 800px; height: 600px;"></div>
    <script type="text/javascript" src="http://ecn.dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=7.0"></script>
    <script type="text/javascript">
        var map = null;
        function GetMap() {
            /* Replace YOUR_BING_MAPS_KEY with your own credentials.
                Obtain a key by signing up for a developer account at
                http://www.microsoft.com/maps/developers/ */
            var cred = "YOUR_BING_MAPS_KEY";
            // Initialize map
            map = new Microsoft.Maps.Map(document.getElementById("mapDiv"),
                { credentials: cred });
            // Check if browser supports geolocation
            if (navigator.geolocation) {
                navigator.geolocation.getCurrentPosition(locateSuccess, locateFail);
            }
            else {
                alert('I\'m sorry, but Geolocation is not supported in your current browser. Have you tried running this demo in IE9?');
            }
        }
        // Successful geolocation
        function locateSuccess(loc) {
            // Set the user's location
            var userLocation = new Microsoft.Maps.Location(loc.coords.latitude, loc.coords.longitude);
            // Zoom in on user's location on map
            map.setView({ center: userLocation, zoom: 17 });
            // Draw circle of area where user is located
            var locationArea = drawCircle(userLocation);
            map.entities.push(locationArea);
        }
        // Unsuccessful geolocation
        function locateFail(geoPositionError) {
            switch (geoPositionError.code) {
                case 0: // UNKNOWN_ERROR
                    alert('An unknown error occurred, sorry');
                    break;
                case 1: // PERMISSION_DENIED
                    alert('Permission to use Geolocation was denied');
                    break;
                case 2: // POSITION_UNAVAILABLE
                    alert('Couldn\'t find you...');
                    break;
                case 3: // TIMEOUT
                    alert('The Geolocation request took too long and timed out');
                    break;
                default:
            }
        }
        // Draw blue circle on top of user's location
        function drawCircle(loc) {
            var radius = 100;   // circle radius in meters
            var R = 6378137;    // Earth's radius in meters
            var lat = (loc.latitude * Math.PI) / 180;
            var lon = (loc.longitude * Math.PI) / 180;
            var d = radius / R;
            var locs = [];
            for (var x = 0; x <= 360; x++) {
                var p = new Microsoft.Maps.Location();
                var brng = x * Math.PI / 180;
                p.latitude = Math.asin(Math.sin(lat) * Math.cos(d) + Math.cos(lat) * Math.sin(d) * Math.cos(brng));
                p.longitude = ((lon + Math.atan2(Math.sin(brng) * Math.sin(d) * Math.cos(lat), Math.cos(d) - Math.sin(lat) * Math.sin(p.latitude))) * 180) / Math.PI;
                p.latitude = (p.latitude * 180) / Math.PI;
                locs.push(p);
            }
            return new Microsoft.Maps.Polygon(locs, { fillColor: new Microsoft.Maps.Color(125, 0, 0, 255), strokeColor: new Microsoft.Maps.Color(0, 0, 0, 255) });
        }
    </script>
</body>
</html>
If you run the code as is, your location will be shown along with a message about invalid credentials, as shown in Figure 7. To get a result without the warning text (Figure 8), you need to replace YOUR_BING_MAPS_KEY with your own key, which is generated when you sign up for a Bing Maps Developer account.
Geolocation Demo Mapping a Location without a Valid Key
Figure 7 Geolocation Demo Mapping a Location without a Valid Key
Geolocation Demo Mapping a Location after Inserting a Valid Key
Figure 8 Geolocation Demo Mapping a Location after Inserting a Valid Key

Thursday, 10 November 2011

Adobe confirms Flash Player is dead for mobile devices – what are your thoughts?

Tuesday, 8 November 2011

I like jailbreakers: they unearth hidden panorama mode in iOS 5 camera app


Somewhere deep within the bowels of iOS 5 lurks a panoramic camera function, and hacker Conrad Kramer has unlocked it. The trick, according to Kramer (AKA Conradev), is to set the "EnableFirebreak" key to "Yes" within an iOS preference file. Alternatively, you could just grab fellow hacker Grant Paul's Firebreak tweak, which just hit the Cydia storefront this morning.

Once installed on your jailbroken phone, Firebreak lets you take full panoramic shots directly from the iOS interface, as pictured above in Paul's screenshot. No word yet on whether or when Apple plans to flip this feature live, but in the meantime, you can check out the link below for more details.

http://www.appleinsider.com/articles/11/11/07/ios_5_camera_app_contains_hidden_panorama_mode.html