Raf's laboratory Abstracts Feed Raffaele Rialdi personal website


Why WinRT, an object oriented API

September 21, 2011
http://iamraf.net/News/why-winrt-an-object-oriented-api

Before analyzing WinRT it's necessary to go back in time and retrace the evolution of programming languages.

The C language consolidated a standard way to interoperate at two different levels:

  • Source level, thanks to the ANSI standardization of 1989 (and later revisions)
  • Binary level, by using the x86 architecture calling conventions (fastcall, stdcall, cdecl) that establish the way parameters are passed to a call (and more).

To date this is still the most popular way to interoperate between languages and, even when making platform invoke calls in the .NET Framework, we are using these calling conventions.
When calling Win32 APIs we are using the 'stdcall' calling convention.

Side note: my first beta-testing experience was with the MASM compiler (the Microsoft Macro Assembler), as a result of a calling-convention bug I found in version 5.1 while interoperating with the C compiler (in the eighties).

As for C++, interoperability has always been limited to the source level. The ISO committee (currently chaired by Herb Sutter) established the first standard in 1998 (right after the launch of Visual C++ 6.0, which therefore does not conform to that standard) and subsequent ones, up to this August 2011, when the committee approved the new C++11 standard.

The committee never established any binary standard for the C++ runtime. This means it is not even possible to mix libraries built with two different versions of the same brand of compiler: you must always recompile the source code with the same version of the C++ compiler.

Here are some of the problems a binary standard for an object-oriented language has to solve:

  • definition of a type system (think of how many 'string' types have been created in C and C++)
  • object lifecycle management (construction and destruction)
  • definition of a standard for the "decoration" (name mangling) the compiler applies to differentiate overloaded functions
  • definition of binary encapsulation (C++ supports private and protected inheritance, for which only the compiler knows the exact layout of the class; hence the need to separate the concept of a binary interface, the virtual table, from its possible implementations in different binary objects)

Talking about distributed systems, the problem is even more complex. For example, consider passing a Binary Large OBject (blob) parameter to a remote function call. In C this buffer is identified by a pointer that carries no information about the blob's length. C++ can encapsulate the blob in a class, but it does not provide any semantics to describe, during the invocation of the remote call, whether the parameter travels in input, in output, or in both directions.
In a distributed system, crossing a "boundary" has a cost. If the blob were a web service parameter, the cost in terms of bandwidth to move it in either direction could not be ignored.

IDL (the Interface Definition Language from the Open Software Foundation's DCE/RPC) was created to fill these gaps. IDL was adopted by Microsoft COM and by CORBA in the Unix world. The binary form of the metadata described by IDL, the type libraries, is what enables binary interoperability among different languages.

I can't blame Don Box for stating "COM is Love", meaning (in my interpretation) that COM was finally a real, practical, winning, and performant solution for binary compatibility, interoperability, and communication. It is still a solution that works well on Windows and performs very well (excluding, of course, C, since it does not provide a solution to the problems addressed by COM). Side note: in the //build/ corridors, at the end of Herb Sutter's session, I saw Don Box and told him "So COM is still love", and it was a pleasure for me to see him smile.

At the end of the nineties we could observe several problems:

  • The type system is inadequate and too complex (BSTR, SAFEARRAY, VARIANT, …) and it often requires a custom marshaler (an object that can describe how to copy the memory layout of a custom type)
  • Collections are not native
  • Events based on connection points are a complex game
  • Object lifecycle based on reference counting is complex and must be handled by hand
  • Interfaces and classes must be registered (in the registry)

In C++ some of these problems are addressed by ATL (the Active Template Library), but it is a library available only to C++ and complex to use.

Powerful hardware

Starting from the late '90s, PC power exceeded the requirements of the average application. This was the trigger that saw the birth of Java and, later, of the .NET Framework. In that historical period the overhead due to the runtime and garbage collection was minimal compared to the benefits in productivity and maintainability. In addition, in the absence of a modern C++ standard, which saw the light only this August 2011, languages like C# had a great and deserved success.

The .NET Framework, whose infrastructure is based on COM, reaffirms several concepts that we have already analyzed:

  • Defines a modern type system
  • Provides the CLI specification, which allows multiple languages to be used
  • Defines binary-level compatibility
  • Provides richer metadata than IDL does in COM (think of reflection)
  • Defines an Intermediate Language (IL) that decouples code from the system architecture (x86/x64/…)

After all, the real advantage of the .NET Framework is the runtime, which many people criticized but which has drastically decreased the number of bugs in modern applications and incredibly increased productivity, and that directly translates into the chance to build more sophisticated applications. Most of the other concepts were already in COM, even if expressed in a more complex way.

No longer just the PC

As of today the requirements have changed again. Users are demanding smaller and smaller devices that are still powerful (but for obvious reasons cannot be as powerful as a modern PC). This requirement translates mainly into two factors: using fewer CPU cycles and (consequently) lower power/battery consumption.
If in the past we had cycles to waste, things have now changed enough that we cannot ignore it.

Another important factor when dealing with a runtime such as the .NET Framework is its versioning. Ensuring that an application written for one version of the runtime can run on a different runtime version is very complex, perhaps a chimera.

Requirements changed again:

  • native code
  • preserve the productivity of .NET programming
  • modern type system
  • performance
  • heterogeneous languages
  • flexibility in version management

The Windows Runtime (WinRT) lays its foundations on these concepts and adds new ones, such as activation and asynchronous calls, that we will see in the next post.

WinRT was created from a well-established infrastructure of the operating system, COM, but with a number of precious changes to overcome the problems we analyzed previously.
WinRT preserves the basic layout of objects: IUnknown with AddRef, Release, and QueryInterface. IDispatch, which gave many headaches to developers, was removed in favor of IInspectable. It may look paradoxical, since these two interfaces, among other things, play similar roles, namely the discovery of metadata.

I don't know if any other operating system has already done it, but exposing operating system services in an object-oriented way is certainly a big leap forward. Starting from the next post we will begin to analyze the various parts of WinRT.








Copyright © Raffaele Rialdi 2009-2015