WYSIWYG

http://kufli.blogspot.com
http://github.com/karthik20522

Tuesday, March 27, 2012

Programming Images: Using WIC to extract metadata

Read my blog post on GettyImages Tech blog about how to use WIC to extract Image Metadata (XMP/IPTC):

http://blog.gettyimages.com/2012/03/27/programming-images-using-wic-to-extract-metadata


Tuesday, March 20, 2012

To work smart or work hard

Someone once told me I work too hard. But do I really work hard? There is a big difference between being physically present at work for 8 hours and actually working for 8 hours! My day usually consists of a 1-hour lunch + 2 hours on reddit and maybe 4 hours of actual work: talking, emailing and, finally, programming. Am I a hard-working employee? I hope I am not!

During my relatively short 6 years of work experience, I have come across 3 types of employees:

1. Someone who works for the sake of working.
2. Someone who works for the love of money.
3. Someone who is passionate and works more than what is asked for.

Type 2 and type 3 employees are motivated for different reasons, but both are assets to the team and the company, as they have some form of motivation. Type 1, however, is a liability. A type 1 employee, in my view, is someone who writes code without fundamental knowledge of the problem being solved, and likely writes unsatisfactory code. This is the type that adds bugs to the system.

Now let's take a smart employee. A smart employee understands the business and writes code that not only solves the current problem efficiently but also designs, builds and structures the code in an elegant and flexible way for future changes to the system. To illustrate what smart looks like, consider the following coding practices:

- A good programmer will always write test cases and follow TDD to avoid functional bugs. Having test cases ensures the functionality of the code remains intact when a change is made. Test tools like NUnit (code-level testing), WatiN (UI testing) and Jasmine (JavaScript testing) are useful.

- A good programmer tends to refactor code regularly. Of course, a good programmer is confident of his/her changes because the test cases are passing.

- A smart programmer is aware of the fact that a system degrades in performance over time. An architecture designed for scalability is smart thinking, yet not many programmers think in terms of multi-tier architecture. Building a system where the website is a consumer of a data service (web service) is an architecturally good decision, as scaling becomes as simple as adding more web service boxes (not entirely true, but there is some truth in it). The basic idea is to keep the UI, business and data layers isolated from each other; vertical and horizontal scaling then become reachable goals. From a code perspective, async/event-driven programming increases the throughput of the system, and having the business layer as a web service provides the basis for scaling if data fetching or complex business logic takes too long. On the data layer, a master-slave configuration can decrease load on the database by using the master for write operations and the slaves for reads. Non-critical information like logging doesn't need to be written in real time, so a message queue is an ideal solution there (MSMQ, RabbitMQ).

Now, a smart programmer understands the above issues and designs the system in a way that avoids long-term problems. There is nothing wrong with working hard in the short term, but working hard for weeks/months together is a sign of a fundamental problem that one should assess. Like:

- Are there too many bugs that you are dealing with? Well, then I guess it's time to write test cases.

- Is there a performance issue? Well, maybe it's time to re-evaluate the system design.

- Not sure of your programming skills? Indulge in pair programming and code reviews.

- Spending too much time writing code? Well, it's time to expand your skills: read blogs, get to know the latest and greatest technology; maybe someone somewhere has already solved your problem far more efficiently than you are struggling to.

The moral of my story is that over time the amount of effort put in should decrease. A smart programmer learns to avoid both short-term mistakes (bugs) and long-term mistakes (architecture), compared to a duct-tape programmer who lives for short-term gains and long-term pains. Take the extra effort to be the smart ass at the beginning and become the bad ass in the end.

Work smart, my friends, not hard.


Friday, March 16, 2012

SignalR - Web Sockets

SignalR is a cool asynchronous signaling library for ASP.NET for building real-time applications or any push-based notifications. SignalR is similar to Socket.IO or NowJS. Before the world of web sockets, there was Comet, which was basically long-held HTTP requests.

SignalR can be installed from NuGet [http://www.nuget.org] or by downloading the files manually from GitHub [https://github.com/SignalR/SignalR].

More descriptive information on SignalR can be found at Scott Hanselman’s site [http://www.hanselman.com/blog/AsynchronousScalableWebApplicationsWithRealtimePersistentLongrunningConnectionsWithSignalR.aspx]

To get SignalR up and running, the following are a few to-dos on the server side:

Step 1 is to create a PersistentConnection class like the following:
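
A minimal sketch, assuming the 2012-era SignalR API (the class name MyConnection is mine; Connection.Broadcast and the clientId parameter match what's described below):

using System.Threading.Tasks;
using SignalR;

public class MyConnection : PersistentConnection
{
    // Called whenever a connected client sends data up to the server
    protected override Task OnReceivedAsync(string clientId, string data)
    {
        // Echo the payload to every connected client
        return Connection.Broadcast(data);
    }
}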


Note that Connection.Broadcast sends the data to all connected clients. If you need to send to a particular client, you can use the Send method:

Send(string clientId, object value);

Note that Send requires the intended clientId. If you were building a chat client or a custom push service, you would probably be storing clientIds somewhere in a local object or a central repository.

Other useful methods that the PersistentConnection class provides, which can be overridden:
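
In the early API these included lifecycle hooks such as OnConnectedAsync, OnDisconnectAsync and OnReconnectedAsync (the exact names and signatures varied a bit across early releases), which let you track clients as they come and go.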




Step 2 is to add the route, if you are using MVC. This route needs to be registered in Global.asax:
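
A sketch assuming the early MapConnection route extension (the "echo" URL is an arbitrary name I picked to keep the examples in this post consistent; the namespace holding MapConnection moved around between early releases):

using System;
using System.Web.Routing;
using SignalR;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Maps /echo to the MyConnection class from step 1
        RouteTable.Routes.MapConnection<MyConnection>("echo", "echo/{*operation}");
    }
}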


That's pretty much it on the server side.

Step 3: if the intended client is a web browser, it's as easy as including the jQuery-based SignalR client script, opening a connection to the mapped URL (/echo above) and wiring up a callback for received messages.




That's all there is to the SignalR setup. A chat-type client would require some sort of client association on the server side to keep communication private.

But what about non-browser based apps, like a console app or a Windows service? SignalR has client libraries for those too, which can be downloaded from NuGet.
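
A sketch of the early .NET client (the URL is the /echo route assumed above):

using System;
using SignalR.Client;

class Program
{
    static void Main()
    {
        var connection = new Connection("http://localhost/echo");

        // Fires whenever the server broadcasts or sends data to this client
        connection.Received += data => Console.WriteLine("Received: " + data);

        connection.Start().Wait();   // Start() returns a Task in the early client
        connection.Send("Hello from a console app");

        Console.ReadLine();
    }
}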

In real-world applications, no project ends up as a single-instance, single-class project. In real-world apps there are many classes, and classes are initialized and disposed all the time. In this scenario, opening and closing a SignalR connection or initializing a new SignalR object each time is the wrong approach, since you want to be connected to the server at all times.

One way to keep the connection persistent is to create a static SignalR object; in the following case it's a singleton class:
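
A sketch of such a singleton (the name PushClass comes from this post; everything inside it is assumed):

using SignalR.Client;

public class PushClass
{
    private static readonly PushClass instance = new PushClass();

    public Connection Connection { get; private set; }

    private PushClass()
    {
        // One connection, created once and shared by the whole app
        Connection = new Connection("http://localhost/echo");
        Connection.Start().Wait();
    }

    public static PushClass Instance
    {
        get { return instance; }
    }
}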


A calling class can then get the instance of the above PushClass:
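
For example, using the sketch above:

PushClass.Instance.Connection.Send("some payload");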


Another, fancier way to achieve the same persistent effect could be as follows:
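
The original snippet here is lost, but one possibility in this spirit is a Lazy<T> wrapper, which defers creating (and starting) the connection until first use and guarantees it happens only once, in a thread-safe way. Purely a sketch:

using System;
using SignalR.Client;

public static class Push
{
    private static readonly Lazy<Connection> connection = new Lazy<Connection>(() =>
    {
        var c = new Connection("http://localhost/echo");
        c.Start().Wait();
        return c;
    });

    public static Connection Connection
    {
        get { return connection.Value; }
    }
}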



The calling class can do the following:
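
Again assuming the Lazy<T> sketch above:

Push.Connection.Send("some payload");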


Git commands and basics

Of Team Foundation Server, Subversion and Git, the three version control systems I have worked with, Git is slowly turning out to be quite a version control! Anyway, here are some useful commands from my work with Git that I have put into my own cheat sheet.

create a file

$ notepad myfile.txt

add a file

$ git add myfile.txt

add all files

$ git add .

commit a file (including comments)

$ git commit -m "Initial add"


get latest version and merge

$ git pull --rebase

note: rebase replays your local commits on top of the latest version of the repository that everyone is working with, keeping the check-in timeline linear.

checkout a branch

$ git checkout master

push a branch to remote repository without merging

$ git push origin newfeature

remove a branch on a remote repository

$ git push origin :newfeature

delete a local branch

$ git branch -d newfeature

Creating a branch (and switching to the new branch) in one line

$ git checkout -b [name of new branch]

Making sure changes on master appear in your branch

$ git rebase master

Pulling a new branch from a remote repository

$ git fetch origin [remote-branch]:[new-local-branch]


Undoing in Git - Reset, Checkout and Revert

Git provides multiple methods for fixing up mistakes as you are developing. Selecting an appropriate method depends on whether or not you have committed the mistake, and if you have committed the mistake, whether you have shared the erroneous commit with anyone else.

Fixing un-committed mistakes

If you've messed up the working tree, but haven't yet committed your mistake, you can return the entire working tree to the last committed state with

$ git reset --hard HEAD

This will throw away any changes you may have added to the git index as well as any outstanding changes in your working tree. In other words, it causes the results of "git diff" and "git diff --cached" to both be empty.

If you want to restore just one file, say hello.rb, use git checkout instead:

$ git checkout -- hello.rb
$ git checkout HEAD hello.rb

The first command restores hello.rb to the version in the index, so that "git diff hello.rb" returns no differences. The second command will restore hello.rb to the version in the HEAD revision, so that both "git diff hello.rb" and "git diff --cached hello.rb" return no differences.

Fixing committed mistakes

If you make a commit that you later wish you hadn't, there are two fundamentally different ways to fix the problem:

You can create a new commit that undoes whatever was done by the old commit. This is the correct thing if your mistake has already been made public.

You can go back and modify the old commit. You should never do this if you have already made the history public; git does not normally expect the "history" of a project to change, and cannot correctly perform repeated merges from a branch that has had its history changed.

Fixing a mistake with a new commit

Creating a new commit that reverts an earlier change is very easy; just pass the git revert command a reference to the bad commit; for example, to revert the most recent commit:

$ git revert HEAD

This will create a new commit which undoes the change in HEAD. You will be given a chance to edit the commit message for the new commit.

You can also revert an earlier change, for example, the next-to-last:

$ git revert HEAD^

In this case git will attempt to undo the old change while leaving intact any changes made since then. If more recent changes overlap with the changes to be reverted, then you will be asked to fix conflicts manually, just as in the case of resolving a merge.

Fixing a mistake by modifying a commit

If you have just committed something but realize you need to fix up that commit, recent versions of git commit support an --amend flag which instructs git to replace the HEAD commit with a new one, based on the current contents of the index. This gives you an opportunity to add files that you forgot to add or correct typos in a commit message, prior to pushing the change out for the world to see.
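
For example, to fix up the message of the most recent commit:

$ git commit --amend -m "Corrected commit message"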

If you find a mistake in an older commit, but one that you have not yet published to the world, you can use git rebase in interactive mode ("git rebase -i"), marking the change that requires correction with edit. This will allow you to amend the commit during the rebasing process.
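
For example, to interactively revisit the last three commits:

$ git rebase -i HEAD~3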


Good Git sites:

http://rogerdudler.github.com/git-guide/
http://think-like-a-git.net/
http://jonas.nitro.dk/git/quick-reference.html


Thursday, March 15, 2012

WCF Data Contract Versioning

One of the greatest challenges when building any web service, especially one with various connected clients, is versioning. The following are the version changes that WCF can handle by default:

- Adding new parameters to an operation: client unaffected. New parameters are initialized to default values at the service.
- Removing parameters from an operation: client unaffected. WCF ignores the old parameters; their data is lost at the service.
- Modifying parameter types: an exception will occur if the incoming type from the client is different from the server's.
- Modifying return value types: an exception will occur.
- Adding new operations: client unaffected. It will not invoke operations it knows nothing about.
- Removing operations: exception. It would be an unknown action header.


Possible solutions for operation contract changes:
1) Service contract inheritance
2) A brand new service contract with a new namespace

Data contract changes:
1) Adding IExtensibleDataObject
* ExtensibleDataObject is basically a key-value dictionary
* This dictionary holds properties that old or future contracts don't have
* Basically, it preserves the properties that don't exist in the request or response
* You need to use svcutil.exe or a Visual Studio service reference to utilize this
- By default, all proxy classes have this

Example:
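
A sketch of a data contract that opts in to round-tripping unknown members (the Customer type is illustrative):

using System.Runtime.Serialization;

[DataContract]
public class Customer : IExtensibleDataObject
{
    [DataMember]
    public string Name { get; set; }

    // Catches any members the current version of the contract
    // doesn't know about, so they survive a round trip
    public ExtensionDataObject ExtensionData { get; set; }
}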


Automapper setup in WCF

Unlike ASP.NET, WCF has no Global.asax file where we can instantiate global objects for the life of the AppPool or until IIS restarts. Most programmers at some point have used object mappers such as AutoMapper. AutoMapper is by far my favorite of the object mappers: simple to use and easy to configure. But AutoMapper requires the object properties to be mapped between a source type and a destination type before any mapping is executed. This can be achieved fairly easily in ASP.NET by creating the mappings in Global.asax, so AutoMapper is ready to map objects when requests are made. But since WCF is stateless, there is no global class that is executed once for the lifetime of the service. There are, however, other ways to get around this global initialization.

One such way is to create a ServiceBehavior that is executed when the Service is initialized. Following is how it’s done:

Step 1 is to create the ServiceBehavior, which binds to all services and calls the AutoMapper initialization function:
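
A sketch of such a behavior implemented as an attribute (the bodies are assumed; AutomapBootstrap.InitializeAutomap follows the naming used later in this post; the required namespaces are listed in the note below):

[AttributeUsage(AttributeTargets.Class)]
public class InitializeAutomapperBehavior : Attribute, IServiceBehavior
{
    public void ApplyDispatchBehavior(ServiceDescription serviceDescription,
                                      ServiceHostBase serviceHostBase)
    {
        // Runs once, when the ServiceHost for the service opens
        AutomapBootstrap.InitializeAutomap();
    }

    public void AddBindingParameters(ServiceDescription serviceDescription,
                                     ServiceHostBase serviceHostBase,
                                     Collection<ServiceEndpoint> endpoints,
                                     BindingParameterCollection bindingParameters)
    {
        // Intentionally empty; nothing to add
    }

    public void Validate(ServiceDescription serviceDescription,
                         ServiceHostBase serviceHostBase)
    {
        // Intentionally empty; no validation needed
    }
}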



Note: the following namespaces are required for the above code to work:
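
For the sketch above, that would be:

using System;
using System.Collections.ObjectModel;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;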


Since the ServiceBehavior is an Attribute type, we can pick and choose which services to apply it to for more control. In the AutomapBootstrap class we define the mappings:
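
A sketch assuming AutoMapper's static, 2012-era API (the source/destination types are placeholders):

using AutoMapper;

public static class AutomapBootstrap
{
    public static void InitializeAutomap()
    {
        // Register every source-to-destination mapping up front,
        // before any service request tries to use Mapper.Map
        Mapper.CreateMap<CustomerEntity, CustomerDto>();
        Mapper.CreateMap<OrderEntity, OrderDto>();
    }
}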



Having a static function helps avoid creating a new AutomapBootstrap object on every service request. This same technique can be used if you use service locators (IoC) that require global initialization.


Async WCF web-Services

In the world of scalable programming, it's all about event-driven, or asynchronous, programming. Event-driven, callback-based platforms like node.js have taken the programming world by storm, and a few .NET-based open source event-driven servers like KayakHttp (OWIN) have their uses, but when it comes to ASP.NET MVC or WCF, asynchronous programming can be achieved using a Task-based programming approach. Do remember that the asynchronous programming model requires a good data access/interaction system design. A good async design can provide better scalability and potentially higher server throughput. NOTE: higher throughput means the server handles more requests, not that individual requests execute faster (though in some cases they may).

A WCF service with async operations can provide higher server throughput, since the server is no longer waiting for an operation to complete before serving the next request. Of course, an async design pattern adds complexity to the system. Following is how to provide an async operation.

Step 1: in the service interface, we need to let the OperationContract know it's an async operation (done with the AsyncPattern flag, shown in step 2).
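
For reference, assume the synchronous contract starts out like this (IDataService is an assumed name; GetData matches the method discussed later in this post):

using System.ServiceModel;

[ServiceContract]
public interface IDataService
{
    [OperationContract]
    string GetData(int value);
}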



Step 2 is to make the function call async by providing a Begin and an End operation. Basically, the Begin method is called when the operation starts, and the End method is the callback invoked when the operation completes execution. Following is the same function as above, but with Begin and End:
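
A sketch of the async version of the contract:

using System;
using System.ServiceModel;

[ServiceContract]
public interface IDataService
{
    [OperationContract(AsyncPattern = true)]
    IAsyncResult BeginGetData(int value, AsyncCallback callback, object state);

    // No [OperationContract] on the End method; it pairs with Begin
    string EndGetData(IAsyncResult result);
}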




Step 3: once the operations are modified with Begin and End methods in the service interface, we need to build out the functions in the actual service class.
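
A sketch using Tasks to implement the Begin/End pair (the DataService class body is assumed; the pattern matches the description below):

using System;
using System.Threading.Tasks;

public class DataService : IDataService
{
    public IAsyncResult BeginGetData(int value, AsyncCallback callback, object state)
    {
        // Spawn a Task that runs GetData; the Task doubles as the IAsyncResult
        var task = Task<string>.Factory.StartNew(_ => GetData(value), state);

        // Invoke the WCF-supplied callback once the value is returned
        // (or an unhandled exception is thrown)
        if (callback != null)
            task.ContinueWith(t => callback(t));

        return task;
    }

    public string EndGetData(IAsyncResult result)
    {
        // Hand the computed result back to the calling client
        return ((Task<string>)result).Result;
    }

    private string GetData(int value)
    {
        return "You entered: " + value;
    }
}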




In the BeginGetData function, a new async Task is spawned with a return value of "string", and once the method "GetData" has executed, the callback function is called. Task.ContinueWith is triggered only once the value is returned or if an unhandled exception is thrown. The EndGetData function basically takes the result and returns it back to the calling client.

An example of a calling client:
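
A sketch, assuming a generated proxy called DataServiceClient (the name is hypothetical):

using System;

class Program
{
    static void Main()
    {
        var client = new DataServiceClient();

        client.BeginGetData(42, asyncResult =>
        {
            // Callback fires once the service completes the operation
            string result = client.EndGetData(asyncResult);
            Console.WriteLine(result);
        }, null);

        Console.ReadLine(); // keep the process alive for the callback
    }
}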




To learn more about Tasks, MSDN should be a good starting point [http://msdn.microsoft.com/en-us/library/system.threading.tasks.task.aspx]


WCF REST Service Operation Description

Having REST-based service endpoints (Article 1) can be a nightmare to query if the parameters are not already known. This parameter suspense can be avoided by enabling the web help page in the WCF configuration.

As part of the endpointBehavior, you can add the following to provide operation descriptions:
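
A sketch of the behavior, assuming WCF 4.0's built-in help page (the behavior name is arbitrary):

<behaviors>
  <endpointBehaviors>
    <behavior name="jsonBehavior">
      <webHttp helpEnabled="true" />
    </behavior>
  </endpointBehaviors>
</behaviors>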



This adds an HTTP-based service description page, browsable at the endpoint address with /help appended.



The operations can be explored by clicking on each method link, which provides the request and response parameter names and types.


WCF - REST Endpoints JSON

With more public-facing websites providing access to their data, it is quite necessary for an API to follow KISS [Keep It Simple, Stupid] by design. Long gone are the days of services being SOAP and XML based; REST [Representational State Transfer] based endpoints are the way to go nowadays. REST-based URLs are SEO friendly, much cleaner, and easier to remember. When it comes to API design, a REST-based endpoint provides a simple and clean URI for consumption.

In the .NET world, REST-based endpoints can easily be created in an ASP.NET MVC application, either by using routes or by URL rewriting. But in the case of WCF (Windows Communication Foundation), REST-based endpoints are not provided out of the box. To make this happen we can either configure WCF manually (this article) or use frameworks like WCF Web API (which is now called ASP.NET Web API and will be part of ASP.NET MVC 4.0) or ServiceStack.NET. The idea behind both frameworks (ServiceStack.NET & Web API) is to provide out-of-the-box REST support for XML, JSON and OData (in Web API).

To configure WCF to provide a REST-based endpoint, we need to first enable ASP.NET compatibility and bind the REST endpoint to webHttpBinding. The following changes are needed in the WCF web.config:



Step 1: to enable ASP.NET compatibility, we modify the serviceHostingEnvironment element to turn on compatibility.
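
A sketch of the relevant section:

<system.serviceModel>
  <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
</system.serviceModel>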


Step 2 is to create an endpoint behavior to enable HTTP-based GET/POST requests. We create the following behavior and give it a name for reference:
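
For example (the behavior name jsonBehavior is arbitrary):

<behaviors>
  <endpointBehaviors>
    <behavior name="jsonBehavior">
      <webHttp />
    </behavior>
  </endpointBehaviors>
</behaviors>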



Step 3 is to hook the web service up to this behavior, either at its default URI location or a custom URI location. Following is a JSON endpoint for the service:
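
A sketch; the service and contract names (MyApp.Service1, MyApp.IService1) are placeholders, and the address "json" matches the URLs at the end of this post:

<services>
  <service name="MyApp.Service1">
    <endpoint address="json"
              binding="webHttpBinding"
              behaviorConfiguration="jsonBehavior"
              contract="MyApp.IService1" />
  </service>
</services>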



For a WCF service to allow HTTP-based requests, we need to bind the service to webHttpBinding. webHttpBinding is used to configure endpoints for web services that are exposed through HTTP requests instead of SOAP messages. Do keep in mind that the contract is the fully qualified interface name of the service being exposed.

Step 4 is to enable ASP.NET compatibility in the service class.
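
A sketch (Service1/IService1 are the placeholder names from above):

using System.ServiceModel.Activation;

[AspNetCompatibilityRequirements(
    RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class Service1 : IService1
{
    public string GetData(int value)
    {
        return "You entered: " + value;
    }
}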



Note: the AspNetCompatibilityRequirements attribute lives in the System.ServiceModel.Activation namespace.

Step 5, of course, is to let the service know whether the output is a JSON or XML response. In the service interface, the method attributes need to be defined:
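
A sketch using WebInvoke (WebGet is the equivalent for GET-only operations):

using System.ServiceModel;
using System.ServiceModel.Web;

[ServiceContract]
public interface IService1
{
    [OperationContract]
    [WebInvoke(Method = "POST",
               RequestFormat = WebMessageFormat.Json,
               ResponseFormat = WebMessageFormat.Json,
               UriTemplate = "GetData")]
    string GetData(int value);
}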



Note that the request and response formats can be JSON or XML, and the method can be either POST or GET. The UriTemplate is the method name that will be exposed through the service.

You are DONE!

You can access the service using the SVC URI like http://localhost/Service1.svc/json/{method}

NOTE: If you would like to add a SOAP-based URI endpoint to provide a Web API style service interface, you can add the following endpoint to the services section:
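
A sketch, alongside the JSON endpoint from step 3:

<endpoint address="soap"
          binding="basicHttpBinding"
          contract="MyApp.IService1" />

For the ?wsdl URL below to work, the service also needs a serviceMetadata behavior with httpGetEnabled="true".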



JSON endpoint : http://localhost/Service1.svc/json/{method}
SOAP endpoint: http://localhost/Service1.svc/soap/
WSDL: http://localhost/Service1.svc?wsdl


Thursday, March 1, 2012

Beers in my belly - III



[Photo: Captain Lawrence]



