OOPS!

Object-Oriented Programming languages are all the rage these days.

Few really understand why programmers decided to leave their POPS (procedure-oriented programming) behind like a bunch of entitled brats, and embrace the new OOPS.

I didn’t understand it either, and every website I visited presented me with a very dry and repetitive explanation of why programmers worship OOPS, without actually explaining it.

Me: What is Object Oriented Programming?
Google: Well, it consists of Abstraction, Encapsulation, Inheritance and Polymorphism.
Me: No, no, what is Object-Oriented Programming?
Google: It’s the concept most of our modern programming languages are based on.
Me: So what is it?
Google: Well, it consists of Abstraction, Encapsulation, Inheritance and Polymorphism.
Me: …

– True story… Google it if you don’t believe me.

College taught me that OOPS was introduced to represent complex objects, but it still didn’t click. In fact, nothing that was covered in college stuck with me.

I was, by far, the most attentive. After college.

It took me a while to come to terms with OOPS, and to finally understand it for what it is.

Instead of giving you the treatment that was administered to me, I’ve decided to explain it differently.

Let’s Go Off Topic

There’s this woman who writes fantasy novels starring an orphan who is actually a wizard but only comes to know about it on his eleventh birthday, when a hairy giant breaks into his step-parents’ house, scares the living crap out of them, and brings the orphan his first birthday cake. He then gets invited to study at a magical academy by the hairy giant, who bids him farewell after scaring the living crap out of his step-parents one last time.

Mess with Hagrid, and he will bend you back into shape.

This is when our young protagonist gets to explore the realm of magic and discovers some pretty amazing things about himself and the world he belongs to, as he journeys through life, stringing experiences one after the other, with friends and foes he makes along the road.

Let’s take the book she wrote as an example for our object.

Now I know what you’re thinking, “Ashwin, there is nothing complex about a book. It’s just a collection of pages containing written text, compiled and sold at exorbitant prices, especially the one she wrote. And did you really narrate the entire story just to use the book as a reference for your explanation?”

I agree.

There is nothing complicated about books, but when you try to describe one to a machine, things get complicated. I am, however, going to ignore your last question since I won’t have time to address all your queries.

Time is of the essence, and I am not good at giving short and sweet examples.

Machines Understand Only One Language

Machines are incredibly logical, in fact, too logical. You can get a child to understand the difference between a bird and a butterfly by pointing out the anomalies, but you can’t do the same for a machine, because machines lack references and, most importantly, intelligence.

Even if we equipped them with vast libraries of information, they would still operate like slaves, because they are not sentient beings and lack consciousness. Machines are, instead, armed with logic, and it’s our job to build those references from scratch and intertwine them into the logical networks our machines are built to operate within.

In simple caveman English, computer no understand, because computer no smart. You show, computer just follow.

A-OK.

So Where Were We

A book consists of numerous attributes, or stuff that describes it so that we can identify it as a book. Each book has an author, a number of pages, and a genre, just to name a few. These are the many important things that describe a book, allowing it to form an identity for itself.

For human beings such as you and me, these attributes are intuitively understood; a level of operation machines do not possess. We know a book when we see one. We don’t have to check whether the book has an author, pages or a genre – just a quick glance will suffice. It goes to show just how incredible we are. You can also think of how great a developer Mother Nature is. We can even guess what the book is about from reading the title, or looking at the colour and design of the book cover.

But machines can identify items only by logically working through each condition that we program them to check.

Here is a snippet of C# code for your reference.

using System;

class Book
{
    public string Author;
    public int Pages;
    public string Genre;

    public Book()
    {
        Author = "None";
        Pages = 0;
        Genre = "None";
    }

    void Check()    // Responsible for checking whether the book contains values or not
    {
        if ((Author != "None") && (Pages != 0) && (Genre != "None"))
            Console.WriteLine("Item is a Book!");
        else
            Console.WriteLine("Item is not a Book!");
    }

    public static void Main(string[] args) // This is where the program initiates
    {
        Book B = new Book();
        B.Author = "J.K. Rowling";
        B.Pages = 800;
        B.Genre = "Fantasy";
        B.Check();      // This is when the checking actually starts
    }
}

This is what goes on inside a machine’s mind. While we use thoughts to filter out details, machines use algorithms to process information.

Objects! Objects!

Hold your horses, I was just getting to that.

You see, our book is an object. It’s obvious to us, but for the machine to grasp it as one, we weave it into a pattern as shown above.

But here is where it gets complicated.

To understand objects, you have to understand something else.

They’re called classes.

Classes?

Yes, classes.

Your book actually starts off as a class. I know, I said it was an object earlier, but even so, I’m right.

To make a book, you need some sort of structure or blueprint before you fill it with content. That blueprint will include a name to give it, an area of literature to place it in, and some digital documents to type it into.

These are the requirements for a book to materialize, and these requirements are declared before the book is produced.

You don’t build a house without first charting out the blueprint, unless you were trying to build a sandcastle.

You need a mold before you can inject it with your program-rich resin.

Your class is essentially a blueprint of the complex idea you want the machine to understand, synthesize and store.

And when you use that blueprint, you are creating an object that functions in the real world.

In technical terms, you create an instance of the class.

So we start to synthesize it from:

class Book
{
    public string Author;
    public int Pages;
    public string Genre;
}

The Book class lays down the foundation for how the machine will represent a book. It’s the skeleton that you assemble as per requirements. Maybe all you need is the author’s name, so you trim off the rest and design your skeleton accordingly.
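For instance, a trimmed-down sketch of that pared-back blueprint might look something like this (purely illustrative, assuming the author’s name is all you care about):

class Book
{
    // All that survived the trimming: the author's name.
    public string Author;
}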

What About The Object?

public static void Main(string[] args)
{
 Book B = new Book();
}

The object ‘B’ is the actual, usable Book that our machine can now work with.

You can’t use something that only exists as a set of requirements, or as a few lines of code in the machine’s memory, so you have to create the actual object by instantiating the class, like so:

Book B = new Book();

Now the most obvious question is,

Why Go For Classes and Objects?

Programming languages like COBOL, C, Fortran or Pascal don’t use OOPS, which is why there is very little demand for them in the market these days.

But why is that the case? Why is objectifying considered the way forward? Did the patriarchy have something to do with it?

Sexism in the IT world, brought to you by patriarchy.class

When you transform data into objects, you can encapsulate it, which makes it more secure. Both procedural and object-oriented languages can process data and create functions, but only OOPS can encapsulate that data and choose who gets access to it.
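To make that a little more concrete, here is a rough sketch of what encapsulating the Book’s data could look like in C#, using a private field and a public property (the validation rule is just an assumption for illustration, not part of the earlier snippets):

class Book
{
    // The raw data is hidden; nothing outside the class can touch it directly.
    private string author = "None";

    // Access is granted only through this property, on the class's own terms.
    public string Author
    {
        get { return author; }
        set
        {
            // The class itself decides what counts as an acceptable author.
            if (!string.IsNullOrEmpty(value))
                author = value;
        }
    }
}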

Also, OOPS allows you to shift dependency away from standalone functions, allowing developers to reuse existing code and focus on the main logic they have to construct. This might not be obvious to you at first, but once you get knee-deep in coding, it will make sense.
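As a rough sketch of that reuse (my own hypothetical extension, not something from the snippets above), suppose we later need an audiobook; instead of rewriting the Book class, we could simply build on it:

// Inherits Author, Pages and Genre from Book; we only write what is new.
class AudioBook : Book
{
    public string Narrator;
    public double DurationHours;
}

Every AudioBook automatically carries an author, a page count and a genre without us typing a single extra line for them.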

You will find a multitude of reasons online justifying the superiority of OOPS over POPS; the ones I’ve mentioned here are the ones I feel matter most.

And that concludes our brief introduction to the world of coding.

How To Become a Great Developer

Devote attention and effort to the fundamentals that govern your coding language, and you will be rewarded for your persistence. You could spend a decade or two exploring just one programming language, and you will discover that the terrain just goes on expanding. There is so much to learn as it is, with more being added each day, which is why it is important, if not crucial, that you stick with whatever you decide to explore, in its entirety, for a good period of time.

What I mean by that is, if you wish to become a decent web developer, stick to it for at least 3-4 years before exploring other fields. If you check out early and start learning Android development or data science because it’s “cool” and “trendy”, that will make you a jack of all trades, and a king of none.

With every milestone you cover, you will discover more joy in your pursuit, as opposed to those who dash past everything, in a desperate attempt to meet the bare minimums to land themselves a job, or to avoid getting fired. We live in a time where attention spans have been drastically reduced thanks to smartphones, which is why many find it difficult to concentrate, let alone code.

It requires devotion, just like any other craft, but even more so. It’s not easy to stare at a flashing screen while trying to piece bits of code together.

But endure, and you will be rewarded.

Or you could switch over to RPA like I did.

Weren’t expecting that, were you?
