2010/08/26

Ten Things I Hate About Object-Oriented Programming

Filed under: Editorial — Tags: — Oscar Nierstrasz @ 17:25

Boy, some days I really hate object-oriented programming.

Apparently I’m not the only one. In the immortal words of Edsger Dijkstra: “Object-oriented programming is an exceptionally bad idea which could only have originated in California.”

Well, I’m not normally one to complain, but I think it is time to step back and take a serious look at what is wrong with OOP. In this spirit, I have prepared a modest list of Ten Things I Hate About Object-Oriented Programming.

1. Paradigm

What is the object-oriented paradigm anyway? Can we get a straight story on this? I have heard so many different versions of this that I really don’t know myself what it is.

If we go back to the origins of Smalltalk, we encounter the mantra, “Everything is an object”. Except variables. And packages. And primitives. And numbers and classes are also not really objects, and so on. Clearly “Everything is an object” cannot be the essence of the paradigm.

What is fundamental to OOP? Peter Wegner once proposed that objects + classes + inheritance were essential to object-oriented languages [http://doi.acm.org/10.1145/38807.38823]. Every programming language, however, supports these features differently, and they may not even support them as built-in features at all, so that is also clearly not the paradigm of OOP.

Others argue convincingly that OOP is really about Encapsulation, Data Abstraction and Information Hiding. The problem is that some sources will tell you that these are just different words for the same concepts. Yet other sources tell us that the three are fundamentally different in subtle ways.

Since the mid-eighties, several myths have been propagated about OOP. One of these is the Myth of Reuse, which says that OOP makes you more productive because instead of developing your code from scratch, you can just inherit from existing code and extend it. The other is the Myth of Design, which implies that analysis, design and implementation follow seamlessly from one another because it’s objects all the way down. Obviously neither of these candidates could really be the OO paradigm.

Let’s look at other paradigms which offer a particular way to solve programming problems. Procedural programming is often described as programs = data + algorithms. Logic programming says programs = facts + rules. Functional programming might be programs = functions + functions. This suggests that OOP means programs = objects + messages. Nice try, but this misses the point, I think.
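
To make the contrast concrete, here is a toy sketch (names and framing are mine, in Python purely for illustration) of the same computation phrased as data + algorithms and as objects + messages:

```python
# Procedural: programs = data + algorithms.
# The data is passive; a free-standing procedure acts on it.
def total(prices):
    return sum(prices)

# Object-oriented: programs = objects + messages.
# The data is an object; we ask it to compute its own total.
class Cart:
    def __init__(self, prices):
        self.prices = prices

    def total(self):
        return sum(self.prices)

print(total([2, 3, 5]))          # procedural call: 10
print(Cart([2, 3, 5]).total())   # message send: 10
```

The computed answer is the same either way, which is precisely why neither equation pins down a paradigm by itself.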

For me the point of OOP is that it isn’t a paradigm like procedural, logic or functional programming. Instead, OOP says “for every problem you should design your own paradigm”. In other words, the OO paradigm really is: Programming is Modeling.

2. Object-Oriented Programming Languages

Another thing I hate is the way that everybody loves to hate the other guy’s programming language. We like to divide the world into curly brackets vs square brackets vs round brackets.

Here are some of the nice things that people have said about some of our favorite OOPLs:

“C makes it easy to shoot yourself in the foot; C++ makes it harder, but when you do, it blows away your whole leg.”

It was Bjarne Stroustrup who said that, so that’s ok, I guess.

“Actually I made up the term ‘object-oriented’, and I can tell you I did not have C++ in mind.” — Alan Kay

“There are only two things wrong with C++: The initial concept and the implementation.” — Bertrand Meyer

“Within C++, there is a much smaller and cleaner language struggling to get out.” — Bjarne Stroustrup

“C++ is history repeated as tragedy. Java is history repeated as farce.” — Scott McKay

“Java, the best argument for Smalltalk since C++.” — Frank Winkler

“If Java had true garbage collection, most programs would delete themselves upon execution.” — Robert Sewell

But perhaps the best blanket condemnation is the following:

“There are only two kinds of languages: the ones people complain about and the ones nobody uses.” — Bjarne Stroustrup

3. Classes

Classes drive me crazy. That might seem strange, so let me explain why.

Clearly classes should be great. Our brain excels at classifying everything around us. So it seems natural to classify everything in OO programs too.

However, in the real world, there are only objects. Classes exist only in our minds. Can you give me a single real-world example of a class that is a true, physical entity? No, I didn’t think so.

Now, here’s the problem. Have you ever considered why it is so much harder to understand OO programs than procedural ones?

Well, in procedural programs procedures call other procedures. Procedural source code shows us … procedures calling other procedures. That’s nice and easy, isn’t it?

In OO programs, objects send messages to other objects. OO source code shows us … classes inheriting from classes. Oops. There is a complete disconnect in OOP between the source code and the runtime entities. Our tools don’t help us because our IDEs show us classes, not objects.
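
A minimal sketch of that disconnect (all names are mine): the source text below shows a class hierarchy, but which `describe` method actually runs is decided at runtime by whichever object receives the message — you cannot tell by reading the call site.

```python
import random

class Shape:
    def describe(self):
        return "some shape"

class Circle(Shape):
    def describe(self):
        return "a circle"

class Square(Shape):
    def describe(self):
        return "a square"

def report(shape):
    # The source says only "shape.describe()". The receiver object,
    # not the class text, determines what happens at runtime.
    return shape.describe()

shape = random.choice([Circle(), Square()])
print(report(shape))  # which string prints is invisible in the source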

I think that’s probably why Smalltalkers like to program in the debugger. The debugger lets us get our hands on the running objects and program them directly.

Here is my message for tool designers: please give us an IDE that shows us objects instead of classes!

4. Methods

To be fair, I hate methods too.

As we have all learned, methods in good OO programs should be short and sweet. Lots of little methods are good for development, understanding, reuse, and so on. Well, what’s the problem with that?

Well, consider that we actually spend more time reading OO code than writing it. This is what is known as productivity. Instead of spending many hours writing a lot of code to add some new functionality, we only have to write a few lines of code to get the new functionality in there, but we spend many hours trying to figure out which few lines of code to write!

One of the reasons it takes us so long is that we spend much of our time bouncing back and forth between … lots of little methods.

This is sometimes known as the Lost in Space syndrome. It has been reported since the early days of OOP. To quote Adele Goldberg, “In Smalltalk, everything happens somewhere else.”
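
A caricature (my own, not from Goldberg) of the “everything happens somewhere else” effect: answering the simple question “what does `greeting()` return?” means hopping through four one-line methods.

```python
class Greeter:
    def greeting(self):
        # Nothing happens here; it happens somewhere else...
        return self.salutation() + self.target()

    def salutation(self):
        # ...and somewhere else again...
        return self.word() + ", "

    def word(self):
        return "Hello"

    def target(self):
        return "world"

print(Greeter().greeting())  # assembled from four separate methods
```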

I believe that the code-oriented view of today’s IDEs is largely to blame — given that OO code does not accurately reflect the running application, the IDE gets in our way instead of helping us to bridge the gap. Another reason I believe that Smalltalkers like to develop in the debugger is that it lets them clearly see which objects are communicating with which other objects. I am guessing that one of the reasons that Test-Driven Development is popular is that it also exposes object interactions during development.

It is not OOP that is broken — we just haven’t figured out (after over 40 years) how best to develop with it. We need to ask ourselves: Why should the source code be the dominant view in the IDE?

I want an IDE that lets me jump from the running application to the code and back again. (For a demonstration of this idea, have a look at the Seaside web development platform which allows you to navigate directly from a running web application to the editable source code. [http://seaside.st])

5. Types

OK, I admit it. I am an impatient guy, and I hate having to say everything twice. Types force me to do that.

I’m sure some of you are thinking — “Oh, how could you program in an untyped language? You could never be sure your code is correct.”

Of course there is no such thing as an “untyped” programming language — there are just statically and dynamically typed ones. Static types just prevent you from writing certain kinds of code. There is nothing wrong with that, in principle.

There are several problems, however, with types as we know them. First of all they tend to lead to a false sense of security. Just because your Java program compiles does not mean it has no errors (even type errors).

Second of all, and much more evil, is that type systems assume the world is consistent, but it isn’t! This makes it harder to write certain useful kinds of programs (especially reflective ones). Type systems cannot deal well with the fact that programs change, and that different bits of complex systems may not be consistent.

Finally, type systems don’t cope well with the fact that there are different useful notions of types. There is no one type system to rule them all. Recall the pain we experienced to extend Java with generics. These days there are many interesting and useful type systems being developed, but we cannot extend Java to accommodate them all. Gilad Bracha has proposed that type systems should not only be optional, in the sense that we should be able to run programs even if the type system is unhappy, but that they should be pluggable, meaning that we can plug multiple type systems into different parts of our programs. [http://bracha.org/pluggableTypesPosition.pdf] We need to take this proposal seriously and explore how our languages and development tools can be more easily adapted to diverse type systems.
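
Python’s annotations illustrate the “optional” half of Bracha’s proposal (the example is mine): a static checker such as mypy would complain about the call below, yet the program runs anyway, because annotations carry no runtime weight. A “pluggable” system would go further and let different checkers analyze different parts of the program.

```python
def double(x: int) -> int:
    return x * 2

# A static checker would flag this call: "str" is not "int".
# The runtime doesn't care -- the program runs regardless, and
# str * 2 happens to mean string repetition.
result = double("ha")
print(result)  # "haha"
```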

6. Change

“Change is inevitable — except from a vending machine.” — Robert C. Gallagher

We all hate change, right? So, if everyone hates change, why do we all complain when things don’t get better? We know that useful programs must change, or they degrade over time.

(Incidentally, you know the difference between hardware and software? Hardware degrades if you don’t maintain it.)

Given that real programs must change, you would think that languages and their IDEs would support this. I challenge you, however, to name a single programming language mechanism that supports change. Those mechanisms that do deal with change restrict and control it rather than enable it.

The world is not consistent, but we can cope with that just fine. Context is a great tool for managing change and inconsistency. We are perfectly comfortable adapting our expectations and our behavior in our daily lives depending on the context in which we find ourselves, but the programs we write break immediately if their context changes.

I want to see context as a first-class concept in OO languages and IDEs. Both source code and running software should be able to adapt to changing context. I believe that many design patterns and idioms (such as visitors, and dependency injection) are simply artifacts of the lack of support for context, and would disappear if context were available as a first-class construct.
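
As a hedged sketch of what first-class context might feel like — this is my own toy mechanism, loosely inspired by context-oriented programming systems such as ContextL, not an existing API — behavior adapts to an active context layer instead of being hard-wired via subclassing or dependency injection:

```python
import contextlib

active_layers = set()

@contextlib.contextmanager
def layer(name):
    # Activate a context layer for the dynamic extent of a with-block.
    active_layers.add(name)
    try:
        yield
    finally:
        active_layers.discard(name)

class Logger:
    def log(self, msg):
        # The same message send behaves differently depending on
        # which context layers are currently active.
        if "debug" in active_layers:
            return f"DEBUG: {msg}"
        return msg

logger = Logger()
print(logger.log("started"))          # plain message
with layer("debug"):
    print(logger.log("started"))      # context-adapted message
```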

7. Design Patterns

Patterns. Can’t live with ’em, can’t live without ’em.

Every single design pattern makes your design more complicated.

Visitors. I rest my case.
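
To rest the case with code (a toy sketch of my own): summing a tree with a plain method needs one small method per node class; the visitor version of the very same behavior adds a visitor class, accept methods on every node, and a double dispatch to maintain.

```python
# Without a visitor: one small method per node class.
class Leaf:
    def __init__(self, value):
        self.value = value

    def total(self):
        return self.value

class Node:
    def __init__(self, left, right):
        self.left, self.right = left, right

    def total(self):
        return self.left.total() + self.right.total()

# With a visitor: same behavior, plus accept methods, a visitor
# class, and double dispatch.
class SumVisitor:
    def visit_leaf(self, leaf):
        return leaf.value

    def visit_node(self, node):
        return node.left.accept(self) + node.right.accept(self)

class VLeaf(Leaf):
    def accept(self, visitor):
        return visitor.visit_leaf(self)

class VNode(Node):
    def accept(self, visitor):
        return visitor.visit_node(self)

tree = Node(Leaf(1), Node(Leaf(2), Leaf(3)))
vtree = VNode(VLeaf(1), VNode(VLeaf(2), VLeaf(3)))
print(tree.total())                # 6
print(vtree.accept(SumVisitor()))  # 6 -- same answer, more machinery
```

The visitor earns its keep only when many unrelated operations must traverse one stable structure; everywhere else it is pure added complication.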

8. Methodologies

“All methodologies are based on fear.” — Kent Beck

Evidently some of my students follow the Chuck Norris school of Agile Development:

“Chuck Norris pairs alone.”

“Chuck Norris doesn’t do iterative development. It’s right the first time, every time.”

“Chuck Norris doesn’t do documentation. He stares down the code until it tells him everything he wants to know.”

9. UML

Bertrand Meyer tells this story about wondering why diagrammatic modeling languages were always so popular, until one day it hit him: “Bubbles don’t crash.” I believe his point is that OO languages are modeling languages. (AKA “All you need is code”)

There similarly appears to be something fundamentally wrong with model-driven development as it is usually understood — instead of generating code from models, the model should be the code.

By analogy, when FORTRAN was invented, it was sold as a high-level language from which source code would be generated. Nowadays we think of the high-level languages as being the source code.

I like to think that one day, when we grow up, perhaps we will think of the model as being the source code.

10. The Next New Thing

Finally, I hate the catchphrase: “Objects are not enough. We need …” Over the years we have needed frameworks, components, aspects, services (which, curiously, seems to bring us back to procedural programming!).

Given the fact that objects clearly never were enough, isn’t it odd that they have served us so well over all these years?

Conclusion?

25 years ago we did not expect object-oriented programming to last as a “new” phenomenon for so long. We thought that OO conferences like ECOOP, OOPSLA and TOOLS would last for 4 or 5 years and then fade into the mainstream. It is too soon to dismiss OOP as just being part of the mainstream. Obviously we cannot feel passionately about something that does not interest us. The fact that academic and industrial research is still continuing suggests that there is something deep and important going on that we do not yet fully understand.

OOP is about taming complexity through modeling, but we have not mastered this yet, possibly because we have difficulty distinguishing real and accidental complexity.

I believe that to make further progress we must focus on change and how OOP can facilitate change. After all these years, we are still in the early days of OOP and understanding what it has to offer us.

Oscar Nierstrasz
[Banquet speech given at ECOOP 2010. Maribor, June 24, 2010]

95 Comments

  1. Procedural:
    Examine problem -> code solution
    OOP:
    Examine problem -> Create objects -> code solution

    I’ve used c++, java, javascript, C#, ruby, scala, ada, modula-2, lisp, and F#. I can sort of see the hype of functional and modular languages, but for the life of me I can’t see what’s so exciting about object-oriented ones. Maybe it’s just the way my brain is wired but I personally love the procedural way of doing it. The best programs I have ever written were in C. And another thing I enjoy is using gotos.

    Comment by Bob — 2012/05/24 @ 17:32

  2. The myth of OO is that it’s better than procedural.

    It simply isn’t. In fact OO is harder to learn and less productive than procedural, it’s LESS reusable, not more, it’s LESS easy to design, not more.

    The only reason most programmers these days like OO is that they were raised with it and are used to it.

    Good modularized procedural programming is faster, more agile, easier to code, easier to maintain, and easier to extend. And much, much, much lower learning curve.

    The only reason OO became popular is that it was a fad that caught on, and it allowed programmers to make their work more convoluted and more complex, so they could derive higher billing rates and more job security.

    I say all these things as somebody who spent 10 years writing huge procedural-based programs, which were very well organized, highly proceduralized and modularized, etc. Then I took 10 years off to be a marketing drone. Then I went back and started programming again…in Java and Objective-C. I am a hot OO programmer now, fine. It works fine. But it’s absolutely not one iota better.

    I do have to laugh when younger programmers who have never written large procedural applications tell me:

    “oh, but procedural programming is so hard, so messy, so difficult to maintain.”

    and I say: “no, not if you design your sub-procedures and custom functions properly. They serve exactly the same function as objects.”

    then the poor little OO guys say :”oh, but that’s a lot of up-front design.”

    and I say “Yes, just like OO is a lot of up-front design. NO DIFFERENCE WHATSOEVER”

    Except as noted before: OO has a much larger layer of bull**** terminology slathered on top, and a lot of new “features” like inheritance, which really are absurdly useless in the real-world programming. As a programmer one must spend more time wallowing through all this *** before doing the work.

    Comment by Tom B — 2012/06/17 @ 16:50

  3. OOP is for sociopaths, procedural is for empaths.
    OOP is like a literary novel, procedural is math/science

    if you ever had to fix someone’s OOP code…ugh.

    I find the vast majority of ‘experienced’ programmers are copy/pasters and are not that good…and all of them seem to be OOPers.

    I am pretty sure windows is OOP, which explains why it gets more fluffy and less ‘fixed’ with each version.

    Comment by bob — 2012/06/21 @ 15:33

  4. You don’t really understand OOP. Most of your points are irrelevant.

    Comment by Nissim levy — 2012/07/02 @ 00:39

  5. OOP is just a complex way of doing code that today’s Basics like PureBasic and various others can do with a few lines of code. Programmers love to jump on a new band wagon, gives them an excuse for not finishing projects in Basic or some other equally easy language , they started and lost track or direction in.

    I hate OOP: it’s overbearing, the code-reuse ability it claims to have exclusive rights to is total BullS**t, and the coding rules it uses just slow down coding and hurt creativity. You still have to add lines of encapsulating code just to reuse anything.

    A simple call to a Print statement in Basic to produce a Print “Hello world” takes on a whole new world of hurt in OOP.
    To me, and this is just my opinion, Its Freakin stupid .

    It is the programmers fault that languages like Purebasic are not getting the press they deserves.
    Basic is not dead, its just out of the main stream headlines. And because most are procedural , some dumb programmers think they are toy languages. yea right, better get skooled.
    And today’s Basics are fast as hell, almost all compile to native code and produce an EXE, or whatever it is Mac’s run (prg’s?) .

    Hell Purebasic supports Mac, Linux and Windows and you can write any type of program with it!. It supports all APIs and if its not in there, you can make a Procedure and create it yourself. Remember Procedures?, god I love em!. Or create your own library, in any language you want, Pure doesnt mind. It loves all Libraries.
    With built in 3D graphics, Sprites, both 2D and 3D, OpenGL and DirectX, Sound,Midi, Music,Menus, Gadgets, Internet support, Types, Structures, Libraries, inline assembly, and over 1000 commands, whats not to like?

    Look at Blitz Basic for example, more shareware games have been written in it than any other language and its not OOP.

    Only problem I have recommending it (Blitz), is the lack of updates lately. Now Blitzmax is kinda OOP, but not so much that a programmer gets lost in the code.

    So yes, I’m tired of lazy programmers tryin to force OOP on the world. get some balls and learn how to program like a real boy. You bunch of wooden headed dolls!, jk

    GO BASIC!!!

    Comment by Ken — 2012/07/07 @ 17:27

  6. Object oriented programming, to me, is a joke!!.
    Ok, lets pile on more complicated commands, brackets, terminators, constructors, Methods((Functions), Procedures), inside of other Procedures (classes),and inside of more procedure calls(constructors). Yea, that looks like really readable code now!!.
    I dare anyone to take anything beyond a simple Class, and try to deconstruct it, as in, tell someone step by step whats going on. Good luck!. Bet you don’t get past why it has to have a main section or it will not run.

    I program in Purebasic, one of the few languages that didn’t fall victim to the mighty OOP buzz word stampede.

    It uses Procedures!.,Yes, it has Structures, Types, can make API calls, and other fancy bits, but its highly readable, and it compiles out to an EXE for Windows Systems. a PRG for Macs, ( I think its PRG, dont own a mac, anymore so not used to their tags).

    The point being, it is a modern language, powerful, can make any type of program you can think of, is fast as hell, and no OOP in sight, thank God!

    And yes, I can do, Java, Actionscript 3, Html5, C#, C++. But too me, its like painting a picture with a lions tail, while its still attached to the lion!.
    Me,I’d rather use a real paint brush, one with plain ol’ procedural calls.
    Nothing wrong with Basic these days, just something really wrong with today’s programmers.
    Some of you believe your own hype!. Most cant even get past the “hello world”tutorial starter program that comes with most languages.
    Yet you all run around quoting OOP buzz words, I assume , in a failed attempt at looking intelligent. It doesn’t work!., Us real programmers, those of us that have sold products written in Basic, regular C, Modula, Forth Assembler or some other OOP lacking language know the truth!. So stop it!!, please?,
    All you are doing is ruining the future of programming and piling more crap on the fire.
    Reminds me of some guys I knew that bought the Game Genie for the old Nintendo back in the day. They used to brag about beating a game when in fact it was the Game Genie that did it for them.
    Same with OOP,. Most don’t even understand it, they just use it because they have been told its the latest!, greatest!, … its not!!

    And before you think I’m just some crazy nut from the 80’s, well I am, I have programmed Atari 8 bits (when I was a teen), Atari STs, Amigas, Intel based computers and others, in almost every language invented, Ever done a game in forth? I have. Ever programmed in a language called Action on an Atari 800 on a hot summer night while drinking a beer, then release a game that thousands played? , I have!
    Ever worked on a bit of assembly code so long that you later dreamed in binary? I ha…… well you get it!

    So stop hiding behind classes, methods, constuctors and all that other OOP mess and man the hell up!!

    Real programmers dont need OOP!!!,

    All we need is a good IDE, some killer graphics code, a woman that understands late hours in the glow of a monitor, and a beer, later all!

    Comment by Ken — 2012/07/08 @ 16:12

  7. You should go for javascript, Its the language of the Web.

    Comment by Eick Brandr — 2012/07/20 @ 02:19

  8. It seems to me

    Procedural programmers do not like OOP because they do not understand it.
    OOP programmers do not write good OOP because they do not properly understand the ‘golden rules’ of it.
    Many OOP programmers do not plan their code – at all.

    Both styles can create perfectly good code. Personally (horses for courses) OOP suits the line of work I am in.

    Comment by Anthony — 2012/07/24 @ 18:03

  9. 100/100

    When I started OO programming, I have faced problems. After 12 years, I will say, I am still facing the same problems.

    And the problem is nothing but OOP itself!

    Comment by Srinivas Nayak — 2012/08/21 @ 13:53

  10. Hey dude, I like your article…

    But don’t you think you should give examples for every point you write here, so that every point will look relevant and easy to understand?

    Comment by nanda — 2012/08/30 @ 17:26

  11. I wonder if anyone has considered that our tools for what we are building (languages are TOOLS people. yes. that’s right. they are tools to build something. a means to an end. not THE end) are so extremely primitive, that there’s no point in arguing which “paradigm” is better? We are basically neolithic humanoids twirling a stick in the mud and calling it writing. Except our stick and mud are java and c++ and ides and frameworks etc. We don’t even know what the metaphorical “lever” is in the world of IT (much less what a “wheel” is.. there is no wheel to reinvent because we aren’t at a point where we can standardize something “round” and “useful” and standard)

    Comment by ha — 2012/09/13 @ 01:32

  12. […] Ten Things I Hate About OOP […]

    Pingback by The Object Cube | The soft nature of software — 2012/09/19 @ 18:34

  13. I tend to agree with Ken about OOP. Summing the above, the problems seem to concern extant implementations, and development tools that impede more than they help.

    Java and its various IDEs are a particular obstacle to stable and reliable applications. The desire by vendors to support multiple jars with classes that do much the same thing -but with small variations- is a general cause of stress alopecia.

    Languages such as Python, wherein mere source indentation replaces explicit punctuation, are not advances. As the DB guy said, languages which attempt to “be all things to all men” in re DB interfaces cause problems. When a DB vendor publishes an interface, a language designer should just use it and move on.

    Lastly, there’s too much academic Computer Science thinking. In the real world, people deal every day in filthy lucre that’s only used with a set precision. Every language should have a ‘money class’ and ‘money object’ that can be declared and used with a standard and predictable rounding method that doesn’t depend on the hardware being used.

    Comment by Yeah Sure — 2012/11/19 @ 00:23

  14. Well well well …. use python …

    Comment by Scaringella — 2013/03/18 @ 17:21

  15. There will never be a single, great programming language or construct, for one reason. People.

    It’s not the fault of the language that people use it differently, its because people think differently.

    I myself am highly logical and it is reflected in my code – very small amounts of code in the right places doing lots of work.

    When I look at most other peoples code, I simply cannot see how they thought what they were doing made any sense. But to them, it obviously made all the sense in the world. Conversely, other people sometimes have difficulty grasping my code.

    That isn’t the fault of the language – actually, its not a fault at all, its just the way things are.

    Comment by Rodney Barbati — 2013/04/05 @ 18:44

  16. You sound like you love OOP, but hate most of the popular implementations. Join the club. Types aren’t fundamental to OOP, nor is bashing another guy’s language (I don’t even understand why that’s on this list). Nor are classes. Nor inheritance. In your 6th argument, you’re not even complaining about OOP, but about every programming language I’ve ever seen. Context sounds great, but it’s certainly not mutually exclusive with OOP.

    Don’t hate OOP for the languages that do it wrong.

    Comment by weberc2 — 2013/05/31 @ 17:44

  17. I’d like to point to my PhD research on the (mathematical) meaning of (nominally-typed) OOP (eg, Java, C#, C++, Scala, etc). The official copy of my PhD thesis is available at scholarship.rice.edu/handle/1911/70199, a more polished version of which was recently published as a book (http://www.amazon.com/dp/3639512812/).

    Chapter 2 of the book (and the thesis, but in less-polished form) discusses and presents OOP concepts and what “the essence” of OOP is. The discussion in that chapter is more oriented towards developers than researchers.

    Comment by Moez AbdelGawad — 2013/06/20 @ 21:43

    Nice post, but I don’t agree with all you are saying. It is a good thing to create some structure in the code and avoid duplicated code. Creating code is not the same as ‘hacking’: it might be easy to insert some code, copy some and change it a bit, but you will get soup at the end of years. After those years you cannot easily change it when it is huge (mostly it is huge).

    The stupidity of OOP is that everything must be a class these days. To make it more complex, each class has its own file even when it is just a couple of lines. And some prefer namespaces 10 levels deep because it is a cool thing. Some people also use classes for unique things like constants; that is class abuse. Constants need to be defines or consts, not classes, because when it is a class (or an array) it can be changed!

    The cool thing of OOP is that you can inherit it and it is meant to inherit. You don’t need it for simple things. I have seen a script from Microsoft that extends the Javascript API (they wanted to add some new rules can’t remember what it actually was) but it was very complex to arhieve the same effect, I need just 4 lines of code (they needed with their example 20 lines or such).

    That’s the stupidity of these days, not everything has to be class because somebody or company tells you so. It makes simple things more complex, it is stupid! It is like the abuse of exceptions, same story, the abuse of namespaces, interfaces, packages etc.

    It seems to be cool to understand the difficult thing, it has nothing to do with being a good programmer. Like you said, not a programmer but a modeller.

    Over the years I use classes also but when it can be a general thing I will use/create a function that can be used inside a class. Working like this, you will have a library with functions that extends the language’s core functions (for example to read a file) and some classes that extends the functionality you need. At the end you will get: Standarized code, great extension to the core functions and less bloated/compact classes and easy to fix when there is a problem. At the long run it will be rock solid.

    Use classes only when you need it, not always.

    Comment by Erwinus — 2013/08/20 @ 05:08

    There are two types of programmers – those that understand and use OOP, and those that don’t. The latter are quick to criticise but not so quick to shut up and learn something useful. You rarely find OOP guys criticising the procedural programmers (what’s the point really). I am sure that most good OOP programmers have thought to themselves at some point in time “What is all this stuff about? Why am I bothering?”. Then that hallelujah moment happens and it all slots into place in your mind and you never want to go back. It’s a bit like learning to ride a bike – you can’t really tell someone how to do it – they have to figure it out for themselves. I can’t even think about code unless it’s objects nowadays. It doesn’t help that a lot of the OOP texts and articles out there do a truly awful job of explaining it, and I’m not surprised when people just give up and go back to “safe” procedures.
    A very important thing is that bad code is bad code regardless of the style or techniques used, and for the most part the bad coders use procedural coding, because if they were any good they would have moved to OOP already. That’s not to say that there hasn’t been some awful code written in C++, Java and Delphi.
    There is one GREAT language – C#.

    Comment by Tim Black — 2013/12/02 @ 16:00

  20. Seriously Tim?

    “bad coders use procedural coding because if they were any good they would move on to OOP?”

    That made me laugh so hard,

    look dude,

    I know how to code in Classes, Methods, Attributes, types,lists, maps, ect.

    Accessing things that with one function or Procedure and an Array, I could do way easier and without referencing every damn thing the class has so I can find out what I have to input just to get a result back.

    Look at the Procedure PrintFunny(funny$)

    Not hard to understand is it?

    You input string$ funny$ as in

    Print(“funny this guy thinks we are all stupid!”)

    and something funny gets printed.

    You can find out what string to input by looking up the procedure in a procedure list , usually on the right side that comes with most Procedural IDEs,

    like Purebasic, Blitzbasic , Fastbasic, Powerbasic, Euphoria, AGK ect, Darkbasic , Javascript to a degree with RJ textED.

    All very powerful languages and most capable of producing anything a class based language can.

    So dont treat us procedural guys as dumb asses, we are not!

    We prefer knowing what our code it up too,
    Instead of blindly hoping the class we are referencing in some giant set of classes elsewhere will do what we want.

    And that we have input all the attributes so the class does not go batshit crazy on us.

    I can program in Java, Javascript(which I do), C# , C++, assembly languages dating back to the 6502, 68000 (Amiga and ST) ,and 386.

    So do not ever treat me or any other programmer as some fool you can point and laugh at because we don’t use the same club to kill our prey as you do.

    We like looking up procedures and finding out what is inside, we like simple calls to functions that don’t require a PHD in computer science to decode.

    We like procedures!

    Now run along and program in that OOP language I’m sure you use, the one you have never even gotten past the “hello world” class in!

    We’ll stick to procedures and functions (same thing)!

    And before you act as if we could never write any programs without your precious classes:

    tell me, then, how did all those guys (me included) sell software back in the late ’80s and early ’90s with just our bare hands and a procedural language?

    Guess we are better than you, ’cause we used what in your opinion is a caveman’s language to make some of the best software and video games ever made.

    Surely you’ve heard of Pac-Man? Donkey Kong? Mr. Do!? Mario? All done without the use of classes.

    Thank you for your time!

    Comment by the Ken — 2014/02/13 @ 19:24

  21. “I can’t even think about code unless its objects nowadays.”

    I’m sorry for you, Tim!

    I don’t have a problem with OOP per se, I have a problem with programmers who can think about code in one way only.
    Sometimes OOP is the right abstraction, but more often it is not.

    You think C# is great? Compared to F# it looks like a toy language!

    You can use patterns; I can abstract them away. You can’t do that, because your language and your mindset are too limited.

    Comment by Jürgen Heiling — 2014/05/03 @ 16:06

  22. It’s not just object-oriented code, it’s everything beyond straight C. If you think about it, you have very few things in programming: you have memory, and you have the executable code that works on that memory. C is a worthy invention because it abstracts away the various assembly languages out there. But beyond this, everything else is unnecessarily complicated, yet we are told it will make our life easier. There was a time when people viewed programming as a way to make a machine do something useful in the world. Now it seems people have lost touch with that simple idea. The only reason other languages have become the norm is because they have large standard libraries meant to enhance productivity. It has nothing to do with paradigms.

    Comment by Bob — 2014/06/17 @ 18:55

  23. Obviously there are some extreme views on the list, but I have to say I have struggled with the OO concepts for years now.

    I started in BASIC, then went to COBOL and used a 4GL called PowerHouse (from Cognos). I like to think I was a pretty good programmer and could do most things. PowerHouse had a CRUD generator, which meant that provided the DB was right you could have a fully working system within literally 10 minutes, doing CRUD functionality with automatic validation of field types. Of course the limitation was that there were boundaries on what you could do, but that wasn’t such a bad thing; it encouraged easy-to-read code and simple debugging.

    I started using Java and then C#, and I couldn’t believe what a huge step backwards this was in development. I had to write stacks of code to do the simplest things. There was no direct access to a DB; you had to write SQL SELECT statements in the code!

    I still tinker with C#, mainly Aspx, but I have never really got my head around OO. It is just too abstract from real coding. In general, you input something, process it and output it. You don’t create a copy of a class and extend it.

    Someone in this chain gave the Car class and inheritance as a real-world example. To me that’s exactly the problem: that is a physical example of something that you never program. I would prefer a real-world programming example. So: input something, write it to a DB and send something to a printer. I have never used inheritance, as I never see the value in it. If I have a printer class and I want to add a new print function, in reality I need to edit the printer class and add it. Otherwise I am making a dependent piece of code that someone else might end up duplicating, as it’s not in the original class.
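    The printer dilemma can be sketched in a few lines of Python; Printer, print_text and LabelPrinter are hypothetical names, and this only illustrates the two options, it does not argue for either:

```python
class Printer:
    """The original class that, per the comment, you would normally just edit."""
    def print_text(self, text):
        return f"TEXT:{text}"

# The inheritance alternative: a new print function added without touching
# Printer -- the dependent piece of code the commenter worries someone else
# will duplicate because it is not in the original class.
class LabelPrinter(Printer):
    def print_label(self, text):
        return f"LABEL:{self.print_text(text)}"
```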

    Anyway, that’s my two pence worth.

    Comment by Richard Barker — 2014/06/26 @ 15:38

  24. 1. Isn’t “programming is modeling” a valid paradigm? Do you have any exposure to domain-driven design? Modeling is one of at least three basic survival skills for software developers.

    2. The theme here seems to be that the language is responsible for its own misuse by developers. No language, however well designed for change, is ever going to be a substitute for discipline and maturity of the programmer who wields it. It will always be possible to create abominations in any programming language, including all those you didn’t mention (i.e., non-OO languages).

    3. The answer to your question is: architectural blueprints. They are of course a physical entity, they exist, but the only real purpose they serve is to help one construct, or understand the construction of, buildings and other structures. I recommend looking at Scala; it supports working with both objects and classes.

    4. As long as methods are designed according to well understood principles (OCP, LSP, ISP, SRP, etc.) then there is no reason you should have to understand the whole in order to enhance or extend a part. A tool (IDE) is not going to protect untrained developers from creating abominations. Again, point the finger in the direction it belongs: the developer. If they don’t understand and follow patterns and principles, then they deserve the big ball of mud they’re going to end up with.

    5. If you are having to use RTTI to get the job done, then you haven’t thought through your domain well enough. The LSP says (essentially) that an instance of a derived class must be usable anywhere an instance of the base class may be used, without any side effects or type-sensitive processing (RTTI). Types are not inherently evil. Developers who don’t understand their domain are.

    6. “the programs we write break immediately if their context changes” Exactly! The programs *we* write. Don’t try to transfer responsibility for building the required flexibility into the design onto some omnipotent tool that is going to save you from your own lack of foresight. Tools are NEVER going to save you. Grow up and take responsibility for your own failure to follow well-understood patterns and principles. If it’s tough to refactor, it’s because you wrote it that way. Don’t blame the tool or the language.

    7. BS. Written as one who has probably gone overboard applying patterns where they do not make sense. If you’ve actually read the GoF or any other decent design patterns book, you’ll note that patterns are defined in the context of a specific problem they are meant to solve. If you have a hammer, not everything is a nail. As a case in point, how many Singletons were in your last project? If you answered more than, say, two or three, then you do not understand that pattern, nor likely any others. If by “make your code more complicated” you mean make it more articulate, make it match the domain better, make it *easier* to understand and extend, then yes, I agree. If you do not understand the principles behind them, you cannot apply patterns correctly.

    8. Suck it up. Unless you’re a cowboy coding in your Mom’s basement, you are going to get involved in methodologies. There is going to be traceability, responsibility, reporting, schedules, (oh my!). Methodologies have NOTHING to do with whether or not quality software (OO or not) gets written. Stop externalizing your angst.

    9. UML is useful because it is an abstraction that doesn’t crash. I do not think that the way we write complex software today is a sustainable model. Programs are already getting too big, too complicated. What is needed, much more than a new language or new tool, is a useful abstraction above writing the code that we write today. How many times have you written a loop that iterates over a collection (for, while, foreach, whatever)? Doesn’t it make you angry? It should. We are so mired in arguments about semicolons and braces that we are completely failing to see the train wreck that awaits us at the end of this tunnel.

    10. I fail to understand the argument here. I do think that objects are not the end-all, be-all solution to software development. But I do think that there are cases where their application makes perfect sense… in the hands of a mature, trained developer.

    Conclusion
    Tools and languages are no substitute for a well-trained, mature developer who understands the underlying principles on which ALL software is written. Don’t depend on tools and languages as a crutch, to take over responsibility for doing things right. Either that, or invent the next layer of abstractions that obsoletes all discussion of execution models, language paradigms, etc. But stop whining about OO. It doesn’t deserve it.

    Comment by Loren Erickson — 2014/07/25 @ 00:26

  25. In the early days, only simple programs like add-amount-to-account or remove-an-account or calculate-interest or backup-tape were written, in LOTUS/COBOL/BASIC. So procedural programming was enough for those days.

    In today’s world it gets more complex. The general public (billions of users) are using a website (Facebook?). For instance, a credit card has complex calculations/limits/offers/conversion-rates/etc. For this we need multiple classes/interfaces defining the behavior and business logic. That demands OOP.

    People have lots of criteria to choose/buy products. That make them ask for
    product-x, (product.name)
    price-y, (product.price)
    quantity-q, (product.quantity)
    brand-z, (product.brandid) (brand.name)
    delivered-in-n-days, (shoppingcart.deliveryType)
    billed-to-a, (shoppingcart.billingdetails)
    delivered-to-b, (shoppingcart.shippingdetails)
    + apply my offer-coupon-code-xyz. (shoppingcart.coupon)

    That needs OOP, doesn’t it?

    Comment by Arun — 2014/08/23 @ 20:36

  26. In a procedural environment, how would you represent concepts such as an invoice, a file on the filesystem, UI buttons or even a string?

    Oh right! A data structure with functions to manipulate it! That sounds awfully object-oriented to me…

    If the above statement doesn’t make sense to you, chances are you don’t understand OO.

    Object-orientation _is_ a paradigm, not a language feature.
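    The point can be shown in a few lines of Python (a hypothetical invoice, purely for illustration): the procedural version and the OO version carry the same data and the same operations.

```python
# Procedural: a data structure plus functions that manipulate it.
def make_invoice():
    return {"total": 0.0}

def add_line(invoice, amount):
    invoice["total"] += amount

# Object-oriented: the same structure and function, grouped behind one name.
class Invoice:
    def __init__(self):
        self.total = 0.0

    def add_line(self, amount):
        self.total += amount
```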

    Comment by J-F Bilodeau — 2014/09/06 @ 01:42

  27. Some of the biggest messes I have seen have been people taking OO too far: too many objects, too many “nifty” techniques. As a logical concept it is great, especially encapsulation and the organization/categorization of logic into black boxes and such. But you do not need objects to do that effectively all the time. OO implementations often get bogged down in too many objects, properties, methods and other things. For example, I have seen many crap developers put way too much functionality and logic into constructors, so that all sorts of things magically happen when you simply declare objects, leaving a freaking mess for anyone trying to understand them and what is going on in the areas using them. Same with too many events or too much inheritance. Done wrong, it can make a system so rigid that people do not want to modify things, because they have become too difficult to work with.

    The benefits of the concepts, and the use of reasonable objects where they make sense, are the good points. It is a technique… but sometimes some focus on the technique more than on the functionality, readability and final solution, and just make a freaking mess. Sometimes developers go so far into objects and tiers that they just make an overly complicated mess for the actual need at hand. The better systems I have worked on have featured a mix of traditional and object-oriented techniques. Use the right tool and technique for the job, via a weighing of pros and cons. Using OO does not automatically make a solution better, not by a long shot. Build with objects or object-like principles when they help you and are needed… but it certainly can be something people take too far.

    Comment by JW — 2014/09/11 @ 19:18

  28. Before I get mad and rant for thousands of characters, I’d like to know: how much do you actually program in the workplace? Because there are quite a few things that I do daily that require objects…

    Comment by Garzella — 2014/09/30 @ 02:53

  29. A motivating discussion is worth a comment. There’s no doubt that you should write more about this issue; it may not be a taboo subject, but typically people don’t discuss such topics.
    To the next! Cheers!!

    Comment by diverse.unm.edu — 2014/10/09 @ 11:48

  30. OOP is not scary, but some concepts are harder to understand and very broad. Especially in UML: there are almost 11 different diagram types, which is very confusing.

    Regards,

    Creately

    Comment by Creately — 2014/10/17 @ 11:12

  31. Procedural is a descendant of OO rather than the two being separate entities. Consider that functions or methods called in OO are ultimately granted a slice of processor pie and run procedurally until they terminate. It’s not too difficult to write a heavily recursive function that will crash the runtime and sometimes the host machine. That code can be ported almost as-is to a procedural language and achieve the same effect.

    OO is a bootstrap for looped procedural programming. It comes at the cost of some control — we often don’t have access to the looping mechanism, memory management, etc. — but comes with convenience.

    It’s true that OO’s structure can be messy but I would argue it’s no worse than tens of thousands of lines of crammed procedural code in one flat file (if you split it up into external includes you’re tacitly agreeing that OO structure makes sense). Either way, readable code is more a reflection of the developer rather than the programming paradigm they’re using.

    As someone who cut their teeth on procedural I am reluctant to return unless there’s a compelling reason. I want to write useful code without spending my time re-inventing the wheel. It’s not bad to have a gander at the wheel once in a while to see if maybe it can be improved but if we did only that we’d never have created the countless wheel-bearing devices we have today. I believe the same can be said for OO.

    Comment by Patrick — 2014/12/11 @ 18:35

  32. I meant: “OO is a descendant of procedural…” 🙂

    Comment by Patrick — 2014/12/11 @ 18:37

  33. @Garzella: nothing you code *requires* “objects” unless you’re forced to work within an already-existing OO paradigm. Any program may be written without objects.

    Comment by Dave Newton — 2014/12/12 @ 17:24

  34. I think the obvious is missed here. OOLs weren’t created for developers to code easily, but for machines to read, compile and organize the code and executables – and for IDEs to help us reliably cobble together thousands of tiny pieces into a gazillion different puzzles. Why else would anyone write an “EventHandler” for a snippet which could easily be typed in a few characters of inline code? If we wrote “procedural” code we’d still have to deal with a library of a few thousand functions, unless we wanted to re-code them each time. Over time the functions become words, the language grows and you need a reference to handle it. If you want to build an IDE, you need to structure the reference for quick access, and before you know it you end up with something very like OOP – without necessarily exposing the object paradigm to the developer.

    That said, I absolutely hate OOP for inherently procedural processing, which is most of what I find myself doing on a daily basis. That’s what scripts and procedural languages are for. Cranking up a full-fledged version of VS xxxx .NET to produce three webpages dealing with a few dozen data elements is like swatting a fly with a gold-plated toaster-oven; it works, but that oven gets a little heavy after a while and there are more appropriate tools to use. Sometimes the customer just wants that oven anyway, though. It’s just so-o-o-o shiny!!!

    I am a structured-programming guy from the COBOL days – a discipline which could straighten out a lot of the object gobbledegook being produced lately. There is no incompatibility here. We work in hierarchical organizations, live in a hierarchical society, deal largely with hierarchical data, build hierarchical systems (hardware and software) and we should approach our code design with a similar methodology if we want it to be readable and maintainable – starting with the requirements phase. Structured programming is simply a hierarchical movement from General to Specific. However, it relies upon a similarly-structured methodology which starts with a general statement of the requirement and moves to the specifics of a user interface or back-end processing.

    This rational methodology is what allows organizations to break problems into their component parts and to assign work to development teams which subsequently assign tasks to individual developers and analysts – with the goal of producing the best possible product in the shortest possible time.

    Instead of widespread adoption of this, we increasingly see high-level managers and consultants choosing hardware and software products according to the latest street buzz and then trying to find some poor coding schmuck and his buddies to spend the next 12 months or more tied to a chair with his brain wired to a spaghetti network of 18 different language flavors-of-the month trying to make the whole disorganized contraption function with some semblance of order.

    The results are usually somewhat less than stellar.

    While I’m on the subject, the minute someone utters the words “The Cloud” he is forever marked a fool or charlatan to anyone with a smidgen of technical understanding.

    A network of server farms, themselves a network of thousands of individual servers tied to mega-terabytes of very real electro-mechanical storage devices and housed in very real terrestrial installations collectively consuming enough electricity to power a mid-sized city can hardly be described as a fluffy cloud. It’s like calling a power plant “The Mist” or a brick a melody – or terror a country you can declare war upon.

    “The Cloud” is a marketing masterpiece – and a societal disaster; an almost complete collective failure to come to terms with the realities of our technical existence. It gives me nightmares even briefly contemplating a future with such people in positions of authority.

    Ironically, this is the epitome of the object paradigm – give it a name, hide the details and expose the methods. The imaginary “Cloud” is a huge abstraction which people are content to treat not only as an actual object, but THE actual object – which bears little or no resemblance to the reality.

    It’s analogous to the name “God” – he is who and what we say he is and he works in mysterious ways and he is good unless you are bad, in which case you will be punished.

    (God, I hope The Cloud doesn’t decide to punish me for blasphemy or I am financially fxxxxd!)

    The fault lies not with the paradigm, but with the widespread eagerness to choose the abstraction over the reality; to accept as truth that which they know to be untrue; to choose ignorance over knowledge and sophistry over their own sophistication.

    Safety in numbers I guess.

    Which brings one more name to mind: “Titanic.”

    Comment by Just Another Steve — 2014/12/25 @ 03:55

  35. The hardware we are dealing with is inherently procedural. It loads a program at a memory offset, executes from the beginning, and runs to the end.

    The entire object paradigm is an abstract illusion of spinning wheels between start and end.

    The stack and heap of various VMs are likewise abstract inventions which few machines actually implement in hardware.

    ALL object-language programs can therefore be written procedurally, because that’s how the generic hardware operates.

    All objects are translated into procedural code at some point, because that’s how the hardware works.

    The question is: is the object paradigm the best way to implement our ideas – or would we be better off working with the reality, or a different paradigm?

    Comment by Steve — 2015/01/07 @ 09:35

  36. I use Gambas3 in GNU-Linux. Gambas3 is apparently OOP.
    Interestingly I am not required to understand this in order to use it.
    I write code for fun, hours a day and I love it.

    If your programming language requires you to understand and explain OOP in order to use the language and to read its code manual, then there is something seriously wrong.

    OOP should be invisible to the programmer, like a function which they may use or may choose not to use.

    Comment by Evan — 2015/03/30 @ 00:10

  37. Steve says that once a program has loaded it “executes from the beginning and runs to the end”. That’s a bunch of baloney! Once a program is loaded it does only one thing: it enters the message loop. That’s all there is to it. A program starts up and enters the message loop. And it stays in the message loop, receiving an indeterminate number of messages, until the “terminate” message appears.

    The OOP paradigm is perfectly natural. It is based on the way things work in the real world. Consider this: what is a person? Could you say that a person is an animal with two legs and two arms and a head? You could say that but you would be wrong. A “person” is an abstract idea.

    That’s the object-oriented paradigm. Nothing is real. You need to refactor whatever it is you may think about the world around you. You will find that the object-oriented methodology of inheritance is the keystone. Twenty years ago programmers understood that. Today there are people claiming that interfaces and method overloading provide polymorphism. That’s nonsense. Two methods sharing a name while receiving different parameters; what’s polymorphic about that? Absolutely nothing! And interfaces are just collections of abstract methods; all they offer is a shared nomenclature! They don’t do anything! It’s hardly object-oriented and it’s certainly not polymorphic! Inheritance is the foundation of the way it works. Polymorphism is the result. It is a cause and effect relationship.

    Object oriented programming is founded in the way the natural world works.
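    A minimal sketch of the cause-and-effect the comment describes, where inheritance is the keystone and polymorphism the result (Animal, Dog and Cat are illustrative names, not anyone’s real code):

```python
class Animal:
    def speak(self):
        raise NotImplementedError  # each subclass supplies its own behavior

class Dog(Animal):
    def speak(self):
        return "woof"

class Cat(Animal):
    def speak(self):
        return "meow"

# One call site; the behavior is chosen by the object's class at runtime.
def greet(animal):
    return animal.speak()
```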

    Comment by Skats McDoogle — 2015/04/03 @ 20:17

  38. I haven’t programmed anything for a long time, but when I did, it was procedural. OOP seemed to me just like a lot of faffing and dancing around whatever needed doing, instead of getting on and doing it.

    At base, computing is just about something being added to something else. There is no getting away from that, and it is a procedure, and there is no point enveloping it in layers of unnecessary nonsense that seem to me just using a “sledgehammer to crack a nut”. Actually, a lot of sledgehammers, each inheriting a bit of the original sledgehammer… What is the point of that? Just use nutcrackers instead.

    No wonder I gave up programming.

    Comment by Keithy — 2015/04/04 @ 01:18

  39. You seem to be presenting your own lack of understanding as weaknesses of OOP. I don’t blame you, as OOP is not clearly defined and there are so many different things called OOP that it’s easy to get confused; it took me a long time to finally put a definition to the magic word OOP.

    1. I would say OOP is the paradigm where you program using objects, as opposed to bytes (and other primitives) and pointers to memory. What is an object? An object is an abstraction over a block of bytes. Instead of working with the bytes directly – first byte = first field, second byte = second field, etc. (to access the second field you add 1 to the pointer to where the first byte is stored in memory) – you can view this block of bytes as an object. An object is a high-level abstraction allowing you to easily give meaning to a block of bytes in a way that is specific to your problem domain.

    A simple example would be a rectangle object: the first byte in this object represents the length, the second byte represents the width. Methods are functions/procedures which are specialized to take rectangle objects as at least one of their input arguments (note: “this/self” is actually implicitly the first argument, i.e. the method does not belong to the object). An example of a method would be an area method which calculates… the area of the rectangle. The area method encapsulates the details of how the area is calculated (you don’t need to explicitly pass the rectangle dimensions to area); you call area with rect as the argument and it will calculate the area for you.

    Why is this encapsulation useful? Let’s say you decide to add triangle objects to your program. The formula for calculating the area of triangles is different from that for rectangles; what are you to do now? OOP makes this easy: you just add an area method for triangle objects (note this does not necessarily mean inheritance) and you can carry on using area the same as you did for rectangles. I just defined polymorphism here. What I have said here are the fundamental principles of OOP, and what do you know, those principles are heavily used in functional programming and the like; nowhere did I mention state or inheritance or mutability.
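    The rectangle/triangle example maps onto generic functions (the Common Lisp style the commenter mentions); as a sketch in Python, functools.singledispatch gives an area that dispatches on the argument’s type with no inheritance involved (all the names here are illustrative):

```python
from dataclasses import dataclass
from functools import singledispatch

@dataclass
class Rectangle:
    length: float
    width: float

@dataclass
class Triangle:
    base: float
    height: float

# A generic function: one name, one call site, per-type implementations.
@singledispatch
def area(shape):
    raise TypeError(f"no area implementation for {type(shape).__name__}")

@area.register
def _(shape: Rectangle):
    return shape.length * shape.width

@area.register
def _(shape: Triangle):
    return shape.base * shape.height / 2
```

    Adding triangles did not touch the rectangle code: polymorphism without inheritance, as the comment describes.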

    “The Myth of Reuse”: you are right that there is a myth that using OOP means you automatically get reuse with no extra effort; that is wrong. You do indeed get reuse if it is done properly, but if not, it becomes even worse than procedural code. “instead of developing your code from scratch, you can just inherit from existing code and extend it” Umm no, just no. I think you said that on purpose, because you need to bash OOP and pretend you don’t know what is meant by reuse. Reuse means that once you develop a class you can use it in other projects; you don’t have to recode it. Of course this is not automatic: the class has to be designed carefully so that it can be reused.

    2. No comment, other than to say that different languages do OOP differently, and you shouldn’t judge the paradigm based on how C++ or Java implement it. Objective-C, Smalltalk and even Common Lisp do a much better job than Java or C++.

    3. What are classes, you ask? Classes are simply a description of a type of object; I already used classes to explain objects – refer to my rectangle object. A rectangle object is an object which possesses 2 properties (dimensions): the length property and the width property. You are right that classes exist only in our minds; in reality we only have bytes, bytes and pointers to bytes. So unless you really love programming in terms of bytes, classes are your best friend (not even procedures exist; they are merely addresses of machine code in memory). A class simply tells the compiler/interpreter how a particular object is mapped to and from memory.

    4. The sole confusion which arises from methods is that most OO languages group methods with a class and somehow give the illusion that methods belong to a class, with “this” being some magic entity which comes out of nowhere, when really a method is just a function/procedure with “this” as its first parameter, identifying which object it operates on. IMO Common Lisp does a good job of separating methods from classes. On another note, if you want a runtime version of your program, get yourself an assembly debugger showing you each assembly instruction and the contents of each register.
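    In Python this is visible directly, since “self” is an explicit first parameter and the usual method call is just sugar for a plain function call (Counter is an illustrative name):

```python
class Counter:
    def __init__(self):
        self.count = 0

    def bump(self):
        self.count += 1

c = Counter()
Counter.bump(c)  # a method is a function taking the object as its first argument
c.bump()         # the usual syntax is sugar for the line above
```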

    5. I agree with you fully here; I prefer dynamic typing and would love for types to be optional. Most modern typing systems are incredibly lacking, but I struggle to see what this has to do with OOP rather than with languages in general.

    7. I completely agree here as well. The first stage in any design should be prototyping, to actually become aware of your problem and of what you actually need to do. Rushing into complex up-front design such as UML leads to unnecessary complexity.

    Comment by Jay — 2015/05/17 @ 17:13

  40. Tim Lee… you may be insane or absolutely brilliant. Either way, your description of Marxism as it relates to materialism and OOP is positively fascinating.

    Comment by Peter — 2015/06/04 @ 22:37

  41. On 2015/01/07 at 09:35 Steve said…
    The question is: is the object paradigm the best way to implement our ideas – or would we be better off working with the reality, or a different paradigm?

    Well said, Steve.

    Comment by Phil — 2015/06/05 @ 00:05

  42. The funny thing is that all the OOP languages were made using procedural programming; not only that, all OSes are made using procedural programming too. But when it comes to users, they want you to use OOP, because this secures them.
    OOP is just that they don’t want you to access their application; they just give you some parameters, and you are allowed to use them only, not more, not less.
    With all of these applications and programs nowadays, OOP has become useless, because you have to know and keep memorizing each parameter for each application. In other words, “it is not real programming any more”; it is like learning history.

    Comment by Shahim Khlaifat — 2015/08/22 @ 13:09

  43. Let me make my argument by debunking this OOP abomination: http://www.csis.pace.edu/~bergin/patterns/ppoop.html

    How is 83 lines of code across 7 different files more maintainable than 19 lines all in one place? Those software “engineers” can trace the logic of instantiations and registrations and invocations between files, but not those “unwieldy if statements!” that read EXACTLY like the problem they are describing? FOOEY BUCKETS!!

    I love OOP where it’s appropriate, which is for modeling DATA and atomic operations on it; but it’s not even an appropriate model for behaviors on a SYSTEM of objects/data.

    SO WHAT, that you’d have to change the code to add more ifs. With the OOP version, you’d still have to create or modify SOME code, and what does it matter if there is some “central” part that does not change, when you have to trace the execution to the part that you DO change anyway?

    On the point of efficiency, there is no reason that a hash-table / map / dictionary could not be created inline at the top of the “hacker” code, and work just as efficiently (actually MORE so, since it would just be one instance instead of multiple class “singletons” being initialized at startup).
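    The inline dispatch table proposed here might look like this in Python (the function names and messages are placeholders, not the actual text from the linked example):

```python
# The if-chain version ("hacker" style in the linked comparison):
def describe_os_if(name):
    if name == "unix":
        return "UNIX is good."
    elif name == "windows":
        return "Windows is bad."
    else:
        return "Unknown OS."

# The inline dispatch table: one dict built once, no class hierarchy,
# no per-OS singleton objects initialized at startup.
OS_MESSAGES = {
    "unix": "UNIX is good.",
    "windows": "Windows is bad.",
}

def describe_os(name):
    return OS_MESSAGES.get(name, "Unknown OS.")
```

    Both stay in one place and read like the problem they describe, which is the point being made.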

    On the point of scalability & maintainability, these arguments hold up just as well for larger applications: ok, so there’d be more inline-procedural logic to understand, but which is easier to understand & maintain: the one that just SAYS WHAT IT DOES, or the one where the logic is scattered everywhere across multiple classes & files?

    Maintainability & extensibility do NOT come from mechanisms; they come from your code being understandable in terms of the HUMAN mental model of what it does. If one cannot look at the code and see what the overall INTENDED behavior is, then you end up with a ball of mud that’s hard to change, because you cannot REASON about it, because it does not correlate nicely with the human (user) mental models that it’s supposedly supposed to fulfill.

    If you want to maintain changes, then source-control does that nicely for you; especially if you can look at a change from direct-behavior A to direct-behavior B, rather than having to reverse engineer the whole thing to understand the ramifications of a small change on the whole system interactions woven between many classes.

    This is a difficult topic, because the software industry has become a pop culture obsessed with fads and patterns and languages, etc., rather than with the HUMAN models that software is supposed to serve. There is this assumption that newer is better (newer languages, newer mechanisms, etc.), and that the “old guys” don’t understand OOP because it’s too advanced for their more primitive languages.

    The real shame is that software people do not understand their own history, and think it’s inapplicable. Few software engineers are familiar with key figures such as Alan Kay, Doug Engelbart, John McCarthy or Christopher Alexander; but a physicist who was not familiar with the works of Newton, Tesla, Einstein, etc. would be dismissed from the field. In software, we don’t know and don’t care, because the modern trends surely are better than that foundational stuff we abandoned (and we see the same ideas being reinvented every 20 years).

    Furthermore, focusing on LANGUAGE is not going to get us anywhere, because if software were a true craft, then “language” would be as moldable as the software we use it on, and you could just “use” whatever paradigm you wanted. Instead, we have these pre-made, unchangeable languages & IDEs that have already decided on a base set of paradigms/tools for us to use to make programs; you can build on TOP of them, but you cannot change the base. This is like how a carpenter cannot make his own tools out of wood, because he needs METAL to shape the wood; but a metalsmith can make his own tools. We use software to shape software, so why are we still restricted by any specific language or tool?

    Neither OOP nor functional programming nor any other language paradigm, nor testing nor TDD nor IDEs nor any other tool, will make my software “good” any more than a glossy cover, typesetting, and spell-checking will make for a good STORY. The software industry needs to start thinking about what makes a good STORY, not just a good “book” in the physical sense. Once that’s accomplished, then the languages, tools, etc. will follow suit to the extent that they can help with the BIG PICTURE.

    My beef with OOP (or rather, with how OOP is used) is that it’s much the opposite of what I just said above: people are so focused on the mechanisms and “architecture” within their code that the actual resulting behavior is no longer the main concern. For example, the focus is on which patterns are being used where, rather than on the best way to express the HUMAN model of the behavior that is expected (it has to be expressed IN THOSE TERMS, or it’s not really understandable). Software “Patterns” came from Christopher Alexander’s “A Pattern Language”, which failed in physical architecture for similar reasons.

    Anyhow, if you look at where all the REALLY major advances in software came from, the focus was always on HUMAN concerns, not mechanisms (Doug Engelbart’s NLS, Alan Kay’s Dynabook, etc.). OOP and Patterns came out of this line of thought, but quickly diverged from it, and now we are dealing with the mess left behind. Jim Coplien sums that up here: http://www.infoq.com/interviews/coplien-dci-architecture

    Comment by Dan — 2015/08/29 @ 22:40

  44. As someone who learned structured programming in the 80s, and who is struggling to grasp the concepts of OOP in the 21st Century, I read this with great interest.

    I learned to write structured programs. That is, we have a small, compact main loop that conditionally calls various subroutines.
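    That structured style can be sketched in a few lines of Python (a hypothetical toy; the deposit/withdraw subroutines are invented for illustration):

```python
# A small, compact main loop that conditionally calls subroutines,
# in the structured-programming style described above.

def deposit(balance, amount):
    return balance + amount

def withdraw(balance, amount):
    # Refuse overdrafts; otherwise subtract the amount.
    return balance - amount if amount <= balance else balance

def main(commands):
    balance = 0
    for cmd, amount in commands:  # the compact main loop
        if cmd == "deposit":
            balance = deposit(balance, amount)
        elif cmd == "withdraw":
            balance = withdraw(balance, amount)
    return balance
```

    The whole control flow is visible in `main`; each subroutine does one thing and is called conditionally from the loop.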

    Compared to this, OOP seems thus far to me–and I will admit I am still struggling to comprehend it–an unworkable, overdesigned hairball. “Objects,” “methods,” “constructors,” “events,” isn’t it all just blocks of code? Only they’re blocks of code that have been created as virtual widgets that interact in a standardized manner that makes the most trivial tasks almost impossible. It is inelegant. It adds layer upon layer upon layer of unneeded abstraction, layer upon layer upon layer of gingerbread and unneeded complexity to what should be a straightforward matter.

    Perhaps I misunderstand, of course. I’m still struggling with the concepts.

    Comment by Noah Baudie — 2015/09/22 @ 16:17

  45. Learn LISP to get out of the swamp. Usually, it is not the paradigm that is bad, but the languages. Some languages are bloated and some languages require you to write boilerplate. For a procedural language, I love C. For an OO language, I love Python. For a functional language, I love Scheme.

    Comment by Alex Vong — 2015/12/20 @ 11:14
