About My Profession
Ron Yevin
What Do You Want?>
Before I actually met Ron Yevin, I already felt I knew him. Ron Jensen (RJ) had mentioned him to me, and the story I couldn't possibly forget was that he had written his own operating system. Why? Just because he could. But Ron described it by saying the prompt on the console read, "What Do You Want?>" He then said it made you want to punch the computer. Another story Jensen told was that Yevin had married a fellow programmer. He also had evidence he could show me. Yevin had worked for CG on a part-time basis at some point and had written a label generation program that RJ showed me. RJ had instructed Yevin to provide some documentation for the program, which ended up having some 600 lines of code. Here's how the program began:
BALR 12,0 LOAD
USING *,12,11 TWO
LA 11,4095(12) BASE
LA 11,1(11) REGISTERS
* THE REST OF THIS PROGRAM IS SELF-EXPLANATORY,
* SO NO FURTHER COMMENTS ARE NECESSARY.
By sometime in 1983, with business increasing, Ron Jensen's time was taken up by clients brought to CG by Paul Seltzer, so we needed help. Because every month RJ went over to Mountain States Computing (Fred Schmitt's service bureau) to process the credit union jobs (I would eventually modify that COBOL program so it would run on our computer), he would often see Ron Yevin. Yevin wasn't working for Fred, but for a client that was using Mountain States' facilities. RJ correctly figured out that Yevin was not happy working there and would be open to leaving. What made me really happy was that RJ ran this by me before he made the offer. My response: he sounds great, go for it!
I Meet the Legend
By the time Ron was hired, I already felt I was an excellent assembler programmer, so I was shocked when I met him and showed him my assembler coding: he was dismissive, and it felt condescending. "You are still coding like that?" I had no idea what he was talking about. What was I doing wrong? I was taken to his new office, where he had brought some things along: some printouts and a magnetic tape reel. What he showed me looked amazing. His coding used macros I had not seen, like IF, ELSE, DO, WHILE, and UNTIL. You could write structured assembler programs! But I couldn't have known about them, because they were not on our system. They were contained on the magnetic tape he had brought along, and so we quickly loaded them onto our system.
Here's an example of some straight assembler code. We're checking the State field of a record. If the State is CA or WA, we run a routine to handle this. If it's not one of those two states, we run a different routine.
CLC STATE,=C'CA'
BE STATECA
CLC STATE,=C'WA'
BE STATECA
...
CODE FOR PROCESSING THE STATES OTHER THAN CA OR WA GOES HERE
...
B EXITST
STATECA ...
CODE FOR PROCESSING THE STATES CA OR WA GOES HERE
...
EXITST EQU *
Some notes, as you're probably not familiar with IBM assembler. CLC is an instruction that stands for Compare Logical Character. BE is Branch on Equal. (Other conditional branches include BL, Branch on Low, and BH, Branch on High.) The lone B instruction is an unconditional Branch. It is a GOTO.
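If it helps, here is a rough high-level equivalent of that branching logic sketched in Python. The function and return values are purely my own illustration, not anything from the actual program:

```python
def process_record(state):
    """Rough high-level equivalent of the CLC/BE/B sequence:
    two compares that branch to the same routine, with the
    other-states code on the fall-through path."""
    if state == "CA" or state == "WA":
        return "CA/WA routine"        # the STATECA path
    else:
        return "other-state routine"  # the fall-through path
```

In the straight assembler, of course, the "else" work is the fall-through and the CA/WA work sits at the STATECA label, reached by the two Branch on Equal instructions.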
Now I show that same coding using the structured macros:
IF CLC,STATE,EQ,=C'CA',OR,CLC,STATE,EQ,=C'WA'
...
CODE FOR PROCESSING THE STATES CA OR WA GOES HERE
...
ELSE
...
CODE FOR PROCESSING THE STATES OTHER THAN CA OR WA GOES HERE
...
ENDIF
As you can see, the code is a lot easier to read. But there is something a little weird. Notice the CLC on the compare statement. Why is it there? Simply put, we're still coding in assembly language. It doesn't look like it, because the "IF" statement is actually a macro, and macros do not execute. During assembly they get converted into actual assembler statements that will be executed. So the CLC is specifying the type of compare that is to occur. In a higher-level language you wouldn't need this, because the compiler would figure out what kind of compare was needed. This leads to what I initially feared: that these macros would produce code less efficient than what I had been writing. They don't, and in fact the really great thing about these macros is that you only have to look at the generated code once to see the result; after that you can always predict exactly what the underlying code will be. Branching statements are still generated, but their labels are practically gibberish, and it doesn't matter. When I was writing code by hand I would always try to make the branch labels clever, but I couldn't always succeed, especially in a more extensive program.
One thing I should point out. Though the goal of structured programming is to never have a GOTO, it frequently happens that a well-placed GOTO (in assembler, an unconditional Branch instruction) not only makes programming easier, it also makes the code more readable. The classic example is when you end up nesting a lot of IF statements and get to a point where you need to exit.
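The nested-IF situation can be sketched in a high-level language too. Here is an illustrative Python example (the field names are my own): each early return plays the role of that well-placed unconditional Branch, keeping the checks flat instead of nesting IF inside IF inside IF:

```python
def validate(record):
    """Early exits instead of deep nesting: each return is the
    structured stand-in for a well-placed GOTO to the exit label."""
    if record is None:
        return "missing"
    if "state" not in record:
        return "no state"
    if record["state"] not in ("CA", "WA"):
        return "out of region"
    return "ok"
```

Written with strict nesting, the same logic would be four IF levels deep, with the success case buried at the bottom.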
Concept 14 Macros
So where did these macros originate? Were they copyrighted? Where did Yevin find them?
They originated from IBM, and eventually I would learn they were called the Concept 14 Macros. You wouldn't know either of those facts by looking at the macro source code. I have no idea exactly where he found them, but I know at some point they were distributed by IBM as part of some of their operating systems or associated software, which needed to be generated for a specific installation. It turns out the assembler is always included in every IBM operating system. The operating system itself is written in assembler, and it needs to be assembled to generate it. This is not necessarily a one-time task: when maintenance is applied to the system, it needs to be assembled again. So were they copyrighted? Apparently not. The macro source lacked the usual IBM headers that would have stated that fact. So IBM didn't care to copyright them, and they didn't maintain them either. Over time I did find a bug or two, but they were easy to program around. And, finally, I have no idea where Yevin found them. He told me that wherever he was employed doing assembly language work, he brought them with him.
When Ron Jensen found out what we were doing, he jokingly said, "Gene and Ron have their own language!" Ron Jensen himself did not embrace the macros because he was too set in his ways. I vividly recall that when we finally got terminals, we installed one on RJ's desk to show him how easy it was. But when the time came that he needed to actually write a program, he'd head to the computer room and sit down at one of the keypunch machines! Once we got the terminals, I never again sat at one of those machines. All the card decks I had were converted so I could edit them via a terminal.
These macros became very important when I later had my own computer business, because hiring assembler programmers directly had become difficult to almost impossible. But because the macros made assembler so much easier, I found I could hire COBOL programmers who were starting to lose their jobs, and they could very quickly learn assembler. And the comment from them was always the same: "I thought assembly language was supposed to be difficult?"
When You Need To Sort a Table
Occasionally you would need to load a dynamic table into memory which you would then manipulate, and sometimes you'd want to sort that table. One of the most common algorithms for sorting is the bubble sort. It's pretty easy to code, but horribly inefficient most of the time. Notice I said most of the time, because I'll be coming back to that. When I was trying to learn Z80 assembler, I ran across a routine using the Shell-Metzner algorithm, one of the fastest, so I coded it in IBM assembler. I mentioned this to Jim, who said he thought Ron Jensen could use that routine, so I cataloged it as a subroutine. I was hoping to write a binary lookup routine as well, but I struggled to make the algorithm work, and cases would arise where I would not locate a table entry that was clearly there!
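For readers who have never seen it, here is a minimal Python sketch of the Shell sort idea. I'm using the simple halving gap sequence here for clarity; the Shell-Metzner variant I actually coded differs in its bookkeeping, but the principle is the same:

```python
def shell_sort(table):
    """Sort a list in place using diminishing gaps.
    The final pass (gap of 1) is just an insertion sort, but the
    earlier large-gap passes have already moved entries close to
    their final positions, which is why this beats a bubble sort."""
    gap = len(table) // 2
    while gap > 0:
        for i in range(gap, len(table)):
            entry = table[i]
            j = i
            # Shift gap-spaced entries right until entry fits.
            while j >= gap and table[j - gap] > entry:
                table[j] = table[j - gap]
                j -= gap
            table[j] = entry
        gap //= 2
    return table
```

A bubble sort, by contrast, only ever moves an entry one position per comparison, which is what makes it so slow on large tables.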
When Ron Yevin arrived, we got to talking about a binary lookup routine after I told him I had a pretty good sort routine. He had never written a binary lookup, but he knew the key: "The secret to writing a binary lookup is pretending that the table has a perfect 2^N entries. This makes the division by 2 always perfect. But you must check each time if your pointer is now beyond the end of the table." I wanted to write it myself, but Yevin went immediately to code one. What we then did was make the sort and binary lookup parameters match, because they would usually be used in unison.
Below is the commented code from the beginning of each of the routines:
CSORT CSECT
TITLE 'TABLE SORTING SUBROUTINE '
***********************************************************************
* CALL CSORT,(TABLE,NUMBER,ENTLENG,ASCEND,KEYLENG,KEYPOS),VL *
*TABLE DS CLXXX TABLE TO SORT *
*NUMBER DC F'NNN' NUMBER OF ENTRIES IN TABLE *
*ENTLENG DC F'NN' LENGTH OF EACH TABLE ENTRY (LRECL) *
*ASCEND DC F'0' ASC/DES SORT FLAG (0= ASC, OTHER=DES) *
*KEYLENG DC F'NN' LENGTH OF SORT KEY *
*KEYPOS DC F'NN' RELATIVE KEY POS IN RECORD (DEFAULT=1) *
***********************************************************************
CBINARY CSECT
TITLE 'BINARY LOOKUP SUBROUTINE '
***********************************************************************
* CALL CBINARY,(TABLE,NUMBER,ENTLENG,SEARCH,KEYLENG,KEYPOS),VL*
*TABLE DS CLXXX TABLE TO SORT *
*NUMBER DC F'NNN' NUMBER OF ENTRIES IN TABLE *
*ENTLENG DC F'NN' LENGTH OF EACH TABLE ENTRY (LRECL) *
*SEARCH DC CLnnn SEARCH FIELD *
*KEYLENG DC F'NN' LENGTH OF SORT KEY *
*KEYPOS DC F'NN' RELATIVE KEY POS IN RECORD (DEFAULT=1) *
***********************************************************************
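Yevin's trick can be sketched in Python. This is my own illustration of the idea, not the actual CBINARY logic: the step sizes are pure powers of two, so every halving is exact, and each probe first checks that it hasn't run past the real end of the table:

```python
def binary_lookup(table, key):
    """Find key in a sorted table, pretending it has 2**N entries.
    Returns the index of key, or None if it isn't present."""
    # Smallest power of two >= the real table size.
    step = 1
    while step < len(table):
        step <<= 1
    idx = -1  # highest index known to hold a value <= key (none yet)
    while step:
        # Bounds check: the probe may fall beyond the real table.
        if idx + step < len(table) and table[idx + step] <= key:
            idx += step
        step >>= 1  # halving a power of two is always "perfect"
    if idx >= 0 and table[idx] == key:
        return idx
    return None
```

Because the step is always a power of two, there is never a remainder to fumble; the price is that bounds check on every probe, exactly as Yevin said.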
These two routines would become more valuable over time because, as memory costs decreased, succeeding machines had more core memory available for applications. The once unfathomable 16-million-byte addressing range (24 bits) of the SYSTEM/360 and 370 was increased to 2 gigabytes with the release of the 370-XA system, which extended the addressing range to 31 bits. And on the last mainframe I worked on, a zSeries mainframe in 2000, IBM further extended the addressability of the architecture to 64 bits, which is 16 exabytes. With that kind of memory, if your table entry was 1000 characters, you could sort and search some 18,446,744,073,709,551 entries.
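For the curious, that entry count is just the 64-bit address space divided by the entry length, which is easy to check:

```python
# 64-bit addressing gives 2**64 addressable bytes (16 exabytes).
address_space = 2 ** 64
entry_length = 1000  # bytes per table entry, as in the example above
max_entries = address_space // entry_length
print(max_entries)   # 18446744073709551
```

Of course, no real machine of that era shipped with anything close to 16 exabytes of memory; the point is that the architecture no longer imposed the limit.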