By: Paul S. Cilwa
Topics/Keywords: Programming for Microsoft Windows; Visual Basic 6.0
Preliminary material for this free online course in Visual Basic 6.0.
The whole concept of programming can be very intimidating to non-programmers. Hey, sometimes it's intimidating for programmers, too! But, it needn't be, because it's really as simple as giving directions, something you probably do at least once a week, anyway.
There are five basic steps in programming:
Decide what you actually want to achieve
Decide what parts you want the computer to do for you
Phrase what you want the computer to do in language it understands
Make sure it really does understand
Allow the computer to do what you asked
Now, in the "old days" of a decade or so ago, no one person performed all these steps. However, in this era of downsizing, it is quite common for a PC programmer to perform the first four, while the end user performs the fifth; and, sometimes, the programmer is his or her own end user!
What Do You Want To Do Today?
The first step in writing a successful computer program is to decide what you actually want to accomplish. That is not the same as deciding what you want the computer to do! Look at this subtle difference: If you are designing a home stereo system, the end goal is not for the system to play music; it is for the system's owner to enjoy music played by the system. Too complex a user interface to the system may, for many users, get in the way of that goal.
So, the true purpose of most business computer programs is to allow the end user to do his or her job more efficiently.
However, to do this, you'll have to know what it is the end user does now. That usually means interviewing the person or persons who will be your end users. Then you ask yourself: What is it they do, or need to do? What parts can the computer do for them? What parts would they prefer to do for themselves? (And don't forget to step back and look at the big picture. It's quite possible that it would be easier and more effective to replace a larger part of the office task than the end user originally requested.)
What Parts Do You Want To Program Today?
Once you know what the end user needs the computer to do, you can begin designing the computer application or tool that will make that possible. This means, for us, picturing the functions the end user needs in the form of a standard Windows application.
And it is very important this application have the appearance of a "standard Windows application." How many audio enthusiasts would buy CD players that looked like water balloons, or toasters? Most people like components to blend in with each other and work similarly; and every Windows application—whether you anticipate it or not—is going to be part of the whole suite of Windows applications the end user owns. The end user will want them to work together in ways you never dreamed of. So, you need to support that.
If possible, you also don't want to require end users to be trained in your new application. If the end user knows how to do his or her own job, and is accustomed to using Windows (as most people are, these days), he or she should be able to sit right down and use your application, right out of the package (or right off of the corporate network).
Now, whatever the computer is going to do, is going to require information…and this information has to come from somewhere. If your end user wants to print payroll checks, for example, you will need the information that appears on the check: The employee's name and social security number, and the amount he or she is to be paid. You'll also need to print the calculated taxes, deductions, and vacation days earned.
The employee's name and social security number will be on file, of course, along with the rate of pay and the number of hours worked…but how did that information come to be on file? Will the end user need to convert information from the reports of an old personnel system, or enter them on a form on screen? What information gets calculated, what information must be entered, what information will be obtained from another source, such as the computer clock or the Internet?
The formal way of doing this is to create what's called a data flow diagram. This is an absolute requirement of a large-scale system. In smaller applications you can possibly track this information without creating a formal diagram, but you still need to have the information in your head—or you may discover, when it comes time to print that social security number, that the field must remain blank until you've made "enhancements" to the system.
What Do You Want To Say Today?
Have you ever tried to explain to a child how to cook an egg? There are certain things most children know—where the eggs are, for example, and that the stove is hot—and many things they do not know, perhaps how to turn on the burner or what it means to boil water or how long an egg should be in boiling water. Sometimes, for safety's sake, we might over-explain some things. And sometimes, we may get unexpected results when we inadvertently under-explain something.
The reason, though, that we have to explain in such detail is that children have limited vocabularies compared to most adults; if we want that egg cooked properly, we must make sure not one single step is omitted.
Well, personal computers are like children: PCs, too, know how to do some things and not others; PCs can also be taught how to do new things on the basis of the things they already know. In each case, our job as teacher is to make sure we do not omit any steps, and that each step we describe is based on something the PC—or child—already knows.
Yes, putting it simply: "programming" is actually teaching. When you write a program, you are really teaching the computer how to do a job for you.
Now, just as, in order to teach a child, you must know what he or she already knows, to teach a computer, you must know what it already knows. We call that list machine instructions, and it is a fairly small list. Included in this list are feats such as adding two numbers, comparing two letters, and copying values from one place to another. Everything we get computers to do, is built up from these machine instructions.
Programming in machine language is all but a lost art; it is extremely tedious, difficult, and time-consuming. Fortunately, the first computer programmers did all that work for us: It's sort of like teaching a child how to cook an egg, after that child has been taught by someone else how to talk, and to not pee while in the kitchen.
So, we get to use bigger words, and to describe higher-level steps—but it's still detailed instructions we must give.
What Do You Want To Test Today?
After your program is written, it must be tested. This is simply the process of making sure each part of the program works in the way you expected…and, later, that the application does the job the user expected it to do.
If a part of the application does not do what you wanted, that's called a "bug"; you then must debug the application. All modern development environments provide extensive debugging assistance.
In the old-fashioned style of programming (often called "procedural programming"), it wasn't unusual to spend 20% or more of the planned development time locating and fixing program bugs. With object-oriented programming, however, your debugging actually becomes part of the programming process. As a result, while the application still needs to be tested, it is not unusual for it to actually work as expected, first time, every time.
So, why do so many commercial programs crash on occasion? The primary reason is that they were not written in true object-oriented fashion, even though the programmers may have thought they were. Much of Microsoft's older published C++ code, for example, shows that its programmers didn't understand the new concepts: it includes public data members, a lack of reference arguments, a plethora of macros, and other techniques identified as troublesome by the rest of the object-oriented programming community. Even their newer code is composed of many hard-to-track macros and designs that limit object-oriented concepts such as inheritance.
The secondary reason is that, no matter how object-oriented your application may be, if another, badly-written piece of code causes Windows to crash, your application is going to crash with it.
The above refers to coding tests only. Alpha tests are performed by another person, preferably someone who doesn't know how the application is supposed to work. He or she will be more likely than you to find bugs, simply by trying combinations of things you never thought of. Beta tests are performed by one or more actual end users in the field of battle—that is, trying to do real work. A beta tester knows the application may crash, and doesn't put too much reliance on it; but real use will turn up any problems, whether in coding or design, in time to be addressed.
What Do You Want To Distribute Today?
When the application is completed, it must be deployed. It's been several years since a programmer could copy an executable file onto a floppy disk and hand it to end users. Nowadays, any application we write will reference half-a-dozen or more ancillary programs, libraries and tools; and all must be included on the distribution media, along with a setup program to make sure all the components are placed on the end user's machine where they belong.
Most modern development environments include some kind of setup-creating program or "wizard" to assist in this operation, preventing it from becoming a project in itself. Visual Basic 6.0, for example, came with the Package and Deployment Wizard.
This section is basically a description of how computers work; so, if you already know this, you can safely skip it.
No matter how well-trained the computer is when you get it, there are—at least, so far—a few elementary components that you really must know, to program successfully. Let's get these out of the way.
All computers contain memory, and all programs, at their core, accomplish what they do by moving values around in that memory, from one place to another. What goes into that memory? Data…and commands…all in the form of numbers.
Memory comes in two types, transient and persistent. Most transient memory is RAM—Random Access Memory—and contains information only while your computer is powered up. Thus, it is used for running programs, but not for permanent storage. Persistent memory is usually implemented by a hard disk, a device that retains information in the form of magnetic patterns, much like a videocassette.
Whether transient or persistent, all memory works by storing 0s and 1s…nothing more. Each one of those memory elements is, therefore, simply an on/off switch, called a bit.
All that memory must be accessed by a running program, so that the program can manipulate particular values stored in them. (You wouldn't want the dollar amount and number of earned vacation days on your paycheck to get reversed, would you?) This is done by grouping the bits into components called bytes and assigning addresses to each.
The addresses are assigned to memory starting from zero. That is, the first byte of a block of memory is byte 0; the next is byte 1; and so on, up through the last byte installed in the machine.
Now, while each byte of memory has an address, it also has a value—and you don't want to confuse the two. The address of a byte of memory is physical; it has to do with the wiring of the computer and the placement of the memory chips. The value of a byte is the number it contains. That value might be a number; it might be an encoded letter or color; it might even be a command. As a program runs, the values of the bytes in memory can and will change; the addresses of those bytes remain fixed.
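To make the address-versus-value distinction concrete, here is a tiny sketch. (Python is used here purely for illustration, since we haven't met Visual Basic syntax yet; a bytearray's indexes stand in for addresses.)

```python
# A tiny block of "memory": six bytes, all initially holding the value zero.
memory = bytearray(6)

# The index (0, 1, 2, ...) plays the role of the address;
# the number stored at that index is the value.
memory[4] = 25            # store the value 25 at address 4

# The value at an address can change as the "program" runs...
memory[4] = memory[4] + 1

# ...but the address itself never moves.
print(memory[4])          # 26
print(list(memory))       # [0, 0, 0, 0, 26, 0]
```

Notice that "address 4" meant the same storage location before and after the change; only the value stored there was different.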
For simplicity's sake, I am not addressing the issue of virtual addressing here. If you already know about virtual addressing, what are you doing, wasting your time on this introductory material?!
Now, it would be quite impossible for anyone to remember what is in each of millions of addresses. Which one has the number of hours the employee worked? Which one has the rate of pay? How could anyone write a program to manipulate the values in those addresses, if he or she couldn't remember which address was which?
The concept of variable names comes to the rescue. Why not tell the computer that address 4, for example, is the rate of pay, address 5 is the number of hours worked, and address 6 will be the check amount? By using names instead of numbers, we can write our directions out much more readably; and, since the computer can do the translating job, we don't even have to worry about that!
In fact, why should we worry about the addresses at all? When we need a variable, the computer will be perfectly happy to locate an unused memory location for us and associate the name we give with the address of that location.
Related to the concept of variable names is the array. This is a series of contiguous memory addresses that are referenced by a single name and an index. For example, you might use an array called Temperatures with indexes from 1 to 365, to track the highest temperature of each day of the year.
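Here is a small Python illustration of both ideas (the variable names echo the payroll example above and are purely hypothetical; note that Python arrays are indexed from 0, where the text's Temperatures example ran from 1 to 365):

```python
# Named variables instead of raw memory addresses: the language
# finds an unused location and associates the name with it for us.
rate_of_pay = 12.50
hours_worked = 40
check_amount = rate_of_pay * hours_worked
print(check_amount)       # 500.0

# An array: many values under a single name, selected by an index.
# A year's worth of daily high temperatures (dummy data).
temperatures = [0] * 365
temperatures[0] = 72      # high temperature for day 1
temperatures[364] = 68    # high temperature for day 365
print(temperatures[0])    # 72
```

The point is that nowhere did we need to know which physical addresses held rate_of_pay or the 365 temperatures; the names and indexes do all the bookkeeping.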
When you are using a computer to help balance your checkbook, it wouldn't surprise you to know the computer was working with numbers. But—even when it is displaying the name of the person to whom you've written a check, it is doing so via numbers. How? By encoding everything as numeric values. For example, you could code the alphabet so that "a" = 1, "b" = 2, and so on. (Those aren't actually the codes the computer uses, but the idea is the same.) When text is displayed on your monitor, it is displayed in a certain color, against a different color background—and colors can also be coded as numbers. Where on your screen the text is to be displayed, is specified by coordinates; and, of course, those are expressed as numbers, as well.
As it turns out, computer memory can only store numbers—but everything can be coded as numbers, and the computer can do the coding and decoding very quickly; so that's not a problem.
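You can see this coding at work directly in Python (again, used only for illustration). Modern computers use a standard character code rather than the toy a=1, b=2 scheme:

```python
# Every character has a numeric code; ord() and chr() convert
# between the character and its code.
print(ord("a"))   # 97 (the real code, not the toy a=1 scheme)
print(chr(97))    # a

# Colors can be coded as numbers too, e.g. red/green/blue
# components, each from 0 to 255.
red = (255, 0, 0)

# Screen positions are just coordinate numbers.
position = (120, 45)   # 120 across, 45 down
```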
The great mathematician Alan Turing was the first person to realize that commands could also be stored as numbers—and that, therefore, commands and data could be kept together in any storage medium that would hold numbers. The commands work in context. For example, a simple program might look like this in numbers:
3 5 2 2 4 5
A machine that could "run" this "program" needs only three assumptions built into it:
It has an "accumulator", a place for arithmetic results to be stored
It has a "program counter" that keeps track of which command is to be executed next
The first entry is always a command; other numbers are determined to be commands or codes based on the previous commands executed
Now suppose we have a list of commands—"machine codes"—that looks like this:
1: Add the number following to the contents of the accumulator
2: Subtract the number following from the accumulator
3: Add the two numbers following, put result in accumulator
4: Multiply the two numbers following, put result in accumulator
5: Stop execution
That means that, when execution starts, this will happen:
The program counter, initially starting at the first value, expects a command and finds a "3". This command causes the next two values, "5" and "2", to be added together, putting a "7" in the accumulator. The program counter adjusts itself to point to the 4th value.
The 4th value, a "2", says to subtract the following value, "4", from the accumulator (which will leave a "3" in there). The program counter is advanced to the position of the last value.
The last value, "5", causes the whole operation to stop running, leaving the final result, "3", in the accumulator where, presumably, someone will read it.
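The walkthrough above can be sketched as a short simulator. (Python, for illustration only; it assumes command codes 1 through 4 for the four arithmetic commands in the order given, with 5 meaning stop, as in the walkthrough.)

```python
def run(program):
    """Simulate the toy accumulator machine described above."""
    accumulator = 0
    pc = 0  # program counter: index of the next command
    while pc < len(program):
        command = program[pc]
        if command == 1:       # add the next value to the accumulator
            accumulator += program[pc + 1]
            pc += 2
        elif command == 2:     # subtract the next value from the accumulator
            accumulator -= program[pc + 1]
            pc += 2
        elif command == 3:     # add the two following values, result to accumulator
            accumulator = program[pc + 1] + program[pc + 2]
            pc += 3
        elif command == 4:     # multiply the two following values, result to accumulator
            accumulator = program[pc + 1] * program[pc + 2]
            pc += 3
        elif command == 5:     # stop execution
            break
    return accumulator

# The sample program from the walkthrough: add 5 and 2, subtract 4, stop.
print(run([3, 5, 2, 2, 4, 5]))   # 3
```

Notice how the same number means different things in context: the first 2 is data (an operand of the add), while the second 2 is the subtract command—exactly Turing's insight.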
Now, instructions can also place values right in the flow of instructions, giving the whole mechanism an incredible richness of possibilities. And, indeed, as today's computer programs prove, with their ability to figure taxes, remove scratches from old photos, place the weatherperson in front of a cloud image, and record and play videos, these possibilities are endless. And all those things are built up of the kind of instruction set presented here.
Typical CPUs have a set of two or three hundred instructions.
Our final concern in this chapter is how the numbers are stored. We humans store numbers in our heads by noting how many fingers would be required to represent them. Our numbering system, based on fingers, has digits from zero to one less than the number of fingers, and represents larger values by placing digits in columns that represent powers of ten. The first column is the number of 10⁰ values (ones), the second is 10¹ values (tens), the next is 10² values (hundreds), and so on.
An electronic component that can store ten discrete values (from zero through nine) turns out to be very expensive to make, and not very reliable. However, an electronic circuit that can store just two values—on and off, standing in for zero and one—can be made very cheaply, and quite small (a requirement if you want to have millions of them!). Thus, modern computers store all values in Base 2, which we call binary.
In Base 2, there are only two "digits", zero and one. The word "digit" refers to fingers and therefore inherently implies Base 10; so a different word was needed to describe Base 2. The word "bit" was coined from the phrase "binary digit".
The first column of bits represents the number of 2⁰ values (ones); a second column holds the number of 2¹ values (twos); the next holds the number of 2² values (fours); and so on. Eight bits can hold values from zero to one less than 2⁸—that is, from 0 to 255 (Base 10).
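Python can do the place-value arithmetic for us, which makes the column scheme easy to check:

```python
# Each column of bits is a power of two: ones, twos, fours, eights, ...
bits = "1011"            # 1*8 + 0*4 + 1*2 + 1*1
value = int(bits, 2)     # interpret the string as a Base 2 number
print(value)             # 11

# Eight bits span 2**8 = 256 distinct values: 0 through 255.
print(2**8 - 1)              # 255
print(int("11111111", 2))    # 255
```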
A single bit holds such a small amount of information that one seldom needs to be accessed by itself; therefore, modern computers do not provide instructions to do so. Bits are grouped together, somewhat arbitrarily, into groups of eight called bytes. (Bytes don't have to be composed of eight bits; but on all modern computers they are.) Bytes, in turn, are grouped into pairs and fours for various purposes: two bytes are usually termed a word, and four bytes a doubleword. These terms aren't used in Visual Basic, however.
When you do need to work with a single bit, the computer has to access the whole byte in which the bit is located. The bit of interest is then "masked" from the other ones using a technique called Boolean arithmetic. This is a set of operations that has no analog in ordinary Base 10 arithmetic; it works directly on bits, and allows us to do some pretty cool things. We'll cover Boolean operations in more detail later on.
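As a preview, here is a minimal sketch of masking in Python (illustrative only; the 0b prefix writes a number in binary, and &, |, and ~ are the Boolean AND, OR, and NOT operations):

```python
flags = 0b10110010   # one byte holding eight independent on/off bits

# To test bit 4 (place value 16), AND the byte with a mask that has
# only that bit set; a nonzero result means the bit was on.
mask = 0b00010000
print(bool(flags & mask))   # True

# Turn the bit off: AND with the inverted mask.
flags = flags & ~mask
print(bool(flags & mask))   # False

# Turn the bit back on: OR with the mask.
flags = flags | mask
print(bool(flags & mask))   # True
```

The mask leaves every other bit in the byte untouched, which is exactly why the technique is useful.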
In any case, you now know enough to begin programming with Visual Basic. In the next chapter, we'll start simply, by starting the Visual Basic Integrated Development Environment. Click the Next button, below, to continue!