3 – 02 L Intro To Comp HW V1 RENDER V1

Optimization is all about eliminating instructions that are not needed, while still getting the result you expect from your program. The fewer times a program has to access memory and perform calculations, the more efficient it will be. Optimization isn't just about shoving your code through some black box until its performance increases; it's about knowing how a computer works and executes your code, and using that knowledge to your advantage.

Besides a storage medium like your hard drive, a computer basically has two parts that are relevant to running your programs: the CPU and RAM. The CPU, or central processing unit, has a section called the control unit, which carries out the instructions contained in your code. Another section, known as the arithmetic logic unit, calculates mathematical and logical expressions. RAM, or random access memory, is what your program uses to store your variables and instructions to support the execution of your program. It is a storage medium like your hard drive, but it has much faster read and write times and is volatile, which means that when you turn your computer off, all the data disappears. Imagine if your hard drive did that every time you turned your computer off. In essence, the control unit in the CPU reads the next instruction in your code and executes it by writing data to RAM, reading data from RAM, or having the arithmetic logic unit perform a calculation.

Unlike Python, when you finish writing a program in C++, it needs to be compiled before you can execute it. The compiler rewrites your code into a set of instructions that the CPU can understand, called machine code, which is basically the language of the CPU. So a compiler is acting as a translator between how you understand and write code, and how the CPU understands and reads code. For example, you might write the line int x = 5. The compiler would turn this line of code into machine instructions (a rough sketch appears at the end of this section), and the CPU would understand those instructions to mean: store the value 5 at a specific location in RAM that can be accessed with an address tied to the variable x. Another instruction could retrieve the value of x by asking the CPU to read the value stored at the address tied to x. And another instruction could take the value stored at the address for x, increment it by 10, and then update the new value in memory. All of these instructions take time to execute, and some operations, like trigonometric functions and the branches of an if statement, are known to be particularly inefficient.

Let's take a series of if statements: an assignment of the variable y based on the value of x, written as two separate if statements with complementary conditions. This code is inefficient because it causes the CPU to compare the value of x twice. However, if the first condition is false, the second condition is automatically true. Therefore, an optimization would be to rewrite the if statements as a single if-else statement, as in the sketch below. This version of the code only needs to do one comparison. You might be thinking that such a small change won't make much of a difference in speed, and with a single occurrence like this, it probably won't. But imagine if these statements were inside a for loop that ran thousands or millions of times. Small inefficiencies definitely start to add up. Understanding how the computer operates helps you determine which calculations might be slowing your program down.
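The on-screen code from the video isn't included in the transcript, so the sketch below uses placeholder conditions and values to show the pattern being described: the first version compares x twice with two separate if statements, while the second version gets the same result with a single comparison.

```cpp
#include <iostream>

int main() {
    int x = 12;  // placeholder value; the video's actual numbers aren't in the transcript
    int y = 0;

    // Version 1: two separate if statements with complementary conditions.
    // The CPU compares x against 10 twice, even though the second
    // condition is automatically true whenever the first one is false.
    if (x > 10) {
        y = 1;
    }
    if (x <= 10) {
        y = 2;
    }

    // Version 2: a single if-else statement.
    // Only one comparison is performed; the else branch covers the
    // case the second if statement used to check.
    if (x > 10) {
        y = 1;
    } else {
        y = 2;
    }

    std::cout << "y = " << y << std::endl;
    return 0;
}
```

On its own, the saved comparison is negligible, but if either version sits inside a for loop that runs millions of times, the extra comparison is also executed millions of times, which is where the difference starts to show up.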
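Going back to the compilation example: the actual machine code depends on the CPU architecture, the compiler, and the optimization level, so the comments below are only a rough, x86-64-flavored illustration of the load, calculate, and store steps described above. The bracketed names stand in for the RAM addresses tied to each variable; they are not the output of any particular compiler.

```cpp
int main() {
    int x = 5;    // roughly: mov [addr_of_x], 5    ; store the value 5 at the address tied to x
    int y = x;    // roughly: mov eax, [addr_of_x]  ; read the value at x's address into a CPU register
                  //          mov [addr_of_y], eax  ; write that value to the address tied to y
    x += 10;      // roughly: mov eax, [addr_of_x]  ; read x from RAM
                  //          add eax, 10           ; the arithmetic logic unit adds 10
                  //          mov [addr_of_x], eax  ; write the new value back to x's address in RAM
    return x + y; // keeps x and y in use so the example compiles without warnings
}
```

Each of those memory reads and writes is an instruction the CPU has to execute, which is why cutting unnecessary memory accesses and calculations makes a program faster.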
The more insight you have into what is happening inside the computer, the more successful you'll be at increasing the efficiency of your programs.
