3 Fantastic and Influential Women in Computer Science

by Imaani Choudhuri '2021

Did you know that Wikipedia has over 300 pages about women in computer science? And yet most people can only name a handful, if they know any at all. If you have time and want to feel inspired, check out the hundreds of women computer scientists listed on Wikipedia. For this article, we’ll take a look at three women whose inventions shaped the field of computing systems as we know it today.

Grace Hopper

Commodore Grace M. Hopper, USN

You may have heard of Grace Hopper, but did you know that she helped pioneer the fields of programming languages and compilers? After earning her PhD at Yale and becoming a professor, she decided to join the Navy and was tasked with operating the Mark I computer. This computer was essentially an enormous, eight-foot-tall calculator that communicated only through punched paper tape. Programmers were given a book containing the codes the machine understood, hand-copied routines into notebooks, and often shared those notebooks with one another.

To better understand what Admiral Hopper’s compiler did, let’s use the example of computing the sine of an angle with a loop. Before her compiler, you would have to find a code book that had the sine loop and copy it onto the tape every time you wanted to compute a sine. If you forgot a line or misread a B as a 13 – good luck!
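For a sense of what programmers had to copy, here is roughly the shape of such a sine routine, written as a short Python sketch (the Mark I spoke its own machine codes, of course – this is just the modern shape of the loop, approximating sine by its Taylor series):

```python
# Approximate sin(x) with its Taylor series: x - x^3/3! + x^5/5! - ...
def sine(x, terms=10):
    result = 0.0
    term = x  # the first term of the series is x itself
    for n in range(terms):
        result += term
        # Each step folds in the next odd power and factorial factor,
        # flipping the sign: term *= -x^2 / ((2n+2)(2n+3)).
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return result
```

Every program that needed a sine had to carry its own hand-transcribed copy of a loop like this.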

Later, working on the UNIVAC I computer at the Eckert-Mauchly Computer Corporation, Admiral Hopper realized how useful it would be to have the program understand common subroutines instead of having to copy them in herself. She went on to develop the A-0 compiler, which used symbolic math and call numbers. (Small note: during this era, the term compiler referred to what we now call the linker/loader process.)

With the A-0 compiler, you could compute a sine simply by using the call number unique to the sine function. The UNIVAC I stored common programming functions on magnetic tape, so you could store the sine loop there, ready to be called by number later. This practice persists today – you’ve probably never written a loop that computes sine from its Taylor series; you’ve probably just imported a math library and used its sine function.
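As a loose analogy – not Admiral Hopper’s actual implementation – you can picture the call-number idea as a lookup table of routines stored once and invoked by number, sketched here in Python with invented call numbers:

```python
import math

# Routines are stored once (on the UNIVAC's tape in reality; in a
# dictionary here) and fetched by call number instead of re-copied.
SUBROUTINE_TAPE = {
    1: math.sin,   # call number 1: the sine routine
    2: math.cos,   # call number 2: the cosine routine
    3: math.sqrt,  # call number 3: the square-root routine
}

def call(number, argument):
    """Fetch the stored routine by its call number and run it."""
    return SUBROUTINE_TAPE[number](argument)

print(call(1, 0.5))  # computes sin(0.5) without copying the sine loop
```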

Keep in mind that the A-0 language was still mainly symbolic math. Admiral Hopper noticed a divide amongst users of the compiler:

An example of a program in COBOL

“One was people who liked using symbols – mathematicians and people like that. There was another bunch of people who were in data processing who hated symbols, and wanted words…Why not give them a word-oriented language? And that was part of what was behind Flow-Matic B-0, which became one of the ancestors of COBOL.”

Admiral Hopper led the development of another compiler, the Flow-Matic B-0 compiler, which incorporated English words and statements. As she mentioned, Flow-Matic became an ancestor of COBOL, a business-oriented programming language that uses English verbs and connectors (and is still used today!). These ideas helped shape programming languages into what they are today, which I’d argue is a mixture of readable words and symbolic math.

Admiral Hopper’s interview is fascinating, covering her opinions on everything from computer science education to distributed computing and the future of computer systems. Her achievements made programming more accessible and efficient, and her ideas will continue to be useful for years to come. I highly recommend you give it a read if you can, and I’d like to leave you with this valuable insight of hers:

“[People] totally fail to realize that everything I’ve ever done was not genius effect. It was all straightforward common sense at the time: how can you get this done?…I think I always looked for the easiest and best, most accurate way of getting something done. I don’t use high-level theory. I use very basic common sense – the whole original concept of the compiler is the most common sense thing I ever heard of.”

Patricia Selinger

Pat Selinger, image from IT History Society

If you’ve only worked with SQL once or twice, the inner workings of a database can seem quite magical. You just name some columns and conditions and, in an instant, you get results. How did the database get those results? You can thank Dr. Selinger for her invention of cost-based query optimization, which is now the standard among relational database systems. She also played a pivotal role in creating System R, the first database system to implement SQL, which introduced other influential design decisions alongside cost-based query optimization.

So let’s talk a bit about query optimization. Say you had a table books, with an id and a title column, and a table library, with an id column and a checked_out column. We want to know all the titles we can check out, so we issue the SQL query SELECT title FROM library, books WHERE library.id = books.id AND checked_out = False; to our database.

Now it’s up to the database to decide in what order to join tables and filter out rows; each such ordering is called a plan. One plan is to join the library and books tables first on matching id, and then throw out the rows that are checked out. But what if we first eliminate all the library rows that are checked out, and then join the rows that match on id? This can actually be more efficient, since the join sees fewer rows, and joins tend to be computationally expensive.
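To make the two plans concrete, here is a toy Python sketch (the table contents are invented for illustration) that executes both orderings on the example tables:

```python
# Invented rows for the example tables.
books = [{"id": i, "title": f"Book {i}"} for i in range(4)]
library = [{"id": i, "checked_out": i % 2 == 0} for i in range(10)]

# Plan 1: join first on matching id, then filter out checked-out rows.
joined = [(b, l) for b in books for l in library if b["id"] == l["id"]]
plan1 = [b["title"] for b, l in joined if not l["checked_out"]]

# Plan 2: filter the library first, then join the smaller result.
available = [l for l in library if not l["checked_out"]]
plan2 = [b["title"] for b in books for l in available if b["id"] == l["id"]]

assert plan1 == plan2  # same answer, different amount of work
```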

This is a pretty simple query, but you can imagine that with more conditions, tables, and order-by/grouping clauses, the number of possible plans increases drastically. (There are also different join algorithms to consider!) We’ve also established that some plans are more efficient than others – so how can we choose the best plan, or at least a good one? In Dr. Selinger’s words:

“The trick to cost-based query optimization is estimating a cost for each of the different ways of accessing the data, each of the different ways of joining information from multiple tables, and estimating the sizes of results…and so forth.”

The query optimizer in the big picture, image from Algorithm Wiki

So in the example we mentioned earlier, we can quantify how ‘good’ a plan is by estimating its cost. Let’s assume all the ids in books also appear in library, books has 4 rows, library has 10 rows, and 50% of the library is checked out. If we use a join algorithm that looks at all row pairs, join-then-filter makes us look at a row 44 times (4 × 10 = 40 pairs, plus a filter pass over the 4 joined rows), while filter-then-join costs only 30 looks (10 rows to filter, plus 4 × 5 = 20 pairs), so filter-then-join is the better plan.
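Here is that arithmetic as a quick sanity check, in a short Python sketch (the counting convention – one unit per row examined – is just the simple cost model used above):

```python
BOOKS = 4          # rows in books
LIBRARY = 10       # rows in library
CHECKED_OUT = 0.5  # estimated fraction of library that is checked out

# Plan 1: a nested-loop join examines every row pair, then the filter
# scans the join output (4 rows, since every book id appears in library).
join_then_filter = BOOKS * LIBRARY + BOOKS  # 40 + 4 = 44 row looks

# Plan 2: the filter scans library once; the join then only sees the
# estimated 5 surviving library rows.
surviving = int(LIBRARY * (1 - CHECKED_OUT))    # 5 rows survive the filter
filter_then_join = LIBRARY + BOOKS * surviving  # 10 + 20 = 30 row looks

print(join_then_filter, filter_then_join)  # 44 vs. 30 – filter first wins
```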

The query optimizer uses assumptions and statistical metadata to estimate plan costs just like this, building from simple plans up to more complex ones. Personally, I really love how neat and straightforward a design it is. If we know our data well, we can estimate costs and plan more cleverly. And building from simple to complex means the optimizer avoids exploring unpromising plans, which keeps the algorithm and its plans efficient.

Dr. Selinger’s interview is a great look at query optimization, from its beginnings to its current state of growth and improvement, and her take on the future of database computing and open-source development is inspiring and thought-provoking. (Also, check out her IBM Fellows page for more.) I’ll leave you with a great reminder from Dr. Selinger on the necessity of the query optimizer:

“The fact is that as customers use more and more shrink-wrapped packages or have ad hoc users who haven’t gone to SQL school for a year, there’s a real need to be able to do good query optimization. You can’t have database administrators running into the room, saying, “Don’t hit Enter yet. I’ve got to look at your query to see if it’s going to be OK.” Outstanding cost-based query optimization is critical to speeding application productivity and lowering total cost of ownership.”

Radia Perlman

Does the name “Mother of the Internet” ring any bells? You may be surprised to learn that Dr. Radia Perlman is widely regarded as such for her many contributions to the Internet, especially in routing and her invention of the Spanning Tree Protocol. Beyond that, she has also contributed to the field of computer security in areas like network security, encryption models, and certificate revocation.

Radia Perlman, 2009

Growing up, Perlman has noted, she hated computers (and still does!); alongside a strong inclination toward math and science, she enjoyed piano and creative writing. Perlman went on to study math at MIT while working part-time teaching kids how to program. After earning her bachelor’s and master’s in math at MIT, Perlman dropped out of grad school. In her words:

“I was shy and insecure, didn’t have an advisor, couldn’t imagine how to get started on a thesis, so I was somewhat relieved when an old friend suggested I leave grad school and join him at BBN, designing and implementing protocols for packet radio.”

After she presented a paper at a conference, a manager from Digital Equipment Corporation approached Perlman, noting that her clear solution to a routing problem had been largely ignored by everyone else there. She was invited to join Digital’s network architecture group, which she calls “the world’s best job.” It was at Digital that she developed the Spanning Tree Protocol (STP), along with working on other networking architectures.

Spanning tree protocol at work
The green and blue links make the tree and will be used for sending data – the red links will not. Image credit: GhosT / CC BY

STP solves a routing problem: delivering packets of data between switches, which are devices with unique IDs connected by links (similar to routers). In STP, the goal is to build a spanning tree over all the switches, rooted at the switch with the lowest ID, and to send packets along the paths of this tree. A spanning tree guarantees complete connectivity with no loops, and if a link between two switches goes down, the protocol can re-run to find a new spanning tree.

But how do we build such a tree? It’s quite simple, actually: every switch starts off thinking it’s the root and continuously advertises its path to the root to its neighbors. A switch changes its mind whenever it’s offered a path to a smaller ID than its current root candidate, and begins advertising that path instead. After enough time, the protocol converges: every switch knows a path to the same root, and packets are sent along those paths. A cool thing about networked algorithms is how they execute in a distributed, asynchronous way: every switch on the network communicates simultaneously and makes decisions independently, yet together they build a shared tree.
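Here is a hedged Python sketch of that election – a simplified simulation run in synchronous rounds on an invented four-switch topology, not the real STP message exchange:

```python
# Simplified spanning-tree root election: each switch adopts the
# smallest root ID it hears about from its neighbors.
links = {1: [2, 3], 2: [1, 3, 4], 3: [1, 2], 4: [2]}  # invented topology

# Every switch starts off thinking it is the root (distance 0, no parent).
state = {s: {"root": s, "dist": 0, "parent": None} for s in links}

changed = True
while changed:  # repeat rounds until no switch changes its mind
    changed = False
    for switch, neighbors in links.items():
        for n in neighbors:
            offer = state[n]  # the neighbor's current advertisement
            better_root = offer["root"] < state[switch]["root"]
            shorter_path = (offer["root"] == state[switch]["root"]
                            and offer["dist"] + 1 < state[switch]["dist"])
            if better_root or shorter_path:
                # Adopt the neighbor's root and route through that neighbor.
                state[switch] = {"root": offer["root"],
                                 "dist": offer["dist"] + 1, "parent": n}
                changed = True

# The parent pointers now trace a spanning tree rooted at the lowest ID.
for s in sorted(links):
    print(s, "-> root", state[s]["root"], "via", state[s]["parent"])
```

In the real protocol, the advertisements happen continuously and asynchronously; this loop just compresses them into rounds until every switch agrees.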

Perlman developed this idea while working at Digital, and after eight years in industry she went back to MIT for her doctorate in computer science. She describes, “This time it was a much better experience. I had self-confidence. It is much easier to become an expert at a field while working rather than just being a student.”

I highly encourage you to read Dr. Perlman’s interview to see how she overcame often being ignored in the workplace, and to hear her perspectives on balancing family life with school and work. She also talks about effective leadership and the qualities most important for earning respect and success in the field. Perhaps the most important takeaway from her experiences is the following:

“The important lesson about diversity isn’t about different body shapes and skin color. It’s about different ways of looking at the world. There are so many types of skills, and some of them are mutually exclusive. An ideal team has some of the people that dive right in and start churning out code, some people who have an amazing memory, and some people who can do whatever it is I do.”
