Description: In this lecture, we consider the nature of human intelligence, including our ability to tell and understand stories. We discuss the most useful elements of our inner language: classification, transitions, trajectories, and story sequences.
Instructor: Patrick H. Winston
PATRICK WINSTON: We might wonder a little bit about the nature of human intelligence, and we might reflect a little bit on the kind of intelligence we've been talking about in the past few weeks.
It's been an intelligence of sorts.
Those programs, support vector machines, boosting, they can do really smart things.
But the peculiar thing about systems that use those methods is that those systems don't have any idea about what they're doing.
They don't know anything.
So they don't give us very much of an insight into the nature of human intelligence.
And, after all, I'd like to have a model of human intelligence because, let's face it, we're the smartest things around.
So there are lots of ways to approach that question.
And we'll approach that question first from an evolutionary point of view.
Some scientists believe, me for instance, that we have a family tree that looks about like this.
Too small to see much of that, but the main point is that we haven't been around very long.
We humans have been around maybe 200,000 years, and the dinosaurs died out 60 million years ago.
So in the blink of an eye, we seem to have, more or less, taken over.
When you look at that family tree on a scale where you can see it, one of the characteristics is increasing brain size.
There we are on the left, chimpanzee on the right.
Clearly, mostly mouth, not too much brain in there.
That one down below is a reconstruction of one of those australopithecine bipedal apes from about 4 million years ago or so.
So we became bipedal a long time before we had much of a brain.
So you might think, well, maybe brain size has a lot to do with it, and I suppose it does.
So we can plot brain volume of our ancestors versus time.
So the picture I just showed you was from about 3 million years ago, I guess.
And then on the upper right hand corner, oh, that's not just us, that's also the Neanderthals.
Their brains might have been slightly bigger than ours.
So it isn't just brain size.
Here's what that guy looks like.
That's a Neanderthal, of course, on the left.
And that's one of us on the right.
Some conspicuous differences, big heads, the rib cage is kind of conical in shape, a large pelvis.
People like to make a lot of speculations about how they must have moved around.
But one thing is plain, they didn't amount to much.
They could make stone tools, but those stone tools didn't change much over tens of thousands of years.
And that was pretty much the story with us too, until something happened, probably in southern Africa, probably in a group of individuals, maybe less than 1,000.
And what's the evidence for that?
The evidence for that comes mostly from DNA studies with a lot of probabilistic assumptions and Monte Carlo simulations. But among the competing hypotheses for how we came to populate the world, it seems that there was a group of us, homo sapiens, in Southern Africa that got something nobody else had, and the highest-probability scenario is that we quickly took over.
That population of homo sapiens dominated the rest, went out of Africa, and within the blink of an eye did that sort of stuff.
What's that sort of stuff?
Those two paintings are from Lascaux about 25,000 years ago.
Paleoanthropologists, like Tattersall, take that as plain evidence that there was symbolic thought among the people who were around at that time, us homo sapiens.
That head is a carving, in mastodon tusk, of a woman from 25,000 years ago.
Also, plainly symbolic, people are making a lot of jewelry and doing self adornment.
The Neanderthals never seemed to do that.
That jewelry making seems to go back to Southern Africa, maybe 70,000 years ago.
People were puncturing seashells and using them as necklaces, apparently.
So something happened, and the paleoanthropologists, who write fascinating stuff, don't quite know how to talk about it other than to say it looks like we became somehow symbolic.
And somehow that has something to do with language.
So if you talk to Noam Chomsky, he will say-- let me get this precise, this is as near his quotation as I can get.
He thinks it was the ability to take two concepts and put them together to form a third concept without disturbing the original concepts and without limit, and each part of that's important.
The without limit part is what separates us from species that might be able to do that a little bit, but we can do it without any apparent limit.
So that's a linguist speaking.
He talks a lot about the merge operation, and combinators and language-- using terms foreign to us.
Better not use the term "combinator," it's kind of a computer science term.
But whatever it is, it seemed to happen about that time.
It didn't happen slowly, in proportion to brain size; it seemed to happen all of a sudden, in consequence of a brain that had grown big enough to be an enablement. But the capability was not what pulled evolution in that direction.
So, I believe, that whatever that was, that capability, enabled humans, us humans, to tell and understand stories, and that's what separates us from the other primates.
That ability to-- The symbolic ability, whatever it is, enabled storytelling and understanding.
And that's what all education is about and that's why our species is special.
So what we're going to talk about today is something you might think of as an instantiation of that hypothesis, one way of thinking about it.
And it's a way of thinking about what the linguists would call the inner language.
It's not the language with which we communicate, it's the language with which we think, which is closely related to the language with which we communicate, but may not be quite the same thing.
So many of you are bilingual.
Chris is bilingual.
Chris, have you ever had the experience of remembering that someone said something to you, but not remembering what language they used?
PATRICK WINSTON: How about [? you, Sian, ?] have you had that experience?
PATRICK WINSTON: What?
STUDENT: I always [INAUDIBLE].
PATRICK WINSTON: [INAUDIBLE] experience of having-- David, have you ever had that experience of remembering some conversation but not remembering the language in which it was cast?
STUDENT: Well, you remember something you usually don't know which language it was in anyway.
PATRICK WINSTON: You usually don't remember.
That's a common view.
You remember something was said, that there was a conversation that had some content, but if it's with a speaker of your own language and you're embedded in another place, you often don't remember what language the conversation was in.
Is that right, [? Wana, ?] you remember things like that?
STUDENT: People who [INAUDIBLE] speak in a certain language, so-- PATRICK WINSTON: Sometimes you don't have that confusion, she says, because you always speak to particular people in a particular language.
But many people report that they have that experience of not remembering which language something was said in.
Well, OK, so what are we going to do?
We need an inner language, and maybe we can start just by saying, let's have something that looks sort of familiar to us.
We have an object and it's supported by some other objects, so those are support relations.
That's one example of what we might call a semantic net.
It's a network that's got nodes and links, it's got-- it has some meaning.
That's where the word "semantic" comes from.
Well, we might have another example that looks like this.
There's Macbeth, there's Duncan, and Macbeth murders Duncan, then we also know, somehow, there's a kill involved as a consequence, and then ultimately, Duncan has a property.
And that property is the property of being dead.
So there's another semantic net recording something that happens in Shakespeare's Macbeth plot.
Now, we can decorate that a little bit.
So as to get a couple of other concepts in play.
First of all, the thing we've got already is combinators, a fancy name for those links that connect the nodes.
Another thing we've got is an opportunity for connecting the links themselves.
So the murder sort of implies the kill, and the kill leads us to conclude that the victim is dead.
So that is treating the links themselves as objects that can be the subject or object of other links.
So we call that process "reification." Now, in artificial intelligence, semantic nets were all over the early work, but if you have a big network that covers the wall, you need some way of putting a spotlight on some pieces of it.
So Marvin Minsky put a lot of technical content into that idea and created the notion of-- a notion that deserves another color here-- he suggested that we need a localization process, so we have frames, so-called frames or templates.
And a frame for this murder action might be that there's a murder frame that has an agent and has a victim, and the agent is Macbeth, and the victim is Duncan.
So that's a way of putting a localization layer on top of what we've got so far.
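The semantic net, reification, and the Minsky-style frame just described can be sketched in a few lines of Python. This is only an illustration; the class name `Link` and the dictionary layout are mine, not anything from the lecture.

```python
# A minimal semantic net: nodes connected by labeled links (the combinators).
class Link:
    def __init__(self, subject, relation, obj):
        self.subject, self.relation, self.obj = subject, relation, obj

# Macbeth murders Duncan; as a consequence, Duncan has the property of being dead.
murder = Link("Macbeth", "murders", "Duncan")
dead = Link("Duncan", "property", "dead")

# Reification: the murder link is itself an object, so it can be the
# subject of another link that connects it to the consequence.
implies = Link(murder, "implies", dead)

# Minsky-style localization: a frame gathers the relevant slots in one place.
murder_frame = {"frame": "murder", "agent": "Macbeth", "victim": "Duncan"}
```

The key move is that `implies` takes a link, not a node, as its subject; that is what treating links as first-class objects buys you.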
Later on, I'll add sequence to that list.
So this is where it rested for a long time, and in some sense, still rests there because as soon as you've got combinators, you've got something that's pretty much universal.
You can do anything with it.
The trouble is it's sort of down at the bit level.
It's like assembly code.
It doesn't have, as a concept, enough organization to help you go to the next level of achievement.
There's also a little problem here that deserves some mention, and that is that we have, over this whole thing, the problem of parasitic semantics.
A kind of ugliness that surrounds this whole concept because when we look at a diagram like that, and we say, oh, Macbeth murdered Duncan, that means Duncan's the victim.
We know there must have been a motive, maybe Macbeth wanted to be king.
Well, we know all that stuff, and there's a tendency to project that knowing into the machine.
If you're going to play with your telephone, please leave.
So if we project meaning into that that's our understanding that's not the machine's understanding.
So much of the meaning can be said to be parasitic.
We're the parasite, and we're projecting that meaning into that thing.
Putting that diagram into some machine form doesn't mean the machine knows anything.
It might be able to conclude some things, but its understanding is not grounded in any kind of contact with the physical world.
So we have to worry a lot about that, and that's where philosophers would stop and go off and write a few books on the subject. But we're not philosophers, so we're going to just mention the problem and go barreling ahead.
So we need to use this notion of semantic net, and we have to ask ourselves some questions about what elements of the inner language are most useful. It might be very complicated, but here's usefulness number one.
The notion of classification.
So we know about stuff, and we know about, for example, pianos.
And we know about tools.
And we know about maps.
But we know about those things on different levels.
So say I'm thinking about a tool, do you have a very good image of what I'm talking about?
The answer has to be no because the notion of a tool is very vague, so it's hard to form a picture of what that's all about.
On the other hand, if I said I'm thinking about a mac.
Well, this is interesting because there's lexical ambiguity there.
You don't know if I'm talking about the Apple type Mac or the apple type Mac, or should I say the fruit or the computer?
So there's lexical ambiguity there at two levels or more.
But let's fill this in a little bit.
If I say I'm talking about a piano, you can form a picture of that, so that seems to be on a more detailed level, where you can do hallucination.
At a higher level you have just a musical instrument.
And I can give you a tool to think about by writing hammer.
And if I'm going to have a mac, it's going to be an apple.
And in this case, I want you to think about a fruit.
And down here, I can be more specific about these things too.
I can add a slight refinement of detail and say, I'm thinking about one of these.
Do you know what this is?
PATRICK WINSTON: No, it's not a mere hammer, it's a ball-peen hammer.
In some circles, it's called a ladies hammer.
I don't know why.
But it's-- What's it for?
Most people buy it mostly because it's small and lightweight.
But, in fact, it's for metal working.
It's for taking a piece of sheet metal and pounding it out into an ashtray or something or for seating rivets.
It's a metal worker's hammer.
So you might not have known about that before but now, at least, you have a word to hang that knowledge on.
It's a ball peen hammer.
So we have various levels here, going from very specific to very general.
And we can even go to a level of specificity for pianos by saying we've got a Bosendorfer.
Why is a Bosendorfer special?
I mean, is it like a Baldwin?
Something's [INAUDIBLE], Yoka-- Lots of piano types, what's special?
You see, you don't know because unless you play the piano, and probably unless you're a serious piano player you don't know that a Bosendorfer-- Ariel, you know.
STUDENT: I think it's supposed to have an extra octave at the bottom, black keys.
PATRICK WINSTON: It's got some extra keys at the bottom.
And most people don't know that unless they're serious about the piano.
Some professional piano players, when they're confronted with a Bosendorfer, have to have someone cover those keys, because it screws up their peripheral vision and they hit the wrong keys.
Because they're not used to having those extra keys at the bottom.
So that's a little detail about the Bosendorfer.
So you can make a kind of graph, and you can say, let's go from low, very general, to a basic level, to a specific level.
So it is the case in human knowledge that that graph has a tendency to look sort of like this.
So here's tool, here's hammer, here's ball peen.
So that level, where you have a big jump, that's the general to basic level of transition.
So that basic level is probably there because that's the level on which we hang a huge amount of our knowledge.
We know a lot about pianos, and it all seems to be hanging on that word piano, which gives us power with the concept.
So that's example number one of an element of our inner language.
The ability to assemble things into hierarchies like that, and hang knowledge about those objects on the elements in that hierarchy.
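That hierarchy, with knowledge hanging on its elements, can be sketched as a pair of tables. The concept names follow the lecture; the particular facts I attach to each level are my own illustrative guesses.

```python
# Classification hierarchy: each concept names its more general parent.
parent = {
    "ball-peen hammer": "hammer", "hammer": "tool",
    "Bosendorfer": "piano", "piano": "musical instrument",
    "mac": "apple", "apple": "fruit",
}

# Knowledge hangs on the elements of the hierarchy, mostly at the basic level.
knowledge = {
    "hammer": ["has a handle", "used for pounding"],
    "ball-peen hammer": ["metal worker's hammer", "used for seating rivets"],
    "Bosendorfer": ["has extra keys at the bottom"],
}

def chain(concept):
    """Walk from a specific concept up to its most general class."""
    path = [concept]
    while path[-1] in parent:
        path.append(parent[path[-1]])
    return path

def known_facts(concept):
    """A specific concept inherits knowledge from every level above it."""
    return [fact for c in chain(concept) for fact in knowledge.get(c, [])]
```

Asking for `known_facts("ball-peen hammer")` pulls in the general hammer knowledge too, which is the point of hanging knowledge on the hierarchy.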
Well, given that you have elements in the hierarchy, how do you talk about them?
Well, I like to consider the possibility, just for the sake of illustration, that you're thinking about a car crashing into a wall.
So you've got things to think about like the speed of the car, the distance to the wall, and the condition of the car.
And you've got the period before the crash, during the crash, and after the crash.
So you might want to think about how to talk about those objects in those three time periods.
So we can do that with a vocabulary of change and we do that because we believe that most of human thinking is thinking about change causing change.
And that flies in the face of what we learned as engineers.
Because in engineering, we learn about state, and once you know the state of the system, you know everything you need to know in order to predict the future.
The trouble is, in our heads, thinking about everything there is in the world, including the current phase of the moon, is too much stuff.
So mostly, our thinking, we think, is hinging on the idea that change leads to change.
So that's why we have a vocabulary of change.
So in the period before the crash, the speed of the car is not changing.
There's a little notation for not change, no delta.
The distance to the wall, that's decreasing.
The condition of the car, that's not changing.
Then, the car hits the wall.
So the speed of the car disappears, the distance to the wall disappears, and the condition of the car will change dramatically.
Finally, after the crash is over, the speed of the car does not appear.
The distance to the wall does not change, and the condition of the car also does not change.
So that's hinting at a vocabulary of change, and its use, which will be the second element in our development of a vocabulary of ways in which we might have constructed our inner language.
So this is a particular idea: that's classification. This is transition, and a system that purports to understand stories puts a heavy emphasis on this notion of transition. And we believe, that is to say, I think, that that vocabulary has to have decrease, increase, change, appear, and disappear.
So there are 10 things you can have in such diagrams.
I've done five.
That's because, for each of those, there's a not variation on that.
So with a vocabulary of 10 things, you can go a long way toward helping to describe things that are in the process of change and making transitions.
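The ten-element vocabulary and the car-crash description can be written down directly. The string labels and quantity names here are my own rendering of the blackboard diagram.

```python
# Five primitives of change, each with a "not" variant: ten things total.
PRIMITIVES = ["increase", "decrease", "change", "appear", "disappear"]
VOCABULARY = PRIMITIVES + ["not " + p for p in PRIMITIVES]

# The car crash, described per quantity in each of the three time periods.
crash = {
    "before": {"speed": "not change",
               "distance to wall": "decrease",
               "condition": "not change"},
    "during": {"speed": "disappear",
               "distance to wall": "disappear",
               "condition": "change"},
    "after":  {"speed": "not appear",
               "distance to wall": "not change",
               "condition": "not change"},
}
```

Every entry in the description is drawn from the ten-word vocabulary, which is what makes such descriptions compact and comparable.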
And we have a lot of those words in our vocabulary.
We use those words a lot in our vocabulary.
They seem heavily connected with vision.
Our friend appeared.
The cat disappeared.
The speed increased.
So this is a description of a crash.
In terms of those kinds of elements, now, I say to you, how does a camera work?
Well, I could say the camera works because a photon crashes into a photoreceptor.
So when I say a photon crashes into a photoreceptor, why am I saying that, and how does it help?
I'm saying that and it helps because it's the same pattern of change you already know about when you talk about a car crashing into a wall.
How does that work?
The speed of the photon, the distance of the photon to the receptor, and the condition of the receptor follow the same pattern.
So analogies like that are very much at the core of what we think about all the time.
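One way to make that analogy mechanical: describe both situations in the transition vocabulary and check that the patterns agree. A toy sketch, with a tuple layout of my own choosing (speed, distance, condition):

```python
# Car crashing into a wall, in the change vocabulary,
# per period: (speed, distance, condition).
car_crash = {
    "before": ("not change", "decrease", "not change"),
    "during": ("disappear", "disappear", "change"),
}

# Photon crashing into a photoreceptor: the same quantities, the same story.
photon_crash = {
    "before": ("not change", "decrease", "not change"),
    "during": ("disappear", "disappear", "change"),
}

def analogous(a, b):
    """Two situations are analogous if their transition patterns agree."""
    return a == b
```

Saying "a photon crashes into a photoreceptor" works as an explanation precisely because `analogous(car_crash, photon_crash)` holds: the pattern of change is one you already know.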
Really then, there is representation number two.
Number one is class, number two is transition, and now, you're ready for number three, which is trajectory.
Linguists, who study sentences, often talk in terms of fundamental patterns that seemed to be in a lot of what we say, and a lot of what we say is about objects moving along trajectories.
So we can talk about a trajectory frame.
And a trajectory frame will have elements like this.
It has an object moving on a trajectory that ends up at a destination.
You might start out at a source.
It's probably been arranged by some kind of agent, and the agent may assist in making the motion happen with some kind of instrument.
There might be somebody helping out over here, a co-agent.
Well, what else can we have?
A beneficiary, someone is helped out by the action.
Sometimes, the motion is arranged by a conveyance.
So these are a lot of slots, a finite set of slots, in descriptions of actions, many of which involve motion on a trajectory.
We have a tendency in language to decorate these things in one way or another, depending on the language.
And so in many languages, the decoration is by way of position in the sentence.
In English, it's often by way of a preposition.
It's used to help zero in on a particular role of an object in the trajectory scenario.
So if I say, I baked a cake with a friend, there's a with preposition.
If I baked a cake for a friend, the friend is the beneficiary.
If I baked a cake with an oven, that's an instrument.
The object may be moving to a destination from a source, and if I'm going to New York by train, I put a by on top of that.
If the agent isn't in subject position, I would say something like, oh, I see all the work was done by a student.
So those prepositions have a tendency to help us zoom in on the actual role of particular objects in this whole package, the whole frame.
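A trajectory frame and that preposition-to-role mapping can be sketched as follows. The slot names follow the lecture's list; the preposition table is a rough illustration, since real prepositions are far more ambiguous than a lookup table suggests.

```python
# English prepositions as cues for which slot an object fills
# (a deliberately crude mapping; "with" and "by" are really ambiguous).
PREPOSITION_ROLE = {
    "with": "instrument or co-agent",  # "with an oven" / "with a friend"
    "for":  "beneficiary",             # "baked a cake for a friend"
    "to":   "destination",             # "going to New York"
    "from": "source",                  # "coming from Boston"
    "by":   "conveyance or agent",     # "by train" / "done by a student"
}

def trajectory_frame(agent, action, obj, **slots):
    """Build a trajectory frame; extra slots (source, destination,
    instrument, co-agent, beneficiary, conveyance) are optional."""
    frame = {"frame": "trajectory", "agent": agent,
             "action": action, "object": obj}
    frame.update(slots)
    return frame

# "I am going to New York by train."
going = trajectory_frame("I", "go", "I",
                         destination="New York", conveyance="train")
```

The finite slot set is the point: however the sentence decorates its objects, they land in one of a small number of roles.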
So this is number three.
There's a variation on this in which there's no actual trajectory in which case we'll just call that a role frame.
Because if there's no trajectory, we can still have things such as an instrument, a co-agent, and a beneficiary.
So now, we've got three representations.
You might say, well, what good are they?
And you can determine what good they are these days, because it's easy to go over established corpora and ask what fraction of the sentences in such a corpus involve classification or a transition or a trajectory.
The most well known of these is the so-called Wall Street Journal Corpus.
It has 50,000 sentences in it, drawn from some period of time, and all the syntactically oriented language people work with that corpus a great deal.
And we worked with it a little bit too, to see what fraction of the sentences or what the density of trajectories and transitions are in those sentences.
So I have to say that a little more carefully, because the finding is that in 100 sentences you'll find about 25 transitions or trajectories.
So they're very densely represented.
They're often very abstract.
Prices rose, the economy went to someplace, but there are still words that denote transition or trajectory.
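A very crude way to get a feel for that density is to count sentences containing words that denote transition or trajectory. This keyword sketch is mine and is nothing like the actual analysis of the Wall Street Journal corpus; it only illustrates the kind of measurement being described.

```python
# Words that denote change or motion (a small, hand-picked sample).
TRANSITION_WORDS = {"rose", "fell", "increased", "decreased", "appeared",
                    "disappeared", "changed", "went", "moved", "arrived"}

def density(sentences):
    """Fraction of sentences containing at least one transition word."""
    hits = sum(1 for s in sentences
               if TRANSITION_WORDS & set(s.lower().strip(".").split()))
    return hits / len(sentences)

sample = ["Prices rose sharply.",
          "The economy went to someplace.",
          "The committee convened."]
```

Even in abstract financial prose like the first two sentences, the transition and trajectory words are there on the surface.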
Of course, once you have all this stuff, you then have a desire to put it together.
So the next thing we need to talk about is story sequences.
So a story sequence can be a single sentence, and I want to illustrate that with one of my favorites.
Here's the sentences.
I think I've chosen a gender-neutral name so as not to get into any trouble.
So Pat-- but I don't call myself Pat, because I decided when I was 18 years old that a pat is a unit of measure for butter.
In any case, when Pat comforted Chris, do you have an image of what happened?
Probably not a very firm image.
You know that Pat did something, but you don't know exactly what.
Nevertheless, when Pat comforted Chris, you can construct something that looks like a role frame.
Because the role frame for that would have an agent, and that would be Pat.
There's an action.
We're going to put a question mark in there because we don't have a very firm image of what the action is.
Then again, we're building a role frame, like so.
The object is Chris.
Oh, you know that Chris is the object.
Is there anything else we can say?
Oh, yes, we can probably say something more.
Something else comes to mind when you see Pat comforted Chris.
There's a sort of result, and the result is a transition frame.
And the transition frame involves an object, which is Chris, and Chris has a mood, which presumably, is improved.
It goes up.
Did you have something, Elliott?
STUDENT: Could you, I guess, analogize the Pat comforted Chris to something like Pat gave comfort to Chris as Chris is the destination?
And couldn't this be [INAUDIBLE]?
PATRICK WINSTON: Elliott is wandering into a very interesting area having to do with, couldn't you think about this in another way and think of it as something moving along a trajectory.
Comfort is moving, if not from Pat, at least to Chris.
And that's a very important kind of observation because what it would suggest is there can be a utility in thinking of things in multiple ways, multiple representations.
Marvin Minsky has a wonderful aphoristic phrase, which is, if you can only think about something in one way, you have no recourse when you get stuck.
So multiple representations mean you have multiple ways of gathering regularity from the world, and collecting it and therefore that'll make you smarter.
So yes, you could do that, and that would be a complement to what I'm doing now.
Let me continue what I'm doing now.
So what have I done?
I've got a role frame and a transition frame, and the transition frame is the target of the result slot in the role frame.
Now, we can modify this a little bit.
And maybe we want to say, instead of comforted, terrorized.
And how would that change things?
We don't know exactly what Pat did so the action remains unknown.
The agent and the object are the same.
But the result here is presumably that the mood went down.
With just what we've got so far, we can answer a lot of questions, by the way.
Once we've got the sentence understood in these terms, we can say who did the thing?
What's this all about?
And the answer is Pat.
What did Pat do?
Who did they do it to?
Who was the object?
What was the result?
Chris felt better.
Chris felt worse.
So these representations already give us a question-answering capability that makes for an understanding of the sentence.
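The role frame with its result slot, and the question-answering it enables, can be sketched like this. The two-verb table and the question phrasings are mine; only the frame structure comes from the lecture.

```python
# "Pat comforted Chris": a role frame whose result slot points at a
# transition frame. The action itself stays a question mark.
def understand(agent, verb, obj):
    mood = {"comforted": "increase", "terrorized": "decrease"}[verb]
    return {"frame": "role", "agent": agent, "action": "?", "object": obj,
            "result": {"frame": "transition", "object": obj, "mood": mood}}

def answer(frame, question):
    """Tiny question answering, read straight off the frame's slots."""
    if question == "Who did it?":
        return frame["agent"]
    if question == "Who was it done to?":
        return frame["object"]
    if question == "What was the result?":
        better = frame["result"]["mood"] == "increase"
        return frame["object"] + (" felt better" if better else " felt worse")

story = understand("Pat", "comforted", "Chris")
```

Swapping in "terrorized" leaves the agent, object, and unknown action alone and only flips the mood in the result frame, which is exactly the modification described above.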
But still, we haven't been very specific, so our next step takes this same sentence in a more specific direction.
So here's a way that goes.
Now, you begin to get a sense of what's going on, you form a mental image, so you could have a hallucination.
And that hallucination will also be a kind of frame, but, in this case, it'll be a trajectory frame.
And the object would be, I don't know, Pat's lips.
And the destination will be Chris' lips.
I don't know, is that right?
Have you all formed a picture of what's going on here?
So it will be different depending on whether Chris is Pat's girlfriend, or if Chris is Pat's daughter, or if Chris is a frog and Pat is a prince-- princess, I guess, the way the story usually goes.
So somehow we have, in our heads, all kinds of libraries that help us to form mental pictures of things when we see things like kissed.
So one final one just to show the variety.
We could say that Pat stabbed Chris.
Let's see, in the case of kissed, the mood is going up.
In the case of stabbed, the mood is going down.
You can also probably say that the health is going down.
And the destination is Chris' body.
And the object that's moving is Pat's knife.
We get the same pattern with both of those sentences.
So both of them involve a sequence that starts off with the action, moves to a transition and a trajectory and those are all arranged in a line.
And that line is something that gives us a lot of power over the situation relative to a semantic net.
So I'm going to decorate that one more time, and say that another element we get out of our internal language is sequence.
An element we need in order to have anything that looks like an account of an inner language is sequence.
Because if you think about things being arrayed in a vast spreading network, it's hard to deal with them.
But if you think about things being arrayed along a line, in a sequence of actions or events, like so, then that imposes enough constraint to get a handle on what's going on.
So what we're going to call this is the representation, and I guess I'm up to one, two, three, four.
This is a representation of story sequence.
So even though that's a kind of micro-story, it's still an example of a story sequence, because we get the power out of it by arranging everything in a line.
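The "Pat stabbed Chris" micro-story as a sequence can be sketched as an ordered list of frames. The list layout and the little lookup function are my own illustration of why a line is easier to handle than a spreading network.

```python
# The micro-story arranged along a line: role frame, then transition,
# then trajectory, in that order.
stabbed = [
    {"frame": "role", "agent": "Pat", "action": "?", "object": "Chris"},
    {"frame": "transition", "object": "Chris",
     "mood": "decrease", "health": "decrease"},
    {"frame": "trajectory", "object": "Pat's knife",
     "destination": "Chris' body"},
]

def next_after(sequence, kind):
    """Because the story is a line, 'what happened next' is just indexing."""
    kinds = [f["frame"] for f in sequence]
    return sequence[kinds.index(kind) + 1]
```

In a vast network, "what happened next" is not even well defined; in a sequence it is one index away, which is the constraint that gives you a handle on the story.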
You have a sense, I think, especially if you play a musical instrument, on how dependent we are on sequence.
So if you play a musical instrument, you probably know how difficult it is to start replaying a piece of music from the middle of a measure.
You have to go back to at least the beginning of the measure, and probably to the beginning of the phrase, if not the beginning of the piece.
So our memory seems to be, at least in music, very rooted in the idea of sequences.
And that's often true of storytelling too.
We have to go back to the beginning of at least a scene, because somehow these things are arranged in sequences that form, somehow, usefulness out of their sequentialness.
So there's one more thing we can talk about, and that is not just how these sequences are constructed and what they're constructed out of; we can also talk a little bit in terms of libraries of stories.
And when we talk about libraries of stories, we can think about kind of the sort of standard stories that we have and how they're arranged, and how we can know a lot about something by what it's super class is.
So it's a variation on the theme of learning stuff from the super classes.
So here's an event frame.
And then, in addition to event frames, there's disaster frames.
And then there are party frames.
And parties and disasters are both events.
And when we talk about disasters, we know, in turn, they break up in a variety of things.
We have earthquake disasters, and maybe we have hurricanes.
And in the party world we have, I don't know, birthday parties, and we have weddings.
And each of these types of frames invites us to fill in particular slots.
So if we're reading a newspaper story about a wedding, we know that we're going to be learning the same sorts of things we will learn about any party, except that there is additional information we expect, something about the bride and the groom.
If we have a raw event and we don't know anything more about it than that, there's a time and place.
If it's a disaster frame, it gets a little grisly over here: we might have the fatalities, and how much it cost.
If it's an earthquake frame, we need to know the magnitude of the quake, and the name of the fault.
If it's a hurricane, we have the category and the name.
So each of these things up there can be viewed, not just as an example of something, but as a new frame all by itself.
As we mature, we have these first four things as building blocks, and then, we educate ourselves, and we get all those kinds of frames that help us to understand the world.
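The frame library, with each frame type inheriting its superclass's slots, can be sketched as two small tables. The slots for event, disaster, earthquake, hurricane, and wedding come from the lecture; the birthday "celebrant" slot is my own guess, included just to fill out the example.

```python
# Each frame type's own slots; it inherits the rest from its superclasses.
SLOTS = {
    "event":      ["time", "place"],
    "disaster":   ["fatalities", "cost"],
    "party":      [],
    "earthquake": ["magnitude", "fault"],
    "hurricane":  ["category", "name"],
    "wedding":    ["bride", "groom"],
    "birthday":   ["celebrant"],   # hypothetical slot, not from the lecture
}
PARENT = {"disaster": "event", "party": "event",
          "earthquake": "disaster", "hurricane": "disaster",
          "wedding": "party", "birthday": "party"}

def all_slots(frame_type):
    """Collect the slots to fill, walking up through the superclasses."""
    slots = []
    while frame_type is not None:
        slots = SLOTS[frame_type] + slots
        frame_type = PARENT.get(frame_type)
    return slots
```

Reading a wedding story, you fill the general event slots (time, place), whatever parties require, and then the wedding-specific slots, which is the "learning from the superclass" idea in miniature.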
But how to fill those in from a newspaper story can be sometimes, quite a challenge.
Actually, the worst things to understand are children's stories. This was determined experimentally, when people tried to understand children's stories, because it turns out that children's stories are not simpler than the stories we write for adults.
In many cases, they're harder.
If you read about Shakespearean plots, it's all about intrigue, murder, jealousy, greed, but when you're trying to write a children's story it can be about anything.
And worse yet, the children's story often raises problems that you don't see in newspaper stories.
Let me illustrate that for you.
You want to read that story?
You have no trouble figuring it out, of course, but think about a poor machine.
It's struggling to understand anything.
What's going to be the problem?
It's going to have trouble figuring out those pronoun antecedents.
Look at them.
[MUSIC PLAYING] PATRICK WINSTON: Shut [INAUDIBLE].
Oh, that's the way.
Here are the pronouns.
One of them wanted to buy a kite.
He has one, he said.
He will make you take it back.
So that's pretty hard.
Actually, the principle here that I'm driving at is this: when you have a story, new or old, and you want it to be understood quickly, you need to be sure that, if you're the storyteller, you don't add any syntactic difficulty to the burden of understanding.
So that's an example of telling a story with additional syntactic difficulty.
No newspaper journalist would ever write the story like that.
Here's how they would write it.
They would even give you a clue that there's certain information you're never going to get, like who told the story.
It's that reliable sources business.
So this brings us to the final bit that I want to deal with today, and what I'm going to do is, if you came into this class today as a 97 pound writing weakling, you're about to emerge as a 250 pound mountain of a writer.
Because I want to tell you a few tricks that will make you a much better writer, especially if you're Russian or German.
And here's how it works.
Here's trick number one: pronouns place an additional syntactic burden on the understanding of the story by the reader.
So if you're telling somebody about some difficult new technical idea, the last thing you want them to do is to burden their syntactic processor with figuring out pronoun antecedents.
So don't use pronouns, and your writing, your technical writing, at least, will be much clearer.
By the way, why does this especially apply to Germans and Russians?
Is it because-- is this an ethnic origin slur or is this a fact about their language?
It's a fact about their language.
Where is the fact?
Why can they get away with pronoun usages that we cannot get away with in English?
STUDENT: Gender and also-- PATRICK WINSTON: Gender, because if you have all of your nouns decorated with gender, that reduces, by a factor of three, the potential for ambiguity in the pronoun antecedent.
So you'll frequently find that German and Russian writers will have pronouns all over the place that are perfectly clear to them because of gender.
They are not interpretable by English speakers when translated, because we don't have the gender to help us zero in.
So these things all have to do with minimizing extra, superfluous, gratuitous, unnecessary burden on the reader.
Number two is don't use former or latter.
You see those words used frequently in technical writing, and guess what?
No human being ever encounters those words without having to stop and go back to figure out what they refer to.
So that's another example of not placing any unnecessary syntactic burden on the reader.
And finally, don't call a shovel a spade.
There's a habit, probably instilled by well-meaning but misadvised high school teachers, that you shouldn't repeat words, and so people go to great lengths to use some different word.
The problem is that the reader doesn't know if the shift in the word is deliberate and attached to some subtle meaning shift, or if it's just adhering to some high school teacher's admonition against using the same word again.
So you don't want to say, oh, the right way to dig this particular hole is with a spade and then switch to a shovel because the reader can't tell if it's deliberate, accidental, or a consequence of just the desire not to use the same word.
So this is how, with some very simple mechanisms grounded in AI, you can actually make yourself into a better writer, by avoiding those kinds of things that put an unnecessary syntactic burden on the people who are reading your stuff.