Here's a stronger argument for why logic can be viewed as memory organization. All of logic deals with creating associations: associations between causes and effects, between properties of objects, between principles, and so on. Every if-then statement is an association; for example, the statement "if A is true, then B is true" associates statement A with statement B.
From here, it is not entirely clear to me how to complete the argument in the way I want to. The weaker conclusion is: every association can be explained by this memory organization concept. When you make an association, you group concepts, properties, or objects and link them to a different group. Because logic is made of associations, logic can be explained as creating and linking groups and adding new data into previously created groups, and this is the idea I am calling memory organization.

This is okay, but it's not the answer I want. The problem is in the idea of linking groups. There are many different ways to associate things: A is a part of a larger group B, A causes B, A seems to affect B, and so on. Because of this, your organized memory would need different types of links between the groups: a link meaning [a part of], a link meaning [causes], a link meaning [seems to affect]. For two potential applications of this idea, understanding the brain and mimicking human logic in machines, this is a problem. From what I know of neuroscience (which is almost nothing), the brain doesn't have different types of connections, so it's not clear how to explain the workings of the brain with memory organized like this. In machines, too, how would you implement different types of connections between groups of objects? Instead of defining memory organization as groups connected by different kinds of links, it would be much more powerful to define it as groups connected by a single kind of link (probably the link [group A is a part of group B]). If you could do this, the extension to the applications would be much easier.
Well, I had a vague answer for how to do this, but now, as I'm thinking while typing, I think there's an easier answer. The brain has a memory of concepts. Any association created between groups can be explained by saying that the groups follow a concept. When you say A causes B, or A seems to affect B, you could alternatively say that A and B are part of the larger concept [causes] or [seems to affect]. With this, it becomes possible to argue that all of logic can be explained as memory organized into group hierarchies, where a connection in the hierarchy means that the lower group is a part of the higher group.
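To make the single-link idea concrete, here is a minimal sketch in Python (all names are my own illustration, not an established library): the only link type is "is a part of", and a relation like [causes] is itself just another group, whose members are pairs.

```python
# Memory as a hierarchy with ONE link type: "is a part of".
# Relations such as [causes] are reified as groups of (A, B) pairs.

class Group:
    def __init__(self, name):
        self.name = name
        self.members = set()  # things (or other groups) that are "a part of" this group

    def add(self, member):
        self.members.add(member)

# The relation [causes] becomes just another group of pairs.
causes = Group("causes")
causes.add(("rain", "wet streets"))
causes.add(("practice", "skill"))

# Asking "does A cause B?" reduces to a plain membership test --
# no special [causes] link type is needed anywhere in the structure.
print(("rain", "wet streets") in causes.members)  # True
```

The point of the sketch is the design choice: instead of many edge types, there is one containment relation, and every former edge type becomes a node (a concept group) in the same hierarchy.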
Now, it's still a rough idea, but consider what you can do by thinking like this. First, let's talk about the brain. If a mechanism for grouping memories and linking memories can be found, then human reasoning can be explained through this concept of memory organization. It could be that connections and groups are formed through repetition. You see action A lead to result B, and a connection is formed. The more times you see action A lead to result B, the stronger the connection becomes. The connections could then gradually get longer and more complex: say, action A is connected to group G by concept C, group G is connected to another group by another concept, that group is connected to a number of possible outcomes by still other concepts, and so on.
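The repetition idea can be shown with a toy counter (purely my own illustration, not a neuroscience model): every observed action-result pair strengthens its connection, so frequent pairs end up with strong links and rare ones stay weak.

```python
# Toy model of connection-by-repetition: each observation of
# "action leads to result" increments that connection's strength.
from collections import defaultdict

strength = defaultdict(int)

def observe(action, result):
    strength[(action, result)] += 1  # repetition strengthens the link

for _ in range(5):
    observe("flip switch", "light on")
observe("flip switch", "light off")  # a one-off observation stays weak

print(strength[("flip switch", "light on")])   # 5
print(strength[("flip switch", "light off")])  # 1
```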
Then there's machine learning. If logic can be achieved by organizing memory, then it should be possible to create a machine capable of logic if you can make it capable of organizing its memory properly. There are already techniques for this, such as clustering and classification. Clustering could form the groups, and classification could take new inputs and place them in the corresponding group; so, if you could also make a computer form the connections between the groups, you might be able to create a logical machine. Now, how would you form the connections? I would try something based on the idea of repetition. You provide the machine with possible outcomes and the definitions of some conceptual ideas, then form a connection when an object in a group leads to a certain outcome, and make the connection stronger as more objects in the group lead to the same outcome. Perhaps there is even a way to make the machine generate "concept groups" on its own.
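A toy version of that pipeline might look like this (entirely illustrative; the groups, values, and outcomes are made up, and real systems would use proper clustering and classification algorithms): new inputs are classified into the nearest existing group, and group-to-outcome connections are strengthened by repetition.

```python
# Toy pipeline: classify inputs into groups, then strengthen
# group -> outcome connections each time the pairing is observed.
from collections import defaultdict

def nearest_group(x, centroids):
    # "Classification": place x in the closest existing group.
    return min(centroids, key=lambda g: abs(centroids[g] - x))

# Pretend a clustering step already produced these group centroids.
centroids = {"small": 1.0, "large": 10.0}
connections = defaultdict(int)

observations = [(1.2, "fits in pocket"), (0.8, "fits in pocket"),
                (9.5, "needs a truck"), (11.0, "needs a truck")]

for value, outcome in observations:
    group = nearest_group(value, centroids)
    connections[(group, outcome)] += 1  # repetition strengthens the link

print(connections[("small", "fits in pocket")])  # 2
print(connections[("large", "needs a truck")])   # 2
```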
Finally, there are loose practical implications. If it is true that logic can only be achieved through a process that can be viewed as memory organization, then knowledge and memory in an area are requirements for logic in that area. In addition, you need to be able to group what you have memorized and draw the correct connections. Besides incorporating new data, logic can be improved by finding the concepts behind the connections and by extending those concepts to other groups. So, to be strong logically, you would have to be strong at forming groups based on similarities, classifying new things into existing groups (or forming new groups when necessary), and drawing connections between groups and outcomes, strengthening those connections by answering the question "Why?".
There are still more questions to be asked, though. Should inductive and deductive logic be viewed in the same manner? If you only use one connection type, should it be [is contained in]? That type of structure seems a bit awkward. For example, you would have to say that the pair (Starcraft 2 practice, cognitive ability) is contained in the larger group of pairs in which the first thing may cause the second. Anyway, these aren't questions I can answer on the spot, and if I were to wait until I had an answer to every challenge I could come up with, well, this would never get posted.