EddyGiusepe committed on
Commit
fc422f7
·
1 Parent(s): beb60e1
4_Gerando_Ideias_de_Pesquisa_com_GPT4/auto_scientist.py ADDED
@@ -0,0 +1,239 @@
+ """
+ Data Scientist: Dr. Eddy Giusepe Chirinos Isidro
+
+ Generating New Ideas from Research Articles
+ ===========================================
+
+ When executed, this script extracts "NEW" ideas from URLs,
+ research articles, etc.
+
+ Study link ---> https://www.youtube.com/watch?v=t2jUEgj3GyE
+
+ Execution:
+
+ $ python auto_scientist.py
+
+ NOTE:
+
+ You have two knowledge bases:
+
+ * knowledge_custom.py
+ * knowledge_from_urls.py
+ """
+ import io
+ import json
+ import os
+ import re
+
+ import openai
+ import PyPDF2
+ import requests
+ from bs4 import BeautifulSoup
+ from dotenv import load_dotenv, find_dotenv
+ from tenacity import retry, stop_after_attempt, wait_exponential
+ from termcolor import colored
+
+ _ = load_dotenv(find_dotenv())  # read local .env file
+ openai.api_key = os.environ["OPENAI_API_KEY"]
+
+ # model = "gpt-3.5-turbo-16k-0613"
+ model = "gpt-4-0613"
+
+ ideagen_system_message = """You are a new knowledge-generating intelligence.
+
+ Your goal is to generate new knowledge based on some pieces of information.
+
+ I will be giving you new knowledge. Your goal is to extract the essential information from it and create a list of extracted essential information.
+ Combine the previously extracted essential information with the newly provided information to produce a combined list of essential information.
+
+ Once you have enough essential information, you may generate a new idea and new knowledge whenever you feel capable of doing so.
+ Your new idea and knowledge must be original, and you must support them with substance and sound reasoning.
+
+ Only when you feel capable of generating innovative and novel ideas and knowledge, respond with:
+
+ novel_idea=True """
+
+ messages = [{"role": "system", "content": ideagen_system_message}]
+ messages.append({"role": "user", "content": "Are you ready?"})
+ messages.append({"role": "assistant", "content": "Yes, I am ready. Please provide me with some new knowledge. I will extract the essential information from it and list it, combining the new information with the previously extracted essential information to produce a combined list of essential information."})
+
+
+ @retry(stop=stop_after_attempt(5), wait=wait_exponential(multiplier=2, min=2, max=32))
+ def idea_gen(text, model, messages=messages):
+     # NOTE: `text` is kept for the caller's convenience; the knowledge has
+     # already been appended to `messages` before this function is called.
+     response = openai.ChatCompletion.create(
+         model=model,
+         messages=messages,
+         stream=True,
+         functions=[
+             {
+                 "name": "essential_information_extraction",
+                 "description": "Receives essential information and a boolean novel_idea. If novel_idea is True, novel_idea_output must be provided and combined_essential_information_extraction_list is ignored. If novel_idea is False, combined_essential_information_extraction_list must be provided.",
+                 "parameters": {
+                     "type": "object",
+                     "properties": {
+                         "novel_idea": {
+                             "type": "boolean",
+                             "description": "Whether or not you have a novel idea"
+                         },
+                         "novel_idea_output": {
+                             "type": "string",
+                             "description": "Novel idea generated by you if novel_idea is True, otherwise an empty string"
+                         },
+                         "combined_essential_information_extraction_list": {
+                             "type": "array",
+                             "items": {
+                                 "type": "string",
+                                 "description": "Essential information extracted from the provided knowledge. Provide this only when novel_idea is False, otherwise an empty list"
+                             }
+                         },
+                     },
+                     "required": ["novel_idea", "novel_idea_output", "combined_essential_information_extraction_list"]
+                 }
+             }
+         ],
+         # function_call only accepts the function name here; default argument
+         # values cannot be passed through this parameter.
+         function_call={"name": "essential_information_extraction"},
+     )
+
+     responses = ''
+     for chunk in response:
+         delta = chunk["choices"][0]["delta"]
+         if delta.get("function_call"):
+             fragment = delta["function_call"]["arguments"]
+             responses += fragment
+             print(fragment, end='', flush=True)
+
+     return responses
+
+
+ def extract_from_wikipedia(url):
+     print("Extracting content from Wikipedia . . .")
+     response = requests.get(url)
+     soup = BeautifulSoup(response.text, 'html.parser')
+     paragraphs = soup.find_all('p')
+     text = ""
+     for p in paragraphs:
+         text += p.text
+     text = re.sub(r'\W', ' ', text)  # replace non-alphanumeric characters with space
+     return ' '.join(text.split()[:2000])
+
+ def extract_from_arxiv(url):
+     print("Extracting content from arXiv . . .")
+     response = requests.get(url)
+     f = io.BytesIO(response.content)
+     reader = PyPDF2.PdfReader(f)
+     content = []
+     for page in reader.pages:
+         content += page.extract_text().split()
+         if len(content) >= 1200:
+             break
+     return ' '.join(content)
+
+ def extract_content(url_list):
+     print("Extracting content from urls...")
+     new_knowledge = []
+     for url in url_list:
+         if 'arxiv.org' in url:
+             new_knowledge.append(extract_from_arxiv(url))
+         elif 'wikipedia.org' in url:
+             new_knowledge.append(extract_from_wikipedia(url))
+     return new_knowledge
+
+ def write_knowledge(knowledge):
+     with open("knowledge_from_urls.py", "w", encoding="utf-8") as f:
+         f.write("knowledge = [\n")
+         for item in knowledge:
+             f.write('    """' + item + '""",\n\n\n\n')  # write each item in triple quotes
+         f.write("]\n")
+
+ url_list = []
+ with open("url_list.txt", "r", encoding="utf-8") as f:
+     for line in f.readlines():
+         url_list.append(line.strip())
+
+
+ ask_user = input("Do you want to use custom knowledge or URLs? (c/u) ")
+ if ask_user.lower() == 'c':
+     from knowledge_custom import knowledge
+ elif ask_user.lower() == 'u':
+     if os.path.exists("knowledge_from_urls.py"):
+         ask_user = input("Do you want to use the existing knowledge? (y/n) ")
+         if ask_user.lower() == 'y':
+             from knowledge_from_urls import knowledge
+         elif ask_user.lower() == 'n':
+             knowledge = extract_content(url_list)
+             write_knowledge(knowledge)
+         else:
+             print(colored("Run the script again and provide valid input: 'y' for yes, 'n' for no.", "red"))
+             exit()
+     else:
+         knowledge = extract_content(url_list)
+         write_knowledge(knowledge)
+ else:
+     print(colored("Run the script again and provide valid input: 'c' for custom knowledge, 'u' for knowledge from URLs.", "red"))
+     exit()
+
+
+ def process_idea(new_knowledge, model, messages, x):
+     response = ""
+     try:
+         response = idea_gen(new_knowledge, model, messages)
+         essential_information = json.loads(response)["combined_essential_information_extraction_list"]
+         novel_idea = json.loads(response)["novel_idea"]
+         novel_idea_output = json.loads(response)["novel_idea_output"]
+
+         messages.append({"role": "assistant", "content": f"{response}"})
+
+         if novel_idea:
+             print(colored(f"novel_idea_output={novel_idea_output}", "green"))
+             with open(f"novel_idea_output_{x}.txt", "w", encoding="utf-8", errors="ignore") as f:
+                 f.write(novel_idea_output + "\n")
+
+         with open(f"combined_essential_information_extraction_list_{x}.txt", "w", encoding="utf-8", errors="ignore") as f:
+             f.write(str(essential_information) + "\n")
+     except (json.JSONDecodeError, KeyError):
+         # Fall back to crude string parsing when the streamed arguments
+         # are not valid JSON.
+         if '"novel_idea": true' in response.lower():
+             novel_idea_output = response.lower().split('"novel_idea_output":')[1].split("}")[0]
+             with open(f"novel_idea_output_{x}.txt", "a", encoding="utf-8", errors="ignore") as f:
+                 f.write(novel_idea_output + "\n")
+
+         messages.append({"role": "assistant", "content": f"{response}"})
+
+         with open(f"combined_essential_information_extraction_list_{x}.txt", "w", encoding="utf-8", errors="ignore") as f:
+             f.write(response)
+
+     new_knowledge = ""
+     return new_knowledge, messages
+
+
+ x = 0
+
+ while True:
+     if len(messages) > 4:
+         messages = messages[:3] + messages[-2:]
+     if x < len(knowledge):
+         new_knowledge = knowledge[x]
+         messages.append({"role": "user", "content": new_knowledge})
+
+         process_idea(new_knowledge, model, messages, x)
+         x += 1
+
+         # The original condition `x != 0 and x % 1 == 0` is always true for
+         # positive integers, so this simply runs after every knowledge item.
+         if x != 0:
+             if len(messages) > 4:
+                 messages = messages[:3] + messages[-2:]
+             messages.append({"role": "user", "content": "You must generate a novel, original idea or piece of knowledge at this point. Set novel_idea to True and provide novel_idea_output."})
+             process_idea(new_knowledge, model, messages, x)
+             messages = messages[:-2]
+
+     else:
+         print(colored("You have reached the end of the knowledge list. You have successfully generated new ideas and knowledge.", "yellow"))
+         break
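The streaming loop in `idea_gen` concatenates function-call argument fragments that `process_idea` later parses as JSON. A minimal offline sketch of that accumulation step, using hand-written chunks in place of the real API stream (the chunk payloads below are illustrative, not actual model output):

```python
import json

# Hand-written chunks standing in for the streamed API response; the real
# stream has the same shape, but these payloads are made up for illustration.
chunks = [
    {"choices": [{"delta": {"function_call": {"arguments": '{"novel_idea": false, '}}}]},
    {"choices": [{"delta": {"function_call": {"arguments": '"novel_idea_output": "", '}}}]},
    {"choices": [{"delta": {"function_call": {"arguments": '"combined_essential_information_extraction_list": '}}}]},
    {"choices": [{"delta": {"function_call": {"arguments": '["QED is the quantum field theory of electrodynamics"]}'}}}]},
    {"choices": [{"delta": {}}]},  # terminal chunk carries no function_call delta
]

def accumulate_arguments(stream):
    """Concatenate function-call argument fragments, as idea_gen does."""
    buffer = ""
    for chunk in stream:
        delta = chunk["choices"][0]["delta"]
        if delta.get("function_call"):
            buffer += delta["function_call"]["arguments"]
    return buffer

args = json.loads(accumulate_arguments(chunks))
print(args["novel_idea"])  # False
```

Only once the fragments are joined do they form a parseable JSON object, which is why `process_idea` needs a fallback path for partially malformed output.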
4_Gerando_Ideias_de_Pesquisa_com_GPT4/knowledge_custom.py ADDED
@@ -0,0 +1,686 @@
+ knowledge = ["""In particle physics, quantum electrodynamics (QED) is the relativistic quantum field theory of electrodynamics. In essence, it describes how light and matter interact and is the first theory where full agreement between quantum mechanics and special relativity is achieved. QED mathematically describes all phenomena involving electrically charged particles interacting by means of exchange of photons and represents the quantum counterpart of classical electromagnetism giving a complete account of matter and light interaction.
+
+ In technical terms, QED can be described as a perturbation theory of the electromagnetic quantum vacuum. Richard Feynman called it "the jewel of physics" for its extremely accurate predictions of quantities like the anomalous magnetic moment of the electron and the Lamb shift of the energy levels of hydrogen.[1]: Ch1
+
+ History
+ Main articles: History of quantum mechanics and History of quantum field theory
+
+ Paul Dirac
+ The first formulation of a quantum theory describing radiation and matter interaction is attributed to British scientist Paul Dirac, who (during the 1920s) was able to compute the coefficient of spontaneous emission of an atom.[2]
+
+ Dirac described the quantization of the electromagnetic field as an ensemble of harmonic oscillators with the introduction of the concept of creation and annihilation operators of particles. In the following years, with contributions from Wolfgang Pauli, Eugene Wigner, Pascual Jordan, Werner Heisenberg and an elegant formulation of quantum electrodynamics by Enrico Fermi,[3] physicists came to believe that, in principle, it would be possible to perform any computation for any physical process involving photons and charged particles. However, further studies by Felix Bloch with Arnold Nordsieck,[4] and Victor Weisskopf,[5] in 1937 and 1939, revealed that such computations were reliable only at a first order of perturbation theory, a problem already pointed out by Robert Oppenheimer.[6] At higher orders in the series infinities emerged, making such computations meaningless and casting serious doubts on the internal consistency of the theory itself. With no solution for this problem known at the time, it appeared that a fundamental incompatibility existed between special relativity and quantum mechanics.
+
+
+ Hans Bethe
+ Difficulties with the theory increased through the end of the 1940s. Improvements in microwave technology made it possible to take more precise measurements of the shift of the levels of a hydrogen atom,[7] now known as the Lamb shift and magnetic moment of the electron.[8] These experiments exposed discrepancies which the theory was unable to explain.
+
+ A first indication of a possible way out was given by Hans Bethe in 1947,[9] after attending the Shelter Island Conference.[10] While he was traveling by train from the conference to Schenectady he made the first non-relativistic computation of the shift of the lines of the hydrogen atom as measured by Lamb and Retherford.[9] Despite the limitations of the computation, agreement was excellent. The idea was simply to attach infinities to corrections of mass and charge that were actually fixed to a finite value by experiments. In this way, the infinities get absorbed in those constants and yield a finite result in good agreement with experiments. This procedure was named renormalization.
+
+
+ Feynman (center) and Oppenheimer (right) at Los Alamos.
+ Based on Bethe's intuition and fundamental papers on the subject by Shin'ichirō Tomonaga,[11] Julian Schwinger,[12][13] Richard Feynman[14][15][16] and Freeman Dyson,[17][18] it was finally possible to get fully covariant formulations that were finite at any order in a perturbation series of quantum electrodynamics. Shin'ichirō Tomonaga, Julian Schwinger and Richard Feynman were jointly awarded with the 1965 Nobel Prize in Physics for their work in this area.[19] Their contributions, and those of Freeman Dyson, were about covariant and gauge-invariant formulations of quantum electrodynamics that allow computations of observables at any order of perturbation theory. Feynman's mathematical technique, based on his diagrams, initially seemed very different from the field-theoretic, operator-based approach of Schwinger and Tomonaga, but Freeman Dyson later showed that the two approaches were equivalent.[17] Renormalization, the need to attach a physical meaning at certain divergences appearing in the theory through integrals, has subsequently become one of the fundamental aspects of quantum field theory and has come to be seen as a criterion for a theory's general acceptability. Even though renormalization works very well in practice, Feynman was never entirely comfortable with its mathematical validity, even referring to renormalization as a "shell game" and "hocus pocus".[1]: 128
+
+ QED has served as the model and template for all subsequent quantum field theories. One such subsequent theory is quantum chromodynamics, which began in the early 1960s and attained its present form in the 1970s work by H. David Politzer, Sidney Coleman, David Gross and Frank Wilczek. Building on the pioneering work of Schwinger, Gerald Guralnik, Dick Hagen, and Tom Kibble,[20][21] Peter Higgs, Jeffrey Goldstone, and others, Sheldon Glashow, Steven Weinberg and Abdus Salam independently showed how the weak nuclear force and quantum electrodynamics could be merged into a single electroweak force""",
+ """Chaos theory is an interdisciplinary area of scientific study and branch of mathematics focused on underlying patterns and deterministic laws of dynamical systems that are highly sensitive to initial conditions, and were once thought to have completely random states of disorder and irregularities.[1] Chaos theory states that within the apparent randomness of chaotic complex systems, there are underlying patterns, interconnection, constant feedback loops, repetition, self-similarity, fractals, and self-organization.[2] The butterfly effect, an underlying principle of chaos, describes how a small change in one state of a deterministic nonlinear system can result in large differences in a later state (meaning that there is sensitive dependence on initial conditions).[3] A metaphor for this behavior is that a butterfly flapping its wings in Texas can cause a tornado in Brazil.[4][5][6]
+
+ Small differences in initial conditions, such as those due to errors in measurements or due to rounding errors in numerical computation, can yield widely diverging outcomes for such dynamical systems, rendering long-term prediction of their behavior impossible in general.[7] This can happen even though these systems are deterministic, meaning that their future behavior follows a unique evolution[8] and is fully determined by their initial conditions, with no random elements involved.[9] In other words, the deterministic nature of these systems does not make them predictable.[10][11] This behavior is known as deterministic chaos, or simply chaos. The theory was summarized by Edward Lorenz as:[12]
+
+ Chaos: When the present determines the future, but the approximate present does not approximately determine the future.
+
+ Chaotic behavior exists in many natural systems, including fluid flow, heartbeat irregularities, weather, and climate.[13][14][8] It also occurs spontaneously in some systems with artificial components, such as road traffic.[2] This behavior can be studied through the analysis of a chaotic mathematical model, or through analytical techniques such as recurrence plots and Poincaré maps. Chaos theory has applications in a variety of disciplines, including meteorology,[8] anthropology,[15] sociology, environmental science, computer science, engineering, economics, ecology, and pandemic crisis management.[16][17] The theory formed the basis for such fields of study as complex dynamical systems, edge of chaos theory, and self-assembly processes.
+
+ Introduction
+ Chaos theory concerns deterministic systems whose behavior can, in principle, be predicted. Chaotic systems are predictable for a while and then 'appear' to become random. The amount of time for which the behavior of a chaotic system can be effectively predicted depends on three things: how much uncertainty can be tolerated in the forecast, how accurately its current state can be measured, and a time scale depending on the dynamics of the system, called the Lyapunov time. Some examples of Lyapunov times are: chaotic electrical circuits, about 1 millisecond; weather systems, a few days (unproven); the inner solar system, 4 to 5 million years.[18] In chaotic systems, the uncertainty in a forecast increases exponentially with elapsed time. Hence, mathematically, doubling the forecast time more than squares the proportional uncertainty in the forecast. This means, in practice, a meaningful prediction cannot be made over an interval of more than two or three times the Lyapunov time. When meaningful predictions cannot be made, the system appears random.[19]
+
+ Chaos theory is a method of qualitative and quantitative analysis used to investigate the behavior of dynamic systems that cannot be explained and predicted by single data relationships, but must be explained and predicted by whole, continuous data relationships.
+
+ Chaotic dynamics
+
+ The map defined by x → 4 x (1 – x) and y → (x + y) mod 1 displays sensitivity to initial x positions. Here, two series of x and y values diverge markedly over time from a tiny initial difference.
+ In common usage, "chaos" means "a state of disorder".[20][21] However, in chaos theory, the term is defined more precisely. Although no universally accepted mathematical definition of chaos exists, a commonly used definition, originally formulated by Robert L. Devaney, says that to classify a dynamical system as chaotic, it must have these properties:[22]
+
+ it must be sensitive to initial conditions,
+ it must be topologically transitive,
+ it must have dense periodic orbits.
+ In some cases, the last two properties above have been shown to actually imply sensitivity to initial conditions.[23][24] In the discrete-time case, this is true for all continuous maps on metric spaces.[25] In these cases, while it is often the most practically significant property, "sensitivity to initial conditions" need not be stated in the definition.
+
+ If attention is restricted to intervals, the second property implies the other two.[26] An alternative and a generally weaker definition of chaos uses only the first two properties in the above list.[27]
+
+ Sensitivity to initial conditions
+ Main article: Butterfly effect
+
+ Lorenz equations used to generate plots for the y variable. The initial conditions for x and z were kept the same but those for y were changed between 1.001, 1.0001 and 1.00001. The values for ρ, σ and β were 45.92, 16 and 4 respectively. As can be seen from the graph, even the slightest difference in initial values causes significant changes after about 12 seconds of evolution in the three cases. This is an example of sensitive dependence on initial conditions.
+ Sensitivity to initial conditions means that each point in a chaotic system is arbitrarily closely approximated by other points that have significantly different future paths or trajectories. Thus, an arbitrarily small change or perturbation of the current trajectory may lead to significantly different future behavior.[2]
+
+ Sensitivity to initial conditions is popularly known as the "butterfly effect", so-called because of the title of a paper given by Edward Lorenz in 1972 to the American Association for the Advancement of Science in Washington, D.C., entitled Predictability: Does the Flap of a Butterfly's Wings in Brazil set off a Tornado in Texas?.[28] The flapping wing represents a small change in the initial condition of the system, which causes a chain of events that prevents the predictability of large-scale phenomena. Had the butterfly not flapped its wings, the trajectory of the overall system could have been vastly different.
+
+ As suggested in Lorenz's book entitled The Essence of Chaos, published in 1993,[5] "sensitive dependence can serve as an acceptable definition of chaos". In the same book, Lorenz defined the butterfly effect as: "The phenomenon that a small alteration in the state of a dynamical system will cause subsequent states to differ greatly from the states that would have followed without the alteration." The above definition is consistent with the sensitive dependence of solutions on initial conditions (SDIC). An idealized skiing model was developed to illustrate the sensitivity of time-varying paths to initial positions.[5] A predictability horizon can be determined before the onset of SDIC (i.e., prior to significant separations of initial nearby trajectories).[29]
+
+ A consequence of sensitivity to initial conditions is that if we start with a limited amount of information about the system (as is usually the case in practice), then beyond a certain time, the system would no longer be predictable. This is most prevalent in the case of weather, which is generally predictable only about a week ahead.[30] This does not mean that one cannot assert anything about events far in the future, only that some restrictions on the system are present. For example, we know that the temperature of the surface of the earth will not naturally reach 100 °C (212 °F) or fall below −130 °C (−202 °F) on earth (during the current geologic era), but we cannot predict exactly which day will have the hottest temperature of the year.
+
+ In more mathematical terms, the Lyapunov exponent measures the sensitivity to initial conditions, in the form of rate of exponential divergence from the perturbed initial conditions.[31] More specifically, given two starting trajectories in the phase space that are infinitesimally close, with initial separation δZ₀, the two trajectories end up diverging at a rate given by |δZ(t)| ≈ e^{λt} |δZ₀|, where t is the time and λ is the Lyapunov exponent. The rate of separation depends on the orientation of the initial separation vector, so a whole spectrum of Lyapunov exponents can exist. The number of Lyapunov exponents is equal to the number of dimensions of the phase space, though it is common to just refer to the largest one. For example, the maximal Lyapunov exponent (MLE) is most often used, because it determines the overall predictability of the system. A positive MLE is usually taken as an indication that the system is chaotic.[8]
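The divergence law |δZ(t)| ≈ e^{λt} |δZ₀| can be checked numerically for the fully chaotic logistic map x → 4x(1 − x), whose maximal Lyapunov exponent is ln 2. A minimal sketch, averaging log|f′(x)| = log|4 − 8x| along a long orbit (the starting point and iteration counts are arbitrary choices):

```python
import math

def lyapunov_logistic(x0=0.3, n=100_000, burn_in=1_000):
    """Estimate the maximal Lyapunov exponent of x -> 4x(1-x) by
    averaging log|f'(x)| = log|4 - 8x| along an orbit."""
    x = x0
    for _ in range(burn_in):      # discard the transient
        x = 4 * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(4 - 8 * x))
        x = 4 * x * (1 - x)
    return total / n

print(lyapunov_logistic())  # ≈ 0.693, i.e. close to ln 2
```

A positive estimate near ln 2 ≈ 0.693 signals exponential divergence of nearby orbits, matching the formula above.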
+
+ In addition to the above property, other properties related to sensitivity of initial conditions also exist. These include, for example, measure-theoretical mixing (as discussed in ergodic theory) and properties of a K-system.[11]
+
+ Non-periodicity
+ A chaotic system may have sequences of values for the evolving variable that exactly repeat themselves, giving periodic behavior starting from any point in that sequence. However, such periodic sequences are repelling rather than attracting, meaning that if the evolving variable is outside the sequence, however close, it will not enter the sequence and in fact, will diverge from it. Thus for almost all initial conditions, the variable evolves chaotically with non-periodic behavior.
+
+ Topological mixing
+
+ Six iterations of a set of states [x, y] passed through the logistic map. The first iterate (blue) is the initial condition, which essentially forms a circle. Animation shows the first to the sixth iteration of the circular initial conditions. It can be seen that mixing occurs as we progress in iterations. The sixth iteration shows that the points are almost completely scattered in the phase space. Had we progressed further in iterations, the mixing would have been homogeneous and irreversible. The logistic map has equation x_{k+1} = 4x_k(1 − x_k). To expand the state-space of the logistic map into two dimensions, a second state, y, was created as y_{k+1} = x_k + y_k if x_k + y_k < 1, and y_{k+1} = x_k + y_k − 1 otherwise.
+
+ The map defined by x → 4 x (1 – x) and y → (x + y) mod 1 also displays topological mixing. Here, the blue region is transformed by the dynamics first to the purple region, then to the pink and red regions, and eventually to a cloud of vertical lines scattered across the space.
+ Topological mixing (or the weaker condition of topological transitivity) means that the system evolves over time so that any given region or open set of its phase space eventually overlaps with any other given region. This mathematical concept of "mixing" corresponds to the standard intuition, and the mixing of colored dyes or fluids is an example of a chaotic system.
+
+ Topological mixing is often omitted from popular accounts of chaos, which equate chaos with only sensitivity to initial conditions. However, sensitive dependence on initial conditions alone does not give chaos. For example, consider the simple dynamical system produced by repeatedly doubling an initial value. This system has sensitive dependence on initial conditions everywhere, since any pair of nearby points eventually becomes widely separated. However, this example has no topological mixing, and therefore has no chaos. Indeed, it has extremely simple behavior: all points except 0 tend to positive or negative infinity.
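The two-dimensional map x → 4x(1 − x), y → (x + y) mod 1 from the figure captions exhibits both sensitivity and mixing. A small sketch of its sensitivity, perturbing x by one part in a billion (the initial values are arbitrary choices):

```python
def step(x, y):
    """One iteration of the map x -> 4x(1-x), y -> (x+y) mod 1."""
    return 4 * x * (1 - x), (x + y) % 1

a = (0.2, 0.3)
b = (0.2 + 1e-9, 0.3)      # tiny perturbation of the initial x

max_gap = 0.0
for _ in range(50):
    a = step(*a)
    b = step(*b)
    max_gap = max(max_gap, abs(a[0] - b[0]))

print(max_gap)  # the billionth-part difference has grown to order 0.1 to 1
```

After a few dozen iterations the two orbits are macroscopically different, even though they started a billionth apart.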
+
+ Topological transitivity
+ A map f : X → X is said to be topologically transitive if for any pair of non-empty open sets U, V ⊂ X, there exists k > 0 such that f^k(U) ∩ V ≠ ∅. Topological transitivity is a weaker version of topological mixing. Intuitively, if a map is topologically transitive then given a point x and a region V, there exists a point y near x whose orbit passes through V. This implies that it is impossible to decompose the system into two open sets.[32]
+
+ An important related theorem is the Birkhoff Transitivity Theorem. It is easy to see that the existence of a dense orbit implies topological transitivity. The Birkhoff Transitivity Theorem states that if X is a second countable, complete metric space, then topological transitivity implies the existence of a dense set of points in X that have dense orbits.[33]
+
+ Density of periodic orbits
+ For a chaotic system to have dense periodic orbits means that every point in the space is approached arbitrarily closely by periodic orbits.[32] The one-dimensional logistic map defined by x → 4 x (1 – x) is one of the simplest systems with density of periodic orbits. For example, (5 − √5)/8 → (5 + √5)/8 → (5 − √5)/8 (or approximately 0.3454915 → 0.9045085 → 0.3454915) is an (unstable) orbit of period 2, and similar orbits exist for periods 4, 8, 16, etc. (indeed, for all the periods specified by Sharkovskii's theorem).[34]
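The period-2 orbit quoted above can be verified directly: applying x → 4x(1 − x) to (5 − √5)/8 yields (5 + √5)/8, and applying it again returns the original value. A quick check:

```python
import math

def logistic(x):
    return 4 * x * (1 - x)

p = (5 - math.sqrt(5)) / 8   # ≈ 0.3454915
q = (5 + math.sqrt(5)) / 8   # ≈ 0.9045085

print(math.isclose(logistic(p), q))  # True
print(math.isclose(logistic(q), p))  # True
```

The algebra behind it: 4p(1 − p) = (5 − √5)(3 + √5)/16 = (10 + 2√5)/16 = (5 + √5)/8, and symmetrically for the return step.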
209
+
210
+ Sharkovskii's theorem is the basis of the Li and Yorke[35] (1975) proof that any continuous one-dimensional system that exhibits a regular cycle of period three will also display regular cycles of every other length, as well as completely chaotic orbits.
211
+
212
+ Strange attractors
213
+
214
+ The Lorenz attractor displays chaotic behavior. These two plots demonstrate sensitive dependence on initial conditions within the region of phase space occupied by the attractor.
215
+ Some dynamical systems, like the one-dimensional logistic map defined by x → 4 x (1 – x), are chaotic everywhere, but in many cases chaotic behavior is found only in a subset of phase space. The cases of most interest arise when the chaotic behavior takes place on an attractor, since then a large set of initial conditions leads to orbits that converge to this chaotic region.[36]
216
+
217
+ An easy way to visualize a chaotic attractor is to start with a point in the basin of attraction of the attractor, and then simply plot its subsequent orbit. Because of the topological transitivity condition, this is likely to produce a picture of the entire final attractor, and indeed both orbits shown in the figure on the right give a picture of the general shape of the Lorenz attractor. This attractor results from a simple three-dimensional model of the Lorenz weather system. The Lorenz attractor is perhaps one of the best-known chaotic system diagrams, probably because it is not only one of the first, but it is also one of the most complex, and as such gives rise to a very interesting pattern that, with a little imagination, looks like the wings of a butterfly.
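The sensitive dependence visible in such plots can be sketched numerically. The following is an illustrative forward-Euler integration of the Lorenz system with the classic parameters (σ = 10, ρ = 28, β = 8/3); the step size, run length, and perturbation size are arbitrary choices for the sketch, not values from the text:

```python
# Two Lorenz orbits starting 1e-9 apart diverge to macroscopic separation:
# sensitive dependence on initial conditions.
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations."""
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)   # perturbed initial condition

max_gap = 0.0
for _ in range(5000):         # 50 time units
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    max_gap = max(max_gap, abs(a[0] - b[0]))

print(max_gap > 1.0)  # True: the 1e-9 difference has grown past O(1)
```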
218
+
219
+ Unlike fixed-point attractors and limit cycles, the attractors that arise from chaotic systems, known as strange attractors, have great detail and complexity. Strange attractors occur in both continuous dynamical systems (such as the Lorenz system) and in some discrete systems (such as the Hénon map). Other discrete dynamical systems have a repelling structure called a Julia set, which forms at the boundary between basins of attraction of fixed points. Julia sets can be thought of as strange repellers. Both strange attractors and Julia sets typically have a fractal structure, and the fractal dimension can be calculated for them""",
220
+ """For a more general introduction to the topic, see Introduction to quantum mechanics.
298
+
299
+ Schrödinger's equation inscribed on the gravestone of Annemarie and Erwin Schrödinger. (Newton's dot notation for the time derivative is used.)
300
+ The Schrödinger equation is a linear partial differential equation that governs the wave function of a quantum-mechanical system.[1]: 1–2  Its discovery was a significant landmark in the development of quantum mechanics. The equation is named after Erwin Schrödinger, who postulated the equation in 1925 and published it in 1926, forming the basis for the work that resulted in his Nobel Prize in Physics in 1933.[2][3]
301
+
302
+ Conceptually the Schrödinger equation is the quantum counterpart of Newton's second law in classical mechanics. Given a set of known initial conditions, Newton's second law makes a mathematical prediction as to what path a given physical system will take over time. The Schrödinger equation gives the evolution over time of a wave function, the quantum-mechanical characterization of an isolated physical system. The equation can be derived from the fact that the time-evolution operator must be unitary, and must therefore be generated by the exponential of a self-adjoint operator, which is the quantum Hamiltonian.
303
+
304
+ The Schrödinger equation is not the only way to study quantum mechanical systems and make predictions. The other formulations of quantum mechanics include matrix mechanics, introduced by Werner Heisenberg, and the path integral formulation, developed chiefly by Richard Feynman. Paul Dirac incorporated matrix mechanics and the Schrödinger equation into a single formulation. When these approaches are compared, the use of the Schrödinger equation is sometimes called "wave mechanics".
305
+
306
+ Definition
307
+ Preliminaries
308
+
309
+ Complex plot of a wave function that satisfies the nonrelativistic Schrödinger equation with V = 0. In other words, this corresponds to a particle traveling freely through empty space. For more details see wave packet
310
+ Introductory courses on physics or chemistry typically introduce the Schrödinger equation in a way that can be appreciated knowing only the concepts and notations of basic calculus, particularly derivatives with respect to space and time. A special case of the Schrödinger equation that admits a statement in those terms is the position-space Schrödinger equation for a single nonrelativistic particle in one dimension:
311
+
+ {\displaystyle i\hbar {\frac {\partial }{\partial t}}\Psi (x,t)=\left[-{\frac {\hbar ^{2}}{2m}}{\frac {\partial ^{2}}{\partial x^{2}}}+V(x,t)\right]\Psi (x,t).}
351
+ Here, Ψ(x,t) is a wave function, a function that assigns a complex number to each point x at each time t. The parameter m is the mass of the particle, and V(x,t) is the potential that represents the environment in which the particle exists.[4]: 74  The constant i is the imaginary unit, and ℏ is the reduced Planck constant, which has units of action (energy multiplied by time).[4]: 10
375
+ Broadening beyond this simple case, the mathematical formulation of quantum mechanics developed by Paul Dirac,[5] David Hilbert,[6] John von Neumann,[7] and Hermann Weyl[8] defines the state of a quantum mechanical system to be a vector |ψ⟩ belonging to a (separable) Hilbert space ℋ. This vector is postulated to be normalized under the Hilbert space's inner product, that is, in Dirac notation it obeys ⟨ψ|ψ⟩ = 1. The exact nature of this Hilbert space is dependent on the system – for example, for describing position and momentum the Hilbert space is the space of complex square-integrable functions L²(ℂ),[9] while the Hilbert space for the spin of a single proton is simply the space of two-dimensional complex vectors ℂ² with the usual inner product.[4]: 322
398
+
399
+ Physical quantities of interest – position, momentum, energy, spin – are represented by "observables", which are Hermitian (more precisely, self-adjoint) linear operators acting on the Hilbert space. A wave function can be an eigenvector of an observable, in which case it is called an eigenstate, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. More generally, a quantum state will be a linear combination of the eigenstates, known as a quantum superposition. When an observable is measured, the result will be one of its eigenvalues with probability given by the Born rule: in the simplest case the eigenvalue λ is non-degenerate and the probability is given by |⟨λ|ψ⟩|², where |λ⟩ is its associated eigenvector. More generally, the eigenvalue is degenerate and the probability is given by ⟨ψ|P_λ|ψ⟩, where P_λ is the projector onto its associated eigenspace.[note 1]
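The non-degenerate case of the Born rule can be illustrated on the smallest interesting Hilbert space, ℂ². The state and measurement basis below are chosen purely for illustration:

```python
# Born rule sketch on a single two-level system ("qubit"): the probability
# of obtaining eigenvalue lambda is |<lambda|psi>|^2 for its eigenvector.
def inner(u, v):
    """Dirac inner product <u|v>, conjugating the bra components."""
    return sum(a.conjugate() * b for a, b in zip(u, v))

s = 2 ** -0.5
psi = [s, s]      # equal superposition (|0> + |1>) / sqrt(2)
ket0 = [1, 0]     # eigenvector for outcome "0"
ket1 = [0, 1]     # eigenvector for outcome "1"

p0 = abs(inner(ket0, psi)) ** 2
p1 = abs(inner(ket1, psi)) ** 2
print(round(p0, 3), round(p1, 3))  # 0.5 0.5 -- probabilities sum to 1
```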
423
+
424
+ A momentum eigenstate would be a perfectly monochromatic wave of infinite extent, which is not square-integrable. Likewise a position eigenstate would be a Dirac delta distribution, not square-integrable and technically not a function at all. Consequently, neither can belong to the particle's Hilbert space. Physicists sometimes introduce fictitious "bases" for a Hilbert space comprising elements outside that space. These are invented for calculational convenience and do not represent physical states.[10]: 100–105  Thus, a position-space wave function Ψ(x,t) as used above can be written as the inner product of a time-dependent state vector |Ψ(t)⟩ with unphysical but convenient "position eigenstates" |x⟩:
441
+
+ {\displaystyle \Psi (x,t)=\langle x|\Psi (t)\rangle .}
459
+ Time-dependent equation
460
+ The form of the Schrödinger equation depends on the physical situation. The most general form is the time-dependent Schrödinger equation, which gives a description of a system evolving with time:[11]: 143 
461
+
462
+ Time-dependent Schrödinger equation (general)
+ {\displaystyle i\hbar {\frac {d}{dt}}\vert \Psi (t)\rangle ={\hat {H}}\vert \Psi (t)\rangle }
483
+
484
+ where t is time, |Ψ(t)⟩ is the state vector of the quantum system (Ψ being the Greek letter psi), and Ĥ is an observable, the Hamiltonian operator.
496
+
497
+
498
+ Each of these three rows is a wave function which satisfies the time-dependent Schrödinger equation for a harmonic oscillator. Left: The real part (blue) and imaginary part (red) of the wave function. Right: The probability distribution of finding the particle with this wave function at a given position. The top two rows are examples of stationary states, which correspond to standing waves. The bottom row is an example of a state which is not a stationary state. The right column illustrates why stationary states are called "stationary".
499
+ The term "Schrödinger equation" can refer to either the general equation or the specific nonrelativistic version. The general equation is indeed quite general, used throughout quantum mechanics, for everything from the Dirac equation to quantum field theory, by plugging in diverse expressions for the Hamiltonian. The specific nonrelativistic version is an approximation that yields accurate results in many situations, but only to a certain extent (see relativistic quantum mechanics and relativistic quantum field theory).
500
+
501
+ To apply the Schrödinger equation, write down the Hamiltonian for the system, accounting for the kinetic and potential energies of the particles constituting the system, then insert it into the Schrödinger equation. The resulting partial differential equation is solved for the wave function, which contains information about the system. In practice, the square of the absolute value of the wave function at each point is taken to define a probability density function.[4]: 78  For example, given a wave function in position space Ψ(x,t) as above, we have
509
+
+ {\displaystyle \Pr(x,t)=|\Psi (x,t)|^{2}.}
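As a sanity check on this probability-density interpretation, |Ψ|² for a normalized wave function should integrate to 1. A minimal sketch using a single Gaussian snapshot (the width σ and the integration grid are arbitrary illustrative choices, not from the text):

```python
# Integrate |psi(x)|^2 for a normalized Gaussian wave packet snapshot;
# the total probability should come out (numerically) to ~1.
import math

sigma = 1.0  # packet width, arbitrary units

def psi(x: float) -> complex:
    """Normalized Gaussian: (2*pi*sigma^2)^(-1/4) * exp(-x^2 / (4*sigma^2))."""
    norm = (2 * math.pi * sigma ** 2) ** -0.25
    return complex(norm * math.exp(-x ** 2 / (4 * sigma ** 2)), 0.0)

# Riemann sum of |psi|^2 over [-10, 10]; tails beyond are negligible.
dx = 0.01
total = sum(abs(psi(-10 + i * dx)) ** 2 * dx for i in range(int(20 / dx)))
print(abs(total - 1.0) < 1e-4)  # True: probabilities integrate to ~1
```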
528
+ Time-independent equation
529
+ The time-dependent Schrödinger equation described above predicts that wave functions can form standing waves, called stationary states. These states are particularly important as their individual study later simplifies the task of solving the time-dependent Schrödinger equation for any state. Stationary states can also be described by a simpler form of the Schrödinger equation, the time-independent Schrödinger equation.
530
+
531
+ Time-independent Schrödinger equation (general)
+ {\displaystyle \operatorname {\hat {H}} |\Psi \rangle =E|\Psi \rangle }
543
+
544
+ where E is the energy of the system.[4]: 134  This is only used when the Hamiltonian itself is not dependent on time explicitly. However, even in this case the total wave function is dependent on time as explained in the section on linearity below. In the language of linear algebra, this equation is an eigenvalue equation. Therefore, the wave function is an eigenfunction of the Hamiltonian operator with corresponding eigenvalue(s) E.""",
549
+ """"Gravitation" and "Law of Gravity" redirect here. For other uses, see Gravitation (disambiguation) and Law of Gravity (disambiguation).
550
+
551
+ Stars from three massive galaxies (UGC 6945) are being attracted by gravity.
574
+ In physics, gravity (from Latin gravitas 'weight'[1]) is a fundamental interaction which causes mutual attraction between all things that have mass. Gravity is, by far, the weakest of the four fundamental interactions, approximately 10³⁸ times weaker than the strong interaction, 10³⁶ times weaker than the electromagnetic force and 10²⁹ times weaker than the weak interaction. As a result, it has no significant influence at the level of subatomic particles.[2] However, gravity is the most significant interaction between objects at the macroscopic scale, and it determines the motion of planets, stars, galaxies, and even light.
575
+
576
+ On Earth, gravity gives weight to physical objects, and the Moon's gravity is responsible for sublunar tides in the oceans (the corresponding antipodal tide is caused by the inertia of the Earth and Moon orbiting one another). Gravity also has many important biological functions, helping to guide the growth of plants through the process of gravitropism and influencing the circulation of fluids in multicellular organisms.
577
+
578
+ The gravitational attraction between the original gaseous matter in the universe caused it to coalesce and form stars which eventually condensed into galaxies, so gravity is responsible for many of the large-scale structures in the universe. Gravity has an infinite range, although its effects become weaker as objects get farther away.
579
+
580
+ Gravity is most accurately described by the general theory of relativity (proposed by Albert Einstein in 1915), which describes gravity not as a force, but as the curvature of spacetime, caused by the uneven distribution of mass, and causing masses to move along geodesic lines. The most extreme example of this curvature of spacetime is a black hole, from which nothing—not even light—can escape once past the black hole's event horizon.[3] However, for most applications, gravity is well approximated by Newton's law of universal gravitation, which describes gravity as a force causing any two bodies to be attracted toward each other, with magnitude proportional to the product of their masses and inversely proportional to the square of the distance between them:
581
+
+ {\displaystyle F=G{\frac {m_{1}m_{2}}{r^{2}}},}
593
+ where F is the force, m1 and m2 are the masses of the objects interacting, r is the distance between the centers of the masses and G is the gravitational constant.
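A quick numerical sketch of this formula, using commonly cited approximate values for G and for Earth's mass and mean radius (these constants are not from the text above):

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r**2,
# applied to a 70 kg mass at the Earth's surface.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2 (approx.)
m_earth = 5.972e24   # mass of the Earth, kg (approx.)
m_test = 70.0        # hypothetical test mass, kg
r = 6.371e6          # mean Earth radius, m (approx.)

F = G * m_earth * m_test / r ** 2
print(F)  # about 687 N, i.e. the test mass's weight at the surface
```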
594
+ Current models of particle physics imply that the earliest instance of gravity in the universe, possibly in the form of quantum gravity, supergravity or a gravitational singularity, along with ordinary space and time, developed during the Planck epoch (up to 10⁻⁴³ seconds after the birth of the universe), possibly from a primeval state, such as a false vacuum, quantum vacuum or virtual particle, in a currently unknown manner.[4] Scientists are currently working to develop a theory of gravity consistent with quantum mechanics, a quantum gravity theory,[5] which would allow gravity to be united in a common mathematical framework (a theory of everything) with the other three fundamental interactions of physics.""",
595
+ """In moral and political philosophy, the social contract is a theory or model that originated during the Age of Enlightenment and usually, although not always, concerns the legitimacy of the authority of the state over the individual.[1]
596
+
597
+ Social contract arguments typically are that individuals have consented, either explicitly or tacitly, to surrender some of their freedoms and submit to the authority (of the ruler, or to the decision of a majority) in exchange for protection of their remaining rights or maintenance of the social order.[2][3] The relation between natural and legal rights is often a topic of social contract theory. The term takes its name from The Social Contract (French: Du contrat social ou Principes du droit politique), a 1762 book by Jean-Jacques Rousseau that discussed this concept. Although the antecedents of social contract theory are found in antiquity, in Greek and Stoic philosophy and Roman and Canon Law, the heyday of the social contract was the mid-17th to early 19th centuries, when it emerged as the leading doctrine of political legitimacy.
598
+
599
+ The starting point for most social contract theories is an examination of the human condition absent of any political order (termed the "state of nature" by Thomas Hobbes).[4] In this condition, individuals' actions are bound only by their personal power and conscience. From this shared starting point, social contract theorists seek to demonstrate why rational individuals would voluntarily consent to give up their natural freedom to obtain the benefits of political order.
600
+
601
+ Prominent 17th- and 18th-century theorists of the social contract and natural rights included Hugo de Groot (1625), Thomas Hobbes (1651), Samuel von Pufendorf (1673), John Locke (1689), Jean-Jacques Rousseau (1762) and Immanuel Kant (1797), each approaching the concept of political authority differently. Grotius posited that individual humans had natural rights. Thomas Hobbes famously said that in a "state of nature", human life would be "solitary, poor, nasty, brutish and short". In the absence of political order and law, everyone would have unlimited natural freedoms, including the "right to all things" and thus the freedom to plunder, rape and murder; there would be an endless "war of all against all" (bellum omnium contra omnes). To avoid this, free men contract with each other to establish political community (civil society) through a social contract in which they all gain security in return for subjecting themselves to an absolute sovereign, one man or an assembly of men. Though the sovereign's edicts may well be arbitrary and tyrannical, Hobbes saw absolute government as the only alternative to the terrifying anarchy of a state of nature. Hobbes asserted that humans consent to abdicate their rights in favor of the absolute authority of government (whether monarchical or parliamentary).
602
+
603
+ Alternatively, Locke and Rousseau argued that we gain civil rights in return for accepting the obligation to respect and defend the rights of others, giving up some freedoms to do so.
604
+
605
+ The central assertion that social contract theory approaches is that law and political order are not natural, but human creations. The social contract and the political order it creates are simply the means towards an end—the benefit of the individuals involved—and legitimate only to the extent that they fulfill their part of the agreement. Hobbes argued that government is not a party to the original contract and citizens are not obligated to submit to the government when it is too weak to act effectively to suppress factionalism and civil unrest.
606
+
607
+ Social contract theories were eclipsed in the 19th century in favor of utilitarianism, Hegelianism and Marxism; they were revived in the 20th century, notably in the form of a thought experiment by John Rawls.[5]""",
608
+ """Music theory is the study of the practices and possibilities of music. The Oxford Companion to Music describes three interrelated uses of the term "music theory": The first is the "rudiments", that are needed to understand music notation (key signatures, time signatures, and rhythmic notation); the second is learning scholars' views on music from antiquity to the present; the third is a sub-topic of musicology that "seeks to define processes and general principles in music". The musicological approach to theory differs from music analysis "in that it takes as its starting-point not the individual work or performance but the fundamental materials from which it is built."[1]
609
+
610
+ Music theory is frequently concerned with describing how musicians and composers make music, including tuning systems and composition methods among other topics. Because of the ever-expanding conception of what constitutes music, a more inclusive definition could be the consideration of any sonic phenomena, including silence. This is not an absolute guideline, however; for example, the study of "music" in the Quadrivium liberal arts university curriculum, that was common in medieval Europe, was an abstract system of proportions that was carefully studied at a distance from actual musical practice.[n 1] But this medieval discipline became the basis for tuning systems in later centuries and is generally included in modern scholarship on the history of music theory.[n 2]
611
+
612
+ Music theory as a practical discipline encompasses the methods and concepts that composers and other musicians use in creating and performing music. The development, preservation, and transmission of music theory in this sense may be found in oral and written music-making traditions, musical instruments, and other artifacts. For example, ancient instruments from prehistoric sites around the world reveal details about the music they produced and potentially something of the musical theory that might have been used by their makers. In ancient and living cultures around the world, the deep and long roots of music theory are visible in instruments, oral traditions, and current music-making. Many cultures have also considered music theory in more formal ways such as written treatises and music notation. Practical and scholarly traditions overlap, as many practical treatises about music place themselves within a tradition of other treatises, which are cited regularly just as scholarly writing cites earlier research.
613
+
614
+ In modern academia, music theory is a subfield of musicology, the wider study of musical cultures and history. Etymologically, music theory is an act of contemplation of music, from the Greek word θεωρία, meaning a looking at, a viewing; a contemplation, speculation, theory; a sight, a spectacle.[3] As such, it is often concerned with abstract musical aspects such as tuning and tonal systems, scales, consonance and dissonance, and rhythmic relationships. In addition, there is also a body of theory concerning practical aspects, such as the creation or the performance of music, orchestration, ornamentation, improvisation, and electronic sound production.[4] A person who researches or teaches music theory is a music theorist. University study, typically to the MA or PhD level, is required to teach as a tenure-track music theorist in a US or Canadian university. Methods of analysis include mathematics, graphic analysis, and especially analysis enabled by western music notation. Comparative, descriptive, statistical, and other methods are also used. Music theory textbooks, especially in the United States of America, often include elements of musical acoustics, considerations of musical notation, and techniques of tonal composition (harmony and counterpoint), among other topics.""",
615
+ """The foundations of pre-20th-century color theory were built around "pure" or ideal colors, characterized by different sensory experiences rather than attributes of the physical world. This has led to several inaccuracies in traditional color theory principles that are not always remedied in modern formulations.[3]
616
+
617
+ Another issue has been the tendency to describe color effects holistically or categorically, for example as a contrast between "yellow" and "blue" conceived as generic colors, when most color effects are due to contrasts on three relative attributes which define all colors:
618
+
619
+ Value (light vs. dark, or white vs. black),
620
+ Chroma [saturation, purity, strength, intensity] (intense vs. dull), and
621
+ Hue (e.g. the name of the color family: red, yellow, green, cyan, blue, magenta).
622
+ The visual impact of "yellow" vs. "blue" hues in visual design depends on the relative lightness and saturation of the hues.
623
+
624
+ These confusions are partly historical and arose in scientific uncertainty about color perception that was not resolved until the late 19th century when artistic notions were already entrenched. They also arise from the attempt to describe the highly contextual and flexible behavior of color perception in terms of abstract color sensations that can be generated equivalently by any visual media.[citation needed]
625
+
626
+ Many historical "color theorists" have assumed that three "pure" primary colors can mix into all possible colors, and any failure of specific paints or inks to match this ideal performance is due to the impurity or imperfection of the colorants. In reality, only imaginary "primary colors" used in colorimetry can "mix" or quantify all visible (perceptually possible) colors; but to do this, these imaginary primaries are defined as lying outside the range of visible colors; i.e., they cannot be seen. Any three real "primary" colors of light, paint or ink can mix only a limited range of colors, called a gamut, which is always smaller (contains fewer colors) than the full range of colors humans can perceive.[4]""",
627
+ """The Origin of Consciousness in the Breakdown of the Bicameral Mind is a 1976 book by the Princeton psychologist, psychohistorian[a] and consciousness theorist Julian Jaynes (1920-1997). The book addresses the problematic nature of consciousness – "the ability to introspect" – which in Jaynes’ view must be distinguished from sensory awareness and other processes of cognition. Jaynes presents his proposed solution: that consciousness is a learned behavior based more on language and culture than on biology; this solution, in turn, points to the origin of consciousness in ancient human history rather than in metaphysical or evolutionary processes; furthermore, archaeological and historical evidence indicates that prior to the learning of consciousness, human mentality was what Jaynes called "the bicameral mind" – a mentality based on verbal hallucination.
628
+
629
+ The first edition was released in January 1977 in English. Two later editions, in 1982 and in 1990, were released by Jaynes with additions but without alterations. It was Jaynes's only book, and it is still in print, in several languages. In addition to numerous reviews and commentaries, there are several summaries of the book's material, for example, in the journal Behavioral and Brain Sciences, in lectures and discussions published in Canadian Psychology,[1] and in Art/World.
630
+
631
+ Jaynes's theories
632
+ See also: Bicameral mentality
633
+ In his book, Jaynes reviews what one of his early critics acknowledged as the “spectacular history of failure”[2] to explain consciousness – “the human ability to introspect”.[3] Abandoning the assumption that consciousness is innate, Jaynes explains it instead as a learned behavior that “arises ... from language, and specifically from metaphor.”[2] With this understanding, Jaynes then demonstrated that ancient texts and archeology can reveal a history of human mentality alongside the histories of other cultural products. His analysis of the evidence led him not only to place the origin of consciousness during the 2nd millennium BCE but also to hypothesize the existence of an older non-conscious “mentality that he called the bicameral mind, referring to the brain’s two hemispheres”.[4]
634
+
635
+ In the third chapter of the book, "The Mind of the Iliad", Jaynes states that people of the era had no consciousness.
636
+
637
+ There is in general no consciousness in the Iliad. I am saying ‘in general’ because I shall mention some exceptions later. And in general therefore, no words for consciousness or mental acts. The words in the Iliad that in a later age come to mean mental things have different meanings, all of them more concrete. The word psyche, which later means soul or conscious mind, is in most instances life-substances, such as blood or breath: a dying warrior bleeds out his psyche onto the ground or breathes it out in his last gasp. The thumos, which later comes to mean something like emotional soul, is simply motion or agitation. When a man stops moving, the thumos leaves his limbs. But it is also somehow like an organ itself, for when Glaucus prays to Apollo to alleviate his pain and to give him strength to help his friend Sarpedon, Apollo hears his prayer and "casts strength in his thumos" (Iliad, 16:529). The thumos can tell a man to eat, drink, or fight.[5]
638
+
639
+ Jaynes wrote an extensive afterword for the 1990 edition of his book, in which he addressed criticisms and clarified that his theory has four separate hypotheses: consciousness is based on and accessed by language; the non-conscious bicameral mind is based on verbal hallucinations; the breakdown of bicameral mind precedes consciousness, but the dating is variable; the 'double brain' of bicamerality is not today's functional lateralization of the cerebral hemispheres. He also expanded on the impact of consciousness on imagination and memory, notions of The Self, emotions, anxiety, guilt, and sexuality.""",
640
+ """Game theory is the study of mathematical models of strategic interactions among rational agents.[1] It has applications in all fields of social science, as well as in logic, systems science and computer science. The concepts of game theory are used extensively in economics as well.[2] The traditional methods of game theory addressed two-person zero-sum games, in which each participant's gains or losses are exactly balanced by the losses and gains of other participants. In the 21st century, the advanced game theories apply to a wider range of behavioral relations; it is now an umbrella term for the science of logical decision making in humans, animals, as well as computers.
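The zero-sum property described here can be made concrete with the classic matching-pennies game (an illustrative example, not one discussed in the text):

```python
# A two-person zero-sum game: matching pennies. Each entry is
# (row player's payoff, column player's payoff); in a zero-sum game the
# two payoffs of every outcome sum to zero, so one player's gain is
# exactly the other's loss.
payoffs = {
    ("Heads", "Heads"): (1, -1),
    ("Heads", "Tails"): (-1, 1),
    ("Tails", "Heads"): (-1, 1),
    ("Tails", "Tails"): (1, -1),
}

print(all(a + b == 0 for a, b in payoffs.values()))  # True
```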
+
+ Modern game theory began with the idea of mixed-strategy equilibria in two-person zero-sum games and its proof by John von Neumann. Von Neumann's original proof used the Brouwer fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. His paper was followed by the 1944 book Theory of Games and Economic Behavior, co-written with Oskar Morgenstern, which considered cooperative games of several players.[3] The second edition of this book provided an axiomatic theory of expected utility, which allowed mathematical statisticians and economists to treat decision-making under uncertainty. Game theory has thus evolved over time through the sustained efforts of mathematicians, economists, and other academics.
+
+ Game theory was developed extensively in the 1950s by many scholars. It was explicitly applied to evolution in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been widely recognized as an important tool in many fields. As of 2020, with the Nobel Memorial Prize in Economic Sciences going to game theorists Paul Milgrom and Robert B. Wilson, fifteen game theorists have won the economics Nobel Prize. John Maynard Smith was awarded the Crafoord Prize for his application of evolutionary game theory.""",
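The zero-sum idea described above — each participant's gains exactly balanced by the losses of the others — can be sketched in a few lines of Python. This is a standalone illustration, not part of the knowledge strings; the matching-pennies payoffs are the usual textbook ones:

```python
# Matching pennies, a two-person zero-sum game. A[i][j] is the payoff to the
# row player; the column player's payoff is -A[i][j], so every gain is
# exactly balanced by a loss.
A = [[1, -1],
     [-1, 1]]

def expected_payoff(p, q):
    """Row player's expected payoff when Row mixes with p and Column with q."""
    return sum(p[i] * q[j] * A[i][j] for i in range(2) for j in range(2))

# Von Neumann's mixed-strategy equilibrium for this game: both players
# randomize 50/50, and the value of the game is 0.
value = expected_payoff([0.5, 0.5], [0.5, 0.5])
print(value)  # 0.0
```

Against the 50/50 mix, no deviation by either player changes the expected payoff, which is what makes the mixed strategy an equilibrium.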
+ """In mathematics, Hilbert spaces (named after David Hilbert) allow the methods of linear algebra and calculus to be generalized from (finite-dimensional) Euclidean vector spaces to spaces that may be infinite-dimensional. Hilbert spaces arise naturally and frequently in mathematics and physics, typically as function spaces. Formally, a Hilbert space is a vector space equipped with an inner product that defines a distance function for which the space is a complete metric space.
+
+ The earliest Hilbert spaces were studied from this point of view in the first decade of the 20th century by David Hilbert, Erhard Schmidt, and Frigyes Riesz. They are indispensable tools in the theories of partial differential equations, quantum mechanics, Fourier analysis (which includes applications to signal processing and heat transfer), and ergodic theory (which forms the mathematical underpinning of thermodynamics). John von Neumann coined the term Hilbert space for the abstract concept that underlies many of these diverse applications. The success of Hilbert space methods ushered in a very fruitful era for functional analysis. Apart from the classical Euclidean vector spaces, examples of Hilbert spaces include spaces of square-integrable functions, spaces of sequences, Sobolev spaces consisting of generalized functions, and Hardy spaces of holomorphic functions.
+
+ Geometric intuition plays an important role in many aspects of Hilbert space theory. Exact analogs of the Pythagorean theorem and parallelogram law hold in a Hilbert space. At a deeper level, perpendicular projection onto a linear subspace (the analog of "dropping the altitude" of a triangle) plays a significant role in optimization problems and other aspects of the theory. An element of a Hilbert space can be uniquely specified by its coordinates with respect to an orthonormal basis, in analogy with Cartesian coordinates in classical geometry. When this basis is countably infinite, it allows identifying the Hilbert space with the space of the infinite sequences that are square-summable. The latter space is often referred to in the older literature as the Hilbert space.
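The Pythagorean-theorem and parallelogram-law analogs mentioned above are easy to check numerically in the simplest Hilbert space, Euclidean R^3. A standard-library sketch; the particular vectors are arbitrary:

```python
import math

def norm_sq(v):
    """Squared norm induced by the Euclidean inner product."""
    return sum(c * c for c in v)

x = [1.0, 2.0, 3.0]
y = [4.0, -1.0, 0.5]

# Parallelogram law: ||x + y||^2 + ||x - y||^2 == 2||x||^2 + 2||y||^2
lhs = norm_sq([a + b for a, b in zip(x, y)]) + norm_sq([a - b for a, b in zip(x, y)])
rhs = 2 * norm_sq(x) + 2 * norm_sq(y)
assert math.isclose(lhs, rhs)

# Pythagorean theorem: for orthogonal u and v, ||u + v||^2 == ||u||^2 + ||v||^2
u, v = [3.0, 0.0, 0.0], [0.0, 4.0, 0.0]  # <u, v> = 0
assert math.isclose(norm_sq([a + b for a, b in zip(u, v)]), norm_sq(u) + norm_sq(v))
```

The same identities hold verbatim in infinite-dimensional Hilbert spaces; only the inner product changes.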
+ """In the area of abstract algebra known as group theory, the monster group M (also known as the Fischer–Griess monster, or the friendly giant) is the largest sporadic simple group, having order
+   2^46 · 3^20 · 5^9 · 7^6 · 11^2 · 13^3 · 17 · 19 · 23 · 29 · 31 · 41 · 47 · 59 · 71
+   = 808,017,424,794,512,875,886,459,904,961,710,757,005,754,368,000,000,000
+   ≈ 8 × 10^53.
+
+ The finite simple groups have been completely classified. Every such group belongs to one of 18 countably infinite families, or is one of 26 sporadic groups that do not follow such a systematic pattern. The monster group contains 20 sporadic groups (including itself) as subquotients. Robert Griess, who proved the existence of the monster in 1982, has called those 20 groups the happy family, and the remaining six exceptions pariahs.
+
+ It is difficult to give a good constructive definition of the monster because of its complexity. Martin Gardner wrote a popular account of the monster group in his June 1980 Mathematical Games column in Scientific American.[1]
+
+ History
+ The monster was predicted by Bernd Fischer (unpublished, about 1973) and Robert Griess[2] as a simple group containing a double cover of Fischer's baby monster group as a centralizer of an involution. Within a few months, the order of M was found by Griess using the Thompson order formula, and Fischer, Conway, Norton and Thompson discovered other groups as subquotients, including many of the known sporadic groups, and two new ones: the Thompson group and the Harada–Norton group. The character table of the monster, a 194-by-194 array, was calculated in 1979 by Fischer and Donald Livingstone using computer programs written by Michael Thorne. It was not clear in the 1970s whether the monster actually existed. Griess[3] constructed M as the automorphism group of the Griess algebra, a 196,884-dimensional commutative nonassociative algebra over the real numbers; he first announced his construction in Ann Arbor on January 14, 1980. In his 1982 paper, he referred to the monster as the Friendly Giant, but this name has not been generally adopted. John Conway[4] and Jacques Tits[5][6] subsequently simplified this construction.
+
+ Griess's construction showed that the monster exists. Thompson[7] showed that its uniqueness (as a simple group satisfying certain conditions coming from the classification of finite simple groups) would follow from the existence of a 196,883-dimensional faithful representation. A proof of the existence of such a representation was announced by Norton,[8] though he has never published the details. Griess, Meierfrankenfeld and Segev gave the first complete published proof of the uniqueness of the monster (more precisely, they showed that a group with the same centralizers of involutions as the monster is isomorphic to the monster).[9]
+
+ The monster was a culmination of the development of sporadic simple groups and can be built from any two of three subquotients: the Fischer group Fi24, the baby monster, and the Conway group Co1.
+
+ The Schur multiplier and the outer automorphism group of the monster are both trivial."""
+ ]
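The order quoted in the monster-group entry above can be sanity-checked directly, since Python integers are arbitrary-precision. A standalone check; the factorization is the one given in the text:

```python
# Prime factorization of |M| as quoted in the monster-group entry:
# 2^46 · 3^20 · 5^9 · 7^6 · 11^2 · 13^3 · 17 · 19 · 23 · 29 · 31 · 41 · 47 · 59 · 71
factors = {2: 46, 3: 20, 5: 9, 7: 6, 11: 2, 13: 3,
           17: 1, 19: 1, 23: 1, 29: 1, 31: 1, 41: 1, 47: 1, 59: 1, 71: 1}

order = 1
for prime, exponent in factors.items():
    order *= prime ** exponent

# Matches the decimal expansion quoted in the text, roughly 8 × 10^53.
assert order == 808017424794512875886459904961710757005754368000000000
print(len(str(order)))  # 54 digits
```

Because Python's `int` is unbounded, the exact 54-digit product is computed without any loss of precision.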
4_Gerando_Ideias_de_Pesquisa_com_GPT4/knowledge_from_urls.py ADDED
Binary file (39.6 kB).