---
license: mit
---
# Table of Contents

* [FunSearch](#FunSearch)
  * [FunSearch](#FunSearch.FunSearch)
    * [make\_request\_for\_prompt](#FunSearch.FunSearch.make_request_for_prompt)
    * [request\_samplers](#FunSearch.FunSearch.request_samplers)
    * [get\_next\_state](#FunSearch.FunSearch.get_next_state)
    * [set\_up\_flow\_state](#FunSearch.FunSearch.set_up_flow_state)
    * [save\_message\_to\_state](#FunSearch.FunSearch.save_message_to_state)
    * [rename\_key\_message\_in\_state](#FunSearch.FunSearch.rename_key_message_in_state)
    * [message\_in\_state](#FunSearch.FunSearch.message_in_state)
    * [get\_message\_from\_state](#FunSearch.FunSearch.get_message_from_state)
    * [pop\_message\_from\_state](#FunSearch.FunSearch.pop_message_from_state)
    * [merge\_message\_request\_state](#FunSearch.FunSearch.merge_message_request_state)
    * [register\_data\_to\_state](#FunSearch.FunSearch.register_data_to_state)
    * [call\_program\_db](#FunSearch.FunSearch.call_program_db)
    * [call\_evaluator](#FunSearch.FunSearch.call_evaluator)
    * [call\_sampler](#FunSearch.FunSearch.call_sampler)
    * [generate\_reply](#FunSearch.FunSearch.generate_reply)
    * [run](#FunSearch.FunSearch.run)
* [ProgramDBFlowModule](#ProgramDBFlowModule)
* [ProgramDBFlowModule.ProgramDBFlow](#ProgramDBFlowModule.ProgramDBFlow)
  * [ProgramDBFlow](#ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow)
    * [set\_up\_flow\_state](#ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.set_up_flow_state)
    * [get\_prompt](#ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.get_prompt)
    * [reset\_islands](#ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.reset_islands)
    * [register\_program](#ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.register_program)
    * [get\_best\_programs](#ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.get_best_programs)
    * [run](#ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.run)
* [SamplerFlowModule](#SamplerFlowModule)
* [SamplerFlowModule.SamplerFlow](#SamplerFlowModule.SamplerFlow)
  * [SamplerFlow](#SamplerFlowModule.SamplerFlow.SamplerFlow)
    * [run](#SamplerFlowModule.SamplerFlow.SamplerFlow.run)
* [EvaluatorFlowModule](#EvaluatorFlowModule)
* [EvaluatorFlowModule.EvaluatorFlow](#EvaluatorFlowModule.EvaluatorFlow)
  * [EvaluatorFlow](#EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow)
    * [load\_functions](#EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow.load_functions)
    * [run\_function\_with\_timeout](#EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow.run_function_with_timeout)
    * [evaluate\_program](#EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow.evaluate_program)
    * [analyse](#EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow.analyse)
    * [run](#EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow.run)

<a id="FunSearch"></a>

# FunSearch

<a id="FunSearch.FunSearch"></a>

## FunSearch Objects

```python
class FunSearch(CompositeFlow)
```

This class implements FunSearch (https://www.nature.com/articles/s41586-023-06924-6) and is heavily inspired by the original code (https://github.com/google-deepmind/funsearch). It's a Flow in charge of starting, stopping, and managing the FunSearch process by passing messages to the following subflows:

- ProgramDBFlow: which is in charge of storing and retrieving programs.
- SamplerFlow: which is in charge of sampling programs.
- EvaluatorFlow: which is in charge of evaluating programs.

*Configuration Parameters*:

- `name` (str): The name of the flow. Default: "FunSearchFlow".
- `description` (str): The description of the flow. Default: "A flow implementing FunSearch"
- `subflows_config` (Dict[str,Any]): A dictionary of subflows configurations. Default:
    - `ProgramDBFlow`: By default, it uses the `ProgramDBFlow` class with its default parameters.
    - `SamplerFlow`: By default, it uses the `SamplerFlow` class with its default parameters.
    - `EvaluatorFlow`: By default, it uses the `EvaluatorFlow` class with its default parameters.

**Input Interface**:

- `from` (str): The flow the message is coming from. It can be one of the following: "FunSearch", "SamplerFlow", "EvaluatorFlow", "ProgramDBFlow".
- `operation` (str): The operation to perform. It can be one of the following: "start", "stop", "get_prompt", "get_best_programs_per_island", "register_program".
- `content` (Dict[str,Any]): The content associated to an operation. Here is the expected content for each operation:
    - "start":
        - `num_samplers` (int): The number of samplers to start up. Note that it's still restricted by the number of workers available. Default: 1.
    - "stop":
        - No content required; pass an empty dictionary, `None`, or omit the content entirely.
    - "get_prompt":
        - No content required; pass an empty dictionary, `None`, or omit the content entirely.
    - "get_best_programs_per_island":
        - No content required; pass an empty dictionary, `None`, or omit the content entirely.
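For illustration, the fields above can be assembled into a plain dictionary. This is a sketch only: the helper name `make_funsearch_payload` is hypothetical, and the actual message construction API depends on your aiflows version.

```python
def make_funsearch_payload(operation: str, num_samplers: int = 1) -> dict:
    """Build a data payload matching FunSearch's input interface.

    Hypothetical helper; the field names follow the interface described above.
    """
    # Only the "start" operation carries content; the other operations
    # accept an empty dictionary (or None).
    content = {"num_samplers": num_samplers} if operation == "start" else {}
    return {
        "from": "FunSearch",
        "operation": operation,
        "content": content,
    }

payload = make_funsearch_payload("start", num_samplers=2)
```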

**Output Interface**:

- `retrieved` (Dict[str,Any]): The retrieved data.

**Citation**:

@Article{FunSearch2023,
    author  = {Romera-Paredes, Bernardino and Barekatain, Mohammadamin and Novikov, Alexander and Balog, Matej and Kumar, M. Pawan and Dupont, Emilien and Ruiz, Francisco J. R. and Ellenberg, Jordan and Wang, Pengming and Fawzi, Omar and Kohli, Pushmeet and Fawzi, Alhussein},
    journal = {Nature},
    title   = {Mathematical discoveries from program search with large language models},
    year    = {2023},
    doi     = {10.1038/s41586-023-06924-6}
}

<a id="FunSearch.FunSearch.make_request_for_prompt"></a>

#### make\_request\_for\_prompt

```python
def make_request_for_prompt()
```

This method makes a request for a prompt. It sends a message to itself with the operation "get_prompt" which will trigger the flow to call the `ProgramDBFlow` to get a prompt.

<a id="FunSearch.FunSearch.request_samplers"></a>

#### request\_samplers

```python
def request_samplers(input_message: FlowMessage)
```

This method requests samplers. It sends a message to itself with the operation "get_prompt" which will trigger the flow to call the `ProgramDBFlow` to get a prompt.

**Arguments**:

- `input_message` (`FlowMessage`): The input message that triggered the request for samplers.

<a id="FunSearch.FunSearch.get_next_state"></a>

#### get\_next\_state

```python
def get_next_state(input_message: FlowMessage)
```

This method determines the next state of the flow based on the current state and the input message received.

**Arguments**:

- `input_message` (`FlowMessage`): The input message that triggered the request for the next state.

**Returns**:

`str`: The next state of the flow.

<a id="FunSearch.FunSearch.set_up_flow_state"></a>

#### set\_up\_flow\_state

```python
def set_up_flow_state()
```

This method sets up the state of the flow. It's called at the beginning of the flow.

<a id="FunSearch.FunSearch.save_message_to_state"></a>

#### save\_message\_to\_state

```python
def save_message_to_state(msg_id: str, message: FlowMessage)
```

This method saves a message to the state of the flow. It's used to keep track of state on a per-message basis (i.e., the flow state associated with a particular message id).

**Arguments**:

- `msg_id` (`str`): The id of the message to save.
- `message` (`FlowMessage`): The message to save.

<a id="FunSearch.FunSearch.rename_key_message_in_state"></a>

#### rename\_key\_message\_in\_state

```python
def rename_key_message_in_state(old_key: str, new_key: str)
```

This method renames a key (i.e., a message id) in the "msg_requests" dictionary of the flow state.

**Arguments**:

- `old_key` (`str`): The old key to rename.
- `new_key` (`str`): The new key to rename to.

<a id="FunSearch.FunSearch.message_in_state"></a>

#### message\_in\_state

```python
def message_in_state(msg_id: str) -> bool
```

This method checks if a message is in the state of the flow (in "msg_requests" dictionary). It returns True if the message is in the state, otherwise it returns False.

**Arguments**:

- `msg_id` (`str`): The id of the message to check if it's in the state.

**Returns**:

`bool`: True if the message is in the state, otherwise False.

<a id="FunSearch.FunSearch.get_message_from_state"></a>

#### get\_message\_from\_state

```python
def get_message_from_state(msg_id: str) -> Dict[str, Any]
```

This method returns the state associated with a message id in the state of the flow (in "msg_requests" dictionary).

**Arguments**:

- `msg_id` (`str`): The id of the message to get the state from.

**Returns**:

`Dict[str,Any]`: The state associated with the message id.

<a id="FunSearch.FunSearch.pop_message_from_state"></a>

#### pop\_message\_from\_state

```python
def pop_message_from_state(msg_id: str) -> Dict[str, Any]
```

This method pops a message from the state of the flow (in the "msg_requests" dictionary): it returns the state associated with a message and removes it from the state.

**Arguments**:

- `msg_id` (`str`): The id of the message to pop from the state.

**Returns**:

`Dict[str,Any]`: The state associated with the message id.

<a id="FunSearch.FunSearch.merge_message_request_state"></a>

#### merge\_message\_request\_state

```python
def merge_message_request_state(id: str, new_states: Dict[str, Any])
```

This method merges new states into a message's entry in the state of the flow (in the "msg_requests" dictionary).

**Arguments**:

- `id` (`str`): The id of the message to merge new states to.
- `new_states` (`Dict[str,Any]`): The new states to merge to the message.

<a id="FunSearch.FunSearch.register_data_to_state"></a>

#### register\_data\_to\_state

```python
def register_data_to_state(input_message: FlowMessage)
```

This method registers the input message data to the flow state. It's called every time a new input message is received.

**Arguments**:

- `input_message` (`FlowMessage`): The input message

<a id="FunSearch.FunSearch.call_program_db"></a>

#### call\_program\_db

```python
def call_program_db(input_message)
```

This method calls the ProgramDBFlow. It sends a message to the ProgramDBFlow with the data of the input message.

**Arguments**:

- `input_message` (`FlowMessage`): The input message to send to the ProgramDBFlow.

<a id="FunSearch.FunSearch.call_evaluator"></a>

#### call\_evaluator

```python
def call_evaluator(input_message)
```

This method calls the EvaluatorFlow. It sends a message to the EvaluatorFlow with the data of the input message.

**Arguments**:

- `input_message` (`FlowMessage`): The input message to send to the EvaluatorFlow.

<a id="FunSearch.FunSearch.call_sampler"></a>

#### call\_sampler

```python
def call_sampler(input_message)
```

This method calls the SamplerFlow. It sends a message to the SamplerFlow with the data of the input message.

**Arguments**:

- `input_message` (`FlowMessage`): The input message to send to the SamplerFlow.

<a id="FunSearch.FunSearch.generate_reply"></a>

#### generate\_reply

```python
def generate_reply(input_message: FlowMessage)
```

This method generates a reply to a message sent to user. It packages the output message and sends it.

**Arguments**:

- `input_message` (`FlowMessage`): The input message to generate a reply to.

<a id="FunSearch.FunSearch.run"></a>

#### run

```python
def run(input_message: FlowMessage)
```

This method runs the flow. It's the main method of the flow. It's called when the flow is executed.

<a id="ProgramDBFlowModule"></a>

# ProgramDBFlowModule

<a id="ProgramDBFlowModule.ProgramDBFlow"></a>

# ProgramDBFlowModule.ProgramDBFlow

<a id="ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow"></a>

## ProgramDBFlow Objects

```python
class ProgramDBFlow(AtomicFlow)
```

This class implements a ProgramDBFlow: a flow that stores programs and their scores in a database. It can also query the database for the best programs or generate a prompt containing stored programs in order to evolve them with a SamplerFlow. This code is an implementation of FunSearch (https://www.nature.com/articles/s41586-023-06924-6) and is heavily inspired by the original code (https://github.com/google-deepmind/funsearch).

**Configuration Parameters**:

- `name` (str): The name of the flow. Default: "ProgramDBFlow"
- `description` (str): A description of the flow. This description is used to generate the help message of the flow. Default: " A flow that saves programs in a database of islands"
- `artifact_to_evolve_name` (str): The name of the artifact/program to evolve. Default: "solve_function"
- `evaluate_function` (str): The function used to evaluate the program. No Default value. This MUST be passed as a parameter.
- `evaluate_file_full_content` (str): The full content of the file containing the evaluation function. No Default value. This MUST be passed as a parameter.
- `num_islands` (int): The number of islands to use. Default: 3
- `reset_period` (int): The period in seconds to reset the islands. Default: 3600
- `artifacts_per_prompt` (int): The number of previous artifacts/programs to include in a prompt. Default: 2
- `temperature` (float): The temperature of the island. Default: 0.1
- `temperature_period` (int): The period in seconds to change the temperature. Default: 30000
- `sample_with_replacement` (bool): Whether to sample with replacement. Default: False
- `portion_of_islands_to_reset` (float): The portion of islands to reset. Default: 0.5
- `template` (dict): The template to use for a program. Default: {"preface": ""}
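The island `temperature` controls how sharply program selection favors higher scores. As a rough illustration of the idea, here is a sketch of temperature-weighted (softmax) selection; it is not the FunSearch island implementation, just the general technique:

```python
import math
import random

def softmax_sample(scores, temperature=0.1, rng=random):
    """Pick an index with probability proportional to exp(score / temperature).

    Low temperature concentrates probability on the highest score; high
    temperature approaches uniform sampling. Sketch of the idea only.
    """
    m = max(scores)  # subtract the max for numerical stability
    weights = [math.exp((s - m) / temperature) for s in scores]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r < acc:
            return i
    return len(scores) - 1  # fallback for floating-point edge cases
```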

**Input Interface**:

- `operation` (str): The operation to perform. It can be one of the following: ["register_program","get_prompt","get_best_programs_per_island"]

**Output Interface**:

- `retrieved` (Any): The retrieved data. It can be one of the following:
    - If the operation is "get_prompt", it can be a dictionary with the following keys
        - `code` (str): The code of the prompt
        - `version_generated` (int): The version of the prompt generated
        - `island_id` (int): The id of the island that generated the prompt
        - `header` (str): The header of the prompt
    - If the operation is "register_program", it can be a string with the message "Program registered" or "Program failed to register"
    - If the operation is "get_best_programs_per_island", it can be a dictionary with the following keys:
        - `best_island_programs` (List[Dict[str,Any]]): A list of dictionaries with the following keys:
            - `rank` (int): The rank of the program (1 is the best)
            - `score` (float): The score of the program
            - `program` (str): The program
            - `island_id` (int): The id of the island that generated the program
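A hypothetical example of the retrieved data for "get_best_programs_per_island" (all values are illustrative, not real output):

```python
# Illustrative payload following the keys documented above.
retrieved = {
    "best_island_programs": [
        {"rank": 2, "score": 0.87, "program": "def solve_function(): ...", "island_id": 1},
        {"rank": 1, "score": 0.92, "program": "def solve_function(): ...", "island_id": 0},
    ]
}

# Rank 1 is the best program across islands.
best = min(retrieved["best_island_programs"], key=lambda p: p["rank"])
```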

**Citation**:

@Article{FunSearch2023,
    author  = {Romera-Paredes, Bernardino and Barekatain, Mohammadamin and Novikov, Alexander and Balog, Matej and Kumar, M. Pawan and Dupont, Emilien and Ruiz, Francisco J. R. and Ellenberg, Jordan and Wang, Pengming and Fawzi, Omar and Kohli, Pushmeet and Fawzi, Alhussein},
    journal = {Nature},
    title   = {Mathematical discoveries from program search with large language models},
    year    = {2023},
    doi     = {10.1038/s41586-023-06924-6}
}

<a id="ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.set_up_flow_state"></a>

#### set\_up\_flow\_state

```python
def set_up_flow_state()
```

This method sets up the state of the flow and clears the previous messages.

<a id="ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.get_prompt"></a>

#### get\_prompt

```python
def get_prompt()
```

This method gets a prompt from an island. It returns the code, the version generated and the island id.

<a id="ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.reset_islands"></a>

#### reset\_islands

```python
def reset_islands()
```

This method resets the islands. It resets the worst islands and copies the best programs to the worst islands.

<a id="ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.register_program"></a>

#### register\_program

```python
def register_program(program: AbstractArtifact, island_id: int,
                     scores_per_test: ScoresPerTest)
```

This method registers a program in an island. It also updates the best program if needed.

**Arguments**:

- `program` (`AbstractArtifact`): The program to register
- `island_id` (`int`): The id of the island to register the program
- `scores_per_test` (`ScoresPerTest`): The scores per test of the program

<a id="ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.get_best_programs"></a>

#### get\_best\_programs

```python
def get_best_programs() -> List[Dict[str, Any]]
```

This method returns the best programs per island.

<a id="ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.run"></a>

#### run

```python
def run(input_message: FlowMessage)
```

This method runs the flow. It performs the operation requested in the input message.

<a id="SamplerFlowModule"></a>

# SamplerFlowModule

<a id="SamplerFlowModule.SamplerFlow"></a>

# SamplerFlowModule.SamplerFlow

<a id="SamplerFlowModule.SamplerFlow.SamplerFlow"></a>

## SamplerFlow Objects

```python
class SamplerFlow(ChatAtomicFlow)
```

This class implements a SamplerFlow: a flow that queries an LLM to generate a response to a given input. This class is a child of ChatAtomicFlow and expects the same parameters as ChatAtomicFlow (see https://huggingface.co/aiflows/ChatFlowModule).

**Configuration Parameters**:
- `name` (str): The name of the flow. Default: "SamplerFlowModule"
- `description` (str): A description of the flow. Default: "A flow that queries an LLM model to generate prompts for the Sampler flow"
- `backend` (Dict[str,Any]): The backend of the flow. Used to call models via an API.
See litellm's supported models and APIs here: https://docs.litellm.ai/docs/providers.
The default parameters of the backend are all defined at aiflows.backends.llm_lite.LiteLLMBackend
(also see the default parameters of litellm's completion parameters: https://docs.litellm.ai/docs/completion/input#input-params-1),
except for the following parameters, which are overridden by ChatAtomicFlow in ChatAtomicFlow.yaml:
    - `model_name` (Union[Dict[str,str],str]): The name of the model to use. Default: "gpt-4".
    The name needs to follow litellm's model naming (https://docs.litellm.ai/docs/providers).
    When using multiple API providers, model_name can be a dictionary of the form
    {"provider_name": "model_name"}, e.g. {"openai": "gpt-3.5-turbo", "azure": "azure/gpt-3.5-turbo"}.
    - `n` (int): The number of answers to generate. Default: 1
    - `max_tokens` (int): The maximum number of tokens to generate. Default: 2000
    - `temperature` (float): The temperature of the generation. Default: 1.0
    - `top_p` (float): An alternative to sampling with temperature: the model considers only the
    tokens within the top_p cumulative probability mass. Default: 0.4
    - `frequency_penalty` (float): Penalizes new tokens based on their frequency in the text so far. Default: 0.0
    - `presence_penalty` (float): Penalizes new tokens based on their presence in the text so far. Default: 0.0
    - `stream` (bool): Whether to stream the response or not. Default: False
- `system_message_prompt_template` (Dict[str,Any]): The template of the system message. It is used to generate the system message. Default: See SamplerFlow.yaml for default.
- `init_human_message_prompt_template` (Dict[str,Any]): The prompt template of the human/user message used to initialize the conversation
(first time in). It is used to generate the human message. It's passed as the user message to the LLM. Default: See SamplerFlow.yaml for default.
- `human_message_prompt_template` (Dict[str,Any]): The prompt template of the human/user message (message used everytime the except the first time in).
It's passed as the user message to the LLM. Default: See SamplerFlow.yaml for default.
- `previous_messages` (Dict[str,Any]): Defines which previous messages to include in the input of the LLM. Note that if `first_k` and `last_k` are both None,
all the messages of the flow's history are added to the input of the LLM. Default:
    - `first_k` (int): If defined, adds the first_k earliest messages of the flow's chat history to the input of the LLM. Default: 1
    - `last_k` (int): If defined, adds the last_k latest messages of the flow's chat history to the input of the LLM. Default: 1
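The `first_k`/`last_k` selection can be pictured as simple list slicing over the chat history. A sketch under that assumption (not the aiflows implementation, which operates on message objects):

```python
def select_previous_messages(history, first_k=None, last_k=None):
    """Select which prior messages to feed the LLM, mimicking the
    first_k/last_k behavior described above. Hypothetical helper."""
    # If neither bound is set, include the entire history.
    if first_k is None and last_k is None:
        return list(history)
    selected = []
    if first_k:
        selected.extend(history[:first_k])   # earliest first_k messages
    if last_k:
        selected.extend(history[-last_k:])   # latest last_k messages
    return selected

selected = select_previous_messages(["m1", "m2", "m3", "m4"], first_k=1, last_k=1)
```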

*Input Interface Initialized (expected input the first time in the flow)*:

- `header` (str): A header message to include in the prompt.
- `code` (str): The "example" samples from which to generate a new sample.

*Input Interface (expected input after the first time in the flow)*:

- `header` (str): A header message to include in the prompt.
- `code` (str): The "example" samples from which to generate a new sample.

*Output Interface*:

- `api_output` (str): The output of the API call. It is the response of the LLM to the input.
- `from` (str): The name of the flow that generated the output. It's always "SamplerFlow"


**Citation**:

@Article{FunSearch2023,
    author  = {Romera-Paredes, Bernardino and Barekatain, Mohammadamin and Novikov, Alexander and Balog, Matej and Kumar, M. Pawan and Dupont, Emilien and Ruiz, Francisco J. R. and Ellenberg, Jordan and Wang, Pengming and Fawzi, Omar and Kohli, Pushmeet and Fawzi, Alhussein},
    journal = {Nature},
    title   = {Mathematical discoveries from program search with large language models},
    year    = {2023},
    doi     = {10.1038/s41586-023-06924-6}
}

<a id="SamplerFlowModule.SamplerFlow.SamplerFlow.run"></a>

#### run

```python
def run(input_message)
```

This method calls the backend of the flow (i.e., queries the LLM) with the previous messages of the flow.

**Returns**:

`Any`: The output of the backend.

<a id="EvaluatorFlowModule"></a>

# EvaluatorFlowModule

<a id="EvaluatorFlowModule.EvaluatorFlow"></a>

# EvaluatorFlowModule.EvaluatorFlow


<a id="EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow"></a>

## EvaluatorFlow Objects

```python
class EvaluatorFlow(AtomicFlow)
```

This class implements an EvaluatorFlow: a flow that evaluates a program (Python code) using a given evaluator function. This code is an implementation of FunSearch (https://www.nature.com/articles/s41586-023-06924-6) and is heavily inspired by the original code (https://github.com/google-deepmind/funsearch).

**Configuration Parameters**:

- `name` (str): The name of the flow. Default: "EvaluatorFlow"
- `description` (str): A description of the flow. This description is used to generate the help message of the flow. Default: "A flow that evaluates code on tests"
- `py_file` (str): The python code containing the evaluation function. No default value. This MUST be passed as a parameter.
- `function_to_run_name` (str): The name of the function to run (the evaluation function) in the evaluator file.  No default value. This MUST be passed as a parameter.
- `test_inputs` (Dict[str,Any]): A dictionary of test inputs to evaluate the program. Default: {"test1": None, "test2": None}
- `timeout_seconds` (int): The maximum number of seconds to run the evaluation function before returning an error. Default: 10
- `run_error_score` (int): The score to return if the evaluation function fails to run. Default: -100
- `use_test_input_as_key` (bool): Whether to use the test input parameters as the key in the output dictionary. Default: False

**Input Interface**:

- `artifact` (str): The program/artifact to evaluate.

**Output Interface**:

- `scores_per_test` (Dict[str, Dict[str, Any]]): A dictionary of scores per test input.
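A hypothetical example of `scores_per_test` for two test inputs, assuming the default `run_error_score` of -100 marks a failed run (the inner key name `score` is an assumption for illustration):

```python
RUN_ERROR_SCORE = -100  # matches the documented default run_error_score

# Illustrative output shape: one entry per test input.
scores_per_test = {
    "test1": {"score": 12.0},
    "test2": {"score": RUN_ERROR_SCORE},  # evaluation failed or timed out
}

failed_tests = [name for name, result in scores_per_test.items()
                if result["score"] == RUN_ERROR_SCORE]
```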

**Citation**:

@Article{FunSearch2023,
    author  = {Romera-Paredes, Bernardino and Barekatain, Mohammadamin and Novikov, Alexander and Balog, Matej and Kumar, M. Pawan and Dupont, Emilien and Ruiz, Francisco J. R. and Ellenberg, Jordan and Wang, Pengming and Fawzi, Omar and Kohli, Pushmeet and Fawzi, Alhussein},
    journal = {Nature},
    title   = {Mathematical discoveries from program search with large language models},
    year    = {2023},
    doi     = {10.1038/s41586-023-06924-6}
}

<a id="EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow.load_functions"></a>

#### load\_functions

```python
def load_functions()
```

Load the functions from the evaluator Python file using AST parsing.
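A minimal sketch of AST-based function discovery, assuming `load_functions` walks the parsed file for function definitions (the real method also binds the callables):

```python
import ast

def list_function_names(py_source: str) -> list:
    """Return the names of top-level function definitions in the source.

    Hypothetical helper illustrating only the AST-parsing step.
    """
    tree = ast.parse(py_source)
    return [node.name for node in tree.body
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))]

names = list_function_names("def evaluate(program):\n    return 1.0\n")
```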

<a id="EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow.run_function_with_timeout"></a>

#### run\_function\_with\_timeout

```python
def run_function_with_timeout(program: str, **kwargs)
```

Run the evaluation function with a timeout

**Arguments**:

- `program` (`str`): The program to evaluate
- `kwargs` (`Dict[str, Any]`): The keyword arguments to pass to the evaluation function

**Returns**:

`Tuple[bool, Any]`: A tuple (bool, result) where bool is True if the function ran successfully and result is the output of the function
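The `(success, result)` contract can be sketched with a thread-based timeout. This is an illustration only: the actual EvaluatorFlow executes program source text, for which a separate process is the safer isolation choice.

```python
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FuturesTimeout

def run_with_timeout(fn, timeout_seconds, *args, **kwargs):
    """Return (True, result) on success, or (False, error_message) on
    failure or timeout, mirroring the Tuple[bool, Any] contract above.
    Hypothetical helper, not the EvaluatorFlow implementation."""
    pool = ThreadPoolExecutor(max_workers=1)
    try:
        future = pool.submit(fn, *args, **kwargs)
        try:
            return True, future.result(timeout=timeout_seconds)
        except FuturesTimeout:
            return False, f"timed out after {timeout_seconds}s"
        except Exception as exc:
            return False, repr(exc)
    finally:
        # Do not block on a possibly hung worker thread.
        pool.shutdown(wait=False)
```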

<a id="EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow.evaluate_program"></a>

#### evaluate\_program

```python
def evaluate_program(program: str, **kwargs)
```

Evaluate the program using the evaluation function

**Arguments**:

- `program` (`str`): The program to evaluate
- `kwargs` (`Dict[str, Any]`): The keyword arguments to pass to the evaluation function

**Returns**:

`Tuple[bool, Any]`: A tuple (bool, result) where bool is True if the function ran successfully and result is the output of the function

<a id="EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow.analyse"></a>

#### analyse

```python
def analyse(program: str)
```

Analyse the program on the test inputs

**Arguments**:

- `program` (`str`): The program to evaluate

**Returns**:

`Dict[str, Dict[str, Any]]`: A dictionary of scores per test input

<a id="EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow.run"></a>

#### run

```python
def run(input_message: FlowMessage)
```

This method runs the flow. It's the main method of the flow.

**Arguments**:

- `input_message` (`FlowMessage`): The input message