---
language: en
license: mit
library_name: tensoraerospace
tags:
- aerospace
- reinforcement-learning
- control
- gymnasium
- tensorflow
- pytorch
---

# TensorAeroSpace — Aerospace RL & Control Organization

Realistic aerospace environments and modern RL/control algorithms for training flight control systems. Open‑source, MIT‑licensed.

- Website & Docs: https://tensoraerospace.readthedocs.io/
- GitHub: https://github.com/TensorAeroSpace/TensorAeroSpace
- PyPI: https://pypi.org/project/tensoraerospace/

## What we build

- Environments: F‑16, B747, X‑15, rockets, satellites (state‑space models, linear/linearized)
- Algorithms: IHDP, DQN, A3C/A2C‑NARX, PPO, SAC, DDPG, GAIL, PID, MPC
- Tooling: benchmarking, metrics, examples, docs, and lessons
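
To make the "state-space models + control algorithms" combination concrete, here is a minimal, self-contained sketch: a toy first-order linear plant tracked by a PI controller (the simplest member of the PID family listed above). All numbers (`a`, `b`, the gains) are made up for illustration; this is not the library's F-16 or B747 model.

```python
# Toy sketch: a linear state-space plant y[k+1] = a*y[k] + b*u[k]
# driven by a PI control law. Plant and gain values are hypothetical.

def simulate_pi(setpoint=1.0, steps=200, a=0.9, b=0.1, kp=1.0, ki=0.5):
    """Track `setpoint` with u[k] = kp*err + ki*sum(err), dt = 1."""
    y, integral = 0.0, 0.0
    history = []
    for _ in range(steps):
        err = setpoint - y            # tracking error
        integral += err               # accumulated error (integral term)
        u = kp * err + ki * integral  # PI control law
        y = a * y + b * u             # linear plant update
        history.append(y)
    return history

ys = simulate_pi()
print(f"output after {len(ys)} steps: {ys[-1]:.4f}")  # settles at the setpoint
```

The integral term is what removes the steady-state error: with a pure proportional controller the output of this plant would settle below the setpoint. The library's environments wrap far richer (multi-state, aerodynamic) models behind the same state-space idea.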

## Featured model(s)

- IHDP agent for F‑16 longitudinal alpha tracking — `TensorAeroSpace/ihdp-f16`

Quick use (IHDP):

```python
import os
import numpy as np
import gymnasium as gym
from tensoraerospace.agent.ihdp.model import IHDPAgent

agent = IHDPAgent.from_pretrained(
    "TensorAeroSpace/ihdp-f16", access_token=os.getenv("HF_TOKEN")
)

env = gym.make(
    "LinearLongitudinalF16-v0",
    number_time_steps=2002,
    initial_state=[[0],[0],[0]],
    reference_signal=np.zeros((1, 2002)),
    use_reward=False,
    state_space=["theta","alpha","q"],
    output_space=["theta","alpha","q"],
    control_space=["ele"],
    tracking_states=["alpha"],
)

obs, info = env.reset()
ref = env.unwrapped.reference_signal
for t in range(ref.shape[1] - 3):  # stay within the reference-signal horizon
    u = agent.predict(obs, ref, t)
    obs, r, terminated, truncated, info = env.step(np.array(u))
    if terminated or truncated:
        break
```
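
After a rollout like the one above, tracking quality is commonly summarized as a root-mean-square (RMS) error between the reference and the measured signal. A minimal sketch with made-up sample values (not real F-16 output); the helper `rms_error` is hypothetical, not part of the library's API:

```python
import math

# Hypothetical recorded signals from a short rollout (illustrative numbers).
reference = [0.0, 0.1, 0.2, 0.2, 0.2]
measured  = [0.0, 0.05, 0.15, 0.22, 0.19]

def rms_error(ref, meas):
    """Root-mean-square tracking error between two equal-length signals."""
    return math.sqrt(sum((r - m) ** 2 for r, m in zip(ref, meas)) / len(ref))

print(f"RMS tracking error: {rms_error(reference, measured):.4f}")  # → 0.0332
```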

## Install

```bash
pip install tensoraerospace
```

## Save & share your models

All major agents support saving locally and pushing to the Hub:

```python
from tensoraerospace.agent.sac.sac import SAC

agent = SAC(env)
agent.train(num_episodes=1)

# Save + push to Hub
folder = agent.save_pretrained("./checkpoints")
agent.push_to_hub("<org>/<model-name>", base_dir="./checkpoints", access_token="hf_...")
```

## Learn more

- Quick start & examples: ./example/
- English docs home: ./docs/en/index.md
- Russian docs home: ./docs/ru/index.md

## License

MIT — free for academia and industry.

---

Source: TensorAeroSpace team org page on the Hub — https://huggingface.co/TensorAeroSpace