---
language: en
license: mit
library_name: tensoraerospace
tags:
  - aerospace
  - reinforcement-learning
  - control
  - gymnasium
  - tensorflow
  - pytorch
---
# TensorAeroSpace: Aerospace RL & Control Organization

Realistic aerospace environments and modern RL/control algorithms for training flight control systems. Open-source, MIT-licensed.

- Website & Docs: https://tensoraerospace.readthedocs.io/
- GitHub: https://github.com/TensorAeroSpace/TensorAeroSpace
- PyPI: https://pypi.org/project/tensoraerospace/
## What we build

- Environments: F-16, B747, X-15, rockets, satellites (linear/linearized state-space models)
- Algorithms: IHDP, DQN, A3C/A2C-NARX, PPO, SAC, DDPG, GAIL, PID, MPC
- Tooling: benchmarking, metrics, examples, docs, and lessons
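The environments listed above are built on discrete-time linear state-space models. As an illustration of that idea only, here is a minimal NumPy sketch of stepping such a model; the matrices below are made-up placeholder numbers, not real F-16 data, and the state ordering `[theta, alpha, q]` mirrors the quick-use example further down:

```python
import numpy as np

# Hypothetical discrete-time longitudinal model, x = [theta, alpha, q]:
#   x_{k+1} = A x_k + B u_k,   y_k = C x_k
# (illustrative coefficients only, NOT an actual TensorAeroSpace model)
A = np.array([[1.0, 0.0, 0.01],
              [0.0, 0.99, 0.01],
              [0.0, -0.5, 0.98]])
B = np.array([[0.0], [0.001], [-0.02]])
C = np.eye(3)

x = np.zeros((3, 1))
u = np.array([[0.1]])  # constant elevator deflection as a stand-in control
for _ in range(100):
    x = A @ x + B @ u   # propagate the linear dynamics one step
y = C @ x               # y holds [theta, alpha, q] after 100 steps
```

A library environment wraps exactly this kind of recursion behind the Gymnasium `reset`/`step` interface.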
## Featured models

- IHDP agent for F-16 longitudinal alpha tracking: TensorAeroSpace/ihdp-f16
Quick use (IHDP):

```python
import os

import gymnasium as gym
import numpy as np

from tensoraerospace.agent.ihdp.model import IHDPAgent

# Load the pretrained IHDP agent from the Hugging Face Hub
agent = IHDPAgent.from_pretrained(
    "TensorAeroSpace/ihdp-f16", access_token=os.getenv("HF_TOKEN")
)

# Linearized longitudinal F-16 environment, tracking angle of attack (alpha)
env = gym.make(
    "LinearLongitudinalF16-v0",
    number_time_steps=2002,
    initial_state=[[0], [0], [0]],
    reference_signal=np.zeros((1, 2002)),
    use_reward=False,
    state_space=["theta", "alpha", "q"],
    output_space=["theta", "alpha", "q"],
    control_space=["ele"],
    tracking_states=["alpha"],
)

obs, info = env.reset()
ref = env.unwrapped.reference_signal
for t in range(ref.shape[1] - 3):
    u = agent.predict(obs, ref, t)
    obs, r, terminated, truncated, info = env.step(np.array(u))
    if terminated or truncated:
        break
```
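After a tracking rollout like the one above, a common way to judge controller quality is the RMS error between the tracked state and its reference. A standalone sketch with synthetic arrays standing in for the recorded alpha history and reference signal (the names `alpha_hist` and `reference` are illustrative, not environment attributes):

```python
import numpy as np

# Synthetic stand-ins: 2000-step alpha history with a constant 0.05 rad offset
# from the reference it was supposed to track
reference = np.sin(np.linspace(0, 4 * np.pi, 2000))
alpha_hist = reference + 0.05

# Root-mean-square tracking error over the rollout
rms_error = np.sqrt(np.mean((alpha_hist - reference) ** 2))
print(rms_error)  # 0.05 for a constant 0.05 offset
```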
## Install

```bash
pip install tensoraerospace
```
## Save & share your models

All major agents support saving locally and pushing to the Hub:

```python
from tensoraerospace.agent.sac.sac import SAC

agent = SAC(env)
agent.train(num_episodes=1)

# Save locally, then push the checkpoint folder to the Hub
folder = agent.save_pretrained("./checkpoints")
agent.push_to_hub("<org>/<model-name>", base_dir="./checkpoints", access_token="hf_...")
```
## Learn more

- Quick start & examples: ./example/
- English docs home: ./docs/en/index.md
- Russian docs home: ./docs/ru/index.md

## License

MIT. Free for academia and industry.

---

Source: TensorAeroSpace team org page on the Hub: https://huggingface.co/TensorAeroSpace