code
stringlengths
114
1.05M
path
stringlengths
3
312
quality_prob
float64
0.5
0.99
learning_prob
float64
0.2
1
filename
stringlengths
3
168
kind
stringclasses
1 value
# RTCBot Basics This tutorial will teach you the fundamentals of using RTCBot for your projects. RTCBot is a Python 3 [asyncio](https://docs.python.org/3/library/asyncio.html) library, meaning that it is meant to run in an event loop. ## Asyncio Basics The most basic asyncio program is the following: ```python import asyncio # Run the event loop asyncio.get_event_loop().run_forever() ``` You can exit the program with `CTRL+C`. Right now, the program does nothing, just runs in a loop. Let's fix that: ```python import asyncio async def myfunction(): while True: await asyncio.sleep(1) print("1 second passed") asyncio.ensure_future(myfunction()) asyncio.get_event_loop().run_forever() ``` This will print "1 second passed" each second. Notice that `myfunction` is run in an infinite loop. The utility of an event loop is that you can run many functions _concurrently_, which behaves as if your program was running with many threads at once: ```python import asyncio async def myfunction1(): while True: await asyncio.sleep(1) print("1 second passed") async def myfunction2(): while True: await asyncio.sleep(2) print("2 seconds passed") asyncio.ensure_future(myfunction1()) asyncio.ensure_future(myfunction2()) asyncio.get_event_loop().run_forever() ``` The key here is the `await` keyword, used in an `async` function (called a coroutine). The `await asyncio.sleep(1)` command pauses execution of the function until one second has passed, allowing the event loop to spend time running the other function. This means that the event loop is a good way to program where multiple things need to happen in response to events, such as incoming data, or timers, which is precisely the situation in a robot. RTCBot is a set of tools allowing you to easily use an asyncio event loop to pass information between parts of your robot. To learn more about asyncio, it is recommended that you look at a more in-depth tutorial [here](https://www.blog.pythonlibrary.org/2016/07/26/python-3-an-intro-to-asyncio/). ## View a Video Feed To introduce you to the basic concepts of RTCBot, we will start with the simplest task, viewing a webcam video feed: ```python import asyncio from rtcbot import CVCamera, CVDisplay camera = CVCamera() display = CVDisplay() @camera.subscribe def onFrame(frame): print("got video frame") display.put_nowait(frame) try: asyncio.get_event_loop().run_forever() finally: camera.close() display.close() ``` The camera might take several seconds to initialize, but after it finishes, a window with a live feed of your webcam will pop up. The `CVCamera` and `CVDisplay` objects use OpenCV in the background to process frames. The `camera.subscribe` function allows you to subscribe to video frames incoming from the webcam, firing the `onFrame` function 30 times a second with [numpy](https://en.wikipedia.org/wiki/NumPy) arrays containing BGR images captured by the camera. The `put_nowait` function is then used to send the frame to the window where the image is displayed. These two functions are part of RTCBot's core abilities. Every producer of data (like `CVCamera`) has a `subscribe()` method, and every consumer of data (like `CVDisplay`) has a `put_nowait` method to insert data. ```eval_rst .. note:: If you are using the official Raspberry Pi camera, you should replace CVCamera with PiCamera. .. warning:: CVDisplay does not work on Mac due to issues with threading in the display toolkit - if using a Mac, you'll have to wait for the video streaming tutorial to view the video feed! ``` ## Subscriptions Using a callback function with the `subscribe` method is not the only way to get data out of a data-producing object. The `subscribe` method is also able to create what is called a `subscription`. To understand subscriptions, let's take a quick detour to python Queues: ```python import asyncio # An asyncio Queue has put_nowait and get coroutine q = asyncio.Queue() # Sends data each second async def sender(): while True: await asyncio.sleep(1) q.put_nowait("hi!") # Receives the data async def receiver(): while True: data = await q.get() print("Received:", data) asyncio.ensure_future(sender()) asyncio.ensure_future(receiver()) asyncio.get_event_loop().run_forever() ``` Here, the `sender` function sends data, and the `receiver` awaits for incoming data, and prints it. Notice how the queue had a `get` coroutine from which data could be awaited. We can use the `subscribe` method in a similar way to the above code snippet. When run without an argument, `subscribe` actually returns a subscription, which `CVCamera` automatically keeps updated with new video frames as they come in: ```python import asyncio from rtcbot import CVCamera, CVDisplay camera = CVCamera() display = CVDisplay() frameSubscription = camera.subscribe() async def receiver(): while True: frame = await frameSubscription.get() display.put_nowait(frame) asyncio.ensure_future(receiver()) try: asyncio.get_event_loop().run_forever() finally: camera.close() display.close() ``` This program displays a live video feed, just like the previous version. The `receiver` function is just running `put_nowait` on each frame received from the subscription. This can be done automatically using the `putSubscription` method, making this a shorthand for the above program: ```python import asyncio from rtcbot import CVCamera, CVDisplay camera = CVCamera() display = CVDisplay() frameSubscription = camera.subscribe() display.putSubscription(frameSubscription) try: asyncio.get_event_loop().run_forever() finally: camera.close() display.close() ``` Finally, the `camera` object has a `get` coroutine, meaning that it can be passed into `putSubscription` directly: ```python import asyncio from rtcbot import CVCamera, CVDisplay camera = CVCamera() display = CVDisplay() display.putSubscription(camera) try: asyncio.get_event_loop().run_forever() finally: camera.close() display.close() ``` ## Generalizing to Audio The above code examples all created a video stream, and displayed it in a window. RTCBot uses _exactly the same_ API for **everything**. This means that we can trivially add audio to the previous example: ```python import asyncio from rtcbot import CVCamera, CVDisplay, Microphone, Speaker camera = CVCamera() display = CVDisplay() microphone = Microphone() speaker = Speaker() display.putSubscription(camera) speaker.putSubscription(microphone) try: asyncio.get_event_loop().run_forever() finally: camera.close() display.close() microphone.close() speaker.close() ``` Here, a video stream should be displayed in a window, and all microphone input should be playing in your headphones (or speakers). ## Summary This tutorial introduced the basics of RTCBot, with a focus on the fundamentals: 1. Every data producer has the `subscribe` method and `get` coroutine 2. Every data consumer has a `putSubscription` method and a `put_nowait` method 3. `putSubscription` takes any object with a `get` coroutine 4. Subscribe can also be used for direct callbacks, or with custom subscriptions. ## Extra Notes Each producer can have multiple subscriptions active at the same time. This code shows two different windows with the same video feed: ```python import asyncio from rtcbot import CVCamera, CVDisplay camera = CVCamera() display = CVDisplay() display2 = CVDisplay() display.putSubscription(camera) subscription2 = camera.subscribe() display2.putSubscription(subscription2) try: asyncio.get_event_loop().run_forever() finally: camera.close() display.close() display2.close() ``` The `get` coroutine of camera behaves as a single default subscription, so it can only be used by one display (it returns each frame once). The `subscribe` function allows creating an arbitrary number of independent subscriptions/callbacks.
/rtcbot-0.2.5.tar.gz/rtcbot-0.2.5/examples/basics/README.md
0.520253
0.946547
README.md
pypi
# Offloading Computation Most hobbyists can't afford to do complex computations on their robot, because the little single-board computers (SBCs) available for a reasonable price do not have sufficient processing power for advanced functionality. While this is slowly changing with things like [Nvidia's Jetson Nano](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-nano/), there is still a large gap in power between SBCs and an average desktop. The ideal situation would be if you could strap an entire desktop to your robot. With RTCBot, we can do the next best thing: we can stream the robot's inputs to a desktop, which can then perform computation, and send back commands. In this tutorial, we will go back to a single file for both server and robot for simplicitly. We set up a connection to the robot from Python, allowing you to control the robot with an xbox controller without a browser. ```eval_rst .. note:: While with a Raspberry Pi there might be a non-negligible delay between sending a video frame and getting back a command, this is not a limitation of the approach, since it is possible to stream `video games with barely-noticeable lag <https://arstechnica.com/gaming/2019/03/googles-multiyear-quest-to-overcome-ids-stadia-streaming-skepticism/>`_. In particular, rtcbot currently cannot take advantage of the Pi's hardware acceleration, meaning that all video encoding is done in software, which ends up adding to video delay. ``` ## Python to Python Streaming To start offloading, we get rid of the browser - we will create a connection from Python on your desktop to Python on your robot, stream video from the robot, and stream controls from the desktop. The robot code is identical to the code we have seen in previous tutorials. All we did was remove the browser code, since it is not needed. ```python # robot.py from aiohttp import web routes = web.RouteTableDef() from rtcbot import RTCConnection, CVCamera cam = CVCamera() conn = RTCConnection() conn.video.putSubscription(cam) @conn.subscribe def controls(msg): print("Control message:", msg) @routes.post("/connect") async def connect(request): clientOffer = await request.json() serverResponse = await conn.getLocalDescription(clientOffer) return web.json_response(serverResponse) async def cleanup(app): await conn.close() cam.close() app = web.Application() app.add_routes(routes) app.on_shutdown.append(cleanup) web.run_app(app) ``` Then, on the desktop, we run the following: ```python # desktop.py import asyncio import aiohttp import cv2 import json from rtcbot import RTCConnection, Gamepad, CVDisplay disp = CVDisplay() g = Gamepad() conn = RTCConnection() @conn.video.subscribe def onFrame(frame): # Show a 4x larger image so that it is easy to see resized = cv2.resize(frame, (frame.shape[1] * 4, frame.shape[0] * 4)) disp.put_nowait(resized) async def connect(): localDescription = await conn.getLocalDescription() async with aiohttp.ClientSession() as session: async with session.post( "http://localhost:8080/connect", data=json.dumps(localDescription) ) as resp: response = await resp.json() await conn.setRemoteDescription(response) # Start sending gamepad controls g.subscribe(conn) asyncio.ensure_future(connect()) try: asyncio.get_event_loop().run_forever() finally: conn.close() disp.close() g.close() ``` This code manually sends the connect request, and establishes a webrtc connection with the response. Also introduced was the Python version of `Gamepad`. The browser version was used in a previous tutorial. The robot code's output is now: ``` ======== Running on http://0.0.0.0:8080 ======== (Press CTRL+C to quit) Control message: {'timestamp': 1553379212.684861, 'code': 'BTN_SOUTH', 'state': 1, 'event': 'Key'} Control message: {'timestamp': 1553379212.684861, 'code': 'ABS_Y', 'state': -1, 'event': 'Absolute'} Control message: {'timestamp': 1553379213.192862, 'code': 'BTN_SOUTH', 'state': 0, 'event': 'Key'} Control message: {'timestamp': 1553379214.14487, 'code': 'BTN_SOUTH', 'state': 1, 'event': 'Key'} Control message: {'timestamp': 1553379214.964878, 'code': 'BTN_SOUTH', 'state': 0, 'event': 'Key'} Control message: {'timestamp': 1553379216.172882, 'code': 'BTN_SOUTH', 'state': 1, 'event': 'Key'} Control message: {'timestamp': 1553379216.48489, 'code': 'BTN_SOUTH', 'state': 0, 'event': 'Key'} Control message: {'timestamp': 1553379216.872889, 'code': 'ABS_X', 'state': -11, 'event': 'Absolute'} Control message: {'timestamp': 1553379216.884891, 'code': 'ABS_X', 'state': -64, 'event': 'Absolute'} Control message: {'timestamp': 1553379216.892888, 'code': 'ABS_X', 'state': -95, 'event': 'Absolute'} Control message: {'timestamp': 1553379216.904886, 'code': 'ABS_X', 'state': -158, 'event': 'Absolute'} Control message: {'timestamp': 1553379216.912884, 'code': 'ABS_X', 'state': -599, 'event': 'Absolute'} Control message: {'timestamp': 1553379216.924894, 'code': 'ABS_X', 'state': -1240, 'event': 'Absolute'} Control message: {'timestamp': 1553379216.932888, 'code': 'ABS_X', 'state': -1586, 'event': 'Absolute'} Control message: {'timestamp': 1553379216.944887, 'code': 'ABS_X', 'state': -2080, 'event': 'Absolute'} Control message: {'timestamp': 1553379216.952887, 'code': 'ABS_X', 'state': -2689, 'event': 'Absolute'} Control message: {'timestamp': 1553379216.964892, 'code': 'ABS_X', 'state': -3833, 'event': 'Absolute'} Control message: {'timestamp': 1553379216.972892, 'code': 'ABS_X', 'state': -4957, 'event': 'Absolute'} Control message: {'timestamp': 1553379216.972892, 'code': 'ABS_Y', 'state': -53, 'event': 'Absolute'} Control message: {'timestamp': 1553379216.984889, 'code': 'ABS_X', 'state': -7944, 'event': 'Absolute'} Control message: {'timestamp': 1553379216.984889, 'code': 'ABS_Y', 'state': -106, 'event': 'Absolute'} Control message: {'timestamp': 1553379216.992891, 'code': 'ABS_X', 'state': -10170, 'event': 'Absolute'} Control message: {'timestamp': 1553379216.992891, 'code': 'ABS_Y', 'state': -137, 'event': 'Absolute'} Control message: {'timestamp': 1553379217.004892, 'code': 'ABS_X', 'state': -12567, 'event': 'Absolute'} ``` ```eval_rst .. warning:: The output for the `Gamepad` object is currently different in Javascript and in Python. Make sure you don't mix them up! ```
/rtcbot-0.2.5.tar.gz/rtcbot-0.2.5/examples/offloading/README.md
0.527317
0.843509
README.md
pypi
# Streaming Video In the previous tutorial, a data connection was created between your python program and a browser, allowing to send messages back and forth. This tutorial will build upon the previous one's code, culminating in a 2-way video and audio connection, where the Python code displays the video stream it gets from your browser, and the browser displays the video stream from the server. You should use a browser on your laptop or desktop for this one, and put the server on a Raspberry Pi if you want to try streaming from the PiCamera. ## Skeleton Code If you have not done so yet, you should look at the previous tutorial, where the basics of an `RTCConnection` are explained. For the skeleton of this part, the button from the previous tutorial was removed, and replaced with a video element. Also removed was all code involving messages, to keep this tutorial focused entirely on video. ```python from aiohttp import web routes = web.RouteTableDef() from rtcbot import RTCConnection, getRTCBotJS # For this example, we use just one global connection conn = RTCConnection() # Serve the RTCBot javascript library at /rtcbot.js @routes.get("/rtcbot.js") async def rtcbotjs(request): return web.Response(content_type="application/javascript", text=getRTCBotJS()) # This sets up the connection @routes.post("/connect") async def connect(request): clientOffer = await request.json() serverResponse = await conn.getLocalDescription(clientOffer) return web.json_response(serverResponse) @routes.get("/") async def index(request): return web.Response( content_type="text/html", text=r""" <html> <head> <title>RTCBot: Skeleton</title> <script src="/rtcbot.js"></script> </head> <body style="text-align: center;padding-top: 30px;"> <video autoplay playsinline muted controls></video> <p> Open the browser's developer tools to see console messages (CTRL+SHIFT+C) </p> <script> var conn = new rtcbot.RTCConnection(); async function connect() { let offer = await conn.getLocalDescription(); // POST the information to /connect let response = await fetch("/connect", { method: "POST", cache: "no-cache", body: JSON.stringify(offer) }); await conn.setRemoteDescription(await response.json()); console.log("Ready!"); } connect(); </script> </body> </html> """) async def cleanup(app=None): await conn.close() app = web.Application() app.add_routes(routes) app.on_shutdown.append(cleanup) web.run_app(app) ``` This code establishes a WebRTC connection, and nothing else. It can be seen as a minimal example for RTCBot. ## Streaming Video from Python The first thing we'll do is send a video stream from a webcam to the browser. If on a desktop or laptop, you should use `CVCamera`, and if on a Raspberry Pi with the camera module, use `PiCamera` instead - they get their video differently, but behave identically. All you need is to add a couple lines of code to the skeleton to get a fully-functional video stream: ```diff from aiohttp import web routes = web.RouteTableDef() -from rtcbot import RTCConnection, getRTCBotJS +from rtcbot import RTCConnection, getRTCBotJS, CVCamera +# Initialize the camera +camera = CVCamera() # For this example, we use just one global connection conn = RTCConnection() +# Send images from the camera through the connection +conn.video.putSubscription(camera) # Serve the RTCBot javascript library at /rtcbot.js @routes.get("/rtcbot.js") async def rtcbotjs(request): return web.Response(content_type="application/javascript", text=getRTCBotJS()) # This sets up the connection @routes.post("/connect") async def connect(request): clientOffer = await request.json() serverResponse = await conn.getLocalDescription(clientOffer) return web.json_response(serverResponse) @routes.get("/") async def index(request): return web.Response( content_type="text/html", text=r""" <html> <head> <title>RTCBot: Skeleton</title> <script src="/rtcbot.js"></script> </head> <body style="text-align: center;padding-top: 30px;"> <video autoplay playsinline muted controls></video> <p> Open the browser's developer tools to see console messages (CTRL+SHIFT+C) </p> <script> var conn = new rtcbot.RTCConnection(); + // When the video stream comes in, display it in the video element + conn.video.subscribe(function(stream) { + document.querySelector("video").srcObject = stream; + }); async function connect() { let offer = await conn.getLocalDescription(); // POST the information to /connect let response = await fetch("/connect", { method: "POST", cache: "no-cache", body: JSON.stringify(offer) }); await conn.setRemoteDescription(await response.json()); console.log("Ready!"); } connect(); </script> </body> </html> """) async def cleanup(app=None): await conn.close() + camera.close() # Singletons like a camera are not awaited on close app = web.Application() app.add_routes(routes) app.on_shutdown.append(cleanup) web.run_app(app) ``` One major difference between javascript and Python, is that the audio/video `subscribe` in javascript is only called once, and returns a video stream object. In Python, the same function would get called on each video frame. Also, remember to subscribe/put all subscriptions into `conn` _before_ initializing the connection with `getLocalDescription`. This is because `getLocalDescription` uses knowledge of which types of streams you want to send and receive to construct its offer and response. ```eval_rst .. note:: In some cases you will need to click play in the browser before the video starts. ``` ## Adding Audio ```eval_rst .. warning:: Be aware that a Pi 3 with USB microphone might struggle a bit sending both audio and video at the same time. Try the code on your desktop/laptop or a Pi 4 first to make sure it works before attempting use with the Pi 3. ``` Based on what you know of RTCBot so far, and knowing that you can use a microphone with the `Microphone` class, do you think you can figure out audio just looking at the video code above? The modifications to add audio use exactly the same ideas: ```python from rtcbot import RTCConnection, getRTCBotJS, CVCamera, Microphone camera = CVCamera() mic = Microphone() conn = RTCConnection() conn.video.putSubscription(camera) conn.audio.putSubscription(mic) ``` Also, don't forget to close the microphone at the end with `mic.close()`! On the browser side, we add an `<audio autoplay></audio>` element right after the `<video>` element, and update the javascript: ```javascript var conn = new RTCConnection(); conn.video.subscribe(function (stream) { document.querySelector("video").srcObject = stream; }); conn.audio.subscribe(function (stream) { document.querySelector("audio").srcObject = stream; }); ``` ## Browser to Python Thus far, we used Python to stream video and audio to the browser, which is the main use case in a robot. However, RTCBot can handle streaming both ways. Since it is assumed that you are at a single computer, we can't stream from Python and the browser at the same time (both will try to use the same webcam). We will switch the stream directions instead. This bears repeating, so let's reiterate a bit of the basics of RTCBot's python API: - Anything that outputs data has a `subscribe` method - Anything that takes in data has a `putSubscription` method, which takes in a subscription: `putSubscription(x.subscribe())` - An RTCConnection `conn` has _both_ outputs and inputs for messages sent through the connection. Furthermore, it also has video and audio streams `conn.video` and `conn.audio`, which _also_ can be used as both inputs and outputs. With this in mind, reversing the stream direction is a simple matter: ```python from aiohttp import web routes = web.RouteTableDef() from rtcbot import RTCConnection, getRTCBotJS, CVDisplay, Speaker display = CVDisplay() speaker = Speaker() # For this example, we use just one global connection conn = RTCConnection() display.putSubscription(conn.video.subscribe()) speaker.putSubscription(conn.audio.subscribe()) # Serve the RTCBot javascript library at /rtcbot.js @routes.get("/rtcbot.js") async def rtcbotjs(request): return web.Response(content_type="application/javascript", text=getRTCBotJS()) # This sets up the connection @routes.post("/connect") async def connect(request): clientOffer = await request.json() serverResponse = await conn.getLocalDescription(clientOffer) return web.json_response(serverResponse) @routes.get("/") async def index(request): return web.Response( content_type="text/html", text=r""" <html> <head> <title>RTCBot: Skeleton</title> <script src="/rtcbot.js"></script> </head> <body style="text-align: center;padding-top: 30px;"> <video autoplay playsinline controls></video> <audio autoplay></audio> <p> Open the browser's developer tools to see console messages (CTRL+SHIFT+C) </p> <script> var conn = new rtcbot.RTCConnection(); async function connect() { let streams = await navigator.mediaDevices.getUserMedia({audio: true, video: true}); conn.video.putSubscription(streams.getVideoTracks()[0]); conn.audio.putSubscription(streams.getAudioTracks()[0]); let offer = await conn.getLocalDescription(); // POST the information to /connect let response = await fetch("/connect", { method: "POST", cache: "no-cache", body: JSON.stringify(offer) }); await conn.setRemoteDescription(await response.json()); console.log("Ready!"); } connect(); </script> </body> </html> """) async def cleanup(app=None): await conn.close() display.close() speaker.close() app = web.Application() app.add_routes(routes) app.on_shutdown.append(cleanup) web.run_app(app) ``` In the above code, instead of `CVCamera` and `Microphone`, `CVDisplay` and `Speaker` are used. In the javascript, we moved the subscribing code to the `connect` function, because `getUserMedia` is an asynchronous function, and cannot be `await`ed outside an async function (like connect). ## Summary This tutorial introduced video and audio streaming over WebRTC. Everything here relied on the `RTCConnection` object `conn`, which can be initialized both from browser and Python. 1. `conn.video` is both a data producer and a consumer, allowing both to subscribe to remote video and send video streams 2. `conn.audio` behaves in exactly the same way as `conn.video` Put together with messages that can be sent directly using `conn` (see previous tutorial), this allows you to send data back and forth however you like. ## Extra Notes While the `RTCConnection` was created globally here, but should generally be created for each connection, the camera/microphone/speaker/display objects should be used as singletons, initialized once at the beginning of the program, and closed when the program is exiting.
/rtcbot-0.2.5.tar.gz/rtcbot-0.2.5/examples/streaming/README.md
0.511961
0.82755
README.md
pypi
# Connecting over 4G Thus far, the tutorials have all had you connect directly to the robot, which meant that it had to be on your local wifi network. In this tutorial, we will finally decouple the server and the robot. Rather than connecting to the robot, we will have two separate Python programs. The first is a server, which will be served at a known IP address. The second will be the robot, which connects to the server with a websocket, and waits for the information necessary to initialize a WebRTC connection directly to your browser. ```eval_rst .. note:: The server must be accessible from the internet. Running your own server might involve a bit of configuration in your router settings or setup of a cloud server, such as a virtual machine on DigitalOcean. You can also use the provided server at https://rtcbot.dev to help establish connections (see below). ``` In a previous tutorial, we developed a connection that streamed video to the browser. This tutorial will implement exactly the same functionality, but with the robot on a remote connection. The browser-side code will remain unchanged - all of the work here will be in Python. ## Server Code Most of the server code is unchanged. The only difference is that we set up a listener at `/ws`, which will establish a websocket connection with the robot: ```python ws = None # Websocket connection to the robot @routes.get("/ws") async def websocket(request): global ws ws = Websocket(request) print("Robot Connected") await ws # Wait until the websocket closes print("Robot disconnected") return ws.ws ``` The above code sets up a global `ws` variable which will hold the active connection. We then use this websocket in the `/connect` handler. Instead of establishing a WebRTC connection ourselves, the server forwards the information directly to the robot using the websocket: ```python # Called by the browser to set up a connection @routes.post("/connect") async def connect(request): global ws if ws is None: raise web.HTTPInternalServerError("There is no robot connected") clientOffer = await request.json() # Send the offer to the robot, and receive its response ws.put_nowait(clientOffer) robotResponse = await ws.get() return web.json_response(robotResponse) ``` This is all that is needed from the server - its function is simply to route the information necessary to establish the connection directly between robot and browser. The full server code is here: ```python from aiohttp import web routes = web.RouteTableDef() from rtcbot import Websocket, getRTCBotJS ws = None # Websocket connection to the robot @routes.get("/ws") async def websocket(request): global ws ws = Websocket(request) print("Robot Connected") await ws # Wait until the websocket closes print("Robot disconnected") return ws.ws # Called by the browser to set up a connection @routes.post("/connect") async def connect(request): global ws if ws is None: raise web.HTTPInternalServerError("There is no robot connected") clientOffer = await request.json() # Send the offer to the robot, and receive its response ws.put_nowait(clientOffer) robotResponse = await ws.get() return web.json_response(robotResponse) # Serve the RTCBot javascript library at /rtcbot.js @routes.get("/rtcbot.js") async def rtcbotjs(request): return web.Response(content_type="application/javascript", text=getRTCBotJS()) @routes.get("/") async def index(request): return web.Response( content_type="text/html", text=""" <html> <head> <title>RTCBot: Remote Video</title> <script src="/rtcbot.js"></script> </head> <body style="text-align: center;padding-top: 30px;"> <video autoplay playsinline muted controls></video> <p> Open the browser's developer tools to see console messages (CTRL+SHIFT+C) </p> <script> var conn = new rtcbot.RTCConnection(); conn.video.subscribe(function(stream) { document.querySelector("video").srcObject = stream; }); async function connect() { let offer = await conn.getLocalDescription(); // POST the information to /connect let response = await fetch("/connect", { method: "POST", cache: "no-cache", body: JSON.stringify(offer) }); await conn.setRemoteDescription(await response.json()); console.log("Ready!"); } connect(); </script> </body> </html> """) async def cleanup(app=None): global ws if ws is not None: c = ws.close() if c is not None: await c app = web.Application() app.add_routes(routes) app.on_shutdown.append(cleanup) web.run_app(app) ``` ## Remote Code For simplicity, we will just run both server and robot on the local machine. The robot connects to the server with a websocket, and waits for the message that will allow it to initialize its WebRTC connection. ```python import asyncio from rtcbot import Websocket, RTCConnection, CVCamera cam = CVCamera() conn = RTCConnection() conn.video.putSubscription(cam) # Connect establishes a websocket connection to the server, # and uses it to send and receive info to establish webRTC connection. async def connect(): ws = Websocket("http://localhost:8080/ws") remoteDescription = await ws.get() robotDescription = await conn.getLocalDescription(remoteDescription) ws.put_nowait(robotDescription) print("Started WebRTC") await ws.close() asyncio.ensure_future(connect()) try: asyncio.get_event_loop().run_forever() finally: cam.close() conn.close() ``` With these two pieces of code, you first start the server, then start the robot, and finally open `http://localhost:8080` in the browser to view a video stream coming directly from the robot, even if the robot has an unknown IP. ## rtcbot.dev The above example requires you to have your own internet-accessible server at a known IP address to set up the connection, if your remote code is not on your local network. The server's only real purpose is to help _establish_ a connection - once the connection is established, it does not do anything. For this reason, I am hosting a free testing server online at `https://rtcbot.dev` that performs the equivalent of the following operation from the above server code: ```python @routes.get("/ws") async def websocket(request): global ws ws = Websocket(request) print("Robot Connected") await ws # Wait until the websocket closes print("Robot disconnected") return ws.ws # Called by the browser to set up a connection @routes.post("/connect") async def connect(request): global ws if ws is None: raise web.HTTPInternalServerError("There is no robot connected") clientOffer = await request.json() # Send the offer to the robot, and receive its response ws.put_nowait(clientOffer) robotResponse = await ws.get() return web.json_response(robotResponse) ``` Since the server at `rtcbot.dev` is open to anyone, instead of `/ws` and `/connect`, you need to choose some random sequence of letters and numbers that will identify your connection, for example `myRandomSequence11`. Once you have chosen your sequence, you can both connect your websocket and POST to `https://rtcbot.dev/myRandomSequence11`: ```eval_rst .. note:: If you open https://rtcbot.dev/myRandomSequence11 in your browser, you can see if your remote code is connected with a websocket, and optionally open a video connection. ``` When using `rtcbot.dev`, the remote connection code becomes: ```python async def connect(): ws = Websocket("https://rtcbot.dev/myRandomSequence11") remoteDescription = await ws.get() robotDescription = await conn.getLocalDescription(remoteDescription) ws.put_nowait(robotDescription) print("Started WebRTC") await ws.close() ``` and the local browser's connection code becomes: ```js let response = await fetch("https://rtcbot.dev/myRandomSequence11", { method: "POST", cache: "no-cache", body: JSON.stringify(offer), }); ``` With `rtcbot.dev`, you no longer need your local server code to run websockets or a connection service. Its only purpose is to give the browser the html and javascript necessary to establish a connection. We will get rid of the browser entirely in the next tutorial. ## If it doesn't work over 4G The above example should work for most people. However, some mobile network operators perform routing that disallows creating a direct WebRTC connection to a mobile device over 4G. If this is your situation, you need to use what is called a TURN server, which will forward data between the browser and robot. ```eval_rst .. note:: You can check if your mobile operator allows such connections by using your phone to create a wifi hotspot, to which you can connect your robot. If video streaming works with the code above, you can ignore this section! ``` ```eval_rst .. warning:: Because a TURN server essentially serves as a proxy through which an entire WebRTC connection is routed, it can send and receive quite a bit of data - make sure that you don't exceed your download and upload limits! ``` There are two options through which to setup a TURN server: [coTURN](https://github.com/coturn/coturn) and [Pion](https://github.com/pion/turn). Pion is meant to be a more simple and temporary solution that's easy to setup while coTURN is recommended for more permanent setups. ### Setup with Pion The Pion server is easy to set up on Windows,Mac and Linux - all you need to do is [download the executable](https://github.com/pion/turn/releases/tag/1.0.3), and run it from the command line as shown. **Linux/Mac**: ```bash chmod +x ./simple-turn-linux-amd64 # allow executing the downloaded file export USERS='myusername=mypassword' export REALM=my.server.ip export UDP_PORT=3478 ./simple-turn-linux-amd64 # simple-turn-darwin-amd64 if on Mac ``` **Windows**: You can run the following from powershell: ```powershell $env:USERS = "myusername=mypassword" $env:REALM = "my.server.ip" $env:UDP_PORT = 3478 ./simple-turn-windows-amd64.exe ``` With the Pion server running, you will need to let both Python and Javascript know about it when creating your `RTCConnection`: ```python from aiortc import RTCConfiguration, RTCIceServer myConnection = RTCConnection(rtcConfiguration=RTCConfiguration([ RTCIceServer(urls="stun:stun.l.google.com:19302"), RTCIceServer(urls="turn:my.server.ip:3478", username="myusername",credential="mypassword") ])) ``` ```javascript var conn = new rtcbot.RTCConnection(true, { iceServers:[ { urls: ["stun:stun.l.google.com:19302"] }, { urls: "turn:my.server.ip:3478?transport=udp", username: "myusername", credential: "mypassword", }, ]); ``` ### Setup with coTURN Setting up a coTURN server takes a bit more work and is only supported on Linux and Mac. The following steps will assume a Linux system running Ubuntu. Install coTURN and stop the coTURN service to modify config files with ```bash sudo apt install coturn sudo systemctl stop coturn ``` Edit the file `/etc/default/coturn` by uncommenting the line `TURNSERVER_ENABLED=1`. This will allow coTURN to start in daemon mode on boot. Edit another file `/etc/turnserver.conf` and add the following lines. Be sure to put your system's public facing IP address in place of `<PUBLIC_NETWORK_IP>`, your domain name in place of `<DOMAIN>`, and your own credentials in place of `<USERNAME>` and `<PASSWORD>`. ``` listening-port=3478 tls-listening-port=5349 listening-ip=<PUBLIC_NETWORK_IP> relay-ip=<PUBLIC_NETWORK_IP> external-ip=<PUBLIC_NETWORK_IP> realm=<DOMAIN> server-name=<DOMAIN> user=<USERNAME>:<PASSWORD> lt-cred-mech ``` ```eval_rst .. note:: If you are running coTURN within a local network, <DOMAIN> can be whatever you want. ``` Restart the coTURN service, check that it's running, and reboot. ```bash sudo systemctl start coturn sudo systemctl status coturn sudo reboot ``` With the coTURN server running, you will need to let both Python and Javascript know about it when creating your `RTCConnection`: ```python from aiortc import RTCConfiguration, RTCIceServer myConnection = RTCConnection(rtcConfiguration=RTCConfiguration([ RTCIceServer(urls="stun:stun.l.google.com:19302"), RTCIceServer(urls="turn:<PUBLIC_NETWORK_IP>:3478", username="myusername",credential="mypassword") ])) ``` ```javascript var conn = new rtcbot.RTCConnection(true, { iceServers:[ { urls: ["stun:stun.l.google.com:19302"] }, { urls: "turn:<PUBLIC_NETWORK_IP:3478?transport=udp", username: "myusername", credential: "mypassword", }, ]); ``` ```eval_rst .. note:: If you are running coTURN on a local network, replace <PUBLIC_NETWORK_IP> with the public facing IP of the system running coTURN. If coTURN is running on a server with a domain, replace <PUBLIC_NETWORK_IP> with the domain/realm set in /etc/turnserver.conf. ``` With either of the options above, you should be able to stream video to your browser using 4G, even if your mobile operator disallows direct connections. ## Summary This tutorial split up the server and robot code into distinct pieces. Also introduced was rtcbot's websocket wrapper, allowing you to easily establish a data-only connection. Finally, TURN servers were introduced, and instructions were given on how to set one up if direct connections fail. ## Extra Notes Be aware that throughout these tutorials, all error handling and robustness was left out in the interest of clarity in the fundamental program flow. In reality, you will probably want to make sure that the connection did not have an error, and add the ability to connect and disconnect multiple times.
/rtcbot-0.2.5.tar.gz/rtcbot-0.2.5/examples/mobile/README.md
0.81468
0.910625
README.md
pypi
import socket import base64 class ntripClient: def __init__(self,host,mountpoint,username,password,port,logger): """Connection to NTRIP server and sends authorization credentials Args: host (string): The IP (or URL) of the NTRIP server you want to connect to port (int): The port number of the NTRIP server you want to connect to (default: 2101) mountpoint (str): The mountpoint in the NTRIP server you want to connect to username (str): Your username to the NTRIP server password (str, optional): Your password to the NTRIP server. Defaults to "None". port (int, optional): The port you want to connect to. Defaults to 2101. Returns: NtripClient: An NtripClient object """ self.logger = logger # Encode username and password in base64 for transmission self.host = host self.port = port self.conn = socket.socket(socket.AF_INET, socket.SOCK_STREAM) auth_str = f"{username}:{password}".encode('utf-8') auth_b64 = base64.b64encode(auth_str).decode('utf-8') # Construct GET request with mountpoint and authorization headers request = f"GET /{mountpoint} HTTP/1.1\r\n" request += f"Host: {self.host}\r\n" request += "Ntrip-Version: Ntrip/1.0\r\n" request += "User-Agent: NTRIP Python Client\r\n" request += "Connection: close\r\n" request += f"Authorization: Basic {auth_b64}\r\n" request += "\r\n" self.request = request def connect(self): # Open socket connection to NTRIP server and send request self.logger.info("Attempting to connect") self.conn.settimeout(5) self.conn.connect((self.host, self.port)) self.conn.settimeout(None) self.conn.send(self.request.encode('utf-8')) # Receive response from server response = self.conn.recv(4096*2) self.logger.info("Connected") self.logger.info(response.decode('utf-8'))
/rtcmdecoder-1.0-py3-none-any.whl/rtcm/clients/ntripClient.py
0.566019
0.166337
ntripClient.py
pypi
import socket, re, sys, select from . import sip, sipparser import pprint class CreateSocketError(Exception): pass class BindSocketError(Exception): pass class SendDataError(Exception): pass class UnsupportedSIPVersion(Exception): pass class UnsupportedSIPTransport(Exception): pass class CollectorServer: ''' The CollectorServer object opens a SIP socket to receive RTCP-XR packets, parses them, then sends the data to a handler. Args: - None - Attributes: local_ip (ipV4 address): [None] Local IPV4 address to bind to (None: Autodetect) port (int) : [5060] Local Port to bind to reply_to_socket (bool) : [False] Should we reply to the address from the socket? Otherwise use SIP Header IP contact_from_sip(bool) : [False] Should we set our contact from the SIP header? Otherwise bound IP debug (bool) : [False] Print Debugging information handler (func) : [None] Handler function for recieved data (None: pprint res data) timeout (int) : [10] Select Timeout in seconds timeout_handler (func) : [None] Handler for select timeout event Handler Function: Takes 1 arg that is the parsed data structure. Returns: Send Response Packet? True or False ''' def __init__(self, local_ip=None, port=5060, reply_to_socket=False, contact_from_sip=False, debug=False, handler=None, timeout=10, timeout_handler=None): self.port = port self.reply_to_socket = reply_to_socket self.contact_from_sip = contact_from_sip self.debug = debug self.selectto = timeout self.handler = handler if self.handler is None: self.handler = self.default_handler self.timeout_handler = timeout_handler if self.timeout_handler is None: self.timeout_handler = lambda x: (x) self.local_ip = local_ip if self.local_ip is None: s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) s.connect(('google.com', 80)) self.local_ip = s.getsockname()[0] s.close() self.printDebug("Local IP: %s" % self.local_ip) self.recvsocket = self._create_socket() def printDebug(self, *args, **kwargs): if self.debug: print(*args, file=sys.stderr, **kwargs) def listen(self): inputs = [self.recvsocket] outputs = [] self.printDebug("Starting listening loop") while inputs: readable, writable, exceptional = select.select(inputs, outputs, inputs, self.selectto) if len(readable) == 0: self.printDebug("Select timeout event") self.timeout_handler(self.selectto) for s in readable: if s is self.recvsocket: if not self.handle_sip_packet(): continue def handle_sip_packet(self): data, remote = self.recvsocket.recvfrom(10240) try: request = sip.Request(data) except sip.SipUnpackError: return False self.printDebug("Received request from %s:%d : \n%s" % (remote[0], remote[1], str(request))) # Verify SIP transport and Version # Regexp parsing via Header: SIP/2.0/UDP 172.16.18.90:5060;rport m = re.search(r'SIP/(.*)/(.*)\s(.*):([0-9]*);*', request.headers['via']) if not m: SendDataError("Wrong Via: header") return False if m.group(1) != "2.0": UnsupportedSIPVersion("Unsupported SIP version in Via header: %s" % m.group(1)) return False if m.group(2).upper() != "UDP": UnsupportedSIPTransport("Unsupported Transport in Via: header") return False # Build our response response = sip.Response() if request.method != "PUBLISH" \ or "content-type" not in request.headers \ or request.headers["content-type"] != "application/vq-rtcpxr": self.printDebug("Received a non PUBLISH: %s" % request.method) response.reason = "Not implemented" response.status = "501" for i in ['via', 'from', 'to', 'cseq', 'call-id']: if i in request.headers: response.headers[i] = request.headers[i] else: response.headers[i] = '' response.headers['content-length'] = 0 response.headers['expires'] = 0 response.headers['contact'] = "<sip:%s:%d;transport=tcp;handler=dum>" % (self.local_ip, self.port) if self.contact_from_sip and 'to' in request.headers: # Pull out the to header rem = re.search(r'\@([0-9.]+)\:([0-9]+)', request.headers['to']) if rem: response.headers['contact'] = "<sip:%s:%d;transport=tcp;handler=dum>" % \ (rem.group(1), int(rem.group(2))) # Determine endpoint to send to if self.reply_to_socket is False: sipaddr = sipparser.parseSipAddr(request.headers['contact']) if sipaddr: phone_ip = sipaddr['ip'] phone_port = sipaddr['port'] self.printDebug("Phone IP and port from Contact header: %s:%s" % (phone_ip, phone_port)) else: phone_ip = remote[0] phone_port = remote[1] self.printDebug("Phone IP and port from socket: %s:%d" % (phone_ip, phone_port)) if self.handler(sipparser.parsesip(request)): self.send_response(phone_ip, phone_port, response) def default_handler(self, request): pp = pprint.PrettyPrinter(indent=2) pp.pprint(request) return True def send_response(self, phone_ip, phone_port, response): self.printDebug("Creating send socket") try: self.sendsock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP) except Exception as e: CreateSocketError("Cannot create socket: %s" % e) try: self.sendsock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) self.sendsock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1) except AttributeError: pass try: self.printDebug("Binding to local ip:port %s:%s" % (self.local_ip, self.port)) self.sendsock.bind((self.local_ip, self.port)) except Exception as e: SendDataError("Cannot bind socket to %s:%d: %s" % (self.local_ip, self.port, e)) # sent the OK (or 501) try: self.printDebug("Sending response to %s:%s : \n%s" % (phone_ip, phone_port, str(response))) sent = self.sendsock.sendto(str(response).encode("utf-8"), (phone_ip, int(phone_port))) self.printDebug("Sent %s bytes" % sent) except Exception as e: SendDataError("Cannot send OK/DENY response to %s:%s: %s" % (phone_ip, phone_port, e)) self.sendsock.close() def _create_socket(self): try: sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP) sock.setblocking(0) except Exception as e: raise CreateSocketError("Cannot create socket: %s" % e) try: sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1) except AttributeError: pass try: sock.bind((socket.gethostbyname(self.local_ip), self.port)) except Exception as e: raise BindSocketError("Cannot bind socket to %s:%d: %s" % (self.local_ip, self.port, e)) return sock
/rtcpxr_collector-0.1.7-py3-none-any.whl/rtcpxr_collector/vqcollector.py
0.463201
0.204183
vqcollector.py
pypi
from io import BytesIO class SipError(Exception): pass class SipUnpackError(SipError): pass class SipNeedData(SipUnpackError): pass class SipPackError(SipError): pass def canon_header(s): exception = {'call-id': 'Call-ID', 'cseq': 'CSeq', 'www-authenticate': 'WWW-Authenticate'} short = ['allow-events', 'u', 'call-id', 'i', 'contact', 'm', 'content-encoding', 'e', 'content-length', 'l', 'content-type', 'c', 'event', 'o', 'from', 'f', 'subject', 's', 'supported', 'k', 'to', 't', 'via', 'v'] s = s.lower() return ((len(s) == 1) and s in short and canon_header(short[short.index(s) - 1])) \ or (s in exception and exception[s]) or '-'.join([x.capitalize() for x in s.split('-')]) def parse_headers(f): """Return dict of HTTP headers parsed from a file object.""" d = {} while 1: line = f.readline().decode("utf-8", "replace") line = line.strip() if not line: break lsplit = line.split(None, 1) if not lsplit[0].endswith(':'): raise SipUnpackError('invalid header: %r' % line) k = lsplit[0][:-1].lower() d[k] = len(lsplit) != 1 and lsplit[1] or '' return d def parse_body(f, headers): """Return SIP body parsed from a file object, given HTTP header dict.""" if 'content-length' in headers: n = int(headers['content-length']) body = f.read(n) if len(body) != n: raise SipNeedData('short body (missing %d bytes)' % (n - len(body))) elif 'content-type' in headers: body = f.read() else: body = '' return body.decode("utf-8", "replace") class Message: """SIP Protocol headers + body.""" __metaclass__ = type __hdr_defaults__ = {} headers = None body = None def __init__(self, *args, **kwargs): if args: self.unpack(args[0]) else: self.headers = {} self.body = '' for k, v in self.__hdr_defaults__.items(): setattr(self, k, v) for k, v in kwargs.items(): setattr(self, k, v) def unpack(self, buf): f = BytesIO(buf) self.headers = parse_headers(f) self.body = parse_body(f, self.headers) self.data = f.read().decode("utf-8", "replace") def pack_hdr(self): return ''.join(['%s: %s\r\n' % (canon_header(k), v) for k, v in self.headers.items()]) def __len__(self): return len(str(self)) def __str__(self): return '%s\r\n%s' % (self.pack_hdr(), self.body) class Request(Message): """SIP request.""" __hdr_defaults__ = { 'method': 'INVITE', 'uri': 'sip:user@example.com', 'version': '2.0', 'headers': {'to': '', 'from': '', 'call-id': '', 'cseq': '', 'contact': ''} } __methods = dict.fromkeys(( 'ACK', 'BYE', 'CANCEL', 'INFO', 'INVITE', 'MESSAGE', 'NOTIFY', 'OPTIONS', 'PRACK', 'PUBLISH', 'REFER', 'REGISTER', 'SUBSCRIBE', 'UPDATE' )) __proto = 'SIP' def unpack(self, buf): f = BytesIO(buf) line = f.readline().decode("utf-8", "replace") lsplit = line.strip().split() if len(lsplit) != 3 or lsplit[0] not in self.__methods or not lsplit[2].startswith(self.__proto): raise SipUnpackError('invalid request: %r' % line) self.method = lsplit[0] self.uri = lsplit[1] self.version = lsplit[2][len(self.__proto) + 1:] Message.unpack(self, f.read()) def __str__(self): return '%s %s %s/%s\r\n' % (self.method, self.uri, self.__proto, self.version) + Message.__str__(self) class Response(Message): """SIP response.""" __hdr_defaults__ = { 'version': '2.0', 'status': '200', 'reason': 'OK', 'headers': {'to': '', 'from': '', 'call-id': '', 'cseq': '', 'contact': ''} } __proto = 'SIP' def unpack(self, buf): f = BytesIO(buf) line = f.readline().decode("utf-8", "replace") lsplit = line.strip().split(None, 2) if len(lsplit) < 2 or not lsplit[0].startswith(self.__proto) or not lsplit[1].isdigit(): raise SipUnpackError('invalid response: %r' % line) self.version = lsplit[0][len(self.__proto) + 1:] self.status = lsplit[1] self.reason = lsplit[2] Message.unpack(self, f.read()) def __str__(self): return '%s/%s %s %s\r\n' % (self.__proto, self.version, self.status, self.reason) + Message.__str__(self)
/rtcpxr_collector-0.1.7-py3-none-any.whl/rtcpxr_collector/sip.py
0.772574
0.162579
sip.py
pypi
import numpy as np class UnionFind: """ An implementation of a Union--Find class. The class performs path compression by default. It uses integers for storing one disjoint set, assuming that vertices are zero-indexed. """ def __init__(self, n_vertices): """ Initializes an empty Union--Find data structure for a given number of vertices. """ self._parent = np.arange(n_vertices, dtype=int) def find(self, u): """ Finds and returns the parent of u with respect to the hierarchy. """ if self._parent[u] == u: return u else: # Perform path collapse operation self._parent[u] = self.find(self._parent[u]) return self._parent[u] def merge(self, u, v): """ Merges vertex u into the component of vertex v. Note the asymmetry of this operation. """ if u != v: self._parent[self.find(u)] = self.find(v) def roots(self): """ Generator expression for returning roots, i.e. components that are their own parents. """ for vertex, parent in enumerate(self._parent): if vertex == parent: yield vertex class PersistentHomologyCalculation: def __call__(self, matrix): n_vertices = matrix.shape[0] uf = UnionFind(n_vertices) triu_indices = np.triu_indices_from(matrix) edge_weights = matrix[triu_indices] edge_indices = np.argsort(edge_weights, kind='stable') # 1st dimension: 'source' vertex index of edge # 2nd dimension: 'target' vertex index of edge persistence_pairs = [] for edge_index, edge_weight in \ zip(edge_indices, edge_weights[edge_indices]): u = triu_indices[0][edge_index] v = triu_indices[1][edge_index] younger_component = uf.find(u) older_component = uf.find(v) # Not an edge of the MST, so skip it if younger_component == older_component: continue elif younger_component > older_component: uf.merge(v, u) else: uf.merge(u, v) if u < v: persistence_pairs.append((u, v)) else: persistence_pairs.append((v, u)) # Return empty cycles component return np.array(persistence_pairs), np.array([]) class AlephPersistenHomologyCalculation(): def __init__(self, compute_cycles, sort_selected): """Calculate persistent homology using aleph. Args: compute_cycles: Whether to compute cycles sort_selected: Whether to sort the selected pairs using the distance matrix (such that they are in the order of the filteration) """ self.compute_cycles = compute_cycles self.sort_selected = sort_selected def __call__(self, distance_matrix): """Do PH calculation. Args: distance_matrix: numpy array of distances Returns: tuple(edge_featues, cycle_features) """ import aleph if self.compute_cycles: pairs_0, pairs_1 = aleph.vietoris_rips_from_matrix_2d( distance_matrix) pairs_0 = np.array(pairs_0) pairs_1 = np.array(pairs_1) else: pairs_0 = aleph.vietoris_rips_from_matrix_1d( distance_matrix) pairs_0 = np.array(pairs_0) # Return empty cycles component pairs_1 = np.array([]) if self.sort_selected: selected_distances = \ distance_matrix[(pairs_0[:, 0], pairs_0[:, 1])] indices_0 = np.argsort(selected_distances) pairs_0 = pairs_0[indices_0] if self.compute_cycles: cycle_creation_times = \ distance_matrix[(pairs_1[:, 0], pairs_1[:, 1])] cycle_destruction_times = \ distance_matrix[(pairs_1[:, 2], pairs_1[:, 3])] cycle_persistences = \ cycle_destruction_times - cycle_creation_times # First sort by destruction time and then by persistence of the # create cycles in order to recover original filtration order. indices_1 = np.lexsort( (cycle_destruction_times, cycle_persistences)) pairs_1 = pairs_1[indices_1] return pairs_0, pairs_1
/rtd_ae-0.1.2-py3-none-any.whl/rtd_ae/topology.py
0.929408
0.773024
topology.py
pypi
import PIL import torch import numpy as np import gudhi as gd import gudhi.hera as hera import matplotlib.pyplot as plt from torch import nn from torch.utils.data import Dataset from sklearn.metrics.pairwise import pairwise_distances from sklearn.neighbors import kneighbors_graph from scipy.sparse.csgraph import connected_components, shortest_path from sklearn.exceptions import NotFittedError from scipy.spatial import distance_matrix from itertools import chain def get_linear_model(input_dim, latent_dim=2, n_hidden_layers=2, hidden_dim=32, m_type='encoder', **kwargs): layers = list( chain.from_iterable( [ (nn.Linear(hidden_dim, hidden_dim), nn.ReLU()) for _ in range(n_hidden_layers) ] ) ) if m_type == 'encoder': layers = [nn.Linear(input_dim, hidden_dim), nn.ReLU()] + layers + [nn.Linear(hidden_dim, latent_dim)] elif m_type == 'decoder': layers = [nn.Linear(latent_dim, hidden_dim), nn.ReLU()] + layers + [nn.Linear(hidden_dim, input_dim)] return nn.Sequential(*layers) def get_cnn_model(input_dim=(64, 64), latent_dim=2, n_hidden_layers=2, hidden_dim=32, m_type='encoder', **kwargs): modules = [] width, heigth = input_dim if m_type == 'encoder': in_channels = 1 for i in range(n_hidden_layers): modules.append( nn.Sequential( nn.Conv2d(in_channels, out_channels=hidden_dim, kernel_size=3, stride=2, padding=1), nn.BatchNorm2d(hidden_dim), nn.LeakyReLU()) ) in_channels = hidden_dim modules.append(nn.Flatten(start_dim=1, end_dim=- 1)) modules.append(nn.Linear(int(hidden_dim * width * heigth / (4 ** n_hidden_layers)), latent_dim)) elif m_type == 'decoder': shape = int(hidden_dim * width * heigth / (4 ** n_hidden_layers)) modules.append(nn.Linear(latent_dim, shape)) modules.append(Reshape(hidden_dim, int(width / (2 ** n_hidden_layers)), int(heigth / (2 ** n_hidden_layers)))) for i in range(n_hidden_layers - 1): modules.append( nn.Sequential( nn.ConvTranspose2d(hidden_dim, hidden_dim, kernel_size=3, stride=2, padding=1, output_padding=1), nn.BatchNorm2d(hidden_dim), nn.LeakyReLU()) ) modules.append( nn.Sequential( nn.ConvTranspose2d(hidden_dim, 1, kernel_size=3, stride=2, padding=1, output_padding=1), nn.BatchNorm2d(1), nn.LeakyReLU()) ) return nn.Sequential(*modules) class Reshape(nn.Module): def __init__(self, *args): super().__init__() self.shape = args def forward(self, x): batch_size = x.shape[0] return x.view((batch_size, *self.shape)) def collate_with_matrix(samples): indicies, data, labels = zip(*samples) data, labels = torch.tensor(np.asarray(data)), torch.tensor(np.asarray(labels)) if len(data.shape) > 2: dist_data = torch.flatten(data, start_dim=1) else: dist_data = data x_dist = torch.cdist(dist_data, dist_data, p=2) / np.sqrt(dist_data.shape[1]) # x_dist = (x_dist + x_dist.T) / 2.0 # make symmetrical (cdist is prone to computational errors) return data, x_dist, labels def collate_with_matrix_geodesic(samples): indicies, data, labels, dist_data = zip(*samples) data, labels = torch.tensor(np.asarray(data)), torch.tensor(np.asarray(labels)) x_dist = torch.tensor(np.asarray(dist_data)[:, indicies]) return data, x_dist, labels def get_geodesic_distance(data, n_neighbors=3, **kwargs): kng = kneighbors_graph(data, n_neighbors=n_neighbors, mode='distance', **kwargs) n_connected_components, labels = connected_components(kng) if n_connected_components > 1: kng = _fix_connected_components( X=data, graph=kng, n_connected_components=n_connected_components, component_labels=labels, mode="distance", **kwargs ) if connected_components(kng)[0] != 1: raise ValueError("More than 1 connected component in the end!") # return shortest_path(kng, directed=False) print(f"N connected: {n_connected_components}") return shortest_path(kng, directed=False) class FromNumpyDataset(Dataset): def __init__(self, data, labels=None, geodesic=False, flatten=True, scaler=None, **kwargs): if labels is not None: assert len(labels) == len(data), "The length of labels and data are not equal" self.labels = labels if flatten: self.data = torch.tensor(data).flatten(start_dim=1).numpy() else: self.data = data if scaler is not None: try: self.data = scaler.transform(self.data) except NotFittedError: self.data = scaler.fit_transform(self.data) self.scaler = scaler if geodesic: self.data_dist = get_geodesic_distance(self.data, **kwargs) def __len__(self): return len(self.data) def __getitem__(self, idx): if hasattr(self, 'labels'): label = self.labels[idx] else: label = 0 if hasattr(self, 'data_dist'): return idx, self.data[idx], label, self.data_dist[idx] else: return idx, self.data[idx], label def get_latent_representations(model, data_loader): labels = [] data = [] model.eval() model.to('cpu') with torch.no_grad(): for x, _, y in data_loader: # if x.device != model.device: # x.to(model.device) labels.append(y.numpy()) data.append(model(x).cpu().numpy()) return np.concatenate(data, axis=0), np.concatenate(labels, axis=0) def vizualize_data(data, labels=None, alpha=1.0, s=1.0, title="", ax=None): assert labels.shape[0] == data.shape[0], "Length of labels and data are not equal" if ax is None: _, ax = plt.subplots(figsize=(12, 8)) if data.shape[1] == 2: x, y = zip(*data) ax.scatter(x, y, alpha=alpha, c=labels, s=s) else: x, y, z = zip(*data) ax.scatter(x, y, z, alpha=alpha, c=labels, s=s) ax.set_title(title, fontsize=20) return ax def plot_latent_tensorboard(latent, labels): if latent.shape[1] < 3: fig, ax = plt.subplots(figsize=(8, 8)) ax.scatter(latent[:, 0], latent[:, 1], c=labels, s=20.0, alpha=0.7, cmap='viridis') elif latent.shape[1] == 3: fig = plt.figure(figsize=(10, 10)) ax = fig.add_subplot(1, 1, 1, projection='3d') ax.scatter(latent[:, 0], latent[:, 1], latent[:, 2], c=labels, s=1.0, alpha=0.7, cmap='viridis') else: return None fig.canvas.draw() image = np.array(PIL.Image.frombytes('RGB', fig.canvas.get_width_height(), fig.canvas.tostring_rgb())) plt.close(fig) return image # return np.swapaxes(np.array(fig.canvas.renderer.buffer_rgba()), -1, 1) def plot_latent(train_latent, train_labels, model_name, dataset_name): if train_latent.shape[1] > 2: fig = plt.figure(figsize=(12, 8)) axes = fig.add_subplot(1, 1, 1, projection='3d') else: fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(12, 8)) axes = vizualize_data(train_latent, train_labels, title=f"Model: {model_name}, dataset:{dataset_name}", ax=axes) return fig, axes def calculate_barcodes(distances, max_dim=1): skeleton = gd.RipsComplex(distance_matrix=distances) simplex_tree = skeleton.create_simplex_tree(max_dimension=max_dim + 1) barcodes = simplex_tree.persistence() pbarcodes = {} for i in range(max_dim + 1): pbarcodes[i] = [[b[1][0], b[1][1]] for b in barcodes if b[0] == i] return pbarcodes def cast_to_normal_array(barcodes): return np.array([[b, d] for b, d in barcodes]) def calculate_wasserstein_distance(x, z, n_runs=5, batch_size=2048, max_dim=1): if batch_size > len(x): n_runs = 1 results = {d: [] for d in range(max_dim + 1)} x = x.reshape(len(x), -1) z = z.reshape(len(z), -1) for i in range(n_runs): ids = np.random.choice(np.arange(0, len(x)), size=min(batch_size, len(x)), replace=False) data = x[ids] distances = distance_matrix(data, data) distances = distances / np.percentile(distances.flatten(), 90) barcodes = {'original': calculate_barcodes(distances, max_dim=max_dim)} data = z[ids] distances = distance_matrix(data, data) distances = distances / np.percentile(distances.flatten(), 90) barcodes['model'] = calculate_barcodes(distances, max_dim=max_dim) for dim in range(max_dim + 1): original = cast_to_normal_array(barcodes['original'][dim]) model = cast_to_normal_array(barcodes['model'][dim]) results[dim].append(hera.wasserstein_distance(original, model, internal_p=1)) return results def _fix_connected_components( X, graph, n_connected_components, component_labels, mode="distance", metric="euclidean", **kwargs, ): """Add connections to sparse graph to connect unconnected components. For each pair of unconnected components, compute all pairwise distances from one component to the other, and add a connection on the closest pair of samples. This is a hacky way to get a graph with a single connected component, which is necessary for example to compute a shortest path between all pairs of samples in the graph. Parameters ---------- X : array of shape (n_samples, n_features) or (n_samples, n_samples) Features to compute the pairwise distances. If `metric = "precomputed"`, X is the matrix of pairwise distances. graph : sparse matrix of shape (n_samples, n_samples) Graph of connection between samples. n_connected_components : int Number of connected components, as computed by `scipy.sparse.csgraph.connected_components`. component_labels : array of shape (n_samples) Labels of connected components, as computed by `scipy.sparse.csgraph.connected_components`. mode : {'connectivity', 'distance'}, default='distance' Type of graph matrix: 'connectivity' corresponds to the connectivity matrix with ones and zeros, and 'distance' corresponds to the distances between neighbors according to the given metric. metric : str Metric used in `sklearn.metrics.pairwise.pairwise_distances`. kwargs : kwargs Keyword arguments passed to `sklearn.metrics.pairwise.pairwise_distances`. Returns ------- graph : sparse matrix of shape (n_samples, n_samples) Graph of connection between samples, with a single connected component. """ if metric == "precomputed" and sparse.issparse(X): raise RuntimeError( "_fix_connected_components with metric='precomputed' requires the " "full distance matrix in X, and does not work with a sparse " "neighbors graph." ) for i in range(n_connected_components): idx_i = np.flatnonzero(component_labels == i) Xi = X[idx_i] for j in range(i): idx_j = np.flatnonzero(component_labels == j) Xj = X[idx_j] if metric == "precomputed": D = X[np.ix_(idx_i, idx_j)] else: D = pairwise_distances(Xi, Xj, metric=metric, **kwargs) ii, jj = np.unravel_index(D.argmin(axis=None), D.shape) if mode == "connectivity": graph[idx_i[ii], idx_j[jj]] = 1 graph[idx_j[jj], idx_i[ii]] = 1 elif mode == "distance": graph[idx_i[ii], idx_j[jj]] = D[ii, jj] graph[idx_j[jj], idx_i[ii]] = D[ii, jj] else: raise ValueError( "Unknown mode=%r, should be one of ['connectivity', 'distance']." % mode ) return graph class FurthestScaler: def __init__(self, p=2): # approximate self.is_fitted = False self.p = p def fit(self, data): self.furthest = self._furthest_distance(data) self.is_fitted = True def transform(self, data): if not self.is_fitted: raise NotFittedError return data / self.furthest def fit_transform(self, data): self.fit(data) return self.transform(data) def _furthest_distance(self, points, sample_frac=0.0): # exact solution, very computationaly expesive # hull = ConvexHull(points) # hullpoints = points[hull.vertices,:] # hdist = distance_matrix(hullpoints, hullpoints, p=self.p) # approximation: upper bound # pick random point and compute distances to all of the points # diameter min: max(distances), diameter max (triangle inequality): 2 max(distances) if len(points.shape) > 2: points = points.reshape(points.shape[0], -1) idx = np.random.choice(np.arange(len(points)), size=1) hdist = distance_matrix(points[idx], points, p=self.p) return 0.1 * hdist.max() # upper bound
/rtd_ae-0.1.2-py3-none-any.whl/rtd_ae/utils.py
0.886641
0.529872
utils.py
pypi
import numpy as np import copy import torch from torch import nn from torch.nn import functional as F import pytorch_lightning as pl from .utils import plot_latent_tensorboard, calculate_wasserstein_distance class AutoEncoder(pl.LightningModule): def __init__(self, encoder, decoder, RTDLoss=None, MSELoss=None, rtd_l=0.1, rtd_every_n_batches=1, rtd_start_epoch=0, lr=5e-4, **kwargs): """ RTDLoss - function of topological (RTD) loss between the latent representation and the input l - parameter of regularization lambda (L = L_reconstruct + \lambda L_RTD) """ super().__init__() self.encoder = copy.deepcopy(encoder) self.decoder = copy.deepcopy(decoder) self.norm_constant = nn.Parameter(data=torch.ones(1), requires_grad=True) self.RTDLoss = RTDLoss self.MSELoss = MSELoss self.rtd_l = rtd_l self.rtd_every_n_batches = rtd_every_n_batches self.rtd_start_epoch = rtd_start_epoch self.lr = lr def forward(self, x): embedding = self.norm_constant * self.encoder(x) return embedding def z_dist(self, z): z_dist = torch.cdist(z, z) # if self.norm_constant is None: # self.norm_constant = 1.0 / np.quantile(z_dist.flatten().detach().cpu().numpy(), 0.9) # norm_constant = torch.quantile(z_dist.view(-1), 0.9) z_dist = self.norm_constant * (z_dist / np.sqrt(z_dist.shape[1])) return z_dist def configure_optimizers(self): optimizer = torch.optim.AdamW(self.parameters(), lr=self.lr) return optimizer def training_step(self, train_batch, batch_idx): x, x_dist, y = train_batch z = self.encoder(x) x_hat = self.decoder(z) loss = 0.0 if self.MSELoss is not None: loss += self.MSELoss(x_hat, x) self.log('train/mse_loss', loss) if self.RTDLoss is not None: if (self.rtd_start_epoch <= self.current_epoch) and batch_idx % self.rtd_every_n_batches == 0: z_dist = self.z_dist(z) loss_xz, loss_zx, rtd_loss = self.RTDLoss(x_dist, z_dist) self.log('train/rtd_loss', rtd_loss) self.log('train/rtd_loss_xz', loss_xz) self.log('train/rtd_loss_zx', loss_zx) loss += self.rtd_l * rtd_loss self.log('train/loss', loss) return loss def validation_step(self, val_batch, batch_idx): x, x_dist, y = val_batch z = self.encoder(x) x_hat = self.decoder(z) loss = 0.0 if self.MSELoss is not None: loss += self.MSELoss(x_hat, x) self.log('val/mse_loss', loss) if self.RTDLoss is not None and self.rtd_start_epoch <= self.current_epoch + 1: z_dist = self.z_dist(z) loss_xz, loss_zx, rtd_loss = self.RTDLoss(x_dist, z_dist) self.log('val/rtd_loss', rtd_loss) self.log('val/rtd_loss_xz', loss_xz) self.log('val/rtd_loss_zx', loss_zx) loss += self.rtd_l * rtd_loss self.log('val/loss', loss) class DiagnosticAutoEncoder(pl.LightningModule): def __init__(self, encoder, decoder, RTDLoss=None, MSELoss=None, rtd_l=0.1, rtd_every_n_batches=1, rtd_start_epoch=0, lr=5e-4, **kwargs): """ RTDLoss - function of topological (RTD) loss between the latent representation and the input l - parameter of regularization lambda (L = L_reconstruct + \lambda L_RTD) """ super().__init__() self.encoder = copy.deepcopy(encoder) self.decoder = copy.deepcopy(decoder) self.norm_constant = nn.Parameter(data=torch.ones(1), requires_grad=True) self.RTDLoss = RTDLoss self.MSELoss = MSELoss self.rtd_l = rtd_l self.rtd_every_n_batches = rtd_every_n_batches self.rtd_start_epoch = rtd_start_epoch self.lr = lr def forward(self, x): embedding = self.norm_constant * self.encoder(x) return embedding def z_dist(self, z): z_dist = torch.cdist(z, z) # if self.norm_constant is None: # self.norm_constant = 1.0 / np.quantile(z_dist.flatten().detach().cpu().numpy(), 0.9) # norm_constant = torch.quantile(z_dist.view(-1), 0.9) z_dist = self.norm_constant * (z_dist / np.sqrt(z_dist.shape[1])) return z_dist def configure_optimizers(self): optimizer = torch.optim.AdamW(self.parameters(), lr=self.lr) return optimizer def training_step(self, train_batch, batch_idx): x, x_dist, y = train_batch z = self.encoder(x) x_hat = self.decoder(z) loss = 0.0 if self.MSELoss is not None: loss += self.MSELoss(x_hat, x) self.log('train/mse_loss', loss) if self.RTDLoss is not None: if (self.rtd_start_epoch <= self.current_epoch) and batch_idx % self.rtd_every_n_batches == 0: z_dist = self.z_dist(z) loss_xz, loss_zx, rtd_loss = self.RTDLoss(x_dist, z_dist) self.log('train/rtd_loss', rtd_loss) self.log('train/rtd_loss_xz', loss_xz) self.log('train/rtd_loss_zx', loss_zx) loss += self.rtd_l * rtd_loss self.log('train/loss', loss) return loss def validation_step(self, val_batch, batch_idx): x, x_dist, y = val_batch z = self.encoder(x) x_hat = self.decoder(z) loss = 0.0 if self.MSELoss is not None: loss += self.MSELoss(x_hat, x) self.log('val/mse_loss', loss) if self.RTDLoss is not None and self.rtd_start_epoch <= self.current_epoch + 1: z_dist = self.z_dist(z) loss_xz, loss_zx, rtd_loss = self.RTDLoss(x_dist, z_dist) self.log('val/rtd_loss', rtd_loss) self.log('val/rtd_loss_xz', loss_xz) self.log('val/rtd_loss_zx', loss_zx) loss += self.rtd_l * rtd_loss self.log('val/loss', loss) return x, z, y def validation_epoch_end(self, validation_step_outputs): logger = self.logger.experiment if self.current_epoch % 5 == 0: xs, zs, ys = [], [], [] for x, z, y in validation_step_outputs: xs.append(x.cpu().detach().numpy()) zs.append(z.cpu().detach().numpy()) ys.append(y.cpu().detach().numpy()) x = np.concatenate(xs, axis=0) z = np.concatenate(zs, axis=0) y = np.concatenate(ys, axis=0) image = plot_latent_tensorboard(z, y) if image is not None: logger.add_image('val/image', image, self.current_epoch, dataformats='HWC') if self.current_epoch % 20 == 0: wass = calculate_wasserstein_distance(x, z, batch_size=2048, max_dim=0) logger.add_scalar('val/wasserstein_h0', np.mean(wass.get(0, 0.0)), self.current_epoch) logger.add_scalar('val/wasserstein_h1', np.mean(wass.get(1, 0.0)), self.current_epoch)
/rtd_ae-0.1.2-py3-none-any.whl/rtd_ae/autoencoder.py
0.924976
0.321101
autoencoder.py
pypi
import numpy as np from tadasets.dimension import embed ''' source: https://github.com/scikit-tda/tadasets/blob/master/tadasets/shapes.py We modify the module here locally, s.t. all shapes return a conitnuous label for nicer visualization ''' __all__ = ["torus", "dsphere", "sphere", "swiss_roll", "infty_sign"] # TODO: Make a base class that controls `ambient` and `noise`. class Shape: def __init__(self): pass def dsphere(n=100, d=2, r=1, noise=None, ambient=None): """ Sample `n` data points on a d-sphere. Parameters ----------- n : int Number of data points in shape. r : float Radius of sphere. ambient : int, default=None Embed the sphere into a space with ambient dimension equal to `ambient`. The sphere is randomly rotated in this high dimensional space. """ original = np.random.randn(n, d + 1) # Normalize points to the sphere original = r * original / np.sqrt(np.sum(original ** 2, 1, keepdims=True)) data = original.copy() if noise: data += noise * np.random.randn(*data.shape) if ambient: assert ambient > d, "Must embed in higher dimensions" data = embed(data, ambient) return original, data def sphere(n=100, r=1, noise=None, ambient=None): """ Sample `n` data points on a sphere. Parameters ----------- n : int Number of data points in shape. r : float Radius of sphere. ambient : int, default=None Embed the sphere into a space with ambient dimension equal to `ambient`. The sphere is randomly rotated in this high dimensional space. """ theta = np.random.random((n,)) * 2.0 * np.pi phi = np.random.random((n,)) * np.pi rad = np.ones((n,)) * r data = np.zeros((n, 3)) data[:, 0] = rad * np.cos(theta) * np.cos(phi) data[:, 1] = rad * np.cos(theta) * np.sin(phi) data[:, 2] = rad * np.sin(theta) if noise: data += noise * np.random.randn(*data.shape) if ambient: data = embed(data, ambient) return data, theta def torus(n=100, c=2, a=1, noise=None, ambient=None): """ Sample `n` data points on a torus. Parameters ----------- n : int Number of data points in shape. c : float Distance from center to center of tube. a : float Radius of tube. ambient : int, default=None Embed the torus into a space with ambient dimension equal to `ambient`. The torus is randomly rotated in this high dimensional space. """ assert a <= c, "That's not a torus" theta = np.random.random((n,)) * 2.0 * np.pi phi = np.random.random((n,)) * 2.0 * np.pi data = np.zeros((n, 3)) data[:, 0] = (c + a * np.cos(theta)) * np.cos(phi) data[:, 1] = (c + a * np.cos(theta)) * np.sin(phi) data[:, 2] = a * np.sin(theta) if noise: data += noise * np.random.randn(*data.shape) if ambient: embedded = embed(data, ambient) return embedded, data def swiss_roll(n=100, r=10, noise=None, ambient=None): """Swiss roll implementation Parameters ---------- n : int Number of data points in shape. r : float Length of roll ambient : int, default=None Embed the swiss roll into a space with ambient dimension equal to `ambient`. The swiss roll is randomly rotated in this high dimensional space. References ---------- Equations mimic [Swiss Roll and SNE by jlmelville](https://jlmelville.github.io/smallvis/swisssne.html) """ phi = (np.random.random((n,)) * 3 + 1.5) * np.pi psi = np.random.random((n,)) * r data = np.zeros((n, 3)) data[:, 0] = phi * np.cos(phi) data[:, 1] = phi * np.sin(phi) data[:, 2] = psi if noise: data += noise * np.random.randn(*data.shape) if ambient: embedded = embed(data, ambient) return embedded, data def infty_sign(n=100, noise=None, ambient=None): """Construct a figure 8 or infinity sign with :code:`n` points and noise level with :code:`noise` standard deviation. Parameters ============ n: int number of points in returned data set. noise: float standard deviation of normally distributed noise added to data. """ t = np.linspace(0, 2 * np.pi, n + 1)[0:n] data = np.zeros((n, 2)) data[:, 0] = np.cos(t) data[:, 1] = np.sin(2 * t) if noise: data += noise * np.random.randn(n, 2) if ambient: embedded = embed(data, ambient) return embedded, data
/rtd_ae-0.1.2-py3-none-any.whl/rtd_ae/custom_shapes.py
0.877227
0.839142
custom_shapes.py
pypi
import bottle as bt import cgi import logging import re import os.path import secrets from metrics import Time from bin import root, config, models from bin.highlight import highlight, languages, langtoext, exttolang logger = logging.getLogger(__name__) BOTUARE = re.compile(r'|'.join([ re.escape('Mozilla/5.0 (compatible; Discordbot/2.0; +https://discordapp.com)'), re.escape('facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)'), ])) @bt.route('/health', method='GET') def healthcheck(): """ Get a dummy string 'alive' to ensure the server is responding """ return "alive" @bt.route('/', method='GET') def get_new_form(): """ Get the browser-friendly html form to easily post a new snippet :param lang: (query) optional lang that is selected in the lang selection instead of the configured default :param parentid: (query) optional 'parent' snippet to duplicate the code from :raises HTTPError: code 404 when the 'parent' snippet is not found """ parentid = bt.request.query.parentid lang = bt.request.query.lang or config.DEFAULT_LANGUAGE try: code = models.Snippet.get_by_id(parentid).code if parentid else "" except KeyError: raise bt.HTTPError(404, "Parent snippet not found") return bt.template( 'newform.html', languages=languages, default_language=lang, code=code, parentid=parentid, ) @bt.route('/assets/<filepath:path>') def assets(filepath): """ Get a static css/js/media file that is stored in the filesystem. This route exists for developers of bin who wish to run the service with minimum system requirements. In production, we suggest you use a web server to deliver the static content. """ return bt.static_file(filepath, root=root.joinpath('assets')) @bt.route('/new', method='POST') def post_new(): """ Post a new snippet and redirect the user to the generated unique URL for the snippet. :param code: (form) required snippet text, can alternativaly be sent as a Multi-Part utf-8 file :param lang: (form) optional language :param maxusage: (form) optional maximum download of the snippet before it is deleted :param lifetime: (form) optional time (defined in seconds) the snippet is keep in the database before it is deleted :param parentid: (form) optional snippet id this new snippet is a duplicate of :param token: (form) optional the "admin" token allows you to delete your snippet :raises HTTPError: code 411 when the ``Content-Length`` http header is missing :raises HTTPError: code 413 when the http request is too large (mostly because the snippet is too long) :raises HTTPError: code 400 with a sensible status when the form processing fails """ content_length = bt.request.get_header('Content-Length') if not content_length: raise bt.HTTPError(411, "Content-Length required") if int(content_length) > config.MAXSIZE: raise bt.HTTPError(413, f"Payload too large, we accept maximum {config.MAXSIZE}") files = bt.request.files forms = bt.request.forms token = None code = None lang = None ext = None maxusage = config.DEFAULT_MAXUSAGE lifetime = config.DEFAULT_LIFETIME parentid = '' try: # Form extraction if files: part = next(files.values()) charset = cgi.parse_header(part.content_type)[1].get('charset', 'utf-8') code = part.file.read(config.MAXSIZE).decode(charset) ext = os.path.splitext(part.filename)[1][1:] or langtoext[config.DEFAULT_LANGUAGE] if forms: # WSGI forces latin-1 decoding, this is wrong, we recode it in utf-8 code = forms.get('code', '').encode('latin-1').decode() or code lang = forms.get('lang') or config.DEFAULT_LANGUAGE maxusage = int(forms.get('maxusage') or maxusage) lifetime = Time(forms.get('lifetime') or lifetime) parentid = forms.get('parentid', '') token = forms.get('token') # Form validation if lang: ext = langtoext.get(lang) if ext is None: logger.warning('Unknown lang %r, using %r.', lang, config.DEFAULT_LANGUAGE) lang = config.DEFAULT_LANGUAGE ext = langtoext[config.DEFAULT_LANGUAGE] if ext: lang = exttolang.get(ext) if lang is None: logger.warning('Unknown file extension %r, using %r.', ext, langtoext[config.DEFAULT_LANGUAGE]) lang = config.DEFAULT_LANGUAGE ext = langtoext[config.DEFAULT_LANGUAGE] if not code: raise ValueError("Code is missing") if maxusage < 0: raise ValueError("Maximum usage must be positive") if lifetime < 0: raise ValueError("Lifetime must be positive") if parentid: try: models.Snippet.get_by_id(parentid) except KeyError: raise ValueError("Parent does not exist") if token and len(token) > 22: raise ValueError("Token must not exceed 22 chars as of 16 random bytes base64 encoded") except ValueError as exc: raise bt.HTTPError(400, str(exc)) snippet = models.Snippet.create(code, maxusage, lifetime, parentid, token) logger.info("New %s snippet of %s chars: %s", lang, len(code), snippet.id) bt.redirect(f'/{snippet.id}.{ext}') @bt.route('/<snippetid>', method='GET') @bt.route('/<snippetid>.<ext>', method='GET') def get_html(snippetid, ext=None): """ Get a snippet in a beautiful html page :param snippetid: (path) required snippet id :param ext: (path) optional language file extension, used to determine the highlight backend :param token: (query) optional the "admin" token :raises HTTPError: code 404 when the snippet is not found """ if BOTUARE.match(bt.request.headers.get('User-Agent', '')): return bt.template('blank.html') try: snippet = models.Snippet.get_by_id(snippetid) except KeyError: raise bt.HTTPError(404, "Snippet not found") lang = langtoext.get(ext, config.DEFAULT_LANGUAGE) ext = langtoext[lang] # always use the prefered extension for that lang codehl = highlight(snippet.code, lang) return bt.template( 'highlight.html', languages=languages, codehl=codehl, lang=lang, ext=ext, snippetid=snippetid, parentid=snippet.parentid, token=bt.request.query.token, ) @bt.route('/<snippetid>', method='DELETE') @bt.route('/<snippetid>.<ext>', method='DELETE') def delete_snippet(snippetid, ext=None): """ Delete a snippet :param Authorization: (header) required "Authorization: Token <ADMIN_TOKEN>" the "admin" token :raises HTTPError: code 400 when the Authorization (token) is missing :raises HTTPError: code 404 when the snippet is not found :raises HTTPError: code 401 when the token is incorrect """ auth = bt.request.get_header('Authorization', '').split(None, 1) if len(auth) != 2 or auth[0] != 'Token': raise bt.HTTPError(400, "Token is missing") try: snippet = models.Snippet.get_by_id(snippetid) except KeyError: raise bt.HTTPError(404, "Snippet not found") if not (snippet.token and secrets.compare_digest(snippet.token, auth[1])): raise bt.HTTPError(401, "Unauthorized") snippet.delete() logger.info("Snippet %s deleted by user", snippetid) @bt.route('/raw/<snippetid>', method='GET') @bt.route('/raw/<snippetid>.<ext>', method='GET') def get_raw(snippetid, ext=None): """ Get a snippet in plain text without code hightlight :param snippetid: (path) required snippet id :param ext: (path) ignored parameter """ if BOTUARE.match(bt.request.headers.get('User-Agent', '')): return bt.template('blank.html') try: snippet = models.Snippet.get_by_id(snippetid) except KeyError: raise bt.HTTPError(404, "Snippet not found") bt.response.headers['Content-Type'] = 'text/plain' return snippet.code @bt.route('/report', method='POST') def report(): """ Report a problematic snippet to the system administrator. :param snippetid: (form) the reported snippet :param name: (form) the name of the user reporting the problem :raises HTTPError: code 400 when any of the snippetid or the name is missing :raises HTTPError: code 404 when the reported snippet is not found """ name = bt.request.forms.get("name", "").encode('latin-1').decode().strip() snippetid = bt.request.forms.get("snippetid") if not name: raise bt.HTTPError(400, "Missing name") if not snippetid: raise bt.HTTPError(400, "Missing snippetid") try: models.Snippet.get_by_id(snippetid) except KeyError: raise bt.HTTPError(404, "Snippet not found") logger.warning("The snippet %s got reported by %s", snippetid, name) return bt.HTTPResponse("The snippet have been reported.")
/rtd_bin_server-1.4.0-py3-none-any.whl/bin/controller.py
0.63477
0.158402
controller.py
pypi
from redis import Redis from genpw import pronounceable_passwd from bin import config # We always connect to Redis database = Redis( host=config.REDIS_HOST, port=config.REDIS_PORT, db=config.REDIS_DB, password=config.REDIS_PASSWORD, username=config.REDIS_USERNAME, ) class Snippet: """ A snippet is a immuable text that have been saved in the database and that is retrivable via an unique URL. """ def __init__(self, ident, code, views_left, parentid, token): self.id = ident #: snippet unique identifier self.code = code #: snippet text self.views_left = views_left #: how many time this snippet can be retrieved again self.parentid = parentid #: the original snippet this one is a duplicate of or an empty string self.token = token #: the admin token of the snippet @classmethod def new_id(cls): """ Generate a safe unique identifier """ for _ in range(20): ident = pronounceable_passwd(config.IDENTSIZE) if len(ident) != config.IDENTSIZE: continue if ident in {'health', 'assets', 'new', 'raw', 'report'}: continue if database.exists(ident): continue return ident raise RuntimeError("No free or valid identifier has been found after 20 attempts") @classmethod def create(cls, code, maxusage, lifetime, parentid, token=None): """ Save a snippet in the database and return a snippet object :param code: the source code utf-8 encoded :param maxusage: how many times this snippet can be retrieve before self-deletion :param lifetime: how long the snippet is saved before self-deletion :param parentid: the original snippet id this new snippet is a duplicate of, empty string for original snippet :param token: the "admin" token of the snippet, ``None`` if the snippet has no "admin" token """ ident = cls.new_id() database.hset(ident, b'code', code) database.hset(ident, b'views_left', maxusage) database.hset(ident, b'parentid', parentid) if token: database.hset(ident, b'token', token) if lifetime > 0: database.expire(ident, int(lifetime)) return cls(ident, code, maxusage, parentid, token) @classmethod def get_by_id(cls, ident): """ Retrieve a snippet from the database and return a snippet object :param ident: the snippet identifier :raises KeyError: the snippet does not exist or have been removed """ snippet = database.hgetall(ident) if not snippet: raise KeyError('Snippet not found') code = snippet[b'code'].decode('utf-8') views_left = int(snippet[b'views_left'].decode('utf-8')) parentid = snippet[b'parentid'].decode('ascii') token = snippet.get(b'token', b'').decode() or None if views_left == 0: pass elif views_left == 1: database.delete(ident) else: database.hincrby(ident, 'views_left', -1) return cls(ident, code, views_left, parentid, token) def delete(self): """ Delete the snippet from the database """ database.delete(self.id)
/rtd_bin_server-1.4.0-py3-none-any.whl/bin/models.py
0.620507
0.237742
models.py
pypi
import argparse from Bio import SeqIO import re import numpy as np import matplotlib.pyplot as plt from scipy.spatial.distance import squareform, pdist import pandas import dendropy from ete3 import Tree class CustomFormatter(argparse.ArgumentDefaultsHelpFormatter, argparse.RawDescriptionHelpFormatter): pass def getRTD(sequence, kmer): """Compute return times of kmer in given sequence and return mean and standard deviation for RTD vector """ modSeq = re.sub(kmer, "*", sequence) rt = modSeq.split("*") rtvector = list(map(len, rt)) if len(rtvector) > 1: del rtvector[0] del rtvector[-1] else: rtvector = [] msd = getMeanSD(rtvector) return msd def getMeanSD(vector): """Compute mean and standard deviation of RTD vector""" if len(vector) > 0: mean = np.mean(vector) sd = np.std(vector) else: mean = 0 sd = 0 ms = [mean, sd] return ms def getKmers(k, bases): """Generate k-mers of size k""" import itertools kmers = [''.join(p) for p in itertools.product(bases, repeat=k)] return kmers def list_to_npArray(vector1, vector2): """convert the list to numpy array""" if type(vector1) == list: vector1 = np.array(vector1) if type(vector2) == list: vector2 = np.array(vector2) return vector1, vector2 def euclidean(vector1, vector2): """ use matplotlib.mlab to calculate the euclidean distance. """ vector1, vector2 = list_to_npArray(vector1, vector2) dist = plt.mlab.dist(vector1, vector2) return dist def main(args): """Main function to process the inputs and perform computations""" fvector = args.fastaFile + ".RTD_vector.k_" + str(args.kmerSize) + ".tsv" fdist = args.fastaFile + ".RTD_distance_matrix.k_" + str(args.kmerSize) + ".tsv" fnewick = args.fastaFile + ".RTD_newick.k_" + str(args.kmerSize) + ".nwk" fpng = args.fastaFile + ".RTD_newick.k_" + str(args.kmerSize) + ".png" if args.seqType == 'N': bases = ['A', 'C', 'G', 'T'] else: bases = ['A', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'K', 'L', 'M', 'N', 'P', 'Q', 'R', 'S', 'T', 'V', 'W', 'Y'] fasta_sequences = SeqIO.parse(open(args.fastaFile), 'fasta') kmers = getKmers(args.kmerSize, bases) fout = open(fvector, "w") fout.write("OTU\t") for kmer in kmers: kmean = kmer + '_mean' ksd = kmer + '_sd' fout.write(kmean + "\t" + ksd + "\t") fout.write("\n") for seq in fasta_sequences: fout.write(seq.id + "\t") for kmer in kmers: rtd = getRTD(str(seq.seq), kmer) fout.write("{}\t{}\t".format(rtd[0], rtd[1])) fout.write("\n") fout.close() df = pandas.read_csv(fvector, delimiter="\t") del df[list(df.columns)[-1]] dm = pandas.DataFrame(squareform(pdist(df.iloc[:, 1:])), columns=df.OTU.unique(), index=df.OTU.unique()) dm.to_csv(fdist, sep='\t') pdm = dendropy.PhylogeneticDistanceMatrix.from_csv(src=open(fdist), delimiter="\t") nj_tree = pdm.nj_tree() sn = str(nj_tree) + ";" ftree = open(fnewick, "w") ftree.write(sn) ftree.close() t = Tree(sn) t.render(fpng) if __name__ == '__main__': parser = argparse.ArgumentParser(description="Return Time Distribution based Alignment-free Phylogeny Analysis", epilog="Citation Information:\n\nKolekar P., Kale M., Kulkarni-Kale U., " "Alignment-free distance measure based on return time distribution for sequence" " analysis: Applications to clustering, molecular phylogeny and subtyping, " "Molecular Phylogenetics and Evolution, (2012) 65(2): 510-522. " "\n[PMID: 22820020]\n \n", formatter_class=CustomFormatter) parser.add_argument("--fastaFile", default=None, help="File with sequences in FASTA format", required=True) parser.add_argument("--seqType", default='N', help="Type of input sequences, either Nucleotide (N) or Protein (P)", choices=['N', 'P']) parser.add_argument("--kmerSize", default=1, help="Size of k-mer to compute return time distributions (RTDs)", type=int, required=True, choices=[1, 2, 3, 4, 5, 6, 7]) parser.add_argument('--version', action='version', version='RTD v0.0.1') args = parser.parse_args() main(args)
/rtd_phylogeny-0.0.51.tar.gz/rtd_phylogeny-0.0.51/src/rtd_phylogeny.py
0.669421
0.456107
rtd_phylogeny.py
pypi
from typing import Union from ..connectors.connection_interface import ConnectionInterface from .time_series import ( raw, resample, interpolate, interpolation_at_time, time_weighted_average, circular_average, circular_standard_deviation, ) from . import metadata from pandas import DataFrame class QueryBuilder: """ A builder for developing RTDIP queries using any delta table """ parameters: dict connection: ConnectionInterface data_source: str tagname_column: str timestamp_column: str status_column: str value_column: str def connect(self, connection: ConnectionInterface): """ Specifies the connection to be used for the query Args: connection: Connection chosen by the user (Databricks SQL Connect, PYODBC SQL Connect, TURBODBC SQL Connect) """ self.connection = connection return self def source( self, source: str, tagname_column: str = "TagName", timestamp_column: str = "EventTime", status_column: Union[str, None] = "Status", value_column: str = "Value", ): """ Specifies the source of the query Args: source (str): Source of the query can be a Unity Catalog table, Hive metastore table or path tagname_column (optional str): The column name in the source that contains the tagnames or series timestamp_column (optional str): The timestamp column name in the source status_column (optional str): The status column name in the source indicating `Good` or `Bad`. If this is not available, specify `None` value_column (optional str): The value column name in the source which is normally a float or string value for the time series event """ self.data_source = "`.`".join(source.split(".")) self.tagname_column = tagname_column self.timestamp_column = timestamp_column self.status_column = status_column self.value_column = value_column return self def raw( self, tagname_filter: [str], start_date: str, end_date: str, include_bad_data: bool = False, ) -> DataFrame: """ A function to return back raw data Args: tagname_filter (list str): List of tagnames to filter on the source start_date (str): Start date (Either a date in the format YY-MM-DD or a datetime in the format YYY-MM-DDTHH:MM:SS or specify the timezone offset in the format YYYY-MM-DDTHH:MM:SS+zz:zz) end_date (str): End date (Either a date in the format YY-MM-DD or a datetime in the format YYY-MM-DDTHH:MM:SS or specify the timezone offset in the format YYYY-MM-DDTHH:MM:SS+zz:zz) include_bad_data (optional bool): Include "Bad" data points with True or remove "Bad" data points with False Returns: DataFrame: A dataframe of raw timeseries data. """ raw_parameters = { "source": self.data_source, "tag_names": tagname_filter, "start_date": start_date, "end_date": end_date, "include_bad_data": include_bad_data, "tagname_column": self.tagname_column, "timestamp_column": self.timestamp_column, "status_column": self.status_column, "value_column": self.value_column, } return raw.get(self.connection, raw_parameters) def resample( self, tagname_filter: [str], start_date: str, end_date: str, time_interval_rate: str, time_interval_unit: str, agg_method: str, include_bad_data: bool = False, ) -> DataFrame: """ A query to resample the source data Args: tagname_filter (list str): List of tagnames to filter on the source start_date (str): Start date (Either a date in the format YY-MM-DD or a datetime in the format YYY-MM-DDTHH:MM:SS or specify the timezone offset in the format YYYY-MM-DDTHH:MM:SS+zz:zz) end_date (str): End date (Either a date in the format YY-MM-DD or a datetime in the format YYY-MM-DDTHH:MM:SS or specify the timezone offset in the format YYYY-MM-DDTHH:MM:SS+zz:zz) time_interval_rate (str): The time interval rate (numeric input) time_interval_unit (str): The time interval unit (second, minute, day, hour) agg_method (str): Aggregation Method (first, last, avg, min, max) include_bad_data (optional bool): Include "Bad" data points with True or remove "Bad" data points with False Returns: DataFrame: A dataframe of resampled timeseries data. """ resample_parameters = { "source": self.data_source, "tag_names": tagname_filter, "start_date": start_date, "end_date": end_date, "include_bad_data": include_bad_data, "time_interval_rate": time_interval_rate, "time_interval_unit": time_interval_unit, "agg_method": agg_method, "tagname_column": self.tagname_column, "timestamp_column": self.timestamp_column, "status_column": self.status_column, "value_column": self.value_column, } return resample.get(self.connection, resample_parameters) def interpolate( self, tagname_filter: [str], start_date: str, end_date: str, time_interval_rate: str, time_interval_unit: str, agg_method: str, interpolation_method: str, include_bad_data: bool = False, ) -> DataFrame: """ The Interpolate function will forward fill, backward fill or linearly interpolate the resampled data depending on the parameters specified Args: tagname_filter (list str): List of tagnames to filter on the source start_date (str): Start date (Either a date in the format YY-MM-DD or a datetime in the format YYY-MM-DDTHH:MM:SS or specify the timezone offset in the format YYYY-MM-DDTHH:MM:SS+zz:zz) end_date (str): End date (Either a date in the format YY-MM-DD or a datetime in the format YYY-MM-DDTHH:MM:SS or specify the timezone offset in the format YYYY-MM-DDTHH:MM:SS+zz:zz) time_interval_rate (str): The time interval rate (numeric input) time_interval_unit (str): The time interval unit (second, minute, day, hour) agg_method (str): Aggregation Method (first, last, avg, min, max) interpolation_method (str): Interpolation method (forward_fill, backward_fill, linear) include_bad_data (optional bool): Include "Bad" data points with True or remove "Bad" data points with False Returns: DataFrame: A dataframe of interpolated timeseries data. """ interpolation_parameters = { "source": self.data_source, "tag_names": tagname_filter, "start_date": start_date, "end_date": end_date, "include_bad_data": include_bad_data, "time_interval_rate": time_interval_rate, "time_interval_unit": time_interval_unit, "agg_method": agg_method, "interpolation_method": interpolation_method, "tagname_column": self.tagname_column, "timestamp_column": self.timestamp_column, "status_column": self.status_column, "value_column": self.value_column, } return interpolate.get(self.connection, interpolation_parameters) def interpolation_at_time( self, tagname_filter: [str], timestamp_filter: [str], include_bad_data: bool = False, window_length: int = 1, ) -> DataFrame: """ A interpolation at time function which works out the linear interpolation at a specific time based on the points before and after Args: tagname_filter (list str): List of tagnames to filter on the source timestamp_filter (list): List of timestamp or timestamps in the format YYY-MM-DDTHH:MM:SS or YYY-MM-DDTHH:MM:SS+zz:zz where %z is the timezone. (Example +00:00 is the UTC timezone) include_bad_data (optional bool): Include "Bad" data points with True or remove "Bad" data points with False window_length (optional int): Add longer window time in days for the start or end of specified date to cater for edge cases. Returns: DataFrame: A dataframe of interpolation at time timeseries data """ interpolation_at_time_parameters = { "source": self.data_source, "tag_names": tagname_filter, "timestamps": timestamp_filter, "include_bad_data": include_bad_data, "window_length": window_length, "tagname_column": self.tagname_column, "timestamp_column": self.timestamp_column, "status_column": self.status_column, "value_column": self.value_column, } return interpolation_at_time.get( self.connection, interpolation_at_time_parameters ) def time_weighted_average( self, tagname_filter: [str], start_date: str, end_date: str, time_interval_rate: str, time_interval_unit: str, step: str, source_metadata: str = None, include_bad_data: bool = False, window_length: int = 1, ) -> DataFrame: """ A function that receives a dataframe of raw tag data and performs a time weighted averages Args: tagname_filter (list str): List of tagnames to filter on the source start_date (str): Start date (Either a date in the format YY-MM-DD or a datetime in the format YYY-MM-DDTHH:MM:SS or specify the timezone offset in the format YYYY-MM-DDTHH:MM:SS+zz:zz) end_date (str): End date (Either a date in the format YY-MM-DD or a datetime in the format YYY-MM-DDTHH:MM:SS or specify the timezone offset in the format YYYY-MM-DDTHH:MM:SS+zz:zz) time_interval_rate (str): The time interval rate (numeric input) time_interval_unit (str): The time interval unit (second, minute, day, hour) step (str): data points with step "enabled" or "disabled". The options for step are "true", "false" or "metadata". "metadata" will retrieve the step value from the metadata table source_metadata (optional str): if step is set to "metadata", then this parameter must be populated with the source containing the tagname metadata with a column called "Step" include_bad_data (optional bool): Include "Bad" data points with True or remove "Bad" data points with False window_length (optional int): Add longer window time in days for the start or end of specified date to cater for edge cases. Returns: DataFrame: A dataframe of time weighted averages timeseries data """ time_weighted_average_parameters = { "source": self.data_source, "tag_names": tagname_filter, "start_date": start_date, "end_date": end_date, "include_bad_data": include_bad_data, "time_interval_rate": time_interval_rate, "time_interval_unit": time_interval_unit, "step": step, "source_metadata": "`.`".join(source_metadata.split(".")), "window_length": window_length, "tagname_column": self.tagname_column, "timestamp_column": self.timestamp_column, "status_column": self.status_column, "value_column": self.value_column, } return time_weighted_average.get( self.connection, time_weighted_average_parameters ) def metadata(self, tagname_filter: [str]) -> DataFrame: """ A query to retrieve metadata Args: tagname_filter (list str): List of tagnames to filter on the source Returns: DataFrame: A dataframe of metadata """ metadata_parameters = { "source": self.data_source, "tag_names": tagname_filter, "tagname_column": self.tagname_column, } return metadata.get(self.connection, metadata_parameters) def circular_average( self, tagname_filter: [str], start_date: str, end_date: str, time_interval_rate: str, time_interval_unit: str, lower_bound: int, upper_bound: int, include_bad_data: bool = False, ) -> DataFrame: """ A function that receives a dataframe of raw tag data and computes the circular mean for samples in a range Args: tagname_filter (list str): List of tagnames to filter on the source start_date (str): Start date (Either a date in the format YY-MM-DD or a datetime in the format YYY-MM-DDTHH:MM:SS or specify the timezone offset in the format YYYY-MM-DDTHH:MM:SS+zz:zz) end_date (str): End date (Either a date in the format YY-MM-DD or a datetime in the format YYY-MM-DDTHH:MM:SS or specify the timezone offset in the format YYYY-MM-DDTHH:MM:SS+zz:zz) time_interval_rate (str): The time interval rate (numeric input) time_interval_unit (str): The time interval unit (second, minute, day, hour) lower_bound (int): Lower boundary for the sample range upper_bound (int): Upper boundary for the sample range include_bad_data (optional bool): Include "Bad" data points with True or remove "Bad" data points with False Returns: DataFrame: A dataframe containing the circular averages """ circular_average_parameters = { "source": self.data_source, "tag_names": tagname_filter, "start_date": start_date, "end_date": end_date, "include_bad_data": include_bad_data, "time_interval_rate": time_interval_rate, "time_interval_unit": time_interval_unit, "lower_bound": lower_bound, "upper_bound": upper_bound, "tagname_column": self.tagname_column, "timestamp_column": self.timestamp_column, "status_column": self.status_column, "value_column": self.value_column, } return circular_average.get(self.connection, circular_average_parameters) def circular_standard_deviation( self, tagname_filter: [str], start_date: str, end_date: str, time_interval_rate: str, time_interval_unit: str, lower_bound: int, upper_bound: int, include_bad_data: bool = False, ) -> DataFrame: """ A function that receives a dataframe of raw tag data and computes the circular standard deviation for samples assumed to be in the range Args: tagname_filter (list str): List of tagnames to filter on the source start_date (str): Start date (Either a date in the format YY-MM-DD or a datetime in the format YYY-MM-DDTHH:MM:SS or specify the timezone offset in the format YYYY-MM-DDTHH:MM:SS+zz:zz) end_date (str): End date (Either a date in the format YY-MM-DD or a datetime in the format YYY-MM-DDTHH:MM:SS or specify the timezone offset in the format YYYY-MM-DDTHH:MM:SS+zz:zz) time_interval_rate (str): The time interval rate (numeric input) time_interval_unit (str): The time interval unit (second, minute, day, hour) lower_bound (int): Lower boundary for the sample range upper_bound (int): Upper boundary for the sample range include_bad_data (optional bool): Include "Bad" data points with True or remove "Bad" data points with False Returns: DataFrame: A dataframe containing the circular standard deviations """ circular_stddev_parameters = { "source": self.data_source, "tag_names": tagname_filter, "start_date": start_date, "end_date": end_date, "include_bad_data": include_bad_data, "time_interval_rate": time_interval_rate, "time_interval_unit": time_interval_unit, "lower_bound": lower_bound, "upper_bound": upper_bound, "tagname_column": self.tagname_column, "timestamp_column": self.timestamp_column, "status_column": self.status_column, "value_column": self.value_column, } return circular_standard_deviation.get( self.connection, circular_stddev_parameters )
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/queries/query_builder.py
0.964162
0.420183
query_builder.py
pypi
import logging import pandas as pd from ._query_builder import _query_builder def get(connection: object, parameters_dict: dict) -> pd.DataFrame: """ An RTDIP interpolation at time function which works out the linear interpolation at a specific time based on the points before and after. This function requires the user to input a dictionary of parameters. (See Attributes table below.) Args: connection: Connection chosen by the user (Databricks SQL Connect, PYODBC SQL Connect, TURBODBC SQL Connect) parameters_dict: A dictionary of parameters (see Attributes table below) Attributes: business_unit (str): Business unit of the data region (str): Region asset (str): Asset data_security_level (str): Level of data security data_type (str): Type of the data (float, integer, double, string) tag_names (str): Name of the tag timestamps (list): List of timestamp or timestamps in the format YYY-MM-DDTHH:MM:SS or YYY-MM-DDTHH:MM:SS+zz:zz where %z is the timezone. (Example +00:00 is the UTC timezone) window_length (int): Add longer window time in days for the start or end of specified date to cater for edge cases. include_bad_data (bool): Include "Bad" data points with True or remove "Bad" data points with False Returns: DataFrame: A interpolated at time dataframe. """ if isinstance(parameters_dict["tag_names"], list) is False: raise ValueError("tag_names must be a list") if isinstance(parameters_dict["timestamps"], list) is False: raise ValueError("timestamps must be a list") try: query = _query_builder(parameters_dict, "interpolation_at_time") try: cursor = connection.cursor() cursor.execute(query) df = cursor.fetch_all() cursor.close() connection.close() return df except Exception as e: logging.exception("error returning dataframe") raise e except Exception as e: logging.exception("error with interpolation at time function") raise e
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/queries/time_series/interpolation_at_time.py
0.86898
0.529203
interpolation_at_time.py
pypi
import logging import pandas as pd from ._query_builder import _query_builder def get(connection: object, parameters_dict: dict) -> pd.DataFrame: """ A function that receives a dataframe of raw tag data and performs a time weighted averages, returning the results. This function requires the input of a pandas dataframe acquired via the rtdip.functions.raw() method and the user to input a dictionary of parameters. (See Attributes table below) Pi data points will either have step enabled (True) or step disabled (False). You can specify whether you want step to be fetched by "Pi" or you can set the step parameter to True/False in the dictionary below. Args: connection: Connection chosen by the user (Databricks SQL Connect, PYODBC SQL Connect, TURBODBC SQL Connect) parameters_dict (dict): A dictionary of parameters (see Attributes table below) Attributes: business_unit (str): Business unit region (str): Region asset (str): Asset data_security_level (str): Level of data security data_type (str): Type of the data (float, integer, double, string) tag_names (list): List of tagname or tagnames start_date (str): Start date (Either a utc date in the format YYYY-MM-DD or a utc datetime in the format YYYY-MM-DDTHH:MM:SS or specify the timezone offset in the format YYYY-MM-DDTHH:MM:SS+zz:zz) end_date (str): End date (Either a utc date in the format YYYY-MM-DD or a utc datetime in the format YYYY-MM-DDTHH:MM:SS or specify the timezone offset in the format YYYY-MM-DDTHH:MM:SS+zz:zz) window_size_mins (int): (deprecated) Window size in minutes. Please use time_interval_rate and time_interval_unit below instead. time_interval_rate (str): The time interval rate (numeric input) time_interval_unit (str): The time interval unit (second, minute, day, hour) window_length (int): Add longer window time in days for the start or end of specified date to cater for edge cases. include_bad_data (bool): Include "Bad" data points with True or remove "Bad" data points with False step (str): data points with step "enabled" or "disabled". The options for step are "true", "false" or "metadata". "metadata" will retrieve the step value from the metadata table. Returns: DataFrame: A dataframe containing the time weighted averages. """ if isinstance(parameters_dict["tag_names"], list) is False: raise ValueError("tag_names must be a list") if "window_size_mins" in parameters_dict: logging.warning( "Parameter window_size_mins is deprecated and will be removed in v1.0.0. Please use time_interval_rate and time_interval_unit instead." ) parameters_dict["time_interval_rate"] = str(parameters_dict["window_size_mins"]) parameters_dict["time_interval_unit"] = "minute" try: query = _query_builder(parameters_dict, "time_weighted_average") try: cursor = connection.cursor() cursor.execute(query) df = cursor.fetch_all() cursor.close() connection.close() return df except Exception as e: logging.exception("error returning dataframe") raise e except Exception as e: logging.exception("error with time weighted average function") raise e
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/queries/time_series/time_weighted_average.py
0.868757
0.646934
time_weighted_average.py
pypi
import logging import pandas as pd from ._query_builder import _query_builder def get(connection: object, parameters_dict: dict) -> pd.DataFrame: """ A function that receives a dataframe of raw tag data and computes the circular standard deviation for samples assumed to be in the range, returning the results. Args: connection: Connection chosen by the user (Databricks SQL Connect, PYODBC SQL Connect, TURBODBC SQL Connect) parameters_dict (dict): A dictionary of parameters (see Attributes table below) Attributes: business_unit (str): Business unit region (str): Region asset (str): Asset data_security_level (str): Level of data security data_type (str): Type of the data (float, integer, double, string) tag_names (list): List of tagname or tagnames start_date (str): Start date (Either a utc date in the format YYYY-MM-DD or a utc datetime in the format YYYY-MM-DDTHH:MM:SS or specify the timezone offset in the format YYYY-MM-DDTHH:MM:SS+zz:zz) end_date (str): End date (Either a utc date in the format YYYY-MM-DD or a utc datetime in the format YYYY-MM-DDTHH:MM:SS or specify the timezone offset in the format YYYY-MM-DDTHH:MM:SS+zz:zz) time_interval_rate (str): The time interval rate (numeric input) time_interval_unit (str): The time interval unit (second, minute, day, hour) lower_bound (int): Lower boundary for the sample range upper_bound (int): Upper boundary for the sample range include_bad_data (bool): Include "Bad" data points with True or remove "Bad" data points with False Returns: DataFrame: A dataframe containing the circular standard deviations. """ if isinstance(parameters_dict["tag_names"], list) is False: raise ValueError("tag_names must be a list") try: query = _query_builder(parameters_dict, "circular_standard_deviation") try: cursor = connection.cursor() cursor.execute(query) df = cursor.fetch_all() cursor.close() connection.close() return df except Exception as e: logging.exception("error returning dataframe") raise e except Exception as e: logging.exception("error with circular standard_deviation function") raise e
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/queries/time_series/circular_standard_deviation.py
0.884021
0.532121
circular_standard_deviation.py
pypi
import logging import pandas as pd from ._query_builder import _query_builder def get(connection: object, parameters_dict: dict) -> pd.DataFrame: """ A function to return back raw data by querying databricks SQL Warehouse using a connection specified by the user. The available connectors by RTDIP are Databricks SQL Connect, PYODBC SQL Connect, TURBODBC SQL Connect. The available authentication methods are Certificate Authentication, Client Secret Authentication or Default Authentication. See documentation. This function requires the user to input a dictionary of parameters. (See Attributes table below) Args: connection: Connection chosen by the user (Databricks SQL Connect, PYODBC SQL Connect, TURBODBC SQL Connect) parameters_dict: A dictionary of parameters (see Attributes table below) Attributes: business_unit (str): Business unit region (str): Region asset (str): Asset data_security_level (str): Level of data security data_type (str): Type of the data (float, integer, double, string) tag_names (list): List of tagname or tagnames ["tag_1", "tag_2"] start_date (str): Start date (Either a date in the format YY-MM-DD or a datetime in the format YYY-MM-DDTHH:MM:SS or specify the timezone offset in the format YYYY-MM-DDTHH:MM:SS+zz:zz) end_date (str): End date (Either a date in the format YY-MM-DD or a datetime in the format YYY-MM-DDTHH:MM:SS or specify the timezone offset in the format YYYY-MM-DDTHH:MM:SS+zz:zz) include_bad_data (bool): Include "Bad" data points with True or remove "Bad" data points with False Returns: DataFrame: A dataframe of raw timeseries data. """ try: query = _query_builder(parameters_dict, "raw") try: cursor = connection.cursor() cursor.execute(query) df = cursor.fetch_all() cursor.close() connection.close() return df except Exception as e: logging.exception("error returning dataframe") raise e except Exception as e: logging.exception("error with raw function") raise e
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/queries/time_series/raw.py
0.857991
0.464173
raw.py
pypi
import logging import pandas as pd from ._query_builder import _query_builder def get(connection: object, parameters_dict: dict) -> pd.DataFrame: """ An RTDIP Resampling function in spark to resample data by querying databricks SQL warehouses using a connection and authentication method specified by the user. This spark resample function will return a resampled dataframe. The available connectors by RTDIP are Databricks SQL Connect, PYODBC SQL Connect, TURBODBC SQL Connect. The available authentication methods are Certificate Authentication, Client Secret Authentication or Default Authentication. See documentation. This function requires the user to input a dictionary of parameters. (See Attributes table below) Args: connection: Connection chosen by the user (Databricks SQL Connect, PYODBC SQL Connect, TURBODBC SQL Connect) parameters_dict: A dictionary of parameters (see Attributes table below) Attributes: business_unit (str): Business unit of the data region (str): Region asset (str): Asset data_security_level (str): Level of data security data_type (str): Type of the data (float, integer, double, string) tag_names (list): List of tagname or tagnames ["tag_1", "tag_2"] start_date (str): Start date (Either a date in the format YY-MM-DD or a datetime in the format YYY-MM-DDTHH:MM:SS or specify the timezone offset in the format YYYY-MM-DDTHH:MM:SS+zz:zz) end_date (str): End date (Either a date in the format YY-MM-DD or a datetime in the format YYY-MM-DDTHH:MM:SS or specify the timezone offset in the format YYYY-MM-DDTHH:MM:SS+zz:zz) sample_rate (int): (deprecated) Please use time_interval_rate instead. See below. sample_unit (str): (deprecated) Please use time_interval_unit instead. See below. time_interval_rate (str): The time interval rate (numeric input) time_interval_unit (str): The time interval unit (second, minute, day, hour) agg_method (str): Aggregation Method (first, last, avg, min, max) include_bad_data (bool): Include "Bad" data points with True or remove "Bad" data points with False Returns: DataFrame: A resampled dataframe. """ if isinstance(parameters_dict["tag_names"], list) is False: raise ValueError("tag_names must be a list") if "sample_rate" in parameters_dict: logging.warning( "Parameter sample_rate is deprecated and will be removed in v1.0.0. Please use time_interval_rate instead." ) parameters_dict["time_interval_rate"] = parameters_dict["sample_rate"] if "sample_unit" in parameters_dict: logging.warning( "Parameter sample_unit is deprecated and will be removed in v1.0.0. Please use time_interval_unit instead." ) parameters_dict["time_interval_unit"] = parameters_dict["sample_unit"] try: query = _query_builder(parameters_dict, "resample") try: cursor = connection.cursor() cursor.execute(query) df = cursor.fetch_all() cursor.close() connection.close() return df except Exception as e: logging.exception("error returning dataframe") raise e except Exception as e: logging.exception("error with resampling function") raise e
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/queries/time_series/resample.py
0.841403
0.575349
resample.py
pypi
import logging import pandas as pd from ._query_builder import _query_builder def get(connection: object, parameters_dict: dict) -> pd.DataFrame: """ A function that receives a dataframe of raw tag data and computes the circular mean for samples in a range, returning the results. Args: connection: Connection chosen by the user (Databricks SQL Connect, PYODBC SQL Connect, TURBODBC SQL Connect) parameters_dict (dict): A dictionary of parameters (see Attributes table below) Attributes: business_unit (str): Business unit region (str): Region asset (str): Asset data_security_level (str): Level of data security data_type (str): Type of the data (float, integer, double, string) tag_names (list): List of tagname or tagnames start_date (str): Start date (Either a utc date in the format YYYY-MM-DD or a utc datetime in the format YYYY-MM-DDTHH:MM:SS or specify the timezone offset in the format YYYY-MM-DDTHH:MM:SS+zz:zz) end_date (str): End date (Either a utc date in the format YYYY-MM-DD or a utc datetime in the format YYYY-MM-DDTHH:MM:SS or specify the timezone offset in the format YYYY-MM-DDTHH:MM:SS+zz:zz) time_interval_rate (str): The time interval rate (numeric input) time_interval_unit (str): The time interval unit (second, minute, day, hour) lower_bound (int): Lower boundary for the sample range upper_bound (int): Upper boundary for the sample range include_bad_data (bool): Include "Bad" data points with True or remove "Bad" data points with False Returns: DataFrame: A dataframe containing the circular averages. """ if isinstance(parameters_dict["tag_names"], list) is False: raise ValueError("tag_names must be a list") try: query = _query_builder(parameters_dict, "circular_average") try: cursor = connection.cursor() cursor.execute(query) df = cursor.fetch_all() cursor.close() connection.close() return df except Exception as e: logging.exception("error returning dataframe") raise e except Exception as e: logging.exception("error with circular average function") raise e
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/queries/time_series/circular_average.py
0.887966
0.51501
circular_average.py
pypi
from jinja2 import Template import datetime from datetime import datetime, time TIMESTAMP_FORMAT = "%Y-%m-%dT%H:%M:%S%z" seconds_per_unit = {"s": 1, "m": 60, "h": 3600, "d": 86400, "w": 604800} def _is_date_format(dt, format): try: return datetime.strptime(dt, format) except Exception: return False def _parse_date(dt, is_end_date=False, exclude_date_format=False): if isinstance(dt, datetime): if dt.time() == time.min: if dt.tzinfo is not None: dt = datetime.strftime(dt, "%Y-%m-%d%z") else: dt = dt.date() else: dt = datetime.strftime(dt, TIMESTAMP_FORMAT) dt = str(dt) if _is_date_format(dt, "%Y-%m-%d") and exclude_date_format == False: _time = "T23:59:59" if is_end_date == True else "T00:00:00" return dt + _time + "+00:00" elif _is_date_format(dt, "%Y-%m-%dT%H:%M:%S"): return dt + "+00:00" elif _is_date_format(dt, TIMESTAMP_FORMAT): return dt elif _is_date_format(dt, "%Y-%m-%d%z"): _time = "T23:59:59" if is_end_date == True else "T00:00:00" dt = dt[0:10] + _time + dt[10:] return dt else: msg = f"Inputted timestamp: '{dt}', is not in the correct format." if exclude_date_format == True: msg += " List of timestamps must be in datetime format." raise ValueError(msg) def _parse_dates(parameters_dict): if "start_date" in parameters_dict: parameters_dict["start_date"] = _parse_date(parameters_dict["start_date"]) sample_dt = parameters_dict["start_date"] if "end_date" in parameters_dict: parameters_dict["end_date"] = _parse_date(parameters_dict["end_date"], True) if "timestamps" in parameters_dict: parsed_timestamp = [ _parse_date(dt, is_end_date=False, exclude_date_format=True) for dt in parameters_dict["timestamps"] ] parameters_dict["timestamps"] = parsed_timestamp sample_dt = parsed_timestamp[0] parameters_dict["time_zone"] = datetime.strptime( sample_dt, TIMESTAMP_FORMAT ).strftime("%z") return parameters_dict def _convert_to_seconds(s): return int(s[:-1]) * seconds_per_unit[s[-1]] def _raw_query(parameters_dict: dict) -> str: raw_query = ( "SELECT DISTINCT from_utc_timestamp(to_timestamp(date_format(`{{ timestamp_column }}`, 'yyyy-MM-dd HH:mm:ss.SSS')), \"{{ time_zone }}\") as `{{ timestamp_column }}`, `{{ tagname_column }}`, {% if include_status is defined and include_status == true %} `{{ status_column }}`, {% endif %} `{{ value_column }}` FROM " "{% if source is defined and source is not none %}" "`{{ source|lower }}` " "{% else %}" "`{{ business_unit|lower }}`.`sensors`.`{{ asset|lower }}_{{ data_security_level|lower }}_events_{{ data_type|lower }}` " "{% endif %}" "WHERE `{{ timestamp_column }}` BETWEEN to_timestamp(\"{{ start_date }}\") AND to_timestamp(\"{{ end_date }}\") AND `{{ tagname_column }}` in ('{{ tag_names | join('\\', \\'') }}') " "{% if include_status is defined and include_status == true and include_bad_data is defined and include_bad_data == false %}" "AND `{{ status_column }}` = 'Good'" "{% endif %}" "ORDER BY `{{ tagname_column }}`, `{{ timestamp_column }}` " ) raw_parameters = { "source": parameters_dict.get("source", None), "business_unit": parameters_dict.get("business_unit"), "region": parameters_dict.get("region"), "asset": parameters_dict.get("asset"), "data_security_level": parameters_dict.get("data_security_level"), "data_type": parameters_dict.get("data_type"), "start_date": parameters_dict["start_date"], "end_date": parameters_dict["end_date"], "tag_names": list(dict.fromkeys(parameters_dict["tag_names"])), "include_bad_data": parameters_dict["include_bad_data"], "time_zone": parameters_dict["time_zone"], "tagname_column": parameters_dict.get("tagname_column", "TagName"), "timestamp_column": parameters_dict.get("timestamp_column", "EventTime"), "include_status": False if "status_column" in parameters_dict and parameters_dict.get("status_column") is None else True, "status_column": "Status" if "status_column" in parameters_dict and parameters_dict.get("status_column") is None else parameters_dict.get("status_column", "Status"), "value_column": parameters_dict.get("value_column", "Value"), } sql_template = Template(raw_query) return sql_template.render(raw_parameters) def _sample_query(parameters_dict: dict) -> tuple: sample_query = ( "WITH raw_events AS (SELECT DISTINCT from_utc_timestamp(to_timestamp(date_format(`{{ timestamp_column }}`, 'yyyy-MM-dd HH:mm:ss.SSS')), \"{{ time_zone }}\") as `{{ timestamp_column }}`, `{{ tagname_column }}`, {% if include_status is defined and include_status == true %} `{{ status_column }}`, {% else %} 'Good' as `Status`, {% endif %} `{{ value_column }}` FROM " "{% if source is defined and source is not none %}" "`{{ source|lower }}` " "{% else %}" "`{{ business_unit|lower }}`.`sensors`.`{{ asset|lower }}_{{ data_security_level|lower }}_events_{{ data_type|lower }}` " "{% endif %}" "WHERE `{{ timestamp_column }}` BETWEEN to_timestamp(\"{{ start_date }}\") AND to_timestamp(\"{{ end_date }}\") AND `{{ tagname_column }}` in ('{{ tag_names | join('\\', \\'') }}') " "{% if include_status is defined and include_status == true and include_bad_data is defined and include_bad_data == false %} AND `{{ status_column }}` = 'Good' {% endif %}) " ',date_array AS (SELECT explode(sequence(from_utc_timestamp(to_timestamp("{{ start_date }}"), "{{ time_zone }}"), from_utc_timestamp(to_timestamp("{{ end_date }}"), "{{ time_zone }}"), INTERVAL \'{{ time_interval_rate + \' \' + time_interval_unit }}\')) AS timestamp_array) ' ",window_buckets AS (SELECT timestamp_array AS window_start, LEAD(timestamp_array) OVER (ORDER BY timestamp_array) AS window_end FROM date_array) " ",project_resample_results AS (SELECT /*+ RANGE_JOIN(d, {{ range_join_seconds }} ) */ d.window_start, d.window_end, e.`{{ tagname_column }}`, {{ agg_method }}(e.`{{ value_column }}`) OVER (PARTITION BY e.`{{ tagname_column }}`, d.window_start ORDER BY e.`{{ timestamp_column }}` ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS `{{ value_column }}` FROM window_buckets d INNER JOIN raw_events e ON d.window_start <= e.`{{ timestamp_column }}` AND d.window_end > e.`{{ timestamp_column }}`) " "SELECT window_start AS `{{ timestamp_column }}`, `{{ tagname_column }}`, `{{ value_column }}` FROM project_resample_results GROUP BY window_start, `{{ tagname_column }}`, `{{ value_column }}` " "{% if is_resample is defined and is_resample == true %}" "ORDER BY `{{ tagname_column }}`, `{{ timestamp_column }}` " "{% endif %}" ) sample_parameters = { "source": parameters_dict.get("source", None), "business_unit": parameters_dict.get("business_unit"), "region": parameters_dict.get("region"), "asset": parameters_dict.get("asset"), "data_security_level": parameters_dict.get("data_security_level"), "data_type": parameters_dict.get("data_type"), "start_date": parameters_dict["start_date"], "end_date": parameters_dict["end_date"], "tag_names": list(dict.fromkeys(parameters_dict["tag_names"])), "include_bad_data": parameters_dict["include_bad_data"], "time_interval_rate": parameters_dict["time_interval_rate"], "time_interval_unit": parameters_dict["time_interval_unit"], "agg_method": parameters_dict["agg_method"], "time_zone": parameters_dict["time_zone"], "is_resample": True, "tagname_column": parameters_dict.get("tagname_column", "TagName"), "timestamp_column": parameters_dict.get("timestamp_column", "EventTime"), "include_status": False if "status_column" in parameters_dict and parameters_dict.get("status_column") is None else True, "status_column": "Status" if "status_column" in parameters_dict and parameters_dict.get("status_column") is None else parameters_dict.get("status_column", "Status"), "value_column": parameters_dict.get("value_column", "Value"), "range_join_seconds": parameters_dict["range_join_seconds"], } sql_template = Template(sample_query) sql_query = sql_template.render(sample_parameters) return sql_query, sample_query, sample_parameters def _interpolation_query( parameters_dict: dict, sample_query: str, sample_parameters: dict ) -> str: if parameters_dict["interpolation_method"] == "forward_fill": interpolation_methods = "last_value/UNBOUNDED PRECEDING/CURRENT ROW" if parameters_dict["interpolation_method"] == "backward_fill": interpolation_methods = "first_value/CURRENT ROW/UNBOUNDED FOLLOWING" if ( parameters_dict["interpolation_method"] == "forward_fill" or parameters_dict["interpolation_method"] == "backward_fill" ): interpolation_options = interpolation_methods.split("/") interpolate_query = ( f"WITH resample AS ({sample_query})" ",date_array AS (SELECT explode(sequence(from_utc_timestamp(to_timestamp(\"{{ start_date }}\"), \"{{ time_zone }}\"), from_utc_timestamp(to_timestamp(\"{{ end_date }}\"), \"{{ time_zone }}\"), INTERVAL '{{ time_interval_rate + ' ' + time_interval_unit }}')) AS `{{ timestamp_column }}`, explode(array('{{ tag_names | join('\\', \\'') }}')) AS `{{ tagname_column }}`) " '{% if (interpolation_method is defined) and (interpolation_method == "forward_fill" or interpolation_method == "backward_fill") %}' "SELECT a.`{{ timestamp_column }}`, a.`{{ tagname_column }}`, {{ interpolation_options_0 }}(b.`{{ value_column }}`, true) OVER (PARTITION BY a.`{{ tagname_column }}` ORDER BY a.`{{ timestamp_column }}` ROWS BETWEEN {{ interpolation_options_1 }} AND {{ interpolation_options_2 }}) AS `{{ value_column }}` FROM date_array a LEFT OUTER JOIN resample b ON a.`{{ timestamp_column }}` = b.`{{ timestamp_column }}` AND a.`{{ tagname_column }}` = b.`{{ tagname_column }}` ORDER BY a.`{{ tagname_column }}`, a.`{{ timestamp_column }}` " '{% elif (interpolation_method is defined) and (interpolation_method == "linear") %}' ",linear_interpolation_calculations AS (SELECT coalesce(a.`{{ tagname_column }}`, b.`{{ tagname_column }}`) as `{{ tagname_column }}`, coalesce(a.`{{ timestamp_column }}`, b.`{{ timestamp_column }}`) as `{{ timestamp_column }}`, a.`{{ timestamp_column }}` as `Requested_{{ timestamp_column }}`, b.`{{ timestamp_column }}` as `Found_{{ timestamp_column }}`, b.`{{ value_column }}`, " "last_value(b.`{{ timestamp_column }}`, true) OVER (PARTITION BY a.`{{ tagname_column }}` ORDER BY a.`{{ timestamp_column }}` ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS `Last_{{ timestamp_column }}`, last_value(b.`{{ value_column }}`, true) OVER (PARTITION BY a.`{{ tagname_column }}` ORDER BY a.`{{ timestamp_column }}` ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS `Last_{{ value_column }}`, " "first_value(b.`{{ timestamp_column }}`, true) OVER (PARTITION BY a.`{{ tagname_column }}` ORDER BY a.`{{ timestamp_column }}` ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS `Next_{{ timestamp_column }}`, first_value(b.`{{ value_column }}`, true) OVER (PARTITION BY a.`{{ tagname_column }}` ORDER BY a.`{{ timestamp_column }}` ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS `Next_{{ value_column }}`, " "CASE WHEN b.`{{ value_column }}` is NULL THEN `Last_{{ value_column }}` + (unix_timestamp(a.`{{ timestamp_column }}`) - unix_timestamp(`Last_{{ timestamp_column }}`)) * ((`Next_{{ value_column }}` - `Last_{{ value_column }}`)) / ((unix_timestamp(`Next_{{ timestamp_column }}`) - unix_timestamp(`Last_{{ timestamp_column }}`))) ELSE b.`{{ value_column }}` END AS `linear_interpolated_{{ value_column }}` FROM date_array a FULL OUTER JOIN resample b ON a.`{{ timestamp_column }}` = b.`{{ timestamp_column }}` AND a.`{{ tagname_column }}` = b.`{{ tagname_column }}`) " "SELECT `{{ timestamp_column }}`, `{{ tagname_column }}`, `linear_interpolated_{{ value_column }}` AS `{{ value_column }}` FROM linear_interpolation_calculations ORDER BY `{{ tagname_column }}`, `{{ timestamp_column }}` " "{% else %}" "SELECT * FROM resample ORDER BY `{{ tagname_column }}`, `{{ timestamp_column }}` " "{% endif %}" ) interpolate_parameters = sample_parameters.copy() interpolate_parameters["interpolation_method"] = parameters_dict[ "interpolation_method" ] if ( parameters_dict["interpolation_method"] == "forward_fill" or parameters_dict["interpolation_method"] == "backward_fill" ): interpolate_parameters["interpolation_options_0"] = interpolation_options[0] interpolate_parameters["interpolation_options_1"] = interpolation_options[1] interpolate_parameters["interpolation_options_2"] = interpolation_options[2] sql_template = Template(interpolate_query) return sql_template.render(interpolate_parameters) def _interpolation_at_time(parameters_dict: dict) -> str: timestamps_deduplicated = list( dict.fromkeys(parameters_dict["timestamps"]) ) # remove potential duplicates in tags parameters_dict["timestamps"] = timestamps_deduplicated.copy() parameters_dict["min_timestamp"] = min(timestamps_deduplicated) parameters_dict["max_timestamp"] = max(timestamps_deduplicated) interpolate_at_time_query = ( "WITH raw_events AS (SELECT DISTINCT from_utc_timestamp(to_timestamp(date_format(`{{ timestamp_column }}`, 'yyyy-MM-dd HH:mm:ss.SSS')), \"{{ time_zone }}\") AS `{{ timestamp_column }}`, `{{ tagname_column }}`, {% if include_status is defined and include_status == true %} `{{ status_column }}`, {% else %} 'Good' as `Status`, {% endif %} `{{ value_column }}` FROM " "{% if source is defined and source is not none %}" "`{{ source|lower }}` " "{% else %}" "`{{ business_unit|lower }}`.`sensors`.`{{ asset|lower }}_{{ data_security_level|lower }}_events_{{ data_type|lower }}` " "{% endif %}" "WHERE to_date(`{{ timestamp_column }}`) BETWEEN " "{% if timestamps is defined %} " 'date_sub(to_date(to_timestamp("{{ min_timestamp }}")), {{ window_length }}) AND date_add(to_date(to_timestamp("{{ max_timestamp }}")), {{ window_length}}) ' "{% endif %} AND `{{ tagname_column }}` in ('{{ tag_names | join('\\', \\'') }}') " "{% if include_status is defined and include_status == true and include_bad_data is defined and include_bad_data == false %} AND `{{ status_column }}` = 'Good' {% endif %}) " ", date_array AS (SELECT explode(array( " "{% for timestamp in timestamps -%} " 'from_utc_timestamp(to_timestamp("{{timestamp}}"), "{{time_zone}}") ' "{% if not loop.last %} , {% endif %} {% endfor %} )) AS `{{ timestamp_column }}`, " "explode(array('{{ tag_names | join('\\', \\'') }}')) AS `{{ tagname_column }}`) " ", interpolation_events AS (SELECT coalesce(a.`{{ tagname_column }}`, b.`{{ tagname_column }}`) AS `{{ tagname_column }}`, coalesce(a.`{{ timestamp_column }}`, b.`{{ timestamp_column }}`) as `{{ timestamp_column }}`, a.`{{ timestamp_column }}` as `Requested_{{ timestamp_column }}`, b.`{{ timestamp_column }}` as `Found_{{ timestamp_column }}`, b.`{{ status_column }}`, b.`{{ value_column }}` FROM date_array a FULL OUTER JOIN raw_events b ON a.`{{ timestamp_column }}` = b.`{{ timestamp_column }}` AND a.`{{ tagname_column }}` = b.`{{ tagname_column }}`) " ", interpolation_calculations AS (SELECT *, lag(`{{ timestamp_column }}`) OVER (PARTITION BY `{{ tagname_column }}` ORDER BY `{{ timestamp_column }}`) AS `Previous_{{ timestamp_column }}`, lag(`{{ value_column }}`) OVER (PARTITION BY `{{ tagname_column }}` ORDER BY `{{ timestamp_column }}`) AS `Previous_{{ value_column }}`, lead(`{{ timestamp_column }}`) OVER (PARTITION BY `{{ tagname_column }}` ORDER BY `{{ timestamp_column }}`) AS `Next_{{ timestamp_column }}`, lead(`{{ value_column }}`) OVER (PARTITION BY `{{ tagname_column }}` ORDER BY `{{ timestamp_column }}`) AS `Next_{{ value_column }}`, " "CASE WHEN `Requested_{{ timestamp_column }}` = `Found_{{ timestamp_column }}` THEN `{{ value_column }}` WHEN `Next_{{ timestamp_column }}` IS NULL THEN `Previous_{{ value_column }}` WHEN `Previous_{{ timestamp_column }}` IS NULL and `Next_{{ timestamp_column }}` IS NULL THEN NULL " "ELSE `Previous_{{ value_column }}` + ((`Next_{{ value_column }}` - `Previous_{{ value_column }}`) * ((unix_timestamp(`{{ timestamp_column }}`) - unix_timestamp(`Previous_{{ timestamp_column }}`)) / (unix_timestamp(`Next_{{ timestamp_column }}`) - unix_timestamp(`Previous_{{ timestamp_column }}`)))) END AS `Interpolated_{{ value_column }}` FROM interpolation_events) " "SELECT `{{ tagname_column }}`, `{{ timestamp_column }}`, `Interpolated_{{ value_column }}` AS `{{ value_column }}` FROM interpolation_calculations WHERE `{{ timestamp_column }}` in ( " "{% for timestamp in timestamps -%} " 'from_utc_timestamp(to_timestamp("{{timestamp}}"), "{{time_zone}}") ' "{% if not loop.last %} , {% endif %} {% endfor %}) " ) interpolation_at_time_parameters = { "source": parameters_dict.get("source", None), "business_unit": parameters_dict.get("business_unit"), "region": parameters_dict.get("region"), "asset": parameters_dict.get("asset"), "data_security_level": parameters_dict.get("data_security_level"), "data_type": parameters_dict.get("data_type"), "tag_names": list(dict.fromkeys(parameters_dict["tag_names"])), "timestamps": parameters_dict["timestamps"], "include_bad_data": parameters_dict["include_bad_data"], "time_zone": parameters_dict["time_zone"], "min_timestamp": parameters_dict["min_timestamp"], "max_timestamp": parameters_dict["max_timestamp"], "window_length": parameters_dict["window_length"], "tagname_column": parameters_dict.get("tagname_column", "TagName"), "timestamp_column": parameters_dict.get("timestamp_column", "EventTime"), "include_status": False if "status_column" in parameters_dict and parameters_dict.get("status_column") is None else True, "status_column": "Status" if "status_column" in parameters_dict and parameters_dict.get("status_column") is None else parameters_dict.get("status_column", "Status"), "value_column": parameters_dict.get("value_column", "Value"), } sql_template = Template(interpolate_at_time_query) return sql_template.render(interpolation_at_time_parameters) def _metadata_query(parameters_dict: dict) -> str: metadata_query = ( "SELECT * FROM " "{% if source is defined and source is not none %}" "`{{ source|lower }}` " "{% else %}" "`{{ business_unit|lower }}`.`sensors`.`{{ asset|lower }}_{{ data_security_level|lower }}_metadata` " "{% endif %}" "{% if tag_names is defined and tag_names|length > 0 %} " "WHERE `{{ tagname_column }}` in ('{{ tag_names | join('\\', \\'') }}') " "{% endif %}" ) metadata_parameters = { "source": parameters_dict.get("source", None), "business_unit": parameters_dict.get("business_unit"), "region": parameters_dict.get("region"), "asset": parameters_dict.get("asset"), "data_security_level": parameters_dict.get("data_security_level"), "tag_names": list(dict.fromkeys(parameters_dict["tag_names"])), "tagname_column": parameters_dict.get("tagname_column", "TagName"), } sql_template = Template(metadata_query) return sql_template.render(metadata_parameters) def _time_weighted_average_query(parameters_dict: dict) -> str: parameters_dict["start_datetime"] = datetime.strptime( parameters_dict["start_date"], TIMESTAMP_FORMAT ).strftime("%Y-%m-%dT%H:%M:%S") parameters_dict["end_datetime"] = datetime.strptime( parameters_dict["end_date"], TIMESTAMP_FORMAT ).strftime("%Y-%m-%dT%H:%M:%S") time_weighted_average_query = ( "WITH raw_events AS (SELECT DISTINCT `{{ tagname_column }}`, from_utc_timestamp(to_timestamp(date_format(`{{ timestamp_column }}`, 'yyyy-MM-dd HH:mm:ss.SSS')), \"{{ time_zone }}\") as `{{ timestamp_column }}`, {% if include_status is defined and include_status == true %} `{{ status_column }}`, {% else %} 'Good' as `Status`, {% endif %} `{{ value_column }}` FROM " "{% if source is defined and source is not none %}" "`{{ source|lower }}` " "{% else %}" "`{{ business_unit|lower }}`.`sensors`.`{{ asset|lower }}_{{ data_security_level|lower }}_events_{{ data_type|lower }}` " "{% endif %}" "WHERE to_date(`{{ timestamp_column }}`) BETWEEN date_sub(to_date(to_timestamp(\"{{ start_date }}\")), {{ window_length }}) AND date_add(to_date(to_timestamp(\"{{ end_date }}\")), {{ window_length }}) AND `{{ tagname_column }}` in ('{{ tag_names | join('\\', \\'') }}') " "{% if include_status is defined and include_status == true and include_bad_data is defined and include_bad_data == false %} AND `{{ status_column }}` = 'Good' {% endif %}) " '{% if step is defined and step == "metadata" %} ' ",meta_data AS (SELECT `{{ tagname_column }}`, IFNULL(Step, false) AS Step FROM " "{% if source_metadata is defined and source_metadata is not none %}" "`{{ source_metadata|lower }}` " "{% else %}" "`{{ business_unit|lower }}`.`sensors`.`{{ asset|lower }}_{{ data_security_level|lower }}_metadata` " "{% endif %}" ") " "{% endif %}" ",date_array AS (SELECT explode(sequence(from_utc_timestamp(to_timestamp(\"{{ start_date }}\"), \"{{ time_zone }}\"), from_utc_timestamp(to_timestamp(\"{{ end_date }}\"), \"{{ time_zone }}\"), INTERVAL '{{ time_interval_rate + ' ' + time_interval_unit }}')) AS `{{ timestamp_column }}`, explode(array('{{ tag_names | join('\\', \\'') }}')) AS `{{ tagname_column }}`) " ",window_events AS (SELECT coalesce(a.`{{ tagname_column }}`, b.`{{ tagname_column }}`) AS `{{ tagname_column }}`, coalesce(a.`{{ timestamp_column }}`, b.`{{ timestamp_column }}`) as `{{ timestamp_column }}`, window(coalesce(a.`{{ timestamp_column }}`, b.`{{ timestamp_column }}`), '{{ time_interval_rate + ' ' + time_interval_unit }}').start `Window{{ timestamp_column }}`, b.`{{ status_column }}`, b.`{{ value_column }}` FROM date_array a " "FULL OUTER JOIN raw_events b ON CAST(a.`{{ timestamp_column }}` AS long) = CAST(b.`{{ timestamp_column }}` AS long) AND a.`{{ tagname_column }}` = b.`{{ tagname_column }}`) " ',fill_status AS (SELECT *, last_value(`{{ status_column }}`, true) OVER (PARTITION BY `{{ tagname_column }}` ORDER BY `{{ timestamp_column }}` ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) as `Fill_{{ status_column }}`, CASE WHEN `Fill_{{ status_column }}` = "Good" THEN `{{ value_column }}` ELSE null END AS `Good_{{ value_column }}` FROM window_events) ' ",fill_value AS (SELECT *, last_value(`Good_{{ value_column }}`, true) OVER (PARTITION BY `{{ tagname_column }}` ORDER BY `{{ timestamp_column }}` ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS `Fill_{{ value_column }}` FROM fill_status) " '{% if step is defined and step == "metadata" %} ' ",twa_calculations AS (SELECT f.`{{ tagname_column }}`, f.`{{ timestamp_column }}`, f.`Window{{ timestamp_column }}`, m.Step, f.`{{ status_column }}`, f.`{{ value_column }}`, f.`Fill_{{ status_column }}`, f.`Fill_{{ value_column }}`, lead(f.`{{ timestamp_column }}`) OVER (PARTITION BY f.`{{ tagname_column }}` ORDER BY f.`{{ timestamp_column }}`) AS `Next_{{ timestamp_column }}`, lead(f.`Fill_{{ status_column }}`) OVER (PARTITION BY f.`{{ tagname_column }}` ORDER BY f.`{{ timestamp_column }}`) AS `Next_{{ status_column }}` " ',CASE WHEN `Next_{{ status_column }}` = "Good" OR (f.`Fill_{{ status_column }}` = "Good" AND `Next_{{ status_column }}` = "Bad") THEN lead(f.`Fill_{{ value_column }}`) OVER (PARTITION BY f.`{{ tagname_column }}` ORDER BY f.`{{ timestamp_column }}`) ELSE f.`{{ value_column }}` END AS `Next_{{ value_column }}_For_{{ status_column }}` ' ',CASE WHEN f.`Fill_{{ status_column }}` = "Good" THEN `Next_{{ value_column }}_For_{{ status_column }}` ELSE 0 END AS `Next_{{ value_column }}` ' ',CASE WHEN f.`Fill_{{ status_column }}` = "Good" and `Next_{{ status_column }}` = "Good" THEN ((cast(`Next_{{ timestamp_column }}` as double) - cast(f.`{{ timestamp_column }}` as double)) / 60) WHEN f.`Fill_{{ status_column }}` = "Good" and `Next_{{ status_column }}` != "Good" THEN ((cast(`Next_{{ timestamp_column }}` as integer) - cast(f.`{{ timestamp_column }}` as double)) / 60) ELSE 0 END AS good_minutes ' ",CASE WHEN m.Step == false THEN ((f.`Fill_{{ value_column }}` + `Next_{{ value_column }}`) * 0.5) * good_minutes ELSE (f.`Fill_{{ value_column }}` * good_minutes) END AS twa_value FROM fill_value f LEFT JOIN meta_data m ON f.`{{ tagname_column }}` = m.`{{ tagname_column }}`) " "{% else %} " ",twa_calculations AS (SELECT `{{ tagname_column }}`, `{{ timestamp_column }}`, `Window{{ timestamp_column }}`, {{ step }} AS Step, `{{ status_column }}`, `{{ value_column }}`, `Fill_{{ status_column }}`, `Fill_{{ value_column }}`, lead(`{{ timestamp_column }}`) OVER (PARTITION BY `{{ tagname_column }}` ORDER BY `{{ timestamp_column }}`) AS `Next_{{ timestamp_column }}`, lead(`Fill_{{ status_column }}`) OVER (PARTITION BY `{{ tagname_column }}` ORDER BY `{{ timestamp_column }}`) AS `Next_{{ status_column }}` " ',CASE WHEN `Next_{{ status_column }}` = "Good" OR (`Fill_{{ status_column }}` = "Good" AND `Next_{{ status_column }}` = "Bad") THEN lead(`Fill_{{ value_column }}`) OVER (PARTITION BY `{{ tagname_column }}` ORDER BY `{{ timestamp_column }}`) ELSE `{{ value_column }}` END AS `Next_{{ value_column }}_For_{{ status_column }}` ' ',CASE WHEN `Fill_{{ status_column }}` = "Good" THEN `Next_{{ value_column }}_For_{{ status_column }}` ELSE 0 END AS `Next_{{ value_column }}` ' ',CASE WHEN `Fill_{{ status_column }}` = "Good" and `Next_{{ status_column }}` = "Good" THEN ((cast(`Next_{{ timestamp_column }}` as double) - cast(`{{ timestamp_column }}` as double)) / 60) WHEN `Fill_{{ status_column }}` = "Good" and `Next_{{ status_column }}` != "Good" THEN ((cast(`Next_{{ timestamp_column }}` as integer) - cast(`{{ timestamp_column }}` as double)) / 60) ELSE 0 END AS good_minutes ' ",CASE WHEN Step == false THEN ((`Fill_{{ value_column }}` + `Next_{{ value_column }}`) * 0.5) * good_minutes ELSE (`Fill_{{ value_column }}` * good_minutes) END AS twa_value FROM fill_value) " "{% endif %} " ",project_result AS (SELECT `{{ tagname_column }}`, `Window{{ timestamp_column }}` AS `{{ timestamp_column }}`, sum(twa_value) / sum(good_minutes) AS `{{ value_column }}` from twa_calculations GROUP BY `{{ tagname_column }}`, `Window{{ timestamp_column }}`) " 'SELECT * FROM project_result WHERE `{{ timestamp_column }}` BETWEEN to_timestamp("{{ start_datetime }}") AND to_timestamp("{{ end_datetime }}") ORDER BY `{{ tagname_column }}`, `{{ timestamp_column }}` ' ) time_weighted_average_parameters = { "source": parameters_dict.get("source", None), "source_metadata": parameters_dict.get("source_metadata", None), "business_unit": parameters_dict.get("business_unit"), "region": parameters_dict.get("region"), "asset": parameters_dict.get("asset"), "data_security_level": parameters_dict.get("data_security_level"), "data_type": parameters_dict.get("data_type"), "start_date": parameters_dict["start_date"], "end_date": parameters_dict["end_date"], "start_datetime": parameters_dict["start_datetime"], "end_datetime": parameters_dict["end_datetime"], "tag_names": list(dict.fromkeys(parameters_dict["tag_names"])), "time_interval_rate": parameters_dict["time_interval_rate"], "time_interval_unit": parameters_dict["time_interval_unit"], "window_length": parameters_dict["window_length"], "include_bad_data": parameters_dict["include_bad_data"], "step": parameters_dict["step"], "time_zone": parameters_dict["time_zone"], "tagname_column": parameters_dict.get("tagname_column", "TagName"), "timestamp_column": parameters_dict.get("timestamp_column", "EventTime"), "include_status": False if "status_column" in parameters_dict and parameters_dict.get("status_column") is None else True, "status_column": "Status" if "status_column" in parameters_dict and parameters_dict.get("status_column") is None else parameters_dict.get("status_column", "Status"), "value_column": parameters_dict.get("value_column", "Value"), } sql_template = Template(time_weighted_average_query) return sql_template.render(time_weighted_average_parameters) def _circular_stats_query(parameters_dict: dict) -> str: circular_base_query = ( "WITH raw_events AS (SELECT `{{ timestamp_column }}`, `{{ tagname_column }}`, {% if include_status is defined and include_status == true %} `{{ status_column }}`, {% else %} 'Good' as `Status`, {% endif %} `{{ value_column }}` FROM " "{% if source is defined and source is not none %}" "`{{ source|lower }}` " "{% else %}" "`{{ business_unit|lower }}`.`sensors`.`{{ asset|lower }}_{{ data_security_level|lower }}_events_{{ data_type|lower }}` " "{% endif %}" "WHERE `{{ timestamp_column }}` BETWEEN TO_TIMESTAMP(\"{{ start_date }}\") AND TO_TIMESTAMP(\"{{ end_date }}\") AND `{{ tagname_column }}` IN ('{{ tag_names | join('\\', \\'') }}') " "{% if include_status is defined and include_status == true and include_bad_data is defined and include_bad_data == false %} AND `{{ status_column }}` = 'Good' {% endif %}) " ",date_array AS (SELECT EXPLODE(SEQUENCE(FROM_UTC_TIMESTAMP(TO_TIMESTAMP(\"{{ start_date }}\"), \"{{ time_zone }}\"), FROM_UTC_TIMESTAMP(TO_TIMESTAMP(\"{{ end_date }}\"), \"{{ time_zone }}\"), INTERVAL '{{ time_interval_rate + ' ' + time_interval_unit }}')) AS `{{ timestamp_column }}`, EXPLODE(ARRAY('{{ tag_names | join('\\', \\'') }}')) AS `{{ tagname_column }}`) " ",window_events AS (SELECT COALESCE(a.`{{ tagname_column }}`, b.`{{ tagname_column }}`) AS `{{ tagname_column }}`, COALESCE(a.`{{ timestamp_column }}`, b.`{{ timestamp_column }}`) AS `{{ timestamp_column }}`, WINDOW(COALESCE(a.`{{ timestamp_column }}`, b.`{{ timestamp_column }}`), '{{ time_interval_rate + ' ' + time_interval_unit }}').START `Window{{ timestamp_column }}`, b.`{{ status_column }}`, b.`{{ value_column }}` FROM date_array a FULL OUTER JOIN raw_events b ON CAST(a.`{{ timestamp_column }}` AS LONG) = CAST(b.`{{ timestamp_column }}` AS LONG) AND a.`{{ tagname_column }}` = b.`{{ tagname_column }}`) " ",calculation_set_up AS (SELECT `{{ timestamp_column }}`, `Window{{ timestamp_column }}`, `{{ tagname_column }}`, `{{ value_column }}`, MOD(`{{ value_column }}` - {{ lower_bound }}, ({{ upper_bound }} - {{ lower_bound }}))*(2*pi()/({{ upper_bound }} - {{ lower_bound }})) as `{{ value_column }}_in_Radians`, LAG(`{{ timestamp_column }}`) OVER (PARTITION BY `{{ tagname_column }}` ORDER BY `{{ timestamp_column }}`) AS `Previous_{{ timestamp_column }}`, (unix_millis(`{{ timestamp_column }}`) - unix_millis(`Previous_{{ timestamp_column }}`)) / 86400000 AS Time_Difference, COS(`{{ value_column }}_in_Radians`) as Cos_Value, SIN(`{{ value_column }}_in_Radians`) as Sin_Value FROM window_events) " ",circular_average_calculations AS (SELECT `Window{{ timestamp_column }}`, `{{ tagname_column }}`, Time_Difference, AVG(Cos_Value) OVER (PARTITION BY `{{ tagname_column }}` ORDER BY `{{ timestamp_column }}` ROWS BETWEEN 1 PRECEDING AND CURRENT ROW) AS Average_Cos, AVG(Sin_Value) OVER (PARTITION BY `{{ tagname_column }}` ORDER BY `{{ timestamp_column }}` ROWS BETWEEN 1 PRECEDING AND CURRENT ROW) AS Average_Sin, SQRT(POW(Average_Cos, 2) + POW(Average_Sin, 2)) AS Vector_Length, Average_Cos/Vector_Length AS Rescaled_Average_Cos, Average_Sin/Vector_Length AS Rescaled_Average_Sin, Time_Difference * Rescaled_Average_Cos AS Diff_Average_Cos, Time_Difference * Rescaled_Average_Sin AS Diff_Average_Sin FROM calculation_set_up) " ) if parameters_dict["circular_function"] == "average": circular_stats_query = ( f"{circular_base_query} " ",project_circular_average_results AS (SELECT `Window{{ timestamp_column }}` AS `{{ timestamp_column }}`, `{{ tagname_column }}`, sum(Diff_Average_Cos)/sum(Time_Difference) AS Cos_Time_Averages, sum(Diff_Average_Sin)/sum(Time_Difference) AS Sin_Time_Averages, array_min(array(1, sqrt(pow(Cos_Time_Averages, 2) + pow(Sin_Time_Averages, 2)))) AS R, mod(2*pi() + atan2(Sin_Time_Averages, Cos_Time_Averages), 2*pi()) AS Circular_Average_Value_in_Radians, (Circular_Average_Value_in_Radians * ({{ upper_bound }} - {{ lower_bound }})) / (2*pi())+ 0 AS Circular_Average_Value_in_Degrees FROM circular_average_calculations GROUP BY `{{ tagname_column }}`, `Window{{ timestamp_column }}`) " "SELECT `{{ timestamp_column }}`, `{{ tagname_column }}`, Circular_Average_Value_in_Degrees AS `{{ value_column }}` FROM project_circular_average_results ORDER BY `{{ tagname_column }}`, `{{ timestamp_column }}` " ) elif parameters_dict["circular_function"] == "standard_deviation": circular_stats_query = ( f"{circular_base_query} " ",project_circular_average_results AS (SELECT `Window{{ timestamp_column }}` AS `{{ timestamp_column }}`, `{{ tagname_column }}`, sum(Diff_Average_Cos)/sum(Time_Difference) AS Cos_Time_Averages, sum(Diff_Average_Sin)/sum(Time_Difference) AS Sin_Time_Averages, array_min(array(1, sqrt(pow(Cos_Time_Averages, 2) + pow(Sin_Time_Averages, 2)))) AS R, mod(2*pi() + atan2(Sin_Time_Averages, Cos_Time_Averages), 2*pi()) AS Circular_Average_Value_in_Radians, SQRT(-2*LN(R)) * ( {{ upper_bound }} - {{ lower_bound }}) / (2*PI()) AS Circular_Standard_Deviation FROM circular_average_calculations GROUP BY `{{ tagname_column }}`, `Window{{ timestamp_column }}`) " "SELECT `{{ timestamp_column }}`, `{{ tagname_column }}`, Circular_Standard_Deviation AS `Value` FROM project_circular_average_results ORDER BY `{{ tagname_column }}`, `{{ timestamp_column }}` " ) circular_stats_parameters = { "source": parameters_dict.get("source", None), "business_unit": parameters_dict.get("business_unit"), "region": parameters_dict.get("region"), "asset": parameters_dict.get("asset"), "data_security_level": parameters_dict.get("data_security_level"), "data_type": parameters_dict.get("data_type"), "start_date": parameters_dict["start_date"], "end_date": parameters_dict["end_date"], "tag_names": list(dict.fromkeys(parameters_dict["tag_names"])), "time_interval_rate": parameters_dict["time_interval_rate"], "time_interval_unit": parameters_dict["time_interval_unit"], "lower_bound": parameters_dict["lower_bound"], "upper_bound": parameters_dict["upper_bound"], "include_bad_data": parameters_dict["include_bad_data"], "time_zone": parameters_dict["time_zone"], "circular_function": parameters_dict["circular_function"], "tagname_column": parameters_dict.get("tagname_column", "TagName"), "timestamp_column": parameters_dict.get("timestamp_column", "EventTime"), "include_status": False if "status_column" in parameters_dict and parameters_dict.get("status_column") is None else True, "status_column": "Status" if "status_column" in parameters_dict and parameters_dict.get("status_column") is None else parameters_dict.get("status_column", "Status"), "value_column": parameters_dict.get("value_column", "Value"), } sql_template = Template(circular_stats_query) return sql_template.render(circular_stats_parameters) def _query_builder(parameters_dict: dict, query_type: str) -> str: if "tag_names" not in parameters_dict: parameters_dict["tag_names"] = [] tagnames_deduplicated = list( dict.fromkeys(parameters_dict["tag_names"]) ) # remove potential duplicates in tags parameters_dict["tag_names"] = tagnames_deduplicated.copy() if query_type == "metadata": return _metadata_query(parameters_dict) parameters_dict = _parse_dates(parameters_dict) if query_type == "interpolation_at_time": return _interpolation_at_time(parameters_dict) if query_type == "raw": return _raw_query(parameters_dict) if query_type == "resample": parameters_dict["range_join_seconds"] = _convert_to_seconds( parameters_dict["time_interval_rate"] + " " + parameters_dict["time_interval_unit"][0] ) sample_prepared_query, sample_query, sample_parameters = _sample_query( parameters_dict ) return sample_prepared_query if query_type == "interpolate": parameters_dict["range_join_seconds"] = _convert_to_seconds( parameters_dict["time_interval_rate"] + " " + parameters_dict["time_interval_unit"][0] ) sample_prepared_query, sample_query, sample_parameters = _sample_query( parameters_dict ) sample_parameters["is_resample"] = False return _interpolation_query(parameters_dict, sample_query, sample_parameters) if query_type == "time_weighted_average": return _time_weighted_average_query(parameters_dict) if query_type == "circular_average": parameters_dict["circular_function"] = "average" return _circular_stats_query(parameters_dict) if query_type == "circular_standard_deviation": parameters_dict["circular_function"] = "standard_deviation" return _circular_stats_query(parameters_dict)
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/queries/time_series/_query_builder.py
0.525369
0.241221
_query_builder.py
pypi
import logging import pandas as pd import sys from ._query_builder import _query_builder def get(connection: object, parameters_dict: dict) -> pd.DataFrame: """ An RTDIP interpolation function that is intertwined with the RTDIP Resampling function. The Interpolation function will forward fill, backward fill or linearly interpolate the resampled data depending users specified interpolation method. This function requires the user to input a dictionary of parameters. (See Attributes table below.) Args: connection: Connection chosen by the user (Databricks SQL Connect, PYODBC SQL Connect, TURBODBC SQL Connect) parameters_dict: A dictionary of parameters (see Attributes table below) Attributes: business_unit (str): Business unit of the data region (str): Region asset (str): Asset data_security_level (str): Level of data security data_type (str): Type of the data (float, integer, double, string) tag_names (list): List of tagname or tagnames ["tag_1", "tag_2"] start_date (str): Start date (Either a date in the format YY-MM-DD or a datetime in the format YYY-MM-DDTHH:MM:SS or specify the timezone offset in the format YYYY-MM-DDTHH:MM:SS+zz:zz) end_date (str): End date (Either a date in the format YY-MM-DD or a datetime in the format YYY-MM-DDTHH:MM:SS or specify the timezone offset in the format YYYY-MM-DDTHH:MM:SS+zz:zz) sample_rate (int): (deprecated) Please use time_interval_rate instead. See below. sample_unit (str): (deprecated) Please use time_interval_unit instead. See below. time_interval_rate (str): The time interval rate (numeric input) time_interval_unit (str): The time interval unit (second, minute, day, hour) agg_method (str): Aggregation Method (first, last, avg, min, max) interpolation_method (str): Interpolation method (forward_fill, backward_fill, linear) include_bad_data (bool): Include "Bad" data points with True or remove "Bad" data points with False Returns: DataFrame: A resampled and interpolated dataframe. """ if isinstance(parameters_dict["tag_names"], list) is False: raise ValueError("tag_names must be a list") if "sample_rate" in parameters_dict: logging.warning( "Parameter sample_rate is deprecated and will be removed in v1.0.0. Please use time_interval_rate instead." ) parameters_dict["time_interval_rate"] = parameters_dict["sample_rate"] if "sample_unit" in parameters_dict: logging.warning( "Parameter sample_unit is deprecated and will be removed in v1.0.0. Please use time_interval_unit instead." ) parameters_dict["time_interval_unit"] = parameters_dict["sample_unit"] try: query = _query_builder(parameters_dict, "interpolate") try: cursor = connection.cursor() cursor.execute(query) df = cursor.fetch_all() cursor.close() connection.close() return df except Exception as e: logging.exception("error returning dataframe") raise e except Exception as e: logging.exception("error with interpolate function") raise e
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/queries/time_series/interpolate.py
0.792785
0.585694
interpolate.py
pypi
import sys from typing import Union from importlib_metadata import PackageNotFoundError, version from importlib.util import module_from_spec, spec_from_file_location from pathlib import Path from io import BytesIO from databricks.sdk import WorkspaceClient from databricks.sdk.service.jobs import CreateJob, JobSettings from databricks.sdk.service.compute import Library, PythonPyPiLibrary, MavenLibrary from .interfaces import DeployInterface from ..utilities.pipeline_components import PipelineComponentsGetUtility __name__: str __version__: str __description__: str class DatabricksSDKDeploy(DeployInterface): """ Deploys an RTDIP Pipeline to Databricks Workflows leveraging the Databricks [SDK.](https://docs.databricks.com/dev-tools/sdk-python.html) Deploying an RTDIP Pipeline to Databricks requires only a few additional pieces of information to ensure the RTDIP Pipeline Job can be run in Databricks. This information includes: - **Cluster**: This can be defined a the Job or Task level and includes the size of the cluster to be used for the job - **Task**: The cluster to be used to execute the task, as well as any task scheduling information, if required. All options available in the [Databricks Jobs REST API v2.1](https://docs.databricks.com/dev-tools/api/latest/jobs.html) can be configured in the Databricks classes that have been defined in `rtdip_sdk.pipelines.deploy.models.databricks`, enabling full control of the configuration of the Databricks Workflow : - `CreateJob` - `Task` RTDIP Pipeline Components provide Databricks with all the required Python packages and JARs to execute each component and these will be setup on the Workflow automatically during the Databricks Workflow creation. Example: This example assumes that a PipelineJob has already been defined by a variable called `pipeline_job` ```python from rtdip_sdk.pipelines.deploy import DatabricksSDKDeploy, CreateJob, JobCluster, ClusterSpec, Task, NotebookTask, ComputeSpecKind, AutoScale, RuntimeEngine, DataSecurityMode cluster_list = [] cluster_list.append(JobCluster( job_cluster_key="test_cluster", new_cluster=ClusterSpec( node_type_id="Standard_E4ds_v5", autoscale=AutoScale(min_workers=1, max_workers=3), spark_version="13.2.x-scala2.12", data_security_mode=DataSecurityMode.SINGLE_USER, runtime_engine=RuntimeEngine.PHOTON ) )) task_list = [] task_list.append(Task( task_key="test_task", job_cluster_key="test_cluster", notebook_task=NotebookTask( notebook_path="/path/to/pipeline/rtdip_pipeline.py" ) )) job = CreateJob( name="test_job_rtdip", job_clusters=cluster_list, tasks=task_list ) databricks_job = DatabricksSDKDeploy(databricks_job=job, host="https://test.databricks.net", token="test_token") # Execute the deploy method to create a Workflow in the specified Databricks Environment deploy_result = databricks_job.deploy() # If the job should be executed immediately, execute the `launch` method launch_result = databricks_job.launch() ``` Args: databricks_job (DatabricksJob): Contains Databricks specific information required for deploying the RTDIP Pipeline Job to Databricks, such as cluster and workflow scheduling information. This can be any field in the [Databricks Jobs REST API v2.1](https://docs.databricks.com/dev-tools/api/latest/jobs.html) host (str): Databricks URL token (str): Token for authenticating with Databricks such as a Databricks PAT Token or Azure AD Token workspace_directory (str, optional): Determines the folder location in the Databricks Workspace. Defaults to /rtdip """ def __init__( self, databricks_job: CreateJob, host: str, token: str, workspace_directory: str = "/rtdip", ) -> None: if databricks_job.name is None or databricks_job.name == "": raise ValueError("databricks_job.name cannot be empty") self.databricks_job = databricks_job self.host = host self.token = token self.workspace_directory = workspace_directory def _convert_file_to_binary(self, path) -> BytesIO: with open(path, "rb") as f: return BytesIO(f.read()) def _load_module(self, module_name, path): spec = spec_from_file_location(module_name, path) module = module_from_spec(spec) spec.loader.exec_module(module) sys.modules[module.__name__] = module return module def deploy(self) -> Union[bool, ValueError]: """ Deploys an RTDIP Pipeline Job to Databricks Workflows. The deployment is managed by the Job Name and therefore will overwrite any existing workflow in Databricks with the same name. """ # Add libraries to Databricks Job workspace_client = WorkspaceClient(host=self.host, token=self.token) for task in self.databricks_job.tasks: if task.notebook_task is None and task.spark_python_task is None: return ValueError( "A Notebook or Spark Python Task must be populated for each task in the Databricks Job" ) # NOSONAR if task.notebook_task is not None: module = self._load_module( task.task_key + "file_upload", task.notebook_task.notebook_path ) (task_libraries, spark_configuration) = PipelineComponentsGetUtility( module.__name__ ).execute() workspace_client.workspace.mkdirs(path=self.workspace_directory) path = "{}/{}".format( self.workspace_directory, Path(task.notebook_task.notebook_path).name, ) workspace_client.workspace.upload( path=path, overwrite=True, content=self._convert_file_to_binary( task.notebook_task.notebook_path ), ) task.notebook_task.notebook_path = path else: module = self._load_module( task.task_key + "file_upload", task.spark_python_task.python_file ) (task_libraries, spark_configuration) = PipelineComponentsGetUtility( module ).execute() workspace_client.workspace.mkdirs(path=self.workspace_directory) path = "{}/{}".format( self.workspace_directory, Path(task.spark_python_task.python_file).name, ) workspace_client.workspace.upload( path=path, overwrite=True, content=self._convert_file_to_binary( task.spark_python_task.python_file ), ) task.spark_python_task.python_file = path task.libraries = [] for pypi_library in task_libraries.pypi_libraries: task.libraries.append( Library( pypi=PythonPyPiLibrary( package=pypi_library.to_string(), repo=pypi_library.repo ) ) ) for maven_library in task_libraries.maven_libraries: if not maven_library.group_id in ["io.delta", "org.apache.spark"]: task.libraries.append( Library( maven=MavenLibrary( coordinates=maven_library.to_string(), repo=maven_library.repo, ) ) ) for wheel_library in task_libraries.pythonwheel_libraries: task.libraries.append(Library(whl=wheel_library)) try: rtdip_version = version("rtdip-sdk") task.libraries.append( Library( pypi=PythonPyPiLibrary( package="rtdip-sdk[pipelines]=={}".format(rtdip_version) ) ) ) except PackageNotFoundError as e: task.libraries.append( Library(pypi=PythonPyPiLibrary(package="rtdip-sdk[pipelines]")) ) # Add Spark Configuration to Databricks Job if ( task.new_cluster is None and task.job_cluster_key is None and task.compute_key is None ): return ValueError( "A Cluster or Compute must be specified for each task in the Databricks Job" ) if task.new_cluster is not None: if spark_configuration is not None: if task.new_cluster.spark_conf is None: task.new_cluster.spark_conf = {} task.new_cluster.spark_conf.update(spark_configuration) elif task.job_cluster_key is not None: for job_cluster in self.databricks_job.job_clusters: if job_cluster.job_cluster_key == task.job_cluster_key: if spark_configuration is not None: if job_cluster.new_cluster.spark_conf is None: job_cluster.new_cluster.spark_conf = {} job_cluster.new_cluster.spark_conf.update( spark_configuration ) break elif task.compute_key is not None: for compute in self.databricks_job.compute: if compute.compute_key == task.compute_key: # TODO : Add spark config for compute. Does not seem to be currently available in the Databricks SDK # NOSONAR # compute.spark_conf.update(spark_configuration) break # Create Databricks Job job_found = False for existing_job in workspace_client.jobs.list(name=self.databricks_job.name): new_settings = JobSettings() for key, value in self.databricks_job.__dict__.items(): if key in new_settings.__dict__: setattr(new_settings, key, value) workspace_client.jobs.reset( job_id=existing_job.job_id, new_settings=new_settings ) job_found = True break if job_found == False: workspace_client.jobs.create(**self.databricks_job.__dict__) return True def launch(self): """ Launches an RTDIP Pipeline Job in Databricks Workflows. This will perform the equivalent of a `Run Now` in Databricks Workflows """ workspace_client = WorkspaceClient(host=self.host, token=self.token) job_found = False for existing_job in workspace_client.jobs.list(name=self.databricks_job.name): workspace_client.jobs.run_now(job_id=existing_job.job_id) job_found = True break if job_found == False: raise ValueError("Job not found in Databricks Workflows") return True
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/deploy/databricks.py
0.668988
0.650911
databricks.py
pypi
import logging import time from pyspark.sql import DataFrame from py4j.protocol import Py4JJavaError from ..interfaces import DestinationInterface from ..._pipeline_utils.models import Libraries, SystemType from ..._pipeline_utils.constants import get_default_package class SparkDeltaDestination(DestinationInterface): """ The Spark Delta Destination is used to write data to a Delta table. Args: data (DataFrame): Dataframe to be written to Delta options (dict): Options that can be specified for a Delta Table write operation (See Attributes table below). Further information on the options is available for [batch](https://docs.delta.io/latest/delta-batch.html#write-to-a-table){ target="_blank" } and [streaming](https://docs.delta.io/latest/delta-streaming.html#delta-table-as-a-sink){ target="_blank" }. destination (str): Either the name of the Hive Metastore or Unity Catalog Delta Table **or** the path to the Delta table mode (str): Method of writing to Delta Table - append/overwrite (batch), append/complete (stream) trigger (str): Frequency of the write operation. Specify "availableNow" to execute a trigger once, otherwise specify a time period such as "30 seconds", "5 minutes" query_name (str): Unique name for the query in associated SparkSession Attributes: checkpointLocation (str): Path to checkpoint files. (Streaming) txnAppId (str): A unique string that you can pass on each DataFrame write. (Batch & Streaming) txnVersion (str): A monotonically increasing number that acts as transaction version. (Batch & Streaming) maxRecordsPerFile (int str): Specify the maximum number of records to write to a single file for a Delta Lake table. (Batch) replaceWhere (str): Condition(s) for overwriting. (Batch) partitionOverwriteMode (str): When set to dynamic, overwrites all existing data in each logical partition for which the write will commit new data. Default is static. (Batch) overwriteSchema (bool str): If True, overwrites the schema as well as the table data. (Batch) """ data: DataFrame options: dict destination: str mode: str trigger: str query_name: str def __init__( self, data: DataFrame, options: dict, destination: str, mode: str = "append", trigger="10 seconds", query_name="DeltaDestination", ) -> None: self.data = data self.options = options self.destination = destination self.mode = mode self.trigger = trigger self.query_name = query_name @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYSPARK """ return SystemType.PYSPARK @staticmethod def libraries(): libraries = Libraries() libraries.add_maven_library(get_default_package("spark_delta_core")) return libraries @staticmethod def settings() -> dict: return { "spark.sql.extensions": "io.delta.sql.DeltaSparkSessionExtension", "spark.sql.catalog.spark_catalog": "org.apache.spark.sql.delta.catalog.DeltaCatalog", } def pre_write_validation(self): return True def post_write_validation(self): return True def write_batch(self): """ Writes batch data to Delta. Most of the options provided by the Apache Spark DataFrame write API are supported for performing batch writes on tables. """ try: if "/" in self.destination: return ( self.data.write.format("delta") .mode(self.mode) .options(**self.options) .save(self.destination) ) else: return ( self.data.write.format("delta") .mode(self.mode) .options(**self.options) .saveAsTable(self.destination) ) except Py4JJavaError as e: logging.exception(e.errmsg) raise e except Exception as e: logging.exception(str(e)) raise e def write_stream(self): """ Writes streaming data to Delta. Exactly-once processing is guaranteed """ TRIGGER_OPTION = ( {"availableNow": True} if self.trigger == "availableNow" else {"processingTime": self.trigger} ) try: if "/" in self.destination: query = ( self.data.writeStream.trigger(**TRIGGER_OPTION) .format("delta") .queryName(self.query_name) .outputMode(self.mode) .options(**self.options) .start(self.destination) ) else: query = ( self.data.writeStream.trigger(**TRIGGER_OPTION) .format("delta") .queryName(self.query_name) .outputMode(self.mode) .options(**self.options) .toTable(self.destination) ) while query.isActive: if query.lastProgress: logging.info(query.lastProgress) time.sleep(10) except Py4JJavaError as e: logging.exception(e.errmsg) raise e except Exception as e: logging.exception(str(e)) raise e
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/destinations/spark/delta.py
0.68637
0.589657
delta.py
pypi
import logging import time from pyspark.sql import DataFrame from py4j.protocol import Py4JJavaError from pyspark.sql.functions import to_json, struct from ..interfaces import DestinationInterface from ..._pipeline_utils.models import Libraries, SystemType from ..._pipeline_utils.constants import get_default_package class SparkKafkaDestination(DestinationInterface): """ This Spark destination class is used to write batch or streaming data from Kafka. Required and optional configurations can be found in the Attributes tables below. Additionally, there are more optional configurations which can be found [here.](https://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html){ target="_blank" } For compatibility between Spark and Kafka, the columns in the input dataframe are concatenated into one 'value' column of JSON string. Args: data (DataFrame): Dataframe to be written to Kafka options (dict): A dictionary of Kafka configurations (See Attributes tables below). For more information on configuration options see [here](https://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html){ target="_blank" } trigger (str): Frequency of the write operation. Specify "availableNow" to execute a trigger once, otherwise specify a time period such as "30 seconds", "5 minutes" query_name (str): Unique name for the query in associated SparkSession The following options must be set for the Kafka destination for both batch and streaming queries. Attributes: kafka.bootstrap.servers (A comma-separated list of host︰port): The Kafka "bootstrap.servers" configuration. (Streaming and Batch) The following configurations are optional: Attributes: topic (str):Sets the topic that all rows will be written to in Kafka. This option overrides any topic column that may exist in the data. (Streaming and Batch) includeHeaders (bool): Whether to include the Kafka headers in the row. (Streaming and Batch) """ def __init__( self, data: DataFrame, options: dict, trigger="10 seconds", query_name="KafkaDestination", ) -> None: self.data = data self.options = options self.trigger = trigger self.query_name = query_name @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYSPARK """ return SystemType.PYSPARK @staticmethod def libraries(): spark_libraries = Libraries() spark_libraries.add_maven_library(get_default_package("spark_sql_kafka")) return spark_libraries @staticmethod def settings() -> dict: return {} def pre_write_validation(self): return True def post_write_validation(self): return True def write_batch(self): """ Writes batch data to Kafka. """ try: return ( self.data.select(to_json(struct("*")).alias("value")) .write.format("kafka") .options(**self.options) .save() ) except Py4JJavaError as e: logging.exception(e.errmsg) raise e except Exception as e: logging.exception(str(e)) raise e def write_stream(self): """ Writes steaming data to Kafka. """ try: TRIGGER_OPTION = ( {"availableNow": True} if self.trigger == "availableNow" else {"processingTime": self.trigger} ) query = ( self.data.select(to_json(struct("*")).alias("value")) .writeStream.trigger(**TRIGGER_OPTION) .format("kafka") .options(**self.options) .queryName(self.query_name) .start() ) while query.isActive: if query.lastProgress: logging.info(query.lastProgress) time.sleep(10) except Py4JJavaError as e: logging.exception(e.errmsg) raise e except Exception as e: logging.exception(str(e)) raise e
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/destinations/spark/kafka.py
0.787646
0.63341
kafka.py
pypi
import logging import time from pyspark.sql import DataFrame from py4j.protocol import Py4JJavaError from pyspark.sql.functions import col, struct, to_json from pyspark.sql.types import StringType, BinaryType from ..interfaces import DestinationInterface from ..._pipeline_utils.models import Libraries, SystemType from ..._pipeline_utils.constants import get_default_package class SparkEventhubDestination(DestinationInterface): """ This Spark destination class is used to write batch or streaming data to Eventhubs. Eventhub configurations need to be specified as options in a dictionary. Additionally, there are more optional configurations which can be found [here.](https://github.com/Azure/azure-event-hubs-spark/blob/master/docs/PySpark/structured-streaming-pyspark.md#event-hubs-configuration){ target="_blank" } If using startingPosition or endingPosition make sure to check out **Event Position** section for more details and examples. Args: data (DataFrame): Dataframe to be written to Eventhub options (dict): A dictionary of Eventhub configurations (See Attributes table below). All Configuration options for Eventhubs can be found [here.](https://github.com/Azure/azure-event-hubs-spark/blob/master/docs/PySpark/structured-streaming-pyspark.md#event-hubs-configuration){ target="_blank" } trigger (str): Frequency of the write operation. Specify "availableNow" to execute a trigger once, otherwise specify a time period such as "30 seconds", "5 minutes" query_name (str): Unique name for the query in associated SparkSession Attributes: checkpointLocation (str): Path to checkpoint files. (Streaming) eventhubs.connectionString (str): Eventhubs connection string is required to connect to the Eventhubs service. (Streaming and Batch) eventhubs.consumerGroup (str): A consumer group is a view of an entire eventhub. Consumer groups enable multiple consuming applications to each have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. (Streaming and Batch) eventhubs.startingPosition (JSON str): The starting position for your Structured Streaming job. If a specific EventPosition is not set for a partition using startingPositions, then we use the EventPosition set in startingPosition. If nothing is set in either option, we will begin consuming from the end of the partition. (Streaming and Batch) eventhubs.endingPosition: (JSON str): The ending position of a batch query. This works the same as startingPosition. (Batch) maxEventsPerTrigger (long): Rate limit on maximum number of events processed per trigger interval. The specified total number of events will be proportionally split across partitions of different volume. (Stream) """ def __init__( self, data: DataFrame, options: dict, trigger="10 seconds", query_name="EventhubDestination", ) -> None: self.data = data self.options = options self.trigger = trigger self.query_name = query_name @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYSPARK """ return SystemType.PYSPARK @staticmethod def libraries(): spark_libraries = Libraries() spark_libraries.add_maven_library(get_default_package("spark_azure_eventhub")) return spark_libraries @staticmethod def settings() -> dict: return {} def pre_write_validation(self): return True def post_write_validation(self): return True def prepare_columns(self): if "body" in self.data.columns: if self.data.schema["body"].dataType not in [StringType(), BinaryType()]: try: self.data.withColumn("body", col("body").cast(StringType())) except: raise ValueError("'body' column must be of string or binary type") else: self.data = self.data.withColumn( "body", to_json( struct( [ col(column).alias(column) for column in self.data.columns if column not in ["partitionId", "partitionKey"] ] ) ), ) for column in self.data.schema: if ( column.name in ["partitionId", "partitionKey"] and column.dataType != StringType() ): try: self.data = self.data.withColumn( column.name, col(column.name).cast(StringType()) ) except: raise ValueError(f"Column {column.name} must be of string type") return self.data.select( [ column for column in self.data.columns if column in ["partitionId", "partitionKey", "body"] ] ) def write_batch(self): """ Writes batch data to Eventhubs. """ eventhub_connection_string = "eventhubs.connectionString" try: if eventhub_connection_string in self.options: sc = self.spark.sparkContext self.options[ eventhub_connection_string ] = sc._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt( self.options[eventhub_connection_string] ) df = self.prepare_columns() return df.write.format("eventhubs").options(**self.options).save() except Py4JJavaError as e: logging.exception(e.errmsg) raise e except Exception as e: logging.exception(str(e)) raise e def write_stream(self): """ Writes steaming data to Eventhubs. """ eventhub_connection_string = "eventhubs.connectionString" try: TRIGGER_OPTION = ( {"availableNow": True} if self.trigger == "availableNow" else {"processingTime": self.trigger} ) if eventhub_connection_string in self.options: sc = self.spark.sparkContext self.options[ eventhub_connection_string ] = sc._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt( self.options[eventhub_connection_string] ) df = self.prepare_columns() df = self.data.select( [ column for column in self.data.columns if column in ["partitionId", "partitionKey", "body"] ] ) query = ( df.writeStream.trigger(**TRIGGER_OPTION) .format("eventhubs") .options(**self.options) .queryName(self.query_name) .start() ) while query.isActive: if query.lastProgress: logging.info(query.lastProgress) time.sleep(10) except Py4JJavaError as e: logging.exception(e.errmsg) raise e except Exception as e: logging.exception(str(e)) raise e
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/destinations/spark/eventhub.py
0.8067
0.536981
eventhub.py
pypi
import logging import time import math import requests from requests.adapters import HTTPAdapter from pyspark.sql import DataFrame from pyspark.sql.functions import ( to_json, struct, col, row_number, concat_ws, collect_list, lit, udf, ) from pyspark.sql.window import Window from py4j.protocol import Py4JJavaError from ..interfaces import DestinationInterface from ..._pipeline_utils.models import Libraries, SystemType from ..._pipeline_utils.constants import get_default_package class SparkRestAPIDestination(DestinationInterface): """ The Spark Rest API Destination is used to write data to a Rest API. The payload sent to the API is constructed by converting each row in the DataFrame to Json. !!! Note While it is possible to use the `write_batch` method, it is easy to overwhlem a Rest API with large volumes of data. Consider reducing data volumes when writing to a Rest API in Batch mode to prevent API errors including throtting. Args: data (DataFrame): Dataframe to be merged into a Delta Table options (dict): A dictionary of options for streaming writes url (str): The Rest API Url headers (dict): A dictionary of headers to be provided to the Rest API batch_size (int): The number of DataFrame rows to be used in each Rest API call method (str): The method to be used when calling the Rest API. Allowed values are POST, PATCH and PUT parallelism (int): The number of concurrent calls to be made to the Rest API trigger (str): Frequency of the write operation. Specify "availableNow" to execute a trigger once, otherwise specify a time period such as "30 seconds", "5 minutes" query_name (str): Unique name for the query in associated SparkSession Attributes: checkpointLocation (str): Path to checkpoint files. (Streaming) """ data: DataFrame options: dict url: str headers: dict batch_size: int method: str parallelism: int trigger: str query_name: str def __init__( self, data: DataFrame, options: dict, url: str, headers: dict, batch_size: int, method: str = "POST", parallelism: int = 8, trigger="1 minutes", query_name: str = "DeltaRestAPIDestination", ) -> None: self.data = data self.options = options self.url = url self.headers = headers self.batch_size = batch_size self.method = method self.parallelism = parallelism self.trigger = trigger self.query_name = query_name @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYSPARK """ return SystemType.PYSPARK @staticmethod def libraries(): libraries = Libraries() libraries.add_pypi_library(get_default_package("api_requests")) return libraries @staticmethod def settings() -> dict: return {} def pre_write_validation(self): return True def post_write_validation(self): return True def _pre_batch_records_for_api_call(self, micro_batch_df: DataFrame): batch_count = math.ceil(micro_batch_df.count() / self.batch_size) micro_batch_df = ( micro_batch_df.withColumn("content", to_json(struct(col("*")))) .withColumn("row_number", row_number().over(Window().orderBy(lit("A")))) .withColumn("batch_id", col("row_number") % batch_count) ) return micro_batch_df.groupBy("batch_id").agg( concat_ws(",|", collect_list("content")).alias("payload") ) def _api_micro_batch(self, micro_batch_df: DataFrame, epoch_id=None): # NOSONAR url = self.url method = self.method headers = self.headers @udf("string") def _rest_api_execute(data): session = requests.Session() adapter = HTTPAdapter(max_retries=3) session.mount("http://", adapter) # NOSONAR session.mount("https://", adapter) if method == "POST": response = session.post(url, headers=headers, data=data, verify=False) elif method == "PATCH": response = session.patch(url, headers=headers, data=data, verify=False) elif method == "PUT": response = session.put(url, headers=headers, data=data, verify=False) else: raise Exception("Method {} is not supported".format(method)) # NOSONAR if not (response.status_code == 200 or response.status_code == 201): raise Exception( "Response status : {} .Response message : {}".format( str(response.status_code), response.text ) ) # NOSONAR return str(response.status_code) micro_batch_df.persist() micro_batch_df = self._pre_batch_records_for_api_call(micro_batch_df) micro_batch_df = micro_batch_df.repartition(self.parallelism) ( micro_batch_df.withColumn( "rest_api_response_code", _rest_api_execute(micro_batch_df["payload"]) ).collect() ) micro_batch_df.unpersist() def write_batch(self): """ Writes batch data to a Rest API """ try: return self._api_micro_batch(self.data) except Py4JJavaError as e: logging.exception(e.errmsg) raise e except Exception as e: logging.exception(str(e)) raise e def write_stream(self): """ Writes streaming data to a Rest API """ try: TRIGGER_OPTION = ( {"availableNow": True} if self.trigger == "availableNow" else {"processingTime": self.trigger} ) query = ( self.data.writeStream.trigger(**TRIGGER_OPTION) .foreachBatch(self._api_micro_batch) .queryName(self.query_name) .outputMode("update") .options(**self.options) .start() ) while query.isActive: if query.lastProgress: logging.info(query.lastProgress) time.sleep(10) except Py4JJavaError as e: logging.exception(e.errmsg) raise e except Exception as e: logging.exception(str(e)) raise e
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/destinations/spark/rest_api.py
0.810479
0.39905
rest_api.py
pypi
import logging import time from pyspark.sql import DataFrame, SparkSession from pyspark.sql.functions import col, when, date_format, floor from py4j.protocol import Py4JJavaError from ..interfaces import DestinationInterface from ..spark.delta import SparkDeltaDestination from ..spark.delta_merge import ( SparkDeltaMergeDestination, DeltaMergeCondition, DeltaMergeConditionValues, ) from ..._pipeline_utils.models import Libraries, SystemType from ..._pipeline_utils.constants import get_default_package class ValueTypeConstants: INTEGER_VALUE = "ValueType = 'integer'" FLOAT_VALUE = "ValueType = 'float'" STRING_VALUE = "ValueType = 'string'" class SparkPCDMToDeltaDestination(DestinationInterface): """ The Process Control Data Model written to Delta Args: data (DataFrame): Dataframe to be merged into a Delta Table options (dict): Options that can be specified for a Delta Table read operation (See Attributes table below). Further information on the options is available for [batch](https://docs.delta.io/latest/delta-batch.html#write-to-a-table){ target="_blank" } and [streaming](https://docs.delta.io/latest/delta-streaming.html#delta-table-as-a-sink){ target="_blank" }. destination_float (str): Either the name of the Hive Metastore or Unity Catalog Delta Table **or** the path to the Delta table to store float values. destination_string (Optional str): Either the name of the Hive Metastore or Unity Catalog Delta Table **or** the path to the Delta table to store string values. destination_integer (Optional str): Either the name of the Hive Metastore or Unity Catalog Delta Table **or** the path to the Delta table to store integer values mode (str): Method of writing to Delta Table - append/overwrite (batch), append/complete (stream) trigger (str): Frequency of the write operation. Specify "availableNow" to execute a trigger once, otherwise specify a time period such as "30 seconds", "5 minutes" query_name (str): Unique name for the query in associated SparkSession merge (bool): Use Delta Merge to perform inserts, updates and deletes try_broadcast_join (bool): Attempts to perform a broadcast join in the merge which can leverage data skipping using partition pruning and file pruning automatically. Can fail if dataframe being merged is large and therefore more suitable for streaming merges than batch merges remove_nanoseconds (bool): Removes nanoseconds from the EventTime column and replaces with zeros remove_duplicates (bool: Removes duplicates before writing the data Attributes: checkpointLocation (str): Path to checkpoint files. (Streaming) """ spark: SparkSession data: DataFrame options: dict destination_float: str destination_string: str destination_integer: str mode: str trigger: str query_name: str merge: bool try_broadcast_join: bool remove_nanoseconds: bool remove_duplicates: bool def __init__( self, spark: SparkSession, data: DataFrame, options: dict, destination_float: str, destination_string: str = None, destination_integer: str = None, mode: str = None, trigger="10 seconds", query_name: str = "PCDMToDeltaDestination", merge: bool = True, try_broadcast_join=False, remove_nanoseconds: bool = False, remove_duplicates: bool = True, ) -> None: self.spark = spark self.data = data self.destination_float = destination_float self.destination_string = destination_string self.destination_integer = destination_integer self.options = options self.mode = mode self.trigger = trigger self.query_name = query_name self.merge = merge self.try_broadcast_join = try_broadcast_join self.remove_nanoseconds = remove_nanoseconds self.remove_duplicates = remove_duplicates @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYSPARK """ return SystemType.PYSPARK @staticmethod def libraries(): libraries = Libraries() libraries.add_maven_library(get_default_package("spark_delta_core")) return libraries @staticmethod def settings() -> dict: return {} def pre_write_validation(self): return True def post_write_validation(self): return True def _get_eventdate_string(self, df: DataFrame) -> str: dates_df = df.select("EventDate").distinct() dates_df = dates_df.select( date_format("EventDate", "yyyy-MM-dd").alias("EventDate") ) dates_list = list(dates_df.toPandas()["EventDate"]) return str(dates_list).replace("[", "").replace("]", "") def _write_delta_merge(self, df: DataFrame, destination: str): df = df.select( "EventDate", "TagName", "EventTime", "Status", "Value", "ChangeType" ) when_matched_update_list = [ DeltaMergeConditionValues( condition="(source.ChangeType IN ('insert', 'update', 'upsert')) AND ((source.Status != target.Status) OR (source.Value != target.Value))", values={ "EventDate": "source.EventDate", "TagName": "source.TagName", "EventTime": "source.EventTime", "Status": "source.Status", "Value": "source.Value", }, ) ] when_matched_delete_list = [ DeltaMergeCondition(condition="source.ChangeType = 'delete'") ] when_not_matched_insert_list = [ DeltaMergeConditionValues( condition="(source.ChangeType IN ('insert', 'update', 'upsert'))", values={ "EventDate": "source.EventDate", "TagName": "source.TagName", "EventTime": "source.EventTime", "Status": "source.Status", "Value": "source.Value", }, ) ] merge_condition = "source.EventDate = target.EventDate AND source.TagName = target.TagName AND source.EventTime = target.EventTime" perform_merge = True if self.try_broadcast_join != True: eventdate_string = self._get_eventdate_string(df) if eventdate_string == None or eventdate_string == "": perform_merge = False else: merge_condition = ( "target.EventDate in ({}) AND ".format(eventdate_string) + merge_condition ) if perform_merge == True: SparkDeltaMergeDestination( spark=self.spark, data=df, destination=destination, options=self.options, merge_condition=merge_condition, when_matched_update_list=when_matched_update_list, when_matched_delete_list=when_matched_delete_list, when_not_matched_insert_list=when_not_matched_insert_list, try_broadcast_join=self.try_broadcast_join, trigger=self.trigger, query_name=self.query_name, ).write_batch() def _write_delta_batch(self, df: DataFrame, destination: str): if self.merge == True: if "EventDate" not in df.columns: df = df.withColumn("EventDate", date_format("EventTime", "yyyy-MM-dd")) self._write_delta_merge( df.filter(col("ChangeType").isin("insert", "update", "upsert")), destination, ) self._write_delta_merge( df.filter(col("ChangeType") == "delete"), destination ) else: df = df.select("TagName", "EventTime", "Status", "Value") SparkDeltaDestination( data=df, destination=destination, options=self.options, mode=self.mode, trigger=self.trigger, query_name=self.query_name, ).write_batch() def _write_data_by_type(self, df: DataFrame): if self.merge == True: df = df.withColumn( "ChangeType", when(df["ChangeType"].isin("insert", "update"), "upsert").otherwise( df["ChangeType"] ), ) if self.remove_nanoseconds == True: df = df.withColumn( "EventTime", (floor(col("EventTime").cast("double") * 1000) / 1000).cast( "timestamp" ), ) if self.remove_duplicates == True: df = df.drop_duplicates(["TagName", "EventTime", "ChangeType"]) float_df = df.filter(ValueTypeConstants.FLOAT_VALUE).withColumn( "Value", col("Value").cast("float") ) self._write_delta_batch(float_df, self.destination_float) if self.destination_string != None: string_df = df.filter(ValueTypeConstants.STRING_VALUE) self._write_delta_batch(string_df, self.destination_string) if self.destination_integer != None: integer_df = df.filter(ValueTypeConstants.INTEGER_VALUE).withColumn( "Value", col("Value").cast("integer") ) self._write_delta_batch(integer_df, self.destination_integer) def _write_stream_microbatches(self, df: DataFrame, epoch_id=None): # NOSONAR df.persist() self._write_data_by_type(df) df.unpersist() def write_batch(self): """ Writes Process Control Data Model data to Delta """ try: if self.try_broadcast_join != True: self.data.persist() self._write_data_by_type(self.data) if self.try_broadcast_join != True: self.data.unpersist() except Py4JJavaError as e: logging.exception(e.errmsg) raise e except Exception as e: logging.exception(str(e)) raise e def write_stream(self): """ Writes streaming Process Control Data Model data to Delta using foreachBatch """ try: TRIGGER_OPTION = ( {"availableNow": True} if self.trigger == "availableNow" else {"processingTime": self.trigger} ) if self.merge == True: query = ( self.data.writeStream.trigger(**TRIGGER_OPTION) .format("delta") .foreachBatch(self._write_stream_microbatches) .queryName(self.query_name) .outputMode("update") .options(**self.options) .start() ) else: delta_float = SparkDeltaDestination( data=self.data.select("TagName", "EventTime", "Status", "Value") .filter(ValueTypeConstants.FLOAT_VALUE) .withColumn("Value", col("Value").cast("float")), destination=self.destination_float, options=self.options, mode=self.mode, trigger=self.trigger, query_name=self.query_name + "_float", ) delta_float.write_stream() if self.destination_string != None: delta_string = SparkDeltaDestination( data=self.data.select( "TagName", "EventTime", "Status", "Value" ).filter(ValueTypeConstants.STRING_VALUE), destination=self.destination_string, options=self.options, mode=self.mode, trigger=self.trigger, query_name=self.query_name + "_string", ) delta_string.write_stream() if self.destination_integer != None: delta_integer = SparkDeltaDestination( data=self.data.select( "TagName", "EventTime", "Status", "Value" ).filter(ValueTypeConstants.INTEGER_VALUE), destination=self.destination_integer, options=self.options, mode=self.mode, trigger=self.trigger, query_name=self.query_name + "_integer", ) delta_integer.write_stream() while self.spark.streams.active != []: for query in self.spark.streams.active: if query.lastProgress: logging.info( "{}: {}".format(query.name, query.lastProgress) ) time.sleep(10) except Py4JJavaError as e: logging.exception(e.errmsg) raise e except Exception as e: logging.exception(str(e)) raise e
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/destinations/spark/pcdm_to_delta.py
0.692538
0.588889
pcdm_to_delta.py
pypi
import logging import time from pyspark.sql import DataFrame from py4j.protocol import Py4JJavaError from ..interfaces import DestinationInterface from ..._pipeline_utils.models import Libraries, SystemType class SparkKinesisDestination(DestinationInterface): """ This Kinesis destination class is used to write batch or streaming data to Kinesis. Kinesis configurations need to be specified as options in a dictionary. Args: data (DataFrame): Dataframe to be written to Delta options (dict): A dictionary of Kinesis configurations (See Attributes table below). All Configuration options for Kinesis can be found [here.](https://github.com/qubole/kinesis-sql#kinesis-sink-configuration){ target="_blank" } mode (str): Method of writing to Kinesis - append, complete, update trigger (str): Frequency of the write operation. Specify "availableNow" to execute a trigger once, otherwise specify a time period such as "30 seconds", "5 minutes" query_name (str): Unique name for the query in associated SparkSession Attributes: endpointUrl (str): Endpoint of the kinesis stream. awsAccessKey (str): AWS access key. awsSecretKey (str): AWS secret access key corresponding to the access key. streamName (List[str]): Name of the streams in Kinesis to write to. """ def __init__( self, data: DataFrame, options: dict, mode: str = "update", trigger: str = "10 seconds", query_name="KinesisDestination", ) -> None: self.data = data self.options = options self.mode = mode self.trigger = trigger self.query_name = query_name @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYSPARK_DATABRICKS """ return SystemType.PYSPARK_DATABRICKS @staticmethod def libraries(): spark_libraries = Libraries() return spark_libraries @staticmethod def settings() -> dict: return {} def pre_write_validation(self): return True def post_write_validation(self): return True def write_batch(self): """ Writes batch data to Kinesis. """ try: return self.data.write.format("kinesis").options(**self.options).save() except Py4JJavaError as e: logging.exception(e.errmsg) raise e except Exception as e: logging.exception(str(e)) raise e def write_stream(self): """ Writes steaming data to Kinesis. """ try: TRIGGER_OPTION = ( {"availableNow": True} if self.trigger == "availableNow" else {"processingTime": self.trigger} ) query = ( self.data.writeStream.trigger(**TRIGGER_OPTION) .format("kinesis") .outputMode(self.mode) .options(**self.options) .queryName(self.query_name) .start() ) while query.isActive: if query.lastProgress: logging.info(query.lastProgress) time.sleep(10) except Py4JJavaError as e: logging.exception(e.errmsg) raise e except Exception as e: logging.exception(str(e)) raise e
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/destinations/spark/kinesis.py
0.721056
0.540742
kinesis.py
pypi
import logging import time from typing import List, Optional, Union from pydantic import BaseModel from pyspark.sql.functions import broadcast from pyspark.sql import DataFrame, SparkSession from py4j.protocol import Py4JJavaError from delta.tables import DeltaTable, DeltaMergeBuilder from ..interfaces import DestinationInterface from ..._pipeline_utils.models import Libraries, SystemType from ...._sdk_utils.compare_versions import _package_version_meets_minimum from ..._pipeline_utils.constants import get_default_package class DeltaMergeConditionValues(BaseModel): condition: Optional[str] values: Union[dict, str] class DeltaMergeCondition(BaseModel): condition: Optional[str] class SparkDeltaMergeDestination(DestinationInterface): """ The Spark Delta Merge Destination is used to merge data into a Delta table. Refer to this [documentation](https://docs.delta.io/latest/delta-update.html#upsert-into-a-table-using-merge&language-python) for more information about Delta Merge. Args: data (DataFrame): Dataframe to be merged into a Delta Table destination (str): Either the name of the Hive Metastore or Unity Catalog Delta Table **or** the path to the Delta table options (dict): Options that can be specified for a Delta Table read operation (See Attributes table below). Further information on the options is available for [batch](https://docs.delta.io/latest/delta-batch.html#write-to-a-table){ target="_blank" } and [streaming](https://docs.delta.io/latest/delta-streaming.html#delta-table-as-a-sink){ target="_blank" }. merge_condition (str): Condition for matching records between dataframe and delta table. Reference Dataframe columns as `source` and Delta Table columns as `target`. For example `source.id = target.id`. when_matched_update_list (list[DeltaMergeConditionValues]): Conditions(optional) and values to be used when updating rows that match the `merge_condition`. Specify `*` for Values if all columns from Dataframe should be inserted. when_matched_delete_list (list[DeltaMergeCondition]): Conditions(optional) to be used when deleting rows that match the `merge_condition`. when_not_matched_insert_list (list[DeltaMergeConditionValues]): Conditions(optional) and values to be used when inserting rows that do not match the `merge_condition`. Specify `*` for Values if all columns from Dataframe should be inserted. when_not_matched_by_source_update_list (list[DeltaMergeConditionValues]): Conditions(optional) and values to be used when updating rows that do not match the `merge_condition`. when_not_matched_by_source_delete_list (list[DeltaMergeCondition]): Conditions(optional) to be used when deleting rows that do not match the `merge_condition`. try_broadcast_join (bool): Attempts to perform a broadcast join in the merge which can leverage data skipping using partition pruning and file pruning automatically. Can fail if dataframe being merged is large and therefore more suitable for streaming merges than batch merges trigger (str): Frequency of the write operation. Specify "availableNow" to execute a trigger once, otherwise specify a time period such as "30 seconds", "5 minutes" query_name (str): Unique name for the query in associated SparkSession Attributes: checkpointLocation (str): Path to checkpoint files. (Streaming) """ spark: SparkSession data: DataFrame destination: str options: dict merge_condition: str when_matched_update_list: List[DeltaMergeConditionValues] when_matched_delete_list: List[DeltaMergeCondition] when_not_matched_insert_list: List[DeltaMergeConditionValues] when_not_matched_by_source_update_list: List[DeltaMergeConditionValues] when_not_matched_by_source_delete_list: List[DeltaMergeCondition] try_broadcast_join: bool trigger: str query_name: str def __init__( self, spark: SparkSession, data: DataFrame, destination: str, options: dict, merge_condition: str, when_matched_update_list: List[DeltaMergeConditionValues] = None, when_matched_delete_list: List[DeltaMergeCondition] = None, when_not_matched_insert_list: List[DeltaMergeConditionValues] = None, when_not_matched_by_source_update_list: List[DeltaMergeConditionValues] = None, when_not_matched_by_source_delete_list: List[DeltaMergeCondition] = None, try_broadcast_join: bool = False, trigger="10 seconds", query_name: str = "DeltaMergeDestination", ) -> None: self.spark = spark self.data = data self.destination = destination self.options = options self.merge_condition = merge_condition self.when_matched_update_list = ( [] if when_matched_update_list is None else when_matched_update_list ) self.when_matched_delete_list = ( [] if when_matched_delete_list is None else when_matched_delete_list ) self.when_not_matched_insert_list = ( [] if when_not_matched_insert_list is None else when_not_matched_insert_list ) if ( isinstance(when_not_matched_by_source_update_list, list) and len(when_not_matched_by_source_update_list) > 0 ): _package_version_meets_minimum("delta-spark", "2.3.0") self.when_not_matched_by_source_update_list = ( [] if when_not_matched_by_source_update_list is None else when_not_matched_by_source_update_list ) if ( isinstance(when_not_matched_by_source_delete_list, list) and len(when_not_matched_by_source_delete_list) > 0 ): _package_version_meets_minimum("delta-spark", "2.3.0") self.when_not_matched_by_source_delete_list = ( [] if when_not_matched_by_source_delete_list is None else when_not_matched_by_source_delete_list ) self.try_broadcast_join = try_broadcast_join self.trigger = trigger self.query_name = query_name @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYSPARK """ return SystemType.PYSPARK @staticmethod def libraries(): libraries = Libraries() libraries.add_maven_library(get_default_package("spark_delta_core")) return libraries @staticmethod def settings() -> dict: return { "spark.sql.extensions": "io.delta.sql.DeltaSparkSessionExtension", "spark.sql.catalog.spark_catalog": "org.apache.spark.sql.delta.catalog.DeltaCatalog", "spark.databricks.delta.schema.autoMerge.enabled": "true", } def pre_write_validation(self): return True def post_write_validation(self): return True def _delta_merge_builder( self, df: DataFrame, try_broadcast_join: bool ) -> DeltaMergeBuilder: if "/" in self.destination: delta_table = DeltaTable.forPath(self.spark, self.destination) else: delta_table = DeltaTable.forName(self.spark, self.destination) if try_broadcast_join == True: delta_merge_builder = delta_table.alias("target").merge( source=broadcast(df).alias("source"), condition=self.merge_condition ) else: delta_merge_builder = delta_table.alias("target").merge( source=df.alias("source"), condition=self.merge_condition ) for when_matched_update in self.when_matched_update_list: if when_matched_update.values == "*": delta_merge_builder = delta_merge_builder.whenMatchedUpdateAll( condition=when_matched_update.condition, ) else: delta_merge_builder = delta_merge_builder.whenMatchedUpdate( condition=when_matched_update.condition, set=when_matched_update.values, ) for when_matched_delete in self.when_matched_delete_list: delta_merge_builder = delta_merge_builder.whenMatchedDelete( condition=when_matched_delete.condition, ) for when_not_matched_insert in self.when_not_matched_insert_list: if when_not_matched_insert.values == "*": delta_merge_builder = delta_merge_builder.whenNotMatchedInsertAll( condition=when_not_matched_insert.condition, ) else: delta_merge_builder = delta_merge_builder.whenNotMatchedInsert( condition=when_not_matched_insert.condition, values=when_not_matched_insert.values, ) for ( when_not_matched_by_source_update ) in self.when_not_matched_by_source_update_list: delta_merge_builder = delta_merge_builder.whenNotMatchedBySourceUpdate( condition=when_not_matched_by_source_update.condition, set=when_not_matched_by_source_update.values, ) for ( when_not_matched_by_source_delete ) in self.when_not_matched_by_source_delete_list: delta_merge_builder = delta_merge_builder.whenNotMatchedBySourceDelete( condition=when_not_matched_by_source_delete.condition, ) return delta_merge_builder def _stream_merge_micro_batch( self, micro_batch_df: DataFrame, epoch_id=None ): # NOSONAR micro_batch_df.persist() retry_delta_merge = False if self.try_broadcast_join == True: try: delta_merge = self._delta_merge_builder( micro_batch_df, self.try_broadcast_join ) delta_merge.execute() except Exception as e: if "SparkOutOfMemoryError" in str(e): retry_delta_merge = True else: raise e if self.try_broadcast_join == False or retry_delta_merge == True: delta_merge = self._delta_merge_builder(micro_batch_df, False) delta_merge.execute() micro_batch_df.unpersist() def write_batch(self): """ Merges batch data into a Delta Table. """ try: delta_merge = self._delta_merge_builder(self.data, self.try_broadcast_join) return delta_merge.execute() except Py4JJavaError as e: logging.exception(e.errmsg) raise e except Exception as e: logging.exception(str(e)) raise e def write_stream(self): """ Merges streaming data to Delta using foreachBatch """ TRIGGER_OPTION = ( {"availableNow": True} if self.trigger == "availableNow" else {"processingTime": self.trigger} ) try: query = ( self.data.writeStream.trigger(**TRIGGER_OPTION) .format("delta") .foreachBatch(self._stream_merge_micro_batch) .queryName(self.query_name) .outputMode("update") .options(**self.options) .start() ) while query.isActive: if query.lastProgress: logging.info(query.lastProgress) time.sleep(10) except Py4JJavaError as e: logging.exception(e.errmsg) raise e except Exception as e: logging.exception(str(e)) raise e
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/destinations/spark/delta_merge.py
0.843251
0.593167
delta_merge.py
pypi
import logging import time import pandas as pd from deltalake import write_deltalake, DeltaTable from typing import Literal import pyarrow as pa import polars as pl from polars import LazyFrame from typing import Callable from ..interfaces import DestinationInterface from ..._pipeline_utils.models import Libraries, SystemType from ..._pipeline_utils.constants import get_default_package class PythonDeltaDestination(DestinationInterface): """ The Python Delta Destination is used to write data to a Delta table from a Polars LazyFrame. Args: data (LazyFrame): Polars LazyFrame to be written to Delta path (str): Path to Delta table to be written to; either local or [remote](https://delta-io.github.io/delta-rs/python/usage.html#loading-a-delta-table){ target="_blank" }. **Locally** if the Table does't exist one will be created, but to write to AWS or Azure, you must have an existing Delta Table options (Optional dict): Used if writing to a remote location. For AWS use format {"aws_access_key_id": "<>", "aws_secret_access_key": "<>"}. For Azure use format {"azure_storage_account_name": "storageaccountname", "azure_storage_access_key": "<>"} mode (Literal['error', 'append', 'overwrite', 'ignore']): Defaults to error if table exists, 'ignore' won't write anything if table exists overwrite_schema (bool): If True will allow for the table schema to be overwritten delta_write_options (dict): Options when writing to a Delta table. See [here](https://delta-io.github.io/delta-rs/python/api_reference.html#writing-deltatables){ target="_blank" } for all options """ data: LazyFrame path: str options: dict mode: Literal["error", "append", "overwrite", "ignore"] overwrite_schema: bool delta_write_options: bool def __init__( self, data: LazyFrame, path: str, options: dict = None, mode: Literal["error", "append", "overwrite", "ignore"] = "error", overwrite_schema: bool = False, delta_write_options: bool = False, query_name=None, ) -> None: self.data = data self.path = path self.options = options self.mode = mode self.overwrite_schema = overwrite_schema self.delta_write_options = delta_write_options @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYTHON """ return SystemType.PYTHON @staticmethod def libraries(): libraries = Libraries() return libraries @staticmethod def settings() -> dict: return {} def pre_write_validation(self): return True def post_write_validation(self): return True def write_batch(self): """ Writes batch data to Delta without using Spark. """ if isinstance(self.data, pl.LazyFrame): df = self.data.collect() df.write_delta( self.path, mode=self.mode, overwrite_schema=self.overwrite_schema, storage_options=self.options, delta_write_options=self.delta_write_options, ) else: raise ValueError( "Data must be a Polars LazyFrame. See https://pola-rs.github.io/polars/py-polars/html/reference/lazyframe/index.html" ) def write_stream(self): """ Raises: NotImplementedError: Writing to a Delta table using Python is only possible for batch writes. To perform a streaming read, use the write_stream method of the SparkDeltaDestination component. """ raise NotImplementedError( "Writing to a Delta table using Python is only possible for batch writes. To perform a streaming read, use the write_stream method of the SparkDeltaDestination component" )
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/destinations/python/delta.py
0.82347
0.33939
delta.py
pypi
from semver.version import Version from .models import MavenLibrary, PyPiLibrary from ..._sdk_utils.compare_versions import ( _get_python_package_version, _get_package_version, ) def get_default_package(package_name): delta_spark_artifact_id = "delta-core_2.12" if ( Version.compare( _get_python_package_version("delta-spark"), Version.parse("3.0.0") ) >= 0 ): delta_spark_artifact_id = "delta-spark_2.12" DEFAULT_PACKAGES = { "spark_delta_core": MavenLibrary( group_id="io.delta", artifact_id=delta_spark_artifact_id, version=_get_package_version("delta-spark"), ), "spark_delta_sharing": MavenLibrary( group_id="io.delta", artifact_id="delta-sharing-spark_2.12", version="0.6.3" ), "spark_azure_eventhub": MavenLibrary( group_id="com.microsoft.azure", artifact_id="azure-eventhubs-spark_2.12", version="2.3.22", ), "spark_sql_kafka": MavenLibrary( group_id="org.apache.spark", artifact_id="spark-sql-kafka-0-10_2.12", version=_get_package_version("pyspark"), ), "spark_remote": MavenLibrary( group_id="org.apache.spark", artifact_id="spark-connect_2.12", version=_get_package_version("pyspark"), ), "rtdip_sdk": PyPiLibrary(name="rtdip_sdk", version="0.5.1"), "azure_adls_gen_2": PyPiLibrary( name="azure-storage-file-datalake", version="12.10.1" ), "azure_key_vault_secret": PyPiLibrary( name="azure-keyvault-secrets", version="4.7.0" ), "aws_boto3": PyPiLibrary(name="boto3", version="1.26.118"), "hashicorp_vault": PyPiLibrary(name="hvac", version="1.1.0"), "api_requests": PyPiLibrary(name="requests", version="2.30.0"), "pyarrow": PyPiLibrary(name="pyarrow", version="12.0.0"), "pandas": PyPiLibrary(name="pandas", version="2.0.1"), } return DEFAULT_PACKAGES[package_name]
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/_pipeline_utils/constants.py
0.509276
0.173638
constants.py
pypi
from pyspark.sql.types import ( StructType, StructField, DoubleType, StringType, IntegerType, TimestampType, ) WEATHER_FORECAST_SCHEMA = StructType( [ StructField("Latitude", DoubleType(), True), StructField("Longitude", DoubleType(), True), StructField("Class", StringType(), True), StructField("ExpireTimeGmt", IntegerType(), True), StructField("FcstValid", IntegerType(), True), StructField("FcstValidLocal", StringType(), True), StructField("Num", IntegerType(), True), StructField("DayInd", StringType(), True), StructField("Temp", IntegerType(), True), StructField("Dewpt", IntegerType(), True), StructField("Hi", IntegerType(), True), StructField("Wc", IntegerType(), True), StructField("FeelsLike", IntegerType(), True), StructField("IconExtd", IntegerType(), True), StructField("Wxman", StringType(), True), StructField("IconCode", IntegerType(), True), StructField("Dow", StringType(), True), StructField("Phrase12Char", StringType(), True), StructField("Phrase22Char", StringType(), True), StructField("Phrase32Char", StringType(), True), StructField("SubphrasePt1", StringType(), True), StructField("SubphrasePt2", StringType(), True), StructField("SubphrasePt3", StringType(), True), StructField("Pop", StringType(), True), StructField("PrecipType", StringType(), True), StructField("Qpf", DoubleType(), True), StructField("SnowQpf", DoubleType(), True), StructField("Rh", IntegerType(), True), StructField("Wspd", IntegerType(), True), StructField("Wdir", IntegerType(), True), StructField("WdirCardinal", StringType(), True), StructField("Gust", DoubleType(), True), StructField("Clds", IntegerType(), True), StructField("Vis", DoubleType(), True), StructField("Mslp", DoubleType(), True), StructField("UvIndexRaw", DoubleType(), True), StructField("UvIndex", IntegerType(), True), StructField("UvWarning", IntegerType(), True), StructField("UvDesc", StringType(), True), StructField("GolfIndex", DoubleType(), True), StructField("GolfCategory", StringType(), True), StructField("Severity", IntegerType(), True), ] ) WEATHER_DATA_MODEL = StructType( [ StructField("Latitude", DoubleType(), False), StructField("Longitude", DoubleType(), False), StructField("WeatherDay", StringType(), False), StructField("WeatherHour", IntegerType(), False), StructField("WeatherTimezoneOffset", StringType(), False), StructField("WeatherType", StringType(), False), StructField("ProcessedDate", TimestampType(), False), StructField("Temperature", DoubleType(), True), StructField("DewPoint", DoubleType(), True), StructField("Humidity", DoubleType(), True), StructField("HeatIndex", DoubleType(), True), StructField("WindChill", DoubleType(), True), StructField("WindDirection", DoubleType(), True), StructField("WindSpeed", DoubleType(), True), StructField("CloudCover", DoubleType(), True), StructField("WetBulbTemp", StringType(), True), StructField("SolarIrradiance", StringType(), True), StructField("Precipitation", DoubleType(), True), StructField("DayOrNight", StringType(), True), StructField("DayOfWeek", StringType(), True), StructField("WindGust", IntegerType(), True), StructField("MslPressure", DoubleType(), True), StructField("ForecastDayNum", IntegerType(), True), StructField("PropOfPrecip", IntegerType(), True), StructField("PrecipType", StringType(), True), StructField("SnowAccumulation", DoubleType(), True), StructField("UvIndex", DoubleType(), True), StructField("Visibility", DoubleType(), True), ] )
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/_pipeline_utils/weather.py
0.740831
0.377914
weather.py
pypi
import struct import uuid from typing import cast, List, Callable from datetime import datetime from pyspark.sql.functions import udf from pyspark.sql.types import MapType, StringType SYSTEM_PROPERTIES = { "x-opt-sequence-number": b"\x52", "x-opt-offset": b"\xa1", "x-opt-partition-key": b"\xa1", "x-opt-enqueued-time": b"\x83", "message-id": b"\xa1", "user-id": b"\xa1", "to": b"\xa1", "subject": b"\xa1", "reply-to": b"\xa1", "correlation-id": b"\xa1", "content-type": b"\xa1", "content-encoding": b"\xa1", "absolute-expiry-time": b"\x83", "creation-time": b"\x83", "group-id": b"\xa1", "group-sequence": b"\xa1", "reply-to-group-id": b"\xa1", } c_unsigned_char = struct.Struct(">B") c_signed_char = struct.Struct(">b") c_unsigned_short = struct.Struct(">H") c_signed_short = struct.Struct(">h") c_unsigned_int = struct.Struct(">I") c_signed_int = struct.Struct(">i") c_unsigned_long = struct.Struct(">L") c_unsigned_long_long = struct.Struct(">Q") c_signed_long_long = struct.Struct(">q") c_float = struct.Struct(">f") c_double = struct.Struct(">d") def _decode_null(buffer): return buffer, None def _decode_true(buffer): return buffer, True def _decode_false(buffer): return buffer, False def _decode_zero(buffer): return buffer, 0 def _decode_empty(buffer): return buffer, [] def _decode_boolean(buffer): return buffer[1:], buffer[:1] == b"\x01" def _decode_ubyte(buffer): return buffer[1:], buffer[0] def _decode_ushort(buffer): return buffer[2:], c_unsigned_short.unpack(buffer[:2])[0] def _decode_uint_small(buffer): return buffer[1:], buffer[0] def _decode_uint_large(buffer): return buffer[4:], c_unsigned_int.unpack(buffer[:4])[0] def _decode_ulong_small(buffer): return buffer[1:], buffer[0] def _decode_ulong_large(buffer): return buffer[8:], c_unsigned_long_long.unpack(buffer[:8])[0] def _decode_byte(buffer): return buffer[1:], c_signed_char.unpack(buffer[:1])[0] def _decode_short(buffer): return buffer[2:], c_signed_short.unpack(buffer[:2])[0] def _decode_int_small(buffer): return buffer[1:], c_signed_char.unpack(buffer[:1])[0] def _decode_int_large(buffer): return buffer[4:], c_signed_int.unpack(buffer[:4])[0] def _decode_long_small(buffer): return buffer[1:], c_signed_char.unpack(buffer[:1])[0] def _decode_long_large(buffer): return buffer[8:], c_signed_long_long.unpack(buffer[:8])[0] def _decode_float(buffer): return buffer[4:], c_float.unpack(buffer[:4])[0] def _decode_double(buffer): return buffer[8:], c_double.unpack(buffer[:8])[0] def _decode_timestamp(buffer): return buffer[8:], c_signed_long_long.unpack(buffer[:8])[0] def _decode_uuid(buffer): return buffer[16:], uuid.UUID(bytes=buffer[:16].tobytes()) def _decode_binary_small(buffer): length_index = buffer[0] + 1 return buffer[length_index:], buffer[1:length_index].tobytes() def _decode_binary_large(buffer): length_index = c_unsigned_long.unpack(buffer[:4])[0] + 4 return buffer[length_index:], buffer[4:length_index].tobytes() def _decode_list_small(buffer): count = buffer[1] buffer = buffer[2:] values = [None] * count for i in range(count): buffer, values[i] = _DECODE_BY_CONSTRUCTOR[buffer[0]](buffer[1:]) return buffer, values def _decode_list_large(buffer): count = c_unsigned_long.unpack(buffer[4:8])[0] buffer = buffer[8:] values = [None] * count for i in range(count): buffer, values[i] = _DECODE_BY_CONSTRUCTOR[buffer[0]](buffer[1:]) return buffer, values def _decode_map_small(buffer): count = int(buffer[1] / 2) buffer = buffer[2:] values = {} for _ in range(count): buffer, key = _DECODE_BY_CONSTRUCTOR[buffer[0]](buffer[1:]) buffer, value = _DECODE_BY_CONSTRUCTOR[buffer[0]](buffer[1:]) values[key] = value return buffer, values def _decode_map_large(buffer): count = int(c_unsigned_long.unpack(buffer[4:8])[0] / 2) buffer = buffer[8:] values = {} for _ in range(count): buffer, key = _DECODE_BY_CONSTRUCTOR[buffer[0]](buffer[1:]) buffer, value = _DECODE_BY_CONSTRUCTOR[buffer[0]](buffer[1:]) values[key] = value return buffer, values def _decode_array_small(buffer): count = buffer[1] # Ignore first byte (size) and just rely on count if count: subconstructor = buffer[2] buffer = buffer[3:] values = [None] * count for i in range(count): buffer, values[i] = _DECODE_BY_CONSTRUCTOR[subconstructor](buffer) return buffer, values return buffer[2:], [] def _decode_array_large(buffer): count = c_unsigned_long.unpack(buffer[4:8])[0] if count: subconstructor = buffer[8] buffer = buffer[9:] values = [None] * count for i in range(count): buffer, values[i] = _DECODE_BY_CONSTRUCTOR[subconstructor](buffer) return buffer, values return buffer[8:], [] _COMPOSITES = { 35: "received", 36: "accepted", 37: "rejected", 38: "released", 39: "modified", } def _decode_described(buffer): composite_type = buffer[0] buffer, descriptor = _DECODE_BY_CONSTRUCTOR[composite_type](buffer[1:]) buffer, value = _DECODE_BY_CONSTRUCTOR[buffer[0]](buffer[1:]) try: composite_type = cast(int, _COMPOSITES[descriptor]) return buffer, {composite_type: value} except KeyError: return buffer, value _DECODE_BY_CONSTRUCTOR: List[Callable] = cast(List[Callable], [None] * 256) _DECODE_BY_CONSTRUCTOR[0] = _decode_described _DECODE_BY_CONSTRUCTOR[64] = _decode_null _DECODE_BY_CONSTRUCTOR[65] = _decode_true _DECODE_BY_CONSTRUCTOR[66] = _decode_false _DECODE_BY_CONSTRUCTOR[67] = _decode_zero _DECODE_BY_CONSTRUCTOR[68] = _decode_zero _DECODE_BY_CONSTRUCTOR[69] = _decode_empty _DECODE_BY_CONSTRUCTOR[80] = _decode_ubyte _DECODE_BY_CONSTRUCTOR[81] = _decode_byte _DECODE_BY_CONSTRUCTOR[82] = _decode_uint_small _DECODE_BY_CONSTRUCTOR[83] = _decode_ulong_small _DECODE_BY_CONSTRUCTOR[84] = _decode_int_small _DECODE_BY_CONSTRUCTOR[85] = _decode_long_small _DECODE_BY_CONSTRUCTOR[86] = _decode_boolean _DECODE_BY_CONSTRUCTOR[96] = _decode_ushort _DECODE_BY_CONSTRUCTOR[97] = _decode_short _DECODE_BY_CONSTRUCTOR[112] = _decode_uint_large _DECODE_BY_CONSTRUCTOR[113] = _decode_int_large _DECODE_BY_CONSTRUCTOR[114] = _decode_float _DECODE_BY_CONSTRUCTOR[128] = _decode_ulong_large _DECODE_BY_CONSTRUCTOR[129] = _decode_long_large _DECODE_BY_CONSTRUCTOR[130] = _decode_double _DECODE_BY_CONSTRUCTOR[131] = _decode_timestamp _DECODE_BY_CONSTRUCTOR[152] = _decode_uuid _DECODE_BY_CONSTRUCTOR[160] = _decode_binary_small _DECODE_BY_CONSTRUCTOR[161] = _decode_binary_small _DECODE_BY_CONSTRUCTOR[163] = _decode_binary_small _DECODE_BY_CONSTRUCTOR[176] = _decode_binary_large _DECODE_BY_CONSTRUCTOR[177] = _decode_binary_large _DECODE_BY_CONSTRUCTOR[179] = _decode_binary_large _DECODE_BY_CONSTRUCTOR[192] = _decode_list_small _DECODE_BY_CONSTRUCTOR[193] = _decode_map_small _DECODE_BY_CONSTRUCTOR[208] = _decode_list_large _DECODE_BY_CONSTRUCTOR[209] = _decode_map_large _DECODE_BY_CONSTRUCTOR[224] = _decode_array_small _DECODE_BY_CONSTRUCTOR[240] = _decode_array_large def _decode_to_string(decoder_value, value): if decoder_value == b"\x83": return datetime.fromtimestamp(int(value) / 1000).strftime( "%Y-%m-%dT%H:%M:%S.%fZ" ) elif type(value) is bytes or type(value) is bytearray: return value.decode("utf-8") else: return str(value) @udf(returnType=MapType(StringType(), StringType())) def decode_kafka_headers_to_amqp_properties(headers: dict) -> dict: if headers is None or len(headers) == 0 or type(headers) is not dict: return {} else: properties = {} for key, value in headers.items(): try: if key in SYSTEM_PROPERTIES: properties[key] = _decode_to_string(SYSTEM_PROPERTIES[key], value) else: decoder_value = value[0:1] buffer_val = memoryview(value) buffer_val, decoded_value = _DECODE_BY_CONSTRUCTOR[buffer_val[0]]( buffer_val[1:] ) properties[key] = _decode_to_string(decoder_value, decoded_value) except Exception as e: print(f"Error decoding header {key}: {e}") properties[key] = _decode_to_string(None, value) return properties
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/_pipeline_utils/amqp.py
0.621541
0.28892
amqp.py
pypi
from typing import List, Optional, Type, Union, Dict import re from pydantic import BaseConfig, BaseModel, validator from abc import ABCMeta from ..sources.interfaces import SourceInterface from ..transformers.interfaces import TransformerInterface from ..destinations.interfaces import DestinationInterface from ..secrets.models import PipelineSecret from ..utilities.interfaces import UtilitiesInterface BaseConfig.json_encoders = { ABCMeta: lambda x: x.__name__, PipelineSecret: lambda x: {"pipeline_secret": x.dict()}, } def validate_name(name: str) -> str: if re.match("^[a-z0-9_]*$", name) is None: raise ValueError("Can only contain lower case letters, numbers and underscores") else: return name class PipelineStep(BaseModel): name: str description: str depends_on_step: Optional[List[str]] component: Union[ Type[SourceInterface], Type[TransformerInterface], Type[DestinationInterface], Type[UtilitiesInterface], ] component_parameters: Optional[dict] provide_output_to_step: Optional[List[str]] class Config: json_encoders = { ABCMeta: lambda x: x.__name__, PipelineSecret: lambda x: {"pipeline_secret": x.dict()}, } # validators _validate_name = validator("name", allow_reuse=True, always=True)(validate_name) _validate_depends_on_step = validator( "depends_on_step", allow_reuse=True, each_item=True )(validate_name) _validate_provide_output_to_step = validator( "provide_output_to_step", allow_reuse=True, each_item=True )(validate_name) class PipelineTask(BaseModel): name: str description: str depends_on_task: Optional[List[str]] step_list: List[PipelineStep] batch_task: Optional[bool] class Config: json_encoders = { ABCMeta: lambda x: x.__name__, PipelineSecret: lambda x: {"pipeline_secret": x.dict()}, } # validators _validate_name = validator("name", allow_reuse=True)(validate_name) _validate_depends_on_step = validator( "depends_on_task", allow_reuse=True, each_item=True )(validate_name) class PipelineJob(BaseModel): name: str description: str version: str task_list: List[PipelineTask] class Config: json_encoders = { ABCMeta: lambda x: x.__name__, PipelineSecret: lambda x: {"pipeline_secret": x.dict()}, } # validators _validate_name = validator("name", allow_reuse=True)(validate_name)
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/execute/models.py
0.885037
0.21264
models.py
pypi
from .interfaces import SecretsInterface import hvac from .._pipeline_utils.models import Libraries, SystemType from .._pipeline_utils.constants import get_default_package class HashiCorpVaultSecrets(SecretsInterface): """ Reads secrets from a Hashicorp Vault. For more information about Hashicorp Vaults, see [here.](https://developer.hashicorp.com/vault/docs/get-started/developer-qs) Args: vault (str): Hashicorp Vault URL key (str): Name/Key of the secret in the Hashicorp Vault secret (str): Secret or Password to be stored in the Hashicorp Vault credential (str): Token for authentication with the Hashicorp Vault kwargs (dict): List of additional parameters to be passed when creating a Hashicorp Vault Client. Please see [here](https://hvac.readthedocs.io/en/stable/overview.html#initialize-the-client) for more details on parameters that can be provided to the client """ vault: str key: str secret: str credential: str def __init__( self, vault: str, key: str, secret: str = None, credential: str = None, kwargs: dict = {}, ): # NOSONAR self.vault = vault self.key = key self.secret = secret self.credential = credential self.kwargs = kwargs self.client = self._get_hvac_client() @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYTHON """ return SystemType.PYTHON @staticmethod def libraries(): libraries = Libraries() libraries.add_pypi_library(get_default_package("hashicorp_vault")) return libraries @staticmethod def settings() -> dict: return {} def _get_hvac_client(self): return hvac.Client(url=self.vault, token=self.credential, **self.kwargs) def get(self): """ Retrieves the secret from the Hashicorp Vault """ response = self.client.secrets.kv.read_secret_version(path=self.key) return response["data"]["data"]["password"] def set(self): """ Creates or updates a secret in the Hashicorp Vault """ self.client.secrets.kv.v2.create_or_update_secret( path=self.key, secret=dict(password=self.secret), ) return True
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/secrets/hashicorp_vault.py
0.845273
0.260773
hashicorp_vault.py
pypi
from .interfaces import SecretsInterface from azure.keyvault.secrets import SecretClient from .._pipeline_utils.models import Libraries, SystemType from .._pipeline_utils.constants import get_default_package class AzureKeyVaultSecrets(SecretsInterface): """ Reads secrets from Azure Key Vault. For more information about Azure Key Vaults, see [here.](https://learn.microsoft.com/en-gb/azure/key-vault/general/overview) Args: vault (str): Azure Key Vault URL key (str): Key for the secret secret (str): Secret or Password to be set in the Azure Key Vault credential (str): Credential for authenticating with Azure Key Vault kwargs (dict): List of additional parameters to be passed when creating a Azure Key Vault Client. Please see [here](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/keyvault/azure-keyvault-secrets) for more details on parameters that can be provided to the client """ vault: str key: str secret: str credential: str kwargs: dict def __init__( self, vault: str, key: str, secret: str = None, credential=None, kwargs: dict = None, ): self.vault = vault self.key = key self.secret = secret self.credential = credential self.kwargs = {} if kwargs is None else kwargs self.client = self._get_akv_client() @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYTHON """ return SystemType.PYTHON @staticmethod def libraries(): libraries = Libraries() libraries.add_pypi_library(get_default_package("azure_key_vault_secret")) return libraries @staticmethod def settings() -> dict: return {} def _get_akv_client(self): return SecretClient( vault_url="https://{}.vault.azure.net".format(self.vault), credential=self.credential, **self.kwargs ) def get(self): """ Retrieves the secret from the Azure Key Vault """ response = self.client.get_secret(name=self.key) return response.value def set(self): """ Creates or updates a secret in the Azure Key Vault """ self.client.set_secret(name=self.key, value=self.secret) return True
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/secrets/azure_key_vault.py
0.894962
0.168241
azure_key_vault.py
pypi
import logging from py4j.protocol import Py4JJavaError from pyspark.sql import DataFrame, SparkSession from ..interfaces import SourceInterface from ..._pipeline_utils.models import Libraries, SystemType from ..._pipeline_utils.constants import get_default_package class SparkDeltaSource(SourceInterface): """ The Spark Delta Source is used to read data from a Delta table. Args: spark (SparkSession): Spark Session required to read data from a Delta table options (dict): Options that can be specified for a Delta Table read operation (See Attributes table below). Further information on the options is available for [batch](https://docs.delta.io/latest/delta-batch.html#read-a-table){ target="_blank" } and [streaming](https://docs.delta.io/latest/delta-streaming.html#delta-table-as-a-source){ target="_blank" }. table_name (str): Name of the Hive Metastore or Unity Catalog Delta Table Attributes: maxFilesPerTrigger (int): How many new files to be considered in every micro-batch. The default is 1000. (Streaming) maxBytesPerTrigger (int): How much data gets processed in each micro-batch. (Streaming) ignoreDeletes (bool str): Ignore transactions that delete data at partition boundaries. (Streaming) ignoreChanges (bool str): Pre-process updates if files had to be rewritten in the source table due to a data changing operation. (Streaming) startingVersion (int str): The Delta Lake version to start from. (Streaming) startingTimestamp (datetime str): The timestamp to start from. (Streaming) withEventTimeOrder (bool str): Whether the initial snapshot should be processed with event time order. (Streaming) timestampAsOf (datetime str): Query the Delta Table from a specific point in time. (Batch) versionAsOf (int str): Query the Delta Table from a specific version. (Batch) """ spark: SparkSession options: dict table_name: str def __init__(self, spark: SparkSession, options: dict, table_name: str) -> None: self.spark = spark self.options = options self.table_name = table_name @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYSPARK """ return SystemType.PYSPARK @staticmethod def libraries(): libraries = Libraries() libraries.add_maven_library(get_default_package("spark_delta_core")) return libraries @staticmethod def settings() -> dict: return {} def pre_read_validation(self): return True def post_read_validation(self): return True def read_batch(self): """ Reads batch data from Delta. Most of the options provided by the Apache Spark DataFrame read API are supported for performing batch reads on Delta tables. """ try: return ( self.spark.read.format("delta") .options(**self.options) .table(self.table_name) ) except Py4JJavaError as e: logging.exception(e.errmsg) raise e except Exception as e: logging.exception(str(e)) raise e def read_stream(self) -> DataFrame: """ Reads streaming data from Delta. All of the data in the table is processed as well as any new data that arrives after the stream started. .load() can take table name or path. """ try: return ( self.spark.readStream.format("delta") .options(**self.options) .load(self.table_name) ) except Py4JJavaError as e: logging.exception(e.errmsg) raise e except Exception as e: logging.exception(str(e)) raise e
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/sources/spark/delta.py
0.822153
0.592342
delta.py
pypi
import os import logging from py4j.protocol import Py4JJavaError from pyspark.sql import DataFrame, SparkSession from pyspark.sql.functions import col, map_from_entries, map_filter from urllib.parse import urlparse from ..interfaces import SourceInterface from ..._pipeline_utils.models import Libraries, SystemType from ..._pipeline_utils.spark import KAFKA_EVENTHUB_SCHEMA from ..._pipeline_utils.constants import get_default_package from ..._pipeline_utils.amqp import decode_kafka_headers_to_amqp_properties eventhub_system_properties = [ "x-opt-enqueued-time", "x-opt-sequence-number", "x-opt-offset", "x-opt-publisher", "x-opt-partition-key", "message-id", "iothub-enqueuedtime", "user-id", "iothub-connection-device-id", "iothub-connection-module-id", "iothub-connection-auth-generation-id", "iothub-connection-auth-method", "iothub-app-iothub-creation-time-utc", "iothub-creation-time-utc", "dt-dataschema", "dt-subject", ] class SparkKafkaEventhubSource(SourceInterface): """ This Spark source class is used to read batch or streaming data from an Eventhub using the Kafka protocol. This enables Eventhubs to be used as a source in applications like Delta Live Tables or Databricks Serverless Jobs as the Spark Eventhubs JAR is not supported in this scenarios. The dataframe returned is transformed to ensure the schema is as close to the Eventhub Spark source as possible. There are some minor differences: - `offset` is dependent on `x-opt-offset` being populated in the headers provided. If this is not found in the headers, the value will be null - `publisher` is dependent on `x-opt-publisher` being populated in the headers provided. If this is not found in the headers, the value will be null - `partitionKey` is dependent on `x-opt-partition-key` being populated in the headers provided. If this is not found in the headers, the value will be null - `systemProperties` are identified according to the list provided in the [Eventhub documentation](https://learn.microsoft.com/en-us/azure/data-explorer/ingest-data-event-hub-overview#event-system-properties-mapping){ target="_blank" } and [IoT Hub documentation](https://learn.microsoft.com/en-us/azure/data-explorer/ingest-data-iot-hub-overview#event-system-properties-mapping){ target="_blank" } Default settings will be specified if not provided in the `options` parameter: - `kafka.sasl.mechanism` will be set to `PLAIN` - `kafka.security.protocol` will be set to `SASL_SSL` - `kafka.request.timeout.ms` will be set to `60000` - `kafka.session.timeout.ms` will be set to `60000` Required and optional configurations can be found in the Attributes tables below. Additionally, there are more optional configurations which can be found [here.](https://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html){ target="_blank" } Args: spark (SparkSession): Spark Session options (dict): A dictionary of Kafka configurations (See Attributes tables below). For more information on configuration options see [here](https://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html){ target="_blank" } connection_string (str): Eventhubs connection string is required to connect to the Eventhubs service. This must include the Eventhub name as the `EntityPath` parameter. Example `"Endpoint=sb://test.servicebus.windows.net/;SharedAccessKeyName=test;SharedAccessKey=test_key;EntityPath=test_eventhub"` consumer_group (str): The Eventhub consumer group to use for the connection The following options are the most common configurations for Kafka. The only configuration that must be set for the Kafka source for both batch and streaming queries is listed below. Attributes: kafka.bootstrap.servers (A comma-separated list of host︰port): The Kafka "bootstrap.servers" configuration. (Streaming and Batch) There are multiple ways of specifying which topics to subscribe to. You should provide only one of these parameters: Attributes: assign (json string {"topicA"︰[0,1],"topicB"︰[2,4]}): Specific TopicPartitions to consume. Only one of "assign", "subscribe" or "subscribePattern" options can be specified for Kafka source. (Streaming and Batch) subscribe (A comma-separated list of topics): The topic list to subscribe. Only one of "assign", "subscribe" or "subscribePattern" options can be specified for Kafka source. (Streaming and Batch) subscribePattern (Java regex string): The pattern used to subscribe to topic(s). Only one of "assign, "subscribe" or "subscribePattern" options can be specified for Kafka source. (Streaming and Batch) The following configurations are optional: Attributes: startingTimestamp (timestamp str): The start point of timestamp when a query is started, a string specifying a starting timestamp for all partitions in topics being subscribed. Please refer the note on starting timestamp offset options below. (Streaming and Batch) startingOffsetsByTimestamp (JSON str): The start point of timestamp when a query is started, a json string specifying a starting timestamp for each TopicPartition. Please refer the note on starting timestamp offset options below. (Streaming and Batch) startingOffsets ("earliest", "latest" (streaming only), or JSON string): The start point when a query is started, either "earliest" which is from the earliest offsets, "latest" which is just from the latest offsets, or a json string specifying a starting offset for each TopicPartition. In the json, -2 as an offset can be used to refer to earliest, -1 to latest. endingTimestamp (timestamp str): The end point when a batch query is ended, a json string specifying an ending timestamp for all partitions in topics being subscribed. Please refer the note on ending timestamp offset options below. (Batch) endingOffsetsByTimestamp (JSON str): The end point when a batch query is ended, a json string specifying an ending timestamp for each TopicPartition. Please refer the note on ending timestamp offset options below. (Batch) endingOffsets (latest or JSON str): The end point when a batch query is ended, either "latest" which is just referred to the latest, or a json string specifying an ending offset for each TopicPartition. In the json, -1 as an offset can be used to refer to latest, and -2 (earliest) as an offset is not allowed. (Batch) maxOffsetsPerTrigger (long): Rate limit on maximum number of offsets processed per trigger interval. The specified total number of offsets will be proportionally split across topicPartitions of different volume. (Streaming) minOffsetsPerTrigger (long): Minimum number of offsets to be processed per trigger interval. The specified total number of offsets will be proportionally split across topicPartitions of different volume. (Streaming) failOnDataLoss (bool): Whether to fail the query when it's possible that data is lost (e.g., topics are deleted, or offsets are out of range). This may be a false alarm. You can disable it when it doesn't work as you expected. minPartitions (int): Desired minimum number of partitions to read from Kafka. By default, Spark has a 1-1 mapping of topicPartitions to Spark partitions consuming from Kafka. (Streaming and Batch) includeHeaders (bool): Whether to include the Kafka headers in the row. (Streaming and Batch) !!! note "Starting Timestamp Offset Note" If Kafka doesn't return the matched offset, the behavior will follow to the value of the option <code>startingOffsetsByTimestampStrategy</code>. <code>startingTimestamp</code> takes precedence over <code>startingOffsetsByTimestamp</code> and </code>startingOffsets</code>. For streaming queries, this only applies when a new query is started, and that resuming will always pick up from where the query left off. Newly discovered partitions during a query will start at earliest. !!! note "Ending Timestamp Offset Note" If Kafka doesn't return the matched offset, the offset will be set to latest. <code>endingOffsetsByTimestamp</code> takes precedence over <code>endingOffsets</code>. """ def __init__( self, spark: SparkSession, options: dict, connection_string: str, consumer_group: str, ) -> None: self.spark = spark self.options = options self.connection_string = connection_string self.consumer_group = consumer_group self.connection_string_properties = self._parse_connection_string( connection_string ) self.schema = KAFKA_EVENTHUB_SCHEMA self.options = self._configure_options(options) @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYSPARK """ return SystemType.PYSPARK @staticmethod def libraries(): spark_libraries = Libraries() spark_libraries.add_maven_library(get_default_package("spark_sql_kafka")) return spark_libraries @staticmethod def settings() -> dict: return {} def pre_read_validation(self) -> bool: return True def post_read_validation(self, df: DataFrame) -> bool: assert df.schema == self.schema return True # Code is from Azure Eventhub Python SDK. Will import the package if possible with Conda in the conda-forge channel in the future def _parse_connection_string(self, connection_string: str): conn_settings = [s.split("=", 1) for s in connection_string.split(";")] if any(len(tup) != 2 for tup in conn_settings): raise ValueError("Connection string is either blank or malformed.") conn_settings = dict(conn_settings) shared_access_signature = None for key, value in conn_settings.items(): if key.lower() == "sharedaccesssignature": shared_access_signature = value shared_access_key = conn_settings.get("SharedAccessKey") shared_access_key_name = conn_settings.get("SharedAccessKeyName") if any([shared_access_key, shared_access_key_name]) and not all( [shared_access_key, shared_access_key_name] ): raise ValueError( "Connection string must have both SharedAccessKeyName and SharedAccessKey." ) if shared_access_signature is not None and shared_access_key is not None: raise ValueError( "Only one of the SharedAccessKey or SharedAccessSignature must be present." ) endpoint = conn_settings.get("Endpoint") if not endpoint: raise ValueError("Connection string is either blank or malformed.") parsed = urlparse(endpoint.rstrip("/")) if not parsed.netloc: raise ValueError("Invalid Endpoint on the Connection String.") namespace = parsed.netloc.strip() properties = { "fully_qualified_namespace": namespace, "endpoint": endpoint, "eventhub_name": conn_settings.get("EntityPath"), "shared_access_signature": shared_access_signature, "shared_access_key_name": shared_access_key_name, "shared_access_key": shared_access_key, } return properties def _connection_string_builder(self, properties: dict) -> str: connection_string = "Endpoint=" + properties.get("endpoint") + ";" if properties.get("shared_access_key"): connection_string += ( "SharedAccessKey=" + properties.get("shared_access_key") + ";" ) if properties.get("shared_access_key_name"): connection_string += ( "SharedAccessKeyName=" + properties.get("shared_access_key_name") + ";" ) if properties.get("shared_access_signature"): connection_string += ( "SharedAccessSignature=" + properties.get("shared_access_signature") + ";" ) return connection_string def _configure_options(self, options: dict) -> dict: if "subscribe" not in options: options["subscribe"] = self.connection_string_properties.get( "eventhub_name" ) if "kafka.bootstrap.servers" not in options: options["kafka.bootstrap.servers"] = ( self.connection_string_properties.get("fully_qualified_namespace") + ":9093" ) if "kafka.sasl.mechanism" not in options: options["kafka.sasl.mechanism"] = "PLAIN" if "kafka.security.protocol" not in options: options["kafka.security.protocol"] = "SASL_SSL" if "kafka.sasl.jaas.config" not in options: kafka_package = "org.apache.kafka.common.security.plain.PlainLoginModule" if "DATABRICKS_RUNTIME_VERSION" in os.environ: kafka_package = "kafkashaded.org.apache.kafka.common.security.plain.PlainLoginModule" connection_string = self._connection_string_builder( self.connection_string_properties ) options[ "kafka.sasl.jaas.config" ] = '{} required username="$ConnectionString" password="{}";'.format( kafka_package, connection_string ) # NOSONAR if "kafka.request.timeout.ms" not in options: options["kafka.request.timeout.ms"] = "60000" if "kafka.session.timeout.ms" not in options: options["kafka.session.timeout.ms"] = "60000" if "kafka.group.id" not in options: options["kafka.group.id"] = self.consumer_group options["includeHeaders"] = "true" return options def _transform_to_eventhub_schema(self, df: DataFrame) -> DataFrame: return ( df.withColumn("headers", map_from_entries(col("headers"))) .select( col("value").alias("body"), col("partition").cast("string"), col("offset").alias("sequenceNumber"), col("timestamp").alias("enqueuedTime"), decode_kafka_headers_to_amqp_properties(col("headers")).alias( "properties" ), ) .withColumn("offset", col("properties").getItem("x-opt-offset")) .withColumn("publisher", col("properties").getItem("x-opt-publisher")) .withColumn( "partitionKey", col("properties").getItem("x-opt-partition-key") ) .withColumn( "systemProperties", map_filter( col("properties"), lambda k, _: k.isin(eventhub_system_properties) ), ) .withColumn( "properties", map_filter( col("properties"), lambda k, _: ~k.isin(eventhub_system_properties) ), ) .select( col("body"), col("partition"), col("offset"), col("sequenceNumber"), col("enqueuedTime"), col("publisher"), col("partitionKey"), col("properties"), col("systemProperties"), ) ) def read_batch(self) -> DataFrame: """ Reads batch data from Kafka. """ try: df = self.spark.read.format("kafka").options(**self.options).load() return self._transform_to_eventhub_schema(df) except Py4JJavaError as e: logging.exception(e.errmsg) raise e except Exception as e: logging.exception(str(e)) raise e def read_stream(self) -> DataFrame: """ Reads streaming data from Kafka. """ try: df = self.spark.readStream.format("kafka").options(**self.options).load() return self._transform_to_eventhub_schema(df) except Py4JJavaError as e: logging.exception(e.errmsg) raise e except Exception as e: logging.exception(str(e)) raise e
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/sources/spark/kafka_eventhub.py
0.821796
0.385577
kafka_eventhub.py
pypi
import logging from py4j.protocol import Py4JJavaError from pyspark.sql import DataFrame, SparkSession from ..interfaces import SourceInterface from ..._pipeline_utils.models import Libraries, SystemType from ..._pipeline_utils.spark import KAFKA_SCHEMA from ..._pipeline_utils.constants import get_default_package class SparkKafkaSource(SourceInterface): """ This Spark source class is used to read batch or streaming data from Kafka. Required and optional configurations can be found in the Attributes tables below. Additionally, there are more optional configurations which can be found [here.](https://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html){ target="_blank" } Args: spark (SparkSession): Spark Session options (dict): A dictionary of Kafka configurations (See Attributes tables below). For more information on configuration options see [here](https://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html){ target="_blank" } The following options are the most common configurations for Kafka. The only configuration that must be set for the Kafka source for both batch and streaming queries is listed below. Attributes: kafka.bootstrap.servers (A comma-separated list of host︰port): The Kafka "bootstrap.servers" configuration. (Streaming and Batch) There are multiple ways of specifying which topics to subscribe to. You should provide only one of these parameters: Attributes: assign (json string {"topicA"︰[0,1],"topicB"︰[2,4]}): Specific TopicPartitions to consume. Only one of "assign", "subscribe" or "subscribePattern" options can be specified for Kafka source. (Streaming and Batch) subscribe (A comma-separated list of topics): The topic list to subscribe. Only one of "assign", "subscribe" or "subscribePattern" options can be specified for Kafka source. (Streaming and Batch) subscribePattern (Java regex string): The pattern used to subscribe to topic(s). Only one of "assign, "subscribe" or "subscribePattern" options can be specified for Kafka source. (Streaming and Batch) The following configurations are optional: Attributes: startingTimestamp (timestamp str): The start point of timestamp when a query is started, a string specifying a starting timestamp for all partitions in topics being subscribed. Please refer the note on starting timestamp offset options below. (Streaming and Batch) startingOffsetsByTimestamp (JSON str): The start point of timestamp when a query is started, a json string specifying a starting timestamp for each TopicPartition. Please refer the note on starting timestamp offset options below. (Streaming and Batch) startingOffsets ("earliest", "latest" (streaming only), or JSON string): The start point when a query is started, either "earliest" which is from the earliest offsets, "latest" which is just from the latest offsets, or a json string specifying a starting offset for each TopicPartition. In the json, -2 as an offset can be used to refer to earliest, -1 to latest. endingTimestamp (timestamp str): The end point when a batch query is ended, a json string specifying an ending timestamp for all partitions in topics being subscribed. Please refer the note on ending timestamp offset options below. (Batch) endingOffsetsByTimestamp (JSON str): The end point when a batch query is ended, a json string specifying an ending timestamp for each TopicPartition. Please refer the note on ending timestamp offset options below. (Batch) endingOffsets (latest or JSON str): The end point when a batch query is ended, either "latest" which is just referred to the latest, or a json string specifying an ending offset for each TopicPartition. In the json, -1 as an offset can be used to refer to latest, and -2 (earliest) as an offset is not allowed. (Batch) maxOffsetsPerTrigger (long): Rate limit on maximum number of offsets processed per trigger interval. The specified total number of offsets will be proportionally split across topicPartitions of different volume. (Streaming) minOffsetsPerTrigger (long): Minimum number of offsets to be processed per trigger interval. The specified total number of offsets will be proportionally split across topicPartitions of different volume. (Streaming) failOnDataLoss (bool): Whether to fail the query when it's possible that data is lost (e.g., topics are deleted, or offsets are out of range). This may be a false alarm. You can disable it when it doesn't work as you expected. minPartitions (int): Desired minimum number of partitions to read from Kafka. By default, Spark has a 1-1 mapping of topicPartitions to Spark partitions consuming from Kafka. (Streaming and Batch) includeHeaders (bool): Whether to include the Kafka headers in the row. (Streaming and Batch) !!! note "Starting Timestamp Offset Note" If Kafka doesn't return the matched offset, the behavior will follow to the value of the option <code>startingOffsetsByTimestampStrategy</code>. <code>startingTimestamp</code> takes precedence over <code>startingOffsetsByTimestamp</code> and </code>startingOffsets</code>. For streaming queries, this only applies when a new query is started, and that resuming will always pick up from where the query left off. Newly discovered partitions during a query will start at earliest. !!! note "Ending Timestamp Offset Note" If Kafka doesn't return the matched offset, the offset will be set to latest. <code>endingOffsetsByTimestamp</code> takes precedence over <code>endingOffsets</code>. """ spark: SparkSession options: dict def __init__(self, spark: SparkSession, options: dict) -> None: self.spark = spark self.options = options self.schema = KAFKA_SCHEMA @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYSPARK """ return SystemType.PYSPARK @staticmethod def libraries(): spark_libraries = Libraries() spark_libraries.add_maven_library(get_default_package("spark_sql_kafka")) return spark_libraries @staticmethod def settings() -> dict: return {} def pre_read_validation(self) -> bool: return True def post_read_validation(self, df: DataFrame) -> bool: assert df.schema == self.schema return True def read_batch(self) -> DataFrame: """ Reads batch data from Kafka. """ try: return self.spark.read.format("kafka").options(**self.options).load() except Py4JJavaError as e: logging.exception(e.errmsg) raise e except Exception as e: logging.exception(str(e)) raise e def read_stream(self) -> DataFrame: """ Reads streaming data from Kafka. """ try: return self.spark.readStream.format("kafka").options(**self.options).load() except Py4JJavaError as e: logging.exception(e.errmsg) raise e except Exception as e: logging.exception(str(e)) raise e
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/sources/spark/kafka.py
0.882111
0.678873
kafka.py
pypi
import logging from py4j.protocol import Py4JJavaError from pyspark.sql import DataFrame, SparkSession from ..interfaces import SourceInterface from ..._pipeline_utils.models import Libraries, SystemType from ..._pipeline_utils.spark import EVENTHUB_SCHEMA from ..._pipeline_utils.constants import get_default_package class SparkIoThubSource(SourceInterface): """ This Spark source class is used to read batch or streaming data from an IoT Hub. IoT Hub configurations need to be specified as options in a dictionary. Additionally, there are more optional configurations which can be found [here.](https://github.com/Azure/azure-event-hubs-spark/blob/master/docs/PySpark/structured-streaming-pyspark.md#event-hubs-configuration){ target="_blank" } If using startingPosition or endingPosition make sure to check out the **Event Position** section for more details and examples. Args: spark (SparkSession): Spark Session options (dict): A dictionary of IoT Hub configurations (See Attributes table below) Attributes: eventhubs.connectionString (str): IoT Hub connection string is required to connect to the Eventhubs service. (Streaming and Batch) eventhubs.consumerGroup (str): A consumer group is a view of an entire IoT Hub. Consumer groups enable multiple consuming applications to each have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. (Streaming and Batch) eventhubs.startingPosition (JSON str): The starting position for your Structured Streaming job. If a specific EventPosition is not set for a partition using startingPositions, then we use the EventPosition set in startingPosition. If nothing is set in either option, we will begin consuming from the end of the partition. (Streaming and Batch) eventhubs.endingPosition: (JSON str): The ending position of a batch query. This works the same as startingPosition. (Batch) maxEventsPerTrigger (long): Rate limit on maximum number of events processed per trigger interval. The specified total number of events will be proportionally split across partitions of different volume. (Stream) """ options: dict spark: SparkSession def __init__(self, spark: SparkSession, options: dict) -> None: self.spark = spark self.schema = EVENTHUB_SCHEMA self.options = options @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYSPARK """ return SystemType.PYSPARK @staticmethod def settings() -> dict: return {} @staticmethod def libraries(): spark_libraries = Libraries() spark_libraries.add_maven_library(get_default_package("spark_azure_eventhub")) return spark_libraries def pre_read_validation(self) -> bool: return True def post_read_validation(self, df: DataFrame) -> bool: assert df.schema == self.schema return True def read_batch(self) -> DataFrame: """ Reads batch data from IoT Hubs. """ iothub_connection_string = "eventhubs.connectionString" try: if iothub_connection_string in self.options: sc = self.spark.sparkContext self.options[ iothub_connection_string ] = sc._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt( self.options[iothub_connection_string] ) return self.spark.read.format("eventhubs").options(**self.options).load() except Py4JJavaError as e: logging.exception(e.errmsg) raise e except Exception as e: logging.exception(str(e)) raise e def read_stream(self) -> DataFrame: """ Reads streaming data from IoT Hubs. """ iothub_connection_string = "eventhubs.connectionString" try: if iothub_connection_string in self.options: sc = self.spark.sparkContext self.options[ iothub_connection_string ] = sc._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt( self.options[iothub_connection_string] ) return ( self.spark.readStream.format("eventhubs").options(**self.options).load() ) except Py4JJavaError as e: logging.exception(e.errmsg) raise e except Exception as e: logging.exception(str(e)) raise e
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/sources/spark/iot_hub.py
0.843219
0.577912
iot_hub.py
pypi
import logging from py4j.protocol import Py4JJavaError from pyspark.sql import DataFrame, SparkSession from ..interfaces import SourceInterface from ..._pipeline_utils.models import Libraries, SystemType from ..._pipeline_utils.spark import EVENTHUB_SCHEMA from ..._pipeline_utils.constants import get_default_package class SparkEventhubSource(SourceInterface): """ This Spark source class is used to read batch or streaming data from Eventhubs. Eventhub configurations need to be specified as options in a dictionary. Additionally, there are more optional configurations which can be found [here.](https://github.com/Azure/azure-event-hubs-spark/blob/master/docs/PySpark/structured-streaming-pyspark.md#event-hubs-configuration){ target="_blank" } If using startingPosition or endingPosition make sure to check out the **Event Position** section for more details and examples. Args: spark (SparkSession): Spark Session options (dict): A dictionary of Eventhub configurations (See Attributes table below) Attributes: eventhubs.connectionString (str): Eventhubs connection string is required to connect to the Eventhubs service. (Streaming and Batch) eventhubs.consumerGroup (str): A consumer group is a view of an entire eventhub. Consumer groups enable multiple consuming applications to each have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. (Streaming and Batch) eventhubs.startingPosition (JSON str): The starting position for your Structured Streaming job. If a specific EventPosition is not set for a partition using startingPositions, then we use the EventPosition set in startingPosition. If nothing is set in either option, we will begin consuming from the end of the partition. (Streaming and Batch) eventhubs.endingPosition: (JSON str): The ending position of a batch query. This works the same as startingPosition. (Batch) maxEventsPerTrigger (long): Rate limit on maximum number of events processed per trigger interval. The specified total number of events will be proportionally split across partitions of different volume. (Stream) """ spark: SparkSession options: dict def __init__(self, spark: SparkSession, options: dict) -> None: self.spark = spark self.options = options self.schema = EVENTHUB_SCHEMA @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYSPARK """ return SystemType.PYSPARK @staticmethod def libraries(): spark_libraries = Libraries() spark_libraries.add_maven_library(get_default_package("spark_azure_eventhub")) return spark_libraries @staticmethod def settings() -> dict: return {} def pre_read_validation(self) -> bool: return True def post_read_validation(self, df: DataFrame) -> bool: assert df.schema == self.schema return True def read_batch(self) -> DataFrame: """ Reads batch data from Eventhubs. """ eventhub_connection_string = "eventhubs.connectionString" try: if eventhub_connection_string in self.options: sc = self.spark.sparkContext self.options[ eventhub_connection_string ] = sc._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt( self.options[eventhub_connection_string] ) return self.spark.read.format("eventhubs").options(**self.options).load() except Py4JJavaError as e: logging.exception(e.errmsg) raise e except Exception as e: logging.exception(str(e)) raise e def read_stream(self) -> DataFrame: """ Reads streaming data from Eventhubs. """ eventhub_connection_string = "eventhubs.connectionString" try: if eventhub_connection_string in self.options: sc = self.spark.sparkContext self.options[ eventhub_connection_string ] = sc._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt( self.options[eventhub_connection_string] ) return ( self.spark.readStream.format("eventhubs").options(**self.options).load() ) except Py4JJavaError as e: logging.exception(e.errmsg) raise e except Exception as e: logging.exception(str(e)) raise e
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/sources/spark/eventhub.py
0.853745
0.561816
eventhub.py
pypi
import logging from pyspark.sql import DataFrame, SparkSession from ..interfaces import SourceInterface from ..._pipeline_utils.models import Libraries, SystemType from ..._pipeline_utils.constants import get_default_package class DataBricksAutoLoaderSource(SourceInterface): """ The Spark Auto Loader is used to read new data files as they arrive in cloud storage. Further information on Auto Loader is available [here](https://docs.databricks.com/ingestion/auto-loader/index.html) Args: spark (SparkSession): Spark Session required to read data from cloud storage options (dict): Options that can be specified for configuring the Auto Loader. Further information on the options available are [here](https://docs.databricks.com/ingestion/auto-loader/options.html) path (str): The cloud storage path format (str): Specifies the file format to be read. Supported formats are available [here](https://docs.databricks.com/ingestion/auto-loader/options.html#file-format-options) """ spark: SparkSession options: dict path: str def __init__( self, spark: SparkSession, options: dict, path: str, format: str ) -> None: self.spark = spark self.options = options self.path = path self.options["cloudFiles.format"] = format @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYSPARK on Databricks """ return SystemType.PYSPARK_DATABRICKS @staticmethod def libraries(): libraries = Libraries() libraries.add_maven_library(get_default_package("spark_delta_core")) return libraries @staticmethod def settings() -> dict: return {} def pre_read_validation(self): return True def post_read_validation(self, df: DataFrame): return True def read_batch(self): """ Raises: NotImplementedError: Auto Loader only supports streaming reads. To perform a batch read, use the read_stream method of this component and specify the Trigger on the write_stream to be `availableNow` to perform batch-like reads of cloud storage files. """ raise NotImplementedError( "Auto Loader only supports streaming reads. To perform a batch read, use the read_stream method and specify Trigger on the write_stream as `availableNow`" ) def read_stream(self) -> DataFrame: """ Performs streaming reads of files in cloud storage. """ try: return ( self.spark.readStream.format("cloudFiles") .options(**self.options) .load(self.path) ) except Exception as e: logging.exception(str(e)) raise e
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/sources/spark/autoloader.py
0.83152
0.512571
autoloader.py
pypi
import logging from pyspark.sql import DataFrame, SparkSession from py4j.protocol import Py4JJavaError from ..interfaces import SourceInterface from ..._pipeline_utils.models import Libraries, SystemType from ..._pipeline_utils.constants import get_default_package class SparkDeltaSharingSource(SourceInterface): """ The Spark Delta Sharing Source is used to read data from a Delta table where Delta sharing is configured Args: spark (SparkSession): Spark Session required to read data from a Delta table options (dict): Options that can be specified for a Delta Table read operation (See Attributes table below). Further information on the options is available [here](https://docs.databricks.com/data-sharing/read-data-open.html#apache-spark-read-shared-data){ target="_blank" } table_path (str): Path to credentials file and Delta table to query Attributes: ignoreDeletes (bool str): Ignore transactions that delete data at partition boundaries. (Streaming) ignoreChanges (bool str): Pre-process updates if files had to be rewritten in the source table due to a data changing operation. (Streaming) startingVersion (int str): The Delta Lake version to start from. (Streaming) startingTimestamp (datetime str): The timestamp to start from. (Streaming) maxFilesPerTrigger (int): How many new files to be considered in every micro-batch. The default is 1000. (Streaming) maxBytesPerTrigger (int): How much data gets processed in each micro-batch. (Streaming) readChangeFeed (bool str): Stream read the change data feed of the shared table. (Batch & Streaming) timestampAsOf (datetime str): Query the Delta Table from a specific point in time. (Batch) versionAsOf (int str): Query the Delta Table from a specific version. (Batch) """ spark: SparkSession options: dict table_path: str def __init__(self, spark: SparkSession, options: dict, table_path: str) -> None: self.spark = spark self.options = options self.table_path = table_path @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYSPARK """ return SystemType.PYSPARK @staticmethod def libraries(): libraries = Libraries() libraries.add_maven_library(get_default_package("spark_delta_sharing")) return libraries @staticmethod def settings() -> dict: return {} def pre_read_validation(self): return True def post_read_validation(self): return True def read_batch(self): """ Reads batch data from Delta. Most of the options provided by the Apache Spark DataFrame read API are supported for performing batch reads on Delta tables. """ try: return ( self.spark.read.format("deltaSharing") .options(**self.options) .table(self.table_path) ) except Py4JJavaError as e: logging.exception(e.errmsg) raise e except Exception as e: logging.exception(str(e)) raise e def read_stream(self) -> DataFrame: """ Reads streaming data from Delta. All of the data in the table is processed as well as any new data that arrives after the stream started. .load() can take table name or path. """ try: return ( self.spark.readStream.format("deltaSharing") .options(**self.options) .load(self.table_path) ) except Py4JJavaError as e: logging.exception(e.errmsg) raise e except Exception as e: logging.exception(str(e)) raise e
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/sources/spark/delta_sharing.py
0.831896
0.561575
delta_sharing.py
pypi
import logging from py4j.protocol import Py4JJavaError from pyspark.sql import DataFrame, SparkSession from ..interfaces import SourceInterface from ..._pipeline_utils.models import Libraries, SystemType from ..._pipeline_utils.spark import KINESIS_SCHEMA class SparkKinesisSource(SourceInterface): """ The Spark Kinesis Source is used to read data from Kinesis in a Databricks environment. Structured streaming from Kinesis is **not** supported in open source Spark. Args: spark (SparkSession): Spark Session required to read data from Kinesis options (dict): Options that can be specified for a Kinesis read operation (See Attributes table below). Further information on the options is available [here](https://docs.databricks.com/structured-streaming/kinesis.html#configuration){ target="_blank" } Attributes: awsAccessKey (str): AWS access key. awsSecretKey (str): AWS secret access key corresponding to the access key. streamName (List[str]): The stream names to subscribe to. region (str): The region the streams are defined in. endpoint (str): The regional endpoint for Kinesis Data Streams. initialPosition (str): The point to start reading from; earliest, latest, or at_timestamp. """ spark: SparkSession options: dict def __init__(self, spark: SparkSession, options: dict) -> None: self.spark = spark self.options = options self.schema = KINESIS_SCHEMA @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYSPARK_DATABRICKS """ return SystemType.PYSPARK_DATABRICKS @staticmethod def libraries(): libraries = Libraries() return libraries @staticmethod def settings() -> dict: return {} def pre_read_validation(self): return True def post_read_validation(self, df: DataFrame) -> bool: assert df.schema == self.schema return True def read_batch(self): """ Raises: NotImplementedError: Kinesis only supports streaming reads. To perform a batch read, use the read_stream method of this component and specify the Trigger on the write_stream to be `availableNow=True` to perform batch-like reads of cloud storage files. """ raise NotImplementedError( "Kinesis only supports streaming reads. To perform a batch read, use the read_stream method and specify Trigger on the write_stream as `availableNow=True`" ) def read_stream(self) -> DataFrame: """ Reads streaming data from Kinesis. All of the data in the table is processed as well as any new data that arrives after the stream started. """ try: return ( self.spark.readStream.format("kinesis").options(**self.options).load() ) except Py4JJavaError as e: logging.exception(e.errmsg) raise e except Exception as e: logging.exception(str(e)) raise e
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/sources/spark/kinesis.py
0.790934
0.582936
kinesis.py
pypi
import json import pandas as pd from pyspark.sql import SparkSession from pyspark.sql.types import StringType, DoubleType, IntegerType from .base_weather import SparkWeatherCompanyBaseWeatherSource from ...._pipeline_utils.weather import WEATHER_FORECAST_SCHEMA class SparkWeatherCompanyForecastAPIV1Source(SparkWeatherCompanyBaseWeatherSource): """ The Weather Forecast API V1 Source is used to read 15 days forecast from the Weather API. URL: <a href="https://api.weather.com/v1/geocode/32.3667/-95.4/forecast/hourly/360hour.json"> https://api.weather.com/v1/geocode/32.3667/-95.4/forecast/hourly/360hour.json</a> Args: spark (SparkSession): Spark Session instance options (dict): A dictionary of ISO Source specific configurations Attributes: lat (str): Latitude of the Weather Station. lon (str): Longitude of the Weather Station. api_key (str): Weather API key. language (str): API response language. Defaults to `en-US`. units (str): Unit of measurements. Defaults to `e`. """ spark: SparkSession spark_schema = WEATHER_FORECAST_SCHEMA options: dict weather_url: str = "https://api.weather.com/v1/geocode/" required_options = ["lat", "lon", "api_key"] def __init__(self, spark: SparkSession, options: dict) -> None: super(SparkWeatherCompanyForecastAPIV1Source, self).__init__(spark, options) self.spark = spark self.options = options self.lat = self.options.get("lat", "").strip() self.lon = self.options.get("lon", "").strip() self.api_key = self.options.get("api_key", "").strip() self.language = self.options.get("language", "en-US").strip() self.units = self.options.get("units", "e").strip() def _prepare_data(self, df: pd.DataFrame) -> pd.DataFrame: """ Prepares weather data for the use. Args: df: Data received after preparation. Returns: Final data after all the transformations. """ rename_cols = { "latitude": "Latitude", "longitude": "Longitude", "class": "Class", "expire_time_gmt": "ExpireTimeGmt", "fcst_valid": "FcstValid", "fcst_valid_local": "FcstValidLocal", "num": "Num", "day_ind": "DayInd", "temp": "Temp", "dewpt": "Dewpt", "hi": "Hi", "wc": "Wc", "feels_like": "FeelsLike", "icon_extd": "IconExtd", "wxman": "Wxman", "icon_code": "IconCode", "dow": "Dow", "phrase_12char": "Phrase12Char", "phrase_22char": "Phrase22Char", "phrase_32char": "Phrase32Char", "subphrase_pt1": "SubphrasePt1", "subphrase_pt2": "SubphrasePt2", "subphrase_pt3": "SubphrasePt3", "pop": "Pop", "precip_type": "PrecipType", "qpf": "Qpf", "snow_qpf": "SnowQpf", "rh": "Rh", "wspd": "Wspd", "wdir": "Wdir", "wdir_cardinal": "WdirCardinal", "gust": "Gust", "clds": "Clds", "vis": "Vis", "mslp": "Mslp", "uv_index_raw": "UvIndexRaw", "uv_index": "UvIndex", "uv_warning": "UvWarning", "uv_desc": "UvDesc", "golf_index": "GolfIndex", "golf_category": "GolfCategory", "severity": "Severity", } df = df.rename(columns=rename_cols) fields = self.spark_schema.fields str_cols = list( map( lambda x: x.name, filter(lambda x: isinstance(x.dataType, StringType), fields), ) ) double_cols = list( map( lambda x: x.name, filter(lambda x: isinstance(x.dataType, DoubleType), fields), ) ) int_cols = list( map( lambda x: x.name, filter(lambda x: isinstance(x.dataType, IntegerType), fields), ) ) df[str_cols] = df[str_cols].astype(str) df[double_cols] = df[double_cols].astype(float) df[int_cols] = df[int_cols].astype(int) df.reset_index(inplace=True, drop=True) return df def _get_api_params(self): params = { "language": self.language, "units": self.units, "apiKey": self.api_key, } return params def _pull_for_weather_station(self, lat: str, lon: str) -> pd.DataFrame: response = json.loads( self._fetch_from_url(f"{lat}/{lon}/forecast/hourly/360hour.json").decode( "utf-8" ) ) return pd.DataFrame(response["forecasts"]) def _pull_data(self) -> pd.DataFrame: """ Pulls data from the Weather API and parses the JSON file. Returns: Raw form of data. """ df = self._pull_for_weather_station(self.lat, self.lon) df["latitude"] = self.lat df["longitude"] = self.lon return df
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/sources/spark/the_weather_company/weather_forecast_api_v1.py
0.838713
0.497742
weather_forecast_api_v1.py
pypi
import pandas as pd from pyspark.sql import SparkSession from ...._pipeline_utils.weather import WEATHER_FORECAST_SCHEMA from .weather_forecast_api_v1 import SparkWeatherCompanyForecastAPIV1Source class SparkWeatherCompanyForecastAPIV1MultiSource( SparkWeatherCompanyForecastAPIV1Source ): """ The Weather Forecast API V1 Multi Source is used to read 15 days forecast from the Weather API. It allows to pull weather data for multiple stations and returns all of them in a single DataFrame. URL for one station: <a href="https://api.weather.com/v1/geocode/32.3667/-95.4/forecast/hourly/360hour.json"> https://api.weather.com/v1/geocode/32.3667/-95.4/forecast/hourly/360hour.json</a> It takes a list of Weather Stations. Each station item must contain comma separated Latitude & Longitude. Examples - `["32.3667,-95.4", "51.52,-0.11"]` Args: spark (SparkSession): Spark Session instance options (dict): A dictionary of ISO Source specific configurations Attributes: stations (list[str]): List of Weather Stations. api_key (str): Weather API key. language (str): API response language. Defaults to `en-US`. units (str): Unit of measurements. Defaults to `e`. """ spark: SparkSession options: dict spark_schema = WEATHER_FORECAST_SCHEMA required_options = ["stations", "api_key"] def __init__(self, spark: SparkSession, options: dict) -> None: super(SparkWeatherCompanyForecastAPIV1MultiSource, self).__init__( spark, options ) self.spark = spark self.options = options self.stations = self.options.get("stations", []) self.api_key = self.options.get("api_key", "").strip() self.language = self.options.get("language", "en-US").strip() self.units = self.options.get("units", "e").strip() def _pull_data(self) -> pd.DataFrame: """ Pulls data from the Weather API and parses the JSON file for multiple stations Returns: Raw form of data. """ result_df = None for station in self.stations: parts = station.split(",") lat, lon = parts df = self._pull_for_weather_station(lat, lon) df["latitude"] = lat df["longitude"] = lon if result_df is not None: result_df = pd.concat([result_df, df]) else: result_df = df return result_df def _validate_options(self) -> bool: for station in self.stations: parts = station.split(",") if len(parts) != 2 or parts[0].strip() == "" or parts[1].strip() == "": raise ValueError( f"Each station item must contain comma separated Latitude & Longitude. Eg: 10.23:45.2" ) return True
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/sources/spark/the_weather_company/weather_forecast_api_v1_multi.py
0.878471
0.415314
weather_forecast_api_v1_multi.py
pypi
import pandas as pd import numpy as np from ...interfaces import SourceInterface from ...._pipeline_utils.models import Libraries, SystemType from .base_mars import SparkECMWFBaseMarsSource from pyspark.sql import SparkSession class SparkECMWFWeatherForecastSource(SourceInterface): """ The Weather Forecast API V1 Source class to doownload nc files from ECMWF MARS server using the ECMWF python API. Args: spark (SparkSession): Spark Session instance save_path (str): Path to local directory where the nc files will be stored, in format "yyyy-mm-dd_HH.nc" date_start (str): Start date of extraction in "YYYY-MM-DD HH:MM:SS" format date_end:str, date_end (str): End date of extraction in "YYYY-MM-DD HH:MM:SS" format ecmwf_class (str): ecmwf classification of data stream (str): Operational model stream expver (str): Version of data leveltype (str): Surface level forecasts ec_vars (list): Variables of forecast measurements. forecast_area (list): N/W/S/E coordinates of the forecast area ecmwf_api_key (str): API key for ECMWF API ecmwf_api_email (str): Email for ECMWF API """ spark: SparkSession def __init__( self, spark: SparkSession, save_path: str, date_start: str, date_end: str, ecmwf_class: str, stream: str, expver: str, leveltype: str, ec_vars: list, forecast_area: list, ecmwf_api_key: str, ecmwf_api_email: str, ) -> None: self.spark = spark self.save_path = save_path self.date_start = date_start self.date_end = date_end self.ecmwf_class = ecmwf_class self.stream = stream # operational model self.expver = expver # experiment version of data self.leveltype = leveltype # surface level forecasts self.ec_vars = ec_vars # variables self.forecast_area = forecast_area # N/W/S/E self.ecmwf_api_key = ecmwf_api_key self.ecmwf_api_email = ecmwf_api_email @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYSPARK """ return SystemType.PYSPARK @staticmethod def libraries(): libraries = Libraries() return libraries @staticmethod def settings() -> dict: return {} def pre_read_validation(self): return True def post_read_validation(self): return True def read_stream(self): return True @classmethod def _get_lead_time(cls): """ Lead time for the forecast data. 90 hours - 1 Hour Interval 90-146 - 3 Hour interval 146 -246 - 6 Hour interval Returns: lead_times: Lead times in an array format. """ lead_times = [*range(91), *range(93, 146, 3), *range(150, 246, 6)] np.array(lead_times) return lead_times def _get_api_params(self, lead_times): """ API parameters for the forecast data. Returns: params (dict): API parameters for the forecast data. """ params = { "class": self.ecmwf_class, # ecmwf classification of data "stream": self.stream, # operational model "expver": self.expver, # experiment version of data "levtype": self.leveltype, # surface level forecasts "type": "fc", # forecasts "param": self.ec_vars, # variables "step": lead_times, # which lead times to download "area": self.forecast_area, # N/W/S/E "grid": [0.1, 0.1], # grid res of output } return params def read_batch(self): """ Pulls data from the Weather API and returns as .nc files. """ lead_times = self._get_lead_time() para = self._get_api_params(lead_times=lead_times) ec_conn = SparkECMWFBaseMarsSource( date_start=self.date_start, date_end=self.date_end, save_path=self.save_path, run_interval="12", run_frequency="H", ecmwf_api_key=self.ecmwf_api_key, ecmwf_api_email=self.ecmwf_api_email, ecmwf_api_url="https://api.ecmwf.int/v1", ) ec_conn.retrieve( mars_dict=para, tries=5, n_jobs=-1, # maximum of 20 queued requests per user (only two allowed active) )
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/sources/spark/ecmwf/weather_forecast.py
0.847968
0.272366
weather_forecast.py
pypi
import pandas as pd import numpy as np import os from ecmwfapi import ECMWFService from joblib import Parallel, delayed class SparkECMWFBaseMarsSource: """ Download nc files from ECMWF MARS server using the ECMWF python API. Data is downloaded in parallel using joblib from ECMWF MARS server using the ECMWF python API. Args: save_path (str): Path to local directory where the nc files will be stored, in format "yyyy-mm-dd_HH.nc" date_start (str): Start date of extraction in "YYYY-MM-DD HH:MM:SS" format date_end (str): End date of extraction in "YYYY-MM-DD HH:MM:SS" format ecmwf_api_key (str): API key for ECMWF MARS server ecmwf_api_email (str): Email for ECMWF MARS server ecmwf_api_url (str): URL for ECMWF MARS server run_frequency (str):Frequency format of runs to download, e.g. "H" run_interval (str): Interval of runs, e.g. a run_frequency of "H" and run_interval of "12" will extract the data of the 00 and 12 run for each day. """ def __init__( self, date_start: str, date_end: str, save_path: str, ecmwf_api_key: str, ecmwf_api_email: str, ecmwf_api_url: str = "https://api.ecmwf.int/v1", run_interval: str = "12", run_frequency: str = "H", ): self.retrieve_ran = False self.date_start = date_start self.date_end = date_end self.save_path = save_path self.format = format self.run_interval = run_interval self.run_frequency = run_frequency self.ecmwf_api_key = ecmwf_api_key self.ecmwf_api_url = ecmwf_api_url self.ecmwf_api_email = ecmwf_api_email # Pandas date_list (info best retrieved per forecast day) self.dates = pd.date_range( start=date_start, end=date_end, freq=run_interval + run_frequency ) def retrieve( self, mars_dict: dict, n_jobs=None, backend="loky", tries=5, cost=False, ): """Retrieve the data from the server. Function will use the ecmwf api to download the data from the server. Note that mars has a max of two active requests per user and 20 queued requests. Data is downloaded in parallel using joblib from ECMWF MARS server using the ECMWF python API. Args: mars_dict (dict): Dictionary of mars parameters. n_jobs (int, optional): Download in parallel? by default None, i.e. no parallelization backend (str, optional) : Specify the parallelization backend implementation in joblib, by default "loky" tries (int, optional): Number of tries for each request if it fails, by default 5 cost (bool, optional): Pass a cost request to mars to estimate the size and efficiency of your request, but not actually download the data. Can be useful for defining requests, by default False. """ chk = ["date", "target", "time", "format", "output"] for i in chk: if i in mars_dict.keys(): raise ValueError(f"don't include {i} in the mars_dict") parallel = Parallel(n_jobs=n_jobs, backend=backend) def _retrieve_datetime(i, j, cost=cost): i_dict = {"date": i, "time": j} if cost: filename = f"{i}_{j}.txt" # NOSONAR else: filename = f"{i}_{j}.nc" i_dict["format"] = "netcdf" # NOSONAR target = os.path.join(self.save_path, filename) msg = f"retrieving mars data --- {filename}" req_dict = {**i_dict, **mars_dict} for k, v in req_dict.items(): if isinstance(v, (list, tuple)): req_dict[k] = "/".join([str(x) for x in v]) # NOSONAR req_dict = ["{}={}".format(k, v) for k, v in req_dict.items()] if cost: req_dict = "list,output=cost,{}".format(",".join(req_dict)) # NOSONAR else: req_dict = "retrieve,{}".format(",".join(req_dict)) # NOSONAR for j in range(tries): try: print(msg) server = ECMWFService( "mars", url=self.ecmwf_api_url, email=self.ecmwf_api_email, key=self.ecmwf_api_key, ) server.execute(req_dict, target) return 1 # NOSONAR except: # NOSONAR if j < tries - 1: continue # NOSONAR else: return 0 # NOSONAR self.success = parallel( delayed(_retrieve_datetime)(str(k.date()), f"{k.hour:02}") for k in self.dates ) self.retrieve_ran = True return self def info(self) -> pd.Series: """ Return info on each ECMWF request. Returns: pd.Series: Successful request for each run == 1. """ if not self.retrieve_ran: raise ValueError( "Before using self.info(), prepare the request using " + "self.retrieve()" ) y = pd.Series(self.success, index=self.dates, name="success", dtype=bool) return y
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/sources/spark/ecmwf/base_mars.py
0.665193
0.221435
base_mars.py
pypi
import logging import pandas as pd from pyspark.sql import SparkSession from datetime import datetime, timedelta from io import BytesIO import time from . import PJMDailyLoadISOSource class PJMHistoricalLoadISOSource(PJMDailyLoadISOSource): """ The PJM Historical Load ISO Source is used to read historical load data from PJM API. API: <a href="https://api.pjm.com/api/v1/">https://api.pjm.com/api/v1/</a> (must be a valid apy key from PJM) Historical doc: <a href="https://dataminer2.pjm.com/feed/ops_sum_prev_period/definition">https://dataminer2.pjm.com/feed/ops_sum_prev_period/definition</a> Historical is the same PJM endpoint as Actual, but is called repeatedly within a range established by the start_date & end_date attributes Args: spark (SparkSession): Spark Session instance options (dict): A dictionary of ISO Source specific configurations Attributes: api_key (str): Must be a valid key from PJM, see PJM documentation start_date (str): Must be in `YYYY-MM-DD` format. end_date (str): Must be in `YYYY-MM-DD` format. query_batch_days (int): (optional) Number of days must be < 160 as per PJM & is defaulted to `120` sleep_duration (int): (optional) Number of seconds to sleep between request, defaulted to `5` seconds, used to manage requests to PJM endpoint request_count (int): (optional) Number of requests made to PJM endpoint before sleep_duration, currently defaulted to `1` """ spark: SparkSession options: dict required_options = ["api_key", "start_date", "end_date"] def __init__(self, spark: SparkSession, options: dict) -> None: super().__init__(spark, options) self.spark: SparkSession = spark self.options: dict = options self.api_key: str = self.options.get("api_key", "").strip() self.start_date: str = self.options.get("start_date", "") self.end_date: str = self.options.get("end_date", "") self.query_batch_days: int = self.options.get("query_batch_days", 120) self.sleep_duration: int = self.options.get("sleep_duration", 5) self.request_count: int = self.options.get("request_count", 1) self.load_type: str = "actual" self.user_datetime_format = "%Y-%m-%d" def _pull_data(self) -> pd.DataFrame: """ Pulls data from the PJM API and parses the return including date ranges. Returns: Raw form of data. """ logging.info( f"Historical load requested from {self.start_date} to {self.end_date}" ) start_date = datetime.strptime(self.start_date, self.user_datetime_format) end_date = datetime.strptime(self.end_date, self.user_datetime_format).replace( hour=23 ) days_diff = (end_date - start_date).days logging.info(f"Expected hours for a single zone = {(days_diff + 1) * 24}") generated_days_ranges = [] dates = pd.date_range( start_date, end_date, freq=pd.DateOffset(days=self.query_batch_days) ) for date in dates: py_date = date.to_pydatetime() date_last = (py_date + timedelta(days=self.query_batch_days - 1)).replace( hour=23 ) date_last = min(date_last, end_date) generated_days_ranges.append((py_date, date_last)) logging.info( f"Generated date ranges for batch days {self.query_batch_days} are {generated_days_ranges}" ) # Collect all historical data on yearly basis. dfs = [] for idx, date_range in enumerate(generated_days_ranges): start_date_str = date_range[0].strftime(self.query_datetime_format) end_date_str = date_range[1].strftime(self.query_datetime_format) df = pd.read_csv( BytesIO(self._fetch_from_url("", start_date_str, end_date_str)) ) dfs.append(df) if idx > 0 and idx % self.request_count == 0: logging.info(f"Going to sleep for {self.sleep_duration} seconds") time.sleep(self.sleep_duration) df = pd.concat(dfs, sort=False) df = df.reset_index(drop=True) return df def _validate_options(self) -> bool: """ Validates all parameters including the following examples: - `start_date` & `end_data` must be in the correct format. - `start_date` must be behind `end_data`. - `start_date` must not be in the future (UTC). Returns: True if all looks good otherwise raises Exception. """ try: start_date = datetime.strptime(self.start_date, self.user_datetime_format) except ValueError: raise ValueError( f"Unable to parse Start date. Please specify in {self.user_datetime_format} format." ) try: end_date = datetime.strptime(self.end_date, self.user_datetime_format) except ValueError: raise ValueError( f"Unable to parse End date. Please specify in {self.user_datetime_format} format." ) if start_date > datetime.utcnow() - timedelta(days=1): raise ValueError("Start date can't be in future.") if start_date > end_date: raise ValueError("Start date can't be ahead of End date.") if end_date > datetime.utcnow() - timedelta(days=1): raise ValueError("End date can't be in future.") if self.sleep_duration < 0: raise ValueError("Sleep duration can't be negative.") if self.request_count < 0: raise ValueError("Request count can't be negative.") if self.query_batch_days < 0: raise ValueError("Query batch days count can't be negative.") return True
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/sources/spark/iso/pjm_historical_load_iso.py
0.859914
0.379091
pjm_historical_load_iso.py
pypi
import logging import pandas as pd from py4j.protocol import Py4JJavaError from pyspark.sql import DataFrame, SparkSession import requests from datetime import datetime, timezone import pytz from pyspark.sql.types import StructType, StructField, IntegerType from requests import HTTPError from io import BytesIO from ...interfaces import SourceInterface from ...._pipeline_utils.models import Libraries, SystemType from ....._sdk_utils.pandas import _prepare_pandas_to_convert_to_spark class BaseISOSource(SourceInterface): """ Base class for all the ISO Sources. It provides common functionality and helps in reducing the code redundancy. Args: spark (SparkSession): Spark Session instance options (dict): A dictionary of ISO Source specific configurations """ spark: SparkSession options: dict iso_url: str = "https://" query_datetime_format: str = "%Y%m%d" required_options: list = [] spark_schema = StructType([StructField("id", IntegerType(), True)]) default_query_timezone: str = "UTC" def __init__(self, spark: SparkSession, options: dict) -> None: self.spark = spark self.options = options self.query_timezone = pytz.timezone( self.options.get("query_timezone", self.default_query_timezone) ) self.current_date = datetime.now(timezone.utc).astimezone(self.query_timezone) def _fetch_from_url(self, url_suffix: str) -> bytes: """ Gets data from external ISO API. Args: url_suffix: String to be used as suffix to iso url. Returns: Raw content of the data received. """ url = f"{self.iso_url}{url_suffix}" logging.info(f"Requesting URL - {url}") response = requests.get(url) code = response.status_code if code != 200: raise HTTPError( f"Unable to access URL `{url}`." f" Received status code {code} with message {response.content}" ) return response.content def _get_localized_datetime(self, datetime_str: str) -> datetime: """ Converts string datetime into Python datetime object with configured format and timezone. Args: datetime_str: String to be converted into datetime. Returns: Timezone aware datetime object. """ parsed_dt = datetime.strptime(datetime_str, self.query_datetime_format) parsed_dt = parsed_dt.replace(tzinfo=self.query_timezone) return parsed_dt def _pull_data(self) -> pd.DataFrame: """ Hits the fetch_from_url method with certain parameters to get raw data from API. All the children ISO classes must override this method and call the fetch_url method in it. Returns: Raw DataFrame from API. """ return pd.read_csv(BytesIO(self._fetch_from_url(""))) def _prepare_data(self, df: pd.DataFrame) -> pd.DataFrame: """ Performs all the basic transformations to prepare data for further processing. All the children ISO classes must override this method. Args: df: Raw DataFrame, received from the API. Returns: Modified DataFrame, ready for basic use. """ return df def _sanitize_data(self, df: pd.DataFrame) -> pd.DataFrame: """ Another data transformation helper method to be called after prepare data. Used for advance data processing such as cleaning, filtering, restructuring. All the children ISO classes must override this method if there is any post-processing required. Args: df: Initial modified version of DataFrame, received after preparing the data. Returns: Final version of data after all the fixes and modifications. """ return df def _get_data(self) -> pd.DataFrame: """ Entrypoint method to return the final version of DataFrame. Returns: Modified form of data for specific use case. """ df = self._pull_data() df = self._prepare_data(df) df = self._sanitize_data(df) # Reorder columns to keep the data consistent df = df[self.spark_schema.names] return df @staticmethod def system_type(): return SystemType.PYSPARK @staticmethod def libraries(): libraries = Libraries() return libraries @staticmethod def settings() -> dict: return {} def _validate_options(self) -> bool: """ Performs all the options checks. Raises exception in case of any invalid value. Returns: True if all checks are passed. """ return True def pre_read_validation(self) -> bool: """ Ensures all the required options are provided and performs other validations. Returns: True if all checks are passed. """ for key in self.required_options: if key not in self.options: raise ValueError(f"Required option `{key}` is missing.") return self._validate_options() def post_read_validation(self) -> bool: return True def read_batch(self) -> DataFrame: """ Spark entrypoint, It executes the entire process of pulling, transforming & fixing data. Returns: Final Spark DataFrame converted from Pandas DataFrame post-execution. """ try: self.pre_read_validation() pdf = self._get_data() pdf = _prepare_pandas_to_convert_to_spark(pdf) df = self.spark.createDataFrame(data=pdf, schema=self.spark_schema) return df except Exception as e: logging.exception(str(e)) raise e def read_stream(self) -> DataFrame: """ By default, the streaming operation is not supported but child classes can override if ISO supports streaming. Returns: Final Spark DataFrame after all the processing. """ raise NotImplementedError( f"{self.__class__.__name__} connector doesn't support stream operation." )
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/sources/spark/iso/base_iso.py
0.890948
0.422445
base_iso.py
pypi
from pyspark.sql import SparkSession import pandas as pd import logging from datetime import datetime, timedelta from . import MISODailyLoadISOSource class MISOHistoricalLoadISOSource(MISODailyLoadISOSource): """ The MISO Historical Load ISO Source is used to read historical load data from MISO API. API: <a href="https://docs.misoenergy.org/marketreports/">https://docs.misoenergy.org/marketreports/</a> Args: spark (SparkSession): Spark Session instance options (dict): A dictionary of ISO Source specific configurations Attributes: start_date (str): Must be in `YYYYMMDD` format. end_date (str): Must be in `YYYYMMDD` format. fill_missing (str): Set to `"true"` to fill missing Actual load with Forecast load. Default - `true`. """ spark: SparkSession options: dict required_options = ["start_date", "end_date"] def __init__(self, spark: SparkSession, options: dict): super().__init__(spark, options) self.start_date = self.options.get("start_date", "") self.end_date = self.options.get("end_date", "") self.fill_missing = bool(self.options.get("fill_missing", "true") == "true") def _get_historical_data_for_date(self, date: datetime) -> pd.DataFrame: logging.info(f"Getting historical data for date {date}") df = pd.read_excel( self._fetch_from_url( f"{date.strftime(self.query_datetime_format)}_dfal_HIST.xls" ), skiprows=5, ) if date.month == 12 and date.day == 31: expected_year_rows = ( pd.Timestamp(date.year, 12, 31).dayofyear * 24 * 7 ) # Every hour has 7 zones. received_year_rows = ( len(df[df["MarketDay"] != "MarketDay"]) - 2 ) # Last 2 rows are invalid. if expected_year_rows != received_year_rows: logging.warning( f"Didn't receive full year historical data for year {date.year}." f" Expected {expected_year_rows} but Received {received_year_rows}" ) return df def _pull_data(self) -> pd.DataFrame: """ Pulls data from the MISO API and parses the Excel file. Returns: Raw form of data. """ logging.info( f"Historical load requested from {self.start_date} to {self.end_date}" ) start_date = self._get_localized_datetime(self.start_date) end_date = self._get_localized_datetime(self.end_date) dates = pd.date_range( start_date, end_date + timedelta(days=365), freq="Y", inclusive="left" ) logging.info(f"Generated date ranges are - {dates}") # Collect all historical data on yearly basis. df = pd.concat( [ self._get_historical_data_for_date(min(date, self.current_date)) for date in dates ], sort=False, ) return df def _prepare_data(self, df: pd.DataFrame) -> pd.DataFrame: """ Creates a new `Datetime` column, removes null values and pivots the data. Args: df: Raw form of data received from the API. Returns: Data after basic transformations and pivoting. """ df = df[df["MarketDay"] != "MarketDay"] # Fill missing actual values with the forecast values to avoid gaps. if self.fill_missing: df = df.fillna({"ActualLoad (MWh)": df["MTLF (MWh)"]}) df = df.rename( columns={ "MarketDay": "date", "HourEnding": "hour", "ActualLoad (MWh)": "load", "LoadResource Zone": "zone", } ) df = df.dropna() df["date_time"] = pd.to_datetime(df["date"]) + pd.to_timedelta( df["hour"].astype(int) - 1, "h" ) df.drop(["hour", "date"], axis=1, inplace=True) df["load"] = df["load"].astype(float) df = df.pivot_table( index="date_time", values="load", columns="zone" ).reset_index() df.columns = [str(x.split(" ")[0]).upper() for x in df.columns] rename_cols = { "LRZ1": "Lrz1", "LRZ2_7": "Lrz2_7", "LRZ3_5": "Lrz3_5", "LRZ4": "Lrz4", "LRZ6": "Lrz6", "LRZ8_9_10": "Lrz8_9_10", "MISO": "Miso", "DATE_TIME": "Datetime", } df = df.rename(columns=rename_cols) return df def _sanitize_data(self, df: pd.DataFrame) -> pd.DataFrame: """ Filter outs data outside the requested date range. Args: df: Data received after preparation. Returns: Final data after all the transformations. """ start_date = self._get_localized_datetime(self.start_date) end_date = self._get_localized_datetime(self.end_date).replace( hour=23, minute=59, second=59 ) df = df[ (df["Datetime"] >= start_date.replace(tzinfo=None)) & (df["Datetime"] <= end_date.replace(tzinfo=None)) ] df = df.sort_values(by="Datetime", ascending=True).reset_index(drop=True) expected_rows = ((min(end_date, self.current_date) - start_date).days + 1) * 24 actual_rows = len(df) logging.info(f"Rows Expected = {expected_rows}, Rows Found = {actual_rows}") return df def _validate_options(self) -> bool: """ Validates the following options: - `start_date` & `end_data` must be in the correct format. - `start_date` must be behind `end_data`. - `start_date` must not be in the future (UTC). Returns: True if all looks good otherwise raises Exception. """ try: start_date = self._get_localized_datetime(self.start_date) except ValueError: raise ValueError( "Unable to parse Start date. Please specify in YYYYMMDD format." ) try: end_date = self._get_localized_datetime(self.end_date) except ValueError: raise ValueError( "Unable to parse End date. Please specify in YYYYMMDD format." ) if start_date > self.current_date: raise ValueError("Start date can't be in future.") if start_date > end_date: raise ValueError("Start date can't be ahead of End date.") return True
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/sources/spark/iso/miso_historical_load_iso.py
0.916124
0.523359
miso_historical_load_iso.py
pypi
import logging import pandas as pd from pyspark.sql import SparkSession from ...._pipeline_utils.iso import MISO_SCHEMA from . import BaseISOSource class MISODailyLoadISOSource(BaseISOSource): """ The MISO Daily Load ISO Source is used to read daily load data from MISO API. It supports both Actual and Forecast data. API: <a href="https://docs.misoenergy.org/marketreports/">https://docs.misoenergy.org/marketreports/</a> Actual data is available for one day minus from the given date. Forecast data is available for next 6 day (inclusive of given date). Args: spark (SparkSession): Spark Session instance options (dict): A dictionary of ISO Source specific configurations Attributes: load_type (str): Must be one of `actual` or `forecast` date (str): Must be in `YYYYMMDD` format. """ spark: SparkSession options: dict iso_url: str = "https://docs.misoenergy.org/marketreports/" query_datetime_format: str = "%Y%m%d" required_options = ["load_type", "date"] spark_schema = MISO_SCHEMA default_query_timezone = "US/Central" def __init__(self, spark: SparkSession, options: dict) -> None: super().__init__(spark, options) self.spark = spark self.options = options self.load_type = self.options.get("load_type", "actual") self.date = self.options.get("date", "").strip() def _pull_data(self) -> pd.DataFrame: """ Pulls data from the MISO API and parses the Excel file. Returns: Raw form of data. """ logging.info(f"Getting {self.load_type} data for date {self.date}") df = pd.read_excel(self._fetch_from_url(f"{self.date}_df_al.xls"), skiprows=4) return df def _prepare_data(self, df: pd.DataFrame) -> pd.DataFrame: """ Creates a new `date_time` column and removes null values. Args: df: Raw form of data received from the API. Returns: Data after basic transformations. """ df.drop( df.index[(df["HourEnding"] == "HourEnding") | df["MISO MTLF (MWh)"].isna()], inplace=True, ) df.rename(columns={"Market Day": "date"}, inplace=True) df["date_time"] = pd.to_datetime(df["date"]) + pd.to_timedelta( df["HourEnding"].astype(int) - 1, "h" ) df.drop(["HourEnding", "date"], axis=1, inplace=True) data_cols = df.columns[df.columns != "date_time"] df[data_cols] = df[data_cols].astype(float) df.reset_index(inplace=True, drop=True) return df def _sanitize_data(self, df: pd.DataFrame) -> pd.DataFrame: """ Filter outs Actual or Forecast data based on `load_type`. Args: df: Data received after preparation. Returns: Final data either containing Actual or Forecast values. """ skip_col_suffix = "" if self.load_type == "actual": skip_col_suffix = "MTLF (MWh)" elif self.load_type == "forecast": skip_col_suffix = "ActualLoad (MWh)" df = df[[x for x in df.columns if not x.endswith(skip_col_suffix)]] df = df.dropna() df.columns = [str(x.split(" ")[0]).upper() for x in df.columns] rename_cols = { "LRZ1": "Lrz1", "LRZ2_7": "Lrz2_7", "LRZ3_5": "Lrz3_5", "LRZ4": "Lrz4", "LRZ6": "Lrz6", "LRZ8_9_10": "Lrz8_9_10", "MISO": "Miso", "DATE_TIME": "Datetime", } df = df.rename(columns=rename_cols) return df def _validate_options(self) -> bool: """ Validates the following options: - `date` must be in the correct format. - `load_type` must be valid. Returns: True if all looks good otherwise raises Exception. """ try: date = self._get_localized_datetime(self.date) except ValueError: raise ValueError("Unable to parse Date. Please specify in YYYYMMDD format.") if date > self.current_date: raise ValueError("Query date can't be in future.") valid_load_types = ["actual", "forecast"] if self.load_type not in valid_load_types: raise ValueError( f"Invalid load_type `{self.load_type}` given. Supported values are {valid_load_types}." ) return True
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/sources/spark/iso/miso_daily_load_iso.py
0.847274
0.621369
miso_daily_load_iso.py
pypi
import logging import pandas as pd import numpy as np import requests from pyspark.sql import SparkSession from datetime import timedelta from io import BytesIO from ...._pipeline_utils.iso import PJM_SCHEMA from . import BaseISOSource class PJMDailyLoadISOSource(BaseISOSource): """ The PJM Daily Load ISO Source is used to read daily load data from PJM API. It supports both Actual and Forecast data. Actual will return 1 day, Forecast will return 7 days API: <a href="https://api.pjm.com/api/v1/">https://api.pjm.com/api/v1/</a> (must be a valid apy key from PJM) Actual doc: <a href="https://dataminer2.pjm.com/feed/ops_sum_prev_period/definition">https://dataminer2.pjm.com/feed/ops_sum_prev_period/definition</a> Forecast doc: <a href="https://dataminer2.pjm.com/feed/load_frcstd_7_day/definition">https://dataminer2.pjm.com/feed/load_frcstd_7_day/definition</a> Args: spark (SparkSession): Spark Session instance options (dict): A dictionary of ISO Source specific configurations Attributes: api_key (str): Must be a valid key from PJM, see api url load_type (str): Must be one of `actual` or `forecast` """ spark: SparkSession spark_schema = PJM_SCHEMA options: dict iso_url: str = "https://api.pjm.com/api/v1/" query_datetime_format: str = "%Y-%m-%d %H:%M" required_options = ["api_key", "load_type"] default_query_timezone = "US/Eastern" def __init__(self, spark: SparkSession, options: dict) -> None: super().__init__(spark, options) self.spark: SparkSession = spark self.options: dict = options self.load_type: str = self.options.get("load_type", "").strip() self.api_key: str = self.options.get("api_key", "").strip() self.days: int = self.options.get("days", 7) def _fetch_from_url(self, url_suffix: str, start_date: str, end_date: str) -> bytes: """ Gets data from external ISO API. Args: url_suffix: String to be used as suffix to iso url. Returns: Raw content of the data received. """ url = f"{self.iso_url}{url_suffix}" headers = {"Ocp-Apim-Subscription-Key": self.api_key} logging.info( f"Requesting URL - {url}, start_date={start_date}, end_date={end_date}, load_type={self.load_type}" ) load_key = ( "datetime_beginning_ept" if self.load_type != "forecast" else "forecast_datetime_beginning_ept" ) feed = ( "ops_sum_prev_period" if self.load_type != "forecast" else "load_frcstd_7_day" ) query = { "startRow": "1", load_key: f"{start_date}to{end_date}", "format": "csv", "download": "true", } query_s = "&".join(["=".join([k, v]) for k, v in query.items()]) new_url = f"{url}{feed}?{query_s}" response = requests.get(new_url, headers=headers) code = response.status_code if code != 200: raise requests.HTTPError( f"Unable to access URL `{url}`." f" Received status code {code} with message {response.content}" ) return response.content def _pull_data(self) -> pd.DataFrame: """ Pulls data from the PJM API and parses the return. Returns: Raw form of data. """ start_date = self.current_date - timedelta(days=1) start_date = start_date.replace(hour=0, minute=0) end_date = (start_date + timedelta(days=self.days)).replace(hour=23) start_date_str = start_date.strftime(self.query_datetime_format) end_date_str = end_date.strftime(self.query_datetime_format) df = pd.read_csv( BytesIO(self._fetch_from_url("", start_date_str, end_date_str)) ) return df def _prepare_data(self, df: pd.DataFrame) -> pd.DataFrame: """ Creates a new date time column and removes null values. Renames columns Args: df: Raw form of data received from the API. Returns: Data after basic transformations. """ if self.load_type == "forecast": df = df.rename( columns={ "forecast_datetime_beginning_utc": "start_time", "forecast_area": "zone", "forecast_datetime_ending_utc": "end_time", "forecast_load_mw": "load", } ) else: df = df.rename( columns={ "datetime_beginning_utc": "start_time", "area": "zone", "datetime_ending_utc": "end_time", "actual_load": "load", } ) df = df[["start_time", "end_time", "zone", "load"]] df = df.replace({np.nan: None, "": None}) date_cols = ["start_time", "end_time"] for col in date_cols: df[col] = pd.to_datetime(df[col], format="%m/%d/%Y %I:%M:%S %p") df["load"] = df["load"].astype(float) df = df.replace({np.nan: None, "": None}) df.columns = list(map(lambda x: x.upper(), df.columns)) rename_cols = { "START_TIME": "StartTime", "END_TIME": "EndTime", "ZONE": "Zone", "LOAD": "Load", } df = df.rename(columns=rename_cols) df.reset_index(inplace=True, drop=True) return df def _validate_options(self) -> bool: """ Validates the following options: - `load_type` must be valid. Returns: True if all looks good otherwise raises Exception. """ valid_load_types = ["actual", "forecast"] if self.load_type not in valid_load_types: raise ValueError( f"Invalid load_type `{self.load_type}` given. Supported values are {valid_load_types}." ) return True
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/sources/spark/iso/pjm_daily_load_iso.py
0.854217
0.394784
pjm_daily_load_iso.py
pypi
from ..interfaces import SourceInterface from ..._pipeline_utils.models import Libraries, SystemType from ..._pipeline_utils.constants import get_default_package import polars as pl from polars import LazyFrame class PythonDeltaSource(SourceInterface): """ The Python Delta Source is used to read data from a Delta table without using Apache Spark, returning a Polars LazyFrame Args: path (str): Path to the Delta table. Can be local or in S3/Azure storage version (optional int): Specify the Delta table version to read from. Defaults to the latest version storage_options (optional dict): Used to read from AWS/Azure storage. For AWS use format {"aws_access_key_id": "<>", "aws_secret_access_key":"<>"}. For Azure use format {"azure_storage_account_name": "<>", "azure_storage_account_key": "<>"}. pyarrow_options (optional dict): Data Access and Efficiency options when reading from Delta. See [to_pyarrow_dataset](https://delta-io.github.io/delta-rs/python/api_reference.html#deltalake.table.DeltaTable.to_pyarrow_dataset){ target="_blank" }. without_files (optional bool): If True loads the table without tracking files """ path: str version: int storage_options: dict pyarrow_options: dict without_files: bool def __init__( self, path: str, version: int = None, storage_options: dict = None, pyarrow_options: dict = None, without_files: bool = False, ): self.path = path self.version = version self.storage_options = storage_options self.pyarrow_options = pyarrow_options self.without_files = without_files @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYTHON """ return SystemType.PYTHON @staticmethod def libraries(): libraries = Libraries() return libraries @staticmethod def settings() -> dict: return {} def pre_read_validation(self): return True def post_read_validation(self): return True def read_batch(self) -> LazyFrame: """ Reads data from a Delta table into a Polars LazyFrame """ without_files_dict = {"without_files": self.without_files} lf = pl.scan_delta( source=self.path, version=self.version, storage_options=self.storage_options, delta_table_options=without_files_dict, pyarrow_options=self.pyarrow_options, ) return lf def read_stream(self): """ Raises: NotImplementedError: Reading from a Delta table using Python is only possible for batch reads. To perform a streaming read, use the read_stream method of the SparkDeltaSource component. """ raise NotImplementedError( "Reading from a Delta table using Python is only possible for batch reads. To perform a streaming read, use the read_stream method of the SparkDeltaSource component" )
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/sources/python/delta.py
0.812682
0.407628
delta.py
pypi
from ..interfaces import SourceInterface from ..._pipeline_utils.models import Libraries, SystemType from ..._pipeline_utils.constants import get_default_package import delta_sharing import polars as pl from polars import LazyFrame class PythonDeltaSharingSource(SourceInterface): """ The Python Delta Sharing Source is used to read data from a Delta table with Delta Sharing configured, without using Apache Spark. Args: profile_path (str): Location of the credential file. Can be any URL supported by [FSSPEC](https://filesystem-spec.readthedocs.io/en/latest/index.html){ target="_blank" } share_name (str): The value of 'share=' for the table schema_name (str): The value of 'schema=' for the table table_name (str): The value of 'name=' for the table """ profile_path: str share_name: str schema_name: str table_name: str def __init__( self, profile_path: str, share_name: str, schema_name: str, table_name: str ): self.profile_path = profile_path self.share_name = share_name self.schema_name = schema_name self.table_name = table_name @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYTHON """ return SystemType.PYTHON @staticmethod def libraries(): libraries = Libraries() return libraries @staticmethod def settings() -> dict: return {} def pre_read_validation(self): return True def post_read_validation(self): return True def read_batch(self) -> LazyFrame: """ Reads data from a Delta table with Delta Sharing into a Polars LazyFrame. """ pandas_df = delta_sharing.load_as_pandas( f"{self.profile_path}#{self.share_name}.{self.schema_name}.{self.table_name}" ) polars_lazyframe = pl.from_pandas(pandas_df).lazy() return polars_lazyframe def read_stream(self): """ Raises: NotImplementedError: Reading from a Delta table with Delta Sharing using Python is only possible for batch reads. """ raise NotImplementedError( "Reading from a Delta table with Delta Sharing using Python is only possible for batch reads." )
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/sources/python/delta_sharing.py
0.846149
0.382199
delta_sharing.py
pypi
import logging import sys import inspect from typing import List, Tuple from .interfaces import UtilitiesInterface from .._pipeline_utils.models import Libraries, SystemType class PipelineComponentsGetUtility(UtilitiesInterface): """ Gets the list of imported RTDIP components. Returns the libraries and settings of the components to be used in the pipeline. Call this component after all imports of the RTDIP components to ensure that the components can be determined. Args: module (optional str): Provide the module to use for imports of rtdip-sdk components. If not populated, it will use the calling module to check for imports """ def __init__(self, module: str = None) -> None: if module == None: frm = inspect.stack()[1] mod = inspect.getmodule(frm[0]) self.module = mod.__name__ else: self.module = module @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYTHON """ return SystemType.PYTHON @staticmethod def libraries(): libraries = Libraries() return libraries @staticmethod def settings() -> dict: return {} def execute(self) -> Tuple[Libraries, dict]: from ..sources.interfaces import SourceInterface from ..destinations.interfaces import DestinationInterface from ..deploy.interfaces import DeployInterface from ..secrets.interfaces import SecretsInterface from ..transformers.interfaces import TransformerInterface try: classes_imported = inspect.getmembers( sys.modules[self.module], inspect.isclass ) component_list = [] for cls in classes_imported: class_check = getattr(sys.modules[self.module], cls[0]) if ( ( issubclass(class_check, SourceInterface) and class_check != SourceInterface ) or ( issubclass(class_check, DestinationInterface) and class_check != DestinationInterface ) or ( issubclass(class_check, DeployInterface) and class_check != DeployInterface ) or ( issubclass(class_check, SecretsInterface) and class_check != SecretsInterface ) or ( issubclass(class_check, TransformerInterface) and class_check != TransformerInterface ) or ( issubclass(class_check, UtilitiesInterface) and class_check != UtilitiesInterface ) ): component_list.append(cls[1]) task_libraries = Libraries() task_libraries.get_libraries_from_components(component_list) spark_configuration = {} for component in component_list: spark_configuration = {**spark_configuration, **component.settings()} return (task_libraries, spark_configuration) except Exception as e: logging.exception(str(e)) raise e
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/utilities/pipeline_components.py
0.617974
0.280121
pipeline_components.py
pypi
import logging from typing import Dict, Union from ..interfaces import UtilitiesInterface from ..._pipeline_utils.models import Libraries, SystemType from ..._pipeline_utils.constants import get_default_package from azure.storage.filedatalake import DataLakeServiceClient, FileSystemClient from azure.core.credentials import ( TokenCredential, AzureNamedKeyCredential, AzureSasCredential, ) class ADLSGen2DirectoryACLUtility(UtilitiesInterface): """ Assigns Azure AD Groups to ACLs on directories in an Azure Data Lake Store Gen 2 storage account Args: storage_account (str): ADLS Gen 2 Storage Account Name container (str): ADLS Gen 2 Container Name credential (TokenCredential): Credentials to authenticate with ADLS Gen 2 Storage Account directory (str): Directory to be assign ACLS to in an ADLSS Gen 2 group_object_id (str): Azure AD Group Object ID to be assigned to Directory folder_permissions (optional, str): Folder Permissions to Assign to directory parent_folder_permissions (optional, str): Folder Permissions to Assign to parent directories. Parent Folder ACLs not set if None root_folder_permissions (optional, str): Folder Permissions to Assign to root directory. Root Folder ACL not set if None set_as_default_acl (bool, optional): Sets the ACL as the default ACL on the folder create_directory_if_not_exists (bool, optional): Creates the directory(and Parent Directories) if it does not exist """ storage_account: str container: str credential: Union[ str, Dict[str, str], AzureNamedKeyCredential, AzureSasCredential, TokenCredential, None, ] directory: str group_object_id: str folder_permissions: str parent_folder_permissions: str root_folder_permissions: str set_as_default_acl: bool create_directory_if_not_exists: bool def __init__( self, storage_account: str, container: str, credential: Union[ str, Dict[str, str], AzureNamedKeyCredential, AzureSasCredential, TokenCredential, None, ], directory: str, group_object_id: str, folder_permissions: str = "r-x", parent_folder_permissions: Union[str, None] = "r-x", root_folder_permissions: Union[str, None] = "r-x", set_as_default_acl: bool = True, create_directory_if_not_exists: bool = True, ) -> None: self.storage_account = storage_account self.container = container self.credential = credential self.directory = directory self.group_object_id = group_object_id self.folder_permissions = folder_permissions self.parent_folder_permissions = parent_folder_permissions self.root_folder_permissions = root_folder_permissions self.set_as_default_acl = set_as_default_acl self.create_directory_if_not_exists = create_directory_if_not_exists @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYTHON """ return SystemType.PYTHON @staticmethod def libraries(): libraries = Libraries() libraries.add_pypi_library(get_default_package("azure_adls_gen_2")) return libraries @staticmethod def settings() -> dict: return {} def _set_acl( self, file_system_client: FileSystemClient, path: str, group_object_id: str, folder_permissions: str, set_as_default_acl: bool, ): acl_directory_client = file_system_client.get_directory_client(path) group_id_acl = "group:{}:{}".format(group_object_id, folder_permissions) acl_props = acl_directory_client.get_access_control().get("acl") acl_props_list = acl_props.split(",") for acl in acl_props_list: if group_object_id in acl: acl_props_list.remove(acl) if set_as_default_acl == True: acl_props_list.append("default:{}".format(group_id_acl)) else: acl_props_list.append(group_id_acl) new_acl_props = ",".join(acl_props_list) acl_directory_client.set_access_control(acl=new_acl_props) def execute(self) -> bool: try: # Setup file system client service_client = DataLakeServiceClient( account_url="{}://{}.dfs.core.windows.net".format( "https", self.storage_account ), credential=self.credential, ) file_system_client = service_client.get_file_system_client( file_system=self.container ) # Create directory if it doesn't already exist if self.create_directory_if_not_exists: directory_client = file_system_client.get_directory_client( self.directory ) if not directory_client.exists(): file_system_client.create_directory(self.directory) group_object_id = str(self.group_object_id) acl_path = "" directory_list = self.directory.split("/") # Set Root Folder ACLs if specified if self.root_folder_permissions != None: self._set_acl( file_system_client, "/", group_object_id, self.root_folder_permissions, False, ) # Set Parent Folders ACLs if specified if self.parent_folder_permissions != None: for directory in directory_list[:-1]: if directory == "": acl_path = "/" continue elif acl_path == "/": acl_path += directory else: acl_path += "/" + directory self._set_acl( file_system_client, acl_path, group_object_id, self.parent_folder_permissions, False, ) # Set Folder ACLs self._set_acl( file_system_client, self.directory, group_object_id, self.folder_permissions, self.set_as_default_acl, ) return True except Exception as e: logging.exception(str(e)) raise e
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/utilities/azure/adls_gen2_acl.py
0.768733
0.166472
adls_gen2_acl.py
pypi
import logging from pyspark.sql import SparkSession from py4j.protocol import Py4JJavaError from ..interfaces import UtilitiesInterface from .configuration import SparkConfigurationUtility from ..._pipeline_utils.models import Libraries, SystemType class SparkADLSGen2SPNConnectUtility(UtilitiesInterface): """ Configures Spark to Connect to an ADLS Gen 2 Storage Account using a Service Principal Args: spark (SparkSession): Spark Session required to read data from cloud storage storage_account (str): Name of the ADLS Gen 2 Storage Account tenant_id (str): Tenant ID of the Service Principal client_id (str): Service Principal Client ID client_secret (str): Service Principal Client Secret """ spark: SparkSession storage_account: str tenant_id: str client_id: str client_secret: str def __init__( self, spark: SparkSession, storage_account: str, tenant_id: str, client_id: str, client_secret: str, ) -> None: self.spark = spark self.storage_account = storage_account self.tenant_id = tenant_id self.client_id = client_id self.client_secret = client_secret @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYSPARK """ return SystemType.PYSPARK @staticmethod def libraries(): libraries = Libraries() return libraries @staticmethod def settings() -> dict: return {} def execute(self) -> bool: try: adls_gen2_config = SparkConfigurationUtility( spark=self.spark, config={ "fs.azure.account.auth.type.{}.dfs.core.windows.net".format( self.storage_account ): "OAuth", "fs.azure.account.oauth.provider.type.{}.dfs.core.windows.net".format( self.storage_account ): "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider", "fs.azure.account.oauth2.client.id.{}.dfs.core.windows.net".format( self.storage_account ): self.client_id, "fs.azure.account.oauth2.client.secret.{}.dfs.core.windows.net".format( self.storage_account ): self.client_secret, "fs.azure.account.oauth2.client.endpoint.{}.dfs.core.windows.net".format( self.storage_account ): "https://login.microsoftonline.com/{}/oauth2/token".format( self.tenant_id ), }, ) adls_gen2_config.execute() return True except Py4JJavaError as e: logging.exception(e.errmsg) raise e except Exception as e: logging.exception(str(e)) raise e
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/utilities/spark/adls_gen2_spn_connect.py
0.775095
0.175397
adls_gen2_spn_connect.py
pypi
import logging import sys import inspect from typing import List from pyspark.sql import SparkSession from ..interfaces import UtilitiesInterface from ..._pipeline_utils.models import Libraries, SystemType from ..._pipeline_utils.spark import SparkClient from ..pipeline_components import PipelineComponentsGetUtility class SparkSessionUtility(UtilitiesInterface): """ Creates or Gets a Spark Session and uses settings and libraries of the imported RTDIP components to populate the spark configuration and jars in the spark session. Call this component after all imports of the RTDIP components to ensure that the spark session is configured correctly. Args: config (dict): Dictionary of spark configuration to be applied to the spark session module (optional str): Provide the module to use for imports of rtdip-sdk components. If not populated, it will use the calling module to check for imports remote (optional str): Specify the remote parameters if intending to use Spark Connect """ spark: SparkSession config: dict module: str def __init__(self, config: dict, module: str = None, remote: str = None) -> None: self.config = config if module == None: frm = inspect.stack()[1] mod = inspect.getmodule(frm[0]) self.module = mod.__name__ else: self.module = module self.remote = remote @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYSPARK """ return SystemType.PYSPARK @staticmethod def libraries(): libraries = Libraries() return libraries @staticmethod def settings() -> dict: return {} def execute(self) -> SparkSession: try: (task_libraries, spark_configuration) = PipelineComponentsGetUtility( self.module ).execute() self.spark = SparkClient( spark_configuration=spark_configuration, spark_libraries=task_libraries, spark_remote=self.remote, ).spark_session return self.spark except Exception as e: logging.exception(str(e)) raise e
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/utilities/spark/session.py
0.698227
0.36977
session.py
pypi
import logging from typing import List, Optional from pyspark.sql import SparkSession from py4j.protocol import Py4JJavaError from delta.tables import DeltaTable from ..interfaces import UtilitiesInterface from ..._pipeline_utils.models import Libraries, SystemType from ..._pipeline_utils.constants import get_default_package class DeltaTableOptimizeUtility(UtilitiesInterface): """ [Optimizes](https://docs.delta.io/latest/optimizations-oss.html) a Delta Table Args: spark (SparkSession): Spark Session required to read data from cloud storage table_name (str): Name of the table, including catalog and schema if table is to be created in Unity Catalog where (str, optional): Apply a partition filter to limit optimize to specific partitions. Example, "date='2021-11-18'" or "EventDate<=current_date()" zorder_by (list[str], optional): List of column names to zorder the table by. For more information, see [here.](https://docs.delta.io/latest/optimizations-oss.html#optimize-performance-with-file-management&language-python) """ spark: SparkSession table_name: str where: Optional[str] zorder_by: Optional[List[str]] def __init__( self, spark: SparkSession, table_name: str, where: str = None, zorder_by: List[str] = None, ) -> None: self.spark = spark self.table_name = table_name self.where = where self.zorder_by = zorder_by @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYSPARK """ return SystemType.PYSPARK @staticmethod def libraries(): libraries = Libraries() libraries.add_maven_library(get_default_package("spark_delta_core")) return libraries @staticmethod def settings() -> dict: return {} def execute(self) -> bool: try: delta_table = DeltaTable.forName(self.spark, self.table_name).optimize() if self.where is not None: delta_table = delta_table.where(self.where) if self.zorder_by is not None: delta_table = delta_table.executeZOrderBy(self.zorder_by) else: delta_table.executeCompaction() return True except Py4JJavaError as e: logging.exception(e.errmsg) raise e except Exception as e: logging.exception(str(e)) raise e
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/utilities/spark/delta_table_optimize.py
0.881181
0.418281
delta_table_optimize.py
pypi
import logging from typing import List, Optional from pydantic import BaseModel from pyspark.sql import SparkSession from pyspark.sql.types import StructField from py4j.protocol import Py4JJavaError from delta.tables import DeltaTable from ..interfaces import UtilitiesInterface from ..._pipeline_utils.models import Libraries, SystemType from ..._pipeline_utils.constants import get_default_package class DeltaTableColumn(BaseModel): name: str type: str nullable: bool metadata: Optional[dict] class DeltaTableCreateUtility(UtilitiesInterface): """ Creates a Delta Table in a Hive Metastore or in Databricks Unity Catalog. Args: spark (SparkSession): Spark Session required to read data from cloud storage table_name (str): Name of the table, including catalog and schema if table is to be created in Unity Catalog columns (list[DeltaTableColumn]): List of columns and their related column properties partitioned_by (list[str], optional): List of column names to partition the table by location (str, optional): Path to storage location properties (dict, optional): Propoerties that can be specified for a Delta Table. Further information on the options available are [here](https://docs.databricks.com/delta/table-properties.html#delta-table-properties) comment (str, optional): Provides a comment on the table metadata Example: ```python from rtdip_sdk.pipelines.utilities.spark.delta_table_create import DeltaTableCreateUtility, DeltaTableColumn table_create_utility = DeltaTableCreateUtility( spark=spark_session, table_name="delta_table", columns=[ DeltaTableColumn(name="EventDate", type="date", nullable=False, metadata={"delta.generationExpression": "CAST(EventTime AS DATE)"}), DeltaTableColumn(name="TagName", type="string", nullable=False), DeltaTableColumn(name="EventTime", type="timestamp", nullable=False), DeltaTableColumn(name="Status", type="string", nullable=True), DeltaTableColumn(name="Value", type="float", nullable=True) ], partitioned_by=["EventDate"], properties={"delta.logRetentionDuration": "7 days", "delta.enableChangeDataFeed": "true"}, comment="Creation of Delta Table" ) result = table_create_utility.execute() ``` """ spark: SparkSession table_name: str columns: List[DeltaTableColumn] partitioned_by: List[str] location: str properties: dict comment: str def __init__( self, spark: SparkSession, table_name: str, columns: List[StructField], partitioned_by: List[str] = None, location: str = None, properties: dict = None, comment: str = None, ) -> None: self.spark = spark self.table_name = table_name self.columns = columns self.partitioned_by = partitioned_by self.location = location self.properties = properties self.comment = comment @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYSPARK """ return SystemType.PYSPARK @staticmethod def libraries(): libraries = Libraries() libraries.add_maven_library(get_default_package("spark_delta_core")) return libraries @staticmethod def settings() -> dict: return {} def execute(self) -> bool: try: columns = [StructField.fromJson(column.dict()) for column in self.columns] delta_table = ( DeltaTable.createIfNotExists(self.spark) .tableName(self.table_name) .addColumns(columns) ) if self.partitioned_by is not None: delta_table = delta_table.partitionedBy(self.partitioned_by) if self.location is not None: delta_table = delta_table.location(self.location) if self.properties is not None: for key, value in self.properties.items(): delta_table = delta_table.property(key, value) if self.comment is not None: delta_table = delta_table.comment(self.comment) delta_table.execute() return True except Py4JJavaError as e: logging.exception(e.errmsg) raise e except Exception as e: logging.exception(str(e)) raise e
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/utilities/spark/delta_table_create.py
0.893581
0.76882
delta_table_create.py
pypi
import os import logging import boto3 from boto3.s3.transfer import S3Transfer from botocore.config import Config from ..interfaces import UtilitiesInterface from ..._pipeline_utils.models import Libraries, SystemType from ..._pipeline_utils.constants import get_default_package from ....data_models.storage_objects import utils class S3CopyUtility(UtilitiesInterface): """ Copies an object from S3 to S3, from Local to S3 and S3 to local depending on the source and destination uri Args: source_uri (str): URI of the source object destination_uri (str): URI of the destination object source_version_id (optional str): Version ID of the source bucket extra_args (optional dict): Extra arguments that can be passed to the client operation. See [here](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/customizations/s3.html#boto3.s3.transfer.S3Transfer.ALLOWED_DOWNLOAD_ARGS){ target="_blank" } for a list of download arguments callback (optional function): Takes a UDF used for tracking the progress of the copy operation source_client (optional botocore or boto3 client): A different S3 client to use for the source bucket during the copy operation transfer_config (optional class): The transfer configuration used during the copy. See [here](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/customizations/s3.html#boto3.s3.transfer.TransferConfig){ target="_blank" } for all parameters """ source_uri: str destination_uri: str destination_key: str extra_args: dict callback: str source_client: S3Transfer transfer_config: Config def __init__( self, source_uri: str, destination_uri: str, source_version_id: str = None, extra_args: dict = None, callback=None, source_client: S3Transfer = None, transfer_config: Config = None, ): self.source_uri = source_uri self.destination_uri = destination_uri self.source_version_id = source_version_id self.extra_args = extra_args self.callback = callback self.source_client = source_client self.transfer_config = transfer_config @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYTHON """ return SystemType.PYTHON @staticmethod def libraries(): libraries = Libraries() libraries.add_pypi_library(get_default_package("aws_boto3")) return libraries @staticmethod def settings() -> dict: return {} def execute(self) -> bool: # S3 to S3 Copy if self.source_uri.startswith( utils.S3_SCHEME ) and self.destination_uri.startswith(utils.S3_SCHEME): schema, source_domain, source_key = utils.validate_uri(self.source_uri) schema, destination_domain, destination_key = utils.validate_uri( self.destination_uri ) s3 = boto3.resource(schema) copy_source = {"Bucket": source_domain, "Key": source_key} if self.source_version_id is not None: copy_source["VersionId"] = self.source_version_id try: s3.meta.client.copy( copy_source, destination_domain, destination_key, self.extra_args, self.callback, self.source_client, self.transfer_config, ) except Exception as ex: logging.error(ex) return False # Local File to S3 Copy (Upload) elif (os.path.isfile(self.source_uri)) and self.destination_uri.startswith( utils.S3_SCHEME ): schema, destination_domain, destination_key = utils.validate_uri( self.destination_uri ) s3_client = boto3.client(schema) try: s3_client.upload_file( self.source_uri, destination_domain, destination_key ) except Exception as ex: logging.error(ex) return False # S3 to Local File Copy (Download) elif self.source_uri.startswith( utils.S3_SCHEME ) and not self.destination_uri.startswith(utils.S3_SCHEME): try: schema, source_domain, source_key = utils.validate_uri(self.source_uri) s3 = boto3.client(schema) s3.download_file(source_domain, source_key, self.destination_uri) except Exception as ex: logging.error(ex) return False else: logging.error( "Not Implemented. From: %s \n\t to: %s", self.source_uri, self.destination_uri, ) return True
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/utilities/aws/s3_copy_utility.py
0.544075
0.189109
s3_copy_utility.py
pypi
from pyspark.sql import DataFrame from pyspark.sql.functions import ( to_json, col, struct, create_map, lit, array, monotonically_increasing_id, floor, row_number, collect_list, expr, ) from pyspark.sql import Window from datetime import datetime import pytz from ..interfaces import TransformerInterface from ..._pipeline_utils.models import Libraries, SystemType from ..._pipeline_utils.spark import EDGEX_SCHEMA class PCDMToHoneywellAPMTransformer(TransformerInterface): """ Converts a Spark Dataframe in PCDM format to Honeywell APM format. Args: data (Dataframe): Spark Dataframe in PCDM format quality (str): Value for quality inside HistorySamples history_samples_per_message (int): The number of HistorySamples for each row in the DataFrame (Batch Only) """ data: DataFrame quality: str history_samples_per_message: int def __init__( self, data: DataFrame, quality: str = "Good", history_samples_per_message: int = 1, ) -> None: self.data = data self.quality = quality self.history_samples_per_message = history_samples_per_message @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYSPARK """ return SystemType.PYSPARK @staticmethod def libraries(): libraries = Libraries() return libraries @staticmethod def settings() -> dict: return {} def pre_transform_validation(self): return True def post_transform_validation(self): return True def transform(self) -> DataFrame: """ Returns: DataFrame: A dataframe with with rows in Honeywell APM format """ if self.data.isStreaming and self.history_samples_per_message > 1: pcdm_df = self.data.withColumn("counter", monotonically_increasing_id()) w = Window.orderBy("counter") cleaned_pcdm_df = ( pcdm_df.withColumn( "index", floor( (row_number().over(w) - 0.01) / self.history_samples_per_message ), ) .withColumn( "HistorySamples", struct( col("TagName").alias("ItemName"), lit(self.quality).alias("Quality"), col("EventTime").alias("Time"), col("Value").alias("Value"), ).alias("HistorySamples"), ) .groupBy("index") .agg(collect_list("HistorySamples").alias("HistorySamples")) .withColumn("guid", expr("uuid()")) .withColumn( "value", struct( col("guid").alias("SystemGuid"), col("HistorySamples") ).alias("value"), ) ) else: cleaned_pcdm_df = self.data.withColumn("guid", expr("uuid()")).withColumn( "value", struct( col("guid").alias("SystemGuid"), struct( col("TagName").alias("ItemName"), lit(self.quality).alias("Quality"), col("EventTime").alias("Time"), col("Value").alias("Value"), ).alias("HistorySamples"), ), ) df = cleaned_pcdm_df.withColumn( "CloudPlatformEvent", struct( lit(datetime.now(tz=pytz.UTC)).alias("CreatedTime"), lit(expr("uuid()")).alias("Id"), col("guid").alias("CreatorId"), lit("CloudPlatformSystem").alias("CreatorType"), lit(None).alias("GeneratorId"), lit("CloudPlatformTenant").alias("GeneratorType"), col("guid").alias("TargetId"), lit("CloudPlatformTenant").alias("TargetType"), lit(None).alias("TargetContext"), struct( lit("TextualBody").alias("type"), to_json(col("value")).alias("value"), lit("application/json").alias("format"), ).alias("Body"), array( struct( lit("SystemType").alias("Key"), lit("apm-system").alias("Value"), ), struct(lit("SystemGuid").alias("Key"), col("guid").alias("Value")), ).alias("BodyProperties"), lit("DataChange.Update").alias("EventType"), ), ).withColumn("AnnotationStreamIds", lit("self.AnnotationStreamIds")) return df.select("CloudPlatformEvent", "AnnotationStreamIds")
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/transformers/spark/pcdm_to_honeywell_apm.py
0.764628
0.483283
pcdm_to_honeywell_apm.py
pypi
from pyspark.sql import DataFrame from pyspark.sql.functions import from_json, col, explode, when, lit from ..interfaces import TransformerInterface from ..._pipeline_utils.models import Libraries, SystemType from ..._pipeline_utils.spark import EDGEX_SCHEMA class EdgeXOPCUAJsonToPCDMTransformer(TransformerInterface): """ Converts a Spark Dataframe column containing a json string created by EdgeX to the Process Control Data Model Args: data (DataFrame): Dataframe containing the column with EdgeX data source_column_name (str): Spark Dataframe column containing the OPC Publisher Json OPC UA data status_null_value (optional str): If populated, will replace 'Good' in the Status column with the specified value. change_type_value (optional str): If populated, will replace 'insert' in the ChangeType column with the specified value. """ data: DataFrame source_column_name: str status_null_value: str change_type_value: str tagname_field: str def __init__( self, data: DataFrame, source_column_name: str, status_null_value: str = "Good", change_type_value: str = "insert", tagname_field="resourceName", ) -> None: self.data = data self.source_column_name = source_column_name self.status_null_value = status_null_value self.change_type_value = change_type_value self.tagname_field = tagname_field @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYSPARK """ return SystemType.PYSPARK @staticmethod def libraries(): libraries = Libraries() return libraries @staticmethod def settings() -> dict: return {} def pre_transform_validation(self): return True def post_transform_validation(self): return True def transform(self) -> DataFrame: """ Returns: DataFrame: A dataframe with the specified column converted to PCDM """ df = ( self.data.withColumn( self.source_column_name, from_json(self.source_column_name, EDGEX_SCHEMA), ) .select("*", explode("{}.readings".format(self.source_column_name))) .selectExpr( "explode({}.readings.{}) as TagName".format( self.source_column_name, self.tagname_field ), "to_utc_timestamp(to_timestamp((col.origin / 1000000000)), current_timezone()) as EventTime", "col.value as Value", "col.valueType as ValueType", ) .withColumn("Status", lit(self.status_null_value)) .withColumn("ChangeType", lit(self.change_type_value)) .withColumn( "ValueType", ( when(col("ValueType") == "Int8", "integer") .when(col("ValueType") == "Int16", "integer") .when(col("ValueType") == "Int32", "integer") .when(col("ValueType") == "Int64", "integer") .when(col("ValueType") == "Uint8", "integer") .when(col("ValueType") == "Uint16", "integer") .when(col("ValueType") == "Uint32", "integer") .when(col("ValueType") == "Uint64", "integer") .when(col("ValueType") == "Float32", "float") .when(col("ValueType") == "Float64", "float") .when(col("ValueType") == "Bool", "bool") .otherwise("string") ), ) ) return df.select( "TagName", "EventTime", "Status", "Value", "ValueType", "ChangeType" )
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/transformers/spark/edgex_opcua_json_to_pcdm.py
0.885959
0.454896
edgex_opcua_json_to_pcdm.py
pypi
from pyspark.sql import DataFrame from pyspark.sql.functions import from_json, col, explode, when, lit, regexp_replace from ..interfaces import TransformerInterface from ..._pipeline_utils.models import Libraries, SystemType from ..._pipeline_utils.spark import APM_SCHEMA class HoneywellAPMJsonToPCDMTransformer(TransformerInterface): """ Converts a Spark Dataframe column containing a json string created by Honeywell APM to the Process Control Data Model Args: data (DataFrame): Dataframe containing the column with EdgeX data source_column_name (str): Spark Dataframe column containing the OPC Publisher Json OPC UA data status_null_value (optional str): If populated, will replace 'Good' in the Status column with the specified value. change_type_value (optional str): If populated, will replace 'insert' in the ChangeType column with the specified value. """ data: DataFrame source_column_name: str status_null_value: str change_type_value: str def __init__( self, data: DataFrame, source_column_name: str, status_null_value: str = "Good", change_type_value: str = "insert", ) -> None: self.data = data self.source_column_name = source_column_name self.status_null_value = status_null_value self.change_type_value = change_type_value @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYSPARK """ return SystemType.PYSPARK @staticmethod def libraries(): libraries = Libraries() return libraries @staticmethod def settings() -> dict: return {} def pre_transform_validation(self): return True def post_transform_validation(self): return True def transform(self) -> DataFrame: """ Returns: DataFrame: A dataframe with the specified column converted to PCDM """ df = ( self.data.withColumn("body", from_json(self.source_column_name, APM_SCHEMA)) .select(explode("body.Samples")) .selectExpr("*", "to_timestamp(col.Time) as EventTime") .withColumn("TagName", col("col.Itemname")) .withColumn("Status", lit(self.status_null_value)) .withColumn("Value", col("col.Value")) .withColumn( "ValueType", when(col("value").cast("float").isNotNull(), "float").when( col("value").cast("float").isNull(), "string" ), ) .withColumn("ChangeType", lit(self.change_type_value)) ) return df.select( "TagName", "EventTime", "Status", "Value", "ValueType", "ChangeType" )
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/transformers/spark/honeywell_apm_to_pcdm.py
0.869133
0.414603
honeywell_apm_to_pcdm.py
pypi
from pyspark.sql import DataFrame, SparkSession from pyspark.sql.functions import col, get_json_object, element_at, when from ..interfaces import TransformerInterface from ..._pipeline_utils.models import Libraries, SystemType from ...sources.spark.delta import SparkDeltaSource class SSIPPIJsonStreamToPCDMTransformer(TransformerInterface): """ Converts a Spark DataFrame containing Binary JSON data and related Properties to the Process Control Data Model For more information about the SSIP PI Streaming Connector, please see [here.](https://bakerhughesc3.ai/oai-solution/shell-sensor-intelligence-platform/) Args: spark (SparkSession): Spark Session data (DataFrame): DataFrame containing the path and binaryFile data source_column_name (str): Spark Dataframe column containing the Binary json data properties_column_name (str): Spark Dataframe struct typed column containing an element with the PointType metadata_delta_table (optional, str): Name of a metadata table that can be used for PointType mappings """ spark: SparkSession data: DataFrame source_column_name: str properties_column_name: str metadata_delta_table: str def __init__( self, spark: SparkSession, data: DataFrame, source_column_name: str, properties_column_name: str, metadata_delta_table: str = None, ) -> None: self.spark = spark self.data = data self.source_column_name = source_column_name self.properties_column_name = properties_column_name self.metadata_delta_table = metadata_delta_table @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYSPARK """ return SystemType.PYSPARK @staticmethod def libraries(): libraries = Libraries() return libraries @staticmethod def settings() -> dict: return {} def pre_transform_validation(self): return True def post_transform_validation(self): return True def transform(self) -> DataFrame: """ Returns: DataFrame: A dataframe with the provided Binary data converted to PCDM """ df = ( self.data.withColumn( self.source_column_name, col(self.source_column_name).cast("string") ) .withColumn( "EventDate", get_json_object(col(self.source_column_name), "$.EventTime").cast( "date" ), ) .withColumn( "TagName", get_json_object(col(self.source_column_name), "$.TagName").cast( "string" ), ) .withColumn( "EventTime", get_json_object(col(self.source_column_name), "$.EventTime").cast( "timestamp" ), ) .withColumn( "Status", get_json_object(col(self.source_column_name), "$.Quality").cast( "string" ), ) .withColumn( "Value", get_json_object(col(self.source_column_name), "$.Value").cast("string"), ) .withColumn( "PointType", element_at(col(self.properties_column_name), "PointType") ) .withColumn( "Action", element_at(col(self.properties_column_name), "Action").cast("string"), ) ) if self.metadata_delta_table != None: metadata_df = SparkDeltaSource( self.spark, {}, self.metadata_delta_table ).read_batch() metadata_df = metadata_df.select( "TagName", col("PointType").alias("MetadataPointType") ) df = df.join(metadata_df, (df.TagName == metadata_df.TagName), "left") df = df.withColumn( "PointType", (when(col("PointType").isNull(), col("MetadataPointType"))).otherwise( col("PointType") ), ) return ( df.withColumn( "ValueType", ( when(col("PointType") == "Digital", "string") .when(col("PointType") == "String", "string") .when(col("PointType") == "Float16", "float") .when(col("PointType") == "Float32", "float") .when(col("PointType") == "Float64", "float") .when(col("PointType") == "Int16", "integer") .when(col("PointType") == "Int32", "integer") .otherwise("string") ), ) .selectExpr( "*", "CASE WHEN ValueType = 'integer' THEN try_cast(Value as integer) END as Value_Integer", "CASE WHEN ValueType = 'float' THEN try_cast(Value as float) END as Value_Float", ) .withColumn( "ValueType", when( (col("Value_Integer").isNull()) & (col("ValueType") == "integer"), "string", ) .when( (col("Value_Float").isNull()) & (col("ValueType") == "float"), "string", ) .otherwise(col("ValueType")), ) .withColumn( "ChangeType", ( when(col("Action") == "Insert", "insert") .when(col("Action") == "Add", "insert") .when(col("Action") == "Delete", "delete") .when(col("Action") == "Update", "update") .when(col("Action") == "Refresh", "update") ), ) .select( col("EventDate"), col("TagName"), col("EventTime"), col("Status"), col("Value"), col("ValueType"), col("ChangeType"), ) )
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/transformers/spark/ssip_pi_binary_json_to_pcdm.py
0.898864
0.55911
ssip_pi_binary_json_to_pcdm.py
pypi
from pyspark.sql import DataFrame from pyspark.sql.functions import ( from_json, col, explode, when, lit, coalesce, to_timestamp, ) from ..interfaces import TransformerInterface from ..._pipeline_utils.models import Libraries, SystemType from ..._pipeline_utils.spark import FLEDGE_SCHEMA class FledgeOPCUAJsonToPCDMTransformer(TransformerInterface): """ Converts a Spark Dataframe column containing a json string created by Fledge to the Process Control Data Model Args: data (DataFrame): Dataframe containing the column with Json Fledge data source_column_name (str): Spark Dataframe column containing the OPC Publisher Json OPC UA data status_null_value (str): If populated, will replace 'Good' in the Status column with the specified value. timestamp_formats (list[str]): Specifies the timestamp formats to be used for converting the timestamp string to a Timestamp Type. For more information on formats, refer to this [documentation.](https://spark.apache.org/docs/latest/sql-ref-datetime-pattern.html) """ data: DataFrame source_column_name: str status_null_value: str change_type_value: str timestamp_formats: list def __init__( self, data: DataFrame, source_column_name: str, status_null_value: str = "Good", change_type_value: str = "insert", timestamp_formats: list = [ "yyyy-MM-dd'T'HH:mm:ss.SSSX", "yyyy-MM-dd'T'HH:mm:ssX", ], ) -> None: # NOSONAR self.data = data self.source_column_name = source_column_name self.status_null_value = status_null_value self.change_type_value = change_type_value self.timestamp_formats = timestamp_formats @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYSPARK """ return SystemType.PYSPARK @staticmethod def libraries(): libraries = Libraries() return libraries @staticmethod def settings() -> dict: return {} def pre_transform_validation(self): return True def post_transform_validation(self): return True def transform(self) -> DataFrame: """ Returns: DataFrame: A dataframe with the specified column converted to PCDM """ df = ( self.data.withColumn( self.source_column_name, from_json(self.source_column_name, FLEDGE_SCHEMA), ) .selectExpr("inline({})".format(self.source_column_name)) .select(explode("readings"), "timestamp") .withColumn( "EventTime", coalesce( *[to_timestamp(col("timestamp"), f) for f in self.timestamp_formats] ), ) .withColumnRenamed("key", "TagName") .withColumnRenamed("value", "Value") .withColumn("Status", lit(self.status_null_value)) .withColumn( "ValueType", when(col("value").cast("float").isNotNull(), "float").when( col("value").cast("float").isNull(), "string" ), ) .withColumn("ChangeType", lit(self.change_type_value)) ) return df.select( "TagName", "EventTime", "Status", "Value", "ValueType", "ChangeType" )
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/transformers/spark/fledge_opcua_json_to_pcdm.py
0.903397
0.488588
fledge_opcua_json_to_pcdm.py
pypi
import pyarrow as pa import pyarrow.parquet as pq import pandas as pd from pyspark.sql import DataFrame from ..interfaces import TransformerInterface from ..._pipeline_utils.models import Libraries, SystemType from ..._pipeline_utils.constants import get_default_package class SSIPPIBinaryFileToPCDMTransformer(TransformerInterface): """ Converts a Spark DataFrame column containing binaryFile parquet data to the Process Control Data Model. This DataFrame should contain a path and the binary data. Typically this can be done using the Autoloader source component and specify "binaryFile" as the format. For more information about the SSIP PI Batch Connector, please see [here.](https://bakerhughesc3.ai/oai-solution/shell-sensor-intelligence-platform/) Args: data (DataFrame): DataFrame containing the path and binaryFile data """ data: DataFrame def __init__(self, data: DataFrame) -> None: self.data = data @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYSPARK """ return SystemType.PYSPARK @staticmethod def libraries(): libraries = Libraries() libraries.add_pypi_library(get_default_package("pyarrow")) libraries.add_pypi_library(get_default_package("pandas")) return libraries @staticmethod def settings() -> dict: return {} def pre_transform_validation(self): return True def post_transform_validation(self): return True @staticmethod def _convert_binary_to_pandas(pdf): try: binary_list = pdf.values.tolist() binary_data = binary_list[0][3] buf = pa.py_buffer(binary_data) table = pq.read_table(buf) except Exception: return pd.DataFrame( { "EventDate": pd.Series([], dtype="datetime64[ns]"), "TagName": pd.Series([], dtype="str"), "EventTime": pd.Series([], dtype="datetime64[ns]"), "Status": pd.Series([], dtype="str"), "Value": pd.Series([], dtype="str"), "ValueType": pd.Series([], dtype="str"), "ChangeType": pd.Series([], dtype="str"), } ) value_type = str(table.schema.field("Value").type) if value_type == "int16" or value_type == "int32": value_type = "integer" output_pdf = table.to_pandas() output_pdf["EventDate"] = output_pdf["EventTime"].dt.date output_pdf["Value"] = output_pdf["Value"].astype(str) output_pdf["ChangeType"] = "insert" output_pdf["ValueType"] = value_type output_pdf = output_pdf[ [ "EventDate", "TagName", "EventTime", "Status", "Value", "ValueType", "ChangeType", ] ] return output_pdf def transform(self) -> DataFrame: """ Returns: DataFrame: A dataframe with the provided Binary data convert to PCDM """ return self.data.groupBy("path").applyInPandas( SSIPPIBinaryFileToPCDMTransformer._convert_binary_to_pandas, schema="EventDate DATE, TagName STRING, EventTime TIMESTAMP, Status STRING, Value STRING, ValueType STRING, ChangeType STRING", )
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/transformers/spark/ssip_pi_binary_file_to_pcdm.py
0.727298
0.504516
ssip_pi_binary_file_to_pcdm.py
pypi
from abc import abstractmethod from pyspark.sql import DataFrame, SparkSession from pyspark.sql.functions import col, expr, lit from pyspark.sql.types import StructType from ....data_models.meters.ami_meters import ValueType, SeriesType, ModelType from ..interfaces import TransformerInterface from ..._pipeline_utils.mdm import MDM_USAGE_SCHEMA, MDM_META_SCHEMA from ..._pipeline_utils.models import Libraries, SystemType class BaseRawToMDMTransformer(TransformerInterface): """ Base class for all the Raw to Meters Data Model Transformers. Meters Data Model requires two outputs: - `UsageData` : To store measurement(value) as timeseries data. - `MetaData` : To store meters related meta information. It supports the generation of both the outputs as they share some common properties. Args: spark (SparkSession): Spark Session instance. data (DataFrame): Dataframe containing the raw MISO data. output_type (str): Must be one of `usage` or `meta`. name (str): Set this to override default `name` column. description (str): Set this to override default `description` column. value_type (ValueType): Set this to override default `value_type` column. version (str): Set this to override default `version` column. series_id (str): Set this to override default `series_id` column. series_parent_id (str): Set this to override default `series_parent_id` column. """ spark: SparkSession data: DataFrame output_type: str input_schema: StructType target_schema: StructType uid_col: str series_id_col: str timestamp_col: str interval_timestamp_col: str value_col: str series_parent_id_col: str name_col: str uom_col: str description_col: str timestamp_start_col: str timestamp_end_col: str time_zone_col: str version_col: str series_type: SeriesType model_type: ModelType value_type: ValueType properties_col: str def __init__( self, spark: SparkSession, data: DataFrame, output_type: str, name: str = None, description: str = None, value_type: ValueType = None, version: str = None, series_id: str = None, series_parent_id: str = None, ): self.spark = spark self.data = data self.output_type = output_type self.name = name if name is not None else self.name_col self.description = ( description if description is not None else self.description_col ) self.value_type = value_type if value_type is not None else self.value_type self.version = version if version is not None else self.version_col self.series_id = series_id if series_id is not None else self.series_id_col self.series_parent_id = ( series_parent_id if series_parent_id is not None else self.series_parent_id_col ) @staticmethod def system_type(): return SystemType.PYSPARK @staticmethod def libraries(): libraries = Libraries() return libraries @staticmethod def settings() -> dict: return {} def pre_transform_validation(self) -> bool: valid_output_types = ["usage", "meta"] if self.output_type not in valid_output_types: raise ValueError( f"Invalid output_type `{self.output_type}` given. Must be one of {valid_output_types}" ) assert str(self.data.schema) == str(self.input_schema) assert type(self.series_type).__name__ == SeriesType.__name__ assert type(self.model_type).__name__ == ModelType.__name__ assert type(self.value_type).__name__ == ValueType.__name__ return True def post_transform_validation(self) -> bool: assert str(self.data.schema) == str(self.target_schema) return True def _get_transformed_df(self) -> DataFrame: if self.output_type == "usage": self.target_schema = MDM_USAGE_SCHEMA return self._get_usage_transformed_df() else: self.target_schema = MDM_META_SCHEMA return self._get_meta_transformed_df() def _convert_into_target_schema(self) -> None: """ Converts a Spark DataFrame structure into new structure based on the Target Schema. Returns: Nothing. """ df: DataFrame = self.data df = df.select(self.target_schema.names) for field in self.target_schema.fields: df = df.withColumn(field.name, col(field.name).cast(field.dataType)) self.data = self.spark.createDataFrame(df.rdd, self.target_schema) def transform(self) -> DataFrame: """ Returns: DataFrame: A dataframe with the raw data converted into MDM. """ self.pre_transform_validation() self.data = self._get_transformed_df() self._convert_into_target_schema() self.post_transform_validation() return self.data def _add_uid_column(self, df: DataFrame) -> DataFrame: return df.withColumn("Uid", expr(self.uid_col)) def _add_series_id_column(self, df: DataFrame) -> DataFrame: return df.withColumn("SeriesId", expr(self.series_id)) def _add_timestamp_column(self, df: DataFrame) -> DataFrame: return df.withColumn("Timestamp", expr(self.timestamp_col)) def _add_interval_timestamp_column(self, df: DataFrame) -> DataFrame: return df.withColumn("IntervalTimestamp", expr(self.interval_timestamp_col)) def _add_value_column(self, df: DataFrame) -> DataFrame: return df.withColumn("Value", expr(self.value_col)) def _add_series_parent_id_column(self, df: DataFrame) -> DataFrame: return df.withColumn("SeriesParentId", expr(self.series_parent_id)) def _add_name_column(self, df: DataFrame) -> DataFrame: return df.withColumn("Name", expr(self.name)) def _add_uom_column(self, df: DataFrame) -> DataFrame: return df.withColumn("Uom", expr(self.uom_col)) def _add_description_column(self, df: DataFrame) -> DataFrame: return df.withColumn("Description", expr(self.description)) def _add_timestamp_start_column(self, df: DataFrame) -> DataFrame: return df.withColumn("TimestampStart", expr(self.timestamp_start_col)) def _add_timestamp_end_column(self, df: DataFrame) -> DataFrame: return df.withColumn("TimestampEnd", expr(self.timestamp_end_col)) def _add_time_zone_column(self, df: DataFrame) -> DataFrame: return df.withColumn("Timezone", expr(self.time_zone_col)) def _add_version_column(self, df: DataFrame) -> DataFrame: return df.withColumn("Version", expr(self.version)) def _add_series_type_column(self, df: DataFrame) -> DataFrame: return df.withColumn("SeriesType", lit(self.series_type.value)) def _add_model_type_column(self, df: DataFrame) -> DataFrame: return df.withColumn("ModelType", lit(self.model_type.value)) def _add_value_type_column(self, df: DataFrame) -> DataFrame: return df.withColumn("ValueType", lit(self.value_type.value)) def _add_properties_column(self, df: DataFrame) -> DataFrame: return df.withColumn("Properties", expr(self.properties_col)) def _pre_process(self) -> DataFrame: return self.data @staticmethod def _post_process(df: DataFrame) -> DataFrame: return df def _get_usage_transformed_df(self) -> DataFrame: df = self._pre_process() df = self._add_uid_column(df) df = self._add_series_id_column(df) df = self._add_timestamp_column(df) df = self._add_interval_timestamp_column(df) df = self._add_value_column(df) df = self._post_process(df) return df def _get_meta_transformed_df(self) -> DataFrame: df = self._pre_process() df = self._add_uid_column(df) df = self._add_series_id_column(df) df = self._add_series_parent_id_column(df) df = self._add_name_column(df) df = self._add_uom_column(df) df = self._add_description_column(df) df = self._add_timestamp_start_column(df) df = self._add_timestamp_end_column(df) df = self._add_time_zone_column(df) df = self._add_version_column(df) df = self._add_series_type_column(df) df = self._add_model_type_column(df) df = self._add_value_type_column(df) df = self._add_properties_column(df) df = self._post_process(df) return df
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/transformers/spark/base_raw_to_mdm.py
0.869604
0.66556
base_raw_to_mdm.py
pypi
from pyspark.sql import DataFrame from pyspark.sql.functions import ( from_json, col, explode, to_timestamp, when, lit, coalesce, ) from pyspark.sql.types import ArrayType, StringType from ..interfaces import TransformerInterface from ..._pipeline_utils.models import Libraries, SystemType from ..._pipeline_utils.spark import OPC_PUBLISHER_SCHEMA class OPCPublisherOPCUAJsonToPCDMTransformer(TransformerInterface): """ Converts a Spark Dataframe column containing a json string created by OPC Publisher to the Process Control Data Model Args: data (DataFrame): Dataframe containing the column with Json OPC UA data source_column_name (str): Spark Dataframe column containing the OPC Publisher Json OPC UA data multiple_rows_per_message (optional bool): Each Dataframe Row contains an array of/multiple OPC UA messages. The list of Json will be exploded into rows in the Dataframe. status_null_value (optional str): If populated, will replace null values in the Status column with the specified value. timestamp_formats (optional list[str]): Specifies the timestamp formats to be used for converting the timestamp string to a Timestamp Type. For more information on formats, refer to this [documentation.](https://spark.apache.org/docs/latest/sql-ref-datetime-pattern.html) filter (optional str): Enables providing a filter to the data which can be required in certain scenarios. For example, it would be possible to filter on IoT Hub Device Id and Module by providing a filter in SQL format such as `systemProperties.iothub-connection-device-id = "<Device Id>" AND systemProperties.iothub-connection-module-id = "<Module>"` """ data: DataFrame source_column_name: str multiple_rows_per_message: bool tagname_field: str status_null_value: str change_type_value: str timestamp_formats: list filter: str def __init__( self, data: DataFrame, source_column_name: str, multiple_rows_per_message: bool = True, tagname_field: str = "DisplayName", status_null_value: str = None, change_type_value: str = "insert", timestamp_formats: list = [ "yyyy-MM-dd'T'HH:mm:ss.SSSX", "yyyy-MM-dd'T'HH:mm:ssX", ], filter: str = None, ) -> None: # NOSONAR self.data = data self.source_column_name = source_column_name self.multiple_rows_per_message = multiple_rows_per_message self.tagname_field = tagname_field self.status_null_value = status_null_value self.change_type_value = change_type_value self.timestamp_formats = timestamp_formats self.filter = filter @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYSPARK """ return SystemType.PYSPARK @staticmethod def libraries(): libraries = Libraries() return libraries @staticmethod def settings() -> dict: return {} def pre_transform_validation(self): return True def post_transform_validation(self): return True def transform(self) -> DataFrame: """ Returns: DataFrame: A dataframe with the specified column converted to PCDM """ if self.multiple_rows_per_message: df = self.data.withColumn( self.source_column_name, from_json(col(self.source_column_name), ArrayType(StringType())), ).withColumn(self.source_column_name, explode(self.source_column_name)) else: df = self.data.withColumn( self.source_column_name, from_json(col(self.source_column_name), StringType()), ) if self.filter != None: df = df.where(self.filter) df = ( df.withColumn( "OPCUA", from_json(col(self.source_column_name), OPC_PUBLISHER_SCHEMA) ) .withColumn("TagName", (col("OPCUA.{}".format(self.tagname_field)))) .withColumn( "EventTime", coalesce( *[ to_timestamp(col("OPCUA.Value.SourceTimestamp"), f) for f in self.timestamp_formats ] ), ) .withColumn("Value", col("OPCUA.Value.Value")) .withColumn( "ValueType", when(col("Value").cast("float").isNotNull(), "float") .when(col("Value").cast("float").isNull(), "string") .otherwise("unknown"), ) .withColumn("ChangeType", lit(self.change_type_value)) ) status_col_name = "OPCUA.Value.StatusCode.Symbol" if self.status_null_value != None: df = df.withColumn( "Status", when(col(status_col_name).isNotNull(), col(status_col_name)).otherwise( lit(self.status_null_value) ), ) else: df = df.withColumn("Status", col(status_col_name)) return df.select( "TagName", "EventTime", "Status", "Value", "ValueType", "ChangeType" )
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/transformers/spark/opc_publisher_opcua_json_to_pcdm.py
0.90932
0.493226
opc_publisher_opcua_json_to_pcdm.py
pypi
from datetime import datetime from pyspark.sql import DataFrame, SparkSession from pyspark.sql.functions import when, substring, lit, col, concat from pyspark.sql.types import IntegerType from ...interfaces import TransformerInterface from ...._pipeline_utils.models import Libraries, SystemType from ...._pipeline_utils.weather import WEATHER_DATA_MODEL class RawForecastToWeatherDataModel(TransformerInterface): """ Converts a raw forecast into weather data model Args: spark (SparkSession): Spark Session instance. data (DataFrame): Dataframe to be transformed """ spark: SparkSession data: DataFrame def __init__( self, spark: SparkSession, data: DataFrame, ) -> None: self.spark = spark self.data = data self.target_schema = WEATHER_DATA_MODEL @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYSPARK """ return SystemType.PYSPARK @staticmethod def libraries(): libraries = Libraries() return libraries @staticmethod def settings() -> dict: return {} def pre_transform_validation(self): return True def post_transform_validation(self) -> bool: assert str(self.data.schema) == str(self.target_schema) return True def _convert_into_target_schema(self) -> None: """ Converts a Spark DataFrame structure into new structure based on the Target Schema. Returns: Nothing. """ df: DataFrame = self.data df = df.select(self.target_schema.names) for field in self.target_schema.fields: df = df.withColumn(field.name, col(field.name).cast(field.dataType)) self.data = self.spark.createDataFrame(df.rdd, self.target_schema) def transform(self) -> DataFrame: """ Returns: DataFrame: A Forecast dataframe converted into Weather Data Model """ self.pre_transform_validation() processed_date = datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S") df = ( self.data.withColumn("WeatherDay", substring("FcstValidLocal", 0, 10)) .withColumn( "WeatherHour", (substring("FcstValidLocal", 12, 2).cast(IntegerType()) + 1), ) .withColumn("WeatherTimezoneOffset", substring("FcstValidLocal", 20, 5)) .withColumn("WeatherType", lit("F")) .withColumn("ProcessedDate", lit(processed_date)) .withColumnRenamed("Temp", "Temperature") .withColumnRenamed("Dewpt", "DewPoint") .withColumnRenamed("Rh", "Humidity") .withColumnRenamed("Hi", "HeatIndex") .withColumnRenamed("Wc", "WindChill") .withColumnRenamed("Wdir", "WindDirection") .withColumnRenamed("Wspd", "WindSpeed") .withColumnRenamed("Clds", "CloudCover") .withColumn("WetBulbTemp", lit("")) .withColumn("SolarIrradiance", lit("")) .withColumnRenamed("Qpf", "Precipitation") .withColumnRenamed("DayInd", "DayOrNight") .withColumnRenamed("Dow", "DayOfWeek") .withColumnRenamed("Gust", "WindGust") .withColumnRenamed("Mslp", "MslPressure") .withColumnRenamed("Num", "ForecastDayNum") .withColumnRenamed("Pop", "PropOfPrecip") .withColumnRenamed("PrecipType", "PrecipType") .withColumnRenamed("SnowQpf", "SnowAccumulation") .withColumnRenamed("UvIndex", "UvIndex") .withColumnRenamed("Vis", "Visibility") ) columns = df.columns for column in columns: df = df.withColumn( column, when(col(column) == "", lit(None)).otherwise(col(column)) ) self.data = df self._convert_into_target_schema() self.post_transform_validation() return self.data
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/transformers/spark/the_weather_company/raw_forecast_to_weather_data_model.py
0.900521
0.57517
raw_forecast_to_weather_data_model.py
pypi
import os import pandas as pd import numpy as np import xarray as xr from ...interfaces import TransformerInterface from ...._pipeline_utils.models import Libraries, SystemType from ...._pipeline_utils.weather_ecmwf import ( RTDIP_STRING_WEATHER_DATA_MODEL, RTDIP_FLOAT_WEATHER_DATA_MODEL, ) class ECMWFExtractBaseToWeatherDataModel(TransformerInterface): """ Base class for extracting forecast data downloaded in .nc format from ECMWF MARS Server. Args: load_path (str): Path to local directory where the nc files will be stored, in format "yyyy-mm-dd_HH.nc" date_start (str): Start date of extraction in "YYYY-MM-DD HH:MM:SS" format date_end (str): End date of extraction in "YYYY-MM-DD HH:MM:SS" format run_frequency (str):Frequency format of runs to download, e.g. "H" run_interval (str): Interval of runs, e.g. a run_frequency of "H" and run_interval of "12" will extract the data of the 00 and 12 run for each day. lat (DataArray): Latitude values to extract from nc files lon (DataArray): Longitude values to extract from nc files utc (bool = True): Whether to convert the time to UTC or not """ def __init__( self, load_path: str, date_start: str, date_end: str, run_interval: str, run_frequency: str, lat: xr.DataArray, lon: xr.DataArray, utc: bool = True, ): self.load_path = load_path self.lat = lat self.lon = lon self.date_start = date_start self.date_end = date_end self.run_frequency = run_frequency self.run_interval = run_interval self.utc = utc self.dates = pd.date_range( start=self.date_start, end=self.date_end, freq=self.run_interval + self.run_frequency, ) @staticmethod def system_type(): """ Attributes: SystemType (Environment): Requires PYSPARK """ return SystemType.PYSPARK @staticmethod def libraries(): libraries = Libraries() return libraries @staticmethod def settings() -> dict: return {} def pre_transform_validation(self): return True def post_transform_validation(self): return True @staticmethod def _convert_ws_tag_names(x: list): """ Converts the tag names of wind speed from the format used in the nc files to the format used in the weather data model. Args: x (list): List of variable names of raw tags to be extracted from the nc files Returns: new_tags(list): List of variable names of raw tags to be extracted from the nc files, converted to the format used in the weather data model. """ convert_dict = { "10u": "u10", "100u": "u100", "200u": "u200", "10v": "v10", "100v": "v100", "200v": "v200", } new_tags = [convert_dict[i] if i in convert_dict.keys() else i for i in x] return new_tags def transform( self, tag_prefix: str, variables: list, method: str = "nearest" ) -> pd.DataFrame: """Extract raw data from stored nc filed downloaded via ECMWF MARS. Args: tag_prefix (str): Prefix of the tag names of raw tags to be added to the dataframe variables (list): List of variable names of raw tags to be extracted from the nc files method (str, optional): The method used to match latitude/longitude in xarray using .sel(), by default "nearest" Returns: df (pd.DataFrame): Raw data extracted with lat, lon, run_time, target_time as a pd.multiindex and variables as columns. """ df = [] # e.g. 10u variable is saved as u10 in the file... vars_processed = self._convert_ws_tag_names(variables) for i in self.dates: filename = f"{str(i.date())}_{i.hour:02}.nc" fullpath = os.path.join(self.load_path, filename) ds = xr.open_dataset(fullpath) tmp = ( ds[vars_processed] .sel(latitude=self.lat, longitude=self.lon, method=method) .to_dataframe() ) tmp["run_time"] = i df.append(tmp) ds.close() df = pd.concat(df, axis=0) df = df.rename_axis( index={ "time": "target_time", "latitude": "lat", "longitude": "lon", } ) df = df.reset_index(["lat", "lon"]) df[["lat", "lon"]] = df[["lat", "lon"]].apply( lambda x: np.round(x.astype(float), 5) ) if "level" in df.index.names: index_names = ["lat", "lon", "level", "run_time", "target_time"] else: index_names = ["lat", "lon", "run_time", "target_time"] df = df.reset_index().set_index(index_names) if self.utc: df = df.tz_localize("UTC", level="target_time") df = df.tz_localize("UTC", level="run_time") df = df[~(df.index.duplicated(keep="first"))] df = df.sort_index(axis=0) df = df.sort_index(axis=1) df_new = df.reset_index() df_new = df_new.rename( columns={ "lat": "Latitude", "lon": "Longitude", "run_time": "EnqueuedTime", "target_time": "EventTime", } ) df_new = ( df_new.set_index(["Latitude", "Longitude", "EnqueuedTime", "EventTime"])[ vars_processed ] .rename_axis("Measure", axis=1) .stack() .reset_index(name="Value") ) df_new["Source"] = "ECMWF_MARS" df_new["Status"] = "Good" df_new["Latest"] = True df_new["EventDate"] = pd.to_datetime(df_new["EventTime"]).dt.date df_new["TagName"] = ( tag_prefix + df_new["Latitude"].astype(str) + "_" + df_new["Longitude"].astype(str) + "_" + df_new["Source"] + "_" + df_new["Measure"] ) df_final = df_new.drop("Measure", axis=1) return df_final
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/transformers/spark/ecmwf/nc_extractbase_to_weather_data_model.py
0.753557
0.41253
nc_extractbase_to_weather_data_model.py
pypi
import numpy as np import xarray as xr from .nc_extractbase_to_weather_data_model import ECMWFExtractBaseToWeatherDataModel class ECMWFExtractGridToWeatherDataModel(ECMWFExtractBaseToWeatherDataModel): """Extract a grid from a local .nc file downloaded from ECMWF via MARS Args: lat_min (float): Minimum latitude of grid to extract lat_max (float): Maximum latitude of grid to extract lon_min (float): Minimum longitude of grid to extract lon_max (float): Maximum longitude of grid to extract grid_step (float): The grid length to use to define the grid, e.g. 0.1. load_path (str): Path to local directory with nc files downloaded in format "yyyy-mm-dd_HH.nc" date_start (str): Start date of extraction in "YYYY-MM-DD HH:MM:SS" format date_end (str): End date of extraction in "YYYY-MM-DD HH:MM:SS" format run_frequency (str): Frequency format of runs to download, e.g. "H" run_interval (str): Interval of runs, e.g. a run_frequency of "H" and run_interval of "12" will extract the data of the 00 and 12 run for each day. utc (bool, optional): Add utc to the datetime indexes? Defaults to True. """ def __init__( self, lat_min: float, lat_max: float, lon_min: float, lon_max: float, grid_step: float, load_path: str, date_start: str, date_end: str, run_interval: str, run_frequency: str, utc: bool = True, ): # hmm careful with floating points, this seems to work ok... lat_xr = xr.DataArray( np.linspace( lat_min, lat_max, int(np.round((lat_max - lat_min) / grid_step)) + 1 ), dims=["latitude"], ) lon_xr = xr.DataArray( np.linspace( lon_min, lon_max, int(np.round((lon_max - lon_min) / grid_step)) + 1 ), dims=["longitude"], ) self.load_path = load_path self.lat_min = lat_min self.lat_max = lat_max self.lon_min = lon_min self.lon_max = lon_max self.grid_step = grid_step self.lat = lat_xr self.lon = lon_xr self.date_start = date_start self.date_end = date_end self.run_frequency = run_frequency self.run_interval = run_interval self.utc = utc super(ECMWFExtractGridToWeatherDataModel, self).__init__( lat=lat_xr, lon=lon_xr, load_path=load_path, date_start=date_start, date_end=date_end, run_interval=run_interval, run_frequency=run_frequency, utc=utc, )
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/pipelines/transformers/spark/ecmwf/nc_extractgrid_to_weather_data_model.py
0.855187
0.515681
nc_extractgrid_to_weather_data_model.py
pypi
from turbodbc import connect, make_options, Megabytes import pandas as pd from ..._sdk_utils.compare_versions import _package_version_meets_minimum from ..connection_interface import ConnectionInterface from ..cursor_interface import CursorInterface import logging import os class TURBODBCSQLConnection(ConnectionInterface): """ Turbodbc is a python module used to access relational databases through an ODBC interface. It will allow a user to connect to databricks clusters or sql warehouses. Turbodbc offers built-in NumPy support allowing it to be much faster for processing compared to other connectors. To find details for SQL warehouses server_hostname and http_path location to the SQL Warehouse tab in the documentation. Args: server_hostname: hostname for the cluster or SQL Warehouse http_path: Http path for the cluster or SQL Warehouse access_token: Azure AD Token Note: More fields such as driver can be configured upon extension. """ def __init__(self, server_hostname: str, http_path: str, access_token: str) -> None: _package_version_meets_minimum("turbodbc", "4.0.0") options = make_options( autocommit=True, read_buffer_size=Megabytes(100), use_async_io=True ) self.connection = connect( Driver="Simba Spark ODBC Driver", Host=server_hostname, Port=443, SparkServerType=3, ThriftTransport=2, SSL=1, AuthMech=11, Auth_AccessToken=access_token, Auth_Flow=0, HTTPPath=http_path, UseNativeQuery=1, FastSQLPrepare=1, ApplyFastSQLPrepareToAllQueries=1, DisableLimitZero=1, EnableAsyncExec=1, RowsFetchedPerBlock=os.getenv("RTDIP_ODBC_ROW_BLOCK_SIZE", 500000), turbodbc_options=options, ) def close(self) -> None: """Closes connection to database.""" try: self.connection.close() except Exception as e: logging.exception("error while closing the connection") raise e def cursor(self) -> object: """ Intiates the cursor and returns it. Returns: TURBODBCSQLCursor: Object to represent a databricks workspace with methods to interact with clusters/jobs. """ try: return TURBODBCSQLCursor(self.connection.cursor()) except Exception as e: logging.exception("error with cursor object") raise e class TURBODBCSQLCursor(CursorInterface): """ Object to represent a databricks workspace with methods to interact with clusters/jobs. Args: cursor: controls execution of commands on cluster or SQL Warehouse """ def __init__(self, cursor: object) -> None: self.cursor = cursor def execute(self, query: str) -> None: """ Prepares and runs a database query. Args: query: sql query to execute on the cluster or SQL Warehouse """ try: self.cursor.execute(query) except Exception as e: logging.exception("error while executing the query") raise e def fetch_all(self) -> list: """ Gets all rows of a query. Returns: list: list of results """ try: result = self.cursor.fetchallarrow() df = result.to_pandas() return df except Exception as e: logging.exception("error while fetching the rows from the query") raise e def close(self) -> None: """Closes the cursor.""" try: self.cursor.close() except Exception as e: logging.exception("error while closing the cursor") raise e
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/connectors/odbc/turbodbc_sql_connector.py
0.826362
0.195076
turbodbc_sql_connector.py
pypi
from databricks import sql from ..connection_interface import ConnectionInterface from ..cursor_interface import CursorInterface import logging class DatabricksSQLConnection(ConnectionInterface): """ The Databricks SQL Connector for Python is a Python library that allows you to use Python code to run SQL commands on Databricks clusters and Databricks SQL warehouses. The connection class represents a connection to a database and uses the Databricks SQL Connector API's for Python to intereact with cluster/jobs. To find details for SQL warehouses server_hostname and http_path location to the SQL Warehouse tab in the documentation. Args: server_hostname: Server hostname for the cluster or SQL Warehouse http_path: Http path for the cluster or SQL Warehouse access_token: Azure AD or Databricks PAT token """ def __init__(self, server_hostname: str, http_path: str, access_token: str) -> None: # call auth method self.connection = sql.connect( server_hostname=server_hostname, http_path=http_path, access_token=access_token, ) def close(self) -> None: """Closes connection to database.""" try: self.connection.close() except Exception as e: logging.exception("error while closing connection") raise e def cursor(self) -> object: """ Intiates the cursor and returns it. Returns: DatabricksSQLCursor: Object to represent a databricks workspace with methods to interact with clusters/jobs. """ try: return DatabricksSQLCursor(self.connection.cursor()) except Exception as e: logging.exception("error with cursor object") raise e class DatabricksSQLCursor(CursorInterface): """ Object to represent a databricks workspace with methods to interact with clusters/jobs. Args: cursor: controls execution of commands on cluster or SQL Warehouse """ def __init__(self, cursor: object) -> None: self.cursor = cursor def execute(self, query: str) -> None: """ Prepares and runs a database query. Args: query: sql query to execute on the cluster or SQL Warehouse """ try: self.cursor.execute(query) except Exception as e: logging.exception("error while executing the query") raise e def fetch_all(self) -> list: """ Gets all rows of a query. Returns: list: list of results """ try: result = self.cursor.fetchall_arrow() df = result.to_pandas() return df except Exception as e: logging.exception("error while fetching the rows of a query") raise e def close(self) -> None: """Closes the cursor.""" try: self.cursor.close() except Exception as e: logging.exception("error while closing the cursor") raise e
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/connectors/odbc/db_sql_connector.py
0.820757
0.374733
db_sql_connector.py
pypi
import pyodbc import pandas as pd from ..connection_interface import ConnectionInterface from ..cursor_interface import CursorInterface import logging class PYODBCSQLConnection(ConnectionInterface): """ PYODBC is an open source python module which allows access to ODBC databases. This allows the user to connect through ODBC to data in azure databricks clusters or sql warehouses. Uses the databricks API's (2.0) to connect to the sql server. Args: driver_path: Driver installed to work with PYODBC server_hostname: Server hostname for the cluster or SQL Warehouse http_path: Http path for the cluster or SQL Warehouse access_token: Azure AD Token Note 1: More fields can be configured here in the connection ie PORT, Schema, etc. Note 2: When using Unix, Linux or Mac OS brew installation of PYODBC is required for connection. """ def __init__( self, driver_path: str, server_hostname: str, http_path: str, access_token: str ) -> None: self.connection = pyodbc.connect( "Driver=" + driver_path + ";" + "HOST=" + server_hostname + ";" + "PORT=443;" + "Schema=default;" + "SparkServerType=3;" + "AuthMech=11;" + "UID=token;" + #'PWD=' + access_token+ ";" + "Auth_AccessToken=" + access_token + ";" "ThriftTransport=2;" + "SSL=1;" + "HTTPPath=" + http_path, autocommit=True, ) def close(self) -> None: """Closes connection to database.""" try: self.connection.close() except Exception as e: logging.exception("error while closing the connection") raise e def cursor(self) -> object: """ Intiates the cursor and returns it. Returns: PYODBCSQLCursor: Object to represent a databricks workspace with methods to interact with clusters/jobs. """ try: return PYODBCSQLCursor(self.connection.cursor()) except Exception as e: logging.exception("error with cursor object") raise e class PYODBCSQLCursor(CursorInterface): """ Object to represent a databricks workspace with methods to interact with clusters/jobs. Args: cursor: controls execution of commands on cluster or SQL Warehouse """ def __init__(self, cursor: object) -> None: self.cursor = cursor def execute(self, query: str) -> None: """ Prepares and runs a database query. Args: query: sql query to execute on the cluster or SQL Warehouse """ try: self.cursor.execute(query) except Exception as e: logging.exception("error while executing the query") raise e def fetch_all(self) -> list: """ Gets all rows of a query. Returns: list: list of results """ try: result = self.cursor.fetchall() cols = [column[0] for column in self.cursor.description] result = [list(x) for x in result] df = pd.DataFrame(result) df.columns = cols return df except Exception as e: logging.exception("error while fetching rows from the query") raise e def close(self) -> None: """Closes the cursor.""" try: self.cursor.close() except Exception as e: logging.exception("error while closing the cursor") raise e
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/connectors/odbc/pyodbc_sql_connector.py
0.82251
0.222046
pyodbc_sql_connector.py
pypi
from langchain.chat_models import ChatOpenAI from langchain import SQLDatabase from langchain.agents import create_sql_agent from langchain.agents.agent_toolkits import SQLDatabaseToolkit from langchain.agents.agent_types import AgentType import logging from ..._sdk_utils.compare_versions import _package_version_meets_minimum from ..connection_interface import ConnectionInterface from ..cursor_interface import CursorInterface class ChatOpenAIDatabricksConnection(ConnectionInterface): """ The Chat Open AI(Chat GPT) Databricks LLM Connector enables you to connect to a Databricks SQL Warehouse and use the Chat Open AI(Chat GPT) LLM to generate SQL queries. The connection class represents a connection to a database and uses the Databricks SQL Connector API's for Python to interact with cluster/jobs and langchain to connect to Chat Open AI(Chat GPT) LLM. To find details for SQL warehouses server_hostname and http_path location to the SQL Warehouse tab in the documentation. Args: catalog: Catalog name in Databricks schema: Schema name in Databricks server_hostname: Server hostname for the cluster or SQL Warehouse http_path: Http path for the cluster or SQL Warehouse access_token: Azure AD or Databricks PAT token openai_api_key: OpenAI API key openai_model: OpenAI model name sample_rows_in_table_info: Number of rows to sample when getting table information verbose_logging: Whether to log verbose messages """ def __init__( self, catalog: str, schema: str, server_hostname: str, http_path: str, access_token: str, openai_api_key: str, openai_model: str = "gpt-4", sample_rows_in_table_info: int = 3, verbose_logging: bool = False, ) -> None: _package_version_meets_minimum("langchain", "0.0.196") # connect to llm llm = ChatOpenAI( temperature=0, model_name=openai_model, openai_api_key=openai_api_key ) # connect to Databricks db = SQLDatabase.from_databricks( catalog=catalog, schema=schema, api_token=access_token, host=server_hostname, warehouse_id=http_path.split("/")[-1], sample_rows_in_table_info=sample_rows_in_table_info, ) prefix = """ ... Always adhere to the format and don't return empty names or half responses. """ toolkit = SQLDatabaseToolkit(db=db, llm=llm) model_agent_type = AgentType.ZERO_SHOT_REACT_DESCRIPTION if "0613" in openai_model: model_agent_type = AgentType.OPENAI_FUNCTIONS self.connection = create_sql_agent( llm=llm, prefix=prefix, toolkit=toolkit, verbose=verbose_logging, agent_type=model_agent_type, ) def close(self) -> None: """Closes connection to database.""" pass def cursor(self) -> object: """ Initiates the cursor and returns it. Returns: ChatOpenAIDatabricksSQLCursor: Object to represent a connection to Databricks and Open AI with methods to interact with clusters/jobs and ChatGPT. """ try: return ChatOpenAIDatabricksSQLCursor(self.connection) except Exception as e: logging.exception("error with cursor object") raise e def run(self, query: str) -> str: """ Runs a query on the ChatGPT and the Databricks Cluster or SQL Warehouse. Returns: str: response from ChatGPT and the Databricks Cluster or SQL Warehouse """ cursor = self.cursor() cursor.execute(query) return cursor.fetch_all() class ChatOpenAIDatabricksSQLCursor(CursorInterface): """ Object to represent a connection to Databricks and Open AI with methods to interact with clusters/jobs and ChatGPT. Args: cursor: controls execution of commands on cluster or SQL Warehouse """ response = None def __init__(self, cursor: object) -> None: self.cursor = cursor def execute(self, query: str) -> None: """ Prepares and runs a database query. Args: query: sql query to execute on ChatGPT and the Databricks Cluster or SQL Warehouse """ try: self.response = self.cursor.run(query) except Exception as e: logging.exception("error while executing the query") raise e def fetch_all(self) -> list: """ Gets all rows of a query. Returns: list: list of results """ try: return self.response except Exception as e: logging.exception("error while fetching the rows of a query") raise e def close(self) -> None: """Closes the cursor.""" pass
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/connectors/llm/chatopenai_databricks_connector.py
0.854657
0.198394
chatopenai_databricks_connector.py
pypi
from pyspark.sql import SparkSession, DataFrame from ..._sdk_utils.compare_versions import _package_version_meets_minimum from ..connection_interface import ConnectionInterface from ..cursor_interface import CursorInterface from ...pipelines._pipeline_utils.spark import SparkClient from ...pipelines._pipeline_utils.models import Libraries import pandas as pd import logging class SparkConnection(ConnectionInterface): """ The Spark Connector enables running Spark Sql queries via a Spark Session. Additionally, this connector supports Spark Connect which was introduced in Pyspark 3.4.0 and allows Spark Sessions to connect to remote Spark Clusters. This enables Spark SQL to be constructed locally, but executed remotely. To find out more about Spark Connect and the connection string to be provided to the `spark_remote` parameter of the Spark Session, please see [here.](https://spark.apache.org/docs/latest/spark-connect-overview.html#specify-spark-connect-when-creating-spark-session) Args: spark (optional, SparkSession): Provide an existing spark session if one exists. A new Spark Session will be created if not populated spark_configuration (optional, dict): Spark configuration to be provided to the spark session spark_libraries (optional, Libraries): Additional JARs to be included in the Spark Session. spark_remote (optional, str): Remote connection string of Spark Server and any authentication details. The Spark Connect introduced in Pyspark 3.4.0 allows Spark Sessions to connect to remote Spark Clusters. This enables Spark SQL to be constructed locally, but executed remotely. """ def __init__( self, spark: SparkSession = None, spark_configuration: dict = None, spark_libraries: Libraries = None, spark_remote: str = None, ) -> None: if spark_remote != None: _package_version_meets_minimum("pyspark", "3.4.0") if spark is None: self.connection = SparkClient( spark_configuration={} if spark_configuration is None else spark_configuration, spark_libraries=Libraries() if spark_libraries is None else spark_libraries, spark_remote=spark_remote, ).spark_session else: self.connection = spark def close(self) -> None: """Not relevant for spark sessions""" pass def cursor(self) -> object: """ Intiates the cursor and returns it. Returns: DatabricksSQLCursor: Object to represent a databricks workspace with methods to interact with clusters/jobs. """ try: return SparkCursor(self.connection) except Exception as e: logging.exception("error with cursor object") raise e class SparkCursor(CursorInterface): """ Object to represent a spark session with methods to interact with clusters/jobs using the remote connection information. Args: cursor: controls execution of commands on Spark Cluster """ execute_result: DataFrame def __init__(self, cursor: object) -> None: self.cursor = cursor def execute(self, query: str) -> None: """ Prepares and runs a database query. Args: query: sql query to execute on the cluster or SQL Warehouse """ try: self.execute_result = self.cursor.sql(query) except Exception as e: logging.exception("error while executing the query") raise e def fetch_all(self) -> DataFrame: """ Gets all rows of a query. Returns: DataFrame: Spark DataFrame of results """ try: df = self.execute_result return df except Exception as e: logging.exception("error while fetching the rows of a query") raise e def close(self) -> None: """Not relevant for dataframes""" pass
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/connectors/grpc/spark_connector.py
0.878601
0.577078
spark_connector.py
pypi
from pydantic import BaseModel from datetime import datetime class AtmosphericG215minForecastV1(BaseModel): """ The Hourly Forecast API is sourced from the The Weather Company Forecast system. """ clas: str """Data identifier. Example: fod_long_range_hourly""" clds: int """Cloud Cover: Hourly average cloud cover expressed as a percentage. Range: 0 to 100""" day_ind: str """This data field indicates whether it is daytime or nighttime based on the Local Apparent Time of the location. Range: D, N, X X=Missing""" dewpt: int """Dew Point. The temperature which air must be cooled at constant pressure to reach saturation. The Dew Point is also an indirect measure of the humidity of the air. The Dew Point will never exceed the Temperature. When the Dew Point and Temperature are equal, clouds or fog will typically form. The closer the values of Temperature and Dew Point, the higher the relative humidity""" dow: str """Day of week. Range: Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday""" expire_time_gmt: float """Expiration time in UNIX seconds.""" fcst_valid: float """Time forecast is valid in UNIX seconds.""" fcst_valid_local: datetime """Time forecast is valid in local apparent time [ISO]""" feels_like: int """Hourly feels like temperature. An apparent temperature. It represents what the air temperature “feels like” on exposed human skin due to the combined effect of the wind chill or heat index""" golf_category: str """The Golf Index Category expressed as a worded phrase the weather conditions for playing golf""" golf_index: int """The Golf Index expresses on a scale of 0 to 10 the weather conditions for playing golf. Not applicable at night. Enum: 0-2=Very Poor, 3=Poor, 4-5=Fair, 6-7=Good, 8-9=Very Good, 10=Excellent""" gust: int """The maximum expected wind gust speed""" hi: int """Hourly maximum heat index""" icon_code: int """This number is the key to the weather icon lookup. The data field shows the icon number that is matched to represent the observed weather conditions. Please refer to the Forecast Icon Code, Weather Phrases and Images document""" icon_extd: int """Code representing explicit full set sensible weather""" mslp: float """Hourly mean sea level pressure""" num: int """This data field is the sequential number that identifies each of the forecasted days in the API. They start on day 1, which is the forecast for the current day. Then the forecast for tomorrow uses number 2, then number 3 for the day after tomorrow, and so forth. Range: 1-15""" phrase_12char: str """Hourly sensible weather phrase""" phrase_22char: str """Hourly sensible weather phrase""" phrase_32char: str """Hourly sensible weather phrase""" pop: str """Hourly maximum probability of precipitation. Range 0-100""" precip_type: str """The short text describing the expected type accumulation associated with the Probability of Precipitation (POP) display for the hou. Enum: rain,snow, precip""" qpf: float """The forecasted measurable precipitation (liquid or liquid equivalent) during the hour""" rh: int """The relative humidity of the aire. Range 0-100""" severity: int """A number denoting how impactful is the forecasted weather for this hour. Range: 0 = no threat, 6 = dangerous / life threatening""" snow_qpf: float """The forecasted hourly snow accumulation during the hour""" subphrase_pt1: str """Part 1 of 3-part hourly sensible weather phrase""" subphrase_pt2: str """Part 2 of 3-part hourly sensible weather phrase""" subphrase_pt3: str """Part 3 of 3-part hourly sensible weather phrase""" temp: int """The temperature of the air, measured by a thermometer 1.5 meters (4.5 feet) above the ground that is shaded from the other element""" uv_desc: str """The UV Index Description which complements the UV Index value by providing an associated level of risk of skin damage due to exposur. Eenum: -2 is Not Available-1 is No Report 0 to 2 is Low 3 to 5 is Moderate 6 to 7 is High 8 to 10 is Very High 11 to 16 is Extreme""" uv_index: int """Hourly maximum UV index""" uv_index_raw: float """The non-truncated UV Index which is the intensity of the solar radiation based on a number of factors""" uv_warning: int """TWC-created UV warning based on UV index of 11 or greater""" vis: float """Prevailing hourly visibility""" wc: int """Hourly minimum wind chill""" wdir: int """Hourly average wind direction in magnetic notation. Range 0 - 359""" wdir_cardinal: str """Hourly average wind direction in cardinal notation. Enum: N , NNE , NE, ENE, E, ESE, SE, SSE, S, SSW, SW, WSW, W, WNW, NW, NNW, CALM, VAR""" wspd: int """The maximum forecasted hourly wind speed""" wxman: str """Code combining Hourly sensible weather and temperature conditions""" class WeatherForecastV1(BaseModel): """ This model is used to represent the standardised weather forecast for a given location. """ Tagname: str """Unique identifier for the data point""" Longitude: float """Longitude of the location""" Latitude: float """Latitude of the location""" EventDate: datetime """Event date of forecast""" EventTime: datetime """Event time of forecast""" Source: str """Forecast API source i.e. ECMWF""" Status: str """Forecast API status i.e. Success""" Value: float Value: str """Value of forecast measurement""" EnqueuedTime: datetime """Time Forecast API was called""" Latest: bool """Latest forecast Identifier"""
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/data_models/weather/weather_models.py
0.875628
0.795181
weather_models.py
pypi
from ..weather_models import AtmosphericG215minForecastV1 from datetime import datetime def create_AtmosphericG215minForecastV1_VO( clas: str, clds: int, day_ind: str, dewpt: int, dow: str, expire_time_gmt: float, fcst_valid: float, fcst_valid_local: datetime, feels_like: int, golf_category: str, golf_index: int, gust: int, hi: int, icon_code: int, icon_extd: int, mslp: float, num: int, phrase_12char: str, phrase_22char: str, phrase_32char: str, pop: str, precip_type: str, qpf: float, rh: int, severity: int, snow_qpf: float, subphrase_pt1: str, subphrase_pt2: str, subphrase_pt3: str, temp: int, uv_desc: str, uv_index: int, uv_index_raw: float, uv_warning: int, vis: float, wc: int, wdir: int, wdir_cardinal: str, wspd: int, wxman: str, ): try: return AtmosphericG215minForecastV1( clas=clas, clds=clds, day_ind=day_ind, dewpt=dewpt, dow=dow, expire_time_gmt=expire_time_gmt, fcst_valid=fcst_valid, fcst_valid_local=fcst_valid_local, feels_like=feels_like, golf_category=golf_category, golf_index=golf_index, gust=gust, hi=hi, icon_code=icon_code, icon_extd=icon_extd, mslp=mslp, num=num, phrase_12char=phrase_12char, phrase_22char=phrase_22char, phrase_32char=phrase_32char, pop=pop, precip_type=precip_type, qpf=qpf, rh=rh, severity=severity, snow_qpf=snow_qpf, subphrase_pt1=subphrase_pt1, subphrase_pt2=subphrase_pt2, subphrase_pt3=subphrase_pt3, temp=temp, uv_desc=uv_desc, uv_index=uv_index, uv_index_raw=uv_index_raw, uv_warning=uv_warning, vis=vis, wc=wc, wdir=wdir, wdir_cardinal=wdir_cardinal, wspd=wspd, wxman=wxman, ) except Exception as ex: error_msg_str: str = ( "Could not create AtmosphericG215minForecastV1 Value Object: {}".format(ex) ) raise SystemError(error_msg_str)
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/data_models/weather/utils/CreateWeatherObject.py
0.514644
0.188287
CreateWeatherObject.py
pypi
from enum import IntFlag, auto from pydantic import BaseModel from enum import Enum class ModelType(IntFlag): Default = auto() class UomUsage(Enum): """ Units of measurement """ W = 0 """Watts""" WH = 1 """Watts/Hour""" KW = 2 """Kilowatts""" KWH = 3 """Kilowatts/Hour""" MW = 4 """Megawatts""" MWH = 5 """Megawatts/Hour""" class SeriesType(IntFlag): """ Definition of the type of timeseries for the measurements (e.g. realtime or interval based) and the type of the computation if the series is aggregated/derived """ RealTime = auto() """ The data has no specific time pattern """ Minute1 = auto() """ 1 minute interval """ Minutes5 = auto() """ 5 minutes interval """ Minutes10 = auto() """ 10 minutes interval """ Minutes15 = auto() """ 15 minutes interval """ Minutes30 = auto() """ 30 minutes interval """ Hour = auto() """ 60 minutes/1 hour interval """ Hours2 = auto() """ 2 hours interval """ Hours3 = auto() """ 3 hours interval """ Hours4 = auto() """ 4 hours interval """ Hours5 = auto() """ 5 hours interval """ Hours6 = auto() """ 6 hours interval """ Hours8 = auto() """ 8 hours interval """ Hours12 = auto() """ 12 hours interval """ Hours24 = auto() """ 1 Minute interval """ Week = auto() """ 1 Minute interval """ Month = auto() """ 1 Minute interval """ Year = auto() """ 1 Minute interval """ Sum = auto() """ Measurement is the result of computing the sum of a set of measurements """ MeanFilter = auto() """ Measurement is the result of computing the mean of a set of measurements """ MedianFilter = auto() """ Measurement is the result of computing the median of a set of measurements """ MaxFilter = auto() """ Measurement is the result of computing the max of a set of measurements """ MinFilter = auto() # Testing Test = auto() class Usage(BaseModel): """ Usage. a usage measurement from an AMI meter """ Uid: str """ A unique identifier associated to the source of the measurement (e.g. sensor, meter, etc.) """ SeriesId: str """ Identifier for a particular timeseries set """ Timestamp: int """ Creation time. Always UTC. Seconds since EPOCH """ IntervalTimestamp: int """ The timestamp for the interval. Always UTC. Seconds since EPOCH """ Value: float """ The actual value of the measurement """ class ValueType(IntFlag): """ Defines the type of value """ Counter = auto() """ The value is cumulative increasing monotinically """ Gauge = auto() """ The value can arbitrarily go up and down """ Histogram = auto() """ The value is a histogram """ Summary = auto() """ """ Usage = auto() """ The value is from source that consumes energy """ Generation = auto() """ The value is from a source that generates energy """ Prediction = auto() """ The value is generated from a predictive model """ ShortTerm = auto() """ The value is related to a short term (e.g. short term forecast) """ LongTerm = auto() """ The value is related to a long term (e.g. long term forecast) """ Actuals = auto() """ The value is from a actual measurement """ Backcast = auto() """ The value is related to a forecast that happens in the past (e.g. for calculating how good was the forecast compared to actuals) """ Forecast = auto() """ The value is related to a forecast in the future """ ShortTermBackcast = ShortTerm | Backcast LongTermBackcast = LongTerm | Backcast ShortTermForecast = ShortTerm | Forecast LongTermForecast = LongTerm | Forecast class MetaData(BaseModel): """ Metadata for a sensor, meter, etc. and its association to sets of time series """ Uid: str """ A unique identifier associated to the source of the measurement (e.g. sensor, meter, etc.) """ SeriesId: str """ Identifier for a particular timeseries set """ SeriesParentId: str """ Hierarchy (Sequence) of this TS associated to the same group of TS """ Name: str """ Name of the sensor """ Uom: UomUsage """ Unit of measure for this sensor """ Description: str """ Short description """ TimestampStart: int """ Timestamp of the creation of the record and start of the timeseries. Always UTC. Seconds since EPOCH """ TimestampEnd: int """ Timestamp of end of the timeseries. Always UTC. Seconds since EPOCH """ Timezone: str """ Time zone of where the sensor or where the series has started """ Version: str """ For versioning """ SeriesType: SeriesType """ Type of the timeseries """ ModelType: ModelType """ Type of model use to produce this data (e.g. a precitive model) """ ValueType: ValueType """ Type of value of the timeseries """ Properties: dict """ Any other additional properties (Key/Value) """
/rtdip_sdk-0.7.6-py3-none-any.whl/rtdip_sdk/data_models/meters/ami_meters.py
0.859826
0.696275
ami_meters.py
pypi
# Interface Read the Docs and GitHub Actions [![Docs](https://github.com/dfm/rtds-action/workflows/Docs/badge.svg)](https://github.com/dfm/rtds-action/actions?query=workflow%3ADocs) [![Documentation Status](https://readthedocs.org/projects/rtds-action/badge/?version=latest)](https://rtds-action.readthedocs.io/en/latest/?badge=latest) I like to use [Read the Docs](https://readthedocs.org/) to build (and version!) my docs, but I _also_ like to use [Jupyter notebooks](https://jupyter.org/) to write tutorials. Unfortunately, even though [notebooks can be executed on Read the Docs](https://docs.readthedocs.io/en/stable/guides/jupyter.html), some of them take a very long time to run or need special Docker environments to execute, which goes beyond what the platform supports. In these cases I needed to check executed notebooks (often with large images) into my git repository, causing huge amounts of bloat. Futhermore, the executed notebooks would often get out of sync with the development of the code. **No more!!** _This library avoids these issues by executing code on [GitHub Actions](https://github.com/features/actions), uploading build artifacts (in this case, executed Jupter notebooks), and then (only then!) triggering a Read the Docs build that can download the executed notebooks._ There is still some work required to set up this workflow, but this library has three pieces that make it a bit easier: 1. A GitHub action that can be used to trigger a build for the current branch on Read the Docs. 2. A Sphinx extension that interfaces with the GitHub API to download the artifact produced for the target commit hash. 3. Some documentation that shows you how to set all this up! ## Usage The following gives the detailed steps of the process of setting up a project using this workflow. But you can also see a fully functional example in this repository. The documentation source is the `docs` directory and the `.github/workflows` directory includes a workflow that is executed to build the docs using this package. The rendered page is available at [rtds-action.readthedocs.io](https://rtds-action.readthedocs.io). ### 1. Set up Read the Docs 1. First, you'll need to import your project as usual. If you've already done that, don't worry: this will also work with existing Read the Docs projects. 2. Next, go to the admin page for your project on Read the Docs, click on `Integrations` (the URL is something like `https://readthedocs.org/dashboard/YOUR_PROJECT_NAME/integrations/`). 3. Click `Add integration` and select `Generic API incoming webhook`. 4. Take note of the webhook `URL` and `token` on this page for use later. You should also edit your webhook settings on GitHub by going to `https://github.com/USERNAME/REPONAME/settings/hooks` and clicking "Edit" next to the Read the Docs hook. On that page, you should un-check the `Pushes` option. ### 2. Set up GitHub Actions workflow In this example, we'll assume that we have tutorials written as Jupyter notebooks, saved as Python scripts using [Jupytext](https://jupytext.readthedocs.io/en/latest/introduction.html) (because that's probably what you should be doing anyways!) in a directory called `docs/tutorials`. First, you'll need to add the Read the Docs webhook URL and token that you recorded above as "secrets" for your GitHub project by going to the URL `https://github.com/USERNAME/REPONAME/settings/secrets`. I'll call them `RTDS_WEBHOOK_URL` (include the `https`!) and `RTDS_WEBHOOK_TOKEN` respectively. For this use case, we can create the workflow `.github/workflows/docs.yml` as follows: ```yaml name: Docs on: [push, release] jobs: notebooks: name: "Build the notebooks for the docs" runs-on: ubuntu-latest steps: - uses: actions/checkout@v2 - name: Set up Python uses: actions/setup-python@v2 with: python-version: 3.8 - name: Install dependencies run: | python -m pip install -U pip python -m pip install -r .github/workflows/requirements.txt - name: Execute the notebooks run: | jupytext --to ipynb --execute docs/tutorials/*.py - uses: actions/upload-artifact@v2 with: name: notebooks-for-${{ github.sha }} path: docs/tutorials - name: Trigger RTDs build uses: dfm/rtds-action@v1 with: webhook_url: ${{ secrets.RTDS_WEBHOOK_URL }} webhook_token: ${{ secrets.RTDS_WEBHOOK_TOKEN }} commit_ref: ${{ github.ref }} ``` Here, we're also assuming that we've added a `pip` requirements file at `.github/workflows/requirements.txt` with the dependencies required to execute the notebooks. Also note that in the `upload-artifact` step we give our artifact that depends on the hash of the current commit. This is crucial! We also need to take note of the `notebooks-for-` prefix because we'll use that later. It's worth emphasizing here that the only "special" steps in this workflow are the last two. You can do whatever you want to generate your artifact in the previous steps (for example, you could use `conda` instead of `pip`) because this workflow is not picky about how you get there! ### 3. Set up Sphinx Finally, you can edit the `conf.py` for your Sphinx documentation to add support for fetching the artifact produced by your action. Here is a minimal example: ```python import os extensions = [... "rtds_action"] # The name of your GitHub repository rtds_action_github_repo = "USERNAME/REPONAME" # The path where the artifact should be extracted # Note: this is relative to the conf.py file! rtds_action_path = "tutorials" # The "prefix" used in the `upload-artifact` step of the action rtds_action_artifact_prefix = "notebooks-for-" # A GitHub personal access token is required, more info below rtds_action_github_token = os.environ["GITHUB_TOKEN"] # Whether or not to raise an error on Read the Docs if the # artifact containing the notebooks can't be downloaded (optional) rtds_action_error_if_missing = False ``` Where we have added the custom extension and set the required configuration parameters. You'll need to provide Read the Docs with a GitHub personal access token (it only needs the `public_repo` scope if your repo is public). You can generate a new token by going to [your GitHub settings page](https://github.com/settings/tokens). Then, save it as an environment variable (called `GITHUB_TOKEN` in this case) on Read the Docs. ## Development For now, just a note: if you edit `src/js/index.js`, you _must_ run `npm run package` to generate the compiled action source.
/rtds_action-1.1.0.tar.gz/rtds_action-1.1.0/README.md
0.500488
0.90599
README.md
pypi
import logging from pathlib import Path from termcolor import colored from beetools import beearchiver, beeutils _PROJ_DESC = __doc__.split('\n')[0] _PROJ_PATH = Path(__file__) def project_desc(): return _PROJ_DESC class api: '''Class short description one-liner goes here. Class multi-liner detail description goes here. ''' def __init__(self, p_project_name, p_dir, p_parent_log_name='', p_verbose=True): '''Initialize the class Parameters ---------- p_parent_log_name : str Name of the parent. In combination witt he class name it will form the logger name. p_logger : bool, default = False Activate the logger p_verbose: bool, default = True Write messages to the console. Returns ------- See Also -------- Notes ----- Examples -------- ''' self.success = True if p_parent_log_name: self.log_name = '{}.{}'.format(p_parent_log_name, api) self.logger = logging.getLogger(self._log_name) self.project_name = p_project_name self.dir = p_dir self.verbose = p_verbose def method_1(self, p_msg): '''Method short description one-liner goes here. Class multi-liner detail description goes here. Parameters ---------- Returns ------- See Also -------- Notes ----- Examples -------- ''' print(colored('Testing {}...'.format(self.project_name), 'yellow')) print(colored('Message: {}'.format(p_msg), 'yellow')) return True def do_examples(p_cls=True): '''A collection of implementation examples for api. A collection of implementation examples for api. The examples illustrate in a practical manner how to use the methods. Each example show a different concept or implementation. Parameters ---------- p_cls : bool, default = True Clear the screen or not at startup of Archiver Returns ------- success : boolean Execution status of the examples. See Also -------- Notes ----- Examples -------- ''' success = do_example1(p_cls) success = do_example2(False) and success return success def do_example1(p_cls=True): '''A working example of the implementation of api. Example1 illustrate the following concepts: 1. Bla, bla, bla 2. Bla, bla, bla Parameters ---------- p_cls : bool, default = True Clear the screen or not at startup of Archiver Returns ------- success : boolean Execution status of the example See Also -------- Notes ----- Examples -------- ''' success = True archiver = beearchiver.Archiver(_PROJ_DESC, _PROJ_PATH) archiver.print_header(p_cls=p_cls) t_dir = beeutils.get_tmp_dir() t_api = api("api", t_dir) t_api.method_1('This is do_example1') beeutils.rm_tree(t_dir) archiver.print_footer() return success def do_example2(p_cls=True): '''Another working example of the implementation of api. Example2 illustrate the following concepts: 1. Bla, bla, bla 2. Bla, bla, bla Parameters ---------- p_cls : bool, default = True Clear the screen or not at startup of Archiver Returns ------- success : boolean Execution status of the method See Also -------- Notes ----- Examples -------- ''' success = True archiver = beearchiver.Archiver(_PROJ_DESC, _PROJ_PATH) archiver.print_header(p_cls=p_cls) t_dir = beeutils.get_tmp_dir() t_api = api("api", t_dir) t_api.method_1('This is do_example2') beeutils.rm_tree(t_dir) archiver.print_footer() return success if __name__ == '__main__': do_examples()
/rte_api-0.0.0-py3-none-any.whl/api/api.py
0.696268
0.346928
api.py
pypi
import logging from pathlib import Path from termcolor import colored from beetools import beearchiver, beeutils _PROJ_DESC = __doc__.split('\n')[0] _PROJ_PATH = Path(__file__) def project_desc(): return _PROJ_DESC class rteInstallServer: '''Class short description one-liner goes here. Class multi-liner detail description goes here. ''' def __init__(self, p_project_name, p_dir, p_parent_log_name='', p_verbose=True): '''Initialize the class Parameters ---------- p_parent_log_name : str Name of the parent. In combination witt he class name it will form the logger name. p_logger : bool, default = False Activate the logger p_verbose: bool, default = True Write messages to the console. Returns ------- See Also -------- Notes ----- Examples -------- ''' self.success = True if p_parent_log_name: self.log_name = '{}.{}'.format(p_parent_log_name, rteInstallServer) self.logger = logging.getLogger(self._log_name) self.project_name = p_project_name self.dir = p_dir self.verbose = p_verbose def method_1(self, p_msg): '''Method short description one-liner goes here. Class multi-liner detail description goes here. Parameters ---------- Returns ------- See Also -------- Notes ----- Examples -------- ''' print(colored('Testing {}...'.format(self.project_name), 'yellow')) print(colored('Message: {}'.format(p_msg), 'yellow')) return True def do_examples(p_cls=True): '''A collection of implementation examples for rteInstallServer. A collection of implementation examples for rteInstallServer. The examples illustrate in a practical manner how to use the methods. Each example show a different concept or implementation. Parameters ---------- p_cls : bool, default = True Clear the screen or not at startup of Archiver Returns ------- success : boolean Execution status of the examples. See Also -------- Notes ----- Examples -------- ''' success = do_example1(p_cls) success = do_example2(False) and success return success def do_example1(p_cls=True): '''A working example of the implementation of rteInstallServer. Example1 illustrate the following concepts: 1. Bla, bla, bla 2. Bla, bla, bla Parameters ---------- p_cls : bool, default = True Clear the screen or not at startup of Archiver Returns ------- success : boolean Execution status of the example See Also -------- Notes ----- Examples -------- ''' success = True archiver = beearchiver.Archiver(_PROJ_DESC, _PROJ_PATH) archiver.print_header(p_cls=p_cls) t_dir = beeutils.get_tmp_dir() t_rteinstallserver = rteInstallServer("rteInstallServer", t_dir) t_rteinstallserver.method_1('This is do_example1') beeutils.rm_tree(t_dir) archiver.print_footer() return success def do_example2(p_cls=True): '''Another working example of the implementation of rteInstallServer. Example2 illustrate the following concepts: 1. Bla, bla, bla 2. Bla, bla, bla Parameters ---------- p_cls : bool, default = True Clear the screen or not at startup of Archiver Returns ------- success : boolean Execution status of the method See Also -------- Notes ----- Examples -------- ''' success = True archiver = beearchiver.Archiver(_PROJ_DESC, _PROJ_PATH) archiver.print_header(p_cls=p_cls) t_dir = beeutils.get_tmp_dir() t_rteinstallserver = rteInstallServer("rteInstallServer", t_dir) t_rteinstallserver.method_1('This is do_example2') beeutils.rm_tree(t_dir) archiver.print_footer() return success if __name__ == '__main__': do_examples()
/rteInstallServer-0.0.1-py3-none-any.whl/rteinstallserver/rteinstallserver.py
0.674372
0.338186
rteinstallserver.py
pypi
from typing import List, Dict import numpy as np import pandas as pd from sklearn.metrics import roc_auc_score, pairwise_distances as sklearn_pairwise_distances __version__ = '0.1.0' class InputErrorRTG(RuntimeError): """Generic error caused by incorrect input""" pass def to_codes(array): """replace categories with unique integer codes""" return np.unique(array, return_inverse=True)[1] def to_codes_str_series(array): """replace categories with unique string codes""" return pd.Series(to_codes(array)).map(str) def compute_pairwise_distances( n_samples, embeddings=None, pairwise_distances=None, metric='euclidean', ): if (pairwise_distances is None) + (embeddings is None) != 1: raise RuntimeError('Embeddings or pairwise_distances should be provided (only one, not both)') if pairwise_distances is None: embeddings = np.asarray(embeddings) assert np.ndim(embeddings) == 2, 'embeddings should be 2-dimensional metric [n_samples, n_features]' assert len(embeddings) == n_samples, 'number of embeddings should be the same as number of rows in metadata' if metric == 'hellinger': if np.min(embeddings) < 0: raise InputErrorRTG('Hellinger distance requires non-negative elements in embedding') return sklearn_pairwise_distances(np.sqrt(embeddings), metric='euclidean') return sklearn_pairwise_distances(embeddings, metric=metric) else: if metric != 'euclidean': raise RuntimeWarning(f'Passed metric ({metric}) not used as distances are passed') assert pairwise_distances.shape == (n_samples, n_samples), 'wrong shape of distances passed' return pairwise_distances def compute_RTG_score( metadata: pd.DataFrame, include_confounders: List[str], exclude_confounders: List[str], *, embeddings=None, metric='euclidean', pairwise_distances=None, minimal_n_samples=30, ) -> float: """ Compute (single) RTG score. :param metadata: DataFrame with confounds (may contain additional variables) of shape [n_samples, n_variables]. Examples of variables: clone, donor, batch, plate, position on a plate :param include_confounders: list of confounders to estimate their joint contribution Example: pass ['batch', 'donor'] :param exclude_confounders: list of confounders to exclude, Example: ['clone'] Explanation: if ['batch', 'donor'] are included while ['clone'] is excluded, we measure how much samples with the same batch AND donor, but different clones are similar to each other. :param embeddings: numerical description of each sample. DataFrame or np.array of shape [n_sample, n_features]. Order of embeddings should match order of rows in metadata :param metric: distance used to evaluate similarity. Possible choices are: - 'euclidean', relevant e.g. for delta Ct gene expression or for different embeddings - 'hellinger', relevant e.g. for cell type fractions in scRNA-seq - 'cosine', frequently more appropriate for DL embeddings - other distances from scipy and sklearn are supported :param pairwise_distances: alternatively distances between all the pairs can be readily provided. np.array of shape [n_samples, n_samples] (in this case, don't pass embeddings and metric) :param minimal_n_samples: number of samples that can provide ranking (otherwise function returns NaN). E.g. if both include and exclude are the same confounders, or if latter includes former, there are no elements that can provide ranking. :return: score or NaN NaN (Not-a-Number) if too few samples can provide ranking. """ n_samples = len(metadata) pairwise_distances = compute_pairwise_distances(n_samples, embeddings, pairwise_distances, metric=metric) for column in [*include_confounders, *exclude_confounders]: if metadata[column].isna().sum() > 0: raise RuntimeError(f'Metadata has Nones in "{column}"') if len(include_confounders) == 0 or len(exclude_confounders) == 0: raise InputErrorRTG(f'include_confounders and exclude_confounders should be non-empty') inc_cat = '' for category in include_confounders: inc_cat = inc_cat + '_' + to_codes_str_series(metadata[category]) exc_indices_collection = [ to_codes(metadata[category]) for category in exclude_confounders ] # recoding for simpler comparison inc_indices = to_codes(inc_cat) aucs = [] for sample in range(n_samples): mask = True for exc_indices in exc_indices_collection: mask = mask & (exc_indices != exc_indices[sample]) target = inc_indices == inc_indices[sample] distances = pairwise_distances[sample] if len(set(target[mask])) == 2: aucs.append(roc_auc_score(target[mask], -distances[mask])) if len(aucs) < minimal_n_samples: return np.nan else: return float(np.mean(aucs)) def compute_RTG_contribution_matrix( metadata: pd.DataFrame, include_confounders_dict: Dict[str, List[str]], exclude_confounders_dict: Dict[str, List[str]], *, embeddings=None, metric='euclidean', pairwise_distances=None, minimal_n_samples=30, ): """ Compute RTG scores for multiple combinations of included and excluded confounding variables. :param metadata: DataFrame with confounds (may contain additional variables) of shape [n_samples, n_variables]. Examples of variables: clone, donor, batch, plate, position on a plate :param include_confounders_dict: dictionary with confounders and their combinations, Example: { 'only donor': ['donor'], 'donor&batch': ['donor', 'batch'] } :param exclude_confounders_dict: dictionary with confounders and their combinations Example: { 'donor': ['donor'], 'clone': ['clone'] } Score is computed for all pairs of included and excluded confounding variables. :param embeddings: numerical description of each sample. DataFrame or np.array of shape [n_sample, n_features]. Order of embeddings should match order of rows in metadata :param metric: distance used to evaluate similarity. Possible choices are: - 'euclidean', relevant e.g. for delta Ct gene expression or for different embeddings - 'hellinger', relevant e.g. for cell type fractions in scRNA-seq - 'cosine', frequently more appropriate for DL embeddings - other distances from scipy and sklearn are supported :param pairwise_distances: alternatively distances between all the pairs can be readily provided. np.array of shape [n_samples, n_samples] (in this case, don't pass embeddings and metric) :param minimal_n_samples: number of samples that can provide ranking (otherwise function returns NaN). E.g. if both include and exclude are the same confounders, or if latter includes former, there are no elements that can provide ranking. :return: resulting scores are organized in pd.DataFrame (NaN elements mean not enough statistics) """ n_samples = len(metadata) pairwise_distances = compute_pairwise_distances(n_samples, embeddings, pairwise_distances, metric=metric) results = {} for col_name, included in include_confounders_dict.items(): for row_name, excluded in exclude_confounders_dict.items(): results.setdefault(col_name, {})[row_name] = compute_RTG_score( metadata=metadata, include_confounders=included, exclude_confounders=excluded, pairwise_distances=pairwise_distances, minimal_n_samples=minimal_n_samples, ) return pd.DataFrame(results)
/rtg_score-0.1.0.tar.gz/rtg_score-0.1.0/rtg_score/__init__.py
0.940681
0.582432
__init__.py
pypi
# Real-Time Gym Easily implement your custom [Gymnasium](https://gymnasium.farama.org) environments for real-time applications. Real-Time Gym (```rtgym```) is typically needed when trying to use Reinforcement Learning algorithms in robotics or real-time video games. Its purpose is to clock your Gymnasium environments in a way that is transparent to the user. ## Quick links - [Installation](#installation) - [Real-time Gym presentation](#real-time-gym-framework) - [Performance](#performance) - [Tutorial: Implement custom tasks](#tutorial) - [Create a RealTimeGymInterface](#create-a-realtimegyminterface) - [Create a configuration dictionary](#create-a-configuration-dictionary) - [Instantiate your real-time environment](#instantiate-the-custom-real-time-environment) - [Bonus 1: Implement a render method](#bonus-1-implement-a-render-method) - [Bonus 2: Benchmark your environment](#bonus-2-benchmark-your-environment) - [Bonus 3: Pro tips](#bonus-3-pro-tips) - [Full python script](https://github.com/yannbouteiller/rtgym/blob/main/rtgym/tuto/tuto.py) - [Contribute](#authors) - [Sponsors](#sponsors) ## Installation `rtgym` can be installed from PyPI: ````bash pip install rtgym ```` ## Real-time Gym framework Real-Time Gym (```rtgym```) is a simple and efficient real-time threaded framework built on top of [Gymnasium](https://gymnasium.farama.org). It is coded in python. ```rtgym``` enables real-time implementations of Delayed Markov Decision Processes in real-world applications. Its purpose is to elastically constrain the times at which actions are sent and observations are retrieved, in a way that is transparent to the user. It provides a minimal abstract python interface that the user simply customizes for their own application. Custom interfaces must inherit the [RealTimeGymInterface](https://github.com/yannbouteiller/rtgym/blob/969799b596e91808543f781b513901426b88d138/rtgym/envs/real_time_env.py#L12) class and implement all its abstract methods. Non-abstract methods can be overidden if desired. Then, copy the ```rtgym``` default [configuration dictionary](https://github.com/yannbouteiller/rtgym/blob/97cfa9834d6ba7d95e18048c12ffc3aaf43456a7/rtgym/envs/real_time_env.py#L134) in your code and replace the ``` 'interface' ``` entry with the class of your custom interface. You probably also want to modify other entries in this dictionary depending on your application. Once the custom interface is implemented, ```rtgym``` uses it to instantiate a fully-fledged Gymnasium environment that automatically deals with time constraints. This environment can be used by simply following the usual Gymnasium pattern, therefore compatible with many implemented Reinforcement Learning (RL) algorithms: ```python from rtgym.envs.real_time_env import DEFAULT_CONFIG_DICT my_config = DEFAULT_CONFIG_DICT my_config['interface'] = MyCustomInterface env = gymnasium.make("real-time-gym-v1", my_config, disable_env_checker=True) obs, info = env.reset() while True: # when this loop is broken, the current time-step will timeout act = model(obs) # inference takes a random amount of time obs, rew, terminated, truncated, info = env.step(act) # transparently adapts to this duration ``` You may want to have a look at the [timestamps updating](https://github.com/yannbouteiller/rtgym/blob/969799b596e91808543f781b513901426b88d138/rtgym/envs/real_time_env.py#L188) method of ```rtgym```, which is reponsible for elastically clocking time-steps. This method defines the core mechanism of Real-Time Gym environments: ![Real-Time Gym Framework](https://raw.githubusercontent.com/yannbouteiller/rtgym/main/figures/rt_gym_env.png "Real-Time Gym Framework") Time-steps are being elastically constrained to their nominal duration. When this elastic constraint cannot be satisfied, the previous time-step times out and the new time-step starts from the current timestamp. This happens either because the environment has been 'paused', or because the system is ill-designed: - The inference duration of the model, i.e. the elapsed duration between two calls of the step() function, may be too long for the time-step duration that the user is trying to use. - The procedure that retrieves observations may take too much time or may be called too late (the latter can be tweaked in the configuration dictionary). Remember that, if observation capture is too long, it must not be part of the `get_obs_rew_terminated_info()` method of your interface. Instead, this method must simply retrieve the latest available observation from another process, and the action buffer must be long enough to handle the observation capture duration. This is described in the Appendix of [Reinforcement Learning with Random Delays](https://arxiv.org/abs/2010.02966). A call to `reset()` starts the elastic `rtgym` clock. Once the clock is started, it can be stopped via a call to the `wait()` API to artificially "pause" the environment. `reset()` captures an initial observation and sends the default action, since Real-Time MDPs require an action to be applied at all time. The following figure illustrates how `rtgym` behaves around `reset` transitions when: - the configuration dictionary has `"wait_on_done": True` - `wait` is customized to execute some arbitrary behavior - The default action is `a0` ![Reset Transitions](https://github.com/yannbouteiller/rtgym/releases/download/v0.9/reset.png "Reset Transitions") #### Note for advanced users: _In this configuration, the `"reset_act_buf"` entry of the configuration dictionary must be left to `True`, and arbitrary actions can be executed in the `wait` and `reset` implementation of your `RealTimeGymInterface`._ _When the `"reset_act_buf"` entry is set to `False`, `"wait_on_done"` should be `False` and `reset` should not execute any action, otherwise the initial action buffer would not be valid anymore._ _Setting `"reset_act_buf"` to `False` is useful when you do not want to break the flow of real-time operations around `reset` transitions. In such situations, `a1` would be executed until the end of `reset`, slightly overflowing on the next time step (where `a0` is applied), i.e., giving your `RealTimeGymInterface` a little less time to compute `a4` and capture `o4`._ _In case you want `a2` to be executed instead of `a0`, you can replace the default action right before calling reset:_ ```python obs, info = env.reset() # here, the default action will be applied while True: act = model(obs) obs, rew, terminated, truncated, info = env.step(act) done = terminated or truncated if done: env.set_default_action(act) obs, info = env.reset() # here, act will be applied ``` _(NB: you can achieve this behavior without resorting to `set_default_action`. Just set `"last_act_on_reset": True` in your configuration dictionary.)_ _In this code snippet, the action buffer contained in `obs` is the same after `step` and after the second `reset`. Otherwise, the last action in the buffer would be `act` after `step` and would be replaced by the default action in `reset`, as the last `act` would in fact never be applied (see `a2` in the previous figure, imagining that `a1` keeps being applied instead of arbitrary actions being applied by `wait` and `reset`, which in this case should be much shorter / near-instantaneous)._ _It is worth thinking about this if you wish to replace the action buffer with, e.g., recurrent units of a neural network while artificially splitting a non-episodic problem into finite episodes._ ## Performance The time granularity achievable with `rtgym` depends on your Operating System. Typically, on a high-end machine, Windows should be fine for time steps larger than 20 milliseconds, Linux can deal with time steps one order of magnitude shorter, and you can probably achieve even finer-grained control using a real-time OS. We provide benchmarks for Windows and Linux in the following Figures. ### Windows On Windows, precision is limited by the time granularity of the `sleep` call. For instance, on Windows 11, a `20ms (50Hz)` target in `rtgym` will in fact result in the following distribution of individual time step durations: ![Windows](https://github.com/yannbouteiller/rtgym/releases/download/v0.11/win_join.png "Windows") The duration of a `20ms` time step in Windows is `20ms` on average, but the actual duration of individual time steps constantly oscillates between `15ms` and `31ms`. This is because the time granularity of the `sleep` call in Windows is `16ms` (regardless of the target duration). ### Linux On Linux, `rtgym` can operate at a much higher frequency. For instance, using the same machine as the previous Windows experiment, `rtgym` easily achieves a time step duration of `2ms (500Hz)` on Linux: ![Linux](https://github.com/yannbouteiller/rtgym/releases/download/v0.11/lin_join.png "Linux") _(Note: `capture` refers to the duration elapsed between two subsequent calls to the `get_obs_rew_terminated_info` method or your `RealTimeGymInterface` implementation, and `control` refers to the duration elapsed between two subsequent calls to its `send_control` method.)_ ## Tutorial This tutorial will teach you how to implement a Real-Time Gym environment for your custom application, using ```rtgym```. The complete script for this tutorial is provided [here](https://github.com/yannbouteiller/rtgym/blob/main/rtgym/tuto/tuto.py). ### Custom Real-Time Gym environment #### Introduction Implementing a Gymnasium environment on a real system is not straightforward when time cannot be paused between time-steps for observation capture, inference, transfers and actuation. Real-Time Gym provides a python interface that enables doing this with minimal effort. In this tutorial, we will see how to use this interface in order to create a Gymnasium environment for your robot, video game, or other real-time application. From the user's point of view, this environment will work as Gymnasium environments usually do, and therefore will be compatible with many readily implemented Reinforcement Learning (RL) algorithms. #### Install Real-Time Gym First, we need to install the Real-Time Gym package. Run the following in a terminal or an Anaconda prompt: ```bash pip install rtgym ``` This will install Real-Time Gym and all its dependencies in your active python environment. #### Create a RealTimeGymInterface Now that Real-Time Gym is installed, open a new python script. You can import the RealTimeGymInterface class as follows: ```python from rtgym import RealTimeGymInterface ``` The [RealTimeGymInterface](https://github.com/yannbouteiller/rtgym/blob/969799b596e91808543f781b513901426b88d138/rtgym/envs/real_time_env.py#L12) is all you need to implement in order to create your custom Real-Time Gym environment. This class has 6 abstract methods that you need to implement: ```get_observation_space```, ```get_action_space```, ```get_default_action```, ```reset```, ```get_obs_rew_terminated_info``` and ```send_control```. It also has a ```wait``` and a ```render``` methods that you may want to override. We will implement them all to understand their respective roles. --- ##### Dummy drone You will of course want to implement this on a real system and can directly adapt this tutorial to your application if you feel comfortable, but for the needs of the tutorial we will instead be using a dummy remote controlled drone with random communication delays. Import the provided dummy drone as follows: ```python from rtgym import DummyRCDrone ``` A dummy RC drone can now be created: ```python rc_drone = DummyRCDrone() ``` The dummy drone evolves in a simple 2D world. You can remotely control it with commands such as: ```python rc_drone.send_control(vel_x=0.1, vel_y=0.2) ``` Note that whatever happens next will be highly stochastic, due to random delays. Indeed, the velocities ```vel_x``` and ```vel_y``` sent to the drone when calling ```send_control``` will not be applied instantaneously. Instead, they will take a duration ranging between 20 and 50ms to reach the drone. Moreover, this dummy drone is clever and will only apply an action if it is not already applying an action that has been produced more recently. But wait, things get even more complicated... This drone sends an updated observation of its position every 10ms, and this observation also travels for a random duration ranging between 20 and 50ms. And since the observer is clever too, they discard observations that have been produced before the most recent observation available. In other words, when you retrieve the last available observation with ```python pos_x, pos_y = rc_drone.get_observation() ``` , ```pos_x``` and ```pos_y``` will be observations of something that happened 20 to 60ms is the past, only influenced by actions that were sent earlier than 40 to 110 ms in the past. Give it a try: ```python from rtgym import DummyRCDrone import time rc_drone = DummyRCDrone() for i in range(10): if i < 5: # first 5 iterations vel_x = 0.1 vel_y = 0.5 else: # last 5 iterations vel_x = 0.0 vel_y = 0.0 rc_drone.send_control(vel_x, vel_y) pos_x, pos_y = rc_drone.get_observation() print(f"iteration {i}, sent: vel_x:{vel_x}, vel_y:{vel_y} - received: x:{pos_x:.3f}, y:{pos_y:.3f}") time.sleep(0.05) ``` In this code snippet, we control the dummy drone at about 20Hz. For the 5 first iteration, we send a constant velocity control, and for the 5 last iterations, we ask the dummy drone to stop moving. The output looks something like this: ```console iteration 0, sent vel: vel_x:0.1, vel_y:0.5 - received pos: x:0.000, y:0.000 iteration 1, sent vel: vel_x:0.1, vel_y:0.5 - received pos: x:0.000, y:0.000 iteration 2, sent vel: vel_x:0.1, vel_y:0.5 - received pos: x:0.003, y:0.015 iteration 3, sent vel: vel_x:0.1, vel_y:0.5 - received pos: x:0.008, y:0.040 iteration 4, sent vel: vel_x:0.1, vel_y:0.5 - received pos: x:0.012, y:0.060 iteration 5, sent vel: vel_x:0.0, vel_y:0.0 - received pos: x:0.016, y:0.080 iteration 6, sent vel: vel_x:0.0, vel_y:0.0 - received pos: x:0.020, y:0.100 iteration 7, sent vel: vel_x:0.0, vel_y:0.0 - received pos: x:0.023, y:0.115 iteration 8, sent vel: vel_x:0.0, vel_y:0.0 - received pos: x:0.023, y:0.115 iteration 9, sent vel: vel_x:0.0, vel_y:0.0 - received pos: x:0.023, y:0.115 Process finished with exit code 0 ``` The commands we sent had an influence in the delayed observations only a number of time-steps after they got sent. Now, you could do what some RL practionners naively do in such situations: use a time-step of 1 second and call it a day. But of course, this would be far from optimal, and not even really Markovian. Instead, we want to control our dummy drone as fast as possible. Let us say we want to control it at 20 Hz, i.e. with a time-step of 50ms. To keep it simple, let us also say that 50ms is an upper bound of our inference time. What we need to do in order to make the observation space Markovian in this setting is to augment the available observation with the 4 last sent actions. Indeed, taking into account one time-step of 50ms for inference and the transmission delays, the maximum total delay is 160ms, which is more than 3 and less than 4 time-steps (see the [Reinforcement Learning with Random Delays](https://arxiv.org/abs/2010.02966) paper for more explanations). Note that this will be taken care of automatically, so you don't need to worry about it when implementing your RealTimeGymInterface in the next section. --- ##### RealTimeGymInterface Create a custom class that inherits the RealTimeGymInterface class: ```python from rtgym import RealTimeGymInterface, DummyRCDrone import gymnasium.spaces as spaces import gymnasium import numpy as np class MyRealTimeInterface(RealTimeGymInterface): def __init__(self): pass def get_observation_space(self): pass def get_action_space(self): pass def get_default_action(self): pass def send_control(self, control): pass def reset(self, seed=None, options=None): pass def get_obs_rew_terminated_info(self): pass def wait(self): pass ``` Note that, in addition to the mandatory abstract methods of the ```RealTimeGymInterface``` class, we override the ```wait``` method and implement a ```__init__``` method. The latter allows us to instantiate our remote controlled drone as an attribute of the interface, as well as other attributes: ```python def __init__(self): self.rc_drone = DummyRCDrone() self.target = np.array([0.0, 0.0], dtype=np.float32) ``` --- The ```get_action_space``` method returns a ```gymnasium.spaces.Box``` object. This object defines the shape and bounds of the ```control``` argument that will be passed to the ```send_control``` method. In our case, we have two actions: ```vel_x``` and ```vel_y```. Let us say we want them to be constrained between ```-2.0m/s``` and ```2.0m/s```. Our ```get_action_space``` method then looks like this: ```python def get_action_space(self): return spaces.Box(low=-2.0, high=2.0, shape=(2,)) ``` --- ```RealTimeGymInterface``` also requires a default action. This is to initialize the action buffer, and optionally to reinitialize it when the environment is reset. In addition, ```send_control``` is called with the default action as parameter when the Gymnasium environment is reset. This default action is returned as a numpy array by the ```get_default_action``` method. Of course, the default action must be within the action space that we defined in ```get_action_space```. With our dummy RC drone, it makes sense that this action be ```vel_x = 0.0``` and ```vel_y = 0.0```, which is the 'stay still' control: ```python def get_default_action(self): return np.array([0.0, 0.0], dtype='float32') ``` --- We can now implement the method that will send the actions computed by the inference procedure to the actual device. This is done in ```send_control```. This method takes a numpy array as input, named ```control```, which is within the action space that we defined in ```get_action_space```. In our case, the ```DummyRCDrone``` class readily simulates the control-sending procedure in its own ```send_control``` method. However, just so we have something to do here, ```DummyRCDrone.send_control``` doesn't have the same signature as ```RealTimeGymInterface.send_control```: ```python def send_control(self, control): vel_x = control[0] vel_y = control[1] self.rc_drone.send_control(vel_x, vel_y) ``` --- Now, let us take some time to talk about the ```wait``` method. As you know if you are familiar with Reinforcement Learning, the underlying mathematical framework of most RL algorithms, called Markov Decision Process, is by nature turn-based. This means that RL algorithms consider the world as a fixed state, from which an action is taken that leads to a new fixed state, and so on. However, real applications are of course often far from this assumption, which is why we developed the ```rtgym``` framework. Usually, RL theorists use fake Gymnasium environments that are paused between each call to the step() function. By contrast, ```rtgym``` environments are never really paused, because you simply cannot pause the real world. Instead, when calling step() in a ```rtgym``` environment, an internal procedure will ensure that the control passed as argument is sent at the beginning of the next real time-step. The step() function will block until this point, when a new observation is retrieved. Then, step() will return the observation so that inference can be performed in parallel to the next time-step, and so on. This is convenient because the user doesn't have to worry about these kinds of complicated dynamics and simply alternates between inference and calls to step() as they would usually do with any Gymnasium environment. However, this needs to be done repeatedly, otherwise step() will time-out. Yet, you may still want to artificially 'pause' the environment occasionally, e.g. because you collected a batch of samples, or because you want to pause the whole experiment. This is the role of the ```wait``` method. By default, ```wait``` is a no-op, but you may want to override this behavior by redefining the method: ```python def wait(self): self.send_control(np.array([0.0, 0.0], dtype='float32')) ``` You may want your drone to land when this function is called for example. Note that you generally do not want to customize ```wait``` when ```"reset_act_buf"``` is ```True``` in the ```rtgym``` configuration dictionary. In this tutorial this will be the case, thus we keep the default behavior: ```python def wait(self): pass ``` --- The ```get_observation_space``` method outputs a ```gymnasium.spaces.Tuple``` object. This object describes the structure of the observations returned from the ```reset``` and ```get_obs_rew_terminated_info``` methods of our interface. In our case, the observation will contain ```pos_x``` and ```pos_y```, which are both constrained between ```-1.0``` and ```1.0``` in our simple 2D world. It will also contain target coordinates ```tar_x``` and ```tar_y```, constrained between ```-0.5``` and ```0.5```. Note that, on top of these observations, the ```rtgym``` framework will automatically append a buffer of the 4 last actions, but the observation space you define here must not take this buffer into account. In a nutshell, our ```get_observation_space``` method must look like this: ```python def get_observation_space(self): pos_x_space = spaces.Box(low=-1.0, high=1.0, shape=(1,)) pos_y_space = spaces.Box(low=-1.0, high=1.0, shape=(1,)) tar_x_space = spaces.Box(low=-0.5, high=0.5, shape=(1,)) tar_y_space = spaces.Box(low=-0.5, high=0.5, shape=(1,)) return spaces.Tuple((pos_x_space, pos_y_space, tar_x_space, tar_y_space)) ``` --- We can now implement the RL mechanics of our environment (i.e. the reward function and whether we consider the task ```terminated``` in the episodic setting), and a procedure to retrieve observations from our dummy drone. This is done in the ```get_obs_rew_terminated_info``` method. For this tutorial, we will implement a simple task. At the beginning of each episode, the drone will be given a random target. Its task will be to reach the target as fast as possible. The reward for this task will be the negative distance to the target. The episode will end whenever an observation is received in which the drone is less than ```0.01m``` from the target. Additionally, we will end the episode if the task is not completed after 100 time-steps. The task is easy, but not as straightforward as it looks. Indeed, the presence of random communication delays and the fact that the drone keeps moving in real time makes it difficult to precisely reach the target. --- ```get_obs_rew_terminated_info``` outputs 4 values: - ```obs```: a list of all the components of the last retrieved observation, except for the action buffer - ```rew```: a float that is our reward - ```terminated```: a boolean that tells whether the episode is finished (always False in the non-episodic setting) - ```info```: a dictionary that contains any additional information you may want to provide For our simple task, the implementation is fairly straightforward. ```obs``` contains the last available coordinates and the target, ```rew``` is the negative distance to the target, ```terminated``` is True when the target has been reached, and since we don't need more information ```info``` is empty: ```python def get_obs_rew_terminated_info(self): pos_x, pos_y = self.rc_drone.get_observation() tar_x = self.target[0] tar_y = self.target[1] obs = [np.array([pos_x], dtype='float32'), np.array([pos_y], dtype='float32'), np.array([tar_x], dtype='float32'), np.array([tar_y], dtype='float32')] rew = -np.linalg.norm(np.array([pos_x, pos_y], dtype=np.float32) - self.target) terminated = rew > -0.01 info = {} return obs, rew, terminated, info ``` We did not implement the 100 time-steps limit here because this will be done later in the configuration dictionary. _Note: `obs` is a list although the observation space defined in `get_observation_space` must be a `gymnasium.spaces.Tuple`. This is expected in `rtgym`. However, the inner components of this list must agree with the inner observation spaces of the tuple. Thus, our inner components are numpy arrays here, because we have defined inner observation spaces as corresponding `gymnasium.spaces.Box` in `get_observation_space`._ --- Finally, the last mandatory method that we need to implement is ```reset```, which will be called at the beginning of each new episode. This method is responsible for setting up a new episode in the episodic setting. In our case, it will randomly place a new target. ```reset``` returns an initial observation ```obs``` that will be used to compute the first action, and an ```info``` dictionary where we may store everything else. A good practice is to implement a mechanism that runs only once and instantiates everything that is heavy in ```reset``` instead of ```__init__```. This is because RL implementations will often create a dummy environment just to retrieve the action and observation spaces, and you don't want a drone flying just for that. Replace the ```__init__``` method by: ```python def __init__(self): self.rc_drone = None self.target = np.array([0.0, 0.0], dtype=np.float32) self.initialized = False ``` And implement the ```reset``` method as follows: ```python def reset(self, seed=None, options=None): if not self.initialized: self.rc_drone = DummyRCDrone() self.initialized = True pos_x, pos_y = self.rc_drone.get_observation() self.target[0] = np.random.uniform(-0.5, 0.5) self.target[1] = np.random.uniform(-0.5, 0.5) return [np.array([pos_x], dtype='float32'), np.array([pos_y], dtype='float32'), np.array([self.target[0]], dtype='float32'), np.array([self.target[1]], dtype='float32')], {} ``` We have now fully implemented our custom ```RealTimeGymInterface``` and can use it to instantiate a Gymnasium environment for our real-time application. To do this, we simply pass our custom interface as a parameter to ```gymnasium.make``` in a configuration dictionary, as illustrated in the next section. --- #### Create a configuration dictionary Now that our custom interface is implemented, we can easily instantiate a fully fledged Gymnasium environment for our dummy RC drone. This is done by loading the ```rtgym``` ```DEFAULT_CONFIG_DICT``` and replacing the value stored under the ```"interface"``` key by our custom interface: ```python from rtgym import DEFAULT_CONFIG_DICT my_config = DEFAULT_CONFIG_DICT my_config["interface"] = MyRealTimeInterface ``` We also want to change other entries in our configuration dictionary: ```python my_config["time_step_duration"] = 0.05 my_config["start_obs_capture"] = 0.05 my_config["time_step_timeout_factor"] = 1.0 my_config["ep_max_length"] = 100 my_config["act_buf_len"] = 4 my_config["reset_act_buf"] = False ``` The ```"time_step_duration"``` entry defines the duration of the time-step. The ```rtgym``` environment will ensure that the control frequency sticks to this clock. The ```"start_obs_capture"``` entry is usually the same as the ```"time_step_duration"``` entry. It defines the time at which an observation starts being retrieved, which should usually happen instantly at the end of the time-step. However, in some situations, you will want to actually capture an observation in ```get_obs_rew_terminated_info``` and the capture duration will not be negligible. In such situations, if observation capture is less than 1 time-step, you can do this and use ```"start_obs_capture"``` in order to tell the environment to call ```get_obs_rew_terminated_info``` before the end of the time-step. If observation capture is more than 1 time-step, it needs to be performed in a parallel process and the last available observation should be used at each time-step. In any case, keep in mind that when observation capture is not instantaneous, you should add its maximum duration to the maximum delay, and increase the size of the action buffer accordingly. See the [Reinforcement Learning with Random Delays](https://arxiv.org/abs/2010.02966) appendix for more details. In our situation, observation capture is instantaneous. Only its transmission is random. The ```"time_step_timeout_factor"``` entry defines the maximum elasticity of the framework before a time-step times-out. When it is ```1.0```, a time-step can be stretched up to twice its length, and the framework will compensate by shrinking the durations of the next time-steps. When the elasticity cannot be maintained, the framework breaks it for one time-step and warns the user. This might happen after calls to reset() depending on how you implement the ```reset``` method of the interface. However, if this happens repeatedly in other situations, it probably means that your inference time is too long for the time-step you are trying to use. The ```"ep_max_length"``` entry is the maximum length of an episode. When this number of time-steps have been performed since the last reset(), ```truncated``` will be ```True```. In the non-episodic setting, set this to ```np.inf```. The ```"act_buf_len"``` entry is the size of the action buffer. In our case, we need it to contain the 4 last actions. Finally, the ```"reset_act_buf"``` entry tells whether the action buffer should be reset with default actions when reset() is called. In our case, we don't want this to happen, because calls to reset() only change the position of the target, and not the dynamics of the drone. Therefore we set this to ```False```. --- #### Instantiate the custom real-time environment We are all done! Instantiating our Gymnasium environment is now as simple as: ```python env = gymnasium.make("real-time-gym-v1", config=my_config) ``` We can use it as any usual Gymnasium environment: ```python def model(obs): return np.clip(np.concatenate((obs[2] - obs[0], obs[3] - obs[1])) * 20.0, -2.0, 2.0) terminated, truncated = False, False obs, info = env.reset() while not (terminated or truncated): act = model(obs) obs, rew, terminated, truncated, info = env.step(act) print(f"rew:{rew}") ``` --- #### Bonus 1: Implement a render() method Optionally, you can also implement a ```render``` method in your ```RealTimeGymInterface```. This allows you to call ```env.render()``` to display a visualization of your environment. Implement the following in your custom interface (you need opencv-python installed and to import cv2 in your script) : ```python def render(self): image = np.ones((400, 400, 3), dtype=np.uint8) * 255 pos_x, pos_y = self.rc_drone.get_observation() image = cv2.circle(img=image, center=(int(pos_x * 200) + 200, int(pos_y * 200) + 200), radius=10, color=(255, 0, 0), thickness=1) image = cv2.circle(img=image, center=(int(self.target[0] * 200) + 200, int(self.target[1] * 200) + 200), radius=5, color=(0, 0, 255), thickness=-1) cv2.imshow("PipeLine", image) if cv2.waitKey(1) & 0xFF == ord('q'): return ``` You can now visualize the environment on your screen: ```python def model(obs): return np.array([obs[2] - obs[0], obs[3] - obs[1]], dtype=np.float32) * 20.0 terminated, truncated = False, False obs, info = env.reset() while not (terminated or truncated): env.render() act = model(obs) obs, rew, terminated, truncated, info = env.step(act) print(f"rew:{rew}") cv2.waitKey(0) ``` --- #### Bonus 2: Benchmark your environment `rtgym` provides a way of timing the important operations happening in your real-time environment. In order to use the benchmark option, set the corresponding entry to `True` in the configuration dictionary: ```python my_config['benchmark'] = True ``` The provided benchmarks will contain means and average deviations of critical operations, such as your inference duration and observation retrieval duration. These metrics are estimated through Polyak averaging. The Polyak factor sets the dampening of these metrics. A value close to `0.0` will be precise but slow to converge, whereas a value close to `1.0` will be fast and noisy. This factor can be customized: ```python my_config['benchmark_polyak'] = 0.2 ``` The benchmarks can then be retrieved at any time from the environment once it is instantiated. They are provided as a dictionary of tuples. In each tuple, the first value is the average, and the second value is the average deviation: ```python import pprint # pretty print for visualizing the dictionary nicely print("Environment benchmarks:") pprint.pprint(env.benchmarks()) ``` The output looks like this: ```console Environment benchmarks: {'inference_duration': (0.014090990135653982, 0.0012176857248554194), 'join_duration': (0.03710293826222041, 0.006481136920225911), 'retrieve_obs_duration': (8.012583396852672e-05, 0.0001397626015969312), 'send_control_duration': (0.000634083523134701, 0.0005238185602401273), 'step_duration': (0.037439853824566036, 0.006698605131647715), 'time_step_duration': (0.051359845765767326, 0.006117140690528808)} ``` Here, our inference duration is `0.014` seconds, with an average deviation of `0.0012` seconds. Importantly, note that retrieving observations and sending controls is almost instantaneous because the drone's communication delays do not influence these operations. The time-step duration is `0.05` seconds as requested in the configuration dictionary. Most of this duration is spent joining the `rtgym` thread, i.e. waiting for the previous time-step to end. Therefore, we could increase the control frequency here. However, note that doing this would imply using a longer action buffer. --- #### Bonus 3: Pro tips ##### a) Elasticity The time-step's maximum elasticity defines the tolerance of your environment in terms of time-wise precision. It is set in the configuration dictionary as the `"time_step_timeout_factor"` entry. This can be any value `> 0.0`. When this is set close to `0.0`, the environment will not tolerate uncertainty in your custom interface. When this is e.g. `0.5`, a time-step will be allowed to overflow for half its nominal duration. This overflow will be compensated in future time-steps. Usually, you don't want to set this value too high, because time-wise variance is probably what you want to avoid when using `rtgym`. However, in some special cases, you may actually want your time-steps to overflow repeatedly. In particular, if your inference duration is very small compared to your observation retrieval duration, you may want to set your observation retrieval time at the end of the time-step (default behavior), so that observation retrieval always overflows for almost a whole time-step. This is because inference will happen directly after the observation is captured, and the computed action will be applied at the beginning of the next time-step. You may want this to be as tight as possible. In such situation, keep in mind that inference must end before the end of this next time-step, since the computed action is to be applied there. Otherwise, your time-steps will timeout. ##### b) Reset In `rtgym`, the default action is sent when `reset()` is called. This is to maintain the real-time flow of time-steps during reset transitions. It may happen that you prefer to repeat the previous action instead, for instance because it is hard in your application to implement a no-op action. To achieve this behavior, you can simply replace the default action of your environment via `set_default_action` with the action that you want being sent, right before calling `reset()`: ```python env.set_default_action(my_new_default_action) obs, info = env.reset() # Note: alternatively, you can set the "last_act_on_reset" entry to True in your configuration. # This would make reset() send the last action instead of the default action. # In rtgym, when terminated or truncated is True, the action passed to step() is not sent. # Setting "last_act_on_reset" to True sends it on the subsequent reset(). # Think thoroughly before setting this to True, as this might not ne suitable. # In Real-Time RL, the last action of an episode has no effect in terms of reward. # Thus, it may be entirely random depending on your training algorithm. ``` --- ## Authors All contributions to this project are welcome. To contribute, please submit a pull request that includes your name in the Contributors list. ### Maintainers - Yann Bouteiller ### Contributors ## Sponsors: Many thanks to our sponsors for their support! ![mist](figures/mistlogo.png) [MISTlab - Polytechnique Montreal](https://mistlab.ca)
/rtgym-0.12.tar.gz/rtgym-0.12/README.md
0.480722
0.979629
README.md
pypi
import re class Version: __errorStr = "'{}' not supported between instances of '{}' and '{}'" __FINAL = "z" def __init__(self, versionString): regex = "^v?(\d+)(?:.(\d+))?(?:.(\d+))?(?:-?(a|b|rc)(\d+)?)?$" m = re.match(regex, str(versionString).strip(), flags=re.IGNORECASE) if m == None: raise ValueError('{arg} doesn\'t match expected version regex "{regex}"'.format(arg=versionString, regex=regex)) self.major =int(m.group(1)) self.minor = int(m.group(2)) if m.group(2) != None else 0 self.patch = int(m.group(3)) if m.group(3) != None else 0 self.candidate = m.group(4) if m.group(4) != None else self.__FINAL # "z" so string comparison works self.candidate_number = int(m.group(5)) if m.group(5) != None else 0 def __repr__(self): return '{cls}("{str}")'.format(cls=self.__class__.__name__, str=self.__str__()) def __str__(self): return "v{major}.{minor}.{patch}{release}".format( major=self.major, minor=self.minor, patch=self.patch, release="" if self.candidate == self.__FINAL else self.candidate + str(self.candidate_number)) def __segments(self): return [self.major, self.minor, self.patch, self.candidate, self.candidate_number] def __lt__(self, other): if isinstance(other, self.__class__): for s1, s2 in zip(self.__segments(), other.__segments()): if s1 < s2: return True if s1 > s2: return False return False if isinstance(other, int) or isinstance(other, float): return self < Version(other) raise TypeError(self.__errorStr.format("<",self.__class__, type(other))) def __le__(self, other): if isinstance(other, self.__class__): for s1, s2 in zip(self.__segments(), other.__segments()): if s1 < s2: return True if s1 > s2: return False return True if isinstance(other, int) or isinstance(other, float): return self <= Version(other) raise TypeError(self.__errorStr.format("<=",self.__class__, type(other))) def __gt__(self, other): if isinstance(other, self.__class__): for s1, s2 in zip(self.__segments(), other.__segments()): if s1 < s2: return False if s1 > s2: return True return False if isinstance(other, int) or isinstance(other, float): return self > Version(other) raise TypeError(self.__errorStr.format(">",self.__class__, type(other))) def __ge__(self, other): if isinstance(other, self.__class__): for s1, s2 in zip(self.__segments(), other.__segments()): if s1 < s2: return False if s1 > s2: return True return True if isinstance(other, int) or isinstance(other, float): return self >= Version(other) raise TypeError(self.__errorStr.format(">=",self.__class__, type(other))) def __eq__(self, other): if isinstance(other, self.__class__): for s1, s2 in zip(self.__segments(), other.__segments()): if s1 < s2: return False if s1 > s2: return False return True if isinstance(other, int) or isinstance(other, float): return self == Version(other) raise TypeError(self.__errorStr.format("==",self.__class__, type(other)))
/rtimbo-version-1.0.0.tar.gz/rtimbo-version-1.0.0/version/version.py
0.498535
0.173954
version.py
pypi
from typing import List import re from rtklookup.collection import Kanji import romkan class SearchResultGroup(object): """ This type holds a single search query and its results. """ def __init__(self, search_string: str): self.search = search_string # type: str self.kanji = [] # type: List[Kanji] self.kana = self.search self.wildcards = ['%', '+', '*', '?'] if not self.has_kana: # checking for self.has_kana to avoid converting hiragana # and such to kana. self.kana = romkan.to_hiragana(self.search) @property def is_empty(self): """ Empty search string? :return: """ return self.search == "" @property def has_kana(self): """ Could we successfully convert to hiragana? :return: """ if re.search("[^\u3040-\u30ff]", self.kana): return False else: return True @property def has_kanji(self): """ Could we find at least one kanji matching the search query? :return: """ return bool(self.kanji) @property def is_unique(self): """ Returns true if there are no more than one kanjis that match the search query. Note: Therefore this function will return True, whenever there are no kanji found at all and even if we could not even convert to hiragana. :return: """ if self.has_kanji: return len(self.kanji) == 1 else: return True @property def is_broken(self): """ Returns true if NONE of the following is is true : * we can find at least one matching kanji * we can convert to kana * the query is empty :return: """ return not self.is_empty and not self.has_kana and not self.has_kanji @property def needs_details(self): """ Should this SearchGroup be annotated in the details section? Returns true if there are several kanji matching the search criteria or if the search contained some kind of wildcard character. :return: """ if not self.is_unique: return True for wc in self.wildcards: if wc in self.search: return True return False @property def type(self): """Type of the item. Note: If conversion to kana was successfull but we also found kanji, "kanji" is returned. :return: "kanji" "kana" or "broken" """ if self.has_kanji: return "kanji" elif self.has_kana: return "kana" elif self.is_broken: return "broken" def __str__(self): return "<{} object for search '{}'>".format(self.__class__.__name__, self.search) def __repr__(self): return self.__str__() class SearchResult(object): """ This class defines a collection of SearchGroups. It represents one query as given by user input which was then dissected into single queries. """ def __init__(self, search_string: str, mode=None): """ :param search_string: The whole search query before dissection. :return:None """ self.search = search_string self.groups = [] # type: List[SearchResultGroup] self.mode = mode # the mode which was used for the search def copyable_result(self) -> str: """If the user is desperate to search for the result online or copy it for some similar person, this returns our best guess for such a string. A bit similar to resultprinter.first_line but with as few formatting as possible. :return: """ ret = "" for group in self.groups: if group.has_kanji: if group.is_unique: ret += group.kanji[0].kanji else: ret += "({})".format(''.join( [kanji.kanji for kanji in group.kanji])) elif group.has_kana: ret += group.kana else: ret += group.search return ret @property def unique_success(self): return self.is_broken and not self.is_broken @property def is_unique(self): """Did we get one unique result for every item the user searched for? :return: """ for item in self.groups: if not item.is_unique: return False return True @property def multiple_searches(self) -> bool: """Did the user search for multiple items? :return: """ return len(self.groups) >= 2 @property def is_broken(self) -> bool: """Could one of the items that the user searched for not be found/converted at all? :return: """ for item in self.groups: if item.is_broken: return True return False @property def is_single_kanji(self) -> bool: """Does the whole query string contains only one kanji and nothing else? :return: """ return len(self.groups) == 1 and all([g.kanji for g in self.groups]) @property def is_empty(self): """Do we have no search items? :return: """ return not self.groups def __contains__(self, item: int): return item in self.groups def __getitem__(self, item: int) -> SearchResultGroup: return self.groups[item]
/rtk_lookup-1.0.0-py3-none-any.whl/rtklookup/searchresults.py
0.89179
0.267983
searchresults.py
pypi
import os from typing import Callable, Optional from hydra.utils import instantiate from omegaconf import DictConfig, OmegaConf from pytorch_lightning import seed_everything from sklearn.base import BaseEstimator from sklearn.pipeline import Pipeline import wandb from rtk_mult_clf import utils from .datamodules.datamodule_sklearn import SklearnRTKDataModule from .metrics import opt_metrics from .predictor.engine import train_model, valid_model def resolve_tuple(*args): return tuple(args) OmegaConf.register_new_resolver("as_tuple", resolve_tuple) log = utils.get_logger(__name__) def train(config: DictConfig) -> Optional[float]: """Contains the training pipeline. Can additionally evaluate model on a testset, using best weights achieved during training. Args: config (DictConfig): Configuration composed by Hydra. Returns: Optional[float]: Metric score for hyperparameter optimization. """ # Set seed for random number generators in pytorch, numpy and python.random if config.get("seed"): seed_everything(config.seed, workers=True) wandb.init(**config.logger.init) log.info(f"Instantiating estimator <{config.model._target_}>") model: BaseEstimator = instantiate(config.model) log.info( f"Instantiating data transformer" f" <{config.data_transformer._target_}>" ) pipeline: Pipeline = instantiate( config.data_transformer, _recursive_=False ) # Init lightning datamodule log.info(f"Instantiating datamodule <{config.datamodule._target_}>") datamodule: SklearnRTKDataModule = instantiate(config.datamodule) train_model(model, pipeline, datamodule) log.info(f"Optimized metric: <{config.get('optimized_metric')}>") optimized_metric: Callable = opt_metrics.get( config.get("optimized_metric"), None ) metric_aggregation: str = config.get("metric_aggregation") if not optimized_metric: raise ValueError("Flag optimized_metric not defined!") if not metric_aggregation: raise ValueError("Flag metric_aggregation not defined!") score: float = valid_model( model, pipeline, datamodule, optimized_metric, metric_aggregation ) folder_path: str = os.path.join( os.environ["PROJECT_PATH_ROOT"], config.checkpoints.path_to_checkpoints ) os.makedirs(folder_path, exist_ok=True) save_path: str = os.path.join( folder_path, "_".join( [ config.logger.init.name, config.get("optimized_metric"), str(round(score, 4)), ".pkl", ] ), ) utils.save_model( {"model": model, "pipeline": pipeline}, save_path=save_path ) utils.log_info_error_analysis( model, pipeline, datamodule, config.logger.init.name, save_path, score ) # Return metric score for hyperparameter optimization return score
/rtk_mult_clf-0.1.0.tar.gz/rtk_mult_clf-0.1.0/rtk_mult_clf/training_pipeline_sklearn.py
0.913165
0.348562
training_pipeline_sklearn.py
pypi
from typing import Any from catboost import CatBoostClassifier class CatBoostWrapper: def __init__( self, **kwargs: Any ): self.model: CatBoostClassifier = CatBoostClassifier( **kwargs, text_processing={ "tokenizers": [{ "tokenizer_id": "Space", "separator_type": "ByDelimiter", "delimiter": " " }], "dictionaries": [{ "dictionary_id": "BiGram", "token_level_type": "Letter", "max_dictionary_size": "150000", "occurrence_lower_bound": "1", "gram_order": "2" }, { "dictionary_id": "Trigram", "max_dictionary_size": "150000", "token_level_type": "Letter", "occurrence_lower_bound": "1", "gram_order": "3" }, { "dictionary_id": "Fourgram", "max_dictionary_size": "150000", "token_level_type": "Letter", "occurrence_lower_bound": "1", "gram_order": "4" }, { "dictionary_id": "Fivegram", "max_dictionary_size": "150000", "token_level_type": "Letter", "occurrence_lower_bound": "1", "gram_order": "5" }, { "dictionary_id": "Sixgram", "max_dictionary_size": "150000", "token_level_type": "Letter", "occurrence_lower_bound": "1", "gram_order": "6" } ], "feature_processing": { "default": [ { "dictionaries_names": ["BiGram", "Trigram", "Fourgram", "Fivegram", "Sixgram"], "feature_calcers": ["BoW"], "tokenizers_names": ["Space"] }, { "dictionaries_names": ["BiGram", "Trigram", "Fourgram", "Fivegram", "Sixgram"], "feature_calcers": ["NaiveBayes"], "tokenizers_names": ["Space"] }, { "dictionaries_names": ["BiGram", "Trigram", "Fourgram", "Fivegram", "Sixgram"], "feature_calcers": ["BM25"], "tokenizers_names": ["Space"] }, ], } } ) def fit(self, X, y): self.model.fit(X, y) return self def predict(self, X): return self.model.predict(X) def predict_proba(self, X): return self.model.predict_proba(X)
/rtk_mult_clf-0.1.0.tar.gz/rtk_mult_clf-0.1.0/rtk_mult_clf/models/sklearn_model.py
0.829181
0.332771
sklearn_model.py
pypi
import os from typing import List, Optional, Tuple, Union import numpy as np import pandas as pd from sklearn.model_selection import train_test_split Dataset = Union[pd.DataFrame, np.ndarray] Target = Union[pd.DataFrame, np.ndarray] class SklearnRTKDataModule: def __init__( self, target_column: str, data_columns: List[str], data_dir: str = "data/raw/", path_to_test: str = "data/raw/", stratify: bool = False, test_size: float = 0.3, shuffle: bool = True, random_state: int = 100500, ): self.target_column: str = target_column self.data_columns: List[str] = data_columns self.data_dir: str = os.path.join( os.environ["PROJECT_PATH_ROOT"], data_dir ) self.path_to_test: str = os.path.join( os.environ["PROJECT_PATH_ROOT"], path_to_test ) self.stratify: bool = stratify self.test_size: float = test_size self.shuffle: bool = shuffle self.random_state: int = random_state self.train_data: Optional[Tuple[Dataset, Target]] = None self.val_data: Optional[Tuple[Dataset, Target]] = None self.test_data: Optional[Dataset] = None def setup(self, stage: Optional[str] = None) -> None: if stage == "fit" or not stage: data_train: pd.DataFrame = pd.read_excel( os.path.join(self.data_dir, "train.xlsx") ) target: Target = data_train[self.target_column] data_train.drop(self.target_column, inplace=True, errors="ignore") trn_idx, val_idx = train_test_split( data_train.index.values, stratify=target if self.stratify else None, shuffle=self.shuffle, test_size=self.test_size, random_state=self.random_state, ) x_train: Dataset = data_train.iloc[trn_idx] y_train: Target = target.iloc[trn_idx] x_val: Dataset = data_train.iloc[val_idx] y_val: Target = target.iloc[val_idx] self.train_data = (x_train[self.data_columns], y_train) self.val_data = (x_val[self.data_columns], y_val) elif stage == "test": self.test_data = pd.read_excel( os.path.join(self.data_dir, "test.xlsx") ) def get_train_data(self) -> Tuple[Dataset, Target]: return self.train_data def get_val_data(self) -> Tuple[Dataset, Target]: return self.val_data def get_test_data(self) -> Dataset: return self.test_data
/rtk_mult_clf-0.1.0.tar.gz/rtk_mult_clf-0.1.0/rtk_mult_clf/datamodules/datamodule_sklearn.py
0.794744
0.419351
datamodule_sklearn.py
pypi
from __future__ import annotations from typing import Any, List, Optional, Union import hydra import numpy as np import pandas as pd from omegaconf import DictConfig from tqdm import tqdm from sentence_transformers import SentenceTransformer from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer from sklearn.pipeline import Pipeline def make_pipeline(steps_config: DictConfig) -> Pipeline: """Creates a pipeline with all the preprocessing steps specified in `steps_config`, ordered in a sequential manner Args: steps_config (DictConfig): the config containing the instructions for creating the feature selectors or transformers Returns: [sklearn.pipeline.Pipeline]: a pipeline with all the preprocessing steps, in a sequential manner """ steps = [] for step_config in steps_config: # retrieve the name and parameter dictionary of the current steps step_name, step_params = list(step_config.items())[0] # instantiate the pipeline step, and append to the list of steps pipeline_step = (step_name, hydra.utils.instantiate(step_params)) steps.append(pipeline_step) return Pipeline(steps) class CountVectorizerDF: def __init__(self, column_name: str, **kwargs: Any): self.count_vectorizer: CountVectorizer = CountVectorizer(**kwargs) self.column_name: str = column_name def fit( self, data: pd.DataFrame, y: Optional[Union[pd.Series, np.ndarray]] = None, ) -> CountVectorizerDF: # y: Optional[Union[pd.Series, np.ndarray]] # необходим по требования Pipeline self.count_vectorizer.fit(data[self.column_name].values) return self def transform(self, data: pd.DataFrame) -> Any: return self.count_vectorizer.transform(data[self.column_name].values) class TfIdfVectorizerDF: def __init__(self, column_name: str, **kwargs: Any): self.tfidf_vectorizer: TfidfVectorizer = TfidfVectorizer(**kwargs) self.column_name: str = column_name def fit( self, data: pd.DataFrame, y: Optional[Union[pd.Series, np.ndarray]] = None, ) -> TfIdfVectorizerDF: # y: Optional[Union[pd.Series, np.ndarray]] # необходим по требования Pipeline self.tfidf_vectorizer.fit(data[self.column_name].values) return self def transform(self, data: pd.DataFrame) -> Any: return self.tfidf_vectorizer.transform(data[self.column_name].values) class TextPreprocessTransformerDF: def __init__(self, column_name: str, **kwargs: Any): self.column_name: str = column_name self.stop_words: List[str] = kwargs.get("stop_words", []) def fit( self, data: pd.DataFrame, y: Optional[Union[pd.Series, np.ndarray]] = None, ) -> TextPreprocessTransformerDF: # y: Optional[Union[pd.Series, np.ndarray]] # необходим по требования Pipeline return self def transform(self, data: pd.DataFrame) -> pd.DataFrame: data[self.column_name] = data[self.column_name].apply( lambda sent: self.process_text(sent, self.stop_words) ) return data @classmethod def process_text(cls, text: str, stop_words: List[str]) -> str: return " ".join( [word for word in text.split() if word not in stop_words] ) class IdentityTransformer: def __init__(self, column_name: str, **kwargs: Any): self.column_name: str = column_name def fit( self, data: pd.DataFrame, y: Optional[Union[pd.Series, np.ndarray]] = None, ) -> IdentityTransformer: # y: Optional[Union[pd.Series, np.ndarray]] # необходим по требования Pipeline return self def transform(self, data: pd.DataFrame) -> pd.DataFrame: return data[[self.column_name]] class LaBSEVectorizer: def __init__(self, column_name: str, **kwargs: Any): self.column_name: str = column_name path_to_model: str = kwargs.get( "path_to_model", 'sentence-transformers/LaBSE' ) self.model: SentenceTransformer = SentenceTransformer( path_to_model ) def fit( self, data: pd.DataFrame, y: Optional[Union[pd.Series, np.ndarray]] = None, ) -> IdentityTransformer: # y: Optional[Union[pd.Series, np.ndarray]] # необходим по требования Pipeline return self def transform(self, data: pd.DataFrame) -> np.ndarray: text_list: List[str] = [text for text in data[self.column_name].values] return self.model.encode(text_list, show_progress_bar=True)
/rtk_mult_clf-0.1.0.tar.gz/rtk_mult_clf-0.1.0/rtk_mult_clf/features/transformers.py
0.922281
0.496094
transformers.py
pypi
import logging import os import pickle import subprocess import warnings from argparse import Namespace from typing import Any, Dict, List, Optional, Sequence import matplotlib.pyplot as plt import pandas as pd import pytorch_lightning as pl import rich.syntax import rich.tree import seaborn as sns from omegaconf import DictConfig, OmegaConf from pytorch_lightning.utilities import rank_zero_only from sklearn import metrics from sklearn.base import BaseEstimator from sklearn.pipeline import Pipeline import wandb from rtk_mult_clf import SklearnRTKDataModule def get_logger(name=__name__) -> logging.Logger: """Initializes multi-GPU-friendly python command line logger.""" logger = logging.getLogger(name) # this ensures all logging levels get marked with the rank zero decorator # otherwise logs would get multiplied for each GPU # process in multi-GPU setup for level in ( "debug", "info", "warning", "error", "exception", "fatal", "critical", ): setattr(logger, level, rank_zero_only(getattr(logger, level))) return logger log = get_logger(__name__) def extras(config: DictConfig) -> None: """Applies optional utilities, controlled by config flags. Utilities: - Ignoring python warnings - Rich config printing """ # disable python warnings if <config.ignore_warnings=True> if config.get("ignore_warnings"): log.info("Disabling python warnings! <config.ignore_warnings=True>") warnings.filterwarnings("ignore") # pretty print config tree using Rich library if <config.print_config=True> if config.get("print_config"): log.info("Printing config tree with Rich! <config.print_config=True>") print_config(config, resolve=True) @rank_zero_only def print_config( config: DictConfig, print_order: Sequence[str] = ( "datamodule", "model", "callbacks", "logger", "trainer", ), resolve: bool = True, ) -> None: """Prints content of DictConfig using Rich library and its tree structure. Args: config (DictConfig): Configuration composed by Hydra. print_order (Sequence[str], optional): Determines in what order config components are printed. resolve (bool, optional): Whether to resolve reference fields of DictConfig. """ style = "dim" tree = rich.tree.Tree("CONFIG", style=style, guide_style=style) queue = [] for field in print_order: queue.append(field) if field in config else log.info( f"Field '{field}' not found in config" ) for field in config: if field not in queue: queue.append(field) for field in queue: branch = tree.add(field, style=style, guide_style=style) config_group = config[field] if isinstance(config_group, DictConfig): branch_content = OmegaConf.to_yaml(config_group, resolve=resolve) else: branch_content = str(config_group) branch.add(rich.syntax.Syntax(branch_content, "yaml")) rich.print(tree) path_to_log: str = os.path.join(config.current_work_dir, "config_tree.log") with open(path_to_log, "w") as file: rich.print(tree, file=file) @rank_zero_only def log_hyper_parameters( config: DictConfig, model: pl.LightningModule, trainer: pl.Trainer, ) -> None: """Controls which config parts are saved by Lightning loggers. Additionaly saves: - number of model parameters """ if not trainer.logger: return hparams: Dict[str, Any] = { "model": config["model"], "model/params/total": sum(p.numel() for p in model.parameters()), "model/params/trainable": sum( p.numel() for p in model.parameters() if p.requires_grad ), "model/params/non_trainable": sum( p.numel() for p in model.parameters() if not p.requires_grad ), "datamodule": config["datamodule"], "trainer": config["trainer"], } # choose which parts of hydra config will be saved to loggers # save number of model parameters if "seed" in config: hparams["seed"] = config["seed"] if "callbacks" in config: hparams["callbacks"] = config["callbacks"] # send hparams to all loggers trainer.logger.log_hyperparams(Namespace(**hparams)) def finish( logger: List[pl.loggers.LightningLoggerBase], ) -> None: """Makes sure everything closed properly.""" # without this sweeps with wandb logger might crash! for lg in logger: if isinstance(lg, pl.loggers.wandb.WandbLogger): import wandb wandb.finish() def log_wandb_confusion_matrix( model: BaseEstimator, pipeline: Pipeline, datamodule: SklearnRTKDataModule, experiment_name: str, ) -> None: x_valid, y_valid = datamodule.get_val_data() predictions: List[float] = model.predict(pipeline.transform(x_valid)) confusion_matrix = metrics.confusion_matrix( y_true=y_valid, y_pred=predictions ) plt.figure(figsize=(14, 8)) sns.set(font_scale=1.4) sns.heatmap(confusion_matrix, annot=True, annot_kws={"size": 8}, fmt="g") wandb.log( {f"confusion_matrix/{experiment_name}": wandb.Image(plt)}, commit=False ) # reset plot plt.clf() def log_wandb_precision_recall( model: BaseEstimator, pipeline: Pipeline, datamodule: SklearnRTKDataModule, experiment_name: str, metric_aggregation: str, ) -> None: x_valid, y_valid = datamodule.get_val_data() predictions: List[float] = model.predict(pipeline.transform(x_valid)) f1: float = metrics.f1_score( y_valid, predictions, average=metric_aggregation ) recall: float = metrics.recall_score( y_valid, predictions, average=metric_aggregation ) precision: float = metrics.precision_score( y_valid, predictions, average=metric_aggregation ) data: List[List[float]] = [[f1], [precision], [recall]] # set figure size plt.figure(figsize=(14, 3)) # set labels size sns.set(font_scale=1.2) # set font size sns.heatmap( data, annot=True, annot_kws={"size": 10}, fmt=".3f", yticklabels=["F1", "Precision", "Recall"], ) log_key: str = f"f1_p_r_heatmap/{experiment_name}/{metric_aggregation}" wandb.log( {log_key: wandb.Image(plt)}, commit=False, ) # reset plot plt.clf() def log_wandb_classification_report( model: BaseEstimator, pipeline: Pipeline, datamodule: SklearnRTKDataModule, experiment_name: str, ) -> None: x_valid, y_valid = datamodule.get_val_data() predictions: List[float] = model.predict(pipeline.transform(x_valid)) clf_report = metrics.classification_report( y_valid, predictions, labels=model.classes_, target_names=model.classes_, output_dict=True, ) # set figure size plt.figure(figsize=(14, 6)) # set labels size sns.set(font_scale=1.2) sns.heatmap( pd.DataFrame(clf_report), annot=True, annot_kws={"size": 10}, fmt=".3f", ) plt.xticks(rotation=15) wandb.log( {f"clf_report/{experiment_name}": wandb.Image(plt)}, commit=False ) # reset plot plt.clf() def log_wandb_error_predictions( model: BaseEstimator, pipeline: Pipeline, datamodule: SklearnRTKDataModule, experiment_name: str, ) -> None: x_valid, y_valid = datamodule.get_val_data() predictions: List[float] = model.predict(pipeline.transform(x_valid)) if hasattr(model, "predict_proba"): prob_predictions: List[List[float]] = model.predict_proba( pipeline.transform(x_valid) ) else: prob_predictions: List[List[float]] = model.decision_function( pipeline.transform(x_valid) ) pd.options.mode.chained_assignment = None logic_index: pd.SupportsIndex = predictions != y_valid error_report: pd.DataFrame = x_valid.loc[logic_index] error_report["target"] = y_valid.loc[logic_index] error_report["predictions"] = predictions[logic_index] columns: List[str] = [ "prob_0", "prob_1", "prob_2", "prob_3", "prob_4", "prob_5", "prob_6", "prob_7", "prob_8", "prob_9", "prob_10", ] error_report[columns] = prob_predictions[logic_index, :] wandb.log({f"error_report/{experiment_name}": error_report}, commit=False) def log_wandb_artifact(experiment_name: str, path_to_save_local: str) -> None: ckpts: wandb.Artifact = wandb.Artifact(experiment_name, type="checkpoints") ckpts.add_file(path_to_save_local) wandb.log_artifact(ckpts) def log_code_to_git(experiment_name: str, score: float) -> None: command = ["git", "add", "*"] not_ignored: bool = subprocess.run(command).returncode == 1 log.warning("Git add processed with error: {}".format(not_ignored)) experiment_msg: str = "Experiment {}, " "target score: {}".format( experiment_name, score ) command = ["git", "commit", "-m", experiment_msg] not_ignored: bool = subprocess.run(command).returncode == 1 log.warning("Git commit processed with error: {}".format(not_ignored)) def log_info_error_analysis( model: BaseEstimator, pipeline: Pipeline, datamodule: SklearnRTKDataModule, experiment_name: str, path_to_save_local: str, score: float, ) -> None: log.info("Info for Error Analysis!") log_wandb_confusion_matrix(model, pipeline, datamodule, experiment_name) log_wandb_precision_recall( model, pipeline, datamodule, experiment_name, "micro" ) log_wandb_precision_recall( model, pipeline, datamodule, experiment_name, "macro" ) log_wandb_classification_report( model, pipeline, datamodule, experiment_name, ) log_wandb_error_predictions( model, pipeline, datamodule, experiment_name, ) log_wandb_artifact(experiment_name, path_to_save_local) log_code_to_git(experiment_name, score) def save_model(model: Dict[str, Any], save_path: str) -> str: with open(save_path, "wb") as output_stream: pickle.dump(model, output_stream) return save_path def load_model(load_path: str) -> Optional[Dict[str, Any]]: with open(load_path, "rb") as input_stream: model = pickle.load(input_stream) return model
/rtk_mult_clf-0.1.0.tar.gz/rtk_mult_clf-0.1.0/rtk_mult_clf/utils/__init__.py
0.828037
0.261075
__init__.py
pypi
# rtl_ultrasound ----- TODO: logo next to title TODO: get badges * build passing (Travis) * build passing (appveyor) * coverage x% (coveralls.io) * docs passing (readthedocs.io) * code quality (codacy) * PyPI (badge.fury.io) * license * gitter.im * DOI (zenodo) ## Latest results See the [Aug 21, 2018 writeup](experiments/20180821/README.md) for more details. _Piezoelectric transducer is swept by servo motor_ ![gif](experiments/20180821/DSCN7889.gif) _Hardware setup with [SimpleRick v1.1](https://github.com/wlmeng11/SimpleRick/), 12.5 MHz low pass filter, and RTL-SDR_ ![setup](experiments/20180821/DSCN7892.JPG) ![summary](experiments/20180821/ControlAnd2Weights.png) ## Introduction ### Why SDR? The analog signal produced by a B-mode ultrasound (ie. 2D imaging) is essentially an Amplitude Modulated (AM) signal. The signal's envelope (ie. amplitude) corresponds to boundary information in the physical media, and the signal's carrier frequency is equal to the resonant frequency of the transducer. Most ultrasound systems take one of two approaches for data acquistion: 1. **Direct sampling of the ultrasound signal:** This method preserves the original signal in the time domain , accomadates any transducer frequency, and offers the best flexibility for post-processing and analysis. Both amplitude and phase information can be extracted the signal, so it is useful for both B-mode and Doppler mode imaging. However, this method requires a high sample rate ADC, as well as high bandwidth and storage for the digital data. 2. **Envelope detection with analog hardware:** Perform Amplitude Demodulation (typically with a diode-based rectifier and low pass filter) to yield an envelope signal, then acquire the envelope signal at a lower sample rate. This method reduces the bandwidth and storage requirements for the digital data, but there are a number of drawbacks: * Unless the low pass filter is adjustable, this method cannot accommodate different transducer frequencies. * The non-linearity of the diode may produce harmonic distortion. * All phase information in the signal is lost, rendering it useless for Doppler mode imaging. It has been [demonstrated by Peyton et al](https://biomedical-engineering-online.biomedcentral.com/articles/10.1186/s12938-018-0512-6) that quadrature sampling can be used to reduce bandwidth requirements in an ultrasound imaging system. It turns out that quadrature modulation is essential to Software Defined Radio (SDR) because any type of amplitude modulation, frequency modulation, phase modulation, or combination of these can be expressed as a special case of quadrature modulation. Therefore, many of the software and hardware techniques used in SDR can be applied to ultrasound imaging. ### Why RTL-SDR? The RTL2832U chip in the RTL-SDR takes a hybrid approach for data acquisition. It employs a high sample rate ADC (28.8 Msps), followed by a software-configurable Digital Down Converter (DDC) that produces IQ data at a lower sample rate (up to 2.56 Msps), thus reducing bandwidth and storage requirements. We can then perform envelope detection *in software*. Plus, the RTL-SDR is really cheap (under $25 on Amazon in the United States)! As such, there is a lot of software support and a large community for the RTL-SDR. With a few software tweaks, it should be possible to substitute the RTL-SDR with a more expensive SDR (eg. AirSpy HF+, LimeSDR) for use cases that require better ADC resolution and SNR. TODO: total system cost ## Installation ### System Dependencies Install the system dependencies: * Python 3 with pip * librtlsdr #### Mac OSX `brew install python3 librtlsdr` *Warning*: If you previously installed software that bundles an out-of-date version of librtlsdr, you may have to remove it, or overwrite the symlinks for librtlsdr: `brew link --overwrite librtlsdr` ### Automatic installation Install rtl_ultrasound: `pip3 install rtl_ultrasound` ### Manual installation Clone the development repo: `git clone git@github.com:wlmeng11/rtl_ultrasound.git` Install the python package dependencies: `pip3 install -r requirements.txt` Run the install script: `pip3 install .` ## Usage ### Hardware Setup This software is designed to be used with the [RTL-SDR v3](https://www.rtl-sdr.com/buy-rtl-sdr-dvb-t-dongles/) in conjunction with the [SimpleRick](https://github.com/wlmeng11/SimpleRick) hardware. TODO: block diagram However, this software can be also used with any ultrasound hardware which provides an analog signal output that can be fed to the input of the RTL-SDR. ### Capturing images To capture approximately 1 second of data from the RTL-SDR and save it to a .npz file, run: `rtl_to_npz -v -n 5120000` Next, generate an image from the .npz file: `B_mode -v --data (data file name).npz` In the future, this process will be streamlined into a single script and possibly a GUI. ## Documentation A fairly comprehensive overview of the entire process from data acquisition to rendered image can be found in the [Aug 13, 2018 experiment](experiments/20180813/rtl_ultrasound_test.ipynb). Essentially, it boils down to these steps: * acquire IQ samples from RTL-SDR * upsample * extract envelope * split signal into scan lines * stack scan lines into image * perform polar to cartesian image transformation TODO: these steps will be parallelized with multithreading in order to provide a fast image update rate with pipelining ## DISCLAIMER This software is NOT meant to be used for any medical or diagnostic purposes. Absolutely no warranty is provided, express or implied. ## License The software contained in this repository makes use of the pyrtlsdr module, and is therefore a derivative work of pyrtlsdr. As such, this work respects the GPL v3 license of pyrtlsdr and is similarly distributed under GPL v3. The full text of the license can be found in the [COPYING](COPYING) file. [pyrtlsdr](https://github.com/roger-/pyrtlsdr) is Copyright (C) 2013 by Roger https://github.com/roger- [rtl_ultrasound](https://github.com/wlmeng11/rtl_ultrasound/) is Copyright (C) 2018 William Meng
/rtl_ultrasound-0.1.0.tar.gz/rtl_ultrasound-0.1.0/README.md
0.525856
0.730434
README.md
pypi
import queue class ManagerApp(): """ IMPORTANT: Don't build apps that inherit directly from ManagerApp. Use ManagerDistanceApp or ManagerPositionApp instead. """ DISTANCE_TYPE = 1 POSITION_TYPE = 2 def __init__(self): self._report_queue = queue.Queue() def _feed_report(self, report): self._report_queue.put(report) def pop_report(self): """ Pop report from the internal report queue """ return self._report_queue.get() def run(self): """ Run the application main loop. Implementation note: ==================== Use self.pop_report() to control the speed of the app loop. Depending on the app type, distance or position reports will result from a pop operation. """ raise NotImplementedError class ManagerDistanceApp(ManagerApp): """ Interface for manager application that is fed with distance data. """ def __init__(self): super().__init__() self.type = ManagerApp.DISTANCE_TYPE def feed_report(self, distance_report): """ Callback to feed distance report to the app. Args: distance_report (DistanceReport): distance_report """ self._feed_report(distance_report) class ManagerPositionApp(ManagerApp): """ Interface for manager application that is fed with position data. """ def __init__(self): super().__init__() self.type = ManagerApp.POSITION_TYPE def feed_report(self, position_report): """ Callback to feed position report to the app Args: position_report (PositionReport): position report """ self._feed_report(position_report) class ManagerInterface: """ Interface for manager data interface. NOTE: The implementation of this interface should not be considered by regular users. """ def read_data(self): """ Get distance data from an abstract interface. This function is assumed to be implemented as a blocking function call, until new data becomes available. Only returning new data will create some room for apps to act on the data and not putting too much pressure on the CPU. Returns: list: distance report list """ raise NotImplementedError return distance_reports def stop(self): """ Properly terminate the interface """ raise NotImplementedError def is_symmetrical(self): """ Return whether or not this interface is fully symmetrical, i.e., whether or not this interface has access to all measurements from all devices. """ raise NotImplementedError class DistanceReport: def __init__(self, device_id, distances_dict): """ Distance report object. Args: device_id (int): id of the measurement tag distances_dict (dict): dictionary with remote device ids as keys, and distances in centimeter as value. """ self.device_id = device_id self.distances_dict = distances_dict def __repr__(self): return "{}: {}".format(self.device_id, self.distances_dict) class PositionReport: def __init__(self, device_id, position): """ Position report format. Args: device_id (int): device or tag id position (Position): position (x, y, z) of the device with given device_id """ self.device_id = device_id self.position = position def __repr__(self): return "{}: ({}, {}, {})".format(self.device_id, self.position.x, self.position.y, self.position.z)
/rtloc_manager-0.1.2-cp38-cp38-win_amd64.whl/rtloc_manager/manager_api.py
0.898853
0.304804
manager_api.py
pypi
import curses import numpy as np from rtloc_manager.manager_api import ManagerDistanceApp class DistanceGrid(ManagerDistanceApp): def __init__(self, manager_config): super().__init__() self.nb_slots = manager_config.nb_slots self.distances_matrix = np.zeros((self.nb_slots, self.nb_slots), dtype="int") self.slot_addresses = np.array([0] * self.nb_slots) def update_distance_matrix(self, device_id, remote_ids, dists_to_remote): global reset_counter # reset when devices appear / disappear if remote_ids != self.slot_addresses[self.slot_addresses != 0].tolist(): self.slot_addresses.fill(0) self.distances_matrix.fill(0) # rebuild adress slot list for slot_idx, remote_id in enumerate(remote_ids): self.slot_addresses[slot_idx] = remote_id addr_slot_idx = np.where(self.slot_addresses == device_id)[0][0] for slot_idx, dist in enumerate(dists_to_remote): self.distances_matrix[addr_slot_idx, slot_idx] = dist def run(self): self.run = True self.ui = curses.initscr() while self.run: data = self.pop_report() device_id = data.device_id remote_ids = list(data.distances_dict.keys()) remote_ids.sort() dists_to_remote = [data.distances_dict[remote_id] for remote_id in remote_ids] self.update_ui(device_id, remote_ids, dists_to_remote) def distances_repr(self): result = (" addr |\t" + ("{:5d} \t" * self.nb_slots)).format(*self.slot_addresses) result += "\n" result += "-" * (8 * (self.nb_slots + 1)) result += "\n" for slot_idx, distances in enumerate(self.distances_matrix): # remove distance info to self tmp_distances = distances.tolist() tmp_distances[slot_idx] = 0 result += ("{:5d} |\t" + ("{:5d} \t" * (self.nb_slots))).format(self.slot_addresses[slot_idx], *tmp_distances) result += "\n" result += "\nPush [CTRL + C] to close app\n" return result def update_ui(self, device_id, remote_ids, dists_to_remote): # update internal data self.update_distance_matrix(device_id, remote_ids, dists_to_remote) self.ui.addstr(0, 0, self.distances_repr()) self.ui.refresh() def close(self): curses.endwin() quit()
/rtloc_manager-0.1.2-cp38-cp38-win_amd64.whl/rtloc_manager/apps/distance_grid_app.py
0.438304
0.235202
distance_grid_app.py
pypi
from rtlsdr import RtlSdr import numpy as np import asyncio from .utils import * class RtlSdr_NFS32002: def __init__(self): self.sdr = RtlSdr() self.sdr.sample_rate = 1e6 self.sdr.center_freq = 868.3e6 def __detectNFS32002Frame(self, samples_array, error_rate): nfs32002_timings = [625, 312.5, 312.5, 207.5, 207.5, 500, 500, 250, 250, 250, 250, 500, 500, 250, 250, 250, 250, 250, 250, 250, 250, 250, 250, 500, 250, 250, 500, 250, 250, 500, 250, 250, 250, 250, 250, 250, 250, 250, 250, 250, 250, 250, 250, 250, 250, 250] data = np.abs(samples_array)**2 mean_data = np.mean(data) normalized = np.where(data > mean_data, 1, 0) bin = normalized[np.where(normalized != 0)[0][0]:] bin = np.append([0], bin) values, timings = find_runs(bin) error_rate_min, error_rate_max = 1-error_rate, 1+error_rate detected_frame = False i = 0 while i < len(values): # Check the presence of a syncword data = True if values[i] == 1: j = i for timing in nfs32002_timings: if (timings[j] >= (timing*error_rate_min) and \ timings[j] <= (timing*error_rate_max) and \ j < len(values)): j += 1 else: data = False break if data: detected_frame = True break i += 1 return(detected_frame) async def __detectionLoop(self, callback, error_rate): samples_array = np.array([]) stream = self.sdr.stream() async for samples in stream: samples_array = np.append(samples_array, samples) if len(samples_array) > 250*200: try: detected = self.__detectNFS32002Frame(samples_array, error_rate) except: detected = False # Flush samples array samples_array = np.array([]) if detected: callback() while not stream.queue.empty(): stream.queue.get_nowait() stream.queue.task_done() def startDetection(self, callback, error_rate = 0.2): loop = asyncio.get_event_loop() loop.run_until_complete(self.__detectionLoop(callback, error_rate))
/rtlsdr_nfs32002-0.2.tar.gz/rtlsdr_nfs32002-0.2/rtlsdr_nfs32002/protocol.py
0.451085
0.394434
protocol.py
pypi
import numpy as np import matplotlib.pyplot as plt from wwb_scanner.scan_objects.spectrum import compare_spectra from wwb_scanner.file_handlers import BaseImporter class BasePlot(object): def __init__(self, **kwargs): self.filename = kwargs.get('filename') if self.filename is not None: self.spectrum = BaseImporter.import_file(self.filename) else: self.spectrum = kwargs.get('spectrum') #self.figure.canvas.mpl_connect('idle_event', self.on_idle) @property def x(self): return getattr(self, '_x', None) @x.setter def x(self, value): self._x = value @property def y(self): return getattr(self, '_y', None) @y.setter def y(self, value): self._y = value @property def figure(self): return getattr(self, '_figure', None) @figure.setter def figure(self, figure): self._figure = figure #self.timer = figure.canvas.new_timer(interval=100) #self.timer.add_callback(self.on_timer) def on_timer(self): print('timer') spectrum = self.spectrum with spectrum.data_update_lock: if spectrum.data_updated.is_set(): print('update plot') self.update_plot() spectrum.data_updated.clear() def build_data(self): dtype = np.dtype(float) if not len(self.spectrum.samples): x = self.x = np.array(0.) y = self.y = np.array(0.) else: x = self.x = np.fromiter(self.spectrum.iter_frequencies(), dtype) y = self.y = np.fromiter((s.magnitude for s in self.spectrum.iter_samples()), dtype) if not hasattr(self, 'plot'): self.spectrum.data_updated.clear() return x, y def update_plot(self): if not hasattr(self, 'plot'): return x, y = self.build_data() self.plot.set_xdata(x) self.plot.set_ydata(y) #self.figure.canvas.draw_event(self.figure.canvas) self.figure.canvas.draw_idle() def build_plot(self): pass class SpectrumPlot(BasePlot): def build_plot(self): self.figure = plt.figure() self.plot = plt.plot(*self.build_data())[0] plt.xlabel('frequency (MHz)') plt.ylabel('dBm') center_frequencies = self.spectrum.center_frequencies if len(center_frequencies): samples = [self.spectrum.samples.get(f) for f in center_frequencies] ymin = self.y.min() plt.vlines(center_frequencies, [ymin] * len(center_frequencies), [s.magnitude-5 if s.magnitude-5 > ymin else s.magnitude for s in samples]) plt.show() class DiffSpectrum(object): def __init__(self, **kwargs): self.spectra = [] self.figure, self.axes = plt.subplots(3, 1, sharex='col') def add_spectrum(self, spectrum=None, **kwargs): name = kwargs.get('name') if name is None: name = str(len(self.spectra)) if spectrum is None: spectrum = BaseImporter.import_file(kwargs.get('filename')) self.spectra.append({'name':name, 'spectrum':spectrum}) def build_plots(self): dtype = np.dtype(float) if len(self.spectra) == 2: diff_spec = compare_spectra(self.spectra[0]['spectrum'], self.spectra[1]['spectrum']) self.spectra.append({'name':'diff', 'spectrum':diff_spec}) for i, spec_data in enumerate(self.spectra): spectrum = spec_data['spectrum'] x = np.fromiter(spectrum.iter_frequencies(), dtype) y = np.fromiter((s.magnitude for s in spectrum.iter_samples()), dtype) axes = self.axes[i] axes.plot(x, y) axes.set_title(spec_data['name']) plt.show()
/rtlsdr-wwb-scanner-0.0.1.tar.gz/rtlsdr-wwb-scanner-0.0.1/wwb_scanner/ui/plots.py
0.606964
0.359139
plots.py
pypi