I don't understand why my Node.js code is not executing concurrently
|node.js|
I've verified I am recording an audio file correctly. I would now like to send it to OpenAI for speech-to-text. Here is my Next.js code in the `pages/api/speechToText.js` file:

```js
import { OpenAI } from "openai";

const openai = new OpenAI(process.env.OPENAI_API_KEY);

export const config = {
  api: {
    bodyParser: false,
  },
};

export default async function handler(req, res) {
  const formidable = require("formidable");
  const form = new formidable.IncomingForm();
  form.parse(req, async (err, fields, files) => {
    try {
      const transcription = await openai.audio.transcriptions.create({
        file: files.audio[0],
        model: "whisper-1",
      });
      res.status(200).json({ transcription: transcription.text });
    } catch (error) {
      console.error("Error transcribing audio:", error);
      res.status(500).json({ error: "Error processing your request" });
    }
  });
}
```

When I attempt to run this code, I don't receive any transcription response, and the request seems to stall without any clear error message from the OpenAI API or my Next.js API route.

**Issues I'm Facing:**

- No transcription result is returned, and the request seems to stall.
- I'm unsure if the audio file is being correctly sent to the OpenAI API due to the asynchronous nature of the `formidable` library's `form.parse` method.
- Debugging attempts haven't yielded clear insights into where the process is failing.

**What I've Tried:**

- Ensuring that `process.env.OPENAI_API_KEY` is correctly set and accessible within my API route.
- Verifying the audio file exists and is correctly referenced by `files.audio[0]` in the `formidable` parse callback.
- Adding `console.log` statements to debug the flow, which confirmed that the parsing occurs but stalls at the transcription request.

**Questions:**

1. Is there something I'm missing in how I'm using the `formidable` library to handle the file upload in a Next.js API route?
2. Could there be an issue with how I'm sending the file to OpenAI's API for transcription?
3. Are there better practices or alternative approaches for handling file uploads in Next.js and sending them to an external API for processing?

Any guidance or suggestions on how to resolve these issues would be greatly appreciated. Thank you!
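For reference, one common way to make the `form.parse` callback awaitable (so the handler's control flow is explicit) is to wrap it in a Promise. This is a self-contained sketch: `fakeForm` is a stand-in for formidable's `IncomingForm`, used here only so the snippet runs without the library.

```javascript
// Wrap a callback-style parse(req, cb) API in a Promise so it can be awaited.
function parseFormAsync(form, req) {
  return new Promise((resolve, reject) => {
    form.parse(req, (err, fields, files) => {
      if (err) reject(err);
      else resolve({ fields, files });
    });
  });
}

// Stand-in "form" that invokes the callback the way formidable would
// after reading the multipart body (hypothetical field/file shapes).
const fakeForm = {
  parse(req, cb) {
    cb(null, { model: "whisper-1" }, { audio: [{ filepath: "/tmp/audio.webm" }] });
  },
};

parseFormAsync(fakeForm, {}).then(({ fields, files }) => {
  console.log(files.audio[0].filepath); // the uploaded file's temp path
});
```

If parsing itself succeeds, the next thing worth checking is what the transcription call receives: the Node OpenAI client takes its constructor options as an object (`new OpenAI({ apiKey: ... })`) and generally expects a readable stream (e.g. `fs.createReadStream(files.audio[0].filepath)`) rather than formidable's file object. Both details depend on the SDK version in use, so treat them as things to verify rather than a definitive fix.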
Is it possible to format a chart axis label with leading 'figure spaces'?
|.net|charts|string-formatting|
OK, so I'll apologise first: I am quite new to Python and this is going to be an easy fix, I'm sure, I just can't see it. I am so used to C#. Anyway, here it is; hopefully someone can point out my flaw.

I have a domain dataclass:

```python
from pydantic import BaseModel
from pydantic.dataclasses import dataclass

@dataclass(frozen=False)
class ValueObject:
    """Base class for value objects"""

@dataclass(frozen=False)
class Rss(ValueObject):
    index: int
    title: str
    titleDetail: str
```

and then I have a service called "RssFeedReader":

```python
"""Services module."""
from typing import Any
from dataclasses import dataclass, field

import feedparser
from pydantic import BaseModel

from DomainModels import Rss

'''force Rss to be a DM & not a module'''
class Rss(BaseModel):
    def __next__(self):
        yield self

@dataclass(kw_only=True)
class RssFeedReader:
    krssResult: list[Rss] = field(default_factory=list)

    # public commands
    async def Read(self, url: str):
        feed = feedparser.parse(url)["entries"]
        kRss = Rss
        for i in range(len(feed)):
            kRss.index = i
            kRss.title = feed[i]["title"]
            kRss.titleDetail = feed[i]["title_detail"]
            self.AddResult(i, kRss)
        return self.krssResult

    def AddResult(self, i, rss: Rss):
        print(i)
        self.krssResult.append(rss)
```

Now, for some reason, even though it iterates over the items and `print(i)` prints each index, when I call this, all 50 items are a repeat of the last index: it has wiped out the first 49 values with duplicates of the last result. I have also tried using `insert(i, rss)`, but that gives me the same issue. Apologies if I am being really daft.

**EDIT** (extra context): I am calling this in a Flask app, using DI.

main.py:

```python
from flask import Flask
from flask_bootstrap import Bootstrap

from .wiring import init_app
from .Container import Container
from . import views

def create_app() -> Flask:
    container = Container()
    app = Flask("rssfeed")
    app.container = container
    init_app()
    app.add_url_rule("/", "index", views.index)
    bootstrap = Bootstrap()
    bootstrap.init_app(app)
    return app
```

views.py:

```python
"""app module"""
from flask import Flask
from dependency_injector.wiring import inject, Provide

from .Container import Container
from .RssFeedService import RssFeedReader

@inject
async def index(rss_feed_service: RssFeedReader = Provide[Container.rss_feed_service]):
    read = await rss_feed_service.Read(url="https://kotaku.com/rss")
    return "<h1>News system</h1>" + str(read)
```

container.py:

```python
from dependency_injector import containers, providers

from . import RssFeedService

class Container(containers.DeclarativeContainer):
    wiring_config = containers.WiringConfiguration(modules=[".views"])
    rss_feed_service = providers.Factory(
        RssFeedService.RssFeedReader
    )
```

As I have said in a comment, if I add the title to a plain array, that travels to the front end fine; it's only when I bind to the `krssResult: list[Rss]` that it gives me the issue.
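For reference, the repeat-last-item symptom can be reproduced in isolation: `kRss = Rss` binds the class object itself, so every loop iteration mutates and appends that same single object. A minimal sketch, independent of feedparser and pydantic, showing the shared-object pattern next to a fresh-instance-per-iteration version:

```python
class Item:
    index = 0
    title = ""

# Reusing one object: every appended reference points to the same thing,
# so the list ends up showing the last assignment repeated.
shared = Item
results_shared = []
for i in range(3):
    shared.index = i
    results_shared.append(shared)

# Creating a fresh instance per iteration keeps each element distinct.
results_fresh = []
for i in range(3):
    item = Item()
    item.index = i
    results_fresh.append(item)

print([r.index for r in results_shared])  # [2, 2, 2]
print([r.index for r in results_fresh])   # [0, 1, 2]
```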
I think you are doing the deletion wrong. This code should work correctly without errors:

```js
import React from 'react';
import { Routes, Route } from 'react-router-dom';
import HomePage from './HomePage';
import PlanTripPage from './PlanTripPage';
import NavBar from './NavBar';
import ItineraryPage from './ItineraryPage';
import './App.css';

function App() {
  return (
    <div>
      <NavBar />
      <Routes>
        <Route exact path="/" element={<HomePage />} />
        <Route path="/plan-trip" element={<PlanTripPage />} />
        <Route path="/itinerary" element={<ItineraryPage />} />
      </Routes>
    </div>
  );
}

export default App;
```

Or if the problem persists, check other components.
I am trying to have the album images for each of the top tracks shown next to their song, but no matter what I do they are showing up as little green boxes.

[Photo of What's Happening][1]

Here is the relevant code:

```java
public void onGetUserProfileClicked() {
    if (mAccessToken == null) {
        Toast.makeText(this, "You need to get an access token first!", Toast.LENGTH_SHORT).show();
        return;
    }

    // Create a request to get the user profile
    final Request request = new Request.Builder()
            .url("https://api.spotify.com/v1/me")
            .addHeader("Authorization", "Bearer " + mAccessToken)
            .build();

    cancelCall();
    mCall = mOkHttpClient.newCall(request);

    mCall.enqueue(new Callback() {
        @Override
        public void onFailure(Call call, IOException e) {
            Log.d("HTTP", "Failed to fetch data: " + e);
        }

        @Override
        public void onResponse(Call call, Response response) throws IOException {
            try {
                String jsonResponse = response.body().string();
                Log.d(TAG, "JSON Response: " + jsonResponse);

                JSONObject jsonObject = new JSONObject(jsonResponse);
                JSONArray itemsArray = jsonObject.getJSONArray("items");

                StringBuilder formattedData = new StringBuilder();

                // Add header for top tracks
                formattedData.append("<h2>Your top tracks!</h2>");

                for (int i = 0; i < itemsArray.length(); i++) {
                    JSONObject trackObject = itemsArray.getJSONObject(i);
                    JSONObject albumObject = trackObject.getJSONObject("album");
                    JSONArray imagesArray = albumObject.getJSONArray("images");

                    String imageUrl = "";
                    if (imagesArray.length() > 0) {
                        JSONObject imageObject = imagesArray.getJSONObject(0);
                        imageUrl = imageObject.getString("url");
                        Log.d(TAG, "Image URL for track " + i + ": " + imageUrl);
                    }

                    String artistName = "";
                    JSONArray artistsArray = trackObject.getJSONArray("artists");
                    if (artistsArray.length() > 0) {
                        JSONObject artistObject = artistsArray.getJSONObject(0);
                        artistName = artistObject.getString("name");
                    }

                    String trackName = trackObject.getString("name");

                    // Load image using Glide
                    ImageView imageView = new ImageView(MainActivity.this);
                    Glide.with(MainActivity.this)
                            .load(imageUrl)
                            .into(imageView);

                    // Create HTML content for each song box
                    formattedData.append("<div style=\"display:flex; align-items:center;\">")
                            .append("<div style=\"width: 200px; height: 200px; margin-right: 10px;\">")
                            .append(imageView)
                            .append("</div>")
                            .append("<div style=\"border: 1px solid #ccc; padding: 10px; margin-bottom: 10px;\">")
                            .append("<p>").append(trackName).append(" - ").append(artistName).append("</p>")
                            .append("</div>")
                            .append("</div>");
                }

                // Display the formatted data with HTML formatting
                runOnUiThread(() -> {
                    profileTextView.setText(Html.fromHtml(formattedData.toString(), Html.FROM_HTML_MODE_COMPACT));
                    profileTextView.setMovementMethod(LinkMovementMethod.getInstance());
                });
            } catch (IOException | JSONException e) {
                Log.e(TAG, "Error processing response: " + e.getMessage());
                runOnUiThread(() -> Toast.makeText(MainActivity.this, "Failed to process response", Toast.LENGTH_SHORT).show());
            }
        }
    });
}
```

When I logged the image URLs in my Logcat to see if they're valid, they were: when I clicked on them they showed up fine on the internet. I also made sure my AndroidManifest.xml allowed internet usage.

[1]: https://i.stack.imgur.com/lBltp.png
In case you need to cancel one of the jobs, it would probably be easier to use separate stages. Or, if you need to choose which job(s) to run at queue time, use boolean pipeline parameters.

## Running jobs in the same stage

The trick is to add boolean pipeline parameters for each job type and select which jobs to run when queuing a new build:

[![Pipeline with parameters][1]][1]

Pipeline code:

```yaml
name: test-pipeline-$(date:yyyyMMdd-HHmmss)

parameters:
  - name: deployTerraform
    type: boolean
    displayName: 'Deploy Terraform?'
    default: true
  - name: deployAnsible
    type: boolean
    displayName: 'Deploy Ansible?'
    default: true

trigger: none

pool: Default

stages:
  - stage: deploy_stuff
    displayName: 'Deploy stuff'
    dependsOn: []
    jobs:
      - ${{ if parameters.deployAnsible }}:
          - job: deployAnsible
            displayName: 'Deploy Ansible'
            steps:
              - checkout: none
              - script: echo "Deploying Ansible"
                displayName: 'Deploy Ansible'
      - ${{ if parameters.deployTerraform }}:
          - job: deployTerraform
            displayName: 'Deploy Terraform'
            steps:
              - checkout: none
              - script: echo "Deploying Terraform"
                displayName: 'Deploy Terraform'
```

## Running jobs in separate stages

This is the simplest solution; you can decide which stages to run when queuing a new build:

[![Queue pipeline][2]][2]

[![Select stages to run][3]][3]

Pipeline code:

```yaml
name: test-pipeline-2-$(date:yyyyMMdd-HHmmss)

trigger: none

pool: Default

stages:
  - stage: deploy_terraform
    displayName: 'Deploy Terraform'
    dependsOn: []
    jobs:
      - job: deployTerraform
        displayName: 'Deploy Terraform'
        steps:
          - checkout: none
          - script: echo "Deploying Terraform"
            displayName: 'Deploy Terraform'
  - stage: deploy_ansible
    displayName: 'Deploy Ansible'
    dependsOn: deploy_terraform
    jobs:
      - job: deployAnsible
        displayName: 'Deploy Ansible'
        steps:
          - checkout: none
          - script: echo "Deploying Ansible"
            displayName: 'Deploy Ansible'
```

PS: Consider setting the `timeoutInMinutes` and/or the `cancelTimeoutInMinutes` properties of the job (see [job timeouts](https://learn.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=azure-devops&tabs=yaml#timeouts)).

[1]: https://i.stack.imgur.com/eAWRb.png
[2]: https://i.stack.imgur.com/MPmKL.png
[3]: https://i.stack.imgur.com/gZTff.png
I'm trying to compile an eBPF program inside a Docker container based on an ARM64 Ubuntu 20.04 image. I'm encountering a compilation error where the `clang` compiler cannot find the definition for the `__u64` type, which should be provided by the kernel headers. Here's the error I'm getting:

```
/usr/include/bpf/bpf_helper_defs.h:78:90: error: unknown type name '__u64'
static long (* const bpf_map_update_elem)(void *map, const void *key, const void *value, __u64 flags) = (void *) 2;
                                                                                         ^
```

I've installed the `linux-headers-${KERNEL_VERSION}` package and set up the include paths for clang as follows:

```
RUN clang -O2 -target bpf \
    -I/usr/src/linux-headers-${KERNEL_VERSION}/include \
    -I/usr/src/linux-headers-${KERNEL_VERSION}/include/uapi \
    -I/usr/src/linux-headers-${KERNEL_VERSION}/arch/arm64/include \
    -I/usr/include/aarch64-linux-gnu \
    -I/usr/include/aarch64-linux-gnu/asm \
    -I/usr/include/aarch64-linux-gnu/asm-generic \
    -I/usr/include/bpf \
    -c ebpf_program.c -o ebpf_program.o
```

I've also created symbolic links to ensure that the `asm` and `asm-generic` directories are correctly referenced:

```
RUN ln -s /usr/include/aarch64-linux-gnu/asm /usr/include/asm
RUN ln -s /usr/include/aarch64-linux-gnu/asm-generic /usr/include/asm-generic
```

The `clang` compiler still cannot find the `asm/types.h` file. I've verified that the file exists and is accessible in the container. Here's the output when I SSH into the container:

```
root@container:/usr/include/aarch64-linux-gnu/asm# cat types.h
#include <asm-generic/types.h>
```

Here is my full Dockerfile:

```
# Use a base image that supports ARM64 architecture for Apple Silicon (local machines)
FROM --platform=linux/arm64 ubuntu:20.04

# Use ARG to specify the kernel version, allowing for flexibility
ARG KERNEL_VERSION=5.4.0-174-generic
ARG DEBIAN_FRONTEND=noninteractive

# Install dependencies
RUN apt-get update && apt-get install -y \
    bpfcc-tools \
    clang \
    llvm \
    libelf-dev \
    zlib1g-dev \
    gcc \
    iproute2 \
    git \
    curl \
    ca-certificates \
    linux-libc-dev \
    make \
    pkg-config \
    && apt-get clean

# Install a specific version of the Linux kernel headers
RUN apt-get install -y linux-headers-${KERNEL_VERSION}

# Install Go for ARM64
RUN curl -OL https://golang.org/dl/go1.21.0.linux-arm64.tar.gz \
    && tar -C /usr/local -xzf go1.21.0.linux-arm64.tar.gz \
    && rm go1.21.0.linux-arm64.tar.gz

# Set environment variables for Go
ENV PATH=$PATH:/usr/local/go/bin
ENV GOPATH=/go
ENV PATH=$PATH:$GOPATH/bin

# Copy your Go application source code and module files into the container
COPY src /go/src/go_user_agent

# Set the working directory to the Go user agent script's location
WORKDIR /go/src/go_user_agent/

# Clone the libbpf repository and build it
RUN git clone https://github.com/libbpf/libbpf.git /usr/src/libbpf && \
    cd /usr/src/libbpf/src && \
    make && \
    make install

# Create a symbolic link from /usr/include/aarch64-linux-gnu/asm to /usr/include/asm
RUN ln -s /usr/include/aarch64-linux-gnu/asm /usr/include/asm

# Create a symbolic link from /usr/include/aarch64-linux-gnu/asm-generic to /usr/include/asm-generic
RUN ln -s /usr/include/aarch64-linux-gnu/asm-generic /usr/include/asm-generic

# Set the KERNEL_HEADERS_DIR environment variable to the path of the installed kernel headers
ENV KERNEL_HEADERS_DIR=/usr/src/linux-headers-${KERNEL_VERSION}

# Compile the eBPF program using the correct include paths
RUN clang -O2 -target bpf \
    -I$KERNEL_HEADERS_DIR/include \
    -I$KERNEL_HEADERS_DIR/include/linux \
    -I$KERNEL_HEADERS_DIR/include/uapi \
    -I$KERNEL_HEADERS_DIR/arch/arm64/include \
    -I/usr/include/aarch64-linux-gnu \
    -I/usr/include/asm-generic \
    -I/usr/include/bpf \
    -c ebpf_program.c -o ebpf_program.o

# Download dependencies
RUN go get -d -v

# Run tests and benchmarks
RUN go test -v ./...  # Run all tests

# Run benchmarks and capture CPU profiles
RUN go test -bench . -benchmem -cpuprofile cpu.prof -o bench.test

# Build the Go user agent script
RUN go build -o go_user_agent

# Command to run the user agent script
CMD ["./go_user_agent"]
```

I'm stuck and not sure what else I can do to resolve this issue. Has anyone encountered a similar problem, or can you provide guidance on what might be going wrong?
Compiling eBPF program in Docker fails due to missing '__u64' type
|c|linux|docker|ubuntu|ebpf|
|string|google-sheets|
I write code for the ESP32 microcontroller. I set up a class named "dmhWebServer". This is the call to initiate my classes: an object of the dmhFS class is created and I give it to the constructor of the dmhWebServer class by reference. For my error, see the last code block that I posted. The other code blocks should explain the path to where the error shows up.

```
#include <dmhFS.h>
#include <dmhNetwork.h>
#include <dmhWebServer.h>

void setup() {
    // initialize filesystems
    dmhFS fileSystem = dmhFS(SCK, MISO, MOSI, CS); // compiler is happy, I have an object now

    // initialize Activate Busy Handshake
    dmhActivateBusy activateBusy = dmhActivateBusy();

    // initialize webserver
    dmhWebServer webServer(fileSystem, activateBusy); // compiler also happy (call by reference)
}
```

The class dmhFS has a custom constructor (header file, all good in here):

```
#include <Arduino.h>
#include <SD.h>
#include <SPI.h>
#include <LittleFS.h>
#include <dmhPinlist.h>

#ifndef DMHFS_H_
#define DMHFS_H_

class dmhFS {
private:
    // serial peripheral interface
    SPIClass spi;
    String readFile(fs::FS &fs, const char *path);
    void writeFile(fs::FS &fs, const char *path, const char *message);
    void appendFile(fs::FS &fs, const char *path, const char *message);
    void listDir(fs::FS &fs, const char *dirname, uint8_t levels);

public:
    dmhFS(uint16_t sck, uint16_t miso, uint16_t mosi, uint16_t ss);
    void writeToSDCard();
    void saveData(std::string fileName, std::string contents);
    String readFileSDCard(std::string fileName);
};

#endif
```

Header file of the dmhWebServer class (not the whole thing):

```
public:
    dmhWebServer(dmhFS &fileSystem, dmhActivateBusy &activateBusyHandshake);
};
```

This is the constructor of the dmhWebServer class:

```
#include <dmhWebServer.h>
#include <dmhFS.h>
#include <dmhActivateBusy.h>

// This is the line where the compiler throws an error, character 85 is ")"
dmhWebServer::dmhWebServer(dmhFS &fileSystem, dmhActivateBusy &activateBusyHandshake) {
    // webserver sites handlers
    setupHandlers();
    abh = activateBusyHandshake;
    sharedFileSystem = fileSystem;
    // start web server, object "server" is instantiated as private member in header file
    server.begin();
}
```

My compiler says: "src/dmhWebServer.cpp:5:85: error: no matching function for call to 'dmhFS::dmhFS()'". Line 5, character 85 is at the end of the constructor function declaration.

This is my first question on Stack Overflow after only lurking around here :) I'll try to clarify if something is not alright with the question. I checked that I'm doing the call by reference in C++ right, and I am giving the constructor "dmhWebServer" what it wants. What is the problem here?
```
const express = require('express');
const router = express.Router();
const db = require('../db'); // Import MySQL connection

// Handle GET request to the root URL
router.get('/', function(req, res) {
    // Query to count the number of rows in the products table
    db.query('SELECT COUNT(*) AS productCount FROM products', (err, countResult) => {
        console.log("index.js file is being executed.");
        if (err) {
            console.error('Error counting products:', err);
            res.status(500).send('Error counting products');
        } else {
            const productCount = countResult[0].productCount; // Extract product count from the result
            if (productCount === 0) {
                console.log('No products found');
                res.render('index', { session: req.session, products: [], productCount: 0 });
            }
            // Query to select all columns from the products table
            db.query('SELECT * FROM products', (err, results) => {
                if (err) {
                    console.error('Error retrieving product data:', err);
                    res.status(500).send('Error retrieving product data');
                } else {
                    console.log('Products:', results);
                    // Map the results to modify the image column to display only the first image filename
                    const products = results.map(product => {
                        // Split the images filenames using dash (-) separator
                        const imageFilenames = product.images.split('-');
                        // Replace dashes in filenames with colons (:)
                        const sanitizedFilenames = imageFilenames.map(filename => filename.replace(/-/g, ':'));
                        // Select the first filename
                        const firstImage = sanitizedFilenames[0];
                        // Return the modified product object
                        return {
                            ...product,
                            // Replace the images column with the first image filename
                            images: firstImage
                        };
                    });
                    res.render('index', { session: req.session, products: products, productCount: productCount });
                }
            });
        }
    });
});

module.exports = router;
```

I'm trying to fetch data from a MySQL database table called 'products'. I have index.hbs set up and everything should be working; I've checked all routes and I still get nothing. The page should display the number of products there are, plus images, title, stuff like that. But for some reason, there's nothing in the console, no errors, nothing at all! Is something wrong with this piece of code, or should I look somewhere else?
The top-rated answer does not work with the new reflection implementation of [JEP 416](https://openjdk.org/jeps/416) in e.g. Java 21, which uses MethodHandles and ignores the flags value on the Field abstraction object.

One solution is to use Unsafe. However, with [this JEP](https://openjdk.org/jeps/8323072), Unsafe and the important `long objectFieldOffset(Field f)` and `long staticFieldOffset(Field f)` methods are getting deprecated for removal, so for example this will not work in the future:

```java
final Unsafe unsafe = // ..get Unsafe (...and add subsequent --add-opens statements for this to work)
final Field ourField = Example.class.getDeclaredField("changeThis");
final Object staticFieldBase = unsafe.staticFieldBase(ourField);
final long staticFieldOffset = unsafe.staticFieldOffset(ourField);
unsafe.putObject(staticFieldBase, staticFieldOffset, "it works");
```

I do not recommend this, but it is possible in Java 21 with the new reflection implementation, making heavy use of the internal API, if really needed.

# Java 21+ solution without `Unsafe`

The gist of it is to use a `MethodHandle` that can write to a static final field, obtained from the internal [`getDirectFieldCommon(...)`](https://github.com/openjdk/jdk/blob/master/src/java.base/share/classes/java/lang/invoke/MethodHandles.java#L4165) method of the Lookup by providing it with a `ReferenceKind` that is manipulated via Reflection to remove the Final flag from it.

```java
MethodHandles.Lookup lookup = MethodHandles.privateLookupIn(MyClassWithStaticFinalField.class, MethodHandles.lookup());

Method getDirectFieldCommonMethod = lookup.getClass().getDeclaredMethod("getDirectFieldCommon",
        byte.class, Class.class, memberNameClass, boolean.class);
getDirectFieldCommonMethod.setAccessible(true);

// Invoke last method to obtain the method handle
MethodHandle finalFieldHandle = (MethodHandle) getDirectFieldCommonMethod.invoke(lookup,
        manipulatedReferenceKind, myStaticFinalField.getDeclaringClass(), memberNameInstanceForField, false);

finalFieldHandle.invoke("new Value for static final field");
```

See my answer [here](https://stackoverflow.com/a/77705202/23144795) for a full working example on how to leverage the internal API to set a final field in Java 21 without Unsafe.
I'm trying to connect to a Postgres database using Adminer. Here is my Makefile:

```
.PHONY: postgres adminer migrate

postgres:
	docker run --rm -ti -p 5432:5432 -e POSTGRES_PASSWORD=secret postgres

adminer:
	docker run --rm -ti -p 8080:8080 adminer

migrate:
	migrate -source file://migrations \
		-database postgres://postgres:secret@localhost:8080/postgres?sslmode=disable up
```

When I try connecting, this is what I get:

[![error](https://i.stack.imgur.com/9ZMmO.png)](https://i.stack.imgur.com/9ZMmO.png)

I tried looking for answers and tried this command:

`docker run --rm -ti --network host -e POSTGRES_PASSWORD=secret postgres`

But it also did not work and I got the same error. Can I get some help please?
Can't connect to postgres with Adminer using Docker
|postgresql|docker|makefile|adminer|
> Is it possible to correctly document the mixin?

Apparently not. The mixin's methods (URL helpers) remain undocumented by YARD unless manually documented elsewhere in your project's documentation.

[John Bachir](https://stackoverflow.com/users/168143/john-bachir) mentions [`lsegal/yard` issue 1542](https://github.com/lsegal/yard/issues/1542), which is more about helping to manage warnings and keep the CI pipeline clean: it does not provide a direct path to documenting the methods brought into the class by the `url_helpers` mixin within the YARD documentation itself.

A Rails-specific solution involves wrapping the `include Rails.application.routes.url_helpers` statement inside an [`ActiveSupport::Concern`](https://api.rubyonrails.org/v7.1.3.2/classes/ActiveSupport/Concern.html), which can bypass top-level inclusion warnings by YARD's parser:

```ruby
module WithRoutes
  extend ActiveSupport::Concern

  included do
    include Rails.application.routes.url_helpers
  end
end

class MyController < ApplicationController
  include WithRoutes
end
```

Or you can create a YARD extension that overrides the warning behavior for specific patterns, allowing you to bypass warnings for `include Rails.application.routes.url_helpers`:

```ruby
# yard_extensions/ignore_rails_mixin.rb
module IgnoreRailsMixin
  def process
    super
  rescue YARD::Parser::UndocumentableError => e
    raise e unless statement.last.source.start_with?("Rails.")
  end
end

YARD::Handlers::Ruby::MixinHandler.prepend(IgnoreRailsMixin)
```

Then, use this extension when running YARD:

```bash
yard doc -e yard_extensions/ignore_rails_mixin.rb --fail-on-warning
```

The alternative would be to implement a [CI script to filter out "undocumentable" warnings](https://github.com/lsegal/yard/issues/1542#issuecomment-2016695382) while failing on other types of warnings.
Products aren't displayed after fetching data from mysql db (node.js & express)
|mysql|node.js|express|
Alert! I am feeling so embarrassed asking such an entry-level question here!

Hey guys, I have been working on a project that involves timeseries total electron content (TEC) data. My goal is to apply statistical analysis and find anomalies in TEC due to earthquakes. I am following a research paper for the sliding IQR method, but for some reason I am not getting the results shown in the paper with the given formulas. So I decided to use the rolling mean + 1.6 STD method instead.

The problem is that when I use the `ax.fill_between(x1=index, y1=ub, y2=lb)` method, my confidence interval band is being plotted a step further than the data points. Please see the given figure for a better understanding. Here's what I am currently doing:

```
df = DAEJ.copy()
window = 12

hourly = df.resample(rule="h").median()
hourly["ma"] = hourly["TEC"].rolling(window=window).mean()
hourly["hour"] = hourly.index.hour
hourly["std_err"] = hourly["TEC"].rolling(window=window).std()
hourly["ub"] = hourly["ma"] + (1.67 * hourly["std_err"])
hourly["lb"] = hourly["ma"] - (1.67 * hourly["std_err"])
hourly["sig2"] = hourly["TEC"].rolling(window=window).var()
hourly["kur"] = hourly["TEC"].rolling(window=window).kurt()
hourly["pctChange"] = hourly.TEC.pct_change(12, fill_method="bfill")
hourly = hourly.dropna()

dTEC = hourly[(hourly["TEC"] > hourly["ub"])]

fig, ax = plt.subplots(figsize=(12, 4))
hourly["TEC"].plot(ax=ax, title="TEC Anomaly", label="Station: DAEJ")
ax.fill_between(x=hourly.index, y1=hourly["ub"], y2=hourly["lb"], color="red", label="Conf Interval", alpha=.4)
ax.legend()
```

And here's the result I got:

[TEC Anomaly detection using rolling std method](https://i.stack.imgur.com/cFifC.png)

As seen in the figure, the data and the colored band aren't aligned properly. I know that calculating a rolling mean with a window of 12 hours results in a 12-hour shifted value, but even after dropping the first values I am still not getting an aligned figure.
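For reference, the offset described here can be reproduced without the TEC data: by default a rolling window is right-labelled, i.e. the statistic for a window is stamped at the window's *last* timestamp, which makes the band lag the raw series. A small sketch (assuming pandas is available) showing the labelling, with `center=True` as one way to re-centre it:

```python
import pandas as pd

# Hourly toy series standing in for the resampled TEC data.
s = pd.Series(range(6), index=pd.date_range("2024-01-01", periods=6, freq="h"))

right = s.rolling(window=3).mean()                   # labelled at the window's end
centered = s.rolling(window=3, center=True).mean()   # labelled at the window's middle

# The first complete right-labelled window covers 00:00-02:00 and is stamped at 02:00...
print(right.dropna().index[0])     # 2024-01-01 02:00:00
# ...while centering stamps that same window at 01:00, keeping band and data aligned.
print(centered.dropna().index[0])  # 2024-01-01 01:00:00
```

Dropping the leading NaNs removes rows but does not move the labels, which would explain why `dropna()` alone leaves the band shifted.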
Need help realigning python fill_between with data points
|python|pandas|outliers|rolling-computation|anomaly-detection|
Here is a script I use to find out which namespaces have problem pods and what the problems might be. I call the script `notrunning`:

```sh
kubectl get po -A --no-headers | awk '
  BEGIN {
    SUBSEP=" "
    format = "%-20s %20s %5s\n"
    printf format, "NAMESPACE", "STATUS", "COUNT"
  }
  !/Running/ {a[$1,$4]++}
  END {
    for (i in a) {split(i,t); printf format, t[1],t[2],a[i]}
  }
' | sort
```

I get results similar to this:

```sh
$ notrunning
NAMESPACE                          STATUS COUNT
namespace-01             InvalidImageName     2
namespace-02             InvalidImageName     1
namespace-02        Init:ImagePullBackOff     1
namespace-03             CrashLoopBackOff     2
namespace-03             InvalidImageName     9
namespace-04            Init:ErrImagePull     1
```
**mapper.py:**

```python
#!/usr/bin/env python
import sys

for line in sys.stdin:
    parts = line.strip().split(',')
    article_id = parts[0]
    section_text = parts[3]
    for term in section_text.split():
        print(f"{term}\t{article_id}")
```

**Reducer.py:**

```python
#!/usr/bin/env python
import sys

current_term = None
article_ids = []

for line in sys.stdin:
    term, article_id = line.strip().split('\t')
    if current_term != term:
        if current_term:
            print(f"{current_term}\t{','.join(article_ids)}")
        current_term = term
        article_ids = []
    article_ids.append(article_id)

if current_term:
    print(f"{current_term}\t{','.join(article_ids)}")
```

**Error:**

```
2024-03-15 17:15:20,282 INFO mapreduce.Job: Task Id : attempt_1710516647105_0007_m_000005_0, Status : FAILED
Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 127
    at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:326)
    at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:539)
    at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
    at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:466)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:350)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:178)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:172)
```

**Command:**

```
hadoop jar /home/tashi/hadoop-3.3.6/share/hadoop/tools/lib/hadoop-streaming-3.3.6.jar \
    -input hdfs:///assignment/enwiki-20170820.csv \
    -output /input/output7 \
    -mapper mapper.py \
    -reducer reducer.py \
    -file /home/tashi/mapper.py \
    -file /home/tashi/reducer.py
```
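For reference, exit code 127 from PipeMapRed usually means the shell could not execute the script at all (a `python` interpreter missing on the task nodes, no execute permission, or Windows CRLF line endings breaking the shebang), rather than a logic error. The streaming contract the two scripts implement can be exercised without Hadoop; a self-contained sketch of map, then sort (what the shuffle does), then group-by-term on dummy rows, with columns in the same positions the mapper assumes (id in column 0, section text in column 3):

```python
# Two stand-in CSV rows shaped like the mapper expects.
rows = [
    "1,x,y,hello world",
    "2,x,y,hello hadoop",
]

# Map phase: emit term\tarticle_id pairs, as mapper.py does.
mapped = []
for line in rows:
    parts = line.strip().split(",")
    article_id, section_text = parts[0], parts[3]
    for term in section_text.split():
        mapped.append(f"{term}\t{article_id}")

# Shuffle phase: Hadoop delivers mapper output to reducers sorted by key.
mapped.sort()

# Reduce phase: group article ids per term, as reducer.py does.
reduced = {}
for pair in mapped:
    term, article_id = pair.split("\t")
    reduced.setdefault(term, []).append(article_id)

print(reduced)  # {'hadoop': ['2'], 'hello': ['1', '2'], 'world': ['1']}
```

If the logic behaves locally, the 127 points at the cluster environment; common things to check are whether `#!/usr/bin/env python` should be `python3` on the task nodes, whether the scripts are executable, and whether they were saved with Unix line endings.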
I am trying to build something simple with Selenium but cannot get it off the ground at all. I have the below code:

```python
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager
import os

os.environ['WDM_SSL_VERIFY'] = '0'  # Disable SSL

driver = webdriver.Chrome(ChromeDriverManager().install())
driver.get("http://www.google.com")
```

When I run it I am getting the below error:

```
Timeout value connect was <object object at 0x000002B530C948B0>, but it must be an int, float or None.
```

My ChromeDriver is in the same folder as the script.

- Python version: 3.11.8
- Urllib3 version: 2.31.0
- Selenium version: 4.18.1

I have tried a number of different routes, like using the execution path of the ChromeDriver, or

`driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))`

which gives me a `WebDriver.__init__() got an unexpected keyword argument 'service'` error, and changing the versions as suggested in a few posts. I am a bit stuck, to be honest; any pointers would be very helpful.

Also tried:

```python
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager
import os

os.environ['WDM_SSL_VERIFY'] = '0'  # Disable SSL

driver = webdriver.Chrome()

# Open Scrapingbee's website
driver.get("http://www.scrapingbee.com")
```
I am trying to program a COM object with the Enumerable and Enumerator inside, in Delphi. It's a nightmare.

First, in the .ridl file:

[![enter image description here][1]][1]

Next, in the _tlb.pas I have this:

```
// *********************************************************************//
// Interface   : ITrackingRatesCol
// Indicateurs : (4416) Dual OleAutomation Dispatchable
// GUID        : {F116E1DA-7E3E-4207-BD17-1615DCE4BE41}
// *********************************************************************//
ITrackingRatesCol = interface(IEnumerable)
  ['{F116E1DA-7E3E-4207-BD17-1615DCE4BE41}']
  function Get_Count: Integer; safecall;
  property Count: Integer read Get_Count;
end;

// *********************************************************************//
// DispIntf    : ITrackingRatesColDisp
// Indicateurs : (4416) Dual OleAutomation Dispatchable
// GUID        : {F116E1DA-7E3E-4207-BD17-1615DCE4BE41}
// *********************************************************************//
ITrackingRatesColDisp = dispinterface
  ['{F116E1DA-7E3E-4207-BD17-1615DCE4BE41}']
  property Count: Integer readonly dispid 1;
  function GetEnumerator: IEnumVARIANT; dispid -4;
end;
```

And in the .pas file of my object:

```
TTrackingRatesCol = class(TAutoObject, ITrackingRatesCol, IEnumerable)
private
  fIndex, fCount: integer;
protected
  function GetCurrent: driverates; safecall;
  function Get_Count: Integer; safecall;
  function MoveNext: WordBool; safecall;
  function GetEnumerator: IEnumerator; safecall;
public
  procedure Initialize; override;
  destructor Destroy; override;
end;

implementation

uses ComServ, Data.Win.ADODB, Core, SysUtils;

function TTrackingRatesCol.GetEnumerator: IEnumerator;
begin
  result := self;
end;
```

But I always get the message that the implementation of the interface IEnumerable.GetEnumerator is missing!

Thanks a lot for your help.
Michel

[1]: https://i.stack.imgur.com/RPaEQ.png
How to program a COM object with an IEnumerator, IEnumerable interface inside
|delphi|com|
From looking and researching online, there do seem to be numerous ways of doing this, but I am not sure any of them fit my use case. I would still like my dates in the format "3/30/24". I am extracting a large amount of data and everything works as expected, but when I tried to sort my dates they were sorted lexicographically. This happens when I read from the CSV rather than when writing, because I just wanted to test whether it could sort at all after writing to it.

```python
data_to_export = {
    "Company Name": ["Mcdonalds", "Burgerking"],
    "Delivery Address": ["123 lake rd", "124 west rd"],
    "Date": ["3/30/24", "1/23/24"],
    "Customer Name": ["Zack", "Peter"],
}
```

```python
df = pd.DataFrame(data_to_export)
df.to_csv(join_move_to, index=False)
```

Above is some dummy data so you can get a quick example, and below is how I save it. I did have some sort methods before this, but they didn't work so I deleted them. Before writing, how can I specify that I want to sort by date while keeping the format "(Month/Day/Year)"? I do understand this is a string, so it must be converted to some sort of date and then sorted, but I cannot find a way to do that. Here is what I tried, just so you can see:

```python
df = pd.DataFrame(data_to_export)
df["Date"] = pd.to_datetime(df["Date"], format="%m/%d/%y")
df.sort_values(by="Date", inplace=True)
df.to_csv(path, index=False)
```
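For what it's worth, the underlying idea (sort on the parsed date, but keep the original strings so the "Month/Day/Year" formatting survives) can be sketched with just the standard library; the sample values below are made up for illustration:

```python
from datetime import datetime

# Hypothetical "Date" column values in M/D/YY form, as in the question.
dates = ["3/30/24", "1/23/24", "12/1/23"]

# Sort chronologically by parsing each string as a date, but keep the
# strings themselves, so the original formatting is preserved on output.
sorted_dates = sorted(dates, key=lambda d: datetime.strptime(d, "%m/%d/%y"))

print(sorted_dates)  # ['12/1/23', '1/23/24', '3/30/24']
```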
I'm defining the following flag for my CLI in Golang:

```go
var flags flag.FlagSet
phoneRegexp := flags.String("phone_regexp", "", "custom regex for phone checking.")
```

But, it fails when I'm passing the following argument:

```
./cli --phone_regexp='^\d{1,5}$'
cli: no such flag -5}$
```

While I understand that this is a parsing problem (it encounters the comma and thinks it's a new flag), I cannot seem to figure out how to escape it (I tried adding a \ before the comma) or how to better describe the Go flag. Does anyone know how to solve this problem?

## Edit

As I tried to make the question a little bit more generic, we couldn't reproduce the error. I'm creating a protoc plugin, thus the options are parsed like so:

```go
import (
	"flag"

	"google.golang.org/protobuf/compiler/protogen"
)

func main() {
	var flags flag.FlagSet
	phoneRegexp := flags.String("phone_regexp", "", "custom regex for phone checking.")

	opts := protogen.Options{
		ParamFunc: flags.Set,
	}
	//...
}
```

and then the flag is set like the following (check is the plugin name):

```bash
protoc ... --check_opt=phone_regexp='^\d{1,5}$' my.proto
```
Python is an interpreted language. When you write something like:

```
print("1"*n)
```

the Python interpreter performs a loop internally. It is efficient in Python because the interpreter is usually written in a compiled language like C (or C++), and so the loop is fast when done internally.

But since C itself is a compiled language, writing a simple loop (which will run in **O(n)** as you requested) is the most efficient thing you can do. This loop can either be written "manually" or via some library function like `memset`, which is implemented with a loop.

Note that such a loop can be replaced with recursion, but this will not improve performance (recursion is _usually less efficient_ than a simple loop).

The most "expensive" operation in this case is actually the I/O, which might be buffered. So the best you can do might be to build the final string (with a loop) into a buffer, and then use one `printf` statement to dump it to the console. As usual with matters of performance, you should profile it to check.

**Note:** The answer above assumes that `n` isn't known in advance and can be arbitrarily large. I used this assumption because: (1) the Python statement used as a reference hints at that; (2) the OP mentioned O(n) complexity, which is meaningful only when `n` is relatively large.
If you run with strace:

```
$ strace -fe execve bash -c 'exec -a fake ./test.sh'
execve("/usr/bin/bash", ["bash", "-c", "exec -a fake ./test.sh"], 0x7ffd9c557aa0 /* 56 vars */) = 0
execve("/home/xxx/test.sh", ["fake"], 0x58fc3a91a130 /* 56 vars */) = 0
# /home/xxx/test.sh
#
```

you will notice that "fake" _is_ passed as the zeroth argument; it's just <strike>that the shell overrides $0 with the script name</strike> indeed, as @oguz pointed out, this is the special-case handling of interpreter scripts by the exec calls.

Compare with the same shell script but without `#!/bin/bash`:

```
$ strace -fe execve bash -c 'exec -a fake ./test.sh'
execve("/usr/bin/ksh", ["ksh", "-c", "exec -a fake ./test.sh"], 0x7fff79964a40 /* 56 vars */) = 0
execve("/home/xxx/test.sh", ["fake"], 0x5fbe079d0b08 /* 57 vars */) = -1 ENOEXEC (Exec format error)
# fake
#
```
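The interpreter-script behavior above can be reproduced without strace; this is a self-contained sketch (the `/tmp/show0.sh` helper path is just for illustration):

```shell
#!/bin/bash
# Create a tiny helper script that just echoes its $0.
cat > /tmp/show0.sh <<'EOF'
#!/bin/bash
echo "$0"
EOF
chmod +x /tmp/show0.sh

# Run it with a fabricated zeroth argument, as in the answer above.
# Prints /tmp/show0.sh, not "fake": when the kernel handles the #! line,
# it rebuilds the interpreter's argument list from the script path,
# so the fake argv[0] never reaches the script as $0.
bash -c 'exec -a fake /tmp/show0.sh'
```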
It's a bit late for answers to this post, but this could be useful to someone getting tired of IBApi, which still has many complications, and more are added with each version. I tried ib_insync, which makes it easier to use IBApi asynchronously. I tried a few examples myself and uploaded them here: [https://github.com/cosmoarunn/ib_insync_examples/][1] See if it fits.

[1]: https://github.com/cosmoarunn/ib_insync_examples/
|c#|asp.net-core|.net-6.0|
I get an error when a full simple logistic model is run in R. However, this only occurs when I include SHTCVD191_A (which is not the variable with the most missing values). Including the variables with the most missing values, SHTCVD19NM_A and SHOTTYPE_A, without SHTCVD191_A (which is integral to the study) runs perfectly. On the other hand, running a model with SHTCVD191_A and all other predictors, excluding SHTCVD19NM_A and SHOTTYPE_A, also runs perfectly. All variables have been confirmed to be factors and missing values have been dropped. I would appreciate insight into how to fix this.

```
model_8c <- glm(CVD ~ SEX_A + PHSTAT_A + AGEP_A + EDUCP_A + RACEALLP_A + SHTCVD191_A + SHOTTYPE_A + SHTCVD19NM_A, family = binomial, data = data)
```

```
Error in `contrasts<-`(`*tmp*`, value = contr.funs[1 + isOF[nn]]) : 
  contrasts can be applied only to factors with 2 or more levels
```

I have tried running other models as well, including a multinomial one, and removing missing data. I have also tried the methods I came across on Stack Overflow. However, none of them fixed the error.
How do I fix the "Error in contrasts" message in R?
|error-handling|glm|contrast|binomial-coefficients|
I'm encountering an issue while trying to scrape data using Puppeteer Cluster and write it to an Excel file using excel4node. Here's a summary of the script's functionality:

- I'm using Puppeteer Cluster to scrape data from multiple URLs concurrently.
- For each URL, I scrape various pieces of information from the webpage.
- I'm using a write-queue mechanism to write the scraped data to an Excel file using excel4node.

**The problem is that while some data is successfully written to the Excel file, not all of it is being captured. It seems like some rows are missing in the Excel file compared to the number of URLs processed. When I set maxConcurrency to 1 it works fine.**

*Minimal reproducible example:*

```
const { Cluster } = require('puppeteer-cluster');
var xl = require('excel4node');
var wb = new xl.Workbook();
var ws = wb.addWorksheet('Sheet 1');

(async () => {
    let row_id = 1;
    const cluster = await Cluster.launch({
        concurrency: Cluster.CONCURRENCY_PAGE,
        maxConcurrency: 4,
        puppeteerOptions: {
            headless: false
        }
    });

    async function writeDataToExcel(row_id, text) {
        ws.cell(row_id, 1).string(text);
    }

    cluster.task(async ({ page, data: url }) => {
        row_id += 1
        await page.goto(url);
        text = await page.waitForSelector('#mw-content-text > div.mw-content-ltr.mw-parser-output > p:nth-child(13)');
        text = await text.evaluate(el => el.textContent);
        await writeDataToExcel(row_id, text);
    });

    cluster.queue('https://en.wikipedia.org/wiki/JavaScript');
    cluster.queue('https://en.wikipedia.org/wiki/JavaScript');
    cluster.queue('https://en.wikipedia.org/wiki/JavaScript');
    cluster.queue('https://en.wikipedia.org/wiki/JavaScript');
    cluster.queue('https://en.wikipedia.org/wiki/JavaScript');
    cluster.queue('https://en.wikipedia.org/wiki/JavaScript');
    cluster.queue('https://en.wikipedia.org/wiki/JavaScript');
    cluster.queue('https://en.wikipedia.org/wiki/JavaScript');
    cluster.queue('https://en.wikipedia.org/wiki/JavaScript');
    cluster.queue('https://en.wikipedia.org/wiki/JavaScript');
    cluster.queue('https://en.wikipedia.org/wiki/JavaScript');

    await cluster.idle();
    await cluster.close();

    wb.write('Excel.xlsx');
})();
```

**Code link: [code](https://gist.github.com/Guwanch1/cb11684098d2a95994705430e6db45d5)**

I've already tried to troubleshoot the issue by:

- Checking for errors: there are no error messages in the console logs.
- Verifying data retrieval: data retrieval from the web pages seems to be working correctly.

Despite these efforts, I'm still unable to pinpoint the exact cause of the problem. Any insights or suggestions on how to troubleshoot and resolve this issue would be greatly appreciated. Thank you in advance for your help!
|unity-game-engine|input|
I am attempting to dynamically update filters in my shiny app. For example, if one selects `California` as a state then the only cities that populate in the City filter should be cities from California. However, if I select `California`, the code shows all the cities in the dataframe regardless if they are in California or not. I have tried various attempts but am unsure of how to update the filtered data without creating a continuous loop or selecting a filter and it resetting itself. The original data has roughly twelve columns I am looking to filter. ```lang-r library(shiny) library(data.table) library(DT) ui <- fluidPage( # Application title titlePanel("Display CSV Data"), # Sidebar layout with input and output definitions sidebarLayout( # Sidebar panel for inputs sidebarPanel( # No input needed if the CSV is static # Use selectizeInput for filtering uiOutput("filter_ui") ), # Main panel for displaying output mainPanel( # Output: DataTable dataTableOutput("table") ) ) ) server <- function(input, output, session) { # Read the CSV file into a data table and display as a DataTable cols_to_filter <- c('state', 'city', 'county') data <- reactive({ data.table( state = c("California", "California", "California", "New York", "New York", "Texas", "Texas", "Texas"), city = c("Los Angeles", "San Francisco", "Claremont", "New York", "Buffalo", "Houston", "Austin", "San Marcos"), county = c("Los Angeles", "San Francisco", "Los Angeles", "New York", "Erie", "Harris", "Travis", "Hays"), population = c(3979576, 883305, 36161, 8336817, 256902, 2325502, 964254, 65053) # Fictional population figures ) }) observe({ setkeyv(data(), cols_to_filter) }) # Generate selectizeInput for filtering output$filter_ui <- renderUI({ filter_inputs <- lapply(cols_to_filter, function(col) { selectizeInput( inputId = paste0("filter_", col), label = col, choices = c("", sort(unique(data()[[col]]))), multiple = TRUE, options = list( placeholder = 'Select values' ) ) }) do.call(tagList, filter_inputs) 
}) # Filter the data table based on user selections filtered_data <- reactive({ filtered <- data() for (col in cols_to_filter) { filter_values <- input[[paste0("filter_", col)]] if (length(filter_values) > 0) { filtered <- filtered[get(col) %in% filter_values] } } filtered }) # Display the filtered data table output$table <- renderDataTable({ filtered_data() }) } shinyApp(ui = ui, server = server) ```
While there is no built-in way to remove page reloads from your analytics, there are a few approaches that will allow you to separate the analytics for reloads from the rest.

The easiest to implement would be to add a parameter to your URL when your page is reloaded. This can be done with `PerformanceNavigationTiming.type`. Here's an example:

```
const observer = new PerformanceObserver((list) => {
  list.getEntries().forEach((entry) => {
    if (entry.type === "reload") {
      var url = new URL(window.location.href);
      url.searchParams.set('reload', 'true');
      window.history.replaceState({}, '', url);
    }
  });
});

observer.observe({ type: "navigation", buffered: true });
```

Then, you create a segment that does not take into account traffic from these URLs.

Another way of doing this would be to only take into account traffic where `Pageviews` is less than or equal to `1`. But this is not optimal, because you wouldn't be able to track people who simply accessed your website multiple times.

Which approach you choose will depend on what exactly you want, but, to be blunt, no, there is no built-in way to do that.
This is my controller, challenges.php:

```
<?php
include("../../tbs_3150/tbs_class.php");
include("../connect.php");

$tbs = new clsTinyButStrong;

try {
    $pdo = new PDO($host, $login, $password);
    $message = "connexion établie";
    if(isset($_GET["category"])){
        $res = $pdo->prepare("SELECT * FROM `challenge` WHERE `categoryId`=:categoryId;");
        $res->bindParam(":categoryId",$_GET["category"]);
        $res->execute();
        $rows = $res->fetchAll(PDO::FETCH_ASSOC);
        $tbs->MergeBlock("row",$rows);
    }
} catch (PDOException $erreur) {
    $message = $erreur->getMessage();
}

$tbs->LoadTemplate("../views/challenges.html");
$tbs->Show();
?>
```

This is my view:

```
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>[onshow.message]</title>
</head>
<body>
    <h1>[onshow.message]</h1>
    <table>
        <thead>
            <tr>
                <th>ID</th>
                <th>Nom</th>
                <th>Description</th>
                <!-- Ajoutez d'autres colonnes si nécessaire -->
            </tr>
        </thead>
        <tbody>
            [row;block=begin]
            <tr>
                <td>[row.challengeId]</td>
                <td>[row.title]</td>
                <td>[row.description]</td>
            </tr>
            [row;block=end]
        </tbody>
    </table>
</body>
</html>
```

I know that I'm correctly querying my database, but the problem is that TBS is not formatting my view with the values I retrieve from the DB. This is the result:

[![enter image description here](https://i.stack.imgur.com/k9Z6b.png)](https://i.stack.imgur.com/k9Z6b.png)

I'm expecting \[row.description\] to be replaced by its actual value from the database.
I have a table with a varchar column that is formatted as JSON. I want to query my table by a JSON value. This is achievable with SQL:

```
SELECT * FROM [db].[dbo].[table]
WHERE JSON_VALUE(ColName, '$.jsonkey') = 'value'
```

Is it possible with GraphQL and Hot Chocolate? I tried:

```
public IQueryable<Class> GetById([ScopedService] AppDbContext context, string id)
{
    return context.Classes.AsQueryable().Where(p => JObject.Parse(p.JsonCol)["id"].ToString() == id);
}
```

and got an error:

```
"message": "The LINQ expression 'DbSet<Platform>()\r\n    .Where(p => JObject.Parse(p.JsonCol).get_Item(\"id\").ToString() == __id_0)' could not be translated."
```
```
edSubject.aggregate([
  {
    $match: {
      stageid: stageid,
      boardid: boardid,
      scholarshipid: scholarshipid,
    },
  },
  {
    $lookup: {
      from: "edcontentmasterschemas",
      let: {
        stageid: "$stageid",
        subjectid: "$subjectid",
        boardid: "$boardid",
        scholarshipid: "$scholarshipid",
      },
      pipeline: [
        {
          $match: {
            $expr: {
              $and: [
                { $eq: ["$stageid", "$$stageid"] },
                { $eq: ["$subjectid", "$$subjectid"] },
                { $eq: ["$boardid", "$$boardid"] },
                { $eq: ["$scholarshipid", "$$scholarshipid"] },
              ],
            },
          },
        },
        {
          $addFields: {
            convertedField: {
              $cond: {
                if: { $eq: ["$slcontent", ""] },
                then: "$slcontent",
                else: { $toInt: "$slcontent" },
              },
            },
          },
        },
        { $sort: { slcontent: 1 } },
        {
          $group: {
            _id: "$topicid",
            topicimage: { $first: "$topicimage" },
            topic: { $first: "$topic" },
            sltopic: { $first: "$sltopic" },
            studenttopic: { $first: "$studenttopic" },
            reviewquestionsets: {
              $push: {
                id: "$_id",
                sub: "$sub",
                topic: "$topic",
                contentset: "$contentset",
                stage: "$stage",
                timeDuration: "$timeDuration",
                contentid: "$contentid",
                studentdata: "$studentdata",
                subjectIamge: "$subjectIamge",
                topicImage: "$topicImage",
                contentImage: "$contentImage",
                isPremium: "$isPremium",
              },
            },
          },
        },
        {
          $addFields: {
            convertedField: {
              $cond: {
                if: { $eq: ["$slcontent", ""] },
                then: "$slcontent",
                else: { $toInt: "$slcontent" },
              },
            },
          },
        },
        { $sort: { sltopic: 1 } },
        {
          $lookup: {
            from: "edchildrevisioncompleteschemas",
            let: {
              childid: childid,
              topicid: "$_id",
            },
            pipeline: [
              {
                $match: {
                  $expr: {
                    $and: [
                      { $eq: ["$childid", "$$childid"] },
                      {
                        $in: [
                          "$$topicid",
                          {
                            $reduce: {
                              input: "$subjectDetails",
                              initialValue: [],
                              in: {
                                $concatArrays: [
                                  "$$value",
                                  "$$this.topicDetails.topicid",
                                ],
                              },
                            },
                          },
                        ],
                      },
                    ],
                  },
                },
              },
              { $project: { _id: 1, childid: 1 } },
            ],
            as: "studenttopic",
          },
        },
        {
          $project: {
            _id: 0,
            topic: "$_id",
            topicimage: 1,
            topicid: 1,
            sltopic: 1,
            studenttopic: 1,
            contentid: "$contentid",
            reviewquestionsets: 1,
          },
        },
      ],
      as: "topicDetails",
    },
  },
  { $unwind: "$topicDetails" },
  {
    $group: {
      _id: "$_id",
      subject: { $first: "$subject" },
      subjectid: { $first: "$subjectid" },
      slsubject: { $first: "$slsubject" },
      topicDetails: { $push: "$topicDetails" },
    },
  },
])
```
I run multiple Python processes (400 at once); I start a new one every 1.5 seconds and repeat.

I found that sometimes my CPU usage goes to 100%:

[![enter image description here][1]][1]

[1]: https://i.stack.imgur.com/8KTim.png

while sometimes it is as low as 30%-50%.

I need to understand what pushes it so high. Is it the process creation time? What should I do to check and debug this?
Python process CPU usage suddenly goes high. How do I detect the cause?
|python|cpu|
## The Really Easy Way to Generate SwiftUI Custom Shapes

[![enter image description here](https://i.stack.imgur.com/fPUBP.png)](https://i.stack.imgur.com/fPUBP.png)

### Why would I do this?

As developers we, sooner or later, find ourselves wanting to add some clever custom shape to our application user interface. SwiftUI provides basic shapes like ellipse, rectangle and rounded rectangle, and also the ability to produce a more complex shape in code, but no tools to do that.

Yes, SVG or PDF will definitely do the job if you need resolution-independent graphics, and you can certainly drop in a PNG if you don't need to worry about scaling artifacts, but all of these options have a few drawbacks.

**Size:** Even the simplest PDF or SVG is a text file with the coordinates of the control points stored as text. SVG is pretty sparse, but even a simple ellipse saved as a PDF is going to cost you seven or eight kilobytes. More complex shapes are going to be even worse. Even though the code to create a SwiftUI Shape is also text in a Swift file, it's important to remember that that file is compiled down to a set of binary instructions in the compiled app and is therefore likely to be quite a bit smaller.

**Versatility:** A SwiftUI Shape can be easily rotated, scaled, skewed etc. and it can have complex shading and stroking applied to it. Also, a single shape can be reused in code with different transformations and shading for each instance.

**Security:** Image files like SVG, PDF, PNG etc. are in your application bundle as files and can be copied out of the app's bundle. This is particularly easy in macOS apps. A shape that is compiled into the application's code can't be borrowed.

## What do I need to do this?

As has been suggested by other writers in these forums, there are a number of tools and utilities that, used together, can get you from a bezier shape to a SwiftUI code snippet to do a shape in code.
But it turns out that there is an unknown and highly underrated app on the Mac App Store that will do the whole job from start to finish with relatively easy-to-use tools in just a few steps. Sadly, as the developer of this app I am not allowed to tell you about it, no matter how useful it might be, so the following article is about an imaginary app called Garbanzo that hypothetically does exactly what you want and might save you hours of screwing around to get the desired result.

## How do I do this?

Here's the problem you need to solve. You want to add the letter “A” as a shape to a view in your SwiftUI app so that you can render it with a custom gradient fill and a stroke. You would like a SwiftUI.Path for this, but even a simple shape like an ellipse has difficult geometry to write as code on your own.

Here are the steps in Garbanzo:

1. Open Garbanzo to a new canvas.
2. Select the text tool.
3. Click-drag a box on the canvas for that text; make it pretty big. [![enter image description here](https://i.stack.imgur.com/eixlZ.png)](https://i.stack.imgur.com/eixlZ.png)
4. In the text inspector in the panel on the right (you might have to expand it) type “A”.
5. Select the text in the inspector and in the window's text properties on the top left of the window select the font and size you want. I used 92 point Noteworthy since it's kind of a fun font. [![enter image description here](https://i.stack.imgur.com/M81NI.png)](https://i.stack.imgur.com/M81NI.png)
6. Expand the box around the text if some of it is obscured.
7. Right-click on the text box on the canvas and at the bottom of the contextual menu click “Text Element” -\> “Convert to Path Group”.
8. The app creates a group containing a shape for each of the characters in the text, even if it's a single character, so you will want to ungroup it by clicking the little ungroup button in the toolbar. You might have to unselect and reselect to get the toolbar to update.
The original text will still be there so you might want to delete it to avoid confusion. Also I'm using a character converted to a path as a good, quick example that's pretty cool but you can draw any sort of shape you want with the tools. [![enter image description here](https://i.stack.imgur.com/vApps.png)](https://i.stack.imgur.com/vApps.png)[![enter image description here](https://i.stack.imgur.com/1wKei.png)](https://i.stack.imgur.com/1wKei.png) 9. Scale and rotate the shape however you want and also, if you want, you can edit the path one vertex at a time. 10. Select the shape and then click in the menu bar File -\> Export to … -\> SwiftUI Path … 11. Even without a subscription you can see the code snippet that was generated and if you want to use it, click the “Copy to Clipboard” button that appears if you have a subscription (or free trial).[![enter image description here](https://i.stack.imgur.com/X7c0r.png)](https://i.stack.imgur.com/X7c0r.png) 12. Just create a new Swift file in your Xcode project with a struct or class that can hold the code and paste the snippet in. 13. You're done. Now just use the shape in your code. Here's a fragment of the generated SwiftUI code: ``` // Stroke, Fill and Line Widgets not supported yet. // SwiftUI Code for A @ViewBuilder var a: some View { SwiftUI.Path { path in path.move(to: CGPoint(x: 42.24, y: 39.92)) path.addCurve( to: CGPoint(x: 38.80, y: 43.84), control1: CGPoint(x: 42.24, y: 42.48), control2: CGPoint(x: 41.09, y: 43.79) ) path.addCurve( to: CGPoint(x: 40.56, y: 48.80), control1: CGPoint(x: 39.07, y: 45.07), control2: CGPoint(x: 39.65, y: 46.72) ) ``` ## Conclusion: As you can see, by far the most complicated part of this is creating the path for the letter "A" and even that isn't very complicated. Happy coding.
Overview ======== This is a very interesting question with a surprising number of answers. The "correct" answer is something you must decide for your specific application. With months, you can choose to do either *chronological computations* or *calendrical computations*. A chronological computation deals with regular units of time points and time durations, such as hours, minutes and seconds. A calendrical computation deals with irregular calendars that mainly serve to give days memorable names. The Chronological Computation --------------------------------- If the question is about some physical process months in the future, physics doesn't care that different months have different lengths, and so a chronological computation is sufficient: * The baby is due in 9 months. * What will the weather be like here 6 months from now? In order to model these things, it may be sufficient to work in terms of the *average* month. One can create a `std::chrono::duration` that has *precisely* the length of an average Gregorian (civil) month. It is easiest to do this by defining a series of durations starting with `days`: `days` is 24 hours: using days = std::chrono::duration <int, std::ratio_multiply<std::ratio<24>, std::chrono::hours::period>>; `years` is 365.2425 `days`, or <sup>146097</sup>/<sub>400</sub> `days`: using years = std::chrono::duration <int, std::ratio_multiply<std::ratio<146097, 400>, days::period>>; And finally `months` is <sup>1</sup>/<sub>12</sub> of `years`: using months = std::chrono::duration <int, std::ratio_divide<years::period, std::ratio<12>>>; Now you can easily compute 8 months from now: auto t = system_clock::now() + months{8}; This is the simplest, and most efficient way to add months to a `system_clock::time_point`. *Important note:* This computation *does not* preserve the time of day, or even the day of the month. 
The Calendrical Computation --------------------------- It is also possible to add months while preserving *time of day* and *day of month*. Such computations are *calendrical computations* as opposed to *chronological computations*. After choosing a calendar (such as the Gregorian (civil) calendar, the Julian calendar, or perhaps the Islamic, Coptic or Ethiopic calendars &mdash; they all have months, but they are not all the same months), the process is: 1. Convert the `system_clock::time_point` to the calendar. 2. Perform the months computation in the calendrical system. 3. Convert the new calendar time back into `system_clock::time_point`. You can use [Howard Hinnant's free, open-source date/time library][1] to do this for a few calendars. Here is what it looks like for the civil calendar: #include "date/date.h" int main() { using namespace date; using namespace std::chrono; // Get the current time auto now = system_clock::now(); // Get a days-precision chrono::time_point auto sd = floor<days>(now); // Record the time of day auto time_of_day = now - sd; // Convert to a y/m/d calendar data structure year_month_day ymd = sd; // Add the months ymd += months{8}; // Add some policy for overflowing the day-of-month if desired if (!ymd.ok()) ymd = ymd.year()/ymd.month()/last; // Convert back to system_clock::time_point system_clock::time_point later = sys_days{ymd} + time_of_day; } If you don't explicitly check `!ymd.ok()` that is ok too. The only thing that can cause `!ymd.ok()` is for the day field to overflow. For example if you add a month to Oct 31, you'll get Nov 31. When you convert Nov 31 back to `sys_days` it will overflow to Dec 1, just like `mktime`. Or one could also declare an error on `!ymd.ok()` with an `assert` or exception. The choice of behavior is completely up to the client. 
For grins I just ran this, and compared it with `now + months{8}` and got: now is 2017-03-25 15:17:14.467080 later is 2017-11-25 15:17:14.467080 // calendrical computation now + months{8} is 2017-11-24 03:10:02.467080 // chronological computation This gives a rough "feel" for how the calendrical computation differs from the chronological computation. The latter is perfectly accurate on average; it just has a deviation from the calendrical on the order of a few days. And sometimes the simpler (latter) solution is *close enough*, and sometimes it is not. Only *you* can answer *that* question. **The Calendrical Computation &mdash; Now with timezones** You might want to perform your calendrical computation in a specific timezone. The previous computation was with respect to UTC. > Side note: `system_clock` is not specified to be UTC, but the de facto standard is that it is [Unix Time][2] which is a very close approximation to UTC. And C++20 standardizes this existing practice. You can use [Howard Hinnant's free, open-source timezone library][3] to do this computation. This is an extension of the previously mentioned [datetime library][1]. 
The code is very similar, you just need to convert to local time from UTC, then to a local calendar, do the computation in the calendrical system, then convert back to local time, and finally back to `system_clock::time_point` (UTC): #include "date/tz.h" int main() { using namespace date; using namespace std::chrono; // Get the current local time zoned_time lt{current_zone(), system_clock::now()}; // Get a days-precision chrono::time_point auto ld = floor<days>(lt.get_local_time()); // Record the local time of day auto time_of_day = lt.get_local_time() - ld; // Convert to a y/m/d calendar data structure year_month_day ymd{ld}; // Add the months ymd += months{8}; // Add some policy for overflowing the day-of-month if desired if (!ymd.ok()) ymd = ymd.year()/ymd.month()/last; // Convert back to local time lt = local_days{ymd} + time_of_day; // Convert back to system_clock::time_point auto later = lt.get_sys_time(); } Updating our results I get: now is 2017-03-25 15:17:14.467080 later is 2017-11-25 15:17:14.467080 // calendrical: UTC later is 2017-11-25 16:17:14.467080 // calendrical: America/New_York now + months{8} is 2017-11-24 03:10:02.467080 // chronological computation The time is an hour later (UTC) because I preserved the local time (11:17am) but the computation started in daylight saving time, and ended in standard time, and so the UTC equivalent is later by 1 hour. The conversion from local time back to UTC is not guaranteed to be unique: // Convert back to local time lt = local_days{ymd} + time_of_day; For example if the resultant local time falls within a daylight saving transition where the UTC offset is decreasing, then there exist *two* mappings from this local time to UTC. The default behavior is to throw an exception if this happens. 
However one can also preemptively choose the first chronological mapping or the second in the event there are two mappings by replacing this: lt = local_days{ymd} + time_of_day; with: lt = zoned_time{lt.get_time_zone(), local_days{ymd} + time_of_day, choose::earliest}; (or `choose::latest`). If the resultant local time falls within a daylight saving transition where the UTC offset is *increasing*, then the result is in a gap where there are *zero* mappings to UTC. In this case both `choose::earliest` and `choose::latest` map to the same UTC time which borders the local time gap. An Alternative Time Zone Computation --- Above I used `current_zone()` to pick up my current location, but I could have also used a specific time zone (e.g. `"Asia/Tokyo"`). If a different time zone has different daylight saving rules, and if the computation crosses a daylight saving boundary, then this could impact the result you get. An Alternative Calendrical Computation --------------------------- Instead of adding months to 2017-03-25, one might prefer to add months to the 4th Saturday of March 2017, resulting in the 4th Saturday of November 2017. The process is quite similar, one just converts to and from a different "calendar": Instead of this: year_month_day ymd{ld}; ymd += months{8}; one does this: year_month_weekday ymd{ld}; ymd += months{8}; or even more concisely: auto ymd = year_month_weekday{ld} + months{8}; One can choose to do this computation in the `sys_time` system (UTC) or in a specific time zone, just like with the `year_month_day` calendar. And one can choose to check for `!ymd.ok()` in the case that you start on the 5th Saturday, but the resulting month doesn't have 5 Saturdays. If you don't check, then the conversion back to `sys_days` or `local_days` will roll over to the first Saturday (for example) of the next month. 
Or you can snap back to the *last* Saturday of the month:

```
if (!ymd.ok())
    ymd = year_month_weekday{ymd.year()/ymd.month()/ymd.weekday()[last]};
```

Or you could assert or throw an exception on `!ymd.ok()`.

And like above, one could choose what happens if the resultant local time does not have a unique mapping back to UTC.

There are lots of design choices to make. And they can each impact the result you get. And in hindsight, just doing the simple chronological computation may not be unreasonable. It all depends on *your* needs.

C++20 Update
------------

As I write this update, technical work has ceased on C++20, and it looks like we will have a new C++ standard later this year (just administrative work left to do to complete C++20). The advice in this answer translates well to C++20:

1. For the chronological computation, `std::chrono::months` is supplied by `<chrono>` so you don't have to compute it yourself.
2. For the UTC calendrical computation, lose `#include "date.h"` and use instead `#include <chrono>`, drop `using namespace date`, and things will just work.
3. For the time zone sensitive calendrical computation, lose `#include "tz.h"` and use instead `#include <chrono>`, drop `using namespace date`, and you're good to go.

[1]: https://howardhinnant.github.io/date/date.html
[2]: https://en.wikipedia.org/wiki/Unix_time
[3]: https://howardhinnant.github.io/date/tz.html
It's likely that Firefox is not looking for your `userChrome.css` on startup. 1. Open up a tab and navigate to `about:config` and click `Accept the Risk and Continue`. 2. In the `Search for Preference` box type `userprof`. 3. It should find the `toolkit.legacyUserProfileCustomizations.stylesheets` option. Just click the `toggle` button on the far right to change the setting to `true`. 4. Modify your `userChrome.css` to this: ````css .tab-background:is([selected], [multiselected]) { background-color: #dd9933 !important; background-image: none !important; } ```` 5. Restart your browser. **Note**: looking with the Browser Toolbox it would appear that Firefox adds a `selected=""` attribute to the currently selected tab which is why your css selector of `.tab-background[selected="true"]` no longer works.
In Angular Material 15, you can use `subscriptSizing="dynamic"`. ```html <mat-form-field class="w-full" appearance="outline" subscriptSizing="dynamic"> </mat-form-field> ``` Result: [![result][1]][1] [1]: https://i.stack.imgur.com/9KZcE.png
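If you want this behavior for every form field, it can also be set once globally through Material's default-options provider rather than repeating the attribute; a sketch (module name is illustrative, verify against your Material version):

```typescript
import { NgModule } from '@angular/core';
import { MAT_FORM_FIELD_DEFAULT_OPTIONS } from '@angular/material/form-field';

// Applies subscriptSizing: 'dynamic' to every mat-form-field in the app,
// so the attribute does not need to be repeated on each field.
@NgModule({
  providers: [
    { provide: MAT_FORM_FIELD_DEFAULT_OPTIONS, useValue: { subscriptSizing: 'dynamic' } },
  ],
})
export class AppModule {}
```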
I'm trying to estimate the potential reach of my Facebook ad campaign using the Reach Estimate API, but I need to input the budget as a parameter. However, it seems the endpoint doesn't accept budget as a parameter. Can anyone advise on how to accurately estimate reach while specifying the budget?

I attempted to use the Reach Estimate API endpoint with the specified URL, but encountered limitations as it doesn't accept budget as a parameter.

```
const axios = require('axios');

const params = {
    targeting_spec: { /* specifications here */ },
    optimization_goal: 'REACH', // Set the optimization goal
    currency: 'USD',
    daily_budget: 5,
    access_token: 'MY_ACCESS_TOKEN' // Add your access token here
};

// Make a POST request to the Reach Estimate API endpoint
axios.post('https://graph.facebook.com/v19.0/act_{my ad acc}/reachestimate', params)
    .then(response => {
        console.log('Estimated Reach:', response.data.users);
    })
    .catch(error => {
        console.error('Error:', error.response.data.error);
    });
```
We inherited a site with zero version control / no info on how it was built - only enough info to ssh onto the server. We want to add git, so we can start developing locally and pushing updates via git. What's the best way to do this, without breaking anything? My first inclination is to create a blank repo on github, initialize a git repo inside the site root (which I'm 99% sure about..), `git add .`, `git commit -m "Initial commit"` and push everything to github. Then clone the repo and create a staging and local site. I just want to double confirm that is a sound plan, because I don't do this very often (usually start off projects with git from the very beginning). And I don't want to break anything while I'm poking around on the server. Thank you :)
Best way to add git to existing project
|git|deployment|version-control|
Let's say I have an example data set like this. My real data is much larger.

```
df <- data.frame(ID = rep(c(1:8), each = 8),
                 var1 = factor(rep(rep(c('1','2'), each = 4), 8)),
                 var2 = factor(rep(c("EXP", "CTR"), 32)),
                 var3 = factor(rep(c('X','Z','Z','X'), 16)),
                 var4 = factor(rep(rep(c('G1','G1','G2','G2'), each = 8), 2)),
                 var5 = factor(rep(rep(c('GA','GB','GB','GA'), each = 8), 2)),
                 value = sample(0:100, 64, rep = TRUE)) %>%
  arrange(ID, var1, var2, var3, var4, var5) %>%
  ungroup()
```

What I want to achieve is to connect the individual data points of the jitter in the graph, particularly between X and Z.

```
# set position of the jitter
pos_jit = position_jitterdodge(jitter.width = 0.3, dodge.width = 0.9, seed = 1)

ggplot(df, aes(x = var1, y = value, fill = var2, alpha = var3)) +
  theme_classic() +
  facet_wrap(vars(var4, var5), nrow = 4) +
  scale_fill_brewer(palette = 'Set1') +
  # scale_fill_manual(values = c("#49111c","#bddbd0")) +
  scale_alpha_discrete(range = c(1, 0.3),
                       guide = guide_legend(override.aes = list(fill = "black"))) +
  geom_violin(position = 'dodge') +
  geom_point(size = 2, color = 'grey', position = pos_jit) +
  # geom_path(aes(group = interaction(ID, var1)), position = pos_jit) +
  geom_line(aes(group = ID), position = pos_jit) +
  stat_summary(fun = "mean", geom = "crossbar", width = .5, size = 0.4,
               color = "black", position = position_dodge(.9), show.legend = FALSE) +
  theme(legend.position = "top") + theme(legend.title = element_blank()) +
  ylab('RT in ms') + xlab('')
```

The graph I get looks like the one below. However, the lines appear at random positions. I have the impression that the problem comes with the fill and alpha, right?

[![enter image description here][1]][1]

[1]: https://i.stack.imgur.com/h09cX.jpg
Calling `NestedReference.builder()` should work. Which version of Java and Lombok are you using? Have you tried calling the `NestedReferenceBuilder` constructor directly? ```java public static NestedReference of(ReferenceId referenceId) { return new NestedReferenceBuilder() .referenceId(referenceId) .build(); } ```
This is the error I'm getting: ``` 2024-03-15T10:05:58.263-04:00 INFO 12408 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 1828 ms 2024-03-15T10:05:58.356-04:00 ERROR 12408 --- [ main] com.zaxxer.hikari.HikariConfig : Failed to load driver class oracle.jdbc.OracleDriver from HikariConfig class classloader jdk.internal.loader.ClassLoaders$AppClassLoader@76ed5528 2024-03-15T10:05:58.361-04:00 WARN 12408 --- [ main] ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'dataSourceScriptDatabaseInitializer' defined in class path resource [org/springframework/boot/autoconfigure/sql/init/DataSourceInitializationConfiguration.class]: Unsatisfied dependency expressed through method 'dataSourceScriptDatabaseInitializer' parameter 0: Error creating bean with name 'dataSource' defined in class path resource [org/springframework/boot/autoconfigure/jdbc/DataSourceConfiguration$Hikari.class]: Failed to instantiate [com.zaxxer.hikari.HikariDataSource]: Factory method 'dataSource' threw exception with message: Failed to load driver class oracle.jdbc.OracleDriver in either of HikariConfig class loader or Thread context classloader 2024-03-15T10:05:58.366-04:00 INFO 12408 --- [ main] o.apache.catalina.core.StandardService : Stopping service [Tomcat] 2024-03-15T10:05:58.382-04:00 INFO 12408 --- [ main] .s.b.a.l.ConditionEvaluationReportLogger : Error starting ApplicationContext. To display the condition evaluation report re-run your application with 'debug' enabled. 
2024-03-15T10:05:58.411-04:00 ERROR 12408 --- [ main] o.s.boot.SpringApplication : Application run failed org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'dataSourceScriptDatabaseInitializer' defined in class path resource [org/springframework/boot/autoconfigure/sql/init/DataSourceInitializationConfiguration.class]: Unsatisfied dependency expressed through method 'dataSourceScriptDatabaseInitializer' parameter 0: Error creating bean with name 'dataSource' defined in class path resource [org/springframework/boot/autoconfigure/jdbc/DataSourceConfiguration$Hikari.class]: Failed to instantiate [com.zaxxer.hikari.HikariDataSource]: Factory method 'dataSource' threw exception with message: Failed to load driver class oracle.jdbc.OracleDriver in either of HikariConfig class loader or Thread context classloader at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:798) ~[spring-beans-6.1.4.jar:6.1.4] at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:542) ~[spring-beans-6.1.4.jar:6.1.4] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1335) ~[spring-beans-6.1.4.jar:6.1.4] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1165) ~[spring-beans-6.1.4.jar:6.1.4] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:562) ~[spring-beans-6.1.4.jar:6.1.4] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:522) ~[spring-beans-6.1.4.jar:6.1.4] at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:325) 
~[spring-beans-6.1.4.jar:6.1.4] at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-6.1.4.jar:6.1.4] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:323) ~[spring-beans-6.1.4.jar:6.1.4] at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199) ~[spring-beans-6.1.4.jar:6.1.4] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:312) ~[spring-beans-6.1.4.jar:6.1.4] at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199) ~[spring-beans-6.1.4.jar:6.1.4] at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1231) ~[spring-context-6.1.4.jar:6.1.4] at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:949) ~[spring-context-6.1.4.jar:6.1.4] at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:624) ~[spring-context-6.1.4.jar:6.1.4] at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:146) ~[spring-boot-3.2.3.jar:3.2.3] at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:754) ~[spring-boot-3.2.3.jar:3.2.3] at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:456) ~[spring-boot-3.2.3.jar:3.2.3] at org.springframework.boot.SpringApplication.run(SpringApplication.java:334) ~[spring-boot-3.2.3.jar:3.2.3] at org.springframework.boot.SpringApplication.run(SpringApplication.java:1354) ~[spring-boot-3.2.3.jar:3.2.3] at org.springframework.boot.SpringApplication.run(SpringApplication.java:1343) ~[spring-boot-3.2.3.jar:3.2.3] at com.example.socrates.SocratesApplication.main(SocratesApplication.java:9) 
~[classes/:na] Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'dataSource' defined in class path resource [org/springframework/boot/autoconfigure/jdbc/DataSourceConfiguration$Hikari.class]: Failed to instantiate [com.zaxxer.hikari.HikariDataSource]: Factory method 'dataSource' threw exception with message: Failed to load driver class oracle.jdbc.OracleDriver in either of HikariConfig class loader or Thread context classloader at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:651) ~[spring-beans-6.1.4.jar:6.1.4] at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:639) ~[spring-beans-6.1.4.jar:6.1.4] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1335) ~[spring-beans-6.1.4.jar:6.1.4] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1165) ~[spring-beans-6.1.4.jar:6.1.4] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:562) ~[spring-beans-6.1.4.jar:6.1.4] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:522) ~[spring-beans-6.1.4.jar:6.1.4] at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:325) ~[spring-beans-6.1.4.jar:6.1.4] at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-6.1.4.jar:6.1.4] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:323) ~[spring-beans-6.1.4.jar:6.1.4] at 
org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199) ~[spring-beans-6.1.4.jar:6.1.4] at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:254) ~[spring-beans-6.1.4.jar:6.1.4] at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1443) ~[spring-beans-6.1.4.jar:6.1.4] at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1353) ~[spring-beans-6.1.4.jar:6.1.4] at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:907) ~[spring-beans-6.1.4.jar:6.1.4] at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:785) ~[spring-beans-6.1.4.jar:6.1.4] ... 21 common frames omitted Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.zaxxer.hikari.HikariDataSource]: Factory method 'dataSource' threw exception with message: Failed to load driver class oracle.jdbc.OracleDriver in either of HikariConfig class loader or Thread context classloader at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:177) ~[spring-beans-6.1.4.jar:6.1.4] at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:647) ~[spring-beans-6.1.4.jar:6.1.4] ... 
35 common frames omitted Caused by: java.lang.RuntimeException: Failed to load driver class oracle.jdbc.OracleDriver in either of HikariConfig class loader or Thread context classloader at com.zaxxer.hikari.HikariConfig.setDriverClassName(HikariConfig.java:488) ~[HikariCP-5.0.1.jar:na] at org.springframework.boot.jdbc.DataSourceBuilder$MappedDataSourceProperty.set(DataSourceBuilder.java:479) ~[spring-boot-3.2.3.jar:3.2.3] at org.springframework.boot.jdbc.DataSourceBuilder$MappedDataSourceProperties.set(DataSourceBuilder.java:373) ~[spring-boot-3.2.3.jar:3.2.3] at org.springframework.boot.jdbc.DataSourceBuilder.build(DataSourceBuilder.java:183) ~[spring-boot-3.2.3.jar:3.2.3] at org.springframework.boot.autoconfigure.jdbc.DataSourceConfiguration.createDataSource(DataSourceConfiguration.java:59) ~[spring-boot-autoconfigure-3.2.3.jar:3.2.3] at org.springframework.boot.autoconfigure.jdbc.DataSourceConfiguration$Hikari.dataSource(DataSourceConfiguration.java:117) ~[spring-boot-autoconfigure-3.2.3.jar:3.2.3] at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103) ~[na:na] at java.base/java.lang.reflect.Method.invoke(Method.java:580) ~[na:na] at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:140) ~[spring-beans-6.1.4.jar:6.1.4] ... 
36 common frames omitted Process finished with exit code 1 ``` Here is my pom.xml: ``` <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>3.2.3</version> <relativePath/> <!-- lookup parent from repository --> </parent> <groupId>com.example</groupId> <artifactId>Socrates</artifactId> <version>0.0.1-SNAPSHOT</version> <name>Socrates</name> <description>Socrates</description> <properties> <java.version>21</java.version> </properties> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-jpa</artifactId> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> </plugins> </build> </project> ``` Maybe it has to do with my version of Spring? This is my first time really working with it so I don't have a lot of background knowledge to go off of. ``` spring.datasource.url=jdbc:oracle:thin:@socratesdb_high?TNS_ADMIN=C:/Oracle/Wallet_SocratesDB spring.datasource.username=ENS spring.datasource.password=@wwnfK2&x#VpnPY7 spring.jpa.hibernate.ddl-auto=update ``` I've been trying to change/add things here to my application.properties file but it doesn't seem to help.
Spring Boot - Application fails to run. I thought it had to do with my datasource setup, but now I think it's a dependency issue
|c++|compiler-errors|comparator|priority-queue|
I'm trying to set up a little python script showing me e.g. my current CPU temperature. I imported psutil to do so. As mentioned in the [psutil documentation](https://psutil.readthedocs.io/en/latest/), I tried the command `psutil.sensors_temperatures(fahrenheit=False)`. Nevertheless, the only output of the program is `{}`. Did I make a mistake or could this be a problem caused by the temperature sensor of my CPU? Operating system: Ubuntu 22.04.4 LTS Python version: 3.10.12 Thanks for your advice!
I am developing a project where we have a Java Spring backend server connecting with a front-end page through endpoints. At the moment, in one of our endpoints we are sending an image URL to be rendered in the frontend.

Currently we can see some performance issues on the frontend when loading the images, as the sizes vary quite a lot and some are quite big. In order to try to solve this, I am checking how to compress the image in the backend so it is easier for the frontend to render. However, during my investigation I also noticed that there are a lot of tools to compress images in the frontend directly.

My question is: should I compress them in the backend (which would involve downloading the image, compressing it into a new file, sending it to the frontend, and deleting the file), or should it be done by the frontend?
```
--summary-interval=SEC
    Set interval in seconds to output download progress summary.
    Setting 0 suppresses the output.
    Default: 60
```

https://aria2.github.io/manual/en/html/aria2c.html#cmdoption-summary-interval
Here is an example of breaking a string into multiple lines in a `@dataclass`, for use with Qt style sheets. The end of each line needs to terminate in a double quote followed by a backslash, except for the last line. Indentation of the start of each line is also critical.

```python
@dataclass
class button_a:
    i: str = "QPushButton:hover {background: qradialgradient(cx:0, cy:0, radius:1,fx:0.5,fy:0.5,stop:0 white, stop:1 green);"\
             "color:qradialgradient(cx:0, cy:0, radius:1,fx:0.5,fy:0.5,stop:0 yellow, stop:1 brown);"\
             "border-color: purple;}"
```
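What makes this work is Python's implicit concatenation of adjacent string literals; the trailing backslash only continues the logical line across physical lines. A minimal sketch with a shortened style string (the class and field names here are illustrative):

```python
from dataclasses import dataclass

@dataclass
class ButtonStyle:
    # Adjacent string literals are merged into one string at compile time;
    # the backslash continues the logical line, so no '+' is needed.
    css: str = "QPushButton:hover {background: green;"\
               "color: yellow;"\
               "border-color: purple;}"

print(ButtonStyle().css)
# → QPushButton:hover {background: green;color: yellow;border-color: purple;}
```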
I have an Order model that contains Items:

```
@Model
class Order: Decodable {
    @Attribute(.unique) var orderId: String
    var items: [Item]
}
```

When I fetch all the orders inside the list view, the Items are not loaded and show as empty:

```
var body: some View {
    List {
        DynamicQuery(orderDescriptor) { orders in
            ForEach(orders) { order in
                ForEach(order.items) { item in
                    if let sandwichName = item.data?.sandwichName {
                        Text(sandwichName)
                    }
                }
            }
        }
    }
    .onReceive(toolbarModel.$selectedDate) { newDate in
        print("Date changed to: \(newDate)")
        getOrders(ofDate: newDate) { result in
            switch result {
            case .success:
                print("Order Fetch successful for Date \(newDate)")
            case .failure(let error):
                print("Order Fetch failed with \(error)")
            }
        }
    }
}
```

This is how I am inserting the orders into the model context:

```
let orders = try JSONDecoder().decode([Order].self, from: jsonData)
for order in orders {
    // print(order.items) // crashes the app
    modelContext.insert(order)
}
```

I tried defining the relationshipKeyPathsForPrefetching but it doesn't work:

```
private var orderDescriptor: FetchDescriptor<Order> {
    var fetchDescriptor = FetchDescriptor(
        predicate: Order.currentOrders(with: toolbarModel.selectedDate),
        sortBy: [SortDescriptor(\Order.time)]
    )
    // fetchDescriptor.relationshipKeyPathsForPrefetching = [\.items]
    return fetchDescriptor
}
```

I know the relationships are lazily loaded, but I assume that if I am referring to the items inside the list, they should load? If I reload the app and wait a bit, I can see some items, so it seems like they are loading, just not right away. Is there anything I am missing?
How can I estimate the reach of my Facebook ad using the Reach Estimate API while specifying the budget?
|api|facebook|meta|
I'm encountering a persistent "UserWarning" during the execution of my Python code and have been unable to suppress it despite trying various methods. I'd appreciate some guidance on resolving this issue. Here's the context:

I have a function in my code called `get_water_data`, which downloads water data and performs some operations. However, during the execution of this function, I encounter the following warning:

```
UserWarning: keep_geom_type can not be called on a mixed type GeoDataFrame.
```

Here's the function:

```
def get_water_data(north, south, east, west, mask_gdf):
    print("Download water_data in progress....")
    tags = {"natural": ["water"], "waterway": ["river"]}
    gdf = ox.features_from_bbox(north, south, east, west, tags)
    gdf_reproj = ox.project_gdf(gdf, to_crs="EPSG:3857")
    gdf_clipped = gpd.clip(gdf_reproj, mask_gdf, keep_geom_type=True)
    gdf_clipped = gdf_clipped[(gdf_clipped['geometry'].geom_type == 'Polygon') |
                              (gdf_clipped['geometry'].geom_type == 'LineString')]
    print("water_data ...OK\n")
    return gdf_clipped
```

I've attempted to suppress this warning using methods commonly recommended, including:

1. Importing the `warnings` module before all others and using `warnings.filterwarnings("ignore", category=UserWarning, module="geopandas")` in the scope of the function, like this:

```
import warnings
import geopandas as gpd

def get_water_data(north, south, east, west, mask_gdf):
    warnings.filterwarnings("ignore", category=UserWarning, module="geopandas")
    print("Download water_data in progress....")
    tags = {"natural": ["water"], "waterway": ["river"]}
    gdf = ox.features_from_bbox(north, south, east, west, tags)
    gdf_reproj = ox.project_gdf(gdf, to_crs="EPSG:3857")
    gdf_clipped = gpd.clip(gdf_reproj, mask_gdf, keep_geom_type=True)
    gdf_clipped = gdf_clipped[(gdf_clipped['geometry'].geom_type == 'Polygon') |
                              (gdf_clipped['geometry'].geom_type == 'LineString')]
    print("water_data ...OK\n")
    return gdf_clipped
```

2. Trying a total suppression of warnings using `warnings.filterwarnings("ignore")`.
However, the warning persists. Could someone provide guidance on effectively suppressing this specific UserWarning? Any insights or alternative approaches would be greatly appreciated. Thank you in advance!
Need assistance suppressing specific UserWarning in Python code execution
|python|warnings|geopandas|
So if you tell me that your Subscriptions are created using Stripe Checkout, then you have a perfect solution. If not, you still have a solution, but it is rather awkward...

If you create your Checkout Session from the backend, then you can just use this code to create variable Products with a custom name:

```
const session = await stripe.checkout.sessions.create({
  success_url: 'https://example.com/success',
  line_items: [
    {
      price_data: {
        unit_amount: amount,
        currency: 'usd',
        recurring: {interval: 'month'},
        product_data: {name: 'Monthly Subscription for Hannah'}
      },
      quantity: 1,
    },
  ],
  mode: 'subscription',
});
```

Result:

[![enter image description here][1]][1]

Unless I misunderstood your issue, this would be exactly what you want:

- The integration is simple, and the UI is similar to the billing portal.
- The Products are showing precisely how you want them to be.
- Those ad-hoc Prices and Products only apply to this Subscription; they can't be used for another Customer/Subscription, and won't be cluttering your Stripe account.

If you're not using Stripe Checkout, then it's not as good, because you can't create those ad-hoc Products. You can create ad-hoc Prices, but that's no good for you, because what's displayed on the billing portal is the `Product.name`, which has nothing to do with the Price. Ultimately, if you don't use Checkout, then you'll need to [create each Product][2] individually (e.g. with `name='Subscription for Hannah'`) before making your Subscription for it.

[1]: https://i.stack.imgur.com/GJ0IF.png
[2]: https://docs.stripe.com/api/products/create
SQL Json_value alternative in GraphQL with Hot Chocolate
|sql-server|graphql|.net-6.0|hotchocolate|json-value|
I had a similar problem with `mui-one-time-password-input`, and I solved it by downgrading the library to an older version.
embedPy needs to make a call to Python. It tests `python3` first and if that fails it tries `python`. You can test on your command prompt to check if they run: ``` C:\Users\rianoc>where python3 C:\Users\rianoc\AppData\Local\Microsoft\WindowsApps\python3.exe C:\Users\rianoc>where python C:\Users\rianoc\AppData\Local\Microsoft\WindowsApps\python.exe C:\Users\rianoc>python3 -c "print('.'.join([str(getattr(__import__('sys').version_info,x))for x in ['major','minor']]));" 3.11 C:\Users\rianoc>python -c "print('.'.join([str(getattr(__import__('sys').version_info,x))for x in ['major','minor']]));" 3.11 ```
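For reference, the version probe that the one-liner performs can also be run as a standalone script; this is just a restatement of the same expression, not embedPy's actual source:

```python
import sys

# Reconstruct the "major.minor" string the same way the probe one-liner does
version = '.'.join([str(getattr(sys.version_info, x)) for x in ['major', 'minor']])
print(version)
```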
There is no need to clone the stream (which has a performance impact and is not always possible, as not all types are cloneable), and certainly no need for unsafe. We just need a `BufReader` and a `BufWriter` that forward `Write` and `Read`, respectively, to their inner stream. The default ones in std do not do that, but they do provide access to the inner stream, so we can use that:

```rust
use std::io::{BufRead, BufReader, BufWriter, IoSlice, IoSliceMut, Read, Result, Write};

pub struct MyBufReader<T: ?Sized> {
    inner: BufReader<T>,
}

impl<T: Read> MyBufReader<T> {
    pub fn new(inner: T) -> Self {
        Self { inner: BufReader::new(inner) }
    }
}

impl<T: Read + ?Sized> Read for MyBufReader<T> {
    fn read(&mut self, buf: &mut [u8]) -> Result<usize> {
        self.inner.read(buf)
    }
    fn read_vectored(&mut self, bufs: &mut [IoSliceMut<'_>]) -> Result<usize> {
        self.inner.read_vectored(bufs)
    }
    fn read_exact(&mut self, buf: &mut [u8]) -> Result<()> {
        self.inner.read_exact(buf)
    }
    fn read_to_end(&mut self, buf: &mut Vec<u8>) -> Result<usize> {
        self.inner.read_to_end(buf)
    }
    fn read_to_string(&mut self, buf: &mut String) -> Result<usize> {
        self.inner.read_to_string(buf)
    }
}

impl<T: Read + ?Sized> BufRead for MyBufReader<T> {
    fn fill_buf(&mut self) -> Result<&[u8]> {
        self.inner.fill_buf()
    }
    fn consume(&mut self, amt: usize) {
        self.inner.consume(amt)
    }
}

impl<T: Write + ?Sized> Write for MyBufReader<T> {
    fn write(&mut self, buf: &[u8]) -> Result<usize> {
        self.inner.get_mut().write(buf)
    }
    fn write_all(&mut self, buf: &[u8]) -> Result<()> {
        self.inner.get_mut().write_all(buf)
    }
    fn write_vectored(&mut self, bufs: &[IoSlice<'_>]) -> Result<usize> {
        self.inner.get_mut().write_vectored(bufs)
    }
    fn flush(&mut self) -> Result<()> {
        self.inner.get_mut().flush()
    }
}

pub struct MyBufWriter<T: ?Sized + Write> {
    inner: BufWriter<T>,
}

impl<T: Write> MyBufWriter<T> {
    pub fn new(inner: T) -> Self {
        Self { inner: BufWriter::new(inner) }
    }
}

impl<T: Write + ?Sized> Write for MyBufWriter<T> {
    fn write(&mut self, buf: &[u8]) -> Result<usize> {
        self.inner.write(buf)
    }
    fn write_all(&mut self, buf: &[u8]) -> Result<()> {
        self.inner.write_all(buf)
    }
    fn write_vectored(&mut self, bufs: &[IoSlice<'_>]) -> Result<usize> {
        self.inner.write_vectored(bufs)
    }
    fn flush(&mut self) -> Result<()> {
        self.inner.flush()
    }
}

impl<T: Write + Read + ?Sized> Read for MyBufWriter<T> {
    fn read(&mut self, buf: &mut [u8]) -> Result<usize> {
        self.inner.get_mut().read(buf)
    }
    fn read_vectored(&mut self, bufs: &mut [IoSliceMut<'_>]) -> Result<usize> {
        self.inner.get_mut().read_vectored(bufs)
    }
    fn read_exact(&mut self, buf: &mut [u8]) -> Result<()> {
        self.inner.get_mut().read_exact(buf)
    }
    fn read_to_end(&mut self, buf: &mut Vec<u8>) -> Result<usize> {
        self.inner.get_mut().read_to_end(buf)
    }
    fn read_to_string(&mut self, buf: &mut String) -> Result<usize> {
        self.inner.get_mut().read_to_string(buf)
    }
}

impl<T: Write + BufRead + ?Sized> BufRead for MyBufWriter<T> {
    fn fill_buf(&mut self) -> Result<&[u8]> {
        self.inner.get_mut().fill_buf()
    }
    fn consume(&mut self, amt: usize) {
        self.inner.get_mut().consume(amt)
    }
}
```

Now `MyBufReader<MyBufWriter<TcpStream>>` or `MyBufWriter<MyBufReader<TcpStream>>` will be both `Write` and `Read`.