Arduino: How can I check when an LED is on and the joystick is being moved in that direction? |
|arduino|arduino-uno|arduino-ide| |
null |
You're better off filtering for a specific process, which you can do by PID or by process name; both pieces of information are easily obtained from Task Manager if you don't already know them.
Once you have the PID, this will work great:
```
# Note: avoid the name $PID - it is a read-only automatic variable in PowerShell
$ProcId = <Your Process ID>
(Get-WmiObject win32_process -Filter "ProcessId=$ProcId" -Property CommandLine).CommandLine
```
Example of getting java.exe by process name:
```
(Get-WmiObject -Class win32_process -Filter "Name='java.exe'" -Property CommandLine).CommandLine
```
**added by barlop**
Example with output:
```
PS C:\Users\User> (Get-WmiObject win32_process -Filter ProcessId=1676 -Property CommandLine).CommandLine <ENTER>
"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --type=renderer --field-trial-handle=1108,1777349067310493
8616,10462310811264875730,131072 --lang=en-GB --enable-auto-reload --origin-trial-disabled-features=MeasureMemory --devi
ce-scale-factor=1 --num-raster-threads=2 --enable-main-frame-before-activation --renderer-client-id=1695 --no-v8-untrust
ed-code-mitigations --mojo-platform-channel-handle=11412 /prefetch:1
PS C:\Users\User>
```
|
Selenium Python - The element I'm looking for can't be found even though it exists in Yahoo Finance |
|python|selenium-webdriver|yahoo| |
Express does not parse JSON request bodies by default. You are likely missing the JSON parser middleware.
Try adding the following to your code, ideally where you first initialize Express:
```
app.use(express.json())
``` |
!! **EDIT**: a solution can be found at the bottom of the question !!
I am using a Huawei E3372 4G USB dongle on Win8.1. This dongle's settings can be accessed via browser by typing 192.168.8.1 and the user can enable a 4G connection by manually clicking the "Enable mobile data" button.
This is the script I am trying to use to "Enable mobile data" connection, knowing I'm doing something wrong only on line 4:
```
#!/bin/bash
curl -s -X GET "http://192.168.8.1/api/webserver/token" > token.xml
TOKEN=$(grep -v '<?xml version="1.0" encoding="UTF-8"?><response><token>' -v '</token></response>' token.xml)
curl "http://192.168.8.1/api/dialup/mobile-dataswitch" -H "Host: 192.168.8.1" -H "User-Agent: Mozilla/5.0 (Windows NT 6.3; rv:68.0) Gecko/20100101 Goanna/4.8 Firefox/68.0 PaleMoon/29.0.1" -H "Accept: */*" -H "Accept-Language: en-US,en;q=0.5" --compressed -H "Content-Type: application/x-www-form-urlencoded; charset=UTF-8;" -H "_ResponseSource: Broswer" -H "__RequestVerificationToken: $TOKEN" -H "X-Requested-With: XMLHttpRequest" -H "Referer: http://192.168.8.1/html/content.html" -H "Cookie: SessionID=AgVjkIjBxOC0xPbys3nne7rA4I8GXNzUkZCcSOGPR8P3xss8XOuqRbdb0EgHidXhQXZ903xf0nk0F8J81ISqHpZ7kYvZaSW5wHWDqJ9w90pXj90cPwCm7F01fFcmp0gv" -H "Connection: keep-alive" --data-raw "<?xml version=""1.0"" encoding=""UTF-8""?><request><dataswitch>1</dataswitch></request>"
date
exec $SHELL
```
Upon executing the first curl command, the xml file's content would look like this:
```
<?xml version="1.0" encoding="UTF-8"?><response><token>ZsxY7Q9G90jh4FqUiAjxD9XmqLWf0rYg4RUNf6FoVzeTIlPPms0Ov1RERFFRY77o</token></response>
```
Just for test purposes, if I manually insert the token in the bash script, it works like a charm:
```
#!/bin/bash
curl "http://192.168.8.1/api/dialup/mobile-dataswitch" -H "Host: 192.168.8.1" -H "User-Agent: Mozilla/5.0 (Windows NT 6.3; rv:68.0) Gecko/20100101 Goanna/4.8 Firefox/68.0 PaleMoon/29.0.1" -H "Accept: */*" -H "Accept-Language: en-US,en;q=0.5" --compressed -H "Content-Type: application/x-www-form-urlencoded; charset=UTF-8;" -H "_ResponseSource: Broswer" -H "__RequestVerificationToken: ZsxY7Q9G90jh4FqUiAjxD9XmqLWf0rYg4RUNf6FoVzeTIlPPms0Ov1RERFFRY77o" -H "X-Requested-With: XMLHttpRequest" -H "Referer: http://192.168.8.1/html/content.html" -H "Cookie: SessionID=AgVjkIjBxOC0xPbys3nne7rA4I8GXNzUkZCcSOGPR8P3xss8XOuqRbdb0EgHidXhQXZ903xf0nk0F8J81ISqHpZ7kYvZaSW5wHWDqJ9w90pXj90cPwCm7F01fFcmp0gv" -H "Connection: keep-alive" --data-raw "<?xml version=""1.0"" encoding=""UTF-8""?><request><dataswitch>1</dataswitch></request>"
date
exec $SHELL
```
I've found several suggestions for the same or similar dongles, but none of them worked for me; it must be due to my insufficient knowledge. My cry for help is about line 4 of the top-most script, where I am obviously making a mistake somewhere.
Thank you in advance for your help.
===== EDIT: SOLUTION IS FOUND ! =====
markp-fuso's suggestion was the path to my solution. Kudos. I noticed that besides the "token" value, which changes on every on/off action, this dongle also has a "SesTokInfo" value that does not change on each on/off action (I just tested that manually), although it is different from what it was yesterday. Plugging/unplugging the dongle may cause that; I honestly can't tell.
To whom it may concern: the final form of the working script, which I've just tested twice with positive results, is below. (Note that the script contains the curl command to "Enable mobile data"; the one to "Disable mobile data" should contain 0 instead of 1 in the `<dataswitch>1</dataswitch>` part.)
```
#!/bin/bash
curl -s -X GET "http://192.168.8.1/api/webserver/token" > token.xml
curl -s -X GET "http://192.168.8.1/api/webserver/SesTokInfo" > sestoken.xml
TOKEN=$(sed -En 's|.*<token>(.*)</token>.*|\1|p' token.xml)
SESTOKEN=$(sed -En 's|.*<SesInfo>(.*)</SesInfo>.*|\1|p' sestoken.xml)
typeset -p TOKEN
typeset -p SESTOKEN
curl "http://192.168.8.1/api/dialup/mobile-dataswitch" -H "Host: 192.168.8.1" -H "User-Agent: Mozilla/5.0 (Windows NT 6.3; rv:68.0) Gecko/20100101 Goanna/4.8 Firefox/68.0 PaleMoon/29.0.1" -H "Accept: */*" -H "Accept-Language: en-US,en;q=0.5" --compressed -H "Content-Type: application/x-www-form-urlencoded; charset=UTF-8;" -H "_ResponseSource: Broswer" -H "__RequestVerificationToken: $TOKEN" -H "X-Requested-With: XMLHttpRequest" -H "Referer: http://192.168.8.1/html/content.html" -H "Cookie: SessionID=$SESTOKEN" -H "Connection: keep-alive" --data-raw "<?xml version=""1.0"" encoding=""UTF-8""?><request><dataswitch>1</dataswitch></request>"
date
exec $SHELL
``` |
I'm using ```Make``` (```Automake```) to compile and execute unit tests. However, these tests need to read and write test data. If I just hard-code a path, the tests only work from a specific directory. Even using ```builddir``` via config.h isn't particularly useful, because it is of course evaluated at compile time rather than at runtime. It would be nice if the builddir were passed in at runtime instead. But would it be better to pass it via command-line arguments or via the environment? And is it "better" to specify individual files or a generic directory? I would consider ```PATH```-like search behaviour overkill for just a test, or would that be recommended?
So the question is, how would I specify the path to a test file best, in terms of portability, interoperability, maintainability and common sense? |
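One sketch of the environment-variable approach from the question above: Automake's `make check` harness exports `srcdir` into each test's environment, so a test can resolve its data files relative to that. The `testdata/sample.txt` path below is a hypothetical example, and the fallback to `.` is an assumption for manual runs outside the harness.

```shell
#!/bin/sh
# Resolve test data relative to $srcdir, which Automake's
# "make check" harness exports; fall back to "." for manual runs.
: "${srcdir:=.}"
datafile="$srcdir/testdata/sample.txt"   # hypothetical data file
echo "reading test data from: $datafile"
```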
How to refer to the filepath of test data in test sourcecode? |
I got the same error too while working on Streamlit.
To solve it, you have to change the encoding of config.toml to UTF-8.
Go to File Explorer and search for config.toml, open the file with Notepad, then use "Save As" and set the encoding to UTF-8 before saving.
Hope it helps.
[IMAGE of utf-8 changing][1]
[1]: https://i.stack.imgur.com/sVLDp.png |
I've been having trouble with how I'm supposed to fetch data using an API call. I'm using useQuery, and I've looked around for how to implement it, but I'm just getting errors with it and with async keywords that may or may not be needed in the file. I'm also confused about "use client" and what should and shouldn't be in that kind of file.
Here's how I have useQuery set up
```
"use client"
import { useQuery } from "@tanstack/react-query";
import fetchData from "../app/fetchData.jsx";
export default function Temps(){
const{ data, error, isError } = useQuery({
queryKey: ["wdata"],
queryFn: () => fetchData(),
});
if (isError) <span>Error: {error.message}</span>
if(data)
return(
<div>
<p> {data.toString()}</p>
</div>
)
}
```
Here's where I have fetchData
```
"use client"
import { usePathname } from "next/navigation";
async function getData(){
const options = {
method:"GET",
headers: {
accept: "application/json",
},
};
const pathName = usePathname();
const { searchParams } = pathName;
const lat = searchParams.get('lat');
const lon = searchParams.get('lon');
const response = await fetch(
`http://api.openweathermap.org/data/2.5/weather?lat=${lat}&lon=${lon}&appid=${process.env.API_KEY}&units=imperial`,
options)
.then ((response) => response.json())
.catch((err) => console.error(err));
return response;
}
export default async function fetchData(){
const data = await getData();
return data;
}
```
I'm thinking there might be something in the above code that's causing the error, maybe the usePathname and searchParams, but I'm not exactly sure how to solve it.
Also, here's the home page where I use QueryClient:
```
import WeatherApp from './WeatherApp';
import { QueryClient } from '@tanstack/react-query';
import fetchData from './fetchData';
import Temps from '@/components/Temps';
export default async function Home() {
const queryClient = new QueryClient()
await queryClient.prefetchQuery({
queryKey: ["wdata"],
queryFn: () => fetchData(),
})
return (
<div>
<WeatherApp />
<Temps />
</div>
);
}
``` |
How can I adapt my Chrome packaged apps for kiosks following their deprecation? I've begun experimenting with PWAs as an alternative, and while they work online and offline on various devices, when installed as a kiosk app they only function online; offline functionality is required. Any insights on resolving this issue?
Service Worker
```
const CACHE_NAME = 'cache-v1';
const urlsToCache = [
'/', // Include the root URL to handle HTML updates
'https://server',
'https://server/index.html',
'https://server/Main.css',
'https://server/Main.js',
// Add more resources as needed
];
self.addEventListener('install', (event) => {
event.waitUntil(
caches.open(CACHE_NAME)
.then((cache) => cache.addAll(urlsToCache))
.then(() => self.skipWaiting())
);
});
self.addEventListener('activate', (event) => {
event.waitUntil(
caches.keys().then((cacheNames) => {
return Promise.all(
cacheNames.map((cacheName) => {
if (cacheName !== CACHE_NAME) {
return caches.delete(cacheName);
}
})
);
})
);
});
self.addEventListener('fetch', (event) => {
event.respondWith(
caches.match(event.request)
.then((cachedResponse) => {
if (cachedResponse) {
return cachedResponse;
}
return fetch(event.request)
.then((response) => {
// Check if we received a valid response
if (!response || response.status !== 200 || response.type !== 'basic') {
return response;
}
const responseToCache = response.clone();
caches.open(CACHE_NAME)
.then((cache) => cache.put(event.request, responseToCache));
return response;
});
})
.catch(() => {
return caches.match(event.request);
})
);
});
```
Manifest
```
{
"name": "Kiosk App 0_0_3_1",
"description": "DESCRIPTION",
"version": "0.0.3.0",
"manifest_version": 3,
"start_url": "./Main.html",
"display": "standalone",
"background": {
"service_worker": "service-worker.js"
},
"icons": [
{
"src": "./icon48.png",
"sizes": "48x48",
"type": "image/png"
},
{
"src": "./icon128.png",
"sizes": "128x128",
"type": "image/png"
},
{
"src": "./icon144.png",
"sizes": "144x144",
"type": "image/png"
}
],
"screenshots": [
{
"src": "/logo.gif",
"sizes": "600x600",
"type": "image/gif",
"form_factor": "wide",
"label": "Application"
},
{
"src": "/logo.gif",
"sizes": "600x600",
"type": "image/gif",
"label": "Application"
}
],
"permissions": [
"storage",
"contentSettings",
"webRequest",
{
"host_permissions": [
"*://*.google.com/*",
"*://www.googleapis.com/*",
"*://*.googleusercontent.com/*"
]
}
],
"kiosk_enabled": true,
"kiosk": {
"required_platform_version": "13982",
"always_update": true
}
}
``` |
Offline PWA as a ChromeOS Kiosk Application |
|javascript|progressive-web-apps|service-worker|google-chrome-os| |
null |
Alright, so basically I have a script that draws an ECG on a SurfaceGui with pixels (frames). The problem is that the script runs very slowly, two or three times slower than it should: when I need the script to draw a wave 80 ms long, it draws it in 160 ms or more.
I figured out how to use ```os.clock()``` to see what is going on, and found out that ```wait()``` and ```task.wait()``` become very inaccurate at very low wait times (e.g. 1/60 seconds).
I tried using the same ```os.clock()``` to measure time accurately. It worked fine on paper, but in practice it made my PC lag and constantly caused a ```Script exhausted execution time``` error. I was using this function:
```
local function AccurateWait(time)
local start = os.clock()
while os.clock() - start < time do
-- Nothing
end
end
```
Now I don't know what to do. Here is the full script; maybe it will help.
```
local leadFrame = script.Parent.blackBg.I_leadFrame
local leadFrameWidth = leadFrame.Size.X.Offset
local leadFrameUpdateTick = 1
local pixelSize = 2 -- def:
local pixels = {}
local step = 3 -- DO NOT CHANGE THIS VALUE, def: 4
local colGreen = Color3.fromRGB(25, 255, 25)
local heartBPM = 60 / 60 * 1000 -- Will need it later, for now let it remain as 60
local linePosX = 0
local linePosY = 0
local linePosY_Scale = 0.5
local lineAccuracy = 16 -- def: 16
local lineReachedFrameWidth = false
local lineLoops = true
local lineClearanceOffset = 20
local lineClearing = false
local section = 0
local sectionInMS = 0
local sectionMaxWidth = 0
local wholeBeatLength = 0
local function DrawLine()
if linePosX >= leadFrameWidth - pixelSize then
if lineLoops then
linePosX = 0
end
lineReachedFrameWidth = true
return
end
if linePosX >= leadFrameWidth - pixelSize - lineClearanceOffset or lineClearing then
lineClearing = true
pixels[1]:Destroy()
table.remove(pixels, 1)
end
if linePosY ~= linePosY then
linePosY = 0
end
local pixel = Instance.new("Frame", leadFrame)
pixel.Size = UDim2.new(0, pixelSize, 0, pixelSize)
pixel.Position = UDim2.new(0, linePosX, linePosY_Scale, linePosY)
pixel.BackgroundColor3 = colGreen
pixel.BackgroundTransparency = 0
pixel.BorderSizePixel = 0
pixel.Name = "pixel"
table.insert(pixels, pixel)
end
local function DrawP_Wave()
local durationP_Wave = 80 -- In ms, assume it is normal duration at 60 bpm
local durationPR_Segment = durationP_Wave + 40
local startTime = os.clock()
while sectionInMS < durationP_Wave do
-- At these parameters length of P wave is 90 ms
local A = 15 * step / 4 -- Scale of P wave, def: 15
local B = 2.4 -- Width of P wave, def: 2.2 (the higher - the shorter)
local C = 1.5 -- Can't describe, better not touch it, def: 1.5
local D = 0.4 -- Height of P wave, def:
local E = 1 -- Polarity of P wave, def: 1
for i = 1, lineAccuracy do
linePosY = -E * (A * D * math.sin(B * section / A))^C
DrawLine()
linePosX += step/lineAccuracy
section += step/lineAccuracy
sectionInMS = ((section/(60*step))*1000)
if sectionInMS > durationP_Wave then
break
end
end
wait(1/60)
end
print("P wave: "..((os.clock()-startTime)*1000).." ms")
while sectionInMS < durationPR_Segment do
for i = 1, lineAccuracy do
DrawLine()
linePosX += step/lineAccuracy
section += step/lineAccuracy
sectionInMS = ((section/(60*step))*1000)
if sectionInMS > durationPR_Segment then
break
end
end
wait(1/60)
wholeBeatLength += 1/60*1000
end
print("PQ segment: "..((os.clock()-startTime)*1000).." ms")
section = 0
sectionInMS = 0
end
local function DrawQRS_Complex()
local durationQRS_Complex = 90
local durationST_Segment = 100 + durationQRS_Complex
local startTime = os.clock()
while sectionInMS < durationQRS_Complex do
local A = 1.7 -- Width of QRS, def: 1.5 (the higher A - the shorter QRS)
local B = 1.7 -- Height of QRS, def: 1.7
local C = 3.6
local D = 3.5
local E = 5 -- def: 5
local F = 1.1 -- Proportions of Q to S (bigger num -> deeper peak Q), def: 1.1
local G = 15 * step / 4 -- Scale, def: 15
for i = 1, lineAccuracy do
linePosY = -G*((B*(math.sin(A/G * section))^E)^D-
(math.sin(A/G*F * section))^C)
DrawLine()
linePosX += step/lineAccuracy
section += step/lineAccuracy
sectionInMS = ((section/(60*step))*1000)
if sectionInMS > durationQRS_Complex then
break
end
end
wait(1/60)
wholeBeatLength += 1/60*1000
end
print("QRS complex: "..((os.clock()-startTime)*1000).." ms")
while sectionInMS < durationST_Segment do
for i = 1, lineAccuracy do
DrawLine()
linePosX += step/lineAccuracy
section += step/lineAccuracy
sectionInMS = ((section/(60*step))*1000)
if sectionInMS > durationST_Segment then
break
end
end
wait(1/60)
wholeBeatLength += 1/60*1000
end
print("ST segment: "..((os.clock()-startTime)*1000).." ms")
section = 0
sectionInMS = 0
end
local function DrawT_Wave()
local durationT_Wave = 160
local startTime = os.clock()
while sectionInMS < durationT_Wave do
local A = 1.1
local B = 1.6
local C = 3.6
local D = 0.9
local E = 5
local F = 1.1
local G = 15 * step / 4
local H = 2.1
for i = 1, lineAccuracy do
linePosY = -G*((B*(math.sin(A/G * section))^E)^D-
((math.sin(A/G*F * section))^H)^C)
DrawLine()
linePosX += step/lineAccuracy
section += step/lineAccuracy
sectionInMS = ((section/(60*step))*1000)
if sectionInMS > durationT_Wave then
break
end
end
wait(1/60)
wholeBeatLength += 1/60*1000
end
print("T wave: "..((os.clock()-startTime)*1000).." ms")
section = 0
sectionInMS = 0
end
local function BreakBetweenBeats()
local startTime = os.clock()
while wholeBeatLength < heartBPM do
for i = 1, lineAccuracy do
DrawLine()
linePosX += step/lineAccuracy
section += step/lineAccuracy
sectionInMS = ((section/(60*step))*1000)
end
wait(1/60)
wholeBeatLength += 1/60*1000
end
print("Pause: "..((os.clock()-startTime)*1000).." ms")
section = 0
sectionInMS = 0
wholeBeatLength = 0
end
while true do
DrawP_Wave()
DrawQRS_Complex()
DrawT_Wave()
BreakBetweenBeats()
end
print("ECG session has ended.")
``` |
RBX: How to accurately yield/pause a script down to milliseconds? |
neo4j, how to query chain using two different nodes |
|neo4j|cypher| |
You should not use a loop, nor use `-` for missing values.
Remove the `-`, use NaNs, and vectorize your code:
```
df[['coupon', 'income']] = df[['coupon', 'income']].apply(pd.to_numeric, errors='coerce')
df['income'] = df['income'].fillna(df['coupon'].mul(df['value']).div(100))
```
Output:
```
value coupon income
0 100 3.0 3.0
1 150 NaN NaN
2 200 4.0 8.0
3 250 5.0 12.5
``` |
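For completeness, here is a self-contained version of the answer above. The input frame is reconstructed from the output shown, so the exact `'-'` placeholder layout is an assumption:

```python
import pandas as pd

# Reconstructed input: '-' marks the missing values (assumed layout)
df = pd.DataFrame({
    'value':  [100, 150, 200, 250],
    'coupon': ['3', '-', '4', '5'],
    'income': ['3', '-', '-', '-'],
})

# '-' coerces to NaN via errors='coerce', then income is filled vectorially
df[['coupon', 'income']] = df[['coupon', 'income']].apply(pd.to_numeric, errors='coerce')
df['income'] = df['income'].fillna(df['coupon'].mul(df['value']).div(100))
print(df)
```

Row 1 stays NaN because its coupon is also missing, so the product is NaN and `fillna` has nothing numeric to fill with.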
In my case, this problem was resolved by restarting the computer. But it is also possible to solve it by restarting winnat; open cmd as administrator:
```
net stop winnat
net start winnat
```
I have the following situation. I had a function that ran on a Windows App Service plan. That function processed blobs and had the default `LogsAndContainerScan` trigger. After some time I decided to rewrite this function and also migrate it from Windows to Linux, deploying it in an isolated environment inside a Docker container. To accomplish this I created another Function App running on a new App Service plan for Linux. During the deployment I deployed and started the new function app on Linux, and stopped the old one on Windows.
To my big surprise, the new function started to process the blobs that were processed long ago by the previous function. After some digging and reading answers on Stack Overflow for example [this one](https://stackoverflow.com/questions/41008374/azure-functions-configure-blob-trigger-only-for-new-events) or [this one](https://stackoverflow.com/questions/51675455/stop-azure-blob-trigger-function-from-being-triggered-on-existing-blobs-when-fun), it seems to me that the function will process a blob only if it does not have a [blob receipt](https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-blob-trigger?tabs=python-v2%2Cisolated-process%2Cnodejs-v4&pivots=programming-language-csharp#blob-receipts) inside `azure-webjobs-hosts` blob container. When I looked at my `azure-webjobs-hosts` blob container I found out that there are actually two folders in there - one for my previous function, and one for my new function. So I conclude that even though there were receipts for the existing blobs, they were in the folder of the old function app, which means that when I created a new function app, it tried to find the receipts in another folder, couldn't find them, so it started to process all of the blobs again. Which basically means that whenever I decide to create another function app with a blob trigger, it will try to reprocess all of the existing files.
The questions I have:
1. Is my reasoning above correct: does every new function app reprocess blobs that were already processed before? If not, why did it happen in my situation?
2. Is there any way I can avoid this situation in the future, when I, for example, decide to create yet another function app that will operate on the same blob container?
|
|android|firebase|firebase-authentication| |
null |
null |
I've completed [this guide](https://www.youtube.com/watch?v=eaQc7vbV4po) and now want to retrieve user details using a server component, but my cookie doesn't appear to be in the request even though it is present in the browser.
I customised my profile page and made it into a server component, but when I reload the page to call the fetch request, no token cookie is present. I have checked this with the following lines of code in the route handler
```
const token = request.cookies.get("token")?.value || "";
console.log(token, "token");
```
|
|lua|wait|roblox|roblox-studio| |
I'm trying to use CriteriaBuilder to add several fields together and then apply a SQL function.
With only two fields, summing them is possible, but when I have a third one I cannot apply the "+" easily.
```
select.add(cb.min(cb.sum(functionListToAdd.get(0), functionListToAdd.get(1))));
cq.multiselect(select);
em.createQuery(cq).getResultList();
```
This case works, but I need more parameters and the sum function doesn't allow more than 2 Expression parameters.
Do you know how to sum fields like the example below with the CriteriaBuilder API?
To illustrate what I want, here is a SQL request that works (I removed some code for readability).
SQL example of what I want:
```
select
    min(b1_0.ecpi + s1_0.ecpi + l1_0.ecpi),
    max(b1_0.ecpi + s1_0.ecpi + l1_0.ecpi),
    cast('units' as text)
from
    facility.facility b1_0,
    facility.cost s1_0,
    facility.shipping l1_0
```
Thanks a lot for helping me; I'm stuck because of a simple "+". |
How to do addition ("+") between fields with CriteriaBuilder with JPA? |
|java|jpa|criteriabuilder| |
As mentioned, .nvm/npm is not added to PATH in .bashrc or .zshrc, so add it there and run `source ~/.zshrc`; then it should work |
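For reference, these are the initialization lines the nvm installer normally appends to `~/.zshrc` or `~/.bashrc` (the paths are the installer defaults; adjust them if you installed nvm elsewhere):

```shell
# Standard nvm init block (installer's default location assumed)
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"                    # loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"  # loads completion
```

After adding these, `source ~/.zshrc` (or opening a new terminal) should make `nvm` and its installed `node`/`npm` available.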
I use `Apollo` for requests. I need to get some info from a server response header and save it.
I can see the response headers while debugging inside the `executionContext`, but how do I get the headers from the response in code? |
How to get response headers from Apollo executionContext? |
|android|apollo|okhttp| |
You are on the right track. Essentially you want to format your text date in a way that switches the month and day around, and then use `DATEVALUE()` to turn it back into an actual date value.
```
=DATEVALUE(TEXT(A1, "YYYY-DD-MM"))
``` |
We have Docker containers running in ECS Fargate. They use the AWS SDK for Go V2, and they set up the SDK like this
```
cfg, err := config.LoadDefaultConfig(context.TODO())
```
We want to send an e-mail, so we set up SES:
```
repo.client = ses.NewFromConfig(cfg)
```
When trying to send an e-mail, it cannot find credentials:
```
Get "http://169.254.169.254/latest/meta-data/iam/security-credentials/": dial tcp 169.254.169.254:80: connect: invalid argument
```
It seems to try to connect to the IMDS endpoint that belongs to ECS running on EC2, instead of the one for Fargate. What's going wrong here?
The full error for reference:
```
failed to refresh cached credentials, no EC2 IMDS role found, operation error ec2imds: GetMetadata, exceeded maximum number of attempts, 3, request send failed, Get "http://169.254.169.254/latest/meta-data/iam/security-credentials/": dial tcp 169.254.169.254:80: connect: invalid argument
```
The ECS task execution role has a policy that allows full access to SES (for testing), so that's not the problem. The AWS documentation states that Fargate containers use the following address for credentials
```curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI```
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html |
I have just downloaded a spring-boot project from http://start.spring.io/.
The error is occurring in the build.gradle file.
> Build file 'C:\Users\me\Desktop\WorkSpace\myProject\build.gradle' line: 3
Plugin [id: 'org.springframework.boot', version: '3.2.3'] was not found in any of the following sources:
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
> Get more help at https://help.gradle.org.
I tried to build the project with an older Spring Boot version but the error persists. |
Plugin [id: 'org.springframework.boot', version: '3.2.3'] was not found in any of the following sources: |
|spring-boot|gradle-plugin| |
null |
Model looks like
```ruby
class Campaign < ActiveRecord::Base
before_update :delete_triggers
def delete_triggers
Trigger.where(campaign_id: id).delete_all
end
end
```
and in the controller I do
```ruby
Campaign.find(params[:campaignId]).update(campaign_params)
```
where `campaign_params` just grabs the body and does some cleanup. `delete_triggers` never gets called though. There is no filter (there will be, but I removed it since the method wasn't firing. It didn't make a difference). In order to sanity check, I added to the model
```ruby
before_save :check_dirty
def check_dirty
pp changed?
end
```
which prints `true` as expected.
What would cause `before_update` to not trigger on a call to `.update`, where `before_save` does? Obviously, I could just use `before_save` and add some filtering logic in the callback, but that's hacky when there's a hook made for this purpose.
(There are other questions similar to this, but not the same. One has a syntax error in a proc, and I'm not using a proc; and the other is a bad filter, which I removed specifically to avoid that possibility and am still getting the issue) |
Rails before_update not triggering on update but save works |
|ruby-on-rails|ruby|activerecord|ruby-on-rails-5| |
One possible way is creating an array of positions using the separator regexp, and then doing different replacements between those positions, like this:
```Python
import re
def replace(data, separator_re, replace_outside_special_func):
poss = [0]
for match in re.finditer(separator_re, data):
poss.extend((match.start(), match.end()))
poss.append(len(data))
return ''.join(
data[poss[i] : poss[i + 1]] if i & 3
else replace_outside_special_func(data[poss[i] : poss[i + 1]])
for i in range(0, len(poss) - 1))
data = '''some text
----
text inside special block
----
some text
'''
print(replace(
data, r'(?m)^----$',
lambda data: data.replace('text', 'TEXT')))
```
It prints:
```
some TEXT
----
text inside special block
----
some TEXT
```
Some explanation:
* We use `re.finditer` and `match.start()` and `match.end()` to find the start and end positions (within-string indexes) of each separator. We save these positions to the array `poss` in increasing order.
* We also add `0` to the beginning of `poss` and `len(data)` to the end of poss, to make sure we won't miss any block near the start and end of the input.
* We iterate over each substring marked by adjacent positions in `poss`, e.g. `data[poss[0] : poss[1]]`, `data[poss[1] : poss[2]]`, `data[poss[2] : poss[3]]` etc.
* `data[poss[i] : poss[i + 1]]` is:
* If `i % 4 == 0`: Normal block, e.g. `'some text\n'`.
* If `i % 4 == 1`: Separator after normal block, e.g. `'----'`.
* If `i % 4 == 2`: Special block, e.g. `'\ntext inside special block\n'`.
* If `i % 4 == 3`: Separator after special block, e.g. `'----'`.
* We apply the replacement only in normal blocks.
---
Here is a shorter solution:
```Python
import re
data = '''some text
----
text inside special block
----
some text
'''
print(re.sub(
r'(?s)(.*?)(\Z|\n(?:\Z|----(?:\Z|\n.*?(?:\Z|\n(?:\Z|----(?:\Z|\n))))))',
lambda match: match.group(1).replace('text', 'TEXT') + match.group(2),
data))
```
I think this shorter solution is slower. You may want to run some benchmarks on long input.
Some explanation:
* `(?s)` sets the `s` flag so that `.` will match any character, including `\n`.
* `(.*?)` matches the normal block, and saves it to `match.group(1)`.
* `(\Z|\n(?:\Z|----(?:\Z|\n.*?(?:\Z|\n(?:\Z|----(?:\Z|\n))))))` matches the separator + special block + separator, and saves it to `match.group(2)`.
* The `.*?` is like `.*`, but shorter matches are tried first. Without the `?`, that part of the regexp would match the entire input.
* The `TEXT` replacement is only applied to normal blocks (`match.group(1)`).
* The regexp contains many instances of `\Z|\n...` so that it matches correctly even if the input string ends early (e.g. within a special block).
---
When doing string processing with regexps, these are useful ideas:
1. Match everything with a single regexp of the form `(...)|(...)|...`, and sort it out in the replacement function with `if match.group(1) is not None:` etc.
2. Match with multiple short regexps (e.g. `^----$` and `text`), and write code which joins the matches and replacements together.
3. Match with a single long regexp, do partial matches at end-of-string with `\Z|...`.
The longer solution above did #2, and the shorter one did #3. For tokenization (such as removing comments from C source code), #1 is useful. |
Hooks are not supported inside an async component error in nextjs project using useQuery |
|reactjs|react-hooks|async-await|fetch-api|tanstackreact-query| |
I am trying to create an authentication system using EJS and PG database, but I am having trouble getting the routing and rendering to work.
In my `index.html` I have an anchor element that points to `signup.html`, which works. In `signup.html` I have a login form, but then when I try to submit it, I get an HTTP 405 error.
How do I properly set up the routing between `index.html`, `signup.html`, and `app.js` (where the authentication logic happens)?
index.html:
```
<a href="signup.html" class="auth-link">Sign Up</a>
```
signup.html:
```
<form action="/users/register" method="POST">
```
app.js:
``` js
app.get("/users/register", checkAuthenticated, (req, res) => {
res.render("register.ejs");
});
```
|
I want to Bind:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<D:propfind xmlns:D='DAV:' xmlns:A='http://apple.com/ns/ical/'
xmlns:C='urn:ietf:params:xml:ns:caldav'>
<D:prop>
<D:resourcetype />
<D:owner />
<D:displayname />
<D:current-user-principal />
<D:current-user-privilege-set />
<A:calendar-color />
<C:calendar-home-set />
</D:prop>
</D:propfind>
```
to
__xml Objects__
```
[XmlRoot("propfind", Namespace = DavNamepaces.davXmlNamespace)]
public class PropFindRequest()
{
    [XmlElement(ElementName = "prop", Namespace = DavNamepaces.davXmlNamespace)]
    public Properties Properties { get; set; } = null!;
}

[XmlRoot(ElementName = "prop")]
public class Properties
{
    [XmlElement(ElementName = "resourcetype")]
    public object Resourcetype { get; set; }
    [XmlElement(ElementName = "owner")]
    public object Owner { get; set; }
    [XmlElement(ElementName = "displayname")]
    public object Displayname { get; set; }
    [XmlElement(ElementName = "current-user-principal")]
    public object Currentuserprincipal { get; set; }
    [XmlElement(ElementName = "current-user-privilege-set")]
    public object Currentuserprivilegeset { get; set; }
    [XmlElement(ElementName = "calendar-color")]
    public object Calendarcolor { get; set; }
    [XmlElement(ElementName = "calendar-home-set")]
    public object Calendarhomeset { get; set; }
}
```
__Program.cs__
```
// Add services to the container.
builder.Services
    .AddControllers(options =>
    {
        options.OutputFormatters.RemoveType<SystemTextJsonOutputFormatter>();
        options.InputFormatters.RemoveType<SystemTextJsonInputFormatter>();
    })
    // Adds builder.AddXmlDataContractSerializerFormatters() && builder.AddXmlSerializerFormatters()
    .AddXmlFormaterExtensions();
```
__controller__
```
[ApiController]
[Route("caldav/")]
public class CalDavController
{
    [AcceptVerbs("POST")]
    [ApiExplorerSettings(IgnoreApi = false)]
    [Produces("application/xml")]
    [Consumes("application/xml")]
    public async Task<IActionResult> Propfind([FromXmlBody(XmlSerializerType = XmlSerializerType.XmlSerializer)] PropFindRequest propFindRequest, [FromHeader] int Depth)
    {
        throw new NotImplementedException();
    }
}
```
__Error__
```
Error: Bad Request
Response body
<problem xmlns="urn:ietf:rfc:7807">
<status>400</status>
<title>One or more validation errors occurred.</title>
<type>https://tools.ietf.org/html/rfc9110#section-15.5.1</type>
<traceId>00-e1611d29682cd7e995d256c79850082d-ebfcd416977a94a7-00</traceId>
<MVC-Errors>
<MVC-Empty>An error occurred while deserializing input data.</MVC-Empty>
<propFindRequest>The propFindRequest field is required.</propFindRequest>
</MVC-Errors>
</problem>
```
I was expecting the mapping to work with the annotations I added.
I first tried changing the request just to match what the response seemed to expect.
__trial and error request__
```
<?xml version="1.0" encoding="UTF-8"?>
<propFindRequest>
<properties>
<resourcetype>string</resourcetype>
<owner>string</owner>
<displayname>string</displayname>
<currentuserprincipal>string</currentuserprincipal>
<currentuserprivilegeset>string</currentuserprivilegeset>
<calendarcolor>string</calendarcolor>
<calendarhomeset>string</calendarhomeset>
</properties>
</propFindRequest>
```
but it resulted in the __same error__, so I think I made a mistake somewhere, but I have no clue where.
Am I missing something important?
Do I have to use the DataContract serializer? I thought this was supposed to work out of the box with the default XmlSerializer, according to the Microsoft docs. |
In Elasticsearch I have a key whose value is an array, and I need to count the unique items in the array, without counting items that overlap with other documents.
So I made this:
The scripted_metric works when it isn't inside the date_histogram.
```php
'aggs' => [
    'groupByWeek' => [
        'date_histogram' => [
            'field' => 'date',
            'calendar_interval' => '1w',
        ],
        'aggs' => [
            'count_unique_locations' => [
                'scripted_metric' => [
                    'init_script' => 'state.locations = []',
                    'map_script' => 'state.locations.addAll(doc.unique_locations_with_error)',
                    'combine_script' => 'return state.locations',
                    'reduce_script' => "
                        def locations = [];
                        for (state in states) {
                            for (location in state) {
                                if (!locations.contains(location) && location != '') {
                                    locations.add(location);
                                }
                            }
                        }
                        return locations.length;
                    ",
                ],
            ],
        ],
    ],
],
```
When I run the query I get this error:
```json
{
  "error": {
    "root_cause": [],
    "type": "search_phase_execution_exception",
    "reason": "",
    "phase": "fetch",
    "grouped": true,
    "failed_shards": [],
    "caused_by": {
      "type": "script_exception",
      "reason": "runtime error",
      "script_stack": [
        "locations = []; ",
        "                ^---- HERE"
      ],
      "script": "def locations = []; for (state in states) { for(location in state){if(!locations.contains(location) && location != '' ) {locations.add(location); }}} return locations.length;",
      "lang": "painless",
      "position": {
        "offset": 16,
        "start": 4,
        "end": 20
      },
      "caused_by": {
        "type": "null_pointer_exception",
        "reason": "cannot access method/field [iterator] from a null def reference"
      }
    }
  },
  "status": 400
}
```
I think it has something to do with `doc` being null there, but I don't know why, or how to fix it. |
Elasticsearch doc null pointer exception error for scripted metric on date_histogram |
|elasticsearch| |
null |
I see the root of your problem: a StatefulWidget attribute cannot change its value:
```dart
void reviewData(BuildContext context) async {
  final prefs = await SharedPreferences.getInstance();
  List<String> formDataList = [];
  print("_selectedSchool $_selectedSchool");
  forms.forEach((form) {
    formDataList.add(
        '${_selectedSchool ?? 'No School Selected'}|'
        '${form.cpsNumber /* error: value can't change */}|'
        '${form.itemDescription /* error: value can't change */}|'
        '${form.selectedReason /* error: value can't change */}');
  });
  await prefs.setStringList('formData', formDataList);
  Navigator.push(context, MaterialPageRoute(builder: (context) => SummaryPage()));
}
```
```dart
class MyCustomForm extends StatefulWidget {
  final VoidCallback onCpsNumberTyped;

  MyCustomForm({required this.onCpsNumberTyped});

  @override
  _MyCustomFormState createState() => _MyCustomFormState();

  // creating an attribute here is the equivalent of a const
  String get cpsNumber => ''; // always ''
  String get itemDescription => ''; // always ''
  String get selectedReason => ''; // always ''
}
```
If you want to be able to record the values dynamically, pass a function instead:
```
class MyCustomForm extends StatefulWidget {
  final VoidCallback onCpsNumberTyped;

  MyCustomForm({required this.onCpsNumberTyped, required this.youFunction});

  @override
  _MyCustomFormState createState() => _MyCustomFormState();

  // creating an attribute here is the equivalent of a const (old)
  String get cpsNumber => ''; // always ''
  String get itemDescription => ''; // always ''
  String get selectedReason => ''; // always ''

  // create a function instead (new)
  final Function youFunction;
}

class _MyCustomFormState extends State<MyCustomForm> {
  @override
  Widget build(BuildContext context) {
    // use the function in your code
    widget.youFunction();
  }
}
```
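For example, the parent could supply that function like this — a sketch with made-up parameter names; adapt it to whatever values the form state actually collects:

```dart
MyCustomForm(
  onCpsNumberTyped: () {},
  youFunction: (String cps, String description, String reason) {
    // store the submitted values wherever the parent keeps its state
  },
)
```

The form state would then call it with its current values, e.g. `widget.youFunction(cps, description, reason)`.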
If this is too vague, please tell me and I will explain further. |
|c#|xml|asp.net-web-api|model-binding|.net-8.0| |
I have a class called Task with properties like Date, Id, Priority, and Name, and a list where I store objects of this class. The problem is that when I sort the list and then display it in my ListView (I am working in WPF because it is part of the assignment), all the tasks are shown with the same text (TODO-list.task.Task) instead of their original names, and I have no idea if the sorting actually worked.
I use this to sort the list:
```
List<Task> sort_by_date = tasks.OrderBy(x => x.Date).ToList();
```
And then write it out to list view like this:
```
MainWindow.withDate.Item.Clear();
MainWindow.withDate.ItemsSource = sort_by_date;
```
EDIT: Already figured it out... it was a small thing in writing out the ListView.
|
I'm making a quick draft for a project I'm doing and I can't seem to get the winsound module to work. The module loads with no yellow error line and no complaint that it doesn't exist, and the rest of the lines even run, opening up the browser, but the audio doesn't play:
    if messagebox.showerror(title="*ANGRY*", message="ALRIGHT THIS IS WHAT YOU GET!!!"):
        if messagebox.showerror(title="sus", message="MY SUPER LASER PISS!!!!!!!!!!!!!!!!!!!!!!!!"):
            for i in range(1):
                winsound.PlaySound('SUPERLASERPISS.mp3', winsound.SND_FILENAME)
                webbrowser.open("https://www.bing.com/images/search?q=PISS&form=HDRSC3&first=1")
I used "SND_FILENAME" instead of 0 to see if that would fix it, but it didn't.
I expected it to play the sound, end of story (and also open up 100 tabs of Eggman, but that's not the point). It won't play. |
|algorithm|checksum|8051| |
I need to create multiple queries with various weightages and properties.
A simplified version of a couple of the queries is this:
```
SELECT Emp_Id,
       (30 * ISNULL(BMI,0)) +
       (20 * ISNULL(Height,0)) +
       (10 * ISNULL(Eyesight,0))
FROM MyTable1 WHERE Category = 'Fighter'

SELECT Emp_Id,
       (10 * ISNULL(BMI,0)) +
       (10 * ISNULL(Height,0)) +
       (20 * ISNULL(Skill,0)) +
       (40 * ISNULL(Eyesight,0))
FROM MyTable1 WHERE Category = 'Sniper'
```
There are hundreds of queries with different weightages and properties, so I wanted to create a table of weightages and properties and then build a dynamic query to execute, since that would be much easier to maintain.
Below is my code so far
```
/* Dummy Table Creation */
DECLARE @DummyWeightageTable TABLE (Category varchar(50), Fieldname varchar(50), Weightage real)
insert into @DummyWeightageTable values
('Sniper', 'Eyesight', 40), ('Sniper', 'BMI', 10), ('Sniper', 'Height', 10), ('Sniper', 'Skill', 20),
('Fighter', 'Eyesight', 10), ('Fighter', 'BMI', 30), ('Fighter', 'Height', 20)
/* Actual Functionality */
DECLARE @sql VARCHAR(MAX)
DECLARE @delta VARCHAR(MAX)
DECLARE @TempTableVariable TABLE (Fieldname varchar(50), Weightage real)
insert into @TempTableVariable select Fieldname, Weightage from @DummyWeightageTable where Category = 'Sniper'
set @sql = 'SELECT Emp_Id,'
-- Do the below steps for all rows
select @delta = '(', Weightage, ' * ISNULL(', Fieldname, ',0) +' from @TempTableVariable
set @sql = @sql + @delta + '0) from MyDataTable1'
EXEC sp_executesql @sql;
Truncate @TempTableVariable
insert into @TempTableVariable select Fieldname, Weightage from @DummyWeightageTable where Category = 'Fighter'
set @sql = 'SELECT Emp_Id,'
-- Do the below steps for all rows
select @delta = '(', Weightage, ' * ISNULL(', Fieldname, ',0) +' from @TempTableVariable
set @sql = @sql + @delta + '0) from MyDataTable1'
EXEC sp_executesql @sql;
```
However, SQL Server doesn't allow arrays, so I am getting an error when I try to populate the variable @delta:

    Msg 141, Level 15, State 1, Line 15
    A SELECT statement that assigns a value to a variable must not be combined with data-retrieval operations.
I feel there must be some workaround for this but I couldn't find it. |
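For what it's worth, one common workaround for the Msg 141 restriction (a sketch against the tables above; untested) is to let the variable-assignment SELECT concatenate one term per row, instead of selecting extra columns alongside the assignment:

```sql
DECLARE @delta VARCHAR(MAX) = '';

-- Each row appends its own "(Weightage * ISNULL(Fieldname, 0)) + " term.
SELECT @delta = @delta + '(' + CAST(Weightage AS VARCHAR(20))
                       + ' * ISNULL(' + Fieldname + ',0)) + '
FROM @TempTableVariable;

SET @sql = 'SELECT Emp_Id, ' + @delta + '0 AS Score FROM MyDataTable1';
```

On SQL Server 2017+, STRING_AGG over the weightage table is another option.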
Why can't ECS Fargate container find task execution role credentials? |
|amazon-web-services|go|amazon-ecs|aws-fargate|aws-sdk-go-v2| |
null |
The Backbone crew discourage storing anything other than primitives in models, and therefore if you're going to do so, it's on you to handle it.
https://github.com/jashkenas/backbone/issues/3457
One can use the `lodash` [cloneDeep()][1] function for this, e.g.,
```
const model2 = new Model(_.cloneDeep(model1.attributes));
```
There are several alternatives to `cloneDeep()`; depending on your supporting library cast of characters, there may be a work-alike already available to you. For example, babel and core-js as of late have the usual auto-polyfill support for `structuredClone()`, so if you're already set up for that, it'd be my first choice now.
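A quick sketch of why the deep copy matters — plain objects stand in for model attributes here, and `_.cloneDeep` behaves the same way:

```javascript
// Stand-in for model1.attributes: nested data that a shallow copy would share.
const attrs = { name: "a", nested: { tags: ["x"] } };

// structuredClone (Node 17+ / modern browsers) deep-copies the whole tree;
// _.cloneDeep(attrs) is the lodash equivalent.
const cloned = structuredClone(attrs);

cloned.nested.tags.push("y");

console.log(attrs.nested.tags);  // [ 'x' ]  — the original is untouched
console.log(cloned.nested.tags); // [ 'x', 'y' ]
```

With a shallow copy, both models would have mutated the same `tags` array.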
[1]: https://lodash.com/docs/4.17.15#cloneDeep |
It looks like the error is coming from the third `parameter` definition and the use of `allOf`. The error states you cannot have `anyOf`, `allOf`, or `oneOf`.
I also wonder if FastAPI is looking for a `requestBody` property because it's a `POST` operation. You don't have one defined and it has an error related to that.
Try this...
```json
{
"openapi": "3.0.3",
"info": {
"title": "SomeTitle",
"description": "SomeDescription",
"version": "1.2.0"
},
"servers": [
{
"url": "http://localhost:8080",
"description": "Local Environment for testing"
},
{
"url": "Productive URL",
"description": "Productive Instance"
}
],
"paths": {
"/api/v1/rag/prepare-index": {
"post": {
"tags": [
"Retrieval Augmented Generation"
],
"summary": "Build Index",
"operationId": "Build_Index",
"security": [
{
"APIKeyHeader": []
}
],
"parameters": [
{
"name": "project_id",
"in": "query",
"required": true,
"schema": {
"type": "string",
"title": "Project Id"
}
},
{
"name": "max_docs",
"in": "query",
"required": true,
"schema": {
"type": "integer",
"title": "Max Docs"
}
},
{
"name": "vector_store",
"in": "query",
"required": false,
"schema": {
"$ref": "#/components/schemas/VectorStores"
}
}
],
"requestBody": {
"description": "a request body",
"content": {
"application/json": {
"schema": {
"type": "object"
}
}
},
"required": true
},
"responses": {
"200": {
"description": "Successful Response",
"content": {
"application/json": {
"schema": {
"type": "object",
"title": "Response Build Index"
}
}
}
},
"404": {
"description": "Not found"
},
"422": {
"description": "Validation error"
}
}
}
}
},
"components": {
"schemas": {
"VectorStores": {
"type": "string",
"enum": [
"redis",
"in_memory"
],
"title": "VectorStores",
"default": "in_memory"
}
}
}
}
``` |
I have a project, and the directory structure is:
```
.
├── bin
├── README.md
├── src
│   ├── blocking_deque
│   │   └── blocking_deque.hpp
│   ├── buffer
│   │   ├── buffer.cpp
│   │   └── buffer.h
│   ├── epoller
│   │   ├── epoller.cpp
│   │   └── epoller.h
│   └── log
│       ├── log.cpp
│       └── log.h
└── test
    ├── Makefile
    └── test_log.cpp
```
*log.h*:
```cpp
#ifndef LOG_H
#define LOG_H

#include <memory>
#include <string>
#include <thread>
#include <mutex>

#include "../blocking_deque/blocking_deque.hpp"
#include "../buffer/buffer.h"

class Log {
public:
    // ...
    bool is_open();
    // ...
private:
    Log();
    virtual ~Log();
    // ...
    Buffer buf;
    // ...
};

#endif // LOG_H
```
*log.cpp*:
```cpp
#include <cstdio>
#include <ctime>
#include <cstring>
#include <cassert>
#include <cstdarg>
#include <sys/stat.h>
#include <sys/time.h>
#include "log.h"
Log::Log() {
// ...
}
Log::~Log() {
// ...
}
inline bool Log::is_open() {
// ...
}
```
*buffer.h*:
```cpp
#ifndef BUFFER_H
#define BUFFER_H
#include <vector>
#include <string>
#include <atomic>
class Buffer {
// ...
};
#endif // BUFFER_H
```
*buffer.cpp*:
```cpp
#include <sys/uio.h> // readv
#include <unistd.h> // write
#include <cerrno> // errno
#include <cassert> // assert
#include <algorithm> // copy
#include "buffer.h"
// ... definition of some members of Buffer
```
*blocking_deque.hpp*:
```cpp
#ifndef BLOCKING_DEQUE_HPP
#define BLOCKING_DEQUE_HPP
#include <deque>
#include <condition_variable>
#include <mutex>
template <typename T>
class BlockingDeque {
// ...
};
// ... definition of some members
```
*test_log.cpp*:
```cpp
#include "../src/log/log.h"

void test_log() {
    using level_type = Log::level_type;

    Log::get_instance()->init(1, "./test_log1", ".log", 0);
    int cnt = 0;
    for (level_type lv = 3; lv >= 0; --lv) {
        Log::get_instance()->set_level(lv);
        for (int i = 0; i < 10000; ++i) {
            for (level_type new_lv = 0; new_lv <= 3; ++new_lv) {
                LOG_BASE(new_lv, "%s ===== %d", "test", cnt++);
            }
        }
    }

    Log::get_instance()->init(1, "./test_log2", ".log", 5000);
    cnt = 0;
    for (level_type lv = 0; lv <= 3; ++lv) {
        Log::get_instance()->set_level(lv);
        for (int i = 0; i < 10000; ++i) {
            for (level_type new_lv = 3; new_lv >= 0; --new_lv) {
                LOG_BASE(new_lv, "%s ==== %d", "test", cnt++);
            }
        }
    }
}

int main() {
    test_log();
}
```
When I run `g++ -g test/test_log.cpp src/blocking_deque/blocking_deque.hpp src/buffer/buffer.cpp src/log/log.cpp -o ./a`, the compiler (specifically, the linker) reports **undefined reference to `some function`** errors. The details are shown below:
```
/usr/bin/ld: /tmp/cc3lAVRd.o: in function `test_log()':
/home/fansuregrin/workshop/my_http_server/test/test_log.cpp:18: undefined reference to `Log::is_open()'
/usr/bin/ld: /home/fansuregrin/workshop/my_http_server/test/test_log.cpp:28: undefined reference to `Log::is_open()'
/usr/bin/ld: /tmp/ccnM4HDs.o: in function `Log::write(int, char const*, ...)':
/home/fansuregrin/workshop/my_http_server/src/log/log.cpp:167: undefined reference to `Buffer::has_written(unsigned long)'
collect2: error: ld returned 1 exit status
```
However, `Log::is_open()` is defined in `src/log/log.cpp`, and `Buffer::has_written(unsigned long)` is defined in `src/buffer/buffer.cpp`. Why won't it link? |
Need Dynamic query creation in Sql server with Array like implementation |
|sql-server| |
null |
In `compilerOptions` in `tsconfig.json`, use `"baseUrl": "./"` instead of `"baseUrl": "./src"`.

Then you can import files as below:

    import { foo } from 'src/_files/_productlist';
 |
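For reference, a minimal `tsconfig.json` sketch showing just this setting (everything else omitted):

```json
{
  "compilerOptions": {
    "baseUrl": "./"
  }
}
```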
I am trying to cache some private custom REST endpoints using WP Rest Cache, but whenever I try, it makes them public. Not sure if anyone has come across this, but I need them to remain private in the cache, available only to authenticated users.
    register_rest_route( 'v1', 'get-report-summary/', array(
        'methods'  => 'GET',
        'callback' => 'get_report_summary'
    ) );

    function get_report_summary() {
        $data = "Some code";
        $response = new WP_REST_Response( $data );
        $response->set_status( 200 );
        return $response;
    }
Adding the endpoint to the cache
    function wprc_add_endpoint( $allowed_endpoints ) {
        if ( ! isset( $allowed_endpoints['v1'] ) || ! in_array( 'get-report-summary', $allowed_endpoints['v1'] ) ) {
            $allowed_endpoints['v1'][] = 'get-report-summary';
        }
        return $allowed_endpoints;
    }
    add_filter( 'wp_rest_cache/allowed_endpoints', 'wprc_add_endpoint', 10, 1 );
|
Caching private wordpress rest endpoints |
|wordpress|api|rest|caching| |
My function app instance is currently on the Consumption plan.
One timer trigger's logic needs to make a bunch of HTTP requests - around 10k - and it fails due to the Consumption plan's "connections" limitation.
Is it possible to automatically switch to the Premium plan for 1 hour daily and then switch back to Consumption?
Thank you! |
Azure Function: switch from consumption to premium plan for 1 hour daily automatically? |
|azure-functions| |
|sql-server| |
|wordpress|caching| |
I have a project that I made with Laravel 10. After publishing it, the client (React) side gets a CORS error in the console, and although I tried many methods, I could not solve the problem. While doing research, I came across a package called Fruitcake cors, but when I install this package, it gives the error I mentioned in the title. Can you help?
I tried many methods, but I could not solve the problem on the published website; only this method remained, and it gave an error when installing with Composer.
|
Installation failed, reverting ./composer.json and ./composer.lock original content |
|laravel|cors|composer-php|production|laravel-10| |
null |
I have a multi account setup in AWS
There is an administrative account where my domain is registered (example.com)
There is a production account using lightsail where my subdomain is registered (prod.example.com)
I am using a loadbalancer on lightsail production account
I want to point app.example.com to app.prod.example.com and can set this up with a CNAME entry in the administrative account.
When I go to app.example.com I get a privacy error that states: "This server couldn't prove that it's app.example.com; its security certificate is from prod.example.com. This may be caused by a misconfiguration or an attacker intercepting your connection."
I have an SSL certificate for example.com via R53 on my admin account. I have SSL certificate from lightsail for the loadbalancer.
How do I ensure the link is trusted?
What I tried: adding a CNAME record in the admin account pointing app.example.com to app.prod.example.com |
I've encountered an error while attempting to list all first-party audience segments created in my Google Ads account using the googleads Python library. Despite confirming that the credentials are correct and updating the library to the latest version available, I continue to face the following error:
[WARNING] 2024-03-26T12:42:30.368Z 41d1f691-e863-47a7-97c7-4ceec141ccef Error summary: {'faultMessage': '[ServerError.SERVER_ERROR @ ]', 'requestId': '8e056b59b2549571b78c96c177a495e2', 'responseTime': '1173', 'serviceName': 'AudienceSegmentService', 'methodName': 'getAudienceSegmentsByStatement'}
I've even attempted to execute the code on a clean installation in a virtual machine, but the error persists.
I'm reaching out to seek assistance from the community regarding this issue. Any insights or suggestions on how to resolve this error would be greatly appreciated.
Thank you in advance for your help.
The credentials are correct. |
Error while Retrieving First Party Audience Segmentations in GoogleAds Python Library [ServerError.SERVER_ERROR @ ]' |
|python|google-ads-api| |
null |
This is my models.py:
    class Person(models.Model):
        surname = models.CharField(max_length=100, blank=True, null=True)
        forename = models.CharField(max_length=100, blank=True, null=True)

        def __str__(self):
            return '{}, {}'.format(self.surname, self.forename)


    class PersonRole(models.Model):
        ROLE_CHOICES = [
            ("Principal investigator", "Principal investigator"),
            # [etc...]
        ]
        title = models.CharField(choices=TITLE_CHOICES, max_length=9)
        project = models.ForeignKey('Project', on_delete=models.CASCADE)
        person = models.ForeignKey(Person, on_delete=models.CASCADE)
        person_role = models.CharField(choices=ROLE_CHOICES, max_length=30)

        def __str__(self):
            return '{}: {} as {}.'.format(self.project, self.person, self.person_role)


    class Project(models.Model):
        title = models.CharField(max_length=200)
        person = models.ManyToManyField(Person, through=PersonRole)

        def __str__(self):
            return self.title

        def get_PI(self, obj):
            return [p.person for p in self.person.all()]  # I'll then need to filter where person_role is 'Principal investigator', which should be the easy bit.
In my Admin back-end I'd like to display the person (principal investigator) in the main table:
    class ProjectAdmin(ImportExportModelAdmin):
        list_filter = [PersonFilter, FunderFilter]
        list_display = ("title", "get_PI")
        ordering = ('title',)
You can see that I created `get_PI()` in my `models.py` and referenced it in `list_display`. I'm getting `Project.get_PI() missing 1 required positional argument: 'obj'`. What am I doing wrong? |
Display a filtered result from ManyToMany through model in Admin |
|python-3.x|django|django-models| |
I faced this problem and found out that in my case it was linked to a deep-link intent-filter.
I removed the android:autoVerify and android:sspPattern options, replacing the latter with android:pathPrefix. Also, check that android:host is specified correctly.
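For illustration, a hypothetical intent-filter along those lines — the scheme, host, and path prefix are placeholders, and android:autoVerify is deliberately absent:

```xml
<intent-filter>
    <action android:name="android.intent.action.VIEW" />
    <category android:name="android.intent.category.DEFAULT" />
    <category android:name="android.intent.category.BROWSABLE" />
    <data
        android:scheme="https"
        android:host="example.com"
        android:pathPrefix="/app" />
</intent-filter>
```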
I did a LOT of testing before finding the cause of the error. |
The problem I am trying to solve: we have multiple IdPs coming into our application edge from various authentication sources. What I want to do is use token exchange to swap the incoming token for a token that can be understood at any level of our microservice architecture. I envision the token will be for identity only. The overhead of trying to build support for multiple IdPs into every component of our system is proving a big challenge.
Can you advise whether the token-exchange mechanism is suitable for this? At least at the RFC level it appears to be.
We have been experimenting with Spring Authorization Server for the last few months. |
Token exchange in Spring Authorization Server |
|spring|oauth-2.0|spring-authorization-server|token-exchange| |
null |
I am plotting covariates against predicted occupancy using the unmarked package. Three of my covariates are continuous, so I have plotted them using the predict function and ggplot with geom_ribbon. However, this obviously doesn't work with categorical/factor variables; I would like to plot the predicted occupancy for the two discrete categories within my factor covariate as a boxplot.
The dataset umf is the unmarked frame with site covariates, observation covariates, and the capture history of the individual species. I have included the code for the null model, a continuous-covariate model (path_dist), and where I am with the categorical covariate (fox_presence). The categorical covariate has two levels, present and absent, and is treated as a factor in the dataset. I tried to use the same predict function as with the continuous and null models but changed the type to "response"; however, this produces an error.
Is there any way I can model and plot categorical covariates against individual-species occupancy in the unmarked package? I have cut out the modelling of the other continuous variables as it's just repetition, which is why the models jump from m2 to m5.
    # detection model
    m1 <- occu(formula = ~1  # detection formula first
                         ~1, # occupancy formula second
               data = umf)

    # distance to path model
    m2 <- occu(formula = ~1          # detection formula first
                         ~path_dist, # occupancy formula second
               data = umf)

    # plot distance to path model
    newDat <- cbind(expand.grid(path_dist = seq(min(cov$path_dist), max(cov$path_dist), length.out = 100)))
    # predict psi (type = "state") and confidence intervals based on our model for these road distances
    newDat <- predict(m2, type = "state", newdata = newDat, appendData = TRUE)

    p1 <- ggplot(newDat, aes(x = path_dist, y = Predicted)) + # create plot
      geom_ribbon(aes(ymin = lower, ymax = upper), alpha = 0.5, linetype = "dashed") +
      geom_path(size = 1) +
      labs(x = "Distance to path", y = "Occupancy probability") +
      theme_classic() +
      coord_cartesian(ylim = c(0, 1))

    # fox_presence model
    # fox presence/absence needs to be a factor
    cov$Fox_Presence <- factor(cov$Fox_presence)

    m5 <- occu(formula = ~1             # detection formula first
                         ~Fox_presence, # occupancy formula second
               data = umf)

    newDat4 <- cbind(expand.grid(Fox_presence = seq(cov$Fox_presence), (cov$Fox_presence), length.out = 100))
    newDat4 <- predict(m5, type = "response", newdata = newDat4, appendData = TRUE)
    # error: valid types are state, det
 |
It is possible; please follow the example below:

    INSTALL azure;
    LOAD azure;
    SET azure_storage_connection_string = '<connection string from storage account/access keys/connection string>';
    SELECT * FROM 'azure://<container>/<folder>/<filename>.parquet' LIMIT 10;
 |
I want to translate the following code from R to Python using `scipy.stats.probplot`.
```
qqplot(-log10(ppoints(1000)), -log10(p_value))
```
This is Q-Q plot of p-values compared to uniform distribution with a minus log scale. I am after something like the following. (I know that there are other libraries that achieve this, but I am looking for the answer for `probplot`.)
```
probplot(-np.log10(p_values_data), dist="uniform", sparams=(0, 1), plot=plt)
```
This does not work correctly because, the x-axis is uniform. Here, `plt` is due to `import matplotlib.pyplot as plt`. I found the post [here](https://stackoverflow.com/questions/55225458/scipy-stats-probplot-to-generate-qqplot-using-a-custom-distribution), among others, but I did not find anything on modifying the `dist` parameter to accommodate `-log10(uniform)`.
How can I get this plot using `probplot`?
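For what it's worth, `probplot` accepts any object that exposes a `ppf` method, which suggests one possible sketch (my assumption, not validated against the R output): if `U ~ Uniform(0, 1)`, then `-log10(U)` has CDF `F(y) = 1 - 10**(-y)`, so its quantile function is `ppf(q) = -log10(1 - q)`:

```python
import numpy as np
from scipy.stats import probplot

class NegLog10Uniform:
    """Distribution of -log10(U) for U ~ Uniform(0, 1): ppf(q) = -log10(1 - q)."""
    def ppf(self, q):
        return -np.log10(1.0 - np.asarray(q))

# Simulated uniform p-values stand in for p_values_data.
rng = np.random.default_rng(0)
p_values = rng.uniform(size=1000)

# osm: theoretical -log10(uniform) quantiles; osr: sorted -log10(p) sample.
(osm, osr), (slope, intercept, r) = probplot(-np.log10(p_values), dist=NegLog10Uniform())
```

Passing `plot=plt` (after `import matplotlib.pyplot as plt`) should then draw the figure directly, mirroring the R `qqplot(-log10(ppoints(n)), -log10(p_value))` call.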
|
How to give negative log10 distribution to Python probplot function for qq-plotting p-values? |