I have tried multiple solutions on the platform but none seems to work. I can't seem to wrap my head around the error AttributeError: 'Wildcards' object has no attribute 'sample'
Here is what I am trying to do:
```
import os
import pandas as pd

sample_csv = os.path.join(BASE_DIR, "samples.csv")
sample_table = pd.read_csv(sample_csv).set_index('sample', drop=False)

def get_fq1(wildcards):
    return sample_table.loc[wildcards.sample, 'fastq1']

def get_fq2(wildcards):
    return sample_table.loc[wildcards.sample, 'fastq2']

def get_fq_files_dict(wildcards):
    return {
        "r1": sample_table.loc[wildcards.sample, 'fastq1'],
        "r2": sample_table.loc[wildcards.sample, 'fastq2'],
    }

rule fastp:
    input:
        unpack(get_fq_files_dict)
    output:
        # SAMPLES_f is refined elsewhere
        expand(base_dir + "/trimmed/{sample}_trimmed.{r}.fastq.gz", sample=SAMPLES_f, r=['_1','_2'])
    shell:
        """
        echo {input}  # This is just a decoy command
        """
```
This code throws me the error
`AttributeError: 'Wildcards' object has no attribute 'sample'`
Does anybody have an idea what is not right? |
Trying to update the version.go file with the release tag from GitHub Actions, but it's failing |
|go|github|github-actions| |
You have a number of serious design flaws in your inventory system.
Firstly, you should have only a single table containing both warehouses, with a `warehouse_id` column.
Second, this is clearly a table of inventory movements in and out. It makes no sense to have separate columns; you may as well have a single column, where `out` is negative.
warehouse_id |item_id |qty_moved
--|--|--
1|item1 |10
1|item1 |5
1|item2 |-3
1|item2 |-2
2|item1 |12
2|item1 |50
2|item2 |-10
2|item2 |-30
Now you can do a simple query with conditional aggregation:
```sql
SELECT
im.item_id,
SUM(CASE WHEN warehouse_id = 1 THEN qty_moved END) AS warehouse1,
SUM(CASE WHEN warehouse_id = 2 THEN qty_moved END) AS warehouse2
FROM InventoryMove im
GROUP BY
im.item_id;
``` |
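If you want to sanity-check the query, here is a quick sqlite3 sketch that builds the single-table design with the sample rows above and runs the same conditional aggregation (the `ORDER BY` is only added here for deterministic output):

```python
import sqlite3

# Build the single InventoryMove table with the sample movements above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE InventoryMove (warehouse_id INT, item_id TEXT, qty_moved INT)")
rows = [
    (1, "item1", 10), (1, "item1", 5), (1, "item2", -3), (1, "item2", -2),
    (2, "item1", 12), (2, "item1", 50), (2, "item2", -10), (2, "item2", -30),
]
conn.executemany("INSERT INTO InventoryMove VALUES (?, ?, ?)", rows)

# Conditional aggregation: one output column per warehouse.
result = conn.execute("""
    SELECT im.item_id,
           SUM(CASE WHEN warehouse_id = 1 THEN qty_moved END) AS warehouse1,
           SUM(CASE WHEN warehouse_id = 2 THEN qty_moved END) AS warehouse2
    FROM InventoryMove im
    GROUP BY im.item_id
    ORDER BY im.item_id
""").fetchall()
print(result)  # [('item1', 15, 62), ('item2', -5, -40)]
```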
{"Voters":[{"Id":11002,"DisplayName":"tgdavies"},{"Id":466862,"DisplayName":"Mark Rotteveel"},{"Id":573032,"DisplayName":"Roman C"}]} |
I'm trying to build and push a Docker image from a Dockerfile to my private repo on Docker Hub using the buildctl-daemonless.sh command in an Argo workflow; the following is the step in the workflow that does the job. The problem is that it appears to run successfully, but I can't find the pushed image.
```
- name: image
inputs:
parameters:
- name: path
- name: image
volumes:
- name: test-secret-secret
secret:
secretName: test-secret-secret
container:
readinessProbe:
exec:
command: [ sh, -c, "buildctl debug workers" ]
image: moby/buildkit:v0.9.3-rootless
volumeMounts:
- name: work
mountPath: /work
- name: test-secret-secret
mountPath: /.docker
workingDir: /work/{{inputs.parameters.path}}
env:
- name: BUILDKITD_FLAGS
value: --oci-worker-no-process-sandbox
- name: DOCKER_CONFIG
value: /.docker
command: [ sh, -c, "buildctl-daemonless.sh" ]
args:
- build
- --frontend=dockerfile.v0
- --local context=.
- --local dockerfile=.
- --output type=image,name=docker.io/myusername/repo-test:argoworkflow,push=true
```
The only thing I get from the logs is this
```
time="2024-03-31T11:30:27 UTC" level=info msg="capturing logs" argo=true
NAME:
buildctl - build utility
USAGE:
buildctl [global options] command [command options] [arguments...]
VERSION:
v0.9.3
COMMANDS:
du disk usage
prune clean up build cache
build, b build
debug debug utilities
help, h Shows a list of commands or help for one command
GLOBAL OPTIONS:
--debug enable debug output in logs
--addr value buildkitd address (default: "unix:///run/user/1000/buildkit/buildkitd.sock")
--tlsservername value buildkitd server name for certificate validation
--tlscacert value CA certificate for validation
--tlscert value client certificate
--tlskey value client key
--tlsdir value directory containing CA certificate, client certificate, and client key
--timeout value timeout backend connection after value seconds (default: 5)
--help, -h show help
--version, -v print the version
```
Any thoughts on what the problem could be? |
Correct - as long as you have the same table/column names, you should be good. You may have to redefine the relationships between the tables, but your visuals/measures/columns should be good. |
How to check if an Android 13 app is connected to WiFi with API v33
|java|android|wifi|android-13|android-connectivitymanager| |
This:
-I./Dependencies/GLFW/Include
tells your compiler to search for the header files that you `#include` in
your source code in the directory `./Dependencies/GLFW/Include`.
And this:
-I./Dependencies/GLFW/lib
tells it to also look for header files in `./Dependencies/GLFW/lib`.
But you haven't got any header files there. You've got (presumably)
the binary libraries that the linker needs, including (presumably)
the `opengl32` and `glfw3` libraries.
You have to tell `gcc` where the linker should search for these, and you need to tell it that it's a place to search for *libraries*,
not header files. This is so that `gcc` will pass on this information to the *linker*
(`ld`), which needs it, and not to the C compiler (`cc1`), which doesn't.
You do that with the `-L <dir>` option. See [GCC manual:3.16 Options for Directory Search](https://gcc.gnu.org/onlinedocs/gcc/Directory-Options.html)
Change:
SET COMPILER_FLAGS= -I./Dependencies/GLFW/Include -I./Dependencies/GLFW/lib
SET LINKER_FLAGS= -lopengl32 -lglfw3
to:
SET COMPILER_FLAGS= -I./Dependencies/GLFW/Include -I./Dependencies/GLFW/lib
SET LINKER_FLAGS= -L./Dependencies/GLFW/lib -lopengl32 -lglfw3
and with luck this particular problem:
cannot find -lglfw3
will be solved. |
I ran into an issue where `submit.click()` was only sometimes executing the button's click handler. Changing to this approach turned out to be more reliable:
```
submit = fixture.debugElement.query(By.css('#getimagebyid')).nativeElement;
submit.dispatchEvent(new Event('click'));
``` |
You do not need to register your result with the `item` salt. When you register the result of a loop (e.g. `with_items`) the registered value will contain a key `results` which holds a list of all results of the loop. (See [docs][1])
Instead of looping over your original device list, you can loop over the registered results of the first task then:
```
- name: Check if the disk is partitioned and also ignore sda
  stat: path=/dev/{{ item }}1
  with_items: "{{ disk_var }}"
  when: item != 'sda'
  register: device_stat

- name: Create GPT partition table
  command: /sbin/parted -s /dev/{{ item.item }} mklabel gpt
  with_items: "{{ device_stat.results }}"
  when:
    - item is not skipped
    - item.stat.exists == false
```
The condition `item is not skipped` takes care that elements which were skipped in the original loop (sda) will not be processed.
While that might be a solution to your problem, your question is very interesting. There seems to be no `eval` feature in Jinja2. While you can concatenate strings you can not use that string as a variable name to get to its value...
[1]: https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_loops.html#registering-variables-with-a-loop |
I tried to use the code to connect to WiFi on Android 13. The callback function in the code will execute if the connection is successful, but in fact the phone is not connected to WiFi at all.
This code will work on Android 12, but not on Android 13:
```
package common.yunshen.common.wifimanager;
import android.net.MacAddress;
import android.net.wifi.WifiManager;
import android.net.wifi.WifiNetworkSuggestion;
import android.os.Build;
import androidx.annotation.RequiresApi;
import java.util.ArrayList;
import java.util.List;
public class WifiConnect {

    public interface WifiConnectCallback {
        void onConnectSuccess();
        void onConnectFailure(String errorMessage);
    }

    @RequiresApi(api = Build.VERSION_CODES.Q)
    public static void connectWifiForQ(WifiManager manager, String ssid, String bssid, String passwd, boolean isHidden, String capabilities, WifiConnectCallback callback) {
        if (capabilities.contains("WPA-PSK") || capabilities.contains("WPA2-PSK")) {
            setWPA2ForQ(manager, ssid, bssid, passwd, isHidden, callback);
        } else {
            setESSForQ(manager, ssid, isHidden, callback);
        }
    }

    @RequiresApi(api = Build.VERSION_CODES.Q)
    public static void setWPA2ForQ(WifiManager manager, String ssid, String bssid, String passwd, boolean isHidden, WifiConnectCallback callback) {
        WifiNetworkSuggestion suggestion;
        if (bssid == null) {
            suggestion = new WifiNetworkSuggestion.Builder()
                    .setSsid(ssid)
                    .setWpa2Passphrase(passwd)
                    .setIsHiddenSsid(isHidden)
                    .build();
        } else {
            suggestion = new WifiNetworkSuggestion.Builder()
                    .setSsid(ssid)
                    .setBssid(MacAddress.fromString(bssid))
                    .setWpa2Passphrase(passwd)
                    .setIsHiddenSsid(isHidden)
                    .build();
        }
        List<WifiNetworkSuggestion> suggestions = new ArrayList<>();
        suggestions.add(suggestion);
        int status = manager.addNetworkSuggestions(suggestions);
        if (status != WifiManager.STATUS_NETWORK_SUGGESTIONS_SUCCESS) {
            // Connection failed
            callback.onConnectFailure("Failed to add network suggestion");
        } else {
            callback.onConnectSuccess();
        }
    }

    @RequiresApi(api = Build.VERSION_CODES.Q)
    public static void setESSForQ(WifiManager manager, String ssid, boolean isHidden, WifiConnectCallback callback) {
        WifiNetworkSuggestion suggestion = new WifiNetworkSuggestion.Builder()
                .setSsid(ssid)
                .setIsHiddenSsid(isHidden)
                .build();
        List<WifiNetworkSuggestion> suggestions = new ArrayList<>();
        suggestions.add(suggestion);
        int status = manager.addNetworkSuggestions(suggestions);
        if (status != WifiManager.STATUS_NETWORK_SUGGESTIONS_SUCCESS) {
            // Connection failed
            callback.onConnectFailure("Failed to add network suggestion");
        } else {
            callback.onConnectSuccess();
        }
    }
}
```
I looked up the relevant information and found that I need to add a new permission.
```
<uses-permission android:name="android.permission.NEARBY_WIFI_DEVICES"/>
```
But it doesn't work. |
I’ll demonstrate by mimicking the string in a *pre* tag and doing the string parsing...
<!DOCTYPE html>
<html>
<head></head>
<body onload="Process();">
<script>
function Process(){
  let S = Cont.innerHTML;
  S = S.replace(/\n+/g,'').replace(/ +/g,'').replace(/{/g,' {\n ').replace(/;/g,';\n').replace(/:/g,': ').replace(/,/g,', ');
  Cont.innerHTML = S;
}
</script>
<pre id="Cont">
p {
color: hsl(0deg, 100%, 50%;
}
</pre>
</body>
</html>
You start by stripping all spaces and all LFs, and then insert them back, this time strategically, as desired.
What I’ve done here is compatible with CSS but if your *CodeBlocks* may also contain JS or HTML then you’ll need a separate function to handle each.
I hope you don’t get any mixed *CodeBlocks* because it then gets very tricky!
|
As noted in the comments, an `awk` solution is simple.
For example:
```
awk '!index($0,q) || ++c>n' n=3 q=please "$filename"
```
---
It is not very convenient to try to count with `sed`, although it is possible. Parameterising arguments is also complicated.
Ignoring both those issues, here is a sed script to delete the first 3 occurrences of lines containing "please":
```
sed '
1 {
x
s/^/.../
x
}
/please/ {
x
s/.//
x
t del
b
:del
d
}
' "$filename"
```
Applying either script to:
```
1 leave this line alone
2 leave this line alone
1 please delete this line
3 leave this line alone
2 please delete this line
4 leave this line alone
5 leave this line alone
3 please delete this line
6 leave this line alone
4 please leave this line alone
7 leave this line alone
```
produces:
```
1 leave this line alone
2 leave this line alone
3 leave this line alone
4 leave this line alone
5 leave this line alone
6 leave this line alone
4 please leave this line alone
7 leave this line alone
```
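Both scripts implement the same counting rule: pass every line through except the first n lines that contain q. As a plain-Python restatement of that logic (an illustration only, not part of the shell solution):

```python
def drop_first_matches(lines, q="please", n=3):
    # Emit every line except the first n lines that contain q.
    out, dropped = [], 0
    for line in lines:
        if q in line and dropped < n:
            dropped += 1
            continue
        out.append(line)
    return out

text = [
    "1 leave this line alone",
    "1 please delete this line",
    "2 please delete this line",
    "3 please delete this line",
    "4 please leave this line alone",
]
print("\n".join(drop_first_matches(text)))
```

The fourth matching line survives, exactly as in the sample output above.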
---
If you would prefer to enter a number rather than a long string of dots, you can do:
```
sed '
1 {
x
:start
s/^/ /
t nop
:nop
s/././3
t end
b start
:end
x
}
/please/ {
x
s/.//
x
t del
b
:del
d
}
' "$filename"
```
As "one-liner":
```
sed -e'1{x;:l' -e's/^/ /;tn' -e:n -e's/././3;te' -ebl -e:e -e'x;};/please/{x;s/.//;x;td' -eb -e:d -e'd;}' "$filename"
``` |
I think a demo would be easier than going back and forth: https://github.com/quyentho/submodule-demo
You can check my `dist/` folder in the `placeholder-lib` to see if your generated build has a similar structure. You can see I have no problem importing like this in my `consumer`:
import { Button } from "placeholder-lib/components";
import useMyHook from "placeholder-lib/shared";
I guess your problem could be that you're missing these export lines in `package.json`:
"exports": {
".": "./dist/index.js",
"./components": "./dist/components/index.js",
"./shared": "./dist/shared/index.js"
},
|
I'm using Google Analytics 4 and Google Tag Manager to register events on my WordPress site.
I have successfully added Google Tag Manager to both the header and body, added a simple event called all_clicks, and submitted all my latest changes.
[![enter image description here][1]][1]
The problem is that, for some reason, I only get events coming from the home page of my site. If I click somewhere else and get redirected, events won't show up in the Realtime report.
In the developer tools the events are being collected as intended for all pages, meaning that GTM is installed.
[![enter image description here][2]][2]
and events are also firing in Google Tag Assistant.
Any ideas?
[1]: https://i.stack.imgur.com/ogj9o.png
[2]: https://i.stack.imgur.com/dNKAq.png |
Google Analytics doesn't register events when redirected to a second page |
|google-analytics|google-tag-manager|google-analytics-4| |
I am writing a web crawler and I need to extract the following data from a web page. I have added the page source for your reference, and I want to access the highlighted data. How can I do this using Python?
[![enter image description here][1]][1]
[![enter image description here][2]][2]
[1]: https://i.stack.imgur.com/R7g63.png
[2]: https://i.stack.imgur.com/eOn5O.png |
If you open the SVG files in your browser tabs, you will notice that they are not the same. I'm not talking about the size or the colour, but the ***coordinates*** - the black duck is centered, unlike the yellow / orange duck, and this is causing the problems with the black duck display.
What you need to do is the following:
1. Open both of your SVG files in a text editor of your choice
2. Copy the styling of your `duckling-svgrepo-com-bk.svg` into a separate file
3. Replace the entire contents of your `duckling-svgrepo-com-bk.svg` with the contents of `duckling-svgrepo-com.svg`, so that the black duck is now identical to the yellow / orange duck
4. Remove the inline styling for the `path` and `circle` elements of your `duckling-svgrepo-com-bk.svg`, and instead apply the `st` classes you copied out in step 2
5. Copy back the styles from step 2, and save your `duckling-svgrepo-com-bk.svg` file
What this achieves is that the black duck svg is now exactly the same (sans the colour) as the yellow / orange duck, and your code will work now.
This should be the end result for your black duck's SVG:
```
<?xml version="1.0" encoding="windows-1252"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg height="800px" width="800px" version="1.1" id="Capa_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" viewBox="0 0 49 49" xml:space="preserve">
<style type="text/css">
.st0{fill:#D77328;}
.st1{fill:#FFFFFF;}
.st2{fill:#2C2F38;}
.st3{stroke:#FFFFFF;stroke-miterlimit:10;}
</style>
<g>
<path d="M37.19,26.375c-0.999-1.983-2.486-3.818-4.687-5.119c-0.231-0.136-0.295-0.436-0.123-0.641 c1.116-1.332,4.669-5.868,4.669-9.615c0-6.075-4.925-11-11-11s-11,4.925-11,11c0,6,5.008,9.048,5.873,9.88 c0.086,0.082,0.13,0.186,0.128,0.305c-0.012,1.015-0.342,5.794-5.532,3.985c-1.398-0.487-2.64-1.341-3.686-2.387L5.888,16.84 c-0.469-0.469-1.239-0.409-1.634,0.125C2.92,18.768,0.431,23.017,1.048,29c0,0,0.366,16.187,11.604,19.514 C13.861,48.872,15.126,49,16.386,49h6.858c2.854,0,5.645-0.829,8.027-2.402c0.083-0.055,0.166-0.11,0.25-0.166 C38.085,42.017,40.75,33.439,37.19,26.375z"/>
<path class="st0" d="M47.538,10h-8.206h-2.335c0.03,0.33,0.05,0.662,0.05,1c0,2.351-1.398,5.011-2.716,6.997V18h5.871 c0.926,0,1.854-0.218,2.632-0.721c2.568-1.658,4.434-5.064,5.161-6.554C48.161,10.389,47.912,10,47.538,10z"/>
<circle class="st1" cx="28.048" cy="9" r="4"/>
<circle class="st2" cx="30.048" cy="10" r="2"/>
<path class="st3" d="M20.515,29.887c6.723-3.413,7.533,4.125,7.533,4.125c0,8.75-7,8-7,8 c-5.947,0-8.933-6.269-9.758-8.343c-0.138-0.346,0.071-0.727,0.434-0.808l0.377-0.084C15.006,32.132,17.863,31.233,20.515,29.887z"/>
</g>
</svg>
```
Of course, you could also adjust your Javascript code to cover the cases when a SVG file is not centered, but this was the easier route to take. |
In this code the function must take one index and simply delete it. The problem is in delete_current_transaction: PyCharm reports **list index out of range** on the line (`index = self.ui.tableView.selectedIndexes()[0]`).
edit_current_transaction must also take only one category, but it doesn't give this type of error.
```
def edit_current_transaction(self):
    index = self.ui.tableView.selectedIndexes()[0]
    id = str(self.ui.tableView.model().data(index))
    date = self.ui_window.dataEdit.text()
    category = self.ui_window.cb_chose_category.currentText()
    description = self.ui_window.le.description.text()
    balance = self.ui_window.le_balance.text()
    status = self.ui_window.cb_status.currentText()
    self.conn.update_transaction_query(date, category, description, balance, status, id)
    self.view_data()
    self.new_window.close()

def delete_current_transaction(self):
    index = self.ui.tableView.selectedIndexes()[0]
    id = str(self.ui.tableView.model().data(index))
    self.conn.delete_transaction_query(id)
    self.view_data()
    self.new_window.close()
```
This function is part of the main app class:
```
class MoneyManager(QMainWindow):
    def __init__(self):
        super(MoneyManager, self).__init__()
        self.ui = Ui_MainWindow()
        self.ui.setupUI(self)
        self.conn = Data()
        self.view_data()
        self.ui.NewTransButton.clicked.connect(self.open_new_transaction_window)
        self.ui.EditTransButton.clicked.connect(self.open_new_transaction_window)
        self.ui.DeleteTransButton.clicked.connect(self.delete_current_transaction())

    def open_new_transaction_window(self):
        self.new_window = QtWidgets.QDialog()
        self.ui_window = Ui_New_Transaction_window()
        self.ui_window.setupUi(self.new_window)
        self.new_window.show()
        sender = self.sender()
        if sender.text() == "New transaction":
            self.ui_window.NewTransButton.clicked.connection(self.add_new_transaction)
        else:
            self.ui_window.EditTransButton.clicked.connection(self.edit_current_transaction)

    def delete_current_transaction(self):
        index = self.ui.tableView.selectedIndexes()[0]
        id = str(self.ui.tableView.model().data(index))
        self.conn.delete_transaction_query(id)
        self.view_data()
        self.new_window.close()
```
It takes data from the Connection class. The function that gives this function its data:
```
def delete_transaction_query(self, id):
    sql_query = "DELETE FROM expenses WHERE ID=?"
    self.execute_query_with_params(sql_query, [id])
```
|
{"Voters":[{"Id":14732669,"DisplayName":"ray"},{"Id":16217248,"DisplayName":"CPlus"},{"Id":354577,"DisplayName":"Chris"}],"SiteSpecificCloseReasonIds":[]} |
{"Voters":[{"Id":104149,"DisplayName":"Bob Arnson"},{"Id":23852,"DisplayName":"Rob Mensching"},{"Id":9214357,"DisplayName":"Zephyr"}],"SiteSpecificCloseReasonIds":[13]} |
{"Voters":[{"Id":2359687,"DisplayName":"devlin carnate"},{"Id":9214357,"DisplayName":"Zephyr"},{"Id":354577,"DisplayName":"Chris"}],"SiteSpecificCloseReasonIds":[16]} |
How to type hint a union type as one or the other in TypeScript |
|typescript|react-context|union-types| |
I have set up `VNDocumentCameraViewController` for scanning documents automatically. I want to hide the cancel and capture buttons from the screen. Is it possible to hide these buttons?
Code:
```
import UIKit
import Vision
import VisionKit
import AVFoundation

class DocumentScannerViewController: UIViewController, VNDocumentCameraViewControllerDelegate {

    var previewLayer: AVCaptureVideoPreviewLayer?

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        let scannerViewController = VNDocumentCameraViewController()
        scannerViewController.delegate = self
        present(scannerViewController, animated: true)

        previewLayer = AVCaptureVideoPreviewLayer(session: AVCaptureSession())
        previewLayer?.videoGravity = .resizeAspectFill
        previewLayer?.frame = view.bounds
        previewLayer?.cornerRadius = 10
        previewLayer?.opacity = 0.75
        previewLayer?.borderColor = UIColor.red.cgColor
        previewLayer?.borderWidth = 5.0
        view.layer.addSublayer(previewLayer!)
    }

    func documentCameraViewController(_ controller: VNDocumentCameraViewController, didFinishWith scan: VNDocumentCameraScan) {
        for pageNumber in 0..<scan.pageCount {
            let image = scan.imageOfPage(at: pageNumber)
            if let imageData = image.jpegData(compressionQuality: 1.0) {
                print(imageData)
            }
            if let cgImage = image.cgImage {
                print("cgImage", cgImage)
            }
        }
        controller.dismiss(animated: true)
    }

    func documentCameraViewControllerDidCancel(_ controller: VNDocumentCameraViewController) {
        controller.dismiss(animated: true)
    }

    func documentCameraViewController(_ controller: VNDocumentCameraViewController, didFailWithError error: Error) {
        print(error)
        controller.dismiss(animated: true)
    }
}
```
Please refer to the attached screenshot for clarification.
![ScreenShot][1]
My question is, how can I hide these buttons? I've tried the above code, but haven't achieved any results yet.
[1]: https://i.stack.imgur.com/VfehM.png |
I observed that when a WebView app in Android is closed, it may continue running in the background under certain circumstances. For example, I have deployed a webapp that replicates databases from a remote CouchDB server to the local PouchDB, and I load the app into the Android WebView, such that when there are record updates in the CouchDB, the WebView can display the up-to-date records.
When the Android app is closed (home button pressed, etc.), it seems that the app is still fetching data from the CouchDB. This is what I want because I want the app to keep fetching data even when it is in the background, despite some potential performance issues. However, the app stops doing so after a while (probably after a few minutes).
Is there any way to keep the WebView app from ever being terminated?
|
Keep an Android WebView app running in the background |
|android-studio|couchdb|background-service|foreground-service| |
This is my project directory. I have deployed my code to Vercel; it works fine locally, but dynamic pages show 404 errors on Vercel.

I added trailingSlash: true in the Next config but it still didn't work |
grep -o 'CN=[^,]*' data | sed 's/^CN=//'
The grep command extracts CN=*blahblahblah* by finding 'CN=' followed by anything that is not a comma. The sed command then deletes 'CN=', leaving just the *blahblahblah*. Another way of doing it is
grep -o 'CN=[^,]*' data | tail -c +4
where the tail command excludes the first 3 characters, which would be 'CN=' |
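For comparison, the same extraction in Python with a single regex — the capture group plays the role of sed's prefix deletion (an illustrative sketch; the `data` string is a hypothetical sample line):

```python
import re

# Hypothetical sample input, e.g. an LDAP distinguished-name string.
data = "CN=Alice,OU=Users,CN=Bob,DC=example"

# 'CN=' followed by anything that is not a comma; the group drops the prefix.
names = re.findall(r"CN=([^,]*)", data)
print(names)  # ['Alice', 'Bob']
```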
I tried using a listener as Alex Mamo suggested in a comment, and it did give me an error. It turned out to be the Google Play services of the phone I was using for testing.
Also had connection issues with said phone.
Here is the post that helped me:
[https://stackoverflow.com/questions/38583278/firebasecloudmessaging-firebaseinstanceid-background-sync-failed-service-not][1]
[1]: https://stackoverflow.com/questions/38583278/firebasecloudmessaging-firebaseinstanceid-background-sync-failed-service-not
Thank you all in the comments!! |
I just ran into this issue with React and wanted to call out that when adding a mouse event listener to the **window**, or any DOM element, not a React component, we need to use the global DOM MouseEvent type, no import needed. If you use VSCode like me, you may have accidentally auto-imported MouseEvent from React! If so, just remove that import and you'll be good to go. Hope this helps someone, cheers! |
These questions are always hard to answer, since I do not know what kind of setup you have. But two things can be improved, and my suspicion is based on what you said here:
*"it will either fail immediately and say the "device reports readiness to read but returned no data", or it will return fragments of the word "hello" with no discernable pattern in responses and then fail with device reports readiness to read but returned no data."*
This sounds like you have a baudrate problem and generally that you are reading data too quickly.
Also, on the pico side, you are just printing Hello every second, this is not optimal, try to also include serial commands on that side as well.
For now, try something like this on your Pi 4:

```
import time
import serial

ser = serial.Serial('/dev/ttyACM0', 9600, timeout=30)  # make sure both sides run at the same baudrate
ser.flushInput()
time.sleep(0.5)  # include some sleep here as well

while True:
    line = ser.readline()  # readline is not ideal; consider read_all() or just read() instead
    if line:
        print(line.decode('utf-8'), end='')
```
I have another answer, where I give an example of prompting a reaction from arduino: https://stackoverflow.com/a/77967320/16815358 |
I'm new to Django and I'm building a site for booking.
I built the front-end in Vue 3 and the back-end in Django using Channels.
I've implemented the websockets, but now I'm trying to add a GET or POST entry point for confirmation via a link (like "url/api/confirm/confirm_code" or "url/api/confirm/confirm_code=code") in an email I send from the back-end. The problem is that my back-end never receives the request.
I tried like this:
app_name.urls
```
from django.contrib import admin
from django.urls import path, include
urlpatterns: list[path] = [
path('admin/', admin.site.urls),
path('api/', include('app.routing')),
]
```
app.routing
```
from django.urls import path
from app.consumers.view import ViewConsumer
from app.consumers.booking import BookingConsumer
from app.consumers.account import AccountConsumer
from app.consumers.profile import ProfileConsumer
from app.http.confirm_reservation import confirm_reservation
websocket_urlpatterns = [
path(r"ws/view/", ViewConsumer.as_asgi()),
path(r"ws/booking/", BookingConsumer.as_asgi()),
path(r"ws/account/", AccountConsumer.as_asgi()),
path(r"ws/profile/", ProfileConsumer.as_asgi()),
]
urlpatterns = [
path(r"confirm/<str:confirm_code>/", confirm_reservation, name="confirm_reservation"),
]
```
app.http.confirm_reservation
```
from django.http import JsonResponse
from app.services.booking import BookingService
def confirm_reservation(request, confirm_code: str):
print(request)
print(confirm_code)
return JsonResponse(BookingService().confirm_reservation(request))
```
Last but not least, if you have any suggestions for better code than what I wrote, please tell me in a comment.
Thanks.
I'm using my own site and Postman (http://192.168.1.5:8080/api/confirm/1234/) to try sending the confirmation code, but the front-end gets "Cannot GET /api/confirm/1234/" and the back-end doesn't print anything showing it received the call, not even an error. |
Add an http GET/POST entry point to a Django with channels websocket |
|django|rest|post|get|django-channels| |
null |
{"Voters":[{"Id":7916438,"DisplayName":"tevemadar"},{"Id":9214357,"DisplayName":"Zephyr"},{"Id":354577,"DisplayName":"Chris"}]} |
After repeated requests, I took a look at the expected output. This is probably not the best/neatest code, but it does work and is fairly compact.
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
function render(md) {
  let code = ""; // For the function's return value
  let mdLines = md.split("\n");
  const re = /^\s*([#]{1,6})\s/;
  for (let line of mdLines) {
    const hash = line.match(re);
    if (hash) {
      const h = hash[1].length;
      line = `<h${h}>${line.replace(re, "").trim()}</h${h}>`;
    }
    code += line;
  }
  return code;
}

let text1 = "## he#llo \n there \n # yooo";
let text2 = "# he#llo \n there \n ## yooo";
console.log(render(text1));
console.log(render(text2));
<!-- end snippet -->
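For reference, here is the same per-line heading regex ported to Python (a hypothetical port, kept deliberately close to the snippet above):

```python
import re

def render(md):
    code = ""
    heading = re.compile(r"^\s*(#{1,6})\s")  # same regex as the JS version
    for line in md.split("\n"):
        m = heading.match(line)
        if m:
            level = len(m.group(1))
            # Strip the leading hashes and wrap the rest in a heading tag.
            line = f"<h{level}>{heading.sub('', line).strip()}</h{level}>"
        code += line
    return code

print(render("## he#llo \n there \n # yooo"))
# <h2>he#llo</h2> there <h1>yooo</h1>
```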
|
To prevent removing other styles applied to the Text, you can do it as below:
```
Text(
    text = "سلام اندروید!",
    style = MaterialTheme.typography.bodyMedium.copy(textDirection = TextDirection.Rtl)
)
``` |
I have 3 data frames, as below:
[enter image description here](https://i.stack.imgur.com/WyAgq.png)
df_date, df1 and df2; df1 and df2 have the same columns A B C D.
I want to combine them into one data frame, as below; the result should have duplicated dates and the same columns A B C D:
[enter image description here](https://i.stack.imgur.com/UVFQ7.png) |
How to combine dataframes with different column numbers |
|python|pandas| |
null |
How can I sort a CSV file using pandas so that the names are sorted the same way Finder does?
In Finder I have millions of files named like "-2odhDKSZ22302_000.jpg". These file names are also in a CSV. I'd like to be able to go through the images via Finder and the CSV file in the same order.
If there are differences in how other operating systems (e.g. Ubuntu) sort file names could you please point this out.
I've tried searching on stack sites but can't find what I'm looking for. |
Sorting alphabetically using pandas |
|python|pandas|csv|sorting|alphabetical| |
null |
I encountered a similar issue. There is a thing in your csproj that may trigger it. Make sure to remove this:
```xml
<GeneratePackageOnBuild>true</GeneratePackageOnBuild>
```
[source](https://steveellwoodnlc.medium.com/error-nu5026-the-file-to-be-packed-was-not-found-on-disk-18bfbc6be4a)
The error line (originally in French) was:
NuGet.Build.Tasks.Pack.targets(221,5): error NU5026: The file 'C:\...\bin\Release\net48\....dll' to be packed was not found on disk. |
Please have a look at the [`hiltViewModel`][1] documentation:
> Returns an existing HiltViewModel-annotated ViewModel or creates a new one **scoped to the current navigation graph** present on the {@link NavController} back stack.
You have the following code in your Post Composable:
viewModel: PostViewModel = hiltViewModel()
This will return the **same ViewModel instance** for **every single** `Post` Composable, as all of them are in the same destination in your `NavGraph`. If you update a field in the ViewModel from one `Post`, then all other `Post`s will have the same change.
[1]: https://developer.android.com/reference/kotlin/androidx/hilt/navigation/compose/package-summary#hiltViewModel(androidx.lifecycle.ViewModelStoreOwner,kotlin.String) |
I have a container with `container: inline-size` and `max-width: 500px`. Inside it is a paragraph with `font-size: 2cqw`. My understanding is that the font size is relative to the width of the container, so the font size should stop growing once the container hits 500px. But instead the font size keeps growing as the browser window gets wider, even though the container stays at 500px wide. Why is that?
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-html -->
<!DOCTYPE html>
<html>
<head>
<style type="text/css">
* {
margin: 0;
padding: 0;
}
.container {
container: inline-size;
max-width: 500px;
background-color: rgb(213, 213, 213);
}
p {
font-size: 2cqw;
}
</style>
</head>
<body>
<div class="container">
<p>Paragraph with font-size:2cqw, inside container with 90% width max-width=500px.</p>
  </div>
</body>
</html>
<!-- end snippet -->
|
I am creating a Docker image with a `env.yml` file. This yml file has a Python package named mohit-housing-price-prediction=0.0.2 which I created locally. While creating an image using
`docker build -t mohitsharmatigeranalytics/tamlep:0.3 .`
I get
```
164.0 ERROR: Could not find a version that satisfies the requirement mohit-housing-price-prediction==0.0.2 (from versions: none)
```
My `Dockerfile` is
```
FROM continuumio/miniconda3
LABEL maintainer="Mohit Sharma"
WORKDIR /app
# Copy project files
COPY . /app
# Install dependencies using Conda
RUN conda env create -f docker_environment.yml
# Activate the Conda environment
SHELL ["conda", "run", "-n", "mle-dev", "/bin/bash", "-c"]
# Set executable permissions for the Python scripts
RUN chmod +x src/housing_price_prediction/components/ingest_data.py \
&& chmod +x src/housing_price_prediction/components/train.py \
&& chmod +x src/housing_price_prediction/components/score.py
# Set executable permissions for the .sh file
RUN chmod +x /app/run_scripts.sh
# Set entrypoint and default command
ENTRYPOINT [ "conda", "run", "-n", "mle-dev" ]
CMD ["./run_scripts.sh"]
```
and my run_scripts.sh file is
```
#!/bin/bash
cd /app/src/housing_price_prediction/components
# Run data_ingest.py
echo "Running data_ingest.py..."
python ingest_data.py
# Run train.py
echo "Running train.py..."
python train.py
# Run score.py
echo "Running score.py..."
python score.py
```
Since I have already created the package locally, shouldn't the environment be created without any errors in the Docker image? |
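For reference, pip inside the image can only resolve `mohit-housing-price-prediction` from PyPI, where it does not exist; a locally built package is not visible to the build unless you point pip at the copied source. One sketch (assuming the package's `setup.py`/`pyproject.toml` sits under `/app` and the package name is removed from the yml) is to install it after the env is created:

```dockerfile
# Install the local package from the copied source tree instead of PyPI
# (the /app path is an assumption based on the COPY . /app above).
RUN conda run -n mle-dev pip install /app
```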
|java|spring-boot|spring-security|spring-data-jpa| |
From what I understood, the only thing that seems to be missing is the starting number:
```javascript
function count(num1, num2)
{
let result = ""; // Always initialise in function to ensure it always sends the current result only.
for (let i = num1; i <= num2; i++) // Set i = num1 instead of num+1
{
result += i;
if (i < num2) { result += ","; }
}
return result;
}
``` |
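A quick check of the fixed function (repeated here so the snippet runs standalone):

```javascript
function count(num1, num2) {
  let result = "";
  for (let i = num1; i <= num2; i++) {
    result += i;
    if (i < num2) { result += ","; }
  }
  return result;
}

console.log(count(3, 6)); // "3,4,5,6" (the starting number is now included)
```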
So, I have a many-to-many SQLAlchemy relationship defined like so:
```python
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, sessionmaker
from sqlalchemy import Column, Integer, String, ForeignKey, UniqueConstraint, Table, create_engine
from sqlalchemy.orm import relationship, registry
mapper_registry = registry()
Base = declarative_base()
bridge_category = Table(
"bridge_category",
Base.metadata,
Column("video_id", ForeignKey("video.id"), primary_key=True),
Column("category_id", ForeignKey("category.id"), primary_key=True),
UniqueConstraint("video_id", "category_id"),
)
class BridgeCategory: pass
mapper_registry.map_imperatively(BridgeCategory, bridge_category)
class Video(Base):
__tablename__ = 'video'
id = Column(Integer, primary_key=True)
title = Column(String)
categories = relationship("Category", secondary=bridge_category, back_populates="videos")
class Category(Base):
__tablename__ = 'category'
id = Column(Integer, primary_key=True)
text = Column(String, unique=True)
videos = relationship("Video", secondary=bridge_category, back_populates="categories")
engine = create_engine('sqlite:///:memory:', echo=True)
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
with Session() as s:
v1 = Video(title='A', categories=[Category(text='blue'), Category(text='red')])
v2 = Video(title='B', categories=[Category(text='green'), Category(text='red')])
v3 = Video(title='C', categories=[Category(text='grey'), Category(text='red')])
videos = [v1, v2, v3]
s.add_all(videos)
s.commit()
```
Of course, because of the unique constraint on `Category.text`, we get the following error.
```
sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) UNIQUE constraint failed: category.text
[SQL: INSERT INTO category (text) VALUES (?) RETURNING id]
[parameters: ('red',)]
```
I am wondering what the best way of dealing with this is. With my program, I get a lot of video objects, each with a list of unique Category objects. The text collisions happen across all these video objects.
I could loop through all videos, and all categories, forming a Category set, but that's kinda lame. I'd also have to do that with the 12+ other many-to-many relationships my Video object has, and that seems really inefficient.
Is there like a "insert ignore" flag I can set for this? I haven't been able to find anything online concerning this situation. |
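There is no "insert ignore" flag on ORM relationships that I know of; the usual workaround is a get-or-create helper (sometimes called the "unique object" pattern) that hands back one shared Category per text. A sketch trimmed to a single model, where the cache dict lives for the duration of one import run:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Category(Base):
    __tablename__ = "category"
    id = Column(Integer, primary_key=True)
    text = Column(String, unique=True)

def get_or_create(session, cache, text):
    # Reuse a single Category object per text value: check an in-memory
    # cache first, then the database, and only then create a new object.
    if text not in cache:
        obj = session.query(Category).filter_by(text=text).one_or_none()
        cache[text] = obj if obj is not None else Category(text=text)
    return cache[text]
```

You would then build videos with `categories=[get_or_create(s, cache, t) for t in texts]`, so duplicate texts map to the same pending object instead of colliding on the unique constraint.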
I have interfaces
**Angular Typescript Class**
interface A_DTO {
type: string,
commonProperty: string,
uniquePropertyA: string,
etc..
}
interface B_DTO {
type: string,
commonProperty: string,
uniquePropertyB: string,
etc...
}
type AnyDTO = A_DTO | B_DTO
I have an object fetched from an API. When it is fetched, it immediately gets cast to A_DTO or B_DTO by reading the 'type' property. After that it gets saved to a service for storage, in a single variable of type AnyDTO (I use that service variable in the components I work with; casting back to AnyDTO doesn't cause any properties to be lost, so I'm happy)
**Angular Template**
But, in a component, I have some template code,
@if(object.type == "Type_A") {
// do something
// I can do object.commonProperty
// but I cannot access object.uniquePropertyA
} @ else { object.type == "Type_B") {
// do something
// I can do object.commonProperty
// but I cannot access object.uniquePropertyB
}
Note, above, object gets read as type, AnyDTO = A_DTO | B_DTO
**Angular Typescript Class**
I tried creating a type guard on the interface, in the typescript class code, e.g.
protected isTypeA(object: any): object is A_DTO {
return A_DTO?.Type === "Type_A";
},
**Angular Template**
Then
@if(isTypeA(object) && object.type == "Type_A") {
// do something
// I can do object.commonProperty
// but I still cannot access object.uniquePropertyA...
} @ else { object.type == "Type_B") {
// do something
// but I cannot access object.uniquePropertyB
}
Even with the type guard being called in the template, inside the @if, 'object' still gets treated as type A_DTO | B_DTO. Despite what I read on the internet, type narrowing does not happen, so I can only access the common properties of 'object'.
I also tried to explicitly type cast in the template, using things like (object as A_DTO).uniquePropertyA, but that doesn't work in the Angular template area.
Any ideas for a ***dynamic*** solution (ideally one that does not involve creating separate variables for each subtype in the TypeScript class)?
Cheers,
ST
|
Angular - type casting and accessing properties of subtypes / classes in template |
|angular|typescript|casting| |
I recently did a new install of Visual Studio Community 2022 after uninstalling a previous install. I selected 2 workloads: ASPNet & Web Development and .Net Desktop Development. But when launching Visual Studio I get no new project template when selecting "Create a New Project". The install is on Windows 11 machine.

I tried the solution [here](https://stackoverflow.com/questions/70041859/all-project-templates-dont-appear-in-visual-studio-2022) but it did not resolve the issue. I also tried "Repair" from the Visual Studio Installer; it also did not resolve the issue.
I expected to see new project templates (e.g., Console application).
As per [this answer](https://stackoverflow.com/a/75202688/23186684),
`list[dict]` is supported from Python 3.9 and up. You need to upgrade your Python version.
You can change `FROM python:3.8-slim` to `FROM python:3.9-slim` or any higher version in your Dockerfile |
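If upgrading isn't possible, the pre-3.9 spelling from the `typing` module works on 3.8 as well:

```python
from typing import Dict, List

# On 3.8, list[dict] raises "TypeError: 'type' object is not subscriptable";
# the typing aliases below work on 3.8 and on newer versions alike.
def load_records(rows: List[Dict[str, str]]) -> int:
    return len(rows)

print(load_records([{"id": "1"}, {"id": "2"}]))  # 2
```

Alternatively, `from __future__ import annotations` at the top of the module defers annotation evaluation, so `list[dict]` becomes legal in annotations even on 3.8.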
Evaluating this in Assembly (A % B) % (C % D) |
null |
```lang-x86asm
INCLUDE Irvine32.inc
INCLUDELIB Irvine32.lib
INCLUDELIB kernel32.lib
INCLUDELIB user32.lib
.data
A SBYTE 10d ; A is an 8-bit signed integer
B SBYTE 2d ; B is an 8-bit signed integer
cc SBYTE 20d ; C is an 8-bit signed integer
D SBYTE 5d ; D is an 8-bit signed integer
.code
main PROC
mov EAX, 0
mov EDX, 0
mov al, A ; Load A into AL register
imul B
movsx bx, cc
imul bx
movsx bx, D
imul bx
call DumpRegs ;
exit
main ENDP
END main
```
I have this code and I want to modify it to print output for (A % B) % (C % D), but when I use `idiv` the code doesn't give any output.
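For reference, a single modulo with `idiv` needs the dividend sign-extended into DX:AX first (via `cwd` for 16-bit operands); forgetting that is a common reason the program dies before printing anything. A sketch of computing A % B this way (register choices are illustrative, not the only option):

```lang-x86asm
movsx ax, A      ; sign-extend 8-bit A into AX
cwd              ; sign-extend AX into DX:AX, required before IDIV
movsx bx, B
idiv bx          ; quotient -> AX, remainder -> DX
mov  ax, dx      ; AX now holds A % B
```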
My laptop runs Windows 10 and I use it to write Python code. Setting up TensorFlow and Keras went fine (the versions are found), and nvidia-smi and nvcc --version both work, but TensorFlow does not detect the GPU.
Here is my code:
```python
import tensorflow as tf
print(tf.config.list_physical_devices('CPU'))
print(tf.config.list_physical_devices('GPU'))
```
The output is:
```
[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU')]
[]
```
I hope someone can help me. Thanks a lot!
win10 Tensorflow can no detect GPU nvidia (laptop Dell) |
|python|tensorflow|gpu|nvidia| |
null |
I'm working with a Kafka Streams application where we use dynamic topic determination based on message headers. In our setup, it's normal for topics to be deleted while the application is running. Messages for a deleted topic might still occasionally arrive, but I want to simply ignore them. However, even after receiving just one message for a non-existent topic, I encounter an infinite loop of errors:
```
[kafka-producer-network-thread | stream-example-producer] WARN org.apache.kafka.clients.NetworkClient -- [Producer clientId=stream-example-producer] Error while fetching metadata with correlation id 74 : {test1=UNKNOWN_TOPIC_OR_PARTITION}
org.apache.kafka.common.errors.TimeoutException: Topic test1 not present in metadata after 60000 ms.
[kafka-producer-network-thread | stream-example-producer] WARN org.apache.kafka.clients.NetworkClient -- [Producer clientId=stream-example-producer] Error while fetching metadata with correlation id 79 : {test1=UNKNOWN_TOPIC_OR_PARTITION}
```
This infinite loop of errors essentially causes the application to stop working. How can I configure my Kafka Streams application to ignore messages for deleted topics without entering an infinite loop of errors? Is there a way to handle this situation?
Here's a simplified example of my application code:
```
StreamsBuilder builder = new StreamsBuilder();
List<String> dynamicTopics = List.of("good_topic", "deleted_topic");
builder.stream("source_topic").to((k, v, c) -> dynamicTopics.get(new Random().nextInt(dynamicTopics.size()))); //in real application from header
KafkaStreams streams = new KafkaStreams(builder.build(), props);
streams.start();
```
Automatic topic creation is disabled.
**I tried the following to handle and ignore the error:**
1. Use KafkaAdmin: However, between checks for existing topics, a topic can be deleted, which doesn't solve the issue.
2. Set UncaughtExceptionHandler:
```
streams.setUncaughtExceptionHandler(new StreamsUncaughtExceptionHandler() {
@Override
public StreamThreadExceptionResponse handle(Throwable throwable) {
return StreamThreadExceptionResponse.SHUTDOWN_APPLICATION;
}
});
```
But the code doesn't even reach this handler.
3. Set ProductionExceptionHandler:
```
props.put(StreamsConfig.DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG,
CustomProductionExceptionHandler.class.getName());
```
Again, the code doesn't reach this handler.
4. Set Producer Interceptor:
```
props.put(StreamsConfig.producerPrefix(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG), ErrorInterceptor.class.getName());
```
The code reaches this interceptor, but I'm unable to resolve the issue from here.
5. Configure Producer Properties:
```
props.put(StreamsConfig.RETRY_BACKOFF_MS_CONFIG, "5000");
props.put(StreamsConfig.producerPrefix(ProducerConfig.MAX_BLOCK_MS_CONFIG), "8000");
props.put(StreamsConfig.producerPrefix(ProducerConfig.LINGER_MS_CONFIG), "0");
props.put(StreamsConfig.producerPrefix(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG), "10000");
props.put(StreamsConfig.producerPrefix(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG), "10000");
props.put(StreamsConfig.producerPrefix(ProducerConfig.RETRIES_CONFIG), 0);
```
I tried adjusting these producer properties, but Kafka Streams still attempts to handle the error indefinitely |
First of all, I would update the *fit* method of your *Preprocessor* class. There you are only fitting the k_means estimator, not the ColumnTransformer and its associated objects, which also require fitting.
When calling GridSearchCV, the fit/fit_transform method is called for every piece of the pipeline, but when the fit method is called on an object of the *Preprocessor* class, nothing fits the underlying objects. Something like the following, taking advantage of the fit and transform methods of ColumnTransformer, should be a step in the right direction:
```python
def fit(self, X, y=None):
    self._cluster_simil = ClusterSimilarity(n_clusters=10, gamma=1., random_state=42)
    self._preprocessing().fit(X)
    return self

def transform(self, X):
    return self._preprocessing().transform(X)
```
If this doesn't fix the issue, I would need the dict of the parameters you are trying to optimize (param_grid), since the notation "cat__ocean_proximity_ISLAND" indicates that "ocean_proximity_ISLAND" is being used as a parameter somewhere in your pipeline.
Finally, allow me to give you this piece of advice from someone who has tried to extend classes from sklearn and tensorflow many times (based on countless hours of wrestling with the libraries). There is a reason why pipelines are there to combine estimators (in tensorflow something similar happens with the Sequential model class). Extending these classes by inheritance/overriding is really, really hard, because you inherit a lot of baggage from these complex classes which you are not accounting for and which is not apparent. The way to go IMO is to use pipelines, and if you want more complex structures, use dependency injection, where you create and handle instances of the objects you need inside a container object.
In general I would say that if you go to the sklearn docs for a class like [ColumnTransformer][1], hit the blue [source] hyperlink that takes you to the GitHub implementation of any method, and feel confident you could implement something similar, you are good to go to try and create a new estimator class that will interoperate with the rest of the classes in the library without breaking anything.
I am not judging your skills, but be cautious: when you inherit from BaseEstimator and override fit and transform, there might be many other things you also have to implement to make it compatible with classes such as GridSearchCV which you might not know about. That is why I would always recommend sticking to pipelines and using dependency injection instead of inheritance in sklearn as much as possible.
[1]: https://scikit-learn.org/stable/modules/generated/sklearn.compose.ColumnTransformer.html |
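A minimal sketch of the dependency-injection shape described above (the injected `inner` would be, e.g., your ColumnTransformer; all names are illustrative):

```python
class Preprocessor:
    """Wraps an injected transformer instead of inheriting from sklearn.

    `inner` can be any object with fit/transform (e.g. a ColumnTransformer);
    the wrapper only delegates, so sklearn's own machinery stays intact.
    """

    def __init__(self, inner):
        self.inner = inner

    def fit(self, X, y=None):
        self.inner.fit(X, y)
        return self

    def transform(self, X):
        return self.inner.transform(X)

    def fit_transform(self, X, y=None):
        return self.fit(X, y).transform(X)
```

The container owns its collaborators and forwards to them, which avoids inheriting hidden contract requirements from BaseEstimator subclasses.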
{"Voters":[{"Id":23825920,"DisplayName":"AlGM93"}]} |
Using Auto Loader to ingest files from ADLS Gen2. However, I want to ingest only new files. Using the following config still does not prevent existing files from being ingested. Is anyone else facing the same issue?
"cloudFiles.format": "json",
"multiline": True,
"pathGlobFilter": f"*.json",
"includeExistingFiles": "false",
"cloudFiles.schemaLocation": inferred_schema_folder,
"cloudFiles.schemaEvolutionMode": "addNewColumns",
"cloudFiles.inferColumnTypes": "true",
"cloudFiles.rescuedDataColumn": "unexpected_data",
"cloudFiles.maxFilesPerTrigger": 10000,
I tried switching to file notification mode instead of directory listing, but that didn't work either:
"cloudFiles.useNotifications": "true",
"cloudFiles.subscriptionId": config.AZURE_SUB_ID,
"cloudFiles.tenantId": config.AZURE_TENANT_ID,
"cloudFiles.clientId": config.AZURE_CLIENT_ID,
"cloudFiles.clientSecret": config.AZURE_CLIENT_SECRET,
"cloudFiles.resourceGroup": "Development", |
SQLAlchemy Many-to-Many Relationship: UNIQUE constraint failed |
|python|sqlalchemy|many-to-many| |
Here's a failing test:
```java
// these are pretty basic dependencies so I won't include imports or pom
@Test
void testIfNoHeadersAreAddedToRequestImplicitly() {
Mono<Map<String, Object>> responseMono = WebClient.builder()
.baseUrl("https://httpbin.org")
.build()
.get()
.uri("/headers")
.retrieve()
.bodyToMono(new ParameterizedTypeReference<>() {});
StepVerifier.create(responseMono.map(m -> m.get("headers")).cast(Map.class))
.expectNextMatches(Map::isEmpty)
.verifyComplete();
}
```
```xml
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>3.1.5</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>
<!-- ... -->
<properties>
<java.version>17</java.version>
<spring-cloud.version>2022.0.4</spring-cloud.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter</artifactId>
</dependency>
<!-- ↓ includes WebFlux starter -->
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-gateway</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.wiremock</groupId>
<artifactId>wiremock-standalone</artifactId>
<version>3.5.1</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>io.projectreactor</groupId>
<artifactId>reactor-test</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-dependencies</artifactId>
<version>${spring-cloud.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
```
I'm pretty sure `WebClient` adds some default request headers, such as `HOST`, and I want to *see* where it happens. It so happens I need to disable this feature, but I'm just as eager to *see* the cause
So far, my debugging of this issue was unsuccessful – including debugging of `DefaultWebClient.exchange()` which seems to work as expected
However, I discovered that a *wrong* request is passed to this callback
```java
// org.springframework.http.client.reactive.ReactorClientHttpConnector.connect(..)
return requestSender
// put a breakpoint on the lambda
.send((request, outbound) -> requestCallback.apply(adaptRequest(method, uri, request, outbound)))
```
When I dug deeper, Netty just pretended those headers had been there all along (once any request appears, it already has those default headers).
The issue is reproduced with WireMock too, so it's unlikely to have anything to do with `httpbin` (btw, in case you're unfamiliar with it, you may visit [the API's page][1]).
[1]: https://httpbin.org/#/Request_inspection/get_headers |
Font-size cqw units |
|html|css| |
I've written a VBA script that inserts data from an Excel workbook into an xml-file.
The xml-file to be edited is chosen from a filedialogpicker:
Public strFile As String
Sub SetPath()
' Declare filedialog picker
Set fd = Application.FileDialog(msoFileDialogFilePicker)
' Define attributes for the filepicker to only allow a single xml file
With fd
.Filters.Clear
.Filters.Add "XML files", "*.xml", 1
.Title = "Choose file"
.AllowMultiSelect = False
.InitialFileName = ActiveWorkbook.Path & "\"
' If a file is chosen save the path and name
If .Show = True Then
strFile = .SelectedItems(1)
Else
'MsgBox "Error"
End If
End With
End Sub
After which I open, edit and save the file:
Sub WriteXML()
Call SetPath
' Open the xml-file
Dim xmlFile As String
xmlFile = strFile
Set xmlWriter = CreateObject("Microsoft.XMLDOM")
xmlWriter.async = False
xmlWriter.Load xmlFile
' long codeblock for editing the file
xmlWriter.Save xmlFile
Set xmlWriter = Nothing
End Sub
The code works fine on my private computer, however running it on my company PC I get the error:
"the filename directory name or volume label syntax is incorrect"
I suspect this involves my company PC using SharePoint, which makes the save path:
'https://company-my.sharepoint.com/personal/mat_company_com/Documents/Desktop/myTemplates/Templates/file.xml'
As opposed to:
'C:\Users\Mat\Documents\file.xml'
Is there any way to fix this? |
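One workaround worth trying (a sketch only; the folder layout varies per tenant and user) is to map the SharePoint URL back to the locally synced OneDrive folder before loading and saving, since MSXML's `Save` expects a filesystem path. Both the environment variable and the URL prefix below are assumptions:

```vba
' Sketch: rebuild a local path from the OneDrive sync root.
Dim localRoot As String, urlRoot As String
localRoot = Environ$("OneDriveCommercial")
urlRoot = "https://company-my.sharepoint.com/personal/mat_company_com/Documents"
If InStr(1, strFile, urlRoot, vbTextCompare) = 1 Then
    strFile = localRoot & Replace(Mid$(strFile, Len(urlRoot) + 1), "/", "\")
End If
```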
Write xml from VBA |
|excel|xml|vba| |
Is it possible to show a thrown error / console.log output in the Swagger UI when executing a call from the UI?
I don't know how to start with this and I didn't find any hints searching the internet. Maybe someone with deeper insight can give a quick and reliable answer.
Show "thrown error"/"console.log" at the swagger ui |
|swagger|openapi|swagger-ui|openapi-generator|nestjs-swagger| |
null |
When we write test cases in Robot Framework, how do we add test cases under a specific tag?
I want to know how to use tags in Robot Framework: how to define them, how to add specific test cases under a specific tag, and how to execute particular test cases using tags.
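For reference, tags are declared with the `[Tags]` setting on a test (or `Force Tags` for a whole file) and selected at run time with `--include`/`--exclude`. A minimal sketch (test names and tag names here are made up):

```robotframework
*** Settings ***
Force Tags      regression    # applied to every test in this file

*** Test Cases ***
Login Works
    [Tags]    smoke
    Log    Checking login

Logout Works
    Log    Checking logout
```

Running `robot --include smoke tests/` then executes only `Login Works`, while `robot --include regression tests/` runs both.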
how to mention tags in robotframework while writing scripts |
|tags|robotframework| |
null |
I have a package in SSIS which in a first step executes these two queries:
truncate table mytable;
insert into mytable
select * from myview
then, a second step reads data from a view that joins mytable with other tables, and writes all the data into a csv file.
The job has never given me any problems; in the last three days, however, the csv file that is written only contains the header row with the field names.
I specify that the job does not fail either; the execution is successful.
The csv file that is written is very big, about 1.5 GB, as I have to upload it to an application that requires the entire data history on each load, so I cannot remove any data by applying a filter before writing the file.
Can you help me?
I've tried changing the timeout option of the first step to 300 seconds (it was 0); the exported file is still empty.
[Timeout setting](https://i.stack.imgur.com/U73md.png)
Then I tried to change the constraint option in the connection between the first and the second step.
It was "Success"; I tried "Completion". The file is still empty.
[Precedence constraint editor](https://i.stack.imgur.com/Pt90Y.png)
|
I have a problem with Excel customized ribbon.
The file I'm using has a customized ribbon and I need to edit/remove it, but I can't find it among the available ''Main Tabs'', so I do not have the option to remove/edit it.
Any clues on this?

I tried using "Customize the Ribbon" options to remove it, but the button name (CEAF) dose not appear between the ''Main Tabs'' available. Also, I tried to use the ''Quick access Toolbar'', this time the customized ribbon name appeared (Upadacitinib CEA Tab) in the commands and this specific button (CEAF) also appeared, but still I don not have the option to remove/edit it.

|
I know that benchmarking in OPA reports memory usage as B/op. I want to see how much actual memory it occupies when the data.json file is large, plus how much memory evaluation consumes with a large file. How can I check this? Does B/op include all the usage I want?
`opa bench --data policy.rego --data data.json --input input.json 'data.opa.allow' --count 1 --benchmem`

I tried the OPA benchmark command.
I am using ESP-IDF directly (the VS Code ESP-IDF extension); for PlatformIO the process is not much different.
My device is an ESP32-S3.
There are two things you want to change:
- ***IDE serial monitor baudrate*** - the baudrate the IDE monitor is expecting.
- ***ESP32 console output baudrate*** - the baudrate of the UART the ESP32 device uses for console output.
# Setting the IDE Serial Monitor Baudrate
For the ESP-IDF extension, go to the ***settings*** and type `esp-idf baud rate` in the search box; you will get two results: ***"ESP-IDF Flash Baud rate"*** and ***"ESP-IDF Monitor Baud rate"***. The monitor one is what you are looking for. Set it to the value you want, e.g. 230400.
For PlatformIO extension, you can set the monitor speed by adding/changing the `monitor_speed` option in ***platformio.ini***, e.g. `monitor_speed = 230400`.
# Setting the ESP32 Console Output Baudrate
There are two ways I found.
### 1. Hard code the baudrate
The default console output is `UART0`, so right at the beginning of the program add: `uart_set_baudrate(UART_NUM_0, 230400)`:
```c
#include "driver/uart.h"
void app_main()
{
uart_set_baudrate(UART_NUM_0, 230400);
// ...
}
```
When using this option, the serial monitor may output some garbage before the code reaches this line, due to init logs being sent before the baudrate is changed to what the monitor expects.
### 2. Configure ESP-IDF sdkconfig File
To open a configuration editor, for ESP-IDF you can use the command `ESP-IDF: SDK Configuration editor` (or the corresponding icon) or type `idf.py menuconfig` (for PlatformIO `pio run -t menuconfig`) in the terminal.
The configurations you want to change are:
1. ***Component config -> ESP System Settings -> Channel for console output***: set it to `Custom UART`.
2. ***Component config -> ESP System Settings -> UART peripheral to use for console output (0-1)***: set it to `UART0`.
3. ***Component config -> ESP System Settings -> UART console baud rate***: set it to the value you want, e.g. 230400.
P.S. The ***CONFIG_CONSOLE_UART_BAUDRATE*** option of the ***sdkconfig*** is no longer supported, see [ESP-IDF Monitor](https://docs.espressif.com/projects/esp-idf/en/stable/esp32/migration-guides/release-5.x/5.0/tools.html#esp-idf-monitor).
|
includeExistingFiles: false does not work in Databricks Autoloader |
|apache-spark|pyspark|azure-databricks|spark-structured-streaming| |
null |
I'm developing a project on the Raspberry Pi Pico W and I'm trying to set up a custom Wi-Fi manager using the [AsyncWebServer_RP2040W library](https://github.com/khoih-prog/AsyncWebServer_RP2040W) and [AsyncTCP_RP2040W](https://github.com/khoih-prog/AsyncTCP_RP2040W) on [Arduino Pico](https://arduino-pico.readthedocs.io/en/latest/index.html). The code for the async server is below:
```cpp
#include <Arduino.h>
#include "AsyncTCP_RP2040W.h"
#include "AsyncWebServer_RP2040W.h"
#include <WiFi.h>
#include <LittleFS.h>
// Wi-Fi Manager definitions
const char *WIFI_USER = "WIFI_MANAGER";
const uint8_t WIFI_CHANNEL = 6;
const uint8_t WIFI_PORT = 80;
// Read File from LittleFS
String readFile(fs::FS &fs, const char * path) {
Serial.printf("Reading file: %s\r\n", path);
File file = fs.open(path, "r");
if(!file || file.isDirectory()) {
Serial.println("- failed to open file");
return String();
}
String fileContent;
while(file.available()) {
fileContent = file.readStringUntil('\n');
break;
}
return fileContent;
}
// Write file to LittleFS
void writeFile(fs::FS &fs, const char * path, const char * message) {
Serial.printf("Writing file: %s\r\n", path);
File file = fs.open(path, "w");
if(!file) {
Serial.println("- failed to write file");
return;
}
if(file.print(message)) {
Serial.println("- writed file");
}
else {
Serial.println("- failed to write file");
}
}
void setup() {
// Init serial, LittleFS and ON LED
Serial.begin(115200);
pinMode(LED_BUILTIN, OUTPUT);
digitalWrite(LED_BUILTIN, HIGH);
LittleFS.begin();
// Load values saved in LittleFS
ssid = readFile(LittleFS, ssidPath);
pass = readFile(LittleFS, passPath);
Serial.println(ssid);
Serial.println(pass);
if(ssid == "") {
AsyncWebServer server(WIFI_PORT);
// Connect to Wi-Fi network with SSID and password
Serial.println("Beggining AP...");
WiFi.softAP(WIFI_USER, NULL, WIFI_CHANNEL);
IPAddress IP = WiFi.softAPIP();
Serial.print("AP IP: ");
Serial.println(IP);
// Web Server Root URL
server.on("/", HTTP_GET, [](AsyncWebServerRequest *request){
request->send(LittleFS, "/index.html", "text/html");
});
server.serveStatic("/", LittleFS, "/");
server.on("/", HTTP_POST, [](AsyncWebServerRequest *request) {
int params = request->params();
for(int i=0;i<params;i++) {
AsyncWebParameter* p = request->getParam(i);
if(p->isPost()) {
// HTTP POST ssid value
if (p->name() == PARAM_INPUT_1) {
ssid = p->value().c_str();
Serial.print("SSID: ");
Serial.println(ssid);
// Write file to save value
writeFile(LittleFS, ssidPath, ssid.c_str());
}
// HTTP POST pass value
if (p->name() == PARAM_INPUT_2) {
pass = p->value().c_str();
Serial.print("Password: ");
Serial.println(pass);
// Write file to save value
writeFile(LittleFS, passPath, pass.c_str());
}
}
}
request->send(200, "text/plain", "Finished. The device will restart");
vTaskDelay(pdMS_TO_TICKS(3000));
rp2040.reboot(); // restart device
});
server.begin();
}
}
```
The software needs to detect whether a connection is available; if not, it opens an AP and serves an HTML page where the user can submit the parameters for the Wi-Fi network. These params are saved in a .txt file using LittleFS, then the system reboots, and on the next boot it can read the file with the Wi-Fi credentials and connect to the network. The code compiles and uploads to the Pico, but when I connect to the AP and type the IP in the browser, it gives an ERR_CONNECTION_REFUSED error. I tried changing various parameters without success; earlier versions of the code opened the page, but now it doesn't open at any cost. Can someone give a hint?