I have a dashboard in **Datadog** that uses a "status" template variable to filter the log messages. This works fine for the majority of the entries.
There are, however, some actual errors that do not have status=Error; they are logged as Info.
When I select "Error" from the variable dropdown, I would like to get the entries with status=Error plus any other entry (regardless of status) which has the word "Error" as part of the content. If I select status="Debug" or status="Info", I would like to get only the entries that match that status.
I can't figure out how to construct the query. Please help.
I tried different queries, but they did not produce the result I wanted. I am new to Datadog, so I am not sure how to construct the query. |
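A hedged sketch of a widened query (assuming standard Datadog log search syntax; `status:` here is the log status attribute, and the free-text term matches the message content):

```
status:error OR *Error*
```

Note that a single template variable cannot apply the free-text clause only when "Error" is selected; with the query above, selecting "Info" or "Debug" would still pull in the free-text matches, so the `OR` clause may need to live in the widget query rather than in the variable itself.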
Datadog Dashboard with a template variable |
I'm currently in the process of developing a wake word model for my AI Assistant, and I'm facing a dilemma regarding which output I should feed into my Linear Layer. Could someone clarify the difference between the two available outputs, and explain why one might be recommended over the other?
Thank you! |
Question 1:
In the forward(self, x) method of the SpecAugment class, there's a property named self._forward that seems to behave like a function. However, it also receives an 'x' variable as if it were a function. How does this interaction work?
Question 2:
After passing the 'x' value to the self._forward property, the code selects a policy from the 'policies' dictionary based on the 'policy' parameter. Assuming I understand the answer to the first question correctly, it seems that the 'x' value provided to self._forward is then used to choose one of 'self.policy1', 'self.policy2', or 'self.policy3'. How does this selection process work, and how do these policies run without explicitly taking any values?
Below is the original code for reference:
```python
class SpecAugment(nn.Module):

    def __init__(self, rate, policy=3, freq_mask=2, time_mask=4):
        super(SpecAugment, self).__init__()
        self.rate = rate

        self.specaug1 = nn.Sequential(
            torchaudio.transforms.FrequencyMasking(freq_mask_param=freq_mask),
            torchaudio.transforms.TimeMasking(time_mask_param=time_mask)
        )

        self.specaug2 = nn.Sequential(
            torchaudio.transforms.FrequencyMasking(freq_mask_param=freq_mask),
            torchaudio.transforms.TimeMasking(time_mask_param=time_mask),
            torchaudio.transforms.FrequencyMasking(freq_mask_param=freq_mask),
            torchaudio.transforms.TimeMasking(time_mask_param=time_mask)
        )

        policies = {1: self.policy1, 2: self.policy2, 3: self.policy3}
        self._forward = policies[policy]

    def forward(self, x):
        return self._forward(x)

    # this applies specaug1
    def policy1(self, x):
        probability = torch.rand(1, 1).item()
        if self.rate > probability:
            return self.specaug1(x)
        return x

    # this applies specaug2
    def policy2(self, x):
        probability = torch.rand(1, 1).item()
        if self.rate > probability:
            return self.specaug2(x)
        return x

    # this makes a random choice between policy1 and policy2
    def policy3(self, x):
        probability = torch.rand(1, 1).item()
        if probability > 0.5:
            return self.policy1(x)
        return self.policy2(x)
```
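To illustrate the mechanism both questions ask about, here is a minimal, hypothetical class (not the original code): the dict maps policy numbers to *bound methods*, `self._forward` stores whichever method was selected, and `self._forward(x)` calls that method with `x` (`self` is already attached to the bound-method object, so it does not need to be passed again):

```python
class Dispatcher:
    def __init__(self, policy=1):
        # Bound methods are first-class objects; the dict maps a policy
        # number to one of them, and _forward keeps the chosen method.
        policies = {1: self.double, 2: self.square}
        self._forward = policies[policy]

    def forward(self, x):
        # _forward is just a reference to a bound method, so calling
        # _forward(x) runs that method with x.
        return self._forward(x)

    def double(self, x):
        return 2 * x

    def square(self, x):
        return x * x

print(Dispatcher(policy=1).forward(5))  # 10
print(Dispatcher(policy=2).forward(5))  # 25
```

So `x` is not used to *choose* a policy; the choice happens once in `__init__`, and `x` is only ever the argument to the already-chosen method.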
|
Can std::bit_cast be applied to an empty object? |
|c++|language-lawyer|c++20|constexpr|bit-cast| |
--- Product class ---
```
@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE )
@Column (name ="id")
private Long id;
@Column(name ="sku")
private String sku;
@Column(name ="name")
private String name;
@Column(name ="description")
private String description;
@Column(name ="unit_price")
private BigDecimal unitPrice;
@Column(name ="image_url")
private String imageUrl;
@Column(name ="active")
private boolean active;
@Column(name ="units_in_stock")
private int unitsInStock;
@Column(name ="date_created")
@CreationTimestamp
private Date dateCreated;
@Column(name ="last_updated")
@UpdateTimestamp
private Date lastUpdated;
@ManyToOne(
cascade = CascadeType.ALL
)
@JoinColumn(name = "category_id", nullable = false)
private ProductCategory category;
}
```
--- ProductRepository.java ---
It extends JpaRepository.
--- application.properties ---
```
spring.application.name=ecommerce-app
spring.data.rest.base-path=/api
```
Here is the error I get when accessing http://localhost:8081/api/products:
Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed: org.springframework.dao.InvalidDataAccessResourceUsageException: JDBC exception executing SQL [select p1_0.id,p1_0.active,p1_0.category_id,p1_0.date_created,p1_0.description,p1_0.image_url,p1_0.last_updated,p1_0.name,p1_0.sku,p1_0.unit_price,p1_0.units_in_stock from product p1_0 offset ? rows fetch first ? rows only] [ORA-00933: SQL command not properly ended] [n/a]; SQL [n/a]] with root cause
oracle.jdbc.OracleDatabaseException: ORA-00933: SQL command not properly ended
I have created a test class for Product that uses findAll(), and that works fine, but it errors out when accessed through the REST API URLs. |
I have created a Spring Boot application with Spring Data JPA, REST, and Oracle, and I am getting ORA-00933: SQL command not properly ended |
|spring-boot| |
Entity Framework Core 8 dbcontext - can't add some rows in many-to-many relationship |
Testing your code, it seems to work apart from two things. The first is a minor detail: `Courier` is not a color scale in Plotly. Change it to a valid color scale and that is solved. The second is that you are adding a complete figure to another figure: [Plotly documentation][1]. There are two approaches moving forward: add the result of `create_annotated_heatmap` as a trace, or use the result as the figure itself.
**Option 1:**
```python
import plotly.graph_objects as go

# Initialize the figure
fig = go.Figure()

# Add trace for max date
fig.add_trace(
    create_annotated_heatmap(
        z=z,
        x=x,
        y=z,
        colorscale='agsunset',
        reversescale=True,
        zmid=1.0,
        zauto=True,
        font_colors=['#000000', '#FFFFFF'],
        hoverongaps=False,
    ).data[0]
)
```
**Option 2:**
```python
import plotly.graph_objects as go

fig = create_annotated_heatmap(
    z=z,
    x=x,
    y=z,
    colorscale='agsunset',
    reversescale=True,
    zmid=1.0,
    zauto=True,
    font_colors=['#000000', '#FFFFFF'],
    hoverongaps=False,
)
```
[1]: https://plotly.com/python/creating-and-updating-figures/#adding-traces-to-subplots |
I'm using Magento 2.4.2-p1 and getting this error continuously in my app as exceptions:
```
Report ID: webapi-6605b783362ae; Message: Notice: file_get_contents(): file created in the system's temporary directory in /bitnami/magento/vendor/laminas/laminas-http/src/PhpEnvironment/Request.php on line 96
```
Why are these errors happening? Does anyone have an idea? |
I need to print the bounding box coordinates of a walking person in a video. Using YOLOv5 I detect the persons in the video, and each person is tracked. I need to print each person's bounding box coordinates together with the frame number. How can I do this using Python?
The following is the code to detect and track persons and display their coordinates in a video using YOLOv5.
```
# display bounding box coordinates
import cv2
from ultralytics import YOLO

# Load the YOLOv8 model
model = YOLO('yolov8n.pt')

# Open the video file
cap = cv2.VideoCapture("Shoplifting001_x264_15.mp4")

# Get total frames
frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
print(f"Frames count: {frame_count}")

# Initialize the frame id
frame_id = 0

# Loop through the video frames
while cap.isOpened():
    # Read a frame from the video
    success, frame = cap.read()
    if success:
        # Run YOLOv8 tracking on the frame, persisting tracks between frames
        results = model.track(frame, persist=True, classes=[0])

        # Visualize the results on the frame
        annotated_frame = results[0].plot()

        # Print the bounding box coordinates of each person in the frame
        print(f"Frame id: {frame_id}")
        for result in results:
            for r in result.boxes.data.tolist():
                if len(r) == 7:
                    x1, y1, x2, y2, person_id, score, class_id = r
                    print(r)
                else:
                    print(r)

        # Display the annotated frame
        cv2.imshow("YOLOv5 Tracking", annotated_frame)

        # Increment the frame id
        frame_id += 1

        # Break the loop if 'q' is pressed
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:  # Break the loop if the end of the video is reached
        break

# Release the video capture object and close the display window
cap.release()
cv2.destroyAllWindows()
```
The above code works and displays the coordinates of the tracked persons.
But the problem is that in some videos it does not work properly:
```
0: 384x640 6 persons, 187.2ms
Speed: 4.1ms preprocess, 187.2ms inference, 4.0ms postprocess per image at shape (1, 3, 384, 640)
Frame id: 2
[830.3707275390625, 104.34822845458984, 983.3080444335938, 366.95147705078125, 1.0, 0.8544653654098511, 0.0]
[80.94219207763672, 76.50841522216797, 254.93991088867188, 573.0479736328125, 2.0, 0.8748959898948669, 0.0]
[193.58871459960938, 60.9941291809082, 335.6488342285156, 481.8208312988281, 3.0, 0.8484305143356323, 0.0]
[470.2035827636719, 92.78453826904297, 732.5341796875, 602.9578857421875, 4.0, 0.8541176319122314, 0.0]
[719.50537109375, 227.52276611328125, 884.10498046875, 501.5626525878906, 5.0, 0.6705026030540466, 0.0]
[365.58099365234375, 47.774330139160156, 600.3360595703125, 443.5860595703125, 6.0, 0.785051703453064, 0.0]
```
This output is correct.
But in another video there are only three people, yet at the beginning, in the first frame, six persons are identified.
```
0: 480x640 6 persons, 810.5ms
Speed: 8.0ms preprocess, 810.5ms inference, 8.9ms postprocess per image at shape (1, 3, 480, 640)
Frame id: 0
[0.0, 10.708396911621094, 37.77726745605469, 123.68929290771484, 0.36418795585632324, 0.0]
[183.0453338623047, 82.82539367675781, 231.1952667236328, 151.8341522216797, 0.2975049912929535, 0.0]
[154.15158081054688, 74.86528778076172, 231.10934448242188, 186.2017822265625, 0.23649221658706665, 0.0]
[145.61187744140625, 69.76246643066406, 194.42532348632812, 150.91973876953125, 0.16918501257896423, 0.0]
[177.25042724609375, 82.43289947509766, 266.5430908203125, 182.33889770507812, 0.131477952003479, 0.0]
[145.285400390625, 69.32669067382812, 214.907470703125, 184.0771026611328, 0.12087596207857132, 0.0]
```
Also, the output does not show the person ID here; it only displays the coordinates, confidence score, and class ID. What is the reason for that?
|
Some HTTP requests require authentication and some do not, but if using `{ credentials: "include" }` is not a vulnerability, why not always keep the `"include"` value?
|
Is it fine, from a security viewpoint, to always set `{ credentials: "include" }` in the JavaScript Fetch API? |
|javascript|fetch-api| |
|django|windows| |
I attempted a JavaScript assignment with four tasks, which I note in my code. Somehow I am having a problem with my 2nd task, where I have to write a function, but I have no idea what is going wrong.
```
> Passed Test 1: successfully logged consoleStyler() variables
> Failed Test 2: Not logging celebrateStyler() variables
> Failed Test 3: Not calling consoleStyler() and celebrateStyler()
> Passed Test 4: successfully called styleAndCelebrate()
```
Here is my code.
// Task 1: Build a function-based console log message generator
```
function consoleStyler(color,background,fontSize,txt) {
var message = "%c" + txt;
var style = `color: ${color};`
style += `background: ${background};`
style += `font-size: ${fontSize};`
console.log(message,style)
}
```
// Task 2: Build another console log message generator
```
function celebrateStyler(reason) {
var fontStyle = "color: tomato; font-size: 50px";
if(reason == "birthday")
{
console.log("%cHappy Birthday", fontStyle);
}else if(reason == "champions")
{
console.log("%cCongrats on the title!", fontStyle);
} else {
console.log(message, style);
}
}
```
// Task 3: Run both the consoleStyler and the celebrateStyler functions
```
consoleStyler('#1d5c63', '#ede6db', '40px', 'Congrats!')
celebrateStyler('birthday')
```
// Task 4: Insert a congratulatory and custom message
```
function styleAndCelebrate(color, background, fontSize, txt,reason) {
consoleStyler(color, background, fontSize, txt);
celebrateStyler(reason);
}
```
// Call styleAndCelebrate
```
styleAndCelebrate('ef7c8e','fae8e0','30px','You made it!','champions')
``` |
Had the same problem and this solved it. I would also like to mention that the build time was pretty much the same.
```
flutter build web --web-renderer canvaskit
```
|
This is not an _actual_ answer to OP's question. That can be found in the comments and in @thisisayush's post. But here is a one-liner, demonstrating that it can be done without redefining a string variable in a loop. As strings are immutable, they are not the first choice when it comes to accumulating values step by step. Arrays are more versatile and often faster.
<!-- begin snippet:js console:true -->
<!-- language:lang-js -->
const count=(n1,n2)=>Array(n2+1).fill(0).map((_,i)=>i).slice(n1).join(",");
console.log(count(0,8));
console.log(count(2,5));
console.log(count(5,2));
<!-- end snippet -->
The `.fill(0)` looks a bit superfluous here, but it is necessary, since `.map()` will not iterate over `undefined` elements. |
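A short demonstration of that hole-skipping behaviour:

```javascript
// Array(3) has length 3 but contains only holes; map() skips holes,
// so the callback never runs and the holes survive.
const withHoles = Array(3).map((_, i) => i);
console.log(0 in withHoles); // false: index 0 is still a hole

// fill(0) turns the holes into real elements, so map() visits each one.
const filled = Array(3).fill(0).map((_, i) => i);
console.log(filled); // [ 0, 1, 2 ]
```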
There is no closed formula for this problem. You can't do much better than trying many sizes until you find the best square size, but you can do this in O(log n) time, where *n* is the larger dimension in pixels of your rectangle, using binary search:
```c
int maximumSquareSize = (int)sqrt(rectangleWidth * rectangleHeight / squareCount);
int squareSize = binarySearchSquareSize(0, maximumSquareSize);

int binarySearchSquareSize(int minimumSquareSize, int maximumSquareSize)
{
    int testedSquareSize = (minimumSquareSize + maximumSquareSize) / 2;
    if (testedSquareSize == minimumSquareSize || testedSquareSize == maximumSquareSize) return testedSquareSize;

    int squaresPerRow = rectangleWidth / testedSquareSize;
    int squaresPerColumn = rectangleHeight / testedSquareSize;
    int totalSquaresFitted = squaresPerRow * squaresPerColumn;

    if (totalSquaresFitted < squareCount)
    {
        /* too few squares fit, so the tested size is too large */
        return binarySearchSquareSize(minimumSquareSize, testedSquareSize);
    }
    else
    {
        /* enough squares fit, so try larger sizes */
        return binarySearchSquareSize(testedSquareSize, maximumSquareSize);
    }
}
```
The first line is a minor optimization that may save a few loops. It determines the maximum possible square size from the fact that the area of the square times the number of squares must not exceed the area of the rectangle for the answer to be valid. You could also just start with `max(rectangleWidth, rectangleHeight)` as the upper bound instead.
Then, the code performs a binary search between 0 (squares of size 0 will definitely fit in the rectangle) and this maximum, checking if the resulting square fitting would overflow the rectangle, until it reaches the square size where the fit is the most exact possible. |
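For comparison, here is a hedged iterative sketch of the same search in Python (assuming integer square sizes; the names are illustrative, not from the original):

```python
import math

def max_square_size(rect_w, rect_h, square_count):
    """Binary search for the largest integer square size s such that
    (rect_w // s) * (rect_h // s) >= square_count."""
    lo, hi = 1, int(math.sqrt(rect_w * rect_h / square_count))
    best = 0
    while lo <= hi:
        s = (lo + hi) // 2
        if (rect_w // s) * (rect_h // s) >= square_count:
            best = s       # s fits; try larger sizes
            lo = s + 1
        else:
            hi = s - 1     # s is too large; try smaller sizes
    return best

print(max_square_size(100, 100, 4))  # 50: four 50x50 squares tile a 100x100 rectangle
print(max_square_size(10, 6, 7))     # 2: seven 2x2 squares fit, 3x3 squares do not
```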
You can use the `getByText` matcher, or the async `findByText` matcher if the toast is shown asynchronously and you need to wait for it to appear.
```typescript
test('...', async () => {
//...
expect(await screen.findByText('invalid answer')).toBeInTheDocument();
});
```
|
I'm currently following this [tutorial][1], and I'm stuck on running the program. I have a conda environment set up, with all of the necessary packages downloaded, but when I run the program, I receive the following error.
```
ModuleNotFoundError: No module named 'passlib'
```
I've run `conda list | grep pass` and it shows that I do have passlib installed.
```
passlib 1.7.4 pyhd8ed1ab_1 conda-forge
```
I've tried uninstalling, reinstalling, and running a new terminal, and nothing works. Any ideas on how to get the environment to recognize that `passlib` is installed?
NOTE: I'm using bash to run the .sh scripts and not poetry, since I've had some trouble with poetry.
[1]: https://christophergs.com/tutorials/ultimate-fastapi-tutorial-pt-10-auth-jwt/ |
Cannot find module after importing |
|python|jwt|conda|fastapi|passlib| |
That doesn't look like correct syntax. Try:
```
{{ humanize 0.123456 }}
```
Doc: https://grafana.com/docs/grafana/v9.5/alerting/fundamentals/annotation-label/variables-label-annotation/ |
**Promise resolution**
The technical explanation is that pairs of `resolve`/`reject` functions are "one shots" in that once you call one of them, further calls to either function of the pair are ignored without error.
If you resolve a promise with a promise or thenable object, Promise code internally creates a new, second pair of resolve/reject functions for the promise being resolved and adds a `then` clause to the resolving promise to resolve or reject the promise being resolved, according to the settled state of the resolving promise.
Namely, in
```js
const test = new Promise((resolve, reject) => {
resolve(Promise.resolve(78))
})
```
`resolve(Promise.resolve(78))` conceptually becomes
```js
Promise.resolve(78).then(resolve2,reject2)
```
where `resolve2`/`reject2` are a new pair of resolve/reject functions created for the promise `test`.
If and when executed, one of the `then` clause's handlers (namely `resolve2` in this case, since `Promise.resolve(78)` fulfills) will be called by a Promise Reaction Job placed in the microtask queue. Jobs in the microtask queue are executed asynchronously to calling code, so `test` will remain pending at least until after the synchronous code returns.
Note in summary: you can only _fulfill_ a promise with a non-promise value.
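A runnable illustration of this behaviour: the promise below is "resolved to" an inner promise immediately, but it only fulfills with 78 on a later microtask.

```javascript
// Resolving with a promise defers settlement: `p` stays pending
// synchronously and fulfills with the inner promise's value later.
const p = new Promise((resolve) => {
  resolve(Promise.resolve(78));
});

p.then((value) => {
  console.log("fulfilled with:", value); // fulfilled with: 78
});
```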
**Promise Rejection**
Promise rejection is certain if a promise's `reject` function is called. Hence you can reject a promise with any JavaScript value, including a Promise object in any state.
Namely in
```js
const test2 = new Promise((resolve, reject) => {
reject(Promise.resolve(78))
})
```
the rejection can be performed synchronously, and the rejection reason of `test2` is the _promise object_ `Promise.resolve(78)`, not the number 78. Demo:
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
const test2 = new Promise((resolve, reject) => {
reject(Promise.resolve(78))
})
test2.catch(reason =>
console.log("reason is a promise: ", reason instanceof Promise)
);
<!-- end snippet -->
|
The first link requires the user to upload their objects to your server (running Node.js) and then your server uploads the objects to Google Cloud Storage. This common method ensures security as the objects can be scanned before uploading to Cloud Storage.
Signed URLs are usually recommended because the client uploads directly to Cloud Storage bypassing your server.
The best method depends on the size of the objects, reliability of the client's networks, security objectives, network costs (ingress and egress), how trusted the clients are, etc. |
I'm programming a simple 'game' that is just a block that can jump around. I placed a wall, but now the character phases through it when moving to the right, while the collision works when going to the left.
Every other part of this is fine. The willImpactMovingLeft function works just fine and should be a mirror of willImpactMovingRight, but the right-hand version doesn't work.
simple working code for this example:
```
<!DOCTYPE html>
<html>
<body>
<div id="cube" class="cube"></div>
<div id="ground1" class="ground1"></div>
<div id="ground2" class="ground2"></div>
<script>
var runSpeed = 5;
var CUBE = document.getElementById("cube");
var immovables = ["ground1", "ground2"];
let Key = {
pressed: {},
left: "ArrowLeft",
right: "ArrowRight",
isDown: function (key){
return this.pressed[key];
},
keydown: function (event){
this.pressed[event.key] = true;
},
keyup: function (event){
delete this.pressed[event.key];
}
}
window.addEventListener("keyup", function(event) {
Key.keyup(event);
});
window.addEventListener("keydown", function(event) {
Key.keydown(event);
});
setInterval(()=>{
//move left
if (Key.isDown(Key.left)){
if(willImpactMovingLeft("cube", immovables)!=false){
cube.style.left = willImpactMovingLeft("cube", immovables)+"px";
}else{
cube.style.left = CUBE.offsetLeft - runSpeed +"px";
}
}
//move right
if (Key.isDown(Key.right)){
if(willImpactMovingRight("cube", immovables)!=false){
cube.style.left = willImpactMovingRight("cube", immovables)+"px";
}else{
cube.style.left = CUBE.offsetLeft + runSpeed +"px";
}
}
}, 10);
function willImpactMovingLeft(a, b){
var docA = document.getElementById(a);
var docB = document.getElementById(b[0]);
for(var i=0;i<b.length;i++){
docB = document.getElementById(b[i]);
if((docA.offsetTop>docB.offsetTop&&docA.offsetTop<docB.offsetTop+docB.offsetHeight)||(docA.offsetTop+docA.offsetHeight>docB.offsetTop&&docA.offsetTop+docA.offsetHeight<docB.offsetTop+docB.offsetHeight)){//vertical check
if(docA.offsetLeft+docA.offsetWidth>docB.offsetLeft+runSpeed){
if(docA.offsetLeft-runSpeed<docB.offsetLeft+docB.offsetWidth){
return docB.offsetLeft+docB.offsetWidth;
}
}
}
}
return false;
}
function willImpactMovingRight(a, b){
var docA = document.getElementById(a);
var docB = document.getElementById(b[0]);
for(var i=0;i<b.length;i++){
docB = document.getElementById(b[i]);
if((docA.offsetTop>docB.offsetTop&&docA.offsetTop<docB.offsetTop+docB.offsetHeight)||(docA.offsetTop+docA.offsetHeight>docB.offsetTop&&docA.offsetTop+docA.offsetHeight<docB.offsetTop+docB.offsetHeight)){//vertical check
if(docA.offsetLeft>docB.offsetWidth+docB.offsetLeft-runSpeed){
if(docA.offsetLeft+docA.offsetWidth+runSpeed<=docB.offsetLeft){
CUBE.textContent = "WIMR";
return docB.offsetLeft-docA.offsetWidth;
}
}
}
}
return false;
}
</script><style>
.cube{height:50px;width:50px;background-color:red;position:absolute;top:500px;left:500px;}
.ground1{height:10px;width:100%;background-color:black;position:absolute;top:600px;left:0;}
.ground2{height:150px;width:10px;background-color:black;position:absolute;top:450px;left:700px;}
</style></body></html>
```
it looks like a lot, but the only problem is the last if statement. Even when the condition is true, it still skips the return number and returns false. Any idea why?
Edit: simplified the code as much as I could.
P.S. **I CANNOT USE THE CONSOLE, I AM ON A MANAGED COMPUTER** |
I was going through the Coursera program on machine/deep learning and ran into the same problem: my HP laptop with a GPU would crash when running the model-training notebook.
I was running on WSL with CUDA on Windows 11 Preview, and spent some time trying to find the root cause of, and a solution to, the problem.
However, the suggestion above about a corrupted notebook and moving the code to a new notebook (and a separate/new virtual environment) solved the problem right away.
I have to say, though, that I don't know the root cause of the failure, just that the new notebook/venv solved the problem. |
I'm encountering an issue when attempting to append a JSON array or object into another JSON array within a PostgreSQL function. It seems that the array_append function is inserting the JSON as a string, resulting in an unexpected format in the output.
Currently, I'm getting output like this:
> {"{\"category_id\":8,\"category_name\":\"08 Candy\",\"is_active\":true,\"category_name_app\":\"Candy\",\"display_order\":7}"}
However, I'd like the output to be in this format so that I can easily decode it in my code:
> [{"category_id":8,"category_name":"08 Candy","is_active":true,"category_name_app":"Candy","display_order":7}]
Below is the logic of my function:
```
for all_categories in select * from categories where is_active = '1' loop
    show_at_homepage = 0;
    for current_subcat in select * from public."V_category_to_sub_category_w_names" where category_id = all_categories.category_id and sub_category_is_active = '1' loop
        select * into product_count from public."V_APP_products_w_sub_categories" where sub_category_id = current_subcat.sub_category_id and store_id = get_store_id and is_deleted = '0';
        if count(product_count) > 0 then
            show_at_homepage = 1;
        end if;
    end loop;
    if show_at_homepage = 1 then
        select row_to_json(all_categories) into cat_json;
        select array_append(my_json_result_array, cat_json) into my_json_result_array;
    end if;
end loop;
return my_json_result_array;
```
|
When using the Microsoft Telnet Client, the first message isn't echoed locally, but subsequent messages are.
Why is that the case? Is this documented somewhere?
Are there other quirks of the Microsoft implementation?
Are they documented somewhere?
(This question is about "a specific programming problem", because a specific server implementation should get the Microsoft Telnet Client to echo the first message.) |
Echo behaviour of Microsoft Windows Telnet Client |
|windows|client|telnet|behavior| |
By default, select inputs will use the `id` attribute of the model as the `value` attribute of the `<option>` tags, and it tries various methods on the object for the contents of the option tag, such as `to_label`, `name`, and `to_s`.
You can change both with the `:member_value` and `:member_label` options respectively (these were called `:value_method` and `:label_method` in older versions).
The details of each option are in the documentation for the select input:
http://rdoc.info/github/justinfrench/formtastic/Formtastic/Inputs/SelectInput |
I'm running a Django-powered site, and I'm seeing errors like these in my Django application's error logs:
django.core.exceptions.DisallowedHost: Invalid HTTP_HOST header: 'badhost.com'. You may need to add 'badhost.com' to ALLOWED_HOSTS.
I was under the impression that my nginx configuration, shown below (and trimmed for brevity), would prevent these requests from ever making it to the Django app; specifically, the last `server` block in the config. What do I have wrong?
My end-goal is for nginx to reject requests that have an invalid Host header.
```
server {
server_name mysite.com www.mysite.com;
listen 80;
return 302 https://$host$request_uri;
}
server {
server_name mysite.com www.mysite.com;
root /home/myuser/mysite.com/public/;
location / {
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_pass http://localhost:8001;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
}
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/mysite.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/mysite.com/privkey.pem; # managed by Certbot
}
server {
listen 80 default_server;
listen 443 ssl default_server;
ssl_certificate /etc/letsencrypt/live/mysite.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/mysite.com/privkey.pem; # managed by Certbot
return 444;
}
``` |
|django|nginx| |
I want to get live weather data with web scraping. I was thinking about using BeautifulSoup for this.
```html
<span class="Column--precip--3JCDO">
<span class="Accessibility--visuallyHidden--H7O4p">Chance of Rain</span>
3%
</span>
```
I want to get the 3% out of this container. I already managed to get data from the website using this code snippet for another section.
```python
temp_value = soup.find("span", {"class":"CurrentConditions--tempValue--MHmYY"}).get_text(strip=True)
```
I tried the same for the rain_forecast
```python
rain_forecast = soup.find("span", {"class": "Column--precip--3JCDO"}).get_text(strip=True)
```
But the output my console is delivering is `--` for `print(rain_forecast)`.
The only difference I can see is that, between the opening tag and the text that should be retrieved from the span, there is another nested span.
Another approach I came across on Stack Overflow is to use Selenium, because the data has not yet been loaded when the page is fetched, and therefore the output is `--`.
But I don't know if this is overkill for my application, or if there is a simpler solution to this problem. |
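On the static snippet posted above (the live site may additionally render the value with JavaScript, which would explain the `--`), the nested hidden span is why `get_text()` does not give `3%` on its own; taking only the outer span's *direct* text nodes isolates it. A hedged sketch, with the class names copied from the snippet:

```python
from bs4 import BeautifulSoup

html = ('<span class="Column--precip--3JCDO">'
        '<span class="Accessibility--visuallyHidden--H7O4p">Chance of Rain</span>'
        '3%</span>')

soup = BeautifulSoup(html, "html.parser")
outer = soup.find("span", {"class": "Column--precip--3JCDO"})

# get_text() concatenates all descendant text, including the hidden label
print(outer.get_text(strip=True))  # Chance of Rain3%

# Direct text nodes of the outer span only, skipping the nested span
direct = "".join(outer.find_all(string=True, recursive=False)).strip()
print(direct)  # 3%
```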
In the below code:
```go
package main

import (
    "errors"
    "fmt"
    "math"
)

func area(r float64, shapeConstant float64, result *float64) error {
    if r <= 0 {
        return errors.New("r must be positive")
    }
    *result = shapeConstant * r * r
    return nil
}

const (
    shapeConstantForSquare  = 1.0
    shapeConstantForCircle  = math.Pi
    shapeConstantForHexagon = 3 * math.Sqrt(3) / 2 // 3 * 1.73205080757 / 2
)

func areaOfSquare(r float64, result *float64) error {
    return area(r, shapeConstantForSquare, result)
}

func areaOfCircle(r float64, result *float64) error {
    return area(r, shapeConstantForCircle, result)
}

func areaOfHexagon(r float64, result *float64) error {
    return area(r, shapeConstantForHexagon, result)
}

func main() {
    var result float64
    err := areaOfSquare(3, &result)
    display(err, &result)
    areaOfCircle(3, &result)
    display(err, &result)
    areaOfHexagon(3, &result)
    display(err, &result)
}

func display(err error, result *float64) {
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println(*result) // dereference; printing the pointer would show an address
}
```
----------
`const shapeConstantForHexagon = 3 * math.Sqrt(3) / 2` needs the runtime to evaluate the right-hand-side expression before assigning the value, but Go constants must be known at compile time, so this is a compile error.
What is the best approach to avoid runtime evaluation of the RHS expression in a const declaration? |
const declaration - How to evaluate expressions at compile time? |
|go|constants| |
I am unable to build the application. Here is my Dockerfile:
```
FROM docker.io/node:16-alpine AS builder
WORKDIR /usr/app
COPY ./ /usr/app
RUN npm install
RUN npm run build
```
Error message:
```
npm ERR! path /usr/app
npm ERR! command failed
npm ERR! signal SIGKILL
npm ERR! command sh -c -- ng build
```
|
Dockerfile issue with Node.js application |
|docker|pipeline| |
You can use the following if you want the action to happen for a certain range (10...20):
<!-- begin snippet:js console:true -->
<!-- language:lang-html -->
<script src="https://code.jquery.com/jquery-3.7.1.min.js"></script>
<input type="range" id="range-slider" min="0" max="50" value="5" step="1" />
<span class="other-element">in range</span>
<!-- language:lang-css -->
.active {background-color:green}
<!-- language:lang-js -->
$("#range-slider").on("input",function(){
$(".other-element").toggleClass("active",this.value>9&&this.value<21)
})
<!-- end snippet -->
|
I'm unfamiliar with the intricacies of LWJGL (or even your program) but I think the crux of the algorithm is something like:
```glsl
#extension GL_EXT_gpu_shader4 : enable
void rebuildLightmap() {
int[] storage = new int[2]; // two 32-bit words hold the 64 one-bit flags
for(int z=0; z<SIZE; z++) {
    for(int x=0; x<SIZE; x++) {
        int index = z*SIZE+x;
        int result = this.dungeonFloor.isLit(x+(xPos * SIZE), z+(zPos * SIZE));
        // index >> 5 picks the word, index & 31 picks the bit within it
        storage[index >> 5] |= result << (index & 31);
    }
}
// Mostly unchanged: rebuild each lightmap word from the packed bits
for(int i=0; i<2; i++) {
    lightMap[i] = 0;
    int len = 31;
    for(int j=0; j<32; j++) {
        if(((storage[i] >> j) & 1) != 0) {
            lightMap[i] += 1 << len;
        }
        len--;
    }
}
}
```
You just need to ensure `dungeonFloor.isLit` returns either 0 or 1.
The trick is that instead of sending a full 64-bit integer, you send two 32-bit integers and, from where you indexed before, [perform a modulus](https://stackoverflow.com/a/6670766/8724072) that selects either the lower or the upper 32 bits of the 64.
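A hedged Python sketch of that packing scheme (the names are illustrative, not from the original): `index >> 5` selects which 32-bit word a flag lands in, and `index & 31` selects the bit inside that word.

```python
def pack_bits(flags):
    """Pack 64 one-bit flags (each 0 or 1) into two 32-bit words."""
    words = [0, 0]
    for index, bit in enumerate(flags):
        # word index: 0 for flags 0..31, 1 for flags 32..63
        words[index >> 5] |= bit << (index & 31)
    return words

flags = [0] * 64
flags[0] = flags[33] = 1
print(pack_bits(flags))  # [1, 2]: bit 0 of the low word, bit 1 of the high word
```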
Issue with BBCode image tag on React |
|reactjs|typescript|bbcode| |
I am trying to set up a reverse proxy, along with a query parameter, for my backend server. Here is my proxy config:
```
location /vid {
    proxy_pass https://10.0.0.10:8443/video.html;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}

location /socket.io/ {
    proxy_pass https://10.0.0.10:8000/$arg_url;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $proxy_host;
    proxy_cache_bypass $http_upgrade;
    client_max_body_size 100M;
}
```
It is showing some SSL problems, but the certificate is correct, since the rest of the application works well:
2024-03-31T16:58:58.492587700Z 2024/03/31 16:58:58 [error] 31#31: *24 SSL_do_handshake() failed (SSL: error:0A00010B:SSL routines::wrong version number) while SSL handshaking to upstream, client: 192.168.176.1, server: localhost, request: "GET /socket.io/?EIO=3&transport=polling&t=1711904338460-81 HTTP/1.1", upstream: "https://10.0.0.10:8000/", host: "10.0.0.10", referrer: "https://10.0.0.10/vid"
Here is my browser console:
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/mfibu.png
I am assuming that the /socket.io/ path contains a query parameter that is not being passed along by the proxy, so the backend server returns the 502 error. I tried adding `$arg_url` but it is still not working. Any idea how to resolve it? |
Nginx reverse proxy with query parameters not working |
|nginx|reverse-proxy| |
Since [Firebase Hosting's configuration](https://firebase.google.com/docs/hosting/full-config) does not automatically differentiate between files and directories in the way traditional servers might, you could try to manually specify redirect rules for your known directory paths.
But:
- each directory would need a redirect rule. For projects with many directories, this can become cumbersome.
- as commented, this does not work: the redirect rule for adding a trailing slash is too general and does not correctly discriminate between files and directories, causing Firebase Hosting to repeatedly redirect to the same path, adding a slash each time, which results in a loop.
-----
> Note there can be many directories so a solution that avoids writing a separate rule for each one is much preferred for maintainability.
Then (and this is not tested) you would need to use [Cloud Functions for Firebase](https://firebase.google.com/docs/functions) or [Firebase Hosting's integration with Cloud Run](https://firebase.google.com/docs/hosting/cloud-run) to programmatically handle requests. That allows you to implement logic that checks if a requested path corresponds to a directory and enforce a trailing slash through redirection.
With a Cloud Function, the function intercepts HTTP requests, checks whether the requested path corresponds to a directory (by checking whether it maps to an `index.html` file in your public directory), and redirects to the same URL with a trailing slash if so.
A pseudo-code example would be:
```javascript
const functions = require('firebase-functions');
const path = require('path');
const fs = require('fs-extra');
exports.addTrailingSlash = functions.https.onRequest(async (req, res) => {
  // Extract the path from the request URL
  const urlPath = req.path;
  // Construct the file system path to where the file would be located
  // (assumes the hosting "public" directory is deployed alongside the function)
  const filePath = path.join(__dirname, 'public', urlPath);
// Check if an index.html exists for this path
if (await fs.pathExists(path.join(filePath, 'index.html'))) {
// Directory exists, redirect to path with trailing slash
res.redirect(301, `${urlPath}/`);
} else {
// Not a directory or no index.html, handle normally
// That might involve serving the file directly, showing a 404, etc.
}
});
```
To route requests through this Cloud Function, use the `rewrites` feature in your `firebase.json` so traffic is directed to the function for handling:
```json
{
"hosting": {
"rewrites": [
{
"source": "**",
"function": "addTrailingSlash"
}
],
// Other configurations
}
}
```
But: routing all requests through a Cloud Function could introduce latency: do check the potential performance impact.
And implementing logic with Cloud Functions or Cloud Run adds complexity to your project and may incur costs based on your usage, so you might have to do some [Cloud FinOps](https://cloud.google.com/learn/what-is-finops). |
Had the same problem and this solved it. Also, I would like to mention that the build time was pretty much the same.
```
flutter build web --web-renderer canvaskit
``` |
My system has two user providers. One is an AdminUser and the other is a regular User. I can't use roles to check access. Access should be determined by the user's instance.
I defined two different firewalls and settings. This helps to separate the login entry points. But now an ordinary authorized user can open the main page at /admin or /admin/login.
How can I do this without using roles? I tried to write custom authenticators, but I'm stumped and not sure if this is the right approach for me.
My security settings now look like this.
providers:
app_user_provider:
entity:
class: App\Entity\User
property: email
admin_user_provider:
entity:
class: App\Entity\AdminUser
property: email
firewalls:
dev:
pattern: ^/(_(profiler|wdt)|css|images|js)/
security: false
admin:
lazy: true
pattern: ^/admin
provider: admin_user_provider
form_login:
login_path: admin_app_login
check_path: admin_app_login
username_parameter: _email
password_parameter: _password
# where to redirect after success login
default_target_path: admin_home
logout:
path: admin_app_logout
# where to redirect after logout
target: admin_app_login
main:
lazy: true
provider: app_user_provider
form_login:
login_path: user_login
check_path: user_login
username_parameter: _email
logout:
path: user_logout
target: user_login
|
How to protect routes from one provider, but allow another in Symfony 6? |
|php|symfony6|symfony-security| |
null |
local Backpack = game:GetService("ReplicatedStorage"):WaitForChild("Backpack")
local Player = game:GetService("Players").LocalPlayer

Player.CharacterAdded:Connect(function(Character)
    local clone = Backpack:Clone()
    clone.Parent = Character:WaitForChild("Torso")
end) |
I learned that Python class attributes are like static data members in C++. However, I got confused after trying the following code:
>>> class Foo:
... a=1
...
>>> f1=Foo();
>>> f2=Foo()
>>> f1.a
1
>>> f1.a=5
>>> f1.a
5
>>> f2.a
1
Shouldn't f2.a also equal 5?
If a is defined as a list instead of an integer, the behavior is expected:
>>> class Foo:
... a=[]
...
>>> f1=Foo();
>>> f2=Foo()
>>> f1.a
[]
>>> f1.a.append(5)
>>> f1.a
[5]
>>> f2.a
[5]
I looked at
https://stackoverflow.com/questions/207000/python-difference-between-class-and-instance-attributes, but it doesn't answer my question.
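For reference, inspecting `__dict__` makes the two cases visible — assignment through an instance creates a per-instance attribute that shadows the class attribute, while `append` mutates the one list shared through the class. A small self-contained sketch of what happens:

```python
class Foo:
    a = 1

f1 = Foo()
f2 = Foo()

# Reading f1.a finds no instance attribute, so lookup falls back to the class.
assert 'a' not in f1.__dict__

# Assignment creates a NEW instance attribute that shadows Foo.a for f1 only.
f1.a = 5
assert f1.__dict__ == {'a': 5}
assert Foo.a == 1 and f2.a == 1


class Bar:
    a = []

b1 = Bar()
b2 = Bar()

# append does not assign; it mutates the single list stored on the class.
b1.a.append(5)
assert 'a' not in b1.__dict__
assert b2.a == [5] and Bar.a == [5]
```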
Can anyone explain the difference? |
Class and instance attributes |
Scraping information in a span located under nested span |
|python|web-scraping|beautifulsoup| |
I'm getting this error when trying to read an Env Variable from the .env file.
Things to know: my .env is in the root directory, the variable is prefixed by **VITE_**, and I'm trying to import it by doing
```const apiKey = import.meta.env.VITE_YOUTUBE_API_KEY;```
I've scoured the web, but most of the answers were either *"use VITE prefix"* or *"use import.meta.env"*. I also tried using **loadEnv** like this https://vitejs.dev/config/#using-environment-variables-in-config, but I get "process is not defined" at ```const env = loadEnv(mode, process.cwd(), '')```.
Here's my .env (I've also tried removing the quotes from the variables, same thing):
```js
MONGODB_URI="mongodb+srv://USER:PASS@cluster0.f2tb4tn.mongodb.net/tutorialsApp?retryWrites=true&w=majority&appName=Cluster0"
JWT_SECRET="JWT_SECRET"
VITE_YOUTUBE_API_KEY="YOUTUBE_API_KEY"
VITE_CHANNEL_ID="CHANNEL_ID"
VITE_CLIENT_ID="CLIENT_ID"
```
Edit: here's my vite.config.js:
```js
import { transformWithEsbuild, defineConfig } from "vite";
import react from "@vitejs/plugin-react";
// Access import.meta.env directly
const appEnv = import.meta.env;
export default defineConfig({
plugins: [
{
name: "treat-js-files-as-jsx",
async transform(code, id) {
if (!id.match(/src\/.*\.js$/)) return null;
return transformWithEsbuild(code, id, {
loader: "jsx",
jsx: "automatic",
});
},
},
react(),
],
optimizeDeps: {
force: true,
esbuildOptions: {
loader: {
".js": "jsx",
},
},
},
define: {
__APP_ENV__: JSON.stringify(appEnv),
},
});
```
Any help would be appreciated!
|
|python|django|pip|pipenv| |
It works for me with this modification of `dnn_pred`:
```
dnn_pred <- function(model, data, ...) {
predict(model, newdata=as.h2o(data), ...) |> as.data.frame()
}
p <- predict(logo, model=dl_model, fun=dnn_pred)
plot(p)
```
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/9MmGd.png |
I'm interested in using UUIDs as primary keys in an intermediate table. I'm aware that adding `use Illuminate\Database\Eloquent\Concerns\HasUuids;` and `use HasUuids;` to a model can achieve this. However, since I don't have or need a model for my intermediate table, I'm uncertain if it's possible to automatically create UUIDs similarly. Will I need to manually generate the UUID when I create an entry in my intermediate table?
Here's what my migration file looks like:
```php
public function up(): void {
Schema::create('post_user', function (Blueprint $table) {
$table->uuid('id')->primary();
$table->timestamps();
$table->string('title');
$table->string('body');
});
}
``` |
Implementing UUID as primary key in Laravel intermediate table |
Just set the major unit to 1 so the axis counts 1 - 8 by 1s.
Add `major_unit` to the x-axis setting, so that this:
```
chart.set_x_axis({'name': 'X Axis'})
```
becomes:
```
chart.set_x_axis({
'name': 'X Axis',
'major_unit': 1, # set major unit to 1
})
```
[![Chart with major unit set to 1][1]][1]
<br>
You could also add a label to the Trend Line like:
```
# Add line series to the chart
chart.add_series({
'categories': '=Sheet1!$C$1:$C$2', # Adjusted for new data range
'values': '=Sheet1!$D$1:$D$2', # Adjusted for new data range
'line': {'type': 'linear'}, # Adding linear trendline
'data_labels': {'value': False, # Add Data Label
'position': 'below',
'category': True,
}
})
```
All this is doing is labeling your trend line, with the lower label sitting on the X-Axis.<br>
You'll notice the value is set at the start and end points, but I don't think there is any means to remove the top textbox using XlsxWriter. It can be removed manually, however, simply by clicking the textbox twice and then using the delete key.<br>
And for that matter, you could manually move the bottom textbox to align with the other numbers too if you like.
[![enter image description here][2]][2]
[1]: https://i.stack.imgur.com/cZLFl.png
[2]: https://i.stack.imgur.com/wvyu6.png
<br>
You could add a vertical line using Error Bars, but it doesn't give you any additional features. In fact, you would have to use the major unit change to see the 3 on the x-axis since it has no data label.<br>
Apart from that, I suppose you could just add a drawing line/textbox onto the chart.
|
You may try-
=TOROW(INDEX(SPLIT(INDEX(FILTER($A$2:$A$7;BYROW($B$2:$M$7;LAMBDA(rw;INDEX(OR(rw=H13)))))&"|"&
FILTER($B$1:$M$1;BYCOL($B$2:$M$7;LAMBDA(col;INDEX(OR(col=H13))))));"|"));3)
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/8GTJn.png |
Invoking Local API from a Laravel controller within the same Laravel application |
Ruby newbie here...
I have a csv file (logevents.csv) that has a "message" column.
The "message" column contains rows of json data.
Using Ruby, I'd like to convert the json data's name:value pairs to columnname:rowvalue in a 2nd csv file.
Here's the 1st row of the csv file:
message
"{""version"":""0"",""id"":""fdd11d8a-ef17-75ae-cf50-077285bb7e15"",""detail-type"":""Auth0 log"",""source"":""aws.partner/auth0.com/trulab-dev-c36bb924-cf05-4a5b-8400-7bdfbfe0806c/auth0.logs"",""account"":""654654277766"",""time"":""2024-03-27T12:30:51Z"",""region"":""us-east-2"",""resources"":\[\],""detail"":{""log_id"":""90020240327123051583073000000000000001223372067726119722"",""data"":{""date"":""2024-03-27T12:30:51.531Z"",""type"":""seacft"",""description"":"""",""connection_id"":"""",""client_id"":""v00a8B5f1sgCDjVhneXMbMmwxlsbYoHq"",""client_name"":""TruLab Dev"",""ip"":""3.17.36.227"",""user_agent"":""Faraday v1.10.3"",""details"":{""code"":""\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*5kW""},""hostname"":""trulab-dev.us.auth0.com"",""user_id"":""auth0|648a230ee5ad48ee2ebfb212"",""user_name"":""angus.ingram+dev@trulab.com"",""auth0_client"":{""name"":""omniauth-auth0"",""version"":""2.6.0"",""env"":{""ruby"":""2.6.5"",""rails"":""6.1.7.4""}},""$event_schema"":{""version"":""1.0.0""},""log_id"":""90020240327123051583073000000000000001223372067726119722""}}}"
For each row, I'd like the above to be written to another csv file, but with the name:value pairs pivoted into column:rowvalue, with a "," (comma) as the delimiter for the column names and row values, à la:
version,id,detail-type,source,account ....etc
0,fdd11d8a-ef17-75ae-cf50-077285bb7e15,Auth0 log,aws.partner/auth0.com/trulab-dev-c36bb924-cf05-4a5b-8400-7bdfbfe0806c/auth0.logs,654654277766 ....etc
I have been trying to accomplish this via this ruby script (runtimetest.rb):
```
require 'csv'
require 'json'
CSV.open("C:/Ruby/dev/logevents2.csv", "w") do |csv| #open new file for write
JSON.parse(File.open("C:/Ruby/dev/logevents.csv").read).each do |hash| #open json to parse
csv << hash.values #write value to file
end
end
```
But at runtime the csv file contents (logevents.csv) are written on screen with "unexpected token" message:
C:\Users\dclad>runtimetest.rb
C:/Ruby32-x64/lib/ruby/3.2.0/json/common.rb:216:in `parse': unexpected token at '"version"":""0"",""id"":""fdd11d8a-ef17-75ae-cf50-077285bb7e15"",""detail-type"":""Auth0 log"",""source"":""aws.partner/auth0.com/trulab-dev-c36bb924-cf05-4a5b-8400-7bdfbfe0806c/auth0.logs"",""account"":""654654277766"", ........
I may be going about this all wrong.
Any suggestions would be greatly appreciated!
Best Regards,
Donald
|
You should join the *subject* collection with the *edcontentmaster* via `$lookup`. The `pipeline` in the `$lookup` stage should be your existing query.
In the last stage, convert the `topicDetails` to an object by getting the first element.
```
db.subject.aggregate([
{
$match: {
stageid: "5",
boardid: "1",
scholarshipid: "NVOOKADA1690811843420"
}
},
{
$lookup: {
from: "edcontentmaster",
let: {
stageid: "$stageid",
subjectid: "$subjectid",
boardid: "$boardid",
scholarshipid: "$scholarshipid"
},
pipeline: [
{
$match: {
$expr: {
$and: [
{
$eq: [
"$stageid",
"$$stageid"
]
},
{
$eq: [
"$subjectid",
"$$subjectid"
]
},
{
$eq: [
"$boardid",
"$$boardid"
]
},
{
$eq: [
"$scholarshipid",
"$$scholarshipid"
]
}
]
}
}
},
{
$addFields: {
convertedField: {
$cond: {
if: {
$eq: [
"$slcontent",
""
]
},
then: "$slcontent",
else: {
$toInt: "$slcontent"
}
}
}
}
},
{
$sort: {
slcontent: 1
}
},
{
$group: {
_id: "$topic",
topicimage: {
$first: "$topicimage"
},
topicid: {
$first: "$topicid"
},
sltopic: {
$first: "$sltopic"
},
studenttopic: {
$first: "$studenttopic"
},
reviewquestionsets: {
$push: {
id: "$_id",
sub: "$sub",
topic: "$topic",
contentset: "$contentset",
stage: "$stage",
timeDuration: "$timeDuration",
contentid: "$contentid",
studentdata: "$studentdata",
subjectIamge: "$subjectIamge",
topicImage: "$topicImage",
contentImage: "$contentImage",
isPremium: "$isPremium"
}
}
}
},
{
$lookup: {
from: "edchildrevisioncompleteschemas",
let: {
childid: "WELL1703316202984",
//childid,
subjectid: "1691130406151",
//subjectid,
topicid: "$topicid"
},
pipeline: [
{
$match: {
$expr: {
$and: [
{
$eq: [
"$childid",
"$$childid"
]
},
{
$in: [
"$$subjectid",
"$subjectDetails.subjectid"
]
},
{
$in: [
"$$topicid",
{
$reduce: {
input: "$subjectDetails",
initialValue: [],
in: {
$concatArrays: [
"$$value",
"$$this.topicDetails.topicid"
]
}
}
}
]
}
]
}
}
},
{
$project: {
_id: 1,
childid: 1
}
}
],
as: "studenttopic"
}
},
{
$project: {
_id: 0,
topic: "$_id",
topicimage: 1,
topicid: 1,
sltopic: 1,
studenttopic: 1,
contentid: "$contentid",
reviewquestionsets: 1
}
}
],
as: "topicDetails"
}
},
{
$set: {
topicDetails: {
$first: "$topicDetails"
}
}
}
])
```
[Demo @ Mongo Playground](https://mongoplayground.net/p/byEA90yyNt0) |
I am trying to build a simple command line tool and package it with `setup.py`. Here's my directory structure.
```
├── s3_md5
│   ├── __init__.py
│   ├── cmd.py
│   └── src
│       ├── __init__.py
│       ├── cli.py
│       ├── logger.py
│       ├── s3_file.py
│       └── s3_md5.py
├── setup.py
└── test
    ├── __init__.py
    ├── conftest.py
    ├── test_calculate_range_bytes_from_part_number.py
    ├── test_get_file_size.py
    ├── test_get_range_bytes.py
    └── test_parse_file_md5.py
```
In `setup.py`
```python
'''installer'''
from os import getenv
from setuptools import find_packages, setup
setup(
name="s3-md5",
description="Get fast md5 hash from an s3 file",
version=getenv('VERSION', '1.0.0'),
url="https://github.com/sakibstark11/s3-md5-python",
author="Sakib Alam",
author_email="16sakib@gmail.com",
license="MIT",
install_requires=[
"boto3==1.26.41",
"boto3-stubs[s3]",
],
extras_require={
"develop": [
"autopep8==2.0.1",
"moto==4.0.12",
"pytest==7.2.0",
"pylint==3.1.0",
],
"release": ["wheel==0.43.0"]
},
packages=find_packages(exclude=["test", "venv"]),
python_requires=">=3.10.12",
entry_points={
'console_scripts': ['s3-md5=s3_md5.cmd:run'],
}
)
```
And `cmd.py`
```
'''driver'''
from time import perf_counter
from boto3 import client
from src.cli import parse_args
from src.logger import logger
from src.s3_md5 import parse_file_md5
def run():
    # some stuff with imports from src
if __name__ == "__main__":
run()
```
When I run the `cmd.py` from the `s3_md5` directory itself, everything is fine. But when I build and install it as a command line tool and try to run that, it throws
```
ModuleNotFoundError: No module named 'src'
```
I checked the lib folder and it does contain the src folder. Oddly enough, when I use `s3_md5.src.cli` within `cmd.py`, the command line tool works, but running the script from the directory doesn't, as it references the installed package rather than the code itself, which causes issues for development usage.
I've tried reading everything I can about the Python module system, but I can't wrap my head around this. I suspect it's to do with PYTHONPATH not knowing where to look for `src`, but I could be wrong. I tried using a relative import, which works for the command line tool but throws "no known parent package" when running `python cmd.py` directly. |
Python ModuleNotFoundError for command line tools built with setup.py |
|python|python-3.x|python-import|python-module| |
I am writing to seek guidance regarding the conversion of my Kivymd main.py file to an APK file. Although I am new to Kivymd, I have managed to create a testing mobile app with the help of online resources. However, when attempting to convert the main.py file to an APK file using Google Colab notebook, I encountered an error and was unable to generate the desired mobile app APK file.
I kindly request your assistance in understanding the correct process for converting the Kivymd main.py file to an APK file. Your guidance in this matter would be greatly appreciated.
Thank you for your attention to this matter.
Sincerely, |
I'm currently using JPA Buddy in my Spring Boot application to generate JPA entities from my Sakila database. But when I use this feature, JPA Buddy always lists all of the database tables and views, including system tables and views: [enter image description here](https://i.stack.imgur.com/mmAQq.png)
How can I make JPA Buddy list only data tables and not system tables?
I expect JPA Buddy to list only data tables and not system tables. |
JPA buddy error when generating JPA Entities from DB |
|spring-boot|jpa-buddy| |
null |
I have a grid of images in my app which will open a scroll view of the images if one is clicked. But no matter what image I click, the scroll view will always open on image 1 with 2,3,4,5,6,7,8,9 below it. I want to make it so if a user clicks on image 6 it will open the scroll view on image 6 with images 1,2,3,4,5 above it and 7,8,9 below it. Here is my code:
```swift
LazyVGrid(columns: threeColumnGrid, alignment: .center) {
ForEach(viewModel.posts) { post in
NavigationLink(destination: ScrollPostView(user: user)){
KFImage(URL(string: post.imageurl))
.resizable()
.aspectRatio(1, contentMode: .fit)
.cornerRadius(15)
}
}
.overlay(RoundedRectangle(cornerRadius: 14)
.stroke(Color.black, lineWidth: 2))
}
```
|
How to make a scroll view of 9 images in a forEach loop open on image 6 if image 6 is clicked on from a grid? |
|swift|swiftui| |
In "@angular/material": "16.2.12", you need to use the MDC format.
.mat-checkbox-inner-container does not appear to be used anymore.
Use this instead
.mdc-checkbox {
margin: 0px 8px auto !important;
} |
I'm currently using JPA Buddy in my Spring Boot application to generate JPA entities from my Sakila database. But when I use this feature, JPA Buddy always lists all of the database tables and views, including system tables and views ([here is what my JPA Buddy reverse engineering shows](https://i.stack.imgur.com/mmAQq.png)).
How can I make JPA Buddy list only data tables and not system tables?
I expect JPA Buddy to list only data tables and not system tables. |
The previous solutions here didn't work properly for me; with Yannis's answer, the footnotes did not number themselves automatically. The solution was to add a new FootnoteReferenceMark to the footnote in order to have automatically numbered references in both the text and the footnote.
```
// Create and append footnote
Footnote footnote = new Footnote() { Id = footnoteId };

// Create and insert footnote reference
// Make fontsize 20 and superscript
ParagraphProperties paraProps = new ParagraphProperties();
ParagraphStyleId styleId = new ParagraphStyleId() { Val = "FootnoteText" }; // Style for footnote text
paraProps.Append(styleId);
Paragraph para = new Paragraph(paraProps);
Run footnoteTextRun = new Run();
var footnoteTextContent = new Text(footnoteText);
footnoteTextContent.Space = SpaceProcessingModeValues.Preserve;
footnoteTextRun.Append(footnoteTextContent);

// Add FootnoteReferenceMark
Run footnoteReferenceMarkRun = new Run();
footnoteReferenceMarkRun.RunProperties = new RunProperties(new FontSize() { Val = "20" }, new VerticalTextAlignment() { Val = VerticalPositionValues.Superscript });
var FnreferenceMark = new FootnoteReferenceMark();
footnoteReferenceMarkRun.Append(FnreferenceMark);
para.Append(footnoteReferenceMarkRun);
para.Append(footnoteTextRun);
footnote.Append(para);
docMainPart.FootnotesPart.Footnotes.Append(footnote);

// Add footnote reference to the body text
var reference = new FootnoteReference() { Id = footnoteId };
Run footnoteReferenceBodyRun = new Run();
footnoteReferenceBodyRun.RunProperties = new RunProperties(new FontSize() { Val = "24" }, new VerticalTextAlignment() { Val = VerticalPositionValues.Superscript });
footnoteReferenceBodyRun.Append(reference);
run.Parent.InsertAfter(footnoteReferenceBodyRun, run);
```
|
|php|laravel|eloquent|uuid| |
Here is the specification for insert: https://en.cppreference.com/w/cpp/container/map/insert. Particularly, you need to provide a pair representing the key and value (for an `std::map`, value_type is `std::pair<const Key, T>`).
Your example of `my_map.insert("TEST", make_pair<string,int>("ONE",2));` doesn't work since there isn't any `insert` overload that matches your code.
However, there are a number of ways to insert:
```
using new_t = map<string,int> ;
map<string,new_t> easy_map;
easy_map.insert({"A", new_t{make_pair("A", 2)}});
easy_map.insert(make_pair("B", new_t{make_pair("B", 2)}));
easy_map.insert({"C", new_t{{"C", 3}}});
easy_map.insert({"D", {{"D", 4}}});
for (auto &&i : easy_map)
{
for (auto &&j : i.second)
{
cout << i.first << " " << j.first << " " << j.second << endl;
}
}
``` |
I write code for the ESP32 microcontroller.
I set up a class named "dmhWebServer".
This is the call to initiate my classes:
An object of the dmhFS class is created and I pass it to the constructor of the dmhWebServer class by reference. For my error, see the last code block that I posted. The other code blocks explain the path to where the error shows up.
```
#include <dmhFS.h>
#include <dmhNetwork.h>
#include <dmhWebServer.h>
void setup()
{
// initialize filesystems
dmhFS fileSystem = dmhFS(SCK, MISO, MOSI, CS); // compiler is happy I have an object now
// initialize Activate Busy Handshake
dmhActivateBusy activateBusy = dmhActivateBusy();
// initialize webserver
dmhWebServer webServer(fileSystem, activateBusy); // compiler also happy (call by reference)
}
```
The class dmhFS has a custom constructor (header file, all good in here):
```
#include <Arduino.h>
#include <SD.h>
#include <SPI.h>
#include <LittleFS.h>
#include <dmhPinlist.h>
#ifndef DMHFS_H_
#define DMHFS_H_
class dmhFS
{
private:
// serial peripheral interface
SPIClass spi;
String readFile(fs::FS &fs, const char *path);
void writeFile(fs::FS &fs, const char *path, const char *message);
void appendFile(fs::FS &fs, const char *path, const char *message);
void listDir(fs::FS &fs, const char *dirname, uint8_t levels);
public:
dmhFS(uint16_t sck, uint16_t miso, uint16_t mosi, uint16_t ss);
void writeToSDCard();
void saveData(std::string fileName, std::string contents);
String readFileSDCard(std::string fileName);
};
#endif
```
Header file of the dmhWebServer class (not the whole thing):
```
public:
dmhWebServer(dmhFS &fileSystem, dmhActivateBusy &activateBusyHandshake);
};
```
This is the dmhWebServer class:
```
class dmhWebServer
{
private:
// create AsyncWebServer object on port 80
AsyncWebServer server = AsyncWebServer(80);
// server to client communication
AsyncEventSource events = AsyncEventSource("/events");
void setupHandlers();
void setupWebServer();
void serveFiles();
void setupStaticFilesHandlers();
void setupEventHandler();
void setupPostHandler();
void onRequest(AsyncWebServerRequest *request);
// http communication
void sendToClient(const char *content, const char *jsEventName);
void receiveFromClient(std::array<const char *, 2U> par);
// uses SD Card
dmhFS sharedFileSystem;
// uses shared memory for data exchange with a activate busy handshake
dmhActivateBusy abh;
public:
dmhWebServer(dmhFS &fileSystem, dmhActivateBusy &activateBusyHandshake);
};
```
This is the constructor of the dmhWebServer class:
```
#include <dmhWebServer.h>
#include <dmhFS.h>
#include <dmhActivateBusy.h>
// This is the line where the compiler throws an error ,character 85 is ")"
dmhWebServer::dmhWebServer(dmhFS &fileSystem, dmhActivateBusy &activateBusyHandshake)
{
// webserver sites handlers
setupHandlers();
abh = activateBusyHandshake;
sharedFileSystem = fileSystem;
// start web server, object "server" is instantiated as private member in header file
server.begin();
}
```
My compiler says:
> src/dmhWebServer.cpp:5:85: error: no matching function for call to 'dmhFS::dmhFS()'
Line 5:85 is at the end of the constructor function declaration
This is my first question on Stack Overflow after only lurking around here :) I'll try to clarify if something is not alright with the question.
I checked that I am doing pass-by-reference in C++ correctly. I am giving the dmhWebServer constructor what it wants.
What is the problem here? |
convert csv file with json data inside to a column, rows table in 2nd csv file |
|json|ruby|csv| |
null |
I'm encountering an issue with Zustand where updated values are not retrieved when I try to access them. Specifically, I'm setting the totalPrice value as an initial value in Zustand, and after updating it, when I try to retrieve the value again, the initial value is displayed first, followed by the updated value. This behavior is consistent for all Zustand values I've configured. I'm puzzled as to why this is happening. Any insights or suggestions on how to resolve this issue would be greatly appreciated.
This is my zustand code.
```
import {create} from 'zustand';
import {persist} from 'zustand/middleware';
interface IFirstSurveyStore {
zustandMonitorUsage: -1 | 1 | 2 | 4 | 8 | 16;
setZustandMonitorUsage: (newUsage: -1 | 1 | 2 | 4 | 8 | 16) => void;
zustandTotalPrice: number;
setZustandTotalPrice: (newPrice: number) => void;
resetFirstSurvey: () => void;
}
const useFirstSurveyStore = create<IFirstSurveyStore>()(
persist(
set => ({
zustandMonitorUsage: -1,
setZustandMonitorUsage: (newUsage: -1 | 1 | 2 | 4 | 8 | 16) =>
set({zustandMonitorUsage: newUsage}),
zustandTotalPrice: 10,
setZustandTotalPrice: (newPrice: number) =>
set({zustandTotalPrice: newPrice}),
resetFirstSurvey: () => {
set({
zustandMonitorUsage: -1,
zustandTotalPrice: 0,
});
},
}),
{name: 'FirstSurvey'},
),
);
export default useFirstSurveyStore;
export type {IFirstSurveyStore};
```
After I update, I checked via the developer tools that the updated value is stored correctly. But if I change to the page that logs zustandTotalPrice, it is still the updated value in the developer tools; yet when I log it, the initial value (10) comes first and the updated value comes after.
```
const DoubleHandleRangeSlider = () => {
const {setZustandTotalPrice, zustandTotalPrice} = useFirstSurveyStore();
const [maxValue, setMaxValue] = useState(zustandTotalPrice);
const handleMaxChange = (event: React.ChangeEvent<HTMLInputElement>) => {
const newMax: number = parseInt(event.target.value, 10);
setMaxValue(newMax);
};
useEffect(() => {
const saveMaxValue = () => {
setZustandTotalPrice(maxValue);
};
const timerId = setTimeout(saveMaxValue, 500);
return () => clearTimeout(timerId);
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [maxValue]);
console.log(zustandTotalPrice);
return(<></>);
};
```
In my opinion, zustandTotalPrice changed the value and persisted it (as seen in the developer tools), so if I read it, the updated value should come back. I can't understand why the initial value appears first. |
Zustand doesn't retrieve updated values, displays initial values instead |