Overlay text on image not rendered correctly when using custom Hindi font with GD library functions |
|php|codeigniter|gd| |
I will add another answer, as none of the present ones feels complete or general to me.
To solve the original question, getting a list of LoRA-compatible modules programmatically, I tried using
`target_modules = 'all-linear'`,
which is available in recent PEFT versions.
However, that raised an error when applied to the `google/gemma-2b` model
(dropout layers were for some reason added to the `target_modules`; see below for the layers supported by LoRA).
From the documentation of the PEFT library, LoRA supports only the following modules: `torch.nn.Linear`, `torch.nn.Embedding`, `torch.nn.Conv2d`, `transformers.pytorch_utils.Conv1D`.
I ended up creating this function for getting all LoRA-compatible modules from arbitrary models:
```
import torch
from transformers import Conv1D

def get_specific_layer_names(model):
    # Create a list to store the layer names
    layer_names = []
    # Recursively visit all modules and submodules
    for name, module in model.named_modules():
        # Check if the module is an instance of the specified layers
        if isinstance(module, (torch.nn.Linear, torch.nn.Embedding, torch.nn.Conv2d, Conv1D)):
            # model name parsing
            layer_names.append('.'.join(name.split('.')[4:]).split('.')[0])
    return layer_names

list(set(get_specific_layer_names(model)))
```
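As an aside, the name-parsing line can be checked in isolation. A small sketch (the module paths below are hypothetical, in the style of Hugging Face transformer models) shows it keeps only the leaf name after dropping the first four path components, which also means very shallow modules collapse to an empty string:

```python
def leaf_name(name):
    # Mirrors the parsing line above: drop the first four dotted path
    # components, then keep the first component of what remains.
    return '.'.join(name.split('.')[4:]).split('.')[0]

# Hypothetical module paths
print(leaf_name("model.layers.0.self_attn.q_proj"))  # -> 'q_proj'
print(leaf_name("model.layers.0.mlp.down_proj"))     # -> 'down_proj'
print(leaf_name("model.embed_tokens"))               # -> '' (shallow path)
```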
Which yields on gemma-2b:
```
['down_proj',
 'o_proj',
 'k_proj',
 'q_proj',
 'gate_proj',
 'up_proj',
 'v_proj']
```
This list was valid for a `target_modules` selection.
`peft.__version__` is `'0.10.1.dev0'` and `transformers.__version__` is `'4.39.1'`. |
## Preamble
I've been coding for a while now and have always loved doing it. I code at every opportunity I get, no matter how complex or out of my depth the problem is.
The unfortunate thing I keep finding is that both my code and my algorithms are inefficient. I look at other people's programs that do the same thing as mine, just faster and better, and I can never understand how they came up with those algorithms. Not just the algorithms, but the code as well: they use constructs that make no sense to me and that seem to make their code lightning fast.
Not only that, but I always seem to make my solutions overly complex, which generally makes them harder to understand.
### For example
Here is my code from a challenge on Leetcode [Word Break II](https://leetcode.com/problems/word-break-ii/)
```
#define HASHSIZE 1217
class Solution {
private:
vector<string> Find(string newval, string* hashdict) {
// stores all the strings that have been found
vector<string> returnvals = vector<string>();
// checks if there is no more splits that can be done
if (newval.find_last_of(" ") + 1 >= newval.length()) {
returnvals.push_back(newval);
return returnvals;
}
// starts at the end of the last word (if there is none it will go to the start)
for (int i = 0; newval.find_last_of(" ") + 1 + i <= newval.length();
i++) {
// checks if i is greater than the longest word
if (i > 10) {
break;
}
// creates a substring of a possible word
string testval = newval.substr(newval.find_last_of(" ") + 1, i);
// calculates the hash value of the word
int hashval = 0;
int count = 1;
for (char let : testval) {
hashval += count * (int)let;
count++;
}
// finds if it is in the hash table
int add = 0;
string dictval = hashdict[(add + hashval) % HASHSIZE];
while (testval.compare(dictval) != 0 &&
hashdict[(add + hashval) % HASHSIZE] != "") {
add++;
dictval = hashdict[(add + hashval) % HASHSIZE];
}
// if it is in the hash table then it adds a space to it and then calls itself with the new string
if (hashdict[(add + hashval) % HASHSIZE] != "") {
// checks if this is the last word
if (newval.find_last_of(" ") + 1 + i == newval.length()) {
returnvals.push_back(newval);
} else {
string tempnewvals =
newval.substr(0, newval.find_last_of(" ") + 1);
tempnewvals +=
newval.substr(newval.find_last_of(" ") + 1, i);
tempnewvals += " ";
tempnewvals += newval.substr(
newval.find_last_of(" ") + 1 + i,
newval.length() - newval.find_last_of(" ") - 1 - i);
vector<string> tempnewreturnvals =
Find(tempnewvals, hashdict);
// adds all the new strings to its return values
for (string val : tempnewreturnvals) {
returnvals.push_back(val);
}
}
}
}
return returnvals;
}
public:
vector<string> wordBreak(string s, vector<string>& wordDict) {
// calculates the hash value for the word dictionary and stores it in the table
string hash[HASHSIZE];
for (string word : wordDict) {
int sum = 0;
int count = 1;
for (char let : word) {
sum += count * (int)let;
count++;
}
int add = 0;
while (hash[(add + sum) % HASHSIZE] != "") {
add++;
}
hash[(add + sum) % HASHSIZE] = word;
}
// finds all combinations of words
return Find(s, hash);
}
};
```
I thought this was a generally good solution to the problem at hand, and then I looked at someone else's solution:
```
class Solution {
public:
vector<string> wordBreak(string s, vector<string>& wordDict) {
int n=s.size();
unordered_set<string>word_Set(wordDict.begin(),wordDict.end());
vector<vector<string>>dp(n+1,vector<string>());
dp[0].push_back("");
for(int i = 0; i < n; ++i){
for(int j = i+1; j <= n; ++j){
string temp = s.substr(i, j-i);
if(dp[i].size() > 0 && word_Set.count(temp)){
for(auto x : dp[i]){
dp[j].emplace_back(x + (x == "" ? "" : " ") + temp);
}
}
}
}
return dp[n];
}
};
```
It's about a quarter the size of my code and twice as fast.
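For comparison, here is a sketch of the same bottom-up DP in Python (my own transcription of the idea, not the posted C++): `dp[i]` holds every sentence covering `s[:i]`, and each dictionary word `s[i:j]` extends those sentences into `dp[j]`.

```python
def word_break(s, word_dict):
    words = set(word_dict)            # O(1) membership instead of a hand-rolled hash table
    n = len(s)
    dp = [[] for _ in range(n + 1)]   # dp[i]: all sentences covering s[:i]
    dp[0] = [""]
    for i in range(n):
        if not dp[i]:
            continue                  # s[:i] cannot be segmented at all
        for j in range(i + 1, n + 1):
            piece = s[i:j]
            if piece in words:
                for prefix in dp[i]:
                    dp[j].append(piece if prefix == "" else prefix + " " + piece)
    return dp[n]

print(word_break("catsanddog", ["cat", "cats", "and", "sand", "dog"]))
```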
### A better example of the sheer unnecessary complexity of my code
Here is my code from another challenge on Leetcode [Subarrays With K Different Integers](https://leetcode.com/problems/subarrays-with-k-different-integers/description/)
```
class Solution {
private:
int getAmount(vector<int>& nums, int k) {
// hash table of the occurrences of each number
unordered_map<int, vector<int>> hashtable;
// hash table of the previously found sub arrays
unordered_map<int, vector<int>> otherhashtable;
// pointers that need to be rechecked for smaller sub arrays
vector<int*> pointers = vector<int*>();
int count = 0;
// hashes all the values in the nums array and stores their indices (I realise now this should really be a dictionary rather than a hash table, but bygones be bygones)
for (int num : nums) {
hashtable[num].push_back(count);
for (int val : hashtable[num]) {
}
count++;
}
count = 0;
// setting some variables before the start
int uniquevals = 1;
int startpoint = 0;
int endpoint = 0;
while (endpoint < nums.size()) {
// checks if there are less than the required unique numbers in the sub array
if (uniquevals < k) {
// if so the algorithm move forward the end pointer and checks if there is now a new unique number in the sub array
endpoint++;
bool unique = true;
if (endpoint >= nums.size()) {
break;
}
// checks the value at the end pointer in the hash table to see if it is already within the bounds of the sub array
for (int val : hashtable[nums[endpoint]]) {
if (val >= startpoint && val < endpoint) {
unique = false;
break;
}
}
if (unique) {
uniquevals++;
}
// checks if there is not enough unique values
} else if (uniquevals > k) {
// increments the startpointer
startpoint++;
// checks if the amount of unique values has changed
bool change = true;
for (int val : hashtable[nums[startpoint - 1]]) {
if (val >= startpoint && val <= endpoint) {
change = false;
break;
}
}
// if so it increments the count and decrements the unique value counter
if (change) {
uniquevals--;
count++;
otherhashtable[startpoint].push_back(endpoint);
pointers.push_back(new int[2]{startpoint, endpoint});
// I'm commenting in post (I know, silly me) and have no idea why I need this here, but I get the wrong answer if I don't
endpoint++;
bool unique = true;
if (endpoint >= nums.size()) {
break;
}
for (int val : hashtable[nums[endpoint]]) {
if (val >= startpoint && val < endpoint) {
unique = false;
break;
}
}
if (unique) {
uniquevals++;
}
// if not, simply increment the counter
} else {
otherhashtable[startpoint].push_back(endpoint - 1);
pointers.push_back(new int[2]{startpoint, endpoint - 1});
count++;
}
} else {
// this checks if the end pointer is at the end, if so it just increments the start pointer until the unique vals counter is decremented
if (endpoint == nums.size() - 1) {
while (uniquevals == k) {
count++;
startpoint++;
bool change = true;
for (int val : hashtable[nums[startpoint - 1]]) {
if (val >= startpoint && val <= endpoint) {
change = false;
break;
}
}
if (change) {
uniquevals--;
} else {
otherhashtable[startpoint].push_back(endpoint);
pointers.push_back(
new int[2]{startpoint, endpoint});
}
}
// else it increments the counter and the endpointer
} else {
otherhashtable[startpoint].push_back(endpoint);
pointers.push_back(new int[2]{startpoint, endpoint});
count++;
endpoint++;
bool unique = true;
if (endpoint >= nums.size()) {
break;
}
// checks for a new unique value in the sub array
for (int val : hashtable[nums[endpoint]]) {
if (val >= startpoint && val < endpoint) {
unique = false;
break;
}
}
if (unique) {
uniquevals++;
}
}
}
}
// again, I'm commenting in post and don't know why I need this here, but it doesn't work without it
endpoint--;
while (uniquevals == k) {
startpoint++;
bool change = true;
for (int val : hashtable[nums[startpoint - 1]]) {
if (val >= startpoint && val <= endpoint) {
change = false;
break;
}
}
if (change) {
uniquevals--;
} else {
count++;
otherhashtable[startpoint].push_back(endpoint);
pointers.push_back(new int[2]{startpoint, endpoint});
}
}
// this finds all the smaller sub arrays and increments the counter for each valid one
for (int* point : pointers) {
startpoint = point[0];
endpoint = point[1];
bool isvalid = true;
while (isvalid) {
isvalid = false;
for (int val : hashtable[nums[endpoint]]) {
if (val >= startpoint && val < endpoint) {
isvalid = true;
break;
}
}
if (isvalid) {
endpoint--;
bool done = false;
// makes sure it has not been searched before
for (int val : otherhashtable[startpoint]) {
if (val == endpoint) {
done = true;
}
}
if (!done) {
count++;
}
}
}
}
return count;
}
public:
int subarraysWithKDistinct(vector<int>& nums, int k) {
return getAmount(nums, k);
}
};
```
I thought again that this was a pretty decent solution to the problem, but yet again I was proven wrong
```
int cnt[20001];
class Solution {
public:
int subarraysWithKDistinct(vector<int>& nums, int k) {
ios_base::sync_with_stdio(false);
cin.tie(nullptr);
return subcount(nums, k) - subcount(nums, k-1);
}
int subcount(vector<int>& nums, int k) {
memset(cnt, 0, 20001 * sizeof(int));
int c=1, res=0;
cnt[nums[0]] = 1;
auto head = nums.begin(), tail = nums.begin();
while (head < nums.end()) {
if (c <= k && tail < nums.end()) {
tail++;
if (tail < nums.end()) {
cnt[*tail]++;
if (cnt[*tail] == 1) c++;
}
} else {
res += (int)(tail - head - 1);
cnt[*head]--;
if (cnt[*head] == 0) c--;
head++;
}
}
return res;
}
};
```
This one uses much less memory and is orders of magnitude faster.
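The core trick in that faster solution is counting subarrays with *at most* k distinct values, twice: exactly(k) = atMost(k) - atMost(k-1), with a shrinking sliding window doing each count in O(n). A Python sketch of the same idea (not the posted C++):

```python
from collections import defaultdict

def subarrays_with_k_distinct(nums, k):
    def at_most(k):
        if k < 0:
            return 0
        count = defaultdict(int)  # value -> occurrences inside the window
        left = total = 0
        for right, x in enumerate(nums):
            count[x] += 1
            while len(count) > k:          # shrink until at most k distinct
                count[nums[left]] -= 1
                if count[nums[left]] == 0:
                    del count[nums[left]]
                left += 1
            total += right - left + 1      # valid subarrays ending at right
        return total
    return at_most(k) - at_most(k - 1)

print(subarrays_with_k_distinct([1, 2, 1, 2, 3], 2))  # -> 7
```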
## Enough of the preamble: cutting to the chase
- What process do you need to go through to come up with not just a solution, but an acceptable, fast solution to a problem?
- How do you make the code efficient for the algorithm it is trying to implement?
- How do you conclude that you most probably have the fastest or most efficient solution and that there is no better one? |
How to learn to make algorithms and code more efficient? |
|c++|algorithm|performance|optimization| |
I am using an API which works, but I am having trouble with the Location header.
The API documentation states: if a 201 response is returned, it means success, and the location of the newly created data is in the HTTP response 'Location' header.
So I get the 201 response but no data. I assume the data's location is in the Location header; how do I get that data?
This doesn't seem to work for me:
https://stackoverflow.com/questions/21291675/php-curl-read-a-specific-response-header
```
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $api_url);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $post_data);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, $header);
$result = curl_exec($ch);
```
|
I have been given the task of reading a file, processing the data and returning the results. As it is an async process, I run into the problem that you cannot return a value from inside a .then, and the caller just receives an unresolved Promise. I am using fetch. I know this issue has been discussed a lot; apologies. Here is the code. Thank you.
```
function myFunction() {
    async function fetchData() {
        await fetch("./my_file.txt")
            .then((res) => res.text())
            .then((text) => {
                // I want to return text here
            })
            .catch((e) => console.error(e));
    }
    const response = fetchData();
    // I can process the data here and then return it
    // process data
    // return results
}
```
|
Fetch file, read contents and return contents all in one function |
|javascript|promise|fetch| |
Add this to the `application` tag in your main-folder AndroidManifest, or fork the package you used, add it there, and push it back to pub.dev :)
```
<application>
<receiver android:name="com.farsitel.bazaar.auth.receiver.AuthReceiver"
android:exported="true">
</receiver>
</application>
``` |
How about trying this one?
```xml
<p class="post-date">
<time datetime="2023-01-01" th:text="${#dates.format(item.getCreatedDate(), 'dd-MM-yyyy HH:mm')}">Oct 5, 2023</time>
</p>
```
Explanation:
I saw that the error says the `format` method of `org.thymeleaf.expression.Temporals` does not have a prototype of (java.sql.Timestamp, java.lang.String). From that I can tell that your `item.getCreatedDate()` is of the `java.sql.Timestamp` class:
```java
package java.sql;
public class Timestamp extends java.util.Date {
...
}
```
This class extends `java.util.Date`, so you can use `org.thymeleaf.expression.Dates` instead:
```java
package org.thymeleaf.expression;
public final class Dates {
public String format(final Date target, final String pattern) {
...
}
}
``` |
**I was facing the same error while using Next.js.**
[![enter image description here][1]][1]
This is how I handled this error:
This is how I handled the error: you need to update your Node.js version; use Node v20, 64-bit.
**Step 1:** Uninstall the existing Node.js
**Step 2:** Install the latest Microsoft Visual C++ Redistributable from here: https://learn.microsoft.com/en-US/cpp/windows/latest-supported-vc-redist?view=msvc-170
**Step 3:** Install Node.js v20+ (64-bit) from here: https://nodejs.org/en/download
**Step 4:** Restart the system
**Step 5:** Run the command `npm run dev`
[![enter image description here][2]][2]
**Output:**
[![enter image description here][3]][3]
[1]: https://i.stack.imgur.com/eoneZ.png
[2]: https://i.stack.imgur.com/5CrVD.png
[3]: https://i.stack.imgur.com/k8kwx.png |
Visual Studio scaffolded Create form does not work in .NET Core 8 MVC |
|asp.net-mvc|foreign-keys|visual-studio-2022|modelstate|asp.net-mvc-scaffolding| |
clang is part of LLVM; there is no package for clang alone. Instead, install llvm14.
But you should first deinstall llvm11 if you no longer need it:
`pkg delete llvm11` |
I would like to write an UPDATE query. I need to update the target_table field in tblNames1 based on tblNames2.
**tblNames1**:
| filename | Description | target table |
|---|---|---|
| /app/data/shared/mbs/test.yaml | select * from test.fn_hierarchy_prod_group(1); | |
| /app/data/shared/nkm/test1.yaml | select *from run_update_query | |
| /app/data/shared/nkm/test5.yaml | select *from func_datad_addr_1 | |
| /app/data/shared/nkm/test2.yaml | INSERT INTO a_base(evnt_nbr,triggering_evnt,)SELECT evnt_nbr,triggering_evnt FROM delim; | |
**tblNames2**:
|ID | Description | target table
|----|-----|----------------------
| 1082 | test.fn_hierarchy_prod_group | dba.l,dba.z
| 1091 | func_datad_addr | dba.n
| 1099 | fn_hierarchy_customer | dba.m
| 1100 | run_update_query | dba.j
Output:
The query should return and update the target table column in tblNames1 in two ways: i) by comparing the description field of tblNames2 against tblNames1 and populating the target table of tblNames2 into tblNames1; ii) if it is an INSERT statement, by populating the target from the table name after INSERT INTO (the pattern is not fixed, though):
| filename | Description | target table |
|---|---|---|
| /app/data/shared/mbs/test.yaml | select * from test.fn_hierarchy_prod_group(1); | dba.l |
| /app/data/shared/mbs/test.yaml | select * from test.fn_hierarchy_prod_group(1); | dba.z |
| /app/data/shared/nkm/test1.yaml | select *from run_update_query | dba.j |
| /app/data/shared/nkm/test5.yaml | select *from func_datad_addr_1 | |
| /app/data/shared/nkm/test2.yaml | INSERT INTO a_base(evnt_nbr,triggering_evnt,)SELECT evnt_nbr,triggering_evnt FROM delim; | a.base |
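As a side note, before writing the UPDATE itself, rule ii) (grab the identifier that follows INSERT INTO) can be prototyped outside the database; PostgreSQL's `substring(col from pattern)` accepts a broadly similar POSIX pattern. A hedged Python sketch, assuming the table name is the first identifier after the keyword:

```python
import re

# Pull the table name that immediately follows INSERT INTO (hypothetical helper)
INSERT_TARGET = re.compile(r"insert\s+into\s+([A-Za-z_][A-Za-z0-9_.]*)", re.IGNORECASE)

def insert_target(description):
    m = INSERT_TARGET.search(description)
    return m.group(1) if m else None

print(insert_target("INSERT INTO a_base(evnt_nbr,triggering_evnt,)SELECT evnt_nbr,triggering_evnt FROM delim;"))  # -> a_base
print(insert_target("select * from test.fn_hierarchy_prod_group(1);"))  # -> None
```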
Compare fields in two tables |
|sql|postgresql|postgresql-9.1|postgresql-9.4|postgresql-9.5| |
I use MSMQ queues for a web application made with IIS. Usually I set the Everyone permission (write and read) on every queue.
Sometimes, at a random moment, the Everyone permission is lost. Why? Nobody removes it manually.
Only some queues lose the permission. Could it be a problem with the user who owns the queue? Some Windows policy? |
Private queues MSMQ lose Everyone permission |
|permissions|queue|msmq| |
I'm deleting your post because this seems like a programming-specific question, rather than a conversation starter. With more detail added, this may be better as a Question rather than a Discussions post. Please see [this page](https://stackoverflow.com/help/how-to-ask) for help on asking a question on Stack Overflow. If you are interested in starting a more general conversation about how to approach a technical issue or concept, feel free to make another Discussion post. |
I'm deleting your post because this seems like a programming-specific question, rather than a conversation starter. With more detail added, this may be better as a Question rather than a Discussions post.
Please see [this page](https://stackoverflow.com/help/how-to-ask) for help on asking a question on Stack Overflow.
If you are interested in starting a more general conversation about how to approach an issue or concept related to the topic of this collective, feel free to make another Discussion post. You can check the discussions guidelines at https://stackoverflow.com/help/discussions-guidelines |
You cannot.
Closures have anonymous, opaque types and only implement the `Fn*` traits (and auto traits) they can. Rust does not have reflection, and its reflection-like mechanisms aren't possible on non-local types.
I have encountered an issue when rendering a visualization of a 2D matrix using Pygame (I am aware this is not the best library for the job, but anyway). The issue arises when I attempt to render each node in the matrix as a rectangle. Each node is an instance of the Node class and has x1, y1, x2 and y2 values derived from its position in the array; x1 and y1 are the coordinates of the first point of the rectangle, and x2 and y2 are the coordinates of the second point. When I use lines to represent the nodes, everything renders as I expected. However, when I use rectangles, the rectangles clump together. I noticed these are the rectangles representing the nodes after the 0th row and column positions in the 2D list. Does anyone know why this is? I have provided script A (lines) and script B (rectangles) with images of the output for review. Thanks.
Script A (lines)
```
import pygame
import time
pygame.init()
class Node:
count = 0
def __init__(self, row, col):
Node.count += 1
self.id = Node.count
self.row = row
self.col = col
self.x1 = col * 10
self.y1 = row * 10
self.x2 = col * 10 + 5
self.y2 = row * 10 + 5
def display(matrix):
for i in matrix:
print(i)
matrix = [[Node(i, j) for j in range(10)] for i in range(10)]
win = pygame.display.set_mode((500, 500))
win.fill((0, 0, 0,))
for i in range(len(matrix)):
for node in matrix[i]:
pygame.draw.line(win, (0, 0, 255), (node.x1, node.y1), (node.x2, node.y2), width=1 )
pygame.display.update()
time.sleep(0.1)
```
Script B (rectangles)
```
import pygame
import time
pygame.init()
class Node:
count = 0
def __init__(self, row, col):
Node.count += 1
self.id = Node.count
self.row = row
self.col = col
self.x1 = col * 10
self.y1 = row * 10
self.x2 = col * 10 + 5
self.y2 = row * 10 + 5
def display(matrix):
for i in matrix:
print(i)
matrix = [[Node(i, j) for j in range(10)] for i in range(10)]
win = pygame.display.set_mode((500, 500))
win.fill((0, 0, 0,))
for i in range(len(matrix)):
for node in matrix[i]:
pygame.draw.rect(win, (0, 0, 255), (node.x1, node.y1, node.x2, node.y2))
pygame.display.update()
time.sleep(0.1)
time.sleep(10)
```
[![enter image description here](https://i.stack.imgur.com/hoYfw.png)](https://i.stack.imgur.com/NZAxV.png)
The fact that the lines render in the correct positions leaves me puzzled as to why the rectangles do not, as both scripts use the same coordinates. |
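One thing worth double-checking when comparing the two scripts: `pygame.draw.line` really does take two points, but `pygame.draw.rect` interprets its third argument as `(left, top, width, height)`, not as two corners. Under that reading, the fourth value passed in script B is `x2 = col * 10 + 5`, which grows with the column index, so rectangles in later columns get wider and overlap their neighbours. A quick arithmetic sketch (no pygame needed) of the widths the rectangles would receive:

```python
# Reproduce the Node coordinates and interpret them the way pygame.draw.rect
# does: (left, top, width, height) rather than (x1, y1, x2, y2).
def rect_args(row, col):
    x1, y1 = col * 10, row * 10
    x2, y2 = col * 10 + 5, row * 10 + 5
    return (x1, y1, x2, y2)  # x2 ends up used as the width, y2 as the height

widths = [rect_args(0, col)[2] for col in range(4)]
print(widths)  # -> [5, 15, 25, 35]: past column 0, wider than the 10px cell spacing
```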
To install **cudatoolkit and cudnn** via Anaconda, you can use the following command: `conda install -c conda-forge cudatoolkit=11.2 cudnn=8.1.0`
Be aware that the **tensorflow version** must be less than **2.11**.
To check that everything installed properly:
1) In a command prompt, run the `nvidia-smi` command. If it shows "command not found", you must install the latest GPU driver.
2) Use this Python script to configure the GPU in your program:
```
import tensorflow as tf

if tf.config.list_physical_devices('GPU'):
    print('GPU is available.')
else:
    print('GPU is NOT available. Make sure the TensorFlow version is less than 2.11 and all GPU drivers are installed.')

# Define and configure TensorFlow session
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.compat.v1.Session(config=config)
```
|
I am new to React Native and am trying to use navigation for two screens. When I add my LoginScreen and RegisterScreen inside Stack.Navigator, my screens are not visible; it shows a blank screen. If I add LoginScreen outside the Stack.Navigator section, the LoginScreen looks fine, but obviously I am not able to navigate to RegisterScreen. Can anyone help with that?
```
import 'react-native-gesture-handler'
import React from 'react';
import {View, Text, StatusBar } from 'react-native';
import LoginScreen from './src/screens/LoginScreen';
import RegisterScreen from './src/screens/RegisterScreen';
import { Colors } from './src/utils/Colors'
import { createStackNavigator } from '@react-navigation/stack';
import { NavigationContainer } from '@react-navigation/native';
const Stack = createStackNavigator();
const App = () => {
return (
<View>
<NavigationContainer>
<StatusBar backgroundColor={Colors.white} barStyle='dark-content'/>
<Stack.Navigator initialRouteName="LoginScreen" screenOptions={{headerShown: false}}>
<Stack.Screen name="LoginScreen" component={LoginScreen} />
<Stack.Screen name="RegisterScreen" component={RegisterScreen}/>
</Stack.Navigator>
</NavigationContainer>
</View>
);
}
export default App;
```
When I add LoginScreen outside Stack.Navigator, the screen works fine.
```
import 'react-native-gesture-handler'
import React from 'react';
import {View, Text, StatusBar } from 'react-native';
import LoginScreen from './src/screens/LoginScreen';
import RegisterScreen from './src/screens/RegisterScreen';
import { Colors } from './src/utils/Colors'
import { createStackNavigator } from '@react-navigation/stack';
import { NavigationContainer } from '@react-navigation/native';
const Stack = createStackNavigator();
const App = () => {
return (
<View>
<NavigationContainer>
<StatusBar backgroundColor={Colors.white} barStyle='dark-content'/>
<Stack.Navigator initialRouteName="LoginScreen" screenOptions={{headerShown: false}}>
<Stack.Screen name="LoginScreen" component={LoginScreen} />
<Stack.Screen name="RegisterScreen" component={RegisterScreen}/>
</Stack.Navigator>
<LoginScreen/>
</NavigationContainer>
</View>
);
}
```
export default App; |
Screen inside Stack.Navigator not visible in React-Native |
Rendering a visualisation of matrix using pygame |
|python|matrix|pygame|rendering| |
I am developing a project where a Java Spring backend server connects with a front-end page through endpoints. At the moment, one of our endpoints sends an image URL to be rendered in the frontend. Currently we see some performance issues on the frontend when loading the images, as the sizes vary quite a lot and some are quite big. To try to solve this, I am checking how to compress the images in the backend so they are easier for the frontend to render. However, during my investigation I also noticed that there are a lot of tools to compress images in the frontend directly.
My question is: should I compress them in the backend (which will involve downloading the image, compressing it into a new file, sending it to the frontend and deleting the file), or should it be done by the frontend?
Thanks a lot for the help! |
Should I compress images in java backend before sending to frontend? |
|java|image|frontend|compression|backend| |
I'm starting to program in C# and decided to install Visual Studio Code as my text editor, with the extension called C# Dev Kit. When running the code (without debugging) with the following button:
https://learn-attachment.microsoft.com/api/attachments/c8dad1ae-b55b-4c70-a3bb-f35e4c869c67?platform=QnA
I noticed that the following command was being executed in the Visual Studio Code terminal:
https://learn-attachment.microsoft.com/api/attachments/df4f1fe9-da30-4dca-b32b-804665e8df85?platform=QnA
**Command: PS C:\\Users\\User\\Desktop\\Calculadora\> & 'c:\\Users\\User.vscode\\extensions\\ms-dotnettools.csharp-2.22.5-win32-x64.debugger\\x86_64\\vsdbg.exe' '--interpreter=vscode' '--connection=2f353c5843504a368c832105e58f278a'**
Apparently, this command runs a debugger called vsdbg.exe located in the C# extension folder. My question is: why does this happen if I'm only running my code, not debugging it? Shouldn't it run a command like `dotnet run` or something similar, to just run it and not debug it? When running the code with the debug button, which is the following:
https://learn-attachment.microsoft.com/api/attachments/05605d3f-866e-4429-833a-4e06821169b1?platform=QnA
I noticed that it executed exactly the same command.
I have tried other languages like C and C++, and it seems that they also always debug, even when you're just running them without debugging.
I'm just trying to understand how this works and whether it always debugs my code. |
Does Visual Studio Code always debug C# code? |
|visual-studio-code| |
I installed these, and the service started:
`sudo yum install docker-ce docker-ce-cli containerd.io docker-compose-plugin` |
I'm trying to secure a page using a JWT token in a session in Next.js. I was looking at similar questions on the forum and saw recommendations to use useEffect to get access, but I still can't retrieve anything.
The code I started with:
```
"use client";
import { useEffect, useState } from "react";
import { useRouter } from 'next/navigation';
import { valJwt } from "@/libs/jwtSec";
import Cookies from "js-cookie";

export default function isAuth(Component: any) {
    return function IsAuth(props: any) {
        const auth = valJwt(sessionStorage.getItem("token_test"));
        useEffect(() => {
            if (!auth) {
                return redirect("/");
            }
        }, []);
        if (!auth) {
            return null;
        }
        return <Component {...props} />;
    };
}
```
Output:
```
ReferenceError: sessionStorage is not defined
```
And the edited version, with which I do not retrieve anything, is this:
```
"use client";
import { useEffect, useState } from "react";
import { useRouter } from 'next/navigation';
import { valJwt } from "@/libs/jwtSec";
import Cookies from "js-cookie";

export default function isAuth(Component: any) {
    return function IsAuth(props: any) {
        const [token, setToken] = useState(null);
        useEffect(() => {
            let tok_ses = sessionStorage.getItem("token_test");
            if (tok_ses) {
                setToken(tok_ses);
            }
        }, []);
        useEffect(() => {
            if (token) {
                const auth = valJwt(sessionStorage.getItem("token_test"));
                if (!auth) {
                    return redirect("/");
                }
            }
        }, [token]);
    };
}
```
I tried with js-cookie, but I can't get access from that function either. How could I get access?
I need to access the content of a sessionStorage from NextJS on the client side |
|react-native|react-navigation-v5| |
Without `docker-compose.yml` (most VPS control panels and open-source PaaS tools like Dokku, CapRover and Easypanel don't support `docker-compose.yml`), I had to find an alternative using the `--env-file` option in a `Makefile`:
```make
.PHONY: build-staging
build-staging: ## Build the staging docker image.
docker build -f docker/staging/Dockerfile -t easypanel-nextjs:0.0.1 .
.PHONY: start-staging
start-staging: ## Start the staging docker container.
docker run --detach --env-file .env.staging easypanel-nextjs:0.0.1
.PHONY: stop-staging
stop-staging: ## Stop the staging docker container.
docker stop $$(docker ps -a -q --filter ancestor=easypanel-nextjs:0.0.1)
```
Now, I just do this in the terminal:
```bash
$ make build-staging
$ make start-staging
$ make stop-staging
```
Obviously, the syntax becomes much cleaner with `docker compose`, but most VPS control panels don't support it, so this is as good as it gets. |
I have an AppImage that I want to install, but I get the following error when trying to run it:
[![enter image description here][1]][1]
When trying to install the libfuse2 package, this is the error I get:
[![enter image description here][2]][2]
[1]: https://i.stack.imgur.com/XoauR.png
[2]: https://i.stack.imgur.com/4z7B3.png
I'd appreciate any help with installing and running the AppImage. |
How to install libfuse2 on Ubuntu 22.04 |
|ubuntu|package|appimage|lts|libfuse| |
I'm using the following IIS Rewrite Rule to block as many bots as possible.
<rule name="BotBlock" stopProcessing="true">
<match url=".*" />
<conditions>
<add input="{HTTP_USER_AGENT}" pattern="^$|bot|crawl|spider" />
</conditions>
<action type="CustomResponse" statusCode="403" statusReason="Forbidden" statusDescription="Forbidden" />
</rule>
This rule blocks all requests with an empty User-Agent string or a User-Agent string that contains `bot`, `crawl` or `spider`. This works great, but it also blocks `googlebot`, which I do not want.
So how do I exclude the `googlebot` string from the above pattern so that Google can still hit the site?
I've tried
`^$|!googlebot|bot|crawl|spider`
`^$|(?!googlebot)|bot|crawl|spider`
`^(?!googlebot)$|bot|crawl|spider`
`^$|(!googlebot)|bot|crawl|spider`
But they either block all User-Agents or still do not allow googlebot. Who has a solution and knows a bit about regex? |
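For what it's worth, a negative lookahead anchored at the start should work: `^$|^(?!.*googlebot).*(bot|crawl|spider)`. Assuming the condition keeps the module's default `ignoreCase`, the pattern's behaviour can be sanity-checked in Python first (a sketch; Python's `re` supports the same lookahead syntax):

```python
import re

# Candidate pattern: block empty UAs and UAs containing bot/crawl/spider,
# unless the UA mentions googlebot anywhere.
pattern = re.compile(r"^$|^(?!.*googlebot).*(bot|crawl|spider)", re.IGNORECASE)

for ua in ["", "bingbot/2.0", "my-crawler", "Googlebot/2.1", "Mozilla/5.0"]:
    verdict = "blocked" if pattern.search(ua) else "allowed"
    print(repr(ua), "->", verdict)
```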
IIS Rewrite modules exclude bots but allow GoogleBot |
|asp.net|regex|iis|url-rewriting|url-rewrite-module| |
Why can I not work with the Product Hunt API in Python? My API key is correct.
```
ph = ProductHunt(api_key='WUY0eh1BU_-BnIOtYcFBSEk2UuwUsFCTOXzbZe0CYFY')
daily_products = ph.get_daily()
```
```
Traceback (most recent call last):
File "C:\Users\eesca\PyProjects\solkit\test.py", line 4, in <module>
daily_products = ph.get_daily()
^^^^^^^^^^^^^^
File "C:\Users\eesca\PyProjects\solkit\.venv\Lib\site-packages\producthunt\producthunt.py", line 41, in get_daily
products = data.get('data', {}).get('posts', {}).get('edges', [])
^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'get'
```
|
You can first solve x^2 = exp(-x^2) numerically. Say the solution is x*.
Then compute the integral of exp(-x^2) between 0 and x*, and subtract the integral of x^2 on the same interval (which is x*^3/3).
Concerning the first integral, you might want to have a look at the erf function implemented in the `math` and `numpy` libraries in Python, for instance.
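Both steps fit in a few lines with the standard library alone; here is a sketch using plain bisection for the crossing point and `math.erf` for the Gaussian integral, via the identity that the integral of exp(-t^2) from 0 to x equals (sqrt(pi)/2) * erf(x):

```python
import math

def bisect_root(f, lo, hi, tol=1e-12):
    # Assumes f(lo) and f(hi) have opposite signs
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Crossing point of x^2 and exp(-x^2): f changes sign on [0, 1]
f = lambda x: x * x - math.exp(-x * x)
x_star = bisect_root(f, 0.0, 1.0)

# Area between the curves on [0, x*]
area = math.sqrt(math.pi) / 2 * math.erf(x_star) - x_star**3 / 3
print(x_star, area)  # x* is about 0.753
```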
I'm trying to add an Azure-managed Databricks service principal and set account-level permissions with Terraform, like this:
[![enter image description here][1]][1]
**Error: cannot create mws permission assignment: Endpoint not found for /2.0/accounts/4f93b050-9cee-4668-8136-7937fe98f18e/workspaces/6491331033656740/permissionassignments/principals/187629890527464**
**terraform:**
```
provider "databricks" {
azure_workspace_resource_id = azurerm_databricks_workspace.xxxxx_workspace.id
host = azurerm_databricks_workspace.xxxxx_workspace.workspace_url
auth_type = "azure-cli"
}
resource "azurerm_databricks_workspace" "xxxxx_workspace" {
name = "ADM-Databricks-xxxx"
resource_group_name = var.resource_group_name
location = var.region
sku = "premium"
custom_parameters {
storage_account_name = "admdatalakedevxxxxx${random_string.naming.result}"
}
}
resource "databricks_service_principal" "principal" {
display_name = "databricks-adm"
allow_cluster_create = true
workspace_access = true
databricks_sql_access = true
}
resource "databricks_group_member" "i-am-admin" {
group_id = data.databricks_group.admins.id
member_id = databricks_service_principal.principal.id
}
resource "databricks_mws_permission_assignment" "add_admin_group" {
workspace_id = azurerm_databricks_workspace.xxxxx_workspace.workspace_id
principal_id = databricks_service_principal.principal.id
permissions = ["ADMIN"]
}
```
[1]: https://i.stack.imgur.com/YkOTH.png |
There is a part of mathematics called the theory of multiple zeta values (MZVs) introduced around 1992. In this theory, we study the properties of parametric nested infinite series (see, e.g., [here](https://www.usna.edu/Users/math/meh/mult.html) for the definition of one particular variant) whose numeric evaluation is, therefore, of high computational complexity. Several days ago, I started using Python to calculate the basic partial sums of specific MZVs.
The following code allows me to calculate the 20th partial sum of the so-called multiple zeta-star value zeta*(10,2,1,1,1) or any other similar instance by changing the inputs `S`, `n`.
```
S = (10, 2, 1, 1, 1)
n = 20
l = len(S)
def F(d, N):
if d == 0:
return 1
else:
return sum(F(d-1, k)/(k**S[-d]) for k in range(1, N+1))
print(F(l, n))
```
Of course, the partial sums of the d-dimensional MZVs essentially depend on their arguments forming a d-tuple (s_1, ..., s_d) and on the upper summation bound `n`.
**My question** is whether we can define a function typed in a user-friendly way `F([s_1, ..., s_d], n)` with the first argument being a list with integer entries and the second argument being the summation bound n. Thus, instead of typing the two inputs separately, `S` and `n`, I am looking for a way to directly type `F([10, 2, 1, 1, 1], 20)` or `F([2, 1], 100)`, etc., with the given delimiters of the two arguments. |
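One straightforward way to get the `F([s_1, ..., s_d], n)` signature asked for above (a sketch, not optimized: the recursion recomputes inner sums heavily) is to close over `S` inside the function:

```python
def F(S, n):
    """Partial sum up to bound n of the multiple zeta-star value with exponent list S."""
    def rec(d, N):
        if d == 0:
            return 1
        return sum(rec(d - 1, k) / k**S[-d] for k in range(1, N + 1))
    return rec(len(S), n)

F([10, 2, 1, 1, 1], 20)
F([2, 1], 100)
```

For large `n` or deep tuples, memoizing `rec` with `functools.lru_cache` would help, since the same `(d, k)` pairs recur many times.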
I'm on Laravel 10, and created an API using sanctum.
In my controller, I want to call the local API to display in a view. The API works fine using postman, but when called in my controller the page doesn't load and times out.
Below is my module route in `module/api/routes/api.php`
```
Route::group(['middleware' => ['auth:sanctum', 'api']], function () {
Route::get('sample', [ApiController::class, 'sample']);
});
```
here is the HTTP code in my controller
```
$response = Illuminate\Support\Facades\Http::withHeaders([
"Accept"=> "application/json",
"Authorization" => "Bearer " . env('SANCTUM_AUTH')
])
->get(route('sample').'/', ['type' => '2']);
dd($response);
```
and this is the error i'm getting.
`cURL error 28: Operation timed out after 30006 milliseconds with 0 bytes received (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) for http://127.0.0.1:8000/sample/?type=2`
How do I call the API locally with sanctum? |
Timing closure problems in FIFO |
|verilog|xilinx|timing|fifo| |
|sql|datetime|google-bigquery|datediff|time-difference| |
- Mismatched data types: the data types of the referencing and referenced columns must be the same.
- Dangling foreign keys: a foreign key that links to a nonexistent table or column.
- Referencing a nonexistent table or column: the table that the foreign key references must already exist before you can define a foreign key that references it.
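For illustration of the first point (PostgreSQL-style syntax, with table and column names assumed for the example), a foreign key only works when both sides share a type:

```
CREATE TABLE "user" (
    id uuid PRIMARY KEY
);

-- "userId" must be uuid as well; declaring it text would raise the
-- "incompatible types" error when the constraint is created
CREATE TABLE session (
    "userId" uuid NOT NULL REFERENCES "user"(id) ON DELETE CASCADE
);
```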
Please ensure that your schema doesn’t have these issues. If you’re still facing problems, could you please provide more details about the error message? That would help me assist you better. |
I am trying to learn Drizzle ORM and integrate it with next/auth.
I am trying to push an updated schema to my DB, (postgres through neon), but am running into the following error:
> Error: foreign key constraint "job-tracker-t3_session_userId_job-tracker-t3_user_id_fk" cannot be implemented
>
> Detail: 'Key columns "userId" and "id" are of incompatible types: text and uuid.',
When I first tried to push the schema, there was a type mismatch, userId was type text while id was uuid. I then changed userId to uuid as well, and got the same problem. I also dropped all the tables and tried to repush, and got the same error.
Here is my schema:
```
export const users = createTable("user", {
id: uuid("id")
.default(sql`gen_random_uuid()`)
.primaryKey(),
username: varchar("name", { length: 255 }).unique(),
email: varchar("email", { length: 255 }).notNull().unique(),
avatar: varchar("image", { length: 255 }),
sub: varchar("sub", { length: 255 }),
displayName: varchar("displayName", { length: 255 }),
date: timestamp("date").default(sql`CURRENT_TIMESTAMP`),
autoArchive: boolean("autoArchive").default(true),
passwordHash: text("passwordHash").notNull(),
});
export const accounts = createTable(
"account",
{
userId: uuid("userId")
.notNull()
.references(() => users.id, { onDelete: "cascade" }),
type: uuid("type").notNull(),
provider: text("provider").notNull(),
providerAccountId: text("providerAccountId").notNull(),
refresh_token: text("refresh_token"),
access_token: text("access_token"),
expires_at: integer("expires_at"),
token_type: text("token_type"),
scope: text("scope"),
id_token: text("id_token"),
session_state: text("session_state"),
},
(account) => ({
compoundKey: primaryKey({
columns: [account.provider, account.providerAccountId],
}),
}),
);
export const sessions = createTable("session", {
sessionToken: text("sessionToken").notNull().primaryKey(),
userId: text("userId")
.notNull()
.references(() => users.id, { onDelete: "cascade" }),
expires: timestamp("expires", { mode: "date" }).notNull(),
});
export const verificationTokens = createTable(
"verificationToken",
{
identifier: text("identifier").notNull(),
token: text("token").notNull(),
expires: timestamp("expires", { mode: "date" }).notNull(),
},
(vt) => ({
compoundKey: primaryKey({ columns: [vt.identifier, vt.token] }),
}),
);
```
When I look at my DB, I see that the tables were all created and userId is of type uuid. Should I disregard the error since the tables were created, or will I get more problems from this in the future?
Thanks! |
null |
It is good practice to write programs in such a way that an error is raised if you specify the wrong number of arguments.
And you can always open the program itself and take a look :) |
I'm trying to populate a 2D array of objects and get a result like this example:
```
Guid guid = Guid.NewGuid();
var data = new[] {
new object[] { 22, "cust1_fname","cust1_lname",guid },
new object[] { 23, "cust2_fname","cust2_lname",guid },
new object[] { 24, "cust3_fname","cust3_lname",guid },
};
```
[Code Example 1](https://i.stack.imgur.com/IyIVn.png)
I tried this way:
[Code Example 2](https://i.stack.imgur.com/nu6Fp.png)
But the objects are not added as direct children under the 2D array as in the first example |
As of spaCy version 3.7
```py
import spacy
nlp = spacy.load("en_core_web_sm")
ner = nlp.get_pipe("ner")
print(ner.labels)
```
Outputs:
```
('CARDINAL', 'DATE', 'EVENT', 'FAC', 'GPE', 'LANGUAGE', 'LAW', 'LOC', 'MONEY', 'NORP', 'ORDINAL', 'ORG', 'PERCENT', 'PERSON', 'PRODUCT', 'QUANTITY', 'TIME', 'WORK_OF_ART')
```
|
Install `react-native-safe-area-context` using `npm install react-native-safe-area-context`. Wrap your code in `<SafeAreaProvider>` and then `<SafeAreaView>`. This time, `<SafeAreaView>` is imported from `react-native-safe-area-context`. See if this works.
Sample code:
import React from "react";
import { Text, View } from "react-native";
import { SafeAreaProvider, SafeAreaView } from "react-native-safe-area-context";
const App = () => {
return (
<SafeAreaProvider>
<SafeAreaView>
<View>
<Text>Hey</Text>
</View>
</SafeAreaView>
</SafeAreaProvider>
);
};
export default App;
|
Generally speaking, if the program isn't in any way interactive and relies entirely on arguments from the command line, then the best way to learn its argument criteria is to ask it. Typically the argument `--help` will produce a list of available options and how to use them, like so: `ls --help`, or `python somescript.py --help`.
The exact option required to get help may vary from program to program. On MS-DOS `/?` was common, and some older Unix programs used `-?`, but generally speaking `--help` has been a standard convention for quite some time, and you should try that first before looking for other options.
By the way, if you want to write Python programs that output their own help information, there are handy libraries such as `argparse` that handle most of the hard work for you. The `argparse` library (a standard part of Python) accepts a specification of the arguments your Python script will accept, and it will validate the user's command-line arguments for you to check that they meet your requirements, and even print out the `--help` information or error messages as necessary. |
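A minimal `argparse` setup along the lines just described might look like this (the script's argument names are invented for illustration; passing an explicit list to `parse_args` is only for demonstration, in a real script you'd call it with no arguments so it reads `sys.argv`):

```python
import argparse

parser = argparse.ArgumentParser(description="Process a data file.")
parser.add_argument("filename", help="path of the input file")
parser.add_argument("--verbose", action="store_true", help="print extra detail")

# running the script with --help prints a usage message generated
# automatically from the specification above
args = parser.parse_args(["data.txt", "--verbose"])
```

With no valid `filename` supplied, `argparse` prints an error and the usage line for you, which is exactly the "raise an error on wrong arguments" behavior mentioned earlier.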
Just go to the Android Device Manager and edit your device: uncheck the device frame checkbox, and that will solve your problem. |
I am attempting to create a flatpak for an old development version of Wine, 6.13, but have been running into problems.
I first downloaded the flathub manifest repository for Wine as it existed for version 6.0.2:
https://github.com/flathub/org.winehq.Wine/tree/a954b18213547d4acaeb7b6e7f5157205fed45b4
Then, I renamed the yml and xml files as follows:
`org.oldbuild.wine-6-13.yml`
`org.oldbuild.wine-6-13.appdata.xml`
In `org.oldbuild.wine-6-13.yml`, I changed the ID and the reference to the appdata file, added a line giving host filesystem permissions, and changed the wine version for the download:
`id: org.oldbuild.wine-6-13`
` - --filesystem=host`
```
url: https://dl.winehq.org/wine/source/6.x/wine-6.13.tar.xz
sha256: e03a21a011d45d2ae9f222040fb7690b97156376e7431f861f20073eaf24f28a
```
` path: org.oldbuild.wine-6-13.appdata.xml`
Then, in `org.oldbuild.wine-6-13.appdata.xml`, I changed the ID and release version lines:
` <id>org.oldbuild.wine-6-13</id>`
` <release version="6.13" date="2021-07-20"/>`
I also added the following `modules/spirv-headers.json` file:
```
{
"name": "spirv",
"buildsystem": "cmake-ninja",
"cleanup": [
"/bin",
"/include",
"/lib/cmake",
"/lib/pkgconfig",
"/share/man",
"*.so"
],
"sources": [
{
"type": "archive",
"url": "https://github.com/KhronosGroup/SPIRV-Headers/archive/refs/tags/sdk-1.3.236.0.tar.gz",
"sha256": "4d74c685fdd74469eba7c224dd671a0cb27df45fc9aa43cdd90e53bd4f2b2b78"
}
]
}
```
After doing the above, I tried building it with: `flatpak run org.flatpak.Builder wine-6-13 org.oldbuild.wine-6-13.yml`
After everything successfully compiled, it gave an error when composing the metadata:
```
Composing metadata...
Run failed, some data was ignored.
Errors were raised during this compose run:
general
E: filters-but-no-output
org.oldbuild.wine-6-13
E: no-valid-category
Refer to the generated issue report data for details on the individual problems.
Error: ERROR: appstreamcli compose failed: Child process exited with code 1
```
At this point, I went into the `wine-6-13` directory and tried manually creating the `metadata`, `metadata.debuginfo`, `metadata.org.winehq.Wine.gecko`, and `metadata.org.winehq.Wine.mono` files. I used a 6.0.2 build for reference, changing things as appropriate.
`name=org.oldbuild.wine-6-13`
`[Extension org.oldbuild.wine-6-13.Debug]`
`built-extensions=org.oldbuild.wine-6-13.Debug;org.winehq.Wine.gecko;org.winehq.Wine.mono;`
```
[Runtime]
name=name=org.oldbuild.wine-6-13.Debug
[ExtensionOf]
ref=app/org.oldbuild.wine-6-13/x86_64/stable-21.08
```
```
[Runtime]
name=org.winehq.Wine.gecko
[ExtensionOf]
ref=app/org.oldbuild.wine-6-13/x86_64/stable-23.08
```
```
[Runtime]
name=org.winehq.Wine.mono
[ExtensionOf]
ref=app/org.oldbuild.wine-6-13/x86_64/stable-21.08
```
After finishing the above, I ran `flatpak build-finish wine-6-13`, and then manually placed a `wine-6-13/export/share/metainfo/org.oldbuild.wine-6-13.metainfo.xml` file with the same contents as `org.oldbuild.wine-6-13.appdata.xml`.
From here, I ran the commands to finish creating a flatpak file and install it:
`flatpak build-export export-6-13 wine-6-13`
`flatpak build-bundle export-6-13 org.oldbuild.wine-6-13.flatpak org.oldbuild.wine-6-13 --runtime-repo=https://flathub.org/repo/flathub.flatpakrepo`
`flatpak install org.oldbuild.wine-6-13.flatpak`
I do not know why the metadata failed to build, but the manually completed flatpak fails to launch wine with the following error.
```
my_bash_prompt$ flatpak run org.oldbuild.wine-6-13 some_windows_program.exe
bwrap: execvp wine: No such file or directory
```
Does anyone know what's causing this problem? Thanks in advance! |
I'm trying to write a curl request in Python to connect to a Swagger API and download the data to a CSV.
This is the script:
# Import necessary modules
import pycurl
import urllib.parse
query = """
SELECT min({cart_pai.pk}) FROM {cart as carrinho JOIN cart AS cart_pai ON {carrinho.parent} = {cart_pai.pk}}
"""
safe_string = urllib.parse.quote_plus(query)
url = """
https://api.cokecxf-commercec1-p1-public.model-t.cc.commerce.ondemand.com/clarowebservices/v2/claro/flexiblesearch/export/search/getAsCsv?
"""
# Define the headers
headers = ['Authorization: Bearer 4DmQkomKs0z845705JIlfW-hhNg']
# Initialize a Curl object
c = pycurl.Curl()
# Set the URL to send the request to
c.setopt(c.URL, url + safe_string)
# Set the HTTPHEADER option with the headers
c.setopt(pycurl.HTTPHEADER, headers)
# Perform the request
c.perform()
# Close the Curl object to free system resources
c.close() |
cURL in Python + bearer token + SQL query |
|python| |
null |
Very new to NetSuite scripting. I am writing a very basic workflow action script to take the ID of the created-from transaction (which is already referenced on the current transaction form) and populate that ID into the new transaction's ID field. But it keeps populating the literal text "createdFromId" instead of the actual ID.
```
function wf_populateField() {
    nlapiSetFieldValue('tranid','createdFromId');
}
```
|
SuiteScript 1 Issue - Unable to populate Created From ID on another field |
|javascript|netsuite|suitescript|suitescript2.0|suitescript1.0| |
null |
So I want to make my taskbar pop up whenever my browser is in full screen. Say I am watching a YT video: I hover in the taskbar region (bottom of the screen) for a few seconds and the taskbar pops up. I am using the auto-hide taskbar, so normally it is hidden, but whenever I hover at the bottom, it pops up. I would like this to work in full screen as well.
I tried ChatGPT and it suggested that I create and run a PowerShell script. It gave me this script:
```
# Function to show the taskbar
Function Show-Taskbar {
param([int]$delay)
# Send the Windows key to show the taskbar
$wshell = New-Object -ComObject WScript.Shell
$wshell.SendKeys('^{ESC}')
# Wait for the taskbar to appear
Start-Sleep -Milliseconds $delay
# Minimize any fullscreen application if it's active
$wshell.SendKeys('%{SPACE}n')
}
# Call the function with a delay of 2000 milliseconds (2 seconds)
Show-Taskbar -delay 2000
```
It didn't work. Can someone tell me how I can achieve this? I want to pop up the taskbar, so how can I do it?
Angular with NX:

1. Create a new file named `_redirects`.
2. Add the line below to the `_redirects` file:

        /* /index.html 200

3. Open `project.json` (e.g. `apps\my-app\project.json`).
4. Add the path under `targets.build.options.assets`:

        "assets": [
            "apps/my-app/src/_redirects"
        ]
|
```
java.lang.IllegalArgumentException: Expected list containing Map, List, Boolean, Integer, Long, Double, String and Date
```
This error occurred when I used Spring Security to query the database for a certain user.
What errors may be causing the problem? |
Securing routes with sessionStorage in NextJS |
|session|next.js| |
null |
I stumbled on the expression `x = [m]*n`, and running it in the interpreter I can see that the code allocates an n-element list initialized with m. But I can't find a description of this type of code online. What is this called?
>>> [0] * 7
[0, 0, 0, 0, 0, 0, 0]
|
null |
I'm trying to secure a page using a JWT token in a session in Next.js. I was looking at similar questions on the forum, and I saw that they recommended using `useEffect` to get access, but I still can't retrieve anything.
The code I started with
```
"use client";
import { useEffect, useState } from "react";
import { useRouter } from 'next/navigation'
import { valJwt } from "@/libs/jwtSec";
import Cookies from "js-cookie";

export default function isAuth(Component: any) {
  return function IsAuth(props: any) {
    const auth = valJwt(sessionStorage.getItem("token_test"));

    useEffect(() => {
      if (!auth) {
        return redirect("/");
      }
    }, []);

    if (!auth) {
      return null;
    }

    return <Component {...props} />;
  };
}
```
Output:
```
ReferenceError: sessionStorage is not defined
```
And the version with which I don't retrieve anything is this:
```
"use client";
import { useEffect, useState } from "react";
import { useRouter } from 'next/navigation'
import { valJwt } from "@/libs/jwtSec";
import Cookies from "js-cookie";

export default function isAuth(Component: any) {
  return function IsAuth(props: any) {
    const [token, setToken] = useState(null);

    useEffect(() => {
      let tok_ses = sessionStorage.getItem("token_test");
      if (tok_ses) {
        setToken(tok_ses);
      }
    }, []);

    useEffect(() => {
      if (token) {
        const auth = valJwt(sessionStorage.getItem("token_test"));
        if (!auth) {
          return redirect("/");
        }
      }
    }, [token]);
  };
}
```
I tried with js-cookie, but I can't get access from the same function either. How could I get access?
I need to access the content of a sessionStorage from NextJS on the client side |
I have encountered an issue when rendering a visualization of a 2D matrix using Pygame (I am aware this is not the best library for the job ... but anyway). The issue arises when I attempt to render each node in the matrix as a rectangle. Each node is an instance of the Node class and has x1, y1, x2, and y2 values derived from its position in the array: x1 and y1 are the coordinates of the first point of the rectangle, and x2 and y2 are the coordinates of the second point. When I use lines to represent the nodes, everything seems to render as I expected. However, when I use rectangles, the rectangles clump together; I noticed these are the rectangles representing the nodes after the 0th row and col positions in the 2D list. Does anyone know why this is? I have provided script A (lines) and script B (rectangles) with images of the output for review. [Output for script A][1], [Output for script B][2]
Script A (lines)
```
import pygame
import time
pygame.init()
class Node:
count = 0
def __init__(self, row, col):
Node.count += 1
self.id = Node.count
self.row = row
self.col = col
self.x1 = col * 10
self.y1 = row * 10
self.x2 = col * 10 + 5
self.y2 = row * 10 + 5
def display(matrix):
for i in matrix:
print(i)
matrix = [[Node(i, j) for j in range(10)] for i in range(10)]
win = pygame.display.set_mode((500, 500))
win.fill((0, 0, 0,))
for i in range(len(matrix)):
for node in matrix[i]:
pygame.draw.line(win, (0, 0, 255), (node.x1, node.y1), (node.x2, node.y2), width=1 )
pygame.display.update()
time.sleep(0.1)
```
Script B (rectangles)
```
import pygame
import time
pygame.init()
class Node:
count = 0
def __init__(self, row, col):
Node.count += 1
self.id = Node.count
self.row = row
self.col = col
self.x1 = col * 10
self.y1 = row * 10
self.x2 = col * 10 + 5
self.y2 = row * 10 + 5
def display(matrix):
for i in matrix:
print(i)
matrix = [[Node(i, j) for j in range(10)] for i in range(10)]
win = pygame.display.set_mode((500, 500))
win.fill((0, 0, 0,))
for i in range(len(matrix)):
for node in matrix[i]:
pygame.draw.rect(win, (0, 0, 255), (node.x1, node.y1, node.x2, node.y2))
pygame.display.update()
time.sleep(0.1)
time.sleep(10)
```
The fact that the lines render in the correct positions leaves me puzzled as to why the rectangles are not as they are both using the same coordinates.
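For what it's worth, one likely cause (an educated guess on my part): `pygame.draw.rect` interprets its rect argument as `(left, top, width, height)` rather than as two corner points, so passing `(x1, y1, x2, y2)` produces rectangles whose size grows with their grid position. The arithmetic, using the coordinate scheme from the scripts (plain Python, no Pygame needed):

```python
# a node at row 2, col 3 under the scheme in the scripts
x1, y1 = 3 * 10, 2 * 10        # first corner: (30, 20)
x2, y2 = x1 + 5, y1 + 5        # intended second corner: (35, 25)

# interpreted as (left, top, width, height), the tuple (x1, y1, x2, y2)
# draws a 35x25 rectangle anchored at (30, 20) -- hence the clumping
width, height = x2 - x1, y2 - y1   # the 5x5 size that should be passed instead
```

Lines are unaffected because `pygame.draw.line` genuinely takes two endpoints, which would explain why script A looks correct while script B does not.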
[1]: https://i.stack.imgur.com/B3RlF.png
[2]: https://i.stack.imgur.com/f3Vrr.png |
Why does the Product Hunt API not work with Python? |
|python|django|api|web|pip| |
null |
<!-- language-all: sh -->
> Why is PowerShell failing when redirecting the output?
The reason is that `python` (which underlies `html2text`) - like many other Windows CLIs (console applications) - modifies its output behavior based on whether the output target is a *console (terminal)* or is _redirected_:
* In the *former* case, such CLIs use the _Unicode_ version of the WinAPI [`WriteConsole`](https://learn.microsoft.com/en-us/windows/console/writeconsole) function, meaning that _any_ character from the global [Unicode](https://en.wikipedia.org/wiki/Unicode) alphabet is accepted.
* This means that character-encoding problems do _not_ surface in this case, and the output usually _prints_ properly to the console (terminal) - that said, *exotic* Unicode characters may not print properly, necessitating switching to a different _font_.
* In the *latter* case, CLIs must _encode_ their output, and are expected to respect the legacy Windows _OEM code page_ associated with the current console window, as reflected in the output from `chcp.com` and - by default - in `[Console]::OutputEncoding` inside a PowerShell session:
* E.g., the OEM code page is [`437`](https://en.wikipedia.org/wiki/Code_page_437) on US-English systems, which (for non-[CJK](https://en.wikipedia.org/wiki/CJK_characters) locales) is a _single_-byte encoding limited to _256_ characters in total, so text to output may well contain characters that cannot be represented in that code page.
* Notably, Python exhibits *nonstandard* behavior by default, by encoding redirected output based on the _ANSI_ code page (e.g, [`1252`](https://en.wikipedia.org/wiki/Windows-1252) on US-English systems) rather than the _OEM_ code page (both of which are determined by the system's active legacy _system locale_, aka _language for non-Unicode programs_). However, like the OEM code page (in non-CJK locales), ANSI code pages too are limited to 256 characters, and trying to encode a character outside that set results in the error you saw.
* To avoid this limitation, modern CLIs increasingly encode their output using UTF-8 instead, either by default (e.g., Node.js), or on an _opt-in_ basis (e.g., Python).
---
In the context of PowerShell, an external program's (stdout) output is considered _redirected_ (*not* targeting the console/terminal) in one of the following cases:
* capturing external-program output in a variable (`$text = wget ...`, as in your case), or using it as part of an _expression_ (e.g., `"foo" + (wget ...)`)
* _relaying_ external-program output _via the pipeline_ (e.g., `wget ... | ...`)
* in _Windows PowerShell_ and [_PowerShell (Core) 7_](https://github.com/PowerShell/PowerShell/blob/master/README.md) _up to v7.3.x_: also with `>`, the [redirection operator](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Redirection); in *v7.4+*, using `>` directly on an external-program call now _passes the raw bytes through_ to the target file.
That is, in all those cases _decoding_ the external program-output comes into play, into _.NET strings_, based on the encoding stored in `[Console]::OutputEncoding`.
In the case at hand, this stage wasn't even reached, because Python itself wasn't able to _encode_ its output.
---
The **solution** in your case is therefore two-pronged, as suggested by [zett42](https://stackoverflow.com/users/7571258/zett42):
* Make sure that `html2text` outputs *UTF-8*-encoded text.
* `html2text` is a Python-based script/executable, so (temporarily) set `$env:PYTHONUTF8=1` before invoking it.
* Make sure that PowerShell interprets the output as UTF-8:
* To that end, (temporarily) set `[Console]::OutputEncoding` to `[System.Text.UTF8Encoding]::new()`
To put it all together:
```
$text =
& {
$prevEnv = $env:PYTHONUTF8
$env:PYTHONUTF8 = 1
$prevEnc = [Console]::OutputEncoding
[Console]::OutputEncoding = [System.Text.UTF8Encoding]::new()
try {
wget.exe -O - https://www.voidtools.com/forum/viewtopic.php?p=36618#p36618 |
html2text
} finally {
$env:PYTHONUTF8 = $prevEnv
[Console]::OutputEncoding = $prevEnc
}
}
```
Note:
* When you pipe data _from PowerShell_ TO an external program (not the case here), PowerShell uses the [**`$OutputEncoding`** preference variable](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Preference_Variables#outputencoding) to encode it, in which case you may have to (temporarily) change `$OutputEncoding` too; it defaults to ASCII(!) in _Windows PowerShell_, and to (BOM-less) UTF-8 in _PowerShell (Core) 7_ - which is problematic in both cases, as it doesn't match the default value of `[Console]::OutputEncoding`.
For instance, to both send data as UTF-8 and to decode it as such, you can (temporarily) set:
$OutputEncoding = [Console]::OutputEncoding = [System.Text.UTF8Encoding]::new()
* It is possible to configure a given system to use UTF-8 _system-wide_ by default, which would make things just work without extra effort in this case (though in _Windows PowerShell_ you may situationally still have to set `$OutputEncoding`); however, this configuration, which sets the system locale in a way that sets both the OEM and the ANSI code page to `65001` (UTF-8), has _far-reaching consequences_ that may break existing scripts - see [this answer](https://stackoverflow.com/a/57134096/45375).
* [GitHub issue #7233](https://github.com/PowerShell/PowerShell/issues/7233) is a much lower-impact suggestion to make _PowerShell (Core) 7_ console windows default to UTF-8, without the need to change the system locale (active code pages).
|
I am using this code with `UISearchBar`, but you can use it with `UISearchController` as well.
let searchBar = UISearchBar()
searchBar.sizeToFit()
searchBar.placeholder = "Search"
navigationItem.titleView = searchBar
if let textfield = searchBar.value(forKey: "searchField") as? UITextField {
textfield.textColor = UIColor.blue
if let backgroundview = textfield.subviews.first {
// Background color
backgroundview.backgroundColor = UIColor.white
// Rounded corner
backgroundview.layer.cornerRadius = 14;
backgroundview.clipsToBounds = true;
}
}
|