I have set up a postgres:13.3 Docker container with scram-sha-256 authentication.
Initially, I ran:
```
docker run -d --name my-postgres13 -p 5432:5432 -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=fbp123 -e POSTGRES_DB=mydb -e POSTGRES_HOST_AUTH_METHOD=scram-sha-256 -v pgdata13:/var/lib/postgresql/data postgres:13.3
```
postgresql.conf:
```
password_encryption = scram-sha-256
```
pg_hba.conf:
```
hostnossl all all 0.0.0.0/0 scram-sha-256
local all all scram-sha-256
```
After making these changes and restarting the container, I created a new user fbp2 with the password 'fbp123'. The password appears to be stored as SCRAM in the pg_authid table:
```
16386 | fbp2 | t | t | f | f | t | f | f | -1 | SCRAM-SHA-256$4096:yw+jyaEzlvlOjZnc/L/flA==$tqPlJIDXv9zueaGd8KpQf11N82IGgAOsK4
Lhb7lPhi4=:+mCXFKb2y5PG6ycIKCz7xaY8U5MNLnkzlPZK8pt3to0= |
```
I use the original plain-text password from within my Java app to connect:
```
hikariConfig = new HikariConfig();
hikariConfig.setUsername("fbp2");
hikariConfig.setPassword("fbp123");
hikariConfig.setJdbcUrl("jdbc:postgresql://%s:%s/%s".formatted("localhost", 5432, "mydb"));
HikariDataSource dataSource = new HikariDataSource(hikariConfig);
return dataSource.getConnection();
```
From logs, this url is used: ``` jdbc:postgresql://localhost:5432/mydb ```
The issue is that authentication fails, even though I use the same plain-text password that I set on the Postgres server:
```
2024-03-30 14:38:03.372 DEBUG 22440 [ main] c.z.h.u.DriverDataSource : Loaded driver with class name org.postgresql.Driver for jdbcUrl=jdbc:postgresql://localhost:5432/mydb
2024-03-30 14:38:03.601 DEBUG 22440 [ main] c.z.h.p.PoolBase : HikariPool-1 - Failed to create/setup connection: FATAL: password authentication failed for user "fbp2"
2024-03-30 14:38:03.601 DEBUG 22440 [ main] c.z.h.p.HikariPool : HikariPool-1 - Cannot acquire connection from data source
org.postgresql.util.PSQLException: FATAL: password authentication failed for user "fbp2"
at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:693)
at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:203)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:258)
```
Note that if I revert to "trust" and send no password, I get this:
```
org.postgresql.util.PSQLException: The server requested SCRAM-based authentication, but no password was provided.
```
So it seems the server only accepts SCRAM. I have also tried md5 without success.
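One thing worth checking (an assumption about the setup, not a confirmed cause): the `POSTGRES_*` environment variables, including `POSTGRES_HOST_AUTH_METHOD`, only take effect when the data directory is first initialized. If the `pgdata13` volume already existed from an earlier run, the auth settings from that first init may still apply, and re-initializing the volume would pick up the scram settings:
```
docker stop my-postgres13 && docker rm my-postgres13
docker volume rm pgdata13   # WARNING: destroys all existing data in the volume
docker run -d --name my-postgres13 -p 5432:5432 -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=fbp123 -e POSTGRES_DB=mydb -e POSTGRES_HOST_AUTH_METHOD=scram-sha-256 -v pgdata13:/var/lib/postgresql/data postgres:13.3
```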
----
Some relevant dependencies:
```
<dependency>
<groupId>org.postgresql</groupId>
<artifactId>postgresql</artifactId>
<version>42.3.0</version>
</dependency>
<dependency>
<groupId>com.zaxxer</groupId>
<artifactId>HikariCP</artifactId>
<version>5.1.0</version>
</dependency>
```
My docker desktop runs on windows 11.
I use Oracle OpenJDK 20.0.1
I can connect to mydb with fbp2 user with no problem via psql admin tool. |
In GLSL I iterate over a float buffer, that contains the coordinates and a couple properties of elements to render. I was curious about shaders (don't have much experience with them) and wanted to obsessively optimise it. When looking at how it gets compiled (I'm using WebGL+Spector.js), I notice that in the loop where I access the array it clamps the accesses to the size of the buffer.
I understand this is heavily machine dependent, and is done to ensure no out of bounds accesses, but is there not a way to avoid these checks or guarantee to the compiler out of bounds accesses aren't possible (eg. adding a condition to the loop)? I'm mostly curious about the (small) performance impact these operations have (6 `clamp`s, and int + float casts per iteration!)
Any other potential optimisation tips are welcome, I'm very new to this and super interested in it. I thought about maybe passing the data as an array of `vec3`s instead to reduce array accesses to 2/element. Not sure if it would improve things, though!
### Original code:
```c
// maxRelevantIndex is a uniform, elementSize is a const = 6, elements is of size 60
for (int i = 0; i < maxRelevantIndex; i += elementSize) {
float x1 = elements[i];
float y1 = elements[i + 1];
float x2 = elements[i + 2];
float y2 = elements[i + 3];
float br = elements[i + 4];
float color = elements[i + 5];
...
}
```
### Decompiled:
```glsl
for (int _uAV = 0;
(_uAV < _uU);
(_uAV += 6)) {
float _uk = _uV[int(clamp(float(_uAV), 0.0, 59.0))];
float _ul = _uV[int(clamp(float((_uAV + 1)), 0.0, 59.0))];
float _um = _uV[int(clamp(float((_uAV + 2)), 0.0, 59.0))];
float _un = _uV[int(clamp(float((_uAV + 3)), 0.0, 59.0))];
float _uAW = _uV[int(clamp(float((_uAV + 4)), 0.0, 59.0))];
float _uAC = _uV[int(clamp(float((_uAV + 5)), 0.0, 59.0))];
...
}
```
#### Update/Attempts:
I've tried both changing the indexing to use `uint`s, and adding `&& i < 60` in the loop condition, neither got rid of the clamp/cast. I did end up converting my data to an array of `vec4`s which gave a small perf improvement though, although I don't know if it's due to the avoided clamp/casts, or simply due to less array lookups being used. |
Usually an extra variable adds a small overhead, but in your case Chrome delivers the same performance either way. If a variable is reused, that can actually improve performance. So the rule could be: don't create unnecessary variables.
Also note that the JS engine can optimize code during compilation, so in reality both of your examples could be exactly the same after being compiled.
Creating a variable can be considered a write operation, but write operations are anything that mutates data or creates new data. In your case you join an array, and that is a fairly big write operation that stores the result as a temporary string; assigning this string to a real variable adds almost nothing to that already large overhead. The fewer write operations, the faster the code. But that's about constant factors in an algorithm. I suggest learning about time complexity and big O notation.
```
` Chrome/123
---------------------------------------------------------------------------------------
> n=10 | n=100 | n=1000 | n=10000
without vars β 1.00x x100k 565 | 1.00x x10k 594 | β 1.00x x1k 629 | β 1.00x x10 125
with vars 1.02x x100k 577 | β 1.00x x10k 592 | 1.01x x1k 635 | 1.03x x10 129
---------------------------------------------------------------------------------------
https://github.com/silentmantra/benchmark `
```
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
let $length = 10;
const big_strings = [];
const palette = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
for (let i = 0; i < $length; i++) {
let big_string = "";
for (let i = 0; i < 100; i++) {
big_string += palette[Math.floor(Math.random() * palette.length)];
}
big_string.charCodeAt(0);
big_strings.push(big_string);
}
let $input = big_strings;
var arrayStringsAreEqual = function(word1, word2) {
let a = word1.join('');
let b = word2.join('');
if (a == b) {
return true;
} else {
return false;
}
};
var arrayStringsAreEqual2 = function(word1, word2) {
return word1.join('') == word2.join('');
};
// @benchmark with vars
arrayStringsAreEqual($input, $input);
// @benchmark without vars
arrayStringsAreEqual2($input, $input);
/*@skip*/ fetch('https://cdn.jsdelivr.net/gh/silentmantra/benchmark/loader.js').then(r => r.text().then(eval));
<!-- end snippet -->
|
The data returned from `useSearch` needs to be an array in order for you to map it, so in your `useSearch` hook you need to only get the `value` array if you want to only render the results.
```javascript
// in useSearch hook
// we only take the array of webpages value
// looking from your console.log screenshot
setData(response.data.webPages.value);
```
You are trying to map your `searchData`, which is a string. In order to map the actual API data you need to use the data returned from your `useSearch` hook:
```javascript
// this is the API data
// no need to destruct
const data = useSearch({ searchTerm: searchData });
// replace searchData with data
{data ? (
<ul>
{data.map((i) => (
<TileItems
key={i.id}
image={i.thumbnailUrl}
title={i.name}
description={i.snippet}
website={i.url}
/>
))}
</ul>
) : (
<p>Loading...</p>
)}
```
Lemme know if this fixes your problem
|
I have been learning Python for about a year. I need to understand how to use Unicode glyphs, in particular this set: https://unicode.org/charts/PDF/U11D60.pdf
I tried this, and when I tried to print it, I didn't get the expected glyph:
```
a = '\u11D72'
print(a)
```
I'm using vscode on a mac. What am I missing? I want to use these glyphs for an encryption program (project) using a translate table. Something like this
```
mystr = 'String to be translated'
trans_dict = {'A':'\u11D72','B':'\u11D73','C':'\u11D74',and so on...}
mytable = mystr.maketrans(trans_dict)
mystr_translated = mystr.translate(mytable)
```
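A note on why the first snippet misprints (this is standard Python escape behavior): `\u` consumes exactly four hex digits, so `'\u11D72'` is parsed as U+11D7 followed by the literal character `'2'`. Code points above U+FFFF, like those in this chart, need the eight-digit `\U` escape or `chr()`. A runnable sketch of the translate-table idea:

```python
# '\u11D72' is actually '\u11D7' + '2' (two characters), not U+11D72
assert len('\u11D72') == 2

# Code points beyond U+FFFF need \U with exactly eight hex digits
a = '\U00011D72'          # a Gunjala Gondi letter; renders only with a supporting font
assert ord(a) == 0x11D72  # equivalently: a == chr(0x11D72)

mystr = 'ABC'
trans_dict = {'A': '\U00011D72', 'B': '\U00011D73', 'C': '\U00011D74'}
mytable = str.maketrans(trans_dict)
mystr_translated = mystr.translate(mytable)
print(mystr_translated)
```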
|
Question about unicode assignments in python |
|python|macos|python-3.11| |
Call [`Language.Haskell.TH.Syntax.addDependentFile`](https://hackage.haskell.org/package/template-haskell-2.8.0.0/docs/Language-Haskell-TH-Syntax.html#v:addDependentFile), then [wait for this cabal issue to get fixed](https://github.com/haskell/cabal/issues/4746).
As a workaround, you can add a comment to some relevant Haskell source file(s) to get them to rebuild (and re-execute their TH). Don't forget to delete it again next build and certainly before you check your source in! |
I'm following Nicholas Renotte's tutorial "Build a Deep Facial Recognition App" (Python), but at part 4 I ran into a problem. Here is the code:
```python
# Siamese L1 Distance class
class L1Dist(Layer):
    # Init method - inheritance
    def __init__(self, **kwargs):
        super().__init__()

    # Magic happens here - similarity calculation
    def call(self, input_embedding, validation_embedding):
        return tf.math.abs(input_embedding - validation_embedding)
```
TypeError: unsupported operand type(s) for -: 'list' and 'list'
In the video everything is fine, but in my case the function can't do the subtraction (input_embedding - validation_embedding).
Arguments received by L1Dist.call():
args=(['<KerasTensor shape=(None, 4096), dtype=float32, sparse=False, name=keras_tensor_18>'], ['<KerasTensor shape=(None, 4096), dtype=float32, sparse=False, name=keras_tensor_19>'])
Tried to modify:
```python
def call(self, input_embedding, validation_embedding):
    input_embedding = tf.convert_to_tensor(input_embedding)
    validation_embedding = tf.convert_to_tensor(validation_embedding)
    input_embedding = tf.squeeze(input_embedding, axis=0)  # Remove potential first dimension
    validation_embedding = tf.squeeze(validation_embedding, axis=0)
    return tf.math.abs(input_embedding - validation_embedding)
```
But it failed:
line 108, in convert_to_eager_tensor
return ops.EagerTensor(value, ctx.device_name, dtype)
ValueError: TypeError: object of type 'KerasTensor' has no len()
I also tried `tf.keras.layers.Subtract()([input_embedding, validation_embedding])`, but got:
AttributeError: Exception encountered when calling Subtract.call().
'list' object has no attribute 'shape'
And with `keras.ops.subtract(input_embedding, validation_embedding)` I faced:
ValueError: Invalid dtype: list |
WebSecurityConfigurerAdapter was deprecated in Spring Security 5.7 and removed in 6.0.
So we can rewrite the above code as:
@Configuration
@EnableWebSecurity
public class SecurityConfig {

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http.authorizeHttpRequests(auth -> auth
                // more specific patterns must come first, or "/**" swallows them
                .requestMatchers("/admin/**").hasRole("ADMIN")
                .requestMatchers("/**").authenticated())
            .formLogin(login -> login
                .loginPage("/login")
                .defaultSuccessUrl("/inicio")
                .permitAll())
            .logout(Customizer.withDefaults());
        return http.build();
    }
}

@Configuration
@Order(Ordered.HIGHEST_PRECEDENCE)
public class BasicSecurityConfig {

    @Bean
    public SecurityFilterChain apiFilterChain(HttpSecurity http) throws Exception {
        http.securityMatcher("/api/**")
            .csrf(csrf -> csrf.disable())
            .authorizeHttpRequests(auth -> auth.anyRequest().authenticated())
            .httpBasic(Customizer.withDefaults());
        return http.build();
    }
}
|
error CS0029: Cannot implicitly convert type 'System.Threading.Tasks.Task<Firebase.Auth.AuthResult>' to 'Firebase.Auth.FirebaseUser'
public void OnClickSignIn()
{
FirebaseAuth auth = FirebaseAuth.DefaultInstance;
Debug.Log("Clicked SignIn");
string emailText = emailSignin.GetComponent<TMP_InputField>().text;
string passwordText = passwordSignin.GetComponent<TMP_InputField>().text;
auth.SignInWithEmailAndPasswordAsync(emailText,
passwordText).ContinueWithOnMainThread(task =>
{
if (task.IsCanceled)
{
Debug.Log("SignIn Canceled");
Debug.LogError("SignInWithEmailAndPasswordAsync was canceled.");
return;
}
if (task.IsFaulted)
{
Debug.Log("SignIn Failed");
Debug.LogError("SignInWithEmailAndPasswordAsync encountered an error: " + task.Exception);
signinFailNotification.OpenNotification();
return;
}
FirebaseUser newUser = task;
if (newUser != null)
{
signinSuccessNotification.OpenNotification();
Debug.LogFormat("User signed in successfully: {0} ({1})",
newUser.DisplayName, newUser.UserId);
}
});
}
public void OnClickSignUp()
{
FirebaseAuth auth = FirebaseAuth.DefaultInstance;
Debug.Log("Clicked SignUp");
string emailText = emailSignup.GetComponent<TMP_InputField>().text;
string passwordText = passwordSignup.GetComponent<TMP_InputField>().text;
auth.CreateUserWithEmailAndPasswordAsync(emailText,
passwordText)
.ContinueWithOnMainThread(task =>
{
if (task.IsCanceled)
{
Debug.Log("Signup Canceled");
Debug.LogError("CreateUserWithEmailAndPasswordAsync was canceled.");
return;
}
if (task.IsFaulted)
{
Debug.Log("Signup Failed");
Debug.LogError("CreateUserWithEmailAndPasswordAsync encountered an error: " + task.Exception);
signupFailNotification.OpenNotification();
return;
}
// Firebase user has been created.
Debug.Log("Signup Successful");
FirebaseUser newUser = task;
if (newUser != null)
{
writeNewUser(newUser.UserId,newUser.Email);
signupSuccessNotification.OpenNotification();
Debug.LogFormat("Firebase user created successfully: {0} ({1})",
newUser.DisplayName, newUser.UserId);
}
}); |
Cannot implicitly convert type 'System.Threading.Tasks.Task (firebase) |
|c#|visual-studio|unityscript| |
In my case, since I had 30+ other tables with data I couldn't afford to lose, what I did was:
1. I backed up and deleted the `migrations` folder from the app containing the table
2. I deleted the `django_migrations` table
3. I created a new database and pointed the project at it
4. I ran the Django migrations and app migrations against the new database
5. I exported the table I had deleted, all tables related to it, and the `django_migrations` table as SQL files
6. I changed the default project database back to my original database
7. I deleted the tables that were related to the table I had deleted and imported them from the SQL files, including the `django_migrations` table

This solution worked on both MySQL and SQLite databases |
I am trying to figure out how I could reuse the tabview with textboxes in my GUI Python app using CustomTkinter. My current approach deletes all tabs and creates new ones with textboxes to fill with data.
This is very problematic since it distorts and lags my GUI every time the tabs are re-created. I am looking for a way to reuse the existing tabs, or to create three default tabs, each with a textbox, and just rewrite the data and tab titles. I am struggling with this; is there anything I am missing? I would greatly appreciate any approach that avoids the lag/GUI distortion.
```
def update_tabview(self, data_list):
if self._prev_data_list == data_list:
return
self._prev_data_list = data_list.copy() if data_list else None
my_tabview = self.tabview
current_tabs = my_tabview._name_list.copy()
for tab in current_tabs:
my_tabview.delete(tab)
if len(data_list) == 0:
my_tabview.add("Default TabView")
else:
for data in data_list:
name = data["name"]
my_tabview.add(name)
textbox = ctk.CTkTextbox(my_tabview.tab(name), wrap="word",
font=("Helvetica", 14, "bold"), width=380, height=260)
textbox.grid(row=0, column=0, padx=(5, 5), pady=(5, 10), sticky="nsew")
textbox.insert(tk.END, "TEXT" + "\n" + "\n")
textbox.insert(tk.END, data["a"] + "\n")
textbox.insert(tk.END, "\n" + "TEXT" + "\n\n")
for text in data["b"]:
textbox.insert(tk.END, text + "\n")
textbox.insert(tk.END, "\n" + "TEXT" + "\n\n")
for text in data["c"]:
textbox.insert(tk.END, text + "\n")
textbox.insert(tk.END, "\n" + "TEXT" + "\n\n")
for text in data["d"]:
textbox.insert(tk.END, text + ", ")
textbox.configure(state='disabled')
``` |
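A pattern that may avoid the delete/re-create churn (an untested sketch; `build_tabs` and `_boxes` are names invented here, and CTkTabview does not, to my knowledge, expose a public API for renaming tab titles, so titles stay fixed in this sketch): create a fixed set of tabs with textboxes once at startup, then have the update path only rewrite textbox contents instead of destroying widgets:
```
def build_tabs(self, names):
    # Called once: create each tab and its textbox, keep references for reuse
    self._boxes = {}
    for name in names:
        self.tabview.add(name)
        box = ctk.CTkTextbox(self.tabview.tab(name), wrap="word",
                             font=("Helvetica", 14, "bold"), width=380, height=260)
        box.grid(row=0, column=0, padx=(5, 5), pady=(5, 10), sticky="nsew")
        self._boxes[name] = box

def update_tabview(self, data_list):
    # Reuse existing textboxes: clear and refill instead of deleting tabs
    for box, data in zip(self._boxes.values(), data_list):
        box.configure(state="normal")
        box.delete("1.0", "end")
        box.insert("end", "TEXT\n\n" + data["a"] + "\n")
        box.configure(state="disabled")
```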
I want to see my Look ID in the corresponding BigQuery job labels. My objective is to efficiently track Looker queries within BigQuery. Could you please provide guidance on achieving this integration? Thanks |
Can i add new label called looker-context-look_id in BigQuery connection(Looker) |
|python|google-bigquery|looker| |
I believe this will work in `2010`:
B1: =IF(A1="Name","Group",IF(A2="Name","", INDEX($A$1:A1,LOOKUP(2,1/($A$1:A1="Name"),ROW($A$1:A1))-1)))
and fill down.
***Algorithm***
- If the adjacent cell in Column A = "Name" then enter "Group"
- If the next cell down in column A = "Name" then leave a blank
- The lookup formula will return the row number of the last cell in column A (up to the current row of the formula) that = "Name"
- Subtract `1` to get the row number of the group name
- *The `1` would need to be changed to the first row number of your table in the event the table does not start in Row 1*
- The Index function will then return the relevant group name
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/vNNmx.png |
I have a script to send an email with mailR. So far so good.
But when I want to add a folder as an attachment, the mail won't send (the script runs without any error).
The names of the files in my folder change every week, so the attachment has to reference the folder name and then send the files in it.
My script:
```
library(mailR)

sender <- "name@mail.com"
recipients <- c("name@mail.com")

send.mail(from = sender,
          to = recipients,
          subject = "Subject of the email",
          body = "Test",
          smtp = list(host.name = "abnabnabnn",
                      user.name = "nsankjdnkak",
                      passwd = "123456", ssl = TRUE),
          authenticate = TRUE,
          send = TRUE,
          attach.files = ("\\\\ab01\\Users\\tinus\\Desktop\\Temp folder\\"),
          debug = TRUE)
```
What am I doing wrong?
I tried and searched, but found no result |
R script (mailR) all files from folder as attachment |
|r|email|attachment|send| |
I'm writing a bot to "archive" messages (move them but keep details of author etc) and threads to a specified channel based on date(s).
I'm nearly there but I can't see how to delete the thread that is posted under the channel name:
[Screenshot](https://i.stack.imgur.com/mPyPc.png)
I can delete the messages in the thread after I move them to another channel but I can't see how to delete that thread under the channel.
I had thought it was the "THREAD_STARTER_MESSAGE" (and it still may be) but I can't delete that, as I get a 403 error saying "Cannot execute action on a system message"
Any suggestions welcome.
BTW - for anyone interested here's what the moved/"archived" messages look like:
[Screenshot](https://i.stack.imgur.com/555wO.png) |
Delete defaultProject from angular.json file if it exists.
defaultProject has been deprecated.
See https://github.com/angular/angular-cli/issues/11111 |
I own a VPS. On it we run three Minecraft servers and one Discord bot, plus a local MongoDB instance. I connect to it using the Node.js mongoose library (I am talking about the Discord bot in this case). When I create documents in MongoDB, it works, and the code reads everything. Then, after about 2-3 hours, every single document I created is deleted. It's really annoying and makes the bot completely useless. There are no errors, no logs; the documents are just gone. This is quite urgent, as we have users who want to use the bot. When hosting MongoDB publicly, everything works and nothing gets deleted, so it's not the code. Any help is appreciated. PS: There are no TTL indexes as far as I know, but I couldn't find any reliable documentation on how to check using mongoose.
I am just trying to host my bot on a VPS; it's my first time doing so. Because we want to avoid data leaks, we decided to host MongoDB locally on the VPS, and we need it to communicate across all the servers (Minecraft and Discord). Since everything works and nothing is deleted when hosting MongoDB via Atlas, I assumed hosting it on the VPS would work too. When we contacted the VPS provider's support team, they said they couldn't do anything as this is a problem with MongoDB |
Hibernate: JOIN inheritance question - why the need for two left joins |
|sql|hibernate|join|hibernate-mapping| |
```
guard let url = URL(string:"https://blaaaajo.de/getHiddenUsers.php") else { return }
let postString = "blockedUsers=245,1150"
var request = URLRequest(url: url)
request.httpMethod = "POST"
request.httpBody = postString.data(using: String.Encoding.utf8)
do {
    let (responseString, _) = try await URLSession.shared.data(for: request)
    if let decodedResponse = try? JSONDecoder().decode([HiddenUsersModel].self, from: responseString) {
        gettingBlockedUsers = false
        blockedUsers = decodedResponse
    }
} catch {
    print("Error: \(error)")
}
```
the HiddenUsersModel:
```
struct HiddenUsersModel: Codable {
    var userid: Int
    var nickname: String
}
```
I'm always getting `data not valid`
The url and the POST key `blockedUsers` with the value `245,1150` is 100% correct, I'm also using this API for the web and Android app.
The code on server side doesn't get executed though, not even at the beginning of the PHP script. So no JSON response is generated.
The error I'm getting:
Error: Error Domain=NSURLErrorDomain Code=-999 "Abgebrochen" UserInfo={NSErrorFailingURLStringKey=https://blaaaajo.de/do_getblockedusers.php, NSErrorFailingURLKey=https://blaaaajo.de/do_getblockedusers.php, _NSURLErrorRelatedURLSessionTaskErrorKey=(
"LocalDataTask <33660134-3AEC-4416-A917-C0FC64934DB5>.<7>"
), _NSURLErrorFailingURLSessionTaskErrorKey=LocalDataTask <33660134-3AEC-4416-A917-C0FC64934DB5>.<7>, NSLocalizedDescription=Abgebrochen} |
I'm having problems setting up HTTPS in my Spring Boot application. The application is hosted on an AWS EC2 server with Ubuntu 20. When I try to access the application via Postman using HTTPS, I get a timeout in the server response.
Spring Security configuration:
```java
@EnableWebSecurity
public class SecurityConfiguration extends WebSecurityConfigurerAdapter {
private final UserDetailsDataImplements clientService;
private final PasswordEncoder passwordEncoder;
public SecurityConfiguration(UserDetailsDataImplements usuarioService, PasswordEncoder passwordEncoder) {
this.clientService = usuarioService;
this.passwordEncoder = passwordEncoder;
}
@Override
protected void configure(AuthenticationManagerBuilder auth) throws Exception {
auth.userDetailsService(clientService).passwordEncoder(passwordEncoder);
}
@Override
protected void configure(HttpSecurity http) throws Exception {
http.csrf().disable()
.requiresChannel() // channel security configuration (HTTP/HTTPS)
.anyRequest().requiresSecure() // require HTTPS for all requests
.and()
.authorizeRequests()
.antMatchers(HttpMethod.POST, "/login").permitAll()
.antMatchers(HttpMethod.GET, "/update").permitAll()
.antMatchers(HttpMethod.POST, "/client").permitAll()
.antMatchers(HttpMethod.GET, "/data/test").permitAll()
.antMatchers(HttpMethod.POST, "/data/register").permitAll()
.anyRequest().authenticated()
.and()
.addFilter(new AuthenticationFilter(authenticationManager()))
.addFilter(new AuthValidation(authenticationManager()))
.sessionManagement().sessionCreationPolicy(SessionCreationPolicy.STATELESS)
.and()
.cors();
}
@Bean
public CorsConfigurationSource corsConfigurationSource() {
CorsConfiguration configuration = new CorsConfiguration();
configuration.setAllowedOrigins(Arrays.asList("http://localhost:3000"));
configuration.setAllowedMethods(Arrays.asList("GET", "POST", "PUT", "DELETE", "OPTIONS", "HEAD", "TRACE", "CONNECT"));
configuration.setAllowedHeaders(Arrays.asList("*"));
UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
source.registerCorsConfiguration("/**", configuration);
return source;
}
}
```
AWS EC2 console:
```shell
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.6.3)
2024-03-31 22:25:01.203 INFO 16246 --- [ main] com.brasens.main.BrasensRest : Starting BrasensRest v0.0.1-SNAPSHOT using Java 11.0.22 on ip-172-31-21-105 with PID 16246 (/home/ubuntu/mspm-backend/target/msmp-http-0.0.1-SNAPSHOT.jar started by ubuntu in /home/ubuntu/mspm-backend/target)
2024-03-31 22:25:01.209 INFO 16246 --- [ main] com.brasens.main.BrasensRest : The following profiles are active: prod
2024-03-31 22:25:04.665 INFO 16246 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data JPA repositories in DEFAULT mode.
2024-03-31 22:25:05.058 INFO 16246 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 371 ms. Found 14 JPA repository interfaces.
2024-03-31 22:25:06.972 INFO 16246 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8443 (https)
2024-03-31 22:25:07.001 INFO 16246 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2024-03-31 22:25:07.002 INFO 16246 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.56]
2024-03-31 22:25:07.209 INFO 16246 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2024-03-31 22:25:07.215 INFO 16246 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 5846 ms
2024-03-31 22:25:08.780 INFO 16246 --- [ main] o.hibernate.jpa.internal.util.LogHelper : HHH000204: Processing PersistenceUnitInfo [name: default]
2024-03-31 22:25:08.965 INFO 16246 --- [ main] org.hibernate.Version : HHH000412: Hibernate ORM core version 5.6.4.Final
2024-03-31 22:25:09.386 INFO 16246 --- [ main] o.hibernate.annotations.common.Version : HCANN000001: Hibernate Commons Annotations {5.1.2.Final}
2024-03-31 22:25:09.599 INFO 16246 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
2024-03-31 22:25:10.598 INFO 16246 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Start completed.
2024-03-31 22:25:10.652 INFO 16246 --- [ main] org.hibernate.dialect.Dialect : HHH000400: Using dialect: org.hibernate.dialect.PostgresPlusDialect
2024-03-31 22:25:13.054 INFO 16246 --- [ main] org.hibernate.tuple.PojoInstantiator : HHH000182: No default (no-argument) constructor for class: com.brasens.main.security.PasswordResetToken (class must be instantiated by Interceptor)
2024-03-31 22:25:13.726 INFO 16246 --- [ main] o.h.e.t.j.p.i.JtaPlatformInitiator : HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform]
2024-03-31 22:25:13.740 INFO 16246 --- [ main] j.LocalContainerEntityManagerFactoryBean : Initialized JPA EntityManagerFactory for persistence unit 'default'
2024-03-31 22:25:15.235 WARN 16246 --- [ main] JpaBaseConfiguration$JpaWebConfiguration : spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
2024-03-31 22:25:15.973 INFO 16246 --- [ main] f.a.AutowiredAnnotationBeanPostProcessor : Autowired annotation should only be used on methods with parameters: public void com.brasens.main.cronjobs.Scheduler.check()
2024-03-31 22:25:16.363 INFO 16246 --- [ main] o.s.s.w.a.c.ChannelProcessingFilter : Validated configuration attributes
2024-03-31 22:25:16.441 INFO 16246 --- [ main] o.s.s.web.DefaultSecurityFilterChain : Will secure any request with [org.springframework.security.web.access.channel.ChannelProcessingFilter@4a89ef44, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@6a950a3b, org.springframework.security.web.context.SecurityContextPersistenceFilter@681c0ae6, org.springframework.security.web.header.HeaderWriterFilter@15639d09, org.springframework.web.filter.CorsFilter@4f7be6c8, org.springframework.security.web.authentication.logout.LogoutFilter@1a2e0d57, com.brasens.main.security.AuthenticationFilter@647b9364, com.brasens.main.security.AuthValidation@b6bccb4, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@4d98e41b, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@7459a21e, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@49edcb30, org.springframework.security.web.session.SessionManagementFilter@52bd9a27, org.springframework.security.web.access.ExceptionTranslationFilter@7634f2b, org.springframework.security.web.access.intercept.FilterSecurityInterceptor@1e1237ab]
2024-03-31 22:25:17.839 INFO 16246 --- [ main] o.s.b.a.e.web.EndpointLinksResolver : Exposing 1 endpoint(s) beneath base path '/actuator'
2024-03-31 22:25:18.286 INFO 16246 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8443 (https) with context path ''
2024-03-31 22:25:18.341 INFO 16246 --- [ main] com.brasens.main.BrasensRest : Started BrasensRest in 18.862 seconds (JVM running for 20.927)
^C2024-03-31 22:28:58.761 INFO 16246 --- [ionShutdownHook] j.LocalContainerEntityManagerFactoryBean : Closing JPA EntityManagerFactory for persistence unit 'default'
2024-03-31 22:28:58.764 INFO 16246 --- [ionShutdownHook] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown initiated...
2024-03-31 22:28:58.791 INFO 16246 --- [ionShutdownHook] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown completed.
```
Photo of the Postman:
[enter image description here](https://i.stack.imgur.com/AuTwO.png)
Photo of the AWS EC2 Security Groups:
[enter image description here](https://i.stack.imgur.com/8jFON.png)
The outbound rules also look like this
application.properties:
```
http.port: 8080
server.port: 8443
################# SSL CONFIG #################
security.require-ssl=true
server.ssl.key-store:/etc/letsencrypt/live/brasens.com/keystore.p12
server.ssl.key-store-password: root
server.ssl.keyStoreType: PKCS12
server.ssl.keyAlias: tomcat
```
```java
@RestController
@RequestMapping("/data")
public class DataController {
@GetMapping("/test")
public ResponseEntity test() {
System.out.println("TESTED!");
return ResponseEntity.ok("TESTING...");
}
}
```
What could be causing the timeout when trying to access the application via HTTPS?
Are there any additional settings I should make in Spring Boot or AWS EC2 to ensure that HTTPS is working correctly?
Any suggestions on how to diagnose and resolve this timeout problem? |
I have a dataset containing the order, species, and various trait data for bird species. I am trying to keep only the data for certain orders. Normally when subsetting by species I merge datasets so that only rows with matching row names (species names) are kept in the new data frame. But it appears that with read.csv() row names cannot contain duplicates, and since order names would be duplicated, I cannot use them as row names. So how would I subset so that the new data frame only contains information on selected orders? This is what the columns with orders and species look like:
```
                             Order1          Species
Acanthagenys_rufogularis     Passeriformes   Acanthagenys_rufogularis
Acanthiza_apicalis           Passeriformes   Acanthiza_apicalis
Acanthiza_chrysorrhoa        Passeriformes   Acanthiza_chrysorrhoa
Acanthiza_lineata            Passeriformes   Acanthiza_lineata
Acanthiza_nana               Passeriformes   Acanthiza_nana
Acanthiza_pusilla            Passeriformes   Acanthiza_pusilla
Acanthiza_reguloides         Passeriformes   Acanthiza_reguloides
Acanthiza_uropygialis        Passeriformes   Acanthiza_uropygialis
Acanthorhynchus_tenuirostris Passeriformes   Acanthorhynchus_tenuirostris
Accipiter_cirrocephalus      Accipitriformes Accipiter_cirrocephalus
```
While the first 10 lines only contain 2 orders, the dataset contains information on 26 avian orders, and I am only interested in Passeriformes, Charadriiformes, Psittaciformes, and Struthioniformes. |
The `graphiql` endpoint is available with the `spring-boot-starter-graphql` dependency. Keep only the following dependency in `pom.xml` for GraphQL, and enable GraphiQL in `application.properties` with the line `spring.graphql.graphiql.enabled=true`.
```
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-graphql</artifactId>
</dependency>
```
I refactored the schema in `resources/graphql/schema.graphqls`
```
type Query {
    book(id: ID): BookResponse
    fullName(firstName: String, lastName: String): String
}

type BookResponse {
    id: ID
    bookName: String
    author: String
    description: String
    ownerFirstName: String
    ownerSecondName: String
}
```
Also, add the `@QueryMapping` annotation for each GraphQL query in the controller file.
```
@Controller
public class BookGraphqlController {

    private final BookService bookService;

    public BookGraphqlController(BookService bookService) {
        this.bookService = bookService;
    }

    @QueryMapping
    public String fullName(@Argument String firstName, @Argument String lastName) {
        return firstName + " " + lastName;
    }

    @QueryMapping
    public GetBookResponse book(@Argument UUID id) {
        return bookService.getBookById(id);
    }
}
```
The above code changes made the endpoint work.
**Additional note:**
If you want to use the `GraphQLQueryResolver` interface, it comes from the `com.graphql-java-kickstart` project, so you would need to add that dependency to your project. |
Either explicitly close the output file:
```js
const dsGeoJSON2 = gdal.open('./upload2/objects.geojson');
const out2 = gdal.vectorTranslate(
'./upload2/objects.dxf',
dsGeoJSON2,
['-f', 'DXF']
).close();
```
or simply let the program quit.
The flushing/closing of the file happens when the GC collects the variable holding the dataset. |
When I use Xcode 15 to create complications for Apple Watch, `getLocalizableSampleTemplate` does not display as expected. How do I fix this error?
I expect the code below to show the intended result (without using the WidgetKit framework):
```
func getLocalizableSampleTemplate(for complication: CLKComplication, withHandler handler: @escaping (CLKComplicationTemplate?) -> Void) {
    if complication.family == .graphicCircular {
        let template = CLKComplicationTemplateGraphicCircularView(
            Circle()
                .foregroundColor(Color.red)
        )
        handler(template)
    } else {
        let template = CLKComplicationTemplateGraphicRectangularFullView(
            Circle()
                .foregroundColor(Color.red)
        )
        handler(template)
    }
}
```
 |
You don't need to access the `shadow-root` on the newly opened tab. Saving the PDF file is much easier using Chrome driver options.
You can just pass preferences to chromedriver that will automatically save your PDF file to a directory on print actions.
```python
import json
import sys
import time

from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

try:
    print_settings = {
        "recentDestinations": [{
            "id": "Save as PDF",
            "origin": "local",
            "account": "",
        }],
        "selectedDestinationId": "Save as PDF",
        "version": 2,
        "isHeaderFooterEnabled": False,
        "isLandscapeEnabled": True
    }
    prefs = {
        'printing.print_preview_sticky_settings.appState': json.dumps(print_settings),
        "download.prompt_for_download": False,
        "profile.default_content_setting_values.automatic_downloads": 1,
        "download.directory_upgrade": True,
        # this is the path to the dir where you want to save the file
        "savefile.default_directory": "/Users/a1/PycharmProjects/PythonProject",
        "safebrowsing.enabled": True
    }
    options = webdriver.ChromeOptions()
    options.add_experimental_option('prefs', prefs)
    options.add_argument('--kiosk-printing')

    service = Service()
    driver = webdriver.Chrome(service=service, options=options)
    driver.maximize_window()
    wait = WebDriverWait(driver, 20)

    driver.get("https://web.bcpa.net/BcpaClient/#/Record-Search")
    wait.until(EC.visibility_of_element_located((By.XPATH, '//input[@class="form-control"]'))).send_keys(
        "2216 NW 6 PL FORT LAUDERDALE, FL 33311")
    driver.find_element(By.XPATH,
        '//span[@class="input-group-addon"]/span[@class="glyphicon glyphicon-search"]').click()
    wait.until(EC.visibility_of_element_located((By.XPATH, '//div[@class="col-sm-1 btn-printrecinfo"]'))).click()
    time.sleep(5)
except Exception as e:
    print(e)
    sys.exit(1)
``` |
Keras similarity calculation: computing the distance between two tensors represented as lists |
|python|tensorflow|machine-learning|keras| |
null |
I need some help with the ZKA security scheme, as explained below:
1. In deriving the session key, it says to use the ZKA master key, a random number, and a control mask. Any idea how the control mask value is constructed? I believe it is the same for all transactions, but how is it constructed for PAC?
2. Are the control mask and the control vector the same thing?
3. In the ZKA algorithm, it refers to a value 'd'. Any idea what this value means and what to use here?
I'd appreciate a response to the above questions. Many thanks.
I am using this value for the control vector: `00 21 5F 00 03 41 00 00 00 21 5F 00 03 21 00 00` |
HSM ZKA control mask values |
|security|hardware-security-module| |
null |
The 'shaky' hover problem arises because, as the element moves up, it is no longer hovered, so it comes back down, is hovered again, and so on.
This snippet moves the hover up one level, so hovering there causes the top card to move up while the hover remains on the parent.
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
// grab elements
let game = document.getElementById('game');
let deckEl = document.getElementById('deck');
let hand = document.getElementById('hand');
// global variables
let arr = [
  { card: 1, text: "Attack" },
  { card: 2, text: "Attack" },
  { card: 3, text: "Attack" },
  { card: 4, text: "Shield" },
  { card: 5, text: "Shield" },
  { card: 6, text: "Shield" },
  { card: 7, text: "Parry" },
  { card: 8, text: "Parry" },
  { card: 9, text: "Parry" }
];
let cardsInHand = [];
// shuffling the deck, with the selected deck as argument
function shuffleDeck(array) {
  let i = array.length;
  while (i--) {
    const i2 = Math.floor(Math.random() * i);
    [array[i], array[i2]] = [array[i2], array[i]];
  }
}

// drawing cards, with the card count as argument
function drawCards(cardAmount) {
  for (; cardAmount > 0; cardAmount--) {
    cardsInHand.push(arr[arr.length - 1]);
    arr.pop();
  }
}

// generate deck element
function generateDeck() {
  let cardOutline = 200;
  arr.forEach(card => {
    let cardEl = document.createElement('div');
    cardEl.classList.add('card');
    cardOutline -= 3;
    cardEl.style.top = cardOutline + "px";
    //console.log(cardEl.style.top);
    deckEl.appendChild(cardEl);
  })
}

generateDeck();
shuffleDeck(arr);
drawCards(3);
<!-- language: lang-css -->
#deck {
  display: flex;
  justify-content: space-around;
  position: relative;
}

.card {
  height: 100px;
  width: 100px;
  background-color: red;
  position: absolute;
  border: 1px solid black;
}

#deck .card:last-child {
  transition: transform 0.3s ease;
}

#deck:hover .card:last-child {
  transform: translateY(-20px);
}
<!-- language: lang-html -->
<link rel="stylesheet" href="style.css">
<div class="container">
<div id="game">
<div id="deck"></div>
<div id="hand"></div>
</div>
</div>
<!-- end snippet -->
|
I'm a student working through the PintOS project.
In Programming Project 3 (Virtual Memory), I ran into problems with preprocessing during compilation (a C program).
I have tried everything I can think of, but I'm absolutely lost at this point on how to fix it.
Finally I've come here to ask you about this issue.
**Error**
Stack growth already works, and I am modifying `syscall.c` to implement `mmap`, but **the `spt` field is being reported as an incomplete type, and the error will not go away.**
**Current situation**
The `thread` structure in question is declared in `thread.h`, and the type of its `spt` member, `struct supplemental_page_table`, is declared in `vm.h`. Above the `thread` structure, the current `thread.h` preprocesses `vm.h` with `#ifdef VM` / `#include "vm/vm.h"`. I am currently using an EC2 server (Ubuntu 18.04) via an SSH connection from VS Code, and have tried solutions such as `make clean` and `make`, inserting and reordering `#include` directives and forward declarations, and rebooting and reinstalling EC2, but there is no progress.
**Questions**
1. If `vm.h`, where the `supplemental_page_table` structure used by `spt` is declared, is included in `thread.h` before the `thread` structure, shouldn't `spt` be usable without problems?
```
...
#ifdef VM    // I'm in Project 3 (VM)
#include "vm/vm.h"
...

struct thread {
    ...
#ifdef VM
    /* Table for whole virtual memory owned by thread. */
    struct supplemental_page_table spt;   // The spt structure is defined in vm.h.
    ...
```
```
/* Print in terminal */
In file included from ../../include/userprog/process.h:4:0,
                 from ../../include/vm/vm.h:7,
                 from ../../vm/vm.c:4:
../../include/threads/thread.h:151:33: error: field 'spt' has incomplete type
  struct supplemental_page_table spt;
                                 ^~~
In file included from ../../vm/vm.c:4:0:
../../include/vm/vm.h:200:1: warning: "/*" within comment [-Wcomment]
 /* A structure representing the memory space of the current process. */
../../vm/vm.c: In function 'vm_init':
../../vm/vm.c:21:23: warning: unused variable 'start' [-Wunused-variable]
  struct list_elem *start = list_begin(&frame_table);
                    ^~~~~
../../vm/vm.c: In function 'vm_alloc_page_with_initializer':
../../vm/vm.c:84:1: warning: label 'err' defined but not used [-Wunused-label]
 err:
 ^~~
../../vm/vm.c: In function 'spt_insert_page':
../../vm/vm.c:105:6: warning: unused variable 'succ' [-Wunused-variable]
  int succ = false;
      ^~~~
../../vm/vm.c: In function 'spt_remove_page':
../../vm/vm.c:111:55: warning: unused parameter 'spt' [-Wunused-parameter]
 void spt_remove_page (struct supplemental_page_table *spt, struct page *page) {
                                                       ^~~
...
```
2. Additionally, there is a typedef `tid_t` that I use in `process.h` and that is defined in `thread.h`; I've included `thread.h`, but the typedef is not picked up. I worked around this by defining it again in `process.h`, which is redundant, but I thought I'd ask since it shares the same problem context as above. For reference, the issue above appeared after this one.
```
/* Code in process.h */
#ifndef USERPROG_PROCESS_H
#define USERPROG_PROCESS_H
#include "threads/thread.h"
bool install_page (void *upage, void *kpage, bool writable);
// typedef int tid_t; // if I uncomment this line, I run into the problem mentioned above
tid_t process_create_initd (const char *file_name);
tid_t process_fork (const char *name, struct intr_frame *if_);
int process_exec (void *f_name);
int process_wait (tid_t);
void process_exit (void);
void process_activate (struct thread *next);
#endif /* userprog/process.h */
/* Print */
In file included from ../../include/vm/vm.h:7:0,
from ../../include/threads/thread.h:12,
from ../../threads/init.c:24:
../../include/userprog/process.h:9:1: error: unknown type name 'tid_t'; did you mean 'size_t'?
 tid_t process_create_initd (const char *file_name);
 ^~~~~
 size_t
```
```
/* Here is the code I modified, but I don't think it is the problem */
void *mmap (void *addr, size_t length, int writable, int fd, off_t offset) {
    if (offset % PGSIZE != 0) {
        return NULL;
    }
    if (pg_round_down(addr) != addr || is_kernel_vaddr(addr) || addr == NULL || (long long)length <= 0)
        return NULL;
    if (fd == 0 || fd == 1) {
        exit(-1);
    }
    if (spt_find_page(&thread_current()->spt, addr))
        return NULL;
    struct file *target = find_file_by_fd(fd);
    if (target == NULL)
        return NULL;
    void *ret = do_mmap(addr, length, writable, target, offset);
    return ret;
}

void munmap (void *addr) {
    do_munmap(addr);
}

/* Do the mmap */
void *do_mmap (void *addr, size_t length, int writable, struct file *file, off_t offset) {
    struct file *mfile = file_reopen(file);
    void *ori_addr = addr;
    size_t read_bytes = length > file_length(file) ? file_length(file) : length;
    size_t zero_bytes = PGSIZE - read_bytes % PGSIZE;
    while (read_bytes > 0 || zero_bytes > 0) {
        size_t page_read_bytes = read_bytes < PGSIZE ? read_bytes : PGSIZE;
        size_t page_zero_bytes = PGSIZE - page_read_bytes;
        struct supplemental_page_table *spt = (struct supplemental_page_table *)malloc
            (sizeof(struct supplemental_page_table));
        spt->file = mfile;
        spt->offset = offset;
        spt->read_bytes = page_read_bytes;
        if (!vm_alloc_page_with_initializer (VM_FILE, addr, writable, lazy_load_segment, spt)) {
            return NULL;
        }
        read_bytes -= page_read_bytes;
        zero_bytes -= page_zero_bytes;
        addr += PGSIZE;
        offset += page_read_bytes;
    }
    return ori_addr;
}

/* Do the munmap */
void do_munmap (void *addr) {
    while (true) {
        struct page *page = spt_find_page(&thread_current()->spt, addr);
        if (page == NULL)
            break;
        struct supplemental_page_table *aux = (struct supplemental_page_table *) page->uninit.aux;
        // check the dirty (previously used) bit
        if (pml4_is_dirty(thread_current()->pml4, page->va)) {
            file_write_at(aux->file, addr, aux->read_bytes, aux->offset);
            pml4_set_dirty (thread_current()->pml4, page->va, 0);
        }
        pml4_clear_page(thread_current()->pml4, page->va);
        addr += PGSIZE;
    }
}
```
[Here is my Team git-repository][3]
For now, we've kept this git repository in an intact (error-free) state, but we'll create a state with the same errors and push it soon.
Thank you very much for your time!
[1]: https://i.stack.imgur.com/Z3yIc.png
[2]: https://i.stack.imgur.com/jmvKL.png
[3]: https://github.com/KraftonJungle4th/Classroom5_Week10-11_Team3_PintOS/tree/DJ |
I have a domain-specific JSON object which I want to store in a vector DB. I would be using embedding models like sentence-transformers or OpenAI's text-embedding-ada-002.
My questions are:
a) Can these models efficiently compute proper embeddings for these JSON objects?
b) Even if they can compute embeddings, how well can an LLM reason over them later? LLMs can reason over text, but would they from JSON?
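For illustration, the kind of pre-processing I have in mind is flattening the JSON into plain "key path: value" text before handing it to the embedding model (the keys and values below are made up, not my real data):

```python
import json

def flatten_json(obj, prefix=""):
    """Recursively turn a nested JSON object into 'key path: value' lines."""
    lines = []
    if isinstance(obj, dict):
        for key, value in obj.items():
            path = f"{prefix}.{key}" if prefix else key
            lines.extend(flatten_json(value, path))
    elif isinstance(obj, list):
        for i, value in enumerate(obj):
            lines.extend(flatten_json(value, f"{prefix}[{i}]"))
    else:
        lines.append(f"{prefix}: {obj}")
    return lines

doc = json.loads('{"product": {"name": "widget", "tags": ["small", "blue"]}}')
text = "\n".join(flatten_json(doc))
print(text)
# product.name: widget
# product.tags[0]: small
# product.tags[1]: blue
```

The resulting text would then be what gets embedded, rather than the raw JSON string.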
The domain-specific data is not particularly esoteric; mostly the keys and values are plain English. |
How do I embed JSON documents using embedding models like sentence-transformers or OpenAI's embedding model? |
|openai-api|large-language-model|sentence-transformers|openaiembeddings| |
This would best be accomplished in the following manner:
- Set the `background-color` of the body to what you want the bottom half to be.
- Set the `background-color` of the element and give it the `clip-path`.
This will ensure you always have the nice separation.
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-css -->
html
{
  height: 100%;
  box-sizing: border-box;
}

*, *::before, *::after
{
  box-sizing: inherit;
}

body
{
  margin: 0;
  min-height: 100%;
  background: #313131;
}

.bg
{
  top: 0;
  left: 0;
  right: 0;
  bottom: 0;
  position: fixed;
  background: #099dd7;
  clip-path: ellipse(148% 70% at 91% -14%);
}
<!-- language: lang-html -->
<body><div class="bg"></div></body>
<!-- end snippet -->
The reason that the `linear-gradient` doesn't work is that the `clip-path` that is being applied hides the bottom half of it. |
Try this.
```r
Map(`[<-`, my_list, '.k', value = lapply(my_list, `[`, 'id') |>
        unlist() |>
        duplicated() |>
        split(sapply(my_list, nrow) |> {
            \(.) mapply(rep.int, seq_along(.), .)
        }())) |>
    lapply(subset, !.k, select = -.k)
```
Output:
```
[[1]]
     id country
1 xxxyz     USA
3 zzuio  Canada

[[2]]
     id country
2 ppuip  Canada
``` |
When a user right-clicks and drags a file into a different directory, they are presented with a small context menu that includes entries such as **Copy here** and **Move here**:

Is there a way to query these items yourself? I'm already doing this for the regular right-click context menu on a directory background.

I was able to trigger the default implementation for this right drag-and-drop popup by following these steps:
- Register your app with `RegisterDragDrop`.
- In the `IDropTarget::Drop` method:
- Obtain the `IShellFolder` interface for the drop path.
- Get the `IDropTarget` interface from this folder.
- Forward the `IDataObject` to the `DragEnter` and `Drop` methods.
- The `Drop` method will display the correct context menu.
Here's the simplified version of the code:
```
HRESULT Win_DragDropTargetDrop(IDropTarget *iTarget, IDataObject *iData, DWORD keys, POINTL cursor, DWORD *effect)
{
    HRESULT hr = 0;
    IShellFolder *folder = NULL;
    IDropTarget *folderDropTarget = NULL;
    POINTL pt = {0};

    hr = Win_GetIShellFolder(hwnd, dragDropPath, &folder);
    hr = folder->lpVtbl->CreateViewObject(folder, hwnd, &IID_IDropTarget, &folderDropTarget);
    hr = folderDropTarget->lpVtbl->DragEnter(folderDropTarget, iData, MK_RBUTTON, pt, effect);
    hr = folderDropTarget->lpVtbl->Drop(folderDropTarget, iData, MK_RBUTTON, pt, effect);
    return hr;
}
```
However, what I want is to enumerate these items within `IContextMenu` myself, using functions such as `CreatePopupMenu`, `QueryContextMenu`, and then iterating through that menu via `GetMenuItemCount` and `GetMenuItemInfo`.
The reason is that I'm rendering the context menu myself (instead of using the Windows built-in UI framework). |
I am busy creating a REST API where you can load company details and then load products for each company, each with their own endpoints:
POST /company-details
POST /products
Company details has the following fields in the request body:
companyName
uniqueCode
description
contact
location
Products has the following fields in the request body:
uniqueCpyCode
productName
skuNo
Each one will create a record with an auto-generated id in a MySQL DB, one table for company details and one for products. Then I also have a delete endpoint to delete products (I also have one for companies):
DELETE /products/{id}
One of the issues I have: I want users to be able to delete all products for a company, but I would like to know the best practice for mass deletions in my use case.
I am thinking of having an endpoint
DELETE /products/{uniqueCpyCode}
I.e. delete products where cpyCode = request id. But I want to know: is it best practice to mass-delete resources like that, or do I need an endpoint where I pass all the ids, even though the number of products can potentially be close to 1000?
Or is it better to only have the one delete endpoint and let the user call it multiple times?
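To make the use case concrete, the operation I want a single endpoint to trigger is essentially one filtered DELETE (sketched below with SQLite standing in for MySQL; the table and column names follow my schema above):

```python
import sqlite3

def delete_products_for_company(conn: sqlite3.Connection, unique_cpy_code: str) -> int:
    """Delete every product belonging to one company; returns the number of rows removed."""
    cur = conn.execute("DELETE FROM products WHERE uniqueCpyCode = ?", (unique_cpy_code,))
    conn.commit()
    return cur.rowcount

# In-memory demo data
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, uniqueCpyCode TEXT, productName TEXT, skuNo TEXT)")
conn.executemany(
    "INSERT INTO products (uniqueCpyCode, productName, skuNo) VALUES (?, ?, ?)",
    [("ACME", "Widget", "SKU1"), ("ACME", "Gadget", "SKU2"), ("OTHER", "Gizmo", "SKU3")],
)
conn.commit()
print(delete_products_for_company(conn, "ACME"))  # prints 2: both ACME products removed in one statement
```

So the question is really whether exposing that single filtered operation as one endpoint is acceptable REST practice.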
Then another question I have: what if you want to delete the whole company? What is in general the better approach? 1) Call both delete endpoints:
DELETE /products
DELETE /company
2) The user only calls
DELETE /company
which internally also calls the DELETE /products endpoint.
|
I used to develop a simple price-tracking application for myself with Android Studio. I paused my project for a few months and got back to it recently. I noticed that my application is crashing on startup with nothing in Logcat.
I use my personal phone for development, tethered with USB, Xiaomi Android 13.
What is weird is that:
- Clear cache --> Works again
- Installing the app with android studio once again: All good
- Restarting the app after an Android Studio build --> crashes again with no log.
Any idea how I could troubleshoot it? Because I don't have any log and the application crashes after installing it with Android Studio, I am not sure where to go with it. Could the app be wrongly packaged? But if so, why would it work for the first run?
Any help is welcome :)
All the best,
Philippe |
Android app crash, nothing in Logcat |
|android|kotlin|android-studio|adb|logcat| |
We can do that with a [lifecycle precondition][1] in a `null_resource`.
A couple of things before we dive into the code:
- Your code has two `sc1_default` variables; I'm assuming the second one was a typo and that what is needed there is `sc2_default`.
- For additional validation, use `type` in the variables; it's good practice, it makes the code more readable, and if someone accidentally passes the wrong type the code fails gracefully.
See the sample code below:
``` lang-hcl
variable "sc1_default" {
  type    = bool
  default = "false"
}

variable "sc2_default" {
  type    = bool
  default = "false"
}

variable "sc3_default" {
  type    = bool
  default = "true"
}

variable "sc4_default" {
  type    = bool
  default = "true"
}

resource "null_resource" "validation" {
  lifecycle {
    precondition {
      condition = (
        (var.sc1_default ? 1 : 0) +
        (var.sc2_default ? 1 : 0) +
        (var.sc3_default ? 1 : 0) +
        (var.sc4_default ? 1 : 0)
      ) < 2
      error_message = "Only one sc can be true"
    }
  }
}
```
You can see I set `sc3_default` and `sc4_default` both to true, just to trigger the error.
The condition is the core of this validation: we add up all the true values with the shorthand conditional `(var.sc_default ? 1 : 0)`, and the total must be less than two. I'm assuming that all false is OK, but if not you can change the logic to check that the sum is exactly one.
A terraform plan on that code will error out with the following message:
``` lang-txt
Planning failed. Terraform encountered an error while generating this plan.
╷
│ Error: Resource precondition failed
│
│   on main.tf line 22, in resource "null_resource" "validation":
│   22:     condition = (
│   23:       (var.sc1_default ? 1 : 0) +
│   24:       (var.sc2_default ? 1 : 0) +
│   25:       (var.sc3_default ? 1 : 0) +
│   26:       (var.sc4_default ? 1 : 0)
│   27:     ) < 2
│     ├────────────────
│     │ var.sc1_default is "false"
│     │ var.sc2_default is "false"
│     │ var.sc3_default is "true"
│     │ var.sc4_default is "true"
│
│ Only one sc can be true
╵
```
[1]: https://developer.hashicorp.com/terraform/language/meta-arguments/lifecycle#custom-condition-checks |
How do I keep only specific rows based on whether a column has a specific value? |
|r|subset| |
I am developing a .NET 8.0 application, using the repository pattern and a database-first approach.
Program.cs file:
```
builder.Services.ConfigureRepositoryManager();
builder.Services.ConfigureSqlContext(builder.Configuration);
```
ServiceExtensions.cs file:
```
public static void ConfigureRepositoryManager(this IServiceCollection services)
{
    services.AddScoped<IRepositoryManager, RepositoryManager>();
}

public static void ConfigureSqlContext(
    this IServiceCollection services,
    IConfiguration configuration
)
{
    var conString = new SqlConnectionStringBuilder(configuration.GetConnectionString("sqlConnection"));
    conString.UserID = Environment.GetEnvironmentVariable("MSSQLServerUser");
    conString.Password = Environment.GetEnvironmentVariable("MSSQLServerPassword");

    services.AddDbContext<RepositoryContext>(options =>
    {
        options.UseSqlServer(conString.ConnectionString);
    });
}
```
The connection string inside the `ConfigureSqlContext` method gets updated with the username and password, meaning `conString.ConnectionString` is exactly correct. But on repository instantiation, it receives the connection string defined inside the `appsettings.json` file.
RepositoryManager.cs file:
```
public RepositoryManager(RepositoryContext context)
{
    _context = context;
    _appointment = new Lazy<IAppointmentRepository>(
        () => new AppointmentRepository(_context)
    );
}
```
In this class, `_context` has the wrong connection string.
I am trying to exclude the username and password from the connection string, whether in development or in production. As MS suggests, there are multiple ways, such as user secrets, but I couldn't make them work. It only works if I include the username and password in the connection string in the `appsettings.json` file. |
Repository manager receives the wrong connection string in .net core |
|c#|.net|.net-core|repository-pattern| |
null |
As shown below, within the mqst environment, there are two different Python binary files: `python` and `python3`. But they have the same version, 3.12. If I run
```
which python
```
I get the python file path. And I will get the python3 file path if I do
```
which python3
```
Does anyone know why this is the case?
[enter image description here](https://i.stack.imgur.com/WAue0.jpg)
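For what it's worth, a quick way to check whether the two names resolve to the same file on disk would be something like this (`same_binary` is just a made-up helper name):

```python
import os
import shutil
from typing import Optional

def same_binary(name_a: str, name_b: str) -> Optional[bool]:
    """True if both names resolve (via PATH) to the same file on disk,
    False if they differ, None if either is not found on PATH."""
    path_a, path_b = shutil.which(name_a), shutil.which(name_b)
    if path_a is None or path_b is None:
        return None
    # samefile follows symlinks, so a symlink and its target compare equal
    return os.path.samefile(path_a, path_b)

print(same_binary("python", "python3"))
```

In many conda environments `python3` is just a symlink to `python` in the same `bin` directory, which this check would report as True.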
This doesn't affect my workflow, but I am just curious. |
Conda has two different Python binaries (python and python3) with the same version in a single environment. Why? |
|python|anaconda|environment|miniconda| |
null |
`systemctl` is systemd, which is Linux only.
`brew services start mariadb` is for MacOS.
[ref](https://mariadb.com/kb/en/installing-mariadb-on-macos-using-homebrew/) |
I switched from the `VCL` to the `FireMonkey` framework and I noticed that `TGrid`/`TStringGrid` doesn't have a multi-select option. All I could find are these two articles:
https://www.developpez.net/forums/showthread.php
https://www.tek-tips.com/faqs.cfm?fid=6650
but they seem to be outdated.
I want to achieve multi-cell select + multi-cell edit in the `FireMonkey` framework without a third-party plugin.
Also, do I have to build my grid component from scratch or build it on top of the `TGrid` class, or can I achieve it just from events?
|
Working on a class library, I'd like to add logging to a number of classes. I use `Psr\Log\LoggerInterface` to give the classes access to a logger object. But how do I set the logger in all classes in a decent way? There is a dependency on the logger, as it is used everywhere in the classes.
I can use
## Constructor
Add the logger to the constructor with
```php
public function __construct(
    public readonly \Psr\Log\LoggerInterface $logger = new \Psr\Log\NullLogger()
) {}
```
## Setlogger
Use a public `setLogger()`:
```php
public function setLogger(\Psr\Log\LoggerInterface $logger): self
{
    $this->logger = $logger;
    return $this;
}

private \Psr\Log\LoggerInterface $logger;

public function __construct()
{
    ...
    // Make sure a logger is set
    $this->logger = new \Psr\Log\NullLogger();
    ...
}
```
What would be the best way to do this? Is there a better way? |
How to add logging to an abstract class in php |
|php|oop|dependency-injection| |
There's no nice way of combining optionals in situations like this.
You could possibly look at `Optional.ifPresent` if you like lambdas:
```
usrDb.ifPresent(u -> {
    roleDb.ifPresent(r -> u.getRoles().add(r));
});
```
Alternatively use `Optional.isPresent`
```
if (usrDb.isPresent() && roleDb.isPresent()) {
    usrDb.get().getRoles().add(roleDb.get());
}
```
|
Can someone explain why I get the error "The INSERT statement conflicted with the FOREIGN KEY constraint "FK_ArticleTag_Tags_ArticleId". The conflict occurred in database "Blog", table "dbo.Tags", column 'TagId'"?
```
public class Article
{
    public Article()
    {
        Comments = new HashSet<Comment>();
        Tags = new HashSet<Tag>();
    }

    [Key]
    public int ArticleId { get; set; }

    public int? CategoryId { get; set; }

    [StringLength(30)]
    public string ArticleName { get; set; } = null!;

    public string? ArticleDescription { get; set; }

    public bool Visibility { get; set; }

    [ForeignKey("CategoryId")]
    [InverseProperty("Articles")]
    public virtual Category Category { get; set; }

    [InverseProperty("Article")]
    public virtual ICollection<Comment> Comments { get; set; }

    [ForeignKey("TagId")]
    [InverseProperty("Articles")]
    public virtual ICollection<Tag> Tags { get; set; }
}

public class Tag
{
    public Tag()
    {
        Articles = new HashSet<Article>();
    }

    [Key]
    public int TagId { get; set; }

    [Required]
    [StringLength(50)]
    public string Title { get; set; }

    [ForeignKey("ArticleId")]
    [InverseProperty("Tags")]
    public virtual ICollection<Article>? Articles { get; set; }
}
```
After the migration, with 50 articles and 20 tags, I cannot add a new row to the (auto-generated) ArticleTag table where ArticleId is greater than 20.
I have no idea what this is about. Can someone explain to me what I'm doing wrong? |
Entity Framework 8 DbContext - can't add some rows in a many-to-many relationship |
|entity-framework| |
null |
*There is a very good post about trimming 1fr to 0: https://stackoverflow.com/questions/52861086/why-does-minmax0-1fr-work-for-long-elements-while-1fr-doesnt*
In general I would like to have a cell which expands as the content grows, but within the limits of its parent.
Currently I have a grid with a cell whose content is so lengthy that even without expanding it is bigger than the entire screen. So I would like to do two things: clip the size of the cell, and provide a scrollbar.
I used `minmax(0, 1fr)` from the post I mentioned, so the grid has a free hand to squash the cell to zero, but it still does not effectively compute the height, so the scroller does not know the "fixed" height. Without this information the scrollbar is not activated.
<!-- begin snippet: js hide: false console: false babel: false -->
<!-- language: lang-html -->
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Title</title>
<style>
body
{
width: 100vw;
height: 100vh;
margin: 0;
padding: 0;
overflow: hidden;
}
.page
{
height: 100%;
/*display: flex;
flex-direction: column;*/
display:grid;
grid-template-rows: min-content minmax(0, 1fr);
}
.header
{
}
.content
{
flex-grow: 1;
flex-shrink: 1;
flex-basis: 0;
}
.quiz-grid
{
height: 100%;
max-height: 100%;
display: grid;
grid-template-columns: minmax(0, 1fr) auto minmax(0, 1fr);
grid-template-rows: minmax(0, 1fr) min-content;
grid-template-areas:
"left main right"
"footer footer footer";
}
.quiz-cell-left
{
grid-area: left;
min-height: 0;
}
.quiz-cell-right
{
grid-area: right;
min-height: 0;
}
.quiz-cell-main
{
grid-area: main;
border: 1px red solid;
min-height: 0;
}
.quiz-cell-footer
{
grid-area: footer;
justify-self: center;
align-self: center;
}
/* my scroll view component */
.scroll-container
{
position: relative;
width: 100%;
min-height: 0;
background-color: azure;
max-height: 100%;
}
.scroll-content
{
height: 100%;
width: 100%;
overflow-y: auto;
}
</style>
</head>
<body>
<div class="page">
<div class="header">Header</div>
<div class="content">
<div class="quiz-grid">
<div class="quiz-cell-main">
<div class="scroll-container"> <div class="scroll-content">
<h1>something</h1>
<h1>else</h1>
<h1>alice</h1>
<h1>cat</h1>
<h1>or dog</h1>
<h1>now</h1>
<h1>world</h1>
<h1>something</h1>
<h1>else</h1>
<h1>alice</h1>
<h1>cat</h1>
<h1>or dog</h1>
<h1>now</h1>
<h1>world</h1>
<h1>something</h1>
<h1>else</h1>
<h1>alice</h1>
<h1>cat</h1>
<h1>or dog</h1>
<h1>now</h1>
<h1>world</h1>
<h1>something</h1>
<h1>else</h1>
<h1>alice</h1>
<h1>cat</h1>
<h1>or dog</h1>
<h1>now</h1>
<h1>world</h1>
</div>
</div>
</div>
<div class="quiz-cell-footer">
footer
</div>
</div>
</div>
</div>
</body>
</html>
<!-- end snippet -->
*Comment to the code: the content sits in cell "main", the other elements (header, footer) are just to make sure the solution would not be over simplified.*
**Update 1**: originally I used "flex" for outer layout, I switched to grid and I managed to achieve the partial clip at least, I am still stuck with my real clip with scroll.
**Update 2**: technically it **seems** I solved it (I am looking for hidden problems now) -- I based the two-div scroller on grid as well, with the scroll container using `grid-template-rows: minmax(0, auto);`, but honestly it was dumb luck: I simply noticed that grid propagates height information, so maybe it would work. But I still wonder why "flex" does not work for the outer layout, and how to "pass" height info to the two-div scroller without resorting to grid (I am not against grid, but at this pace I will have grid everywhere :-)). |
MongoDB documents are randomly deleted after a few hours |
|node.js|mongodb|discord.js| |
null |
I'm writing a bot to "archive" messages (move them but keep details of author etc) and threads to a specified channel based on date(s).
I'm nearly there, but I can't see how to delete the thread that is posted under the channel name:
[Screenshot](https://i.stack.imgur.com/mPyPc.png)
I can delete the messages in the thread after I move them to another channel but I can't see how to delete that thread under the channel.
I had thought it was the "THREAD_STARTER_MESSAGE" (and it still may be), but I can't delete that, as I get a 403 error saying "Cannot execute action on a system message".
Any suggestions welcome.
BTW - for anyone interested here's what the moved/"archived" messages look like:
[Screenshot](https://i.stack.imgur.com/555wO.png) |
Here you are creating a Python list with the statement <br>
`[torch.sin(theta)*torch.cos(phi), torch.sin(theta)*torch.sin(phi), torch.cos(theta)]`<br>
If you want to perform a multiplication on it, you have to convert the list into a torch tensor first. You can do this with <br>
`torch.tensor([torch.sin(theta)*torch.cos(phi), torch.sin(theta)*torch.sin(phi), torch.cos(theta)])`<br>
This also ensures that you don't have to move your data from the GPU to the CPU (as would happen with a NumPy operation).
```
import torch
n_inc = torch.tensor(1)
theta = torch.tensor(0.6109)
phi = torch.tensor(0)
k0 = torch.tensor(6.2832)
kinc = k0*n_inc*torch.tensor([torch.sin(theta)*torch.cos(phi),
torch.sin(theta)*torch.sin(phi),
torch.cos(theta)])
print(kinc)
``` |
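One caveat worth adding as a side note (not covered above): `torch.tensor([...])` copies the values and detaches them from the autograd graph, so gradients would not flow back through `theta` or `phi`. If gradients matter, `torch.stack` is the usual alternative; a minimal sketch:

```python
import torch

theta = torch.tensor(0.6109, requires_grad=True)
phi = torch.tensor(0.0)

# torch.stack keeps the result connected to the autograd graph,
# unlike torch.tensor([...]), which copies plain values.
k_hat = torch.stack([torch.sin(theta) * torch.cos(phi),
                     torch.sin(theta) * torch.sin(phi),
                     torch.cos(theta)])

k_hat.sum().backward()
print(theta.grad)  # gradient flows back to theta
```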
{"OriginalQuestionIds":[21697188],"Voters":[{"Id":6196568,"DisplayName":"shingo"},{"Id":7733418,"DisplayName":"Yunnosch"},{"Id":9214357,"DisplayName":"Zephyr"}]} |
I have an error message that I understand, but I don't know how to resolve it. The 'add' call works without HTTP and 'rxMethod'. However, rxMethod cannot be used outside of the constructor. What am I doing wrong?
```typescript
addEntity: (entityToPush: entity) => {
    return rxMethod(pipe(
        tap(() => patchState(entityStore, {isLoading: true})),
        switchMap(() => entityHttpService.createEntity(entityToPush).pipe(
            tapResponse({
                next: (entityToPush) => patchState(entityStore, {entities: [...entityStore.entities(), entityToPush]}),
                error: console.error,
                finalize: () => patchState(entityStore, {isLoading: false})
            })
        ))
    ));
}
```
The error: `rxMethod() can only be used within an injection context such as a constructor, a factory function, a field initializer, or a function used with runInInjectionContext`
Create Entity with signalStore and rxMethod |
|angular|signals|ngrx|store|angular17| |
I got the same error and as @andrew suggested opening a new terminal after installing helped me. Just posting here if anyone comes across this issue. |
I created `person` table as shown below:
```sql
CREATE TABLE person (
id INTEGER,
name VARCHAR(20),
age INTEGER
);
```
Then, I created `my_func()` which returns the record `ROW(1,'John'::VARCHAR,27)` as shown below:
```sql
CREATE FUNCTION my_func() RETURNS trigger
AS $$
BEGIN
RETURN ROW(1,'John'::VARCHAR,27);
END; -- note: ::VARCHAR above has no length modifier
$$ LANGUAGE plpgsql;
```
Then, I created `my_t` trigger as shown below:
```sql
CREATE TRIGGER my_t BEFORE INSERT OR UPDATE OR DELETE ON person
FOR EACH ROW EXECUTE FUNCTION my_func();
```
Finally, inserting a row to `person` table got the same error as shown below:
```sql
postgres=# INSERT INTO person (id, name, age) VALUES (NULL, NULL, NULL);
ERROR: returned row structure does not match the structure of the triggering table
DETAIL:  Returned type character varying does not match expected type character varying(20) in column 2.
CONTEXT: PL/pgSQL function my_func() during function exit
```
So, I replaced `::VARCHAR` with `::VARCHAR(20)` as shown below:
```sql
CREATE FUNCTION my_func() RETURNS trigger
AS $$
BEGIN
RETURN ROW(1,'John'::VARCHAR(20),27);
END; -- note: ::VARCHAR(20) now matches the column type
$$ LANGUAGE plpgsql;
```
Finally, I could insert a row to `person` table without error as shown below:
```sql
postgres=# INSERT INTO person (id, name, age) VALUES (NULL, NULL, NULL);
INSERT 0 1
postgres=# SELECT * FROM person;
id | name | age
----+------+-----
1 | John | 27
(1 row)
``` |
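As a side note, an alternative sketch (not the only fix): instead of building a `ROW(...)` whose column types must match the table exactly, a trigger function can assign to `NEW` directly, which sidesteps the type-matching problem. This applies to the `INSERT`/`UPDATE` cases; in the `DELETE` case `NEW` is not populated and you would return `OLD` instead.

```sql
CREATE OR REPLACE FUNCTION my_func() RETURNS trigger
AS $$
BEGIN
    -- Assigning to NEW uses the table's own column types,
    -- so no explicit VARCHAR(20) cast is needed.
    NEW.id := 1;
    NEW.name := 'John';
    NEW.age := 27;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```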
null |
I switched from the `VCL` to the `FireMonkey` framework and noticed that `TGrid`/`TStringGrid` have no built-in multi-select option. All I could find is this article (a 2013 release):
https://pictoselector.wordpress.com/2013/05/28/adding-multiselect-to-firemonkeys-tgrid/
but it seems outdated.
I want to achieve multi-cell selection plus multi-cell editing in the `FireMonkey` framework without a third-party plugin.
Also, do I have to build my grid component from scratch, or build it on top of the `TGrid` class, or can I achieve it just with events?
|
When I run my Kotlin desktop program in the IntelliJ IDE it works perfectly, but when I launch the JAR file I built from it in the terminal, it shows this error:
```
Exception in thread "main" java.lang.NoClassDefFoundError: io/github/jan/supabase/SupabaseClientBuilder
	at org.main.MainKt.main(Main.kt:19)
	at org.main.MainKt.main(Main.kt)
Caused by: java.lang.ClassNotFoundException: io.github.jan.supabase.SupabaseClientBuilder
	at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:641)
	at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:188)
	at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:526)
	... 2 more
```
How can I fix it?
I have checked all the implementations and the Gradle file, and asked ChatGPT, but found nothing.
getting error when trying to launch kotlin jar file that use supabase "java.lang.NoClassDefFoundError" |
|java|kotlin|gradle|desktop|supabase| |
null |
I am currently trying to make a Discord bot with Python which, once a user has authorized it, is able to add that user to a server. This is the guide I followed: https://dev.to/dandev95/add-a-user-to-a-guild-with-discord-oauth2-in-python-using-requests-595f
Sadly, it did not work, so I changed the code a bit and got this:
```
import discord
from discord import app_commands
import requests
with open("token.txt") as file:
token = file.read().strip()
API_ENDPOINT = 'https://discord.com/api/v10'
CLIENT_ID = '123456789'
CLIENT_SECRET = '123456789abcdefg'
REDIRECT_URI = "https://google.com"
class aclient(discord.Client):
def __init__(self):
super().__init__(intents = discord.Intents.all())
self.synced = False
async def on_ready(self):
await self.wait_until_ready()
await tree.sync(guild = discord.Object(id=1209872153269243904))
print(f"Logged in as {self.user}.")
client = aclient()
tree = app_commands.CommandTree(client)
def exchange_code(code):
data = {
'client_id': CLIENT_ID,
'client_secret': CLIENT_SECRET,
'grant_type': 'authorization_code',
'code': code,
'redirect_uri': REDIRECT_URI
}
headers = {
'Content-Type': 'application/x-www-form-urlencoded'
}
r = requests.post('%s/oauth2/token' % API_ENDPOINT, data=data, headers=headers)
r.raise_for_status()
return r.json()
def add_to_guild(access_token, userID, guildID):
url = f"{API_ENDPOINT}/guilds/{guildID}/members/{userID}"
botToken = token
data = {
"access_token" : access_token,
}
headers = {
"Authorization" : f"Bot {botToken}",
'Content-Type': 'application/json'
}
response = requests.put(url=url, headers=headers, json=data)
print(response.text)
@client.event
async def on_ready():
print(f"Logged in as {client.user}")
code = exchange_code('abcdefghijklmnopqrstuvwxyz')['access_token']
print(code)
add_to_guild(code, '716235295032344596', '622176715183226910')
client.run(token)
```
When the bot is run, this is printed to the console:
```
OgGVKANijnj3yyAec0OuP0D4dY63qK
{"message": "Missing Permissions", "code": 50013}
```
It says "Missing Permissions", but as seen here, the bot should have them:
[](https://i.stack.imgur.com/r5BDS.png)
I tried searching in the Docs [here](https://discord.com/developers/docs/topics/oauth2) but I also did not find any solution to my problem there.
|