I need help with the architecture pattern I should use in a NestJS project.
So I am using a command/query approach for developing my RestAPIs. Right now it's a Monolith and not a microservice architecture, but I am developing it in a way that tomorrow it will be easy to switch to Microservice.
So consider a scenario, where I have 2 APIs one is createStudent and the other is createUser.
In my application, I have 2 separate folders under src: users and students, where users handles all user-related stuff and students caters to student fees, attendance, etc.
Each of them has its own entities and repository files.
Also, creating a student involves a step of creating a user as well. So basically, for a student to be created, a user will be created, their contact and address details will be saved, institute details will be saved, document details will be saved, etc.
Considering this, creating users, contacts, and addresses are part of the user folder, and repository files and entity files related to these are stored in the user folder.
Create student, assign institute to the student, insert documents, are part of the student folder and repository, and entity files related to these are stored in the student folder.
Right now, in the createStudent handler, I have injected repositories for user, userAddresses, and userContacts, and I use them in the handler to get/create/update records related to the user, addresses, or contacts.
Though I have a separate handler for createUser as well, where I also need to do the same eventually, it will have nothing to do with the student.
I am still able to do what I need to do; I am just thinking that tomorrow, if I switch to a microservices approach where user and student are different microservices with different databases, I will not be able to inject the repositories and will instead have to call the user or student REST API to achieve this.
Am I doing it the right way, or is there a way to call one handler from another handler in NestJS so that I can segregate the logic into its specific handlers?
My second thought is: if users and students are so closely linked and exchange this much data, should they be segregated into different microservices at all? |
How to decode audio stream using tornado websocket? |
|python|python-3.x|torchaudio| |
If you don't want to use styles, you can make a simple subclass:
    public class TextImageButton extends ImageTextButton {
        public TextImageButton(String text, Skin skin, Texture texture) {
            super(text, skin);
            clearChildren();
            add(new Image(texture));
            add(getLabel());
        }
    }
|
In all four cases you presented, Oracle would do a normal index seek (binary tree search) on the leading column `eid` because an equality predicate was provided for this column in each of your cases. But what else it does with the index depends on what other columns you're filtering on:
1. **First column** (`eid`) only: after finding the first `eid=10` entry in the leaf blocks using a binary search/seek, Oracle will scan that and any subsequent leaf blocks it needs to, moving through each block (single-block reads) using a linked list, until it finds the first `eid` value that is not 10. As it does so, it gathers `ROWID`s (which contain the physical row address in the table segment) and issues single-block reads by `ROWID` to obtain the rest of each row.
2. **First two columns** (`eid` and `ename`): because `ename` is the second column in the index, Oracle will use both `eid` and `ename` together as it performs the binary search (seek) to find the first leaf block entry where `eid=10` and `ename='raj'`. It will then proceed as above, scanning leaf blocks until it finds the first row where either of these columns has a different value.
3. **First and third column** (`eid` and `esal`): because `esal` is the *third* column and you're skipping the second column, Oracle cannot use a single binary search/seek operation on `esal`. It has two choices:
3a. It does a binary search/seek only on the leading column, `eid`, but once it finds the first `eid=10` value in the leaf blocks it will do a normal scan of leaf blocks following that linked list - looking through *all* `eid=10` rows, but grabbing `ROWID`s only for those with `esal=1000`.
3b. Or, it does a skip scan: *for every distinct value of the missing intermediate column(s)* (`ename`), it will do a separate binary search/seek on the combined `eid=10 / esal=1000` value. This is a seek not a scan, but it is potentially many seeks. If there are many `ename` values this results in a lot of unnecessary single block I/O and can perform poorly. But if there are only a few values, it works pretty well.
4. **All columns**: Your last example would do a single binary search/seek on all three columns. Nothing special here.
You didn't offer this as an example, but to complete the study:
5. **Third column only**: If you queried on `esal=1000` only, Oracle could do one of the following:
5a. Forget the index and scan the table itself (if 1000 is common)
5b. Do a full scan (scattered read) of 100% of the leaf blocks of the index (if 1000 is uncommon but there are many `eid`/`ename` values)
5c. Do a skip scan, which means a binary search/seek for `esal=1000` for every single distinct combination of the preceding, unfiltered columns (`eid` and `ename`). That would be a lot of seeks, so it is rather unlikely the optimizer would choose it unless it believes there aren't very many `eid`/`ename` combinations.
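As a rough illustration of these access paths (a toy model in Python with hypothetical data, not Oracle internals), a composite index can be pictured as a sorted list of tuples: a seek is a binary search on a leading prefix of the key, and a skip scan repeats that seek once per distinct value of the skipped column:

```python
from bisect import bisect_left

# Toy composite index on (eid, ename, esal) -> rowid, kept sorted like index leaf entries.
index = sorted([
    (10, "amy", 1000, "r1"), (10, "amy", 2000, "r2"),
    (10, "raj", 1000, "r3"), (10, "raj", 3000, "r4"),
    (20, "amy", 1000, "r5"),
])

def seek_scan(eid, esal):
    """Case 3a: seek on eid only, then scan every eid entry, filtering on esal."""
    i = bisect_left(index, (eid,))                  # one binary search on the leading column
    out = []
    while i < len(index) and index[i][0] == eid:    # linked-list-style leaf scan
        if index[i][2] == esal:
            out.append(index[i][3])
        i += 1
    return out

def skip_scan(eid, esal):
    """Case 3b: one seek per distinct value of the skipped column (ename)."""
    enames = {e for (d, e, s, r) in index if d == eid}   # the distinct values to probe
    out = []
    for ename in sorted(enames):
        i = bisect_left(index, (eid, ename, esal))       # seek on the full key prefix
        while i < len(index) and index[i][:3] == (eid, ename, esal):
            out.append(index[i][3])
            i += 1
    return out

print(seek_scan(10, 1000))   # ['r1', 'r3']
print(skip_scan(10, 1000))   # ['r1', 'r3']
```

Both strategies return the same rows; the difference is only in how many binary searches versus sequential comparisons each performs, which is exactly the trade-off the optimizer weighs.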
Whenever Oracle has a choice, it all depends on statistics and the expected cardinality of each operation, which is largely driven by the min/max and number of distinct values known for each column, combined with overall table row counts. Of course, you can force it with hints if you think you know better than the statistics, but it is recommended to hold off on hinting until you have a solid grasp of how Oracle queries work internally, as you can easily tell it to do the wrong thing.
In conclusion, index column order matters a great deal. It doesn't have to be perfect, as you don't want dozens of indexes on a table, so you have to compromise a little, but carefully considering column order within composite indexes is an important modeling consideration based on the kinds of queries expected or present.
|
Python virtual environment gets deleted automatically on HPC |
|hpc| |
I need to validate English words. I have used https://www.npmjs.com/package/an-array-of-english-words, but it is not comprehensive. Please suggest a good JS NPM library. Thanks. |
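Whichever word list you end up with, the check itself reduces to a set-membership test; a language-agnostic sketch (shown here in Python, with a deliberately tiny, hypothetical word list standing in for a real dictionary package) looks like this:

```python
# Hypothetical small word list; a real one would come from a package or a dictionary file.
WORDS = {"apple", "banana", "cherry"}

def is_english_word(word: str) -> bool:
    """O(1) membership test after normalizing case."""
    return word.lower() in WORDS

print(is_english_word("Apple"))   # True
print(is_english_word("qwxzt"))   # False
```

Loading the list into a set (rather than scanning an array each time) is the main performance consideration regardless of which library supplies the words.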
Assistance with sum for multiple separate rows using multiple criteria please |
I'm unable to ping or curl any website from an instance that has only an IPv6 address assigned, whereas I am able to ping and access the internet if I create a Windows instance with the same settings (Ubuntu or Amazon Linux instances are not working). Some of the details are as follows:
ip -6 addr:

Route Table:

Security Groups:

Ping command response (Stuck with no output): https://i.stack.imgur.com/Zie6f.png |
|php|html|woocommerce|dokan| |
I have a situation where I have several tables in my SQL Server database that I would like to query using Entity Framework Core. Rather than have an individual entity for each table, I would like to have one entity that contains properties for all of the columns across all of the tables.
When I try to do this, I end up with a
> System.InvalidOperationException: 'The required column was not present in the results of a 'FromSql' operation.'
Here's essentially what my tables look like:
Table #1 Columns: Column1, Column2, Column3
Table #2 Columns: Column3, Column4, Column5
Table #3 Columns: Column3, Column4, Column6
This is essentially what my Entity looks like:
```
public class UniversalEntity
{
public string? Column1 { get; set; }
public string? Column2 { get; set; }
public string? Column3 { get; set; }
public string? Column4 { get; set; }
public string? Column5 { get; set; }
public string? Column6 { get; set; }
}
```
This is what my `DbContext` looks like:
```
public class MyContext : DbContext
{
public DbSet<UniversalEntity>? Universals {get; set;}
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
optionsBuilder.UseSqlServer([My Database Connection String]);
}
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
var entityTypeBuilder = modelBuilder.Entity<UniversalEntity>().HasNoKey();
entityTypeBuilder.Property(p => p.Column1).IsRequired(false);
entityTypeBuilder.Property(p => p.Column2).IsRequired(false);
entityTypeBuilder.Property(p => p.Column3).IsRequired(false);
entityTypeBuilder.Property(p => p.Column4).IsRequired(false);
entityTypeBuilder.Property(p => p.Column5).IsRequired(false);
entityTypeBuilder.Property(p => p.Column6).IsRequired(false);
}
}
```
And this is how I'm querying the tables:
```
MyContext context = new MyContext();
List<UniversalEntity>? results = context.Universals.FromSqlRaw("RAW SQL QUERY").ToList();
```
If I query table #2 for example, then the error I get looks like this:
> System.InvalidOperationException: 'The required column 'Column1' was not present in the results of a 'FromSql' operation.'
I thought that using
entityTypeBuilder.Property(p => p.ColumnX).IsRequired(false);
in the `OnModelCreating` method would have stopped the columns from being required, but that does not appear to work.
Is it possible to create an entity that has properties that aren't required to be filled by a call to `FromSqlRaw`? If so, how?
I'm currently using EF Core 7, but I'm open to switching versions if necessary.
All of my queries are strictly read-only as well. No queries will need to write to the tables.
|
I know this question is old, but the solution in my application was different from the answers already suggested. If anyone else still has this issue and none of the above answers works, this might be the solution:
    var binding = new BasicHttpBinding(BasicHttpSecurityMode.TransportCredentialOnly);

    // Configure transport security
    binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Windows;
    binding.Security.Transport.ProxyCredentialType = HttpProxyCredentialType.Windows;
    binding.Security.Transport.Realm = "";

    // Configure message security
    binding.Security.Message.ClientCredentialType = BasicHttpMessageCredentialType.UserName;
    binding.MaxReceivedMessageSize = 10485760; // 10 MB limit

    EndpointAddress endpointAddress = new EndpointAddress(SSRSReportExecutionUrl);

    // Create the execution service SOAP client
    var rsExec = new ReportExecutionServiceSoapClient(binding, endpointAddress);
    if (rsExec.ClientCredentials != null)
    {
        rsExec.ClientCredentials.Windows.AllowedImpersonationLevel = System.Security.Principal.TokenImpersonationLevel.Impersonation;
        // rsExec.ClientCredentials.Windows.ClientCredential = clientCredentials;
        rsExec.ClientCredentials.UserName.UserName = "Your PC Login USERNAME";
        rsExec.ClientCredentials.UserName.Password = "Your PC Login PASSWORD";
    }

    // This handles the problem of "Missing session identifier"
    rsExec.Endpoint.EndpointBehaviors.Add(new ReportingServicesEndpointBehavior());

    await rsExec.LoadReportAsync(null, "/" + "YOUR_REPORT_FOLDER_PATH" + "/" + "REPORT_NAME", null);
|
I have the following problem:
I have the URL to a picture 'HTTP://WWW.ROLANDSCHWAIGER.AT/DURCHBLICK.JPG' saved in my database. I think you see the problem here: The URL is in uppercase. Now I want to display the picture in the SAP GUI, but for that, I have to convert it to lowercase.
I have the following code from a tutorial, but without the conversion:
```abap
*&---------------------------------------------------------------------*
*& Report ZDURCHBLICK_24035
*&---------------------------------------------------------------------*
*&
*&---------------------------------------------------------------------*
REPORT zdurchblick_24035.
TABLES: zproject_24035.
PARAMETERS pa_proj TYPE zproject_24035-projekt OBLIGATORY.
DATA gs_project TYPE zproject_24035.
*Controls
DATA: go_container TYPE REF TO cl_gui_custom_container.
DATA: go_picture TYPE REF TO cl_gui_picture.
START-OF-SELECTION.
WRITE: / 'Durchblick 3.0'.
SELECT SINGLE * FROM zproject_24035 INTO @gs_project WHERE projekt =
@pa_proj.
WRITE gs_project.
IF sy-subrc = 0.
WRITE 'Wert im System gefunden'.
ELSE.
WRITE 'Kein Wert gefunden'.
ENDIF.
WRITE : /'Es wurden', sy-dbcnt, 'Werte gefunden'.
AT LINE-SELECTION.
zproject_24035 = gs_project.
CALL SCREEN 9100.
*&---------------------------------------------------------------------*
*& Module CREATE_CONROLS OUTPUT
*&---------------------------------------------------------------------*
*&
*&---------------------------------------------------------------------*
MODULE create_conrols OUTPUT.
* SET PF-STATUS 'xxxxxxxx'.
* SET TITLEBAR 'xxx'.
IF go_container IS NOT BOUND.
CREATE OBJECT go_container
EXPORTING
container_name = 'BILD'.
CREATE OBJECT go_picture
EXPORTING
parent = go_container.
CALL METHOD go_picture->load_picture_from_url
EXPORTING
url = gs_project-bild.
ENDIF.
ENDMODULE.
```
|
How do you retrieve body from ClientResponse? |
This would be a solution in Python. It reads the file input.txt, finds the blocks delimited by the character sequence "——", and counts the matching lines per block. It then writes the number of lines found per block to output.txt. I hope this meets your needs.
    '''
    Parses input.txt (in the same directory as this Python file)
    and counts the occurrences of keyword2 (kw2)
    in the blocks delimited by kw1.
    '''
    import pathlib
    import re

    kw1 = "——"
    kw2 = r"^\d+ nuit"

    def write_to_output(res_kw):
        with open("output.txt", "a", encoding="utf-8") as fout:
            for cnt in res_kw:
                fout.write(str(cnt) + "\n")

    path = pathlib.Path(__file__).parent.resolve()
    with open(path / "input.txt", mode="r", encoding="utf-8") as finput:
        lines = finput.readlines()

    # Indices of the lines containing the block delimiter
    block = [i for i, line in enumerate(lines) if re.search(kw1, line)]

    # Count kw2 matches between each pair of consecutive delimiters
    nuit = [0] * (len(block) - 1)
    for i in range(1, len(block)):
        for line in lines[block[i - 1]:block[i]]:
            if re.search(kw2, line):
                nuit[i - 1] += 1

    write_to_output(nuit)
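The same counting logic can be condensed into a self-contained function (a sketch using the delimiter and pattern from above) that is easier to test in isolation:

```python
import re

def count_per_block(lines, delimiter="——", pattern=r"^\d+ nuit"):
    """Count lines matching `pattern` between consecutive `delimiter` lines."""
    marks = [i for i, line in enumerate(lines) if delimiter in line]
    return [
        sum(bool(re.search(pattern, line)) for line in lines[a:b])
        for a, b in zip(marks, marks[1:])
    ]

sample = ["——", "1 nuit", "2 nuit", "——", "x", "3 nuit", "——"]
print(count_per_block(sample))   # [2, 1]
```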
|
## Bundle JRE/JDK within app
> users can get started without having to install the JDK manually
You should bundle a JRE/JDK within your app. This is the canonical way to distribute Java desktop apps and console apps. Such apps can qualify for distribution within online app marketplace such as the [*Apple App Store*][1], [*Google Play*][2], etc.
See [*Java Client Roadmap Update*](https://www.oracle.com/technetwork/java/javase/javaclientroadmapupdatev2020may-6548840.pdf) white paper by Oracle, 2020-05.
On a related note, some readers may benefit from reading [*Java Is Still Free*](https://docs.google.com/document/d/1nFGazvrCvHMZJgFstlbzoHjpAVwv5DEdnaBr_5pKuHo/edit), written by some pillars of the Java community.
The [OpenJDK][3] project provides some tooling to enable bundling a JRE/JDK within your app. See:
- [*JEP 282: jlink: The Java Linker*][4]
- [*JEP 392: Packaging Tool*](https://openjdk.org/jeps/392) (*jpackage*)
Also, on the cutting edge of technology, you may consider building an app of native code using [GraalVM][5] technology.
## No convenient auto-install
You said:
> I want to add an automatic JDK installation feature to my Java project
As others said, there seems to be no such tool, and no such feature within the Java platform.
In theory, you could write your own to download and install a JVM along the lines of what [SDKMAN!][6] does. To do that, your app would start by running shell scripts or some native code as there is no way to run Java code without already having a JRE/JDK installed.
### *OpenWebStart*
You might find happiness with [*OpenWebStart*][7], an open-source re-implementation of the [*Java Web Start*][8] technology.
Your [JNLP][9] application can be run with a version of Java such as the [LTS][10] versions 8, 11, 17, or 21.
To quote [their FAQ][11]:
>Do I have to install a Java on my system to run OWS?
>
>There is no need to have any JVM installed on your system to run OWS. OWS bundles its own JVM. OWS has a JVM Manager which can find local JREs or download JREs (from the Internet) to run your Jnlp application..
>
>Don’t worry if no JVMs are shown in JVM Manager after installation of OWS. If you wish you can use Find local or Add local… to detect and add an existing Java instance on your client device to the JVM Manager or simply start the JNLP file and the JVM Manager will download an appropriate JRE for your application.
>
>The downloaded JREs are stored in %userhome%/.cache/icedtea-web/
[1]: https://en.wikipedia.org/wiki/App_Store_(Apple)
[2]: https://en.wikipedia.org/wiki/Google_Play
[3]: https://en.wikipedia.org/wiki/OpenJDK
[4]: https://openjdk.org/jeps/282
[5]: https://en.wikipedia.org/wiki/GraalVM
[6]: https://sdkman.io/
[7]: https://openwebstart.com/
[8]: https://en.wikipedia.org/wiki/Java_Web_Start
[9]: https://en.wikipedia.org/wiki/Java_Web_Start#Java_Network_Launching_Protocol_(JNLP)
[10]: https://en.wikipedia.org/wiki/Long-term_support
[11]: https://openwebstart.com/docs/FAQ.html |
```
interface Test {
a: string;
b: number;
c: boolean;
}
let arr: string[] = []
function test<S extends Pick<Test, 'a' | 'b'>, T extends keyof S>(val: T[]) {
arr = val // unexcepted error!
}
```
PlayGround: https://www.typescriptlang.org/play?ssl=19&ssc=27&pln=15&pc=1#code/JYOwLgpgTgZghgYwgAgCoQM5mQbwFDKHJwBcyWUoA5gNwFEBGZIArgLYPR1HIJkMB7AQBsIcEHQC+eGaOxwoUMhWoBtALrIAvMg0yYLEAjDABIZJCwAeAMrIIAD0ggAJhmQAFYAgDWV9FgANMgA5HAhyAA+oQwhAHzBqPZOEK7uPhAAngIwyDZxABQAbnDCZKgaAJS49IQKUNrIJcLIAPStyIaOSAAOkC72igJQAIR40jJgmT0odjpevv6YYMFhEdEhsXF0UzNojRnZuTYyCGZYTaXlGo169Y3NbR0A7gAWmcjA2MDuAj4A-EA
But, if I don't use function, it's ok:
```
type S = Pick<Test, 'a' | 'b'>;
type T = keyof S
const val: T[] = []
arr = val // why it is ok?
```
|
Why keyof a picked type is not picked keys in function generic? |
|typescript| |
I can't change the width of the div that wraps the img. The div has the classes slick-slide and slick-active. The 2 imgs shown are inside the div, while the 3rd is outside. The sliderContainer class affects the next button, while I can't even see the prev button. [The current situation](https://i.stack.imgur.com/ohY11.png)
```
import { useEffect, useState } from "react";
import { Link } from "react-router-dom";
import styles from "./Trending.module.css";
import Slider from "react-slick";
import "slick-carousel/slick/slick.css";
import "slick-carousel/slick/slick-theme.css";
const token = `${process.env.REACT_APP_TOKEN}`;
export default function Trending({ setId }) {
const [result, setResult] = useState([]);
const [config, setConfig] = useState({});
const [loading, setLoading] = useState(false); // needed for the setLoading calls below
useEffect(() => {
trendingMovieDay();
}, []);
async function trendingMovieDay() {
setLoading(true);
try {
const response = await fetch(
"https://api.themoviedb.org/3/configuration",
{
headers: {
Authorization: token,
},
}
);
const result = await response.json();
const res = await fetch(
`https://api.themoviedb.org/3/trending/movie/day`,
{
headers: {
Authorization: token,
},
}
);
const data = await res.json();
// File path used in getting poster img
setConfig({
baseURL: result.images.secure_base_url,
posterSize: result.images.still_sizes[2],
backdropSize: result.images.backdrop_sizes[3],
});
setResult(data.results);
} catch (error) {
console.log("Error fetching trending movies of day data:", error);
} finally {
setLoading(false);
}
}
const settings = {
dots: true,
infinite: true,
speed: 500,
slidesToShow: 3,
slidesToScroll: 3,
};
return (
<div className={styles.container}>
<h2 className={styles.trending}>Trending</h2>
<div className={styles.sliderContainer}>
<Slider {...settings}>
{result.map((el) => {
return (
<div key={el.id} style={{ display: "flex" }}>
<Link
to={`/movies/${el.id}`}
style={{ display: "block", width: "100px" }}
>
<img
key={el.id}
className={styles.movieImg}
src={`${config.baseURL}${config.posterSize}${el.poster_path}`}
alt={el.title}
style={{
width: "150px",
height: "auto",
borderRadius: "8px",
}}
/>
</Link>
</div>
);
})}
</Slider>
</div>
</div>
);
}
```
I tried setting a shorter width on those classes in CSS, but it doesn't work. |
How to create a route on a web map (Flask) using folium and osmnx? |
|python|flask|networkx|folium|osmnx| |
I copied the following code from a tutorial, but still couldn't figure out whether I made a mistake somewhere or whether it has to do with the browser support.
<html>
<head>
<script type="text/javascript">
function loadXMLDoc(dname)
{
if(window.XMLHttpRequest)
{
xhttp = new XMLHttpRequest();
}
else
{
xhttp = new ActiveXObject("Microsoft.XMLHTTP");
}
xhttp.open("GET", dname, false);
xhttp.send();
return xhttp.responseXML;
}
function change(text)
{
var xmlDoc = loadXMLDoc("dom.xml");
var x = xmlDoc.getElementsByTagName("title")[0].childNodes[0];
x.nodeValue = text;
var y = xmlDoc.getElementsByTagName("title");
for(i=0; i<y.length; i++)
{
document.write(y[i].childNodes[0].nodeValue+"<br />");
}
}
function remove(node)
{
xmlDoc = loadXMLDoc("dom.xml");
var y = xmlDoc.getElementsByTagName(node)[0];
xmlDoc.documentElement.removeChild(y);
alert("The element "+node+" has been removed!");
}
function prove(u)
{
var xmlDoc = loadXMLDoc("dom.xml");
var x = xmlDoc.getElementsByTagName(u);
for (i=0; i<x.length; i++)
{
document.write(x[i].childNodes[0].nodeValue);
document.write("<br />");
}
}
</script>
</head>
<body>
<input type="button" value="remove" onclick="remove('book')" />
<input type="button" value="prove it" onclick="prove('book')" />
</body>
</html>
Update
---
Here's an XML file that may help:
<?xml version="1.0" encoding="ISO-8859-1"?>
<bookstore>
<book category="cooking">
<title lang="en">Everyday Italian</title>
<author>Giada</author>
<year>2005</year>
<price>30.00</price>
</book>
<book category="cooking">
<title lang="en">Book 2</title>
<author>Giada</author>
<year>2005</year>
<price>30.00</price>
</book>
<book category="cooking">
<title lang="en">Book 3</title>
<author>Giada</author>
<year>2005</year>
<price>30.00</price>
</book>
</bookstore> |
How do I fix error in visual studio code of submodules not opening/cloning correctly? |
You can use [eval][1] to set variables in a recipe. From [this answer here][2], I understand that you can even create pseudo local variables that way, by automatically prefixing them with the target. This is achieved by prefixing the variable with `$@_`.
```Makefile
mytarget:
$(eval $@_foo = bar)
@echo mytarget: $($@_foo)
other: mytarget
@echo other: $($@_foo)
```
Now we can see that this works as intended: `mytarget` sees the variable, while `other` can't.
```bash
$ make other
mytarget: bar
other:
```
[1]: https://www.gnu.org/software/make/manual/html_node/Eval-Function.html
[2]: https://stackoverflow.com/a/74742720/9208887 |
I am having a problem when I want to encode data like ttsjson. Is there any way to get the data so that I can encode it?
[enter image description here](https://i.stack.imgur.com/pfM9K.png)
Please tell me what I need to do.
I tried decoding, but the code often crashes. Is there any way to see how the web page encodes it? |
Separation of Students and Users in NestJS Microservice architecture |
|node.js|architecture|microservices|nestjs-microservice| |
I am trying to evaluate CNN model using two different approaches:
1.
model.evaluate(test_data)
In this case I get 79% accuracy score: [1.2163524627685547, 0.7924528121948242]
2.
I want to get the actual prediction values and use scikit-learn metrics to get accuracy score:
test_prediction=model.predict(test_data)
test_prediction=np.argmax(test_prediction, axis=1)
y_test = np.concatenate([y_batch for X_batch, y_batch in test_data])
metrics.accuracy_score(test_prediction,y_test)
The accuracy score is 21% in this case.
Why is there a difference, and which way is more reliable?
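One common cause of such a gap (an assumption worth checking, since it is not visible from the snippet) is that `test_data` is a shuffled dataset that reshuffles on each iteration, so the labels gathered in the second pass no longer line up with the predictions from the first pass; note also that `accuracy_score` expects `(y_true, y_pred)` in that order. A toy sketch of how misalignment alone crushes the score:

```python
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.integers(0, 5, size=10_000)   # hypothetical labels for 5 classes
y_pred = y_true.copy()                     # pretend the model predicts perfectly

aligned = np.mean(y_pred == y_true)                       # labels in the original order
misaligned = np.mean(y_pred == rng.permutation(y_true))   # labels re-collected in a different order

print(aligned)      # 1.0
print(misaligned)   # roughly 0.2, i.e. chance level for 5 uniform classes
```

If this is the cause, collecting `(x, y)` pairs in a single pass over the dataset (or disabling shuffling for evaluation) should make the two accuracy figures agree.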
|
Difference between model.evaluate and metrics.accuracy_score |
|machine-learning|deep-learning|conv-neural-network|metrics|evaluation| |
I have a spring boot application where I get request text and files as form-data
If I send text data in code format, I get this error
```
JSON parse error: Unexpected character ('b' (code 98)): was expecting comma to separate Object entries
at org.springframework.http.converter.json.AbstractJackson2HttpMessageConverter.readJavaType(AbstractJackson2HttpMessageConverter.java:406) ~[spring-web-6.0.10.jar:6.0.10]
```
The request value is like this
```
{
"content": " #backgroundImage {
border: none;
height: 100%;
pointer-events: none;
position: fixed;
top: 0;
visibility: hidden;
width: 100%;
}
[show-background-image] #backgroundImage {
visibility: visible;
}
</style>
</head>
<body>
<iframe id="backgroundImage" src=""></iframe>
<ntp-app></ntp-app>
<script type="module" src="new_tab_page.js"></script>
<link rel="stylesheet" href="chrome://resources/css/text_defaults_md.css">
<link rel="stylesheet" href="chrome://theme/colors.css?sets=ui,chrome">
<link rel="stylesheet" href="shared_vars.css">
</body>
</html>",
"title": "test1"
}
```
At first I got
```
JSON parse error: Illegal unquoted character ((CTRL-CHAR, code 10)): has to be escaped using backslash to be included in string value
```
But this one is fixed after using
```
jackson:
parser:
allow-unquoted-control-chars: true
```
in application.yml
Is the client side responsible for producing valid JSON, or should the server side be expected to parse data like this? |
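For what it's worth, valid JSON cannot contain raw newlines inside string values; a client that serializes with a proper JSON encoder escapes them automatically, as this sketch shows:

```python
import json

# Multi-line text, like the CSS/HTML in the request body above
content = """#backgroundImage {
  border: none;
}"""

payload = json.dumps({"content": content, "title": "test1"})

print("\\n" in payload)                            # True: newlines are escaped in the wire format
print(json.loads(payload)["content"] == content)   # True: the text round-trips losslessly
```

So the usual answer is that the client should serialize with a real JSON encoder rather than building the string by hand, which also makes the `allow-unquoted-control-chars` workaround unnecessary.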
Spring boot JSON parse error: Unexpected character error |
|javascript|json|spring-boot| |
So I am basically very new to Assembly in general. I was trying to write this simple code where I need to sort numbers in ascending order from the source data and write them into the destination data.
Whenever I run it, it shows: "executables selected for download on to the following processors doesn't exist..."
Here is the code; maybe something is wrong with it:
```
.global main
main:
ldr r0, =src_data // load the input data into r0
ldr r1, =dst_data // load the output data into r1
mov r2, #32 // word is 32
first_loop:
ldr r3, =src_data // load the input data into r3
ldr r4, [r3], #4 // load the 1st elem of input data into r4
mov r5, r0 // move to the second loop
second_loop:
ldr r6, [r1] // load the value of output data into r6
cmp r4, r6 // compare the current element eith the value in output data
ble skip_swap // if the current element is less than or equal, continue
// swap the elements:
str r6, [r1]
str r4, [r1, #4]
skip_swap:
// move to the next elem in output data
add r1, r1, #4
// decrement by one remaining elements (r2 was 32)
subs r2, r2, #1
// check if we have reached the end of input_data
cmp r3, r0
// if not, continue to the second loop
bne second_loop
// if the first loop counter is not zero, continue to the first loop
subs r2, r2, #1
bne first_loop
// exit
mov r7, #0x1
svc 0 // software interrupt
.data
.align 4
src_data: .word 2, 0, -7, -1, 3, 8, -4, 10
.word -9, -16, 15, 13, 1, 4, -3, 14
.word -8, -10, -15, 6, -13, -5, 9, 12
.word -11, -14, -6, 11, 5, 7, -2, -12
// should list the sorted integers of all 32 input data
// like if saying: .space 32
dst_data: .word 0, 0, 0, 0, 0, 0, 0, 0
.word 0, 0, 0, 0, 0, 0, 0, 0
.word 0, 0, 0, 0, 0, 0, 0, 0
.word 0, 0, 0, 0, 0, 0, 0, 0
```
Basically, the registers were all blank in the Vitis IDE, even though I think it is supposed to be working. |
How can i change the width of the slider in react slick? |
|reactjs|react-slick| |
## Odoo
For the `auto_database_backup` module:
docker exec -it <odoo-container-id> /bin/bash
Then install the required packages: pip install dropbox, pip install pyncclient, pip install nextcloud-api-wrapper, pip install boto3, pip install paramiko |
|c++|function|parsing|lambda|c++17| |
I encountered an error:
PS C:\Users\91789\Desktop\Underwater-Image-Enhancement\WPFNet> python Underwater_test.py C:\Users\91789\Desktop\input C:\Users\91789\Desktop\New folder ./checkpoint/generator_600.pth
C:\Users\91789\AppData\Local\Programs\Python\Python312\python.exe: can't open file 'C:\Users\91789\Desktop\Underwater-Image-Enhancement\WPFNet\Underwater_test.py': [Errno 2] No such file or directory
How can I solve it and run the project? This is the GitHub link: https://github.com/LiuShiBen/WPFNet.git
Encountering Errors Running GitHub Project: Wavelet-pixel domain progressive fusion network for underwater image enhancement - Seeking Assistance |
|python|github| |
There are several aspects to your question which I cannot make out, so here are some general thoughts:
* if performance is an issue, you ***must*** profile your code so that you don't waste your time optimising the wrong parts
* you don't necessarily have to convert your *entire* video frame to `PIL.Image`; if you are only annotating one or two corners, you could potentially pass those regions on their own
* you don't necessarily have to pass any of your OpenCV frame to PIL if you choose to annotate onto a, say, solid black background
* you don't have to create a new `PIL.Image` for every frame - you could create one at the start of your video then fill it with black at the start of each frame and then draw text onto it - you won't need to create a new drawing context then for each frame
* you don't need to convert BGR->RGB->BGR, you could just specify your PIL drawing in BGR colours
* you can just get the font ***once*** at the start of your video instead of for every frame |
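A minimal sketch of the reuse pattern from the last few points (assuming Pillow and NumPy are installed; the sizes and names are illustrative, not from the original code):

```python
import numpy as np
from PIL import Image, ImageDraw, ImageFont

font = ImageFont.load_default()        # get the font ONCE, before the video loop
overlay = Image.new("RGB", (320, 40))  # one small text strip, not the whole frame
draw = ImageDraw.Draw(overlay)         # one drawing context, reused for every frame

for frame_no in range(3):              # stand-in for the per-frame video loop
    draw.rectangle([0, 0, 320, 40], fill=(0, 0, 0))   # clear instead of re-creating
    draw.text((5, 5), f"frame {frame_no}", font=font, fill=(255, 255, 255))
    strip = np.asarray(overlay)        # hand this strip back to the OpenCV side
    # frame[0:40, 0:320] = strip       # e.g. drop it onto the BGR frame directly

print(strip.shape)   # (40, 320, 3)
```

Since the fill colors are chosen by you, they can be written directly in BGR order, which removes the BGR to RGB to BGR round trip entirely.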
I have a custom layout that lays views from left to right and if horizontal space runs out then it lays the next views below. If I put a basic text element that says hello inside this custom layout then the view occupies all the horizontal space. How can I adjust my setup so the custom layout only occupies the needed horizontal space.
```
struct ContentView: View {
var body: some View {
CustomLayout {
Text("Hello")
}
.background(.blue)
}
}
struct CustomLayout: Layout {
var alignment: Alignment = .leading
var spacing: CGFloat = 0
    func sizeThatFits(proposal: ProposedViewSize, subviews: Subviews, cache: inout ()) -> CGSize {
        let maxWidth = proposal.width ?? 0
        var height: CGFloat = 0
        let rows = generateRows(maxWidth, proposal, subviews)
        for (index, row) in rows.enumerated() {
            if index == (rows.count - 1) {
                height += row.maxHeight(proposal)
            } else {
                height += row.maxHeight(proposal) + spacing
            }
        }
        return .init(width: maxWidth, height: height)
    }

    func placeSubviews(in bounds: CGRect, proposal: ProposedViewSize, subviews: Subviews, cache: inout ()) {
        var origin = bounds.origin
        let maxWidth = bounds.width
        let rows = generateRows(maxWidth, proposal, subviews)
        for row in rows {
            let leading: CGFloat = bounds.maxX - maxWidth
            let trailing = bounds.maxX - (row.reduce(CGFloat.zero) { partialResult, view in
                let width = view.sizeThatFits(proposal).width
                if view == row.last {
                    return partialResult + width
                }
                return partialResult + width + spacing
            })
            let center = (trailing + leading) / 2
            origin.x = (alignment == .leading ? leading : alignment == .trailing ? trailing : center)
            for view in row {
                let viewSize = view.sizeThatFits(proposal)
                view.place(at: origin, proposal: proposal)
                origin.x += (viewSize.width + spacing)
            }
            origin.y += (row.maxHeight(proposal) + spacing)
        }
    }

    func generateRows(_ maxWidth: CGFloat, _ proposal: ProposedViewSize, _ subviews: Subviews) -> [[LayoutSubviews.Element]] {
        var row: [LayoutSubviews.Element] = []
        var rows: [[LayoutSubviews.Element]] = []
        var origin = CGRect.zero.origin
        for view in subviews {
            let viewSize = view.sizeThatFits(proposal)
            if (origin.x + viewSize.width + spacing) > maxWidth {
                rows.append(row)
                row.removeAll()
                origin.x = 0
                row.append(view)
                origin.x += (viewSize.width + spacing)
            } else {
                row.append(view)
                origin.x += (viewSize.width + spacing)
            }
        }
        if !row.isEmpty {
            rows.append(row)
            row.removeAll()
        }
        return rows
    }
}

extension [LayoutSubviews.Element] {
    func maxHeight(_ proposal: ProposedViewSize) -> CGFloat {
        return self.compactMap { view in
            return view.sizeThatFits(proposal).height
        }.max() ?? 0
    }
}
``` |
Aren't you meant to be passing a **blob-URL** for the image instead of just blob-data?
Because it doesn't look like you're converting the blob to a URL with this...
    canvas.toBlob(function (blob) {
        jQuery.ajax({
            url: "//domain.com/wp-admin/admin-ajax.php",
            type: "POST",
            data: {action: "addblobtodb", image: blob},
            success: function (id) {
                console.log("Successfully inserted into DB: " + id);
            }
        });
    });
So I would try this...
    canvas.toBlob((blob) => {
        let Url = URL.createObjectURL(blob);
        jQuery.ajax({
            url: "//domain.com/wp-admin/admin-ajax.php",
            type: "POST",
            data: {action: "addblobtodb", image: Url}, // <-- image blob URL passed here
            success: function (id) {
                console.log("Successfully inserted into DB: " + id);
            }
        });
    }, 'image/png', 1); // <-- specify the type & quality of the image here
I hope this helps...
|
I am working on a **Spring Boot** project. For the production environment, I am using **SQL Server** and **Azure App Service**. I followed the following steps to configure my deployment.
My database URL is:
```jdbc:sqlserver://spring-sql-server.database.windows.net:1433;database=<database-name>;user=<username>@spring-sql-server;password=<password>;encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;```
1. Added all my secrets to the GitHub Repository Secrets.
[![enter image description here][1]][1]
2. Added environment variable configuration to my application.properties file.
[![enter image description here][2]][2]
When I build my project using GitHub Actions, I get an error:
```Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in class path resource [org/springframework/boot/autoconfigure/orm/jpa/HibernateJpaConfiguration.class]: [PersistenceUnit: default] Unable to build Hibernate SessionFactory; nested exception is java.lang.RuntimeException: Driver com.microsoft.sqlserver.jdbc.SQLServerDriver claims to not accept jdbcUrl, ${SPRING_DATASOURCE_URL}```
I have added the URL properly but am still getting this error. (I am not sure, but is it possible that the error might be because of a '#' in my password?)
[1]: https://i.stack.imgur.com/zzFcn.png
[2]: https://i.stack.imgur.com/nI92D.png |
Driver com.microsoft.sqlserver.jdbc.SQLServerDriver claims to not accept jdbcUrl, ${SPRING_DATASOURCE_URL}: GitHub Actions |
|sql-server|spring-boot|azure-web-app-service|github-actions|azure-sql-database| |
|rust|cors|network-programming|actix-web| |
Another approach is to use HTTP Live Streaming (HLS). The web server is simply a standard httpd server; the video/audio is preprocessed on the server side into a set of bitrate playlists.
The heavy-lifting logic is on the client side, which retrieves the media as a series of roughly 6-second segments based on a bandwidth-appropriate playlist ... bonus points for a client which auto-calibrates for optimum bandwidth.
So:
- use files, not memory
- there are open-source HLS segmenters (e.g. ffmpeg)
|
I need to calculate the matrix exponential with a Taylor series using MPI. The matrix is small, 3 x 3 for example.
Here is what I have so far:
```
vector<vector<double>> matrixExp(const vector<vector<double>>& A) {
    int n = A.size();
    vector<vector<double>> E(n, vector<double>(n, 0));
    vector<vector<double>> T(n, vector<double>(n, 0));
    vector<vector<double>> localE(n, vector<double>(n, 0));
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    for (int i = 0; i < n; i++)
        E[i][i] = 1;
    for (int i = 0; i < n; i++)
        localE[i][i] = 0;
    T = E;
    for (int j = 1; j <= rank; j++)
    {
        T = matrixMult(T, A);
        T = matrixDiv(T, j);
    }
    localE = T;
    for (int i = rank + 1; i <= N; i += size) {
        for (int j = i; j < i + size; j++) {
            T = matrixMult(T, A);
            T = matrixDiv(T, j);
        }
        localE = matrixSum(localE, T);
    }
    MPI_Reduce(localE[0].data(), E[0].data(), n, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    MPI_Reduce(localE[1].data(), E[1].data(), n, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    MPI_Reduce(localE[2].data(), E[2].data(), n, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    return E;
}
```
But I don't know how to optimize this:
```
for (int j = i; j < i + size; j++) {
    T = matrixMult(T, A);
    T = matrixDiv(T, j);
}
```
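The serial computation being parallelized above is just the Taylor recurrence T_k = T_{k-1} · A / k, accumulated into E. As a sanity check of that recurrence alone (no MPI, plain nested lists, purely for illustration), a minimal pure-Python sketch might look like:

```python
import math

def mat_mult(A, B):
    """Multiply two square matrices stored as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matrix_exp(A, terms=20):
    """Taylor-series matrix exponential: E = sum_{k=0}^{terms-1} A^k / k!."""
    n = len(A)
    E = [[float(i == j) for j in range(n)] for i in range(n)]  # identity (k = 0 term)
    T = [row[:] for row in E]                                  # running term A^k / k!
    for k in range(1, terms):
        T = mat_mult(T, A)
        T = [[T[i][j] / k for j in range(n)] for i in range(n)]  # divide by k: now A^k / k!
        E = [[E[i][j] + T[i][j] for j in range(n)] for i in range(n)]
    return E

# exp of the identity matrix is e * I
E = matrix_exp([[1.0, 0.0], [0.0, 1.0]])
print(E[0][0])  # close to math.e
```

This also shows why the inner loop is hard to optimize as written: each term depends on the previous one, so each rank must either recompute its starting term from scratch (as the code does) or receive it from a neighbor.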
[enter image description here](https://i.stack.imgur.com/MYK1f.png) Maybe it's impossible with this implementation |
Create one entity for multiple tables? |
I know how to work with the `Data` type in SwiftData (or Core Data). But the one question I feel I've missed is: what is the correct way to handle memory management in different cases?
For example, I have the model for image Data:
```swift
@Model class ImageModel {
    @Attribute(.externalStorage) let data: Data

    var uiImage: UIImage { .init(data: data) ?? .init() }

    init(data: Data) {
        self.data = data
    }
}
```
and the Item model:
```swift
@Model class Item {
    @Relationship(deleteRule: .cascade)
    var images: [ImageModel] = []

    init() {}
}
```
The problem started when I tried to show items in a list, because it's very inefficient to use the computed `var uiImage: UIImage { .init(data: data) ?? .init() }` for every image in every item I scroll past. I considered several options, such as displaying a small proxy image in ItemView, or lazy loading.
How do you achieve continuous, smooth image loading and display in the list? |
Memory management for image data storing and retrieving with SwiftData (or CoreData) |
|ios|arrays|swift|xcode|swift-data| |
null |
Cannot edit functions.php and any of the website pages anymore on localhost |
{"Voters":[{"Id":20259506,"DisplayName":"Chamalka Jayashan"}],"DeleteType":1} |
I recommend wrapping the Editor inside `ClientOnly` (from remix-utils). |
CSS "position: fixed" respects parent's margin property. Why? |
I'm getting this error:
> Entity Framework Core 6.0.25 initialized 'GerenciadorContext' using provider 'Microsoft.EntityFrameworkCore.SqlServer:6.0.25' with options: None
>
> fail: Microsoft.EntityFrameworkCore.Database.Connection[20004]
> An error occurred using the connection to database 'GerenciadorDeProjetos' on server 'DESKTOP-707QCVQ'.
> fail: Microsoft.EntityFrameworkCore.Query[10100]
> An exception occurred while iterating over the results of a query for context type 'GerenciadorDeProjetos.Data.GerenciadorContext'.
My `program.cs`:
```
var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
var connectionString = builder.Configuration.GetConnectionString("Default");
var myAllowSpecificOrigins = "_var myAllowSpecificOrigins";

builder.Services.AddDbContext<GerenciadorContext>(opts =>
{
    opts.UseSqlServer(connectionString);
});

builder.Services.AddCors(opts =>
{
    opts.AddPolicy(name: myAllowSpecificOrigins, builder =>
    {
        builder.WithOrigins("http://127.0.0.1:5500")
            .AllowAnyOrigin()
            .AllowAnyHeader();
    });
});

builder.Services.AddIdentity<Usuario, IdentityRole>()
    .AddEntityFrameworkStores<GerenciadorContext>()
    .AddDefaultTokenProviders();

builder.Services.AddTransient<IUserDao, UserDao>();
builder.Services.AddControllers();

// Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
builder.Services.AddAutoMapper(AppDomain.CurrentDomain.GetAssemblies());

var app = builder.Build();

// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();
app.UseCors(myAllowSpecificOrigins);
app.UseAuthorization();
app.MapControllers();
app.Run();
```
My connection string:
"Data Source=DESKTOP-707QCVQ;Initial Catalog=GerenciadorDeProjetos;Integrated Security=True;Trust Server Certificate=True" |
I would like to build a very basic daily sales report. I am trying to decide how to structure the database to best accomplish this. Here is a use case for it:
- On Jan 5, 2011, Provider A makes $500 total off of its products
- On Jan 5 2011, Provider A makes $200 total off of its products
- On Jan 6, 2011, Provider B makes $450 total off of its products
- On Jan 6, Provider B makes $75 total off of its products
The current structure I have is:
`PROVIDER table`
- pk
- provider
`PRODUCT table`
- provider (FK)
- product
- start_date (sales)
- end_date
The `start_date` and `end_date` are when sales on the product may occur. It is only used for reference, and does not really affect anything else.
`SALES table`
- product (FK)
- sales
- **How to store date**?
`sales` would be the daily proceed ($) for sales from that product.
I'm not quite sure how to store the sales. Sales would only be calculated as a daily sum for each product. What would be the best way to structure the `SALES` table?
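For what it's worth, one common way to sketch this (just an illustration, not the only possible design — the table and column names are made up for the example) is one row per product per day with a plain date column, letting the daily report be a `GROUP BY`. A quick sqlite3 sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# One row per product per day; the date lives directly on the sales row.
con.execute("""
    CREATE TABLE sales (
        product_id INTEGER NOT NULL,
        sale_date  TEXT    NOT NULL,   -- ISO-8601 'YYYY-MM-DD' sorts correctly as text
        amount     REAL    NOT NULL,
        PRIMARY KEY (product_id, sale_date)
    )
""")
con.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [(1, "2011-01-05", 500.0),
     (2, "2011-01-05", 200.0),
     (3, "2011-01-06", 450.0),
     (4, "2011-01-06", 75.0)],
)

# Daily report: total proceeds per day.
report = con.execute(
    "SELECT sale_date, SUM(amount) FROM sales GROUP BY sale_date ORDER BY sale_date"
).fetchall()
print(report)  # [('2011-01-05', 700.0), ('2011-01-06', 525.0)]
```

The composite primary key also enforces the "one daily sum per product" rule from the question.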
|
Structuring a daily sales database |
I'm developing a library for obtaining information from files in the portable executable format.
It is important for me that the library works in both 32-bit and 64-bit modes.
During testing, I used hard-coded paths, for example L"C:\\Windows\\System32\\ntoskrnl.exe", to display a list of exported functions.
In Windows 10, I discovered that in administrator mode, my 32-bit application returns error code 2 (ERROR_FILE_NOT_FOUND) when trying to open an existing file, while the 64-bit application successfully opens this file.
In Windows 7 on another computer, both applications successfully open this file.
In addition, I discovered that many applications on Windows 10, in particular Total Commander, winhex, x32dbg, ollydbg, do not see many files in the System32 folder.
I can read them from disk by jumping around the file system; is there any easier way?
32-bit applications do not display some files in Windows 10 |
That mention of task placement is specifically referring to the selected VPC subnets, which is definitely still a thing that you have to do. That answer is still completely correct. You have to get your Fargate task to communicate with the outside service through a NAT Gateway. The NAT Gateway's public IP is the IP that the database will see. |
null |
I'm reading the Nick Hodges' book "Coding in Delphi" to learn the Delphi Programming Language and I'm trying to understand the interface usage part in the book.
In a unit, I've put a simple interface:
    unit INameInterface;

    interface

    type
      IName = interface
        ['{CE5E1B61-6F44-472B-AE9E-54FF1CAE0D70}']
        function FirstName: string;
        function LastName: string;
      end;

    implementation

    end.
and in another unit, I've put the implementation of this interface, according to the book sample:
    unit INameImplementation;

    interface

    uses
      INameInterface;

    type
      TPerson = class(TInterfacedObject, IName)
      protected
        function FirstName: string;
        function LastName: string;
      end;

    implementation

    { TPerson }

    function TPerson.FirstName: string;
    begin
      Result := 'Fred';
    end;

    function TPerson.LastName: string;
    begin
      Result := 'Flinstone';
    end;

    end.
At this point, I've created a simple VCL form application in order to use the object I've created. The form code is this:
    unit main;

    interface

    uses
      Winapi.Windows, Winapi.Messages, System.SysUtils, System.Variants,
      System.Classes, Vcl.Graphics, Vcl.Controls, Vcl.Forms, Vcl.Dialogs,
      Vcl.StdCtrls, INameImplementation;

    type
      TfrmMain = class(TForm)
        lblFirtName: TLabel;
        lblLastName: TLabel;
        txtFirstName: TStaticText;
        txtLastName: TStaticText;
        btnGetName: TButton;
        procedure btnGetNameClick(Sender: TObject);
        procedure FormCreate(Sender: TObject);
      private
        Person: TPerson;
      public
        { Public declarations }
      end;

    var
      frmMain: TfrmMain;

    implementation

    {$R *.dfm}

    procedure TfrmMain.FormCreate(Sender: TObject);
    begin
      txtFirstName.Caption := '';
      txtLastName.Caption := '';
    end;

    procedure TfrmMain.btnGetNameClick(Sender: TObject);
    begin
      txtFirstName.Caption := ...
    end;

    end.
My question is this: how can I use the interface? The two functions are declared as protected so how can I access them from the form? Do I have to define them as public, or should I use the `INameInterface` interface unit?
I'm terribly confused about interfaces!!!
|
How to encode ttsJson data? |
|json| |
null |
[enter image description here][1]
My name is Eduardo; sorry for my bad English, it is not my native language.
I'm playing with Databricks DLT and I have a question.
When I work with a regular ETL job, I can fully manage the compute resource: I can turn it on/off. But when I create a DLT pipeline, the 'job compute' is always on.
Is there an explanation for this? Am I charged all the time, or only when the pipeline executes?
Best regards
I created different pipelines and read some resources, but I still can't find a direct answer.
[1]: https://i.stack.imgur.com/0llhW.png |
If the MDC Adapter you use is the one that uses `ThreadLocal`, e.g. `LogbackMDCAdapter`, then the filter you posted will work fine for virtual threads as well as for platform threads. If your virtual thread is suspended and then _continued_ on another _Carrier_ (platform) thread, its `ThreadLocal`s will be correctly transferred (at least, so Project Loom promises).
["Don't Cache Expensive Reusable Objects in Thread-Local Variables"](https://docs.oracle.com/en/java/javase/21/core/virtual-threads.html#GUID-68216B85-7B43-423E-91BA-11489B1ACA61) of Virtual Threads doc says:
> Virtual threads support thread-local variables just as platform threads do
However, the users of virtual threads are warned against excessive usage of `ThreadLocal`s on virtual threads (which is understandable, as these `ThreadLocal`s have to be transferred between the _Carrier_ threads of our virtual thread - this is a pretty rough picture of what is going on "under the hood").
Instead, Project Loom advises using `ScopedValue`. However, in the scope of your question: 1) the usage of `ScopedValue` should be initiated at the point of virtual-thread spawning, i.e. somewhere in the Servlet Container (if Tomcat is used, that would be a Connector); 2) a special `ScopedValue`-oriented `MDCAdapter` implementation should be used; and 3) `ScopedValue` is still a preview feature in Java 21. Some plans in that direction have been laid out in [How to propagating context through StructuredTaskScope by ScopedValue... how about the MDC ThreadContextMap in StructuredTaskScope?](https://stackoverflow.com/questions/77716273/how-to-propagating-context-through-structuredtaskscope-by-scopedvalue-by-the-wa). It seems to me that, as `ScopedValue` is designed, the efforts should be applied at two levels: the Servlet Container, where the virtual thread is spawned, and a special `MDCAdapter` implementation. More on the usage of `ScopedValue`s for MDC in [Logback: availability of MDCs in forks created inside a StructuredTaskScope](https://stackoverflow.com/questions/78142173/logback-availability-of-mdcs-in-forks-created-inside-a-structuredtaskscope/).
My name is Eduardo; sorry for my bad English, it is not my native language.
I'm playing with Databricks DLT and I have a question.
When I work with a regular ETL job, I can fully manage the compute resource: I can turn it on/off. But when I create a DLT pipeline, the 'job compute' is always on.
Is there an explanation for this? Am I charged all the time, or only when the pipeline executes?
Best regards
I created different pipelines and read some resources, but I still can't find a direct answer.
[console][1]
[1]: https://i.stack.imgur.com/mAhr6.png |
The [duckdb-wasm](https://www.npmjs.com/package/@duckdb/duckdb-wasm) npm module is big since it also comprises tests and different deployments. A minimal stripped-down version should be around 40 MB uncompressed / 7.3 MB after compression.
The [duckdb](https://www.npmjs.com/package/duckdb) module is also possibly an option.
Both duckdb and duckdb-wasm use the same underlying library; only the API is somewhat different, and there are different models (native on one side, a Wasm sandbox on the other). Both are in active development.
IIUC, you need to pass the [`centroid`][1] of the [`unary_union`][2] as the *origin* of the rotation :
```
out = gpd.GeoDataFrame(
geometry=gdf.rotate(
angle=-90, origin=list(gdf.unary_union.centroid.coords)[0]
)
)
```
***NB**: There is no need to [`explode`][3] the geometry, because you do not have [MultiPolygons][4].*
[![enter image description here][5]][5]
Used input (`gdf`) :
```
import geopandas as gpd
# download from Google Drive
input_poly_path = (
"C:/Users/Timeless/Downloads/"
"share-20240330T125957Z-001.zip!share"
)
gdf = gpd.read_file(input_poly_path, engine="pyogrio")
```
[1]: https://shapely.readthedocs.io/en/stable/reference/shapely.Polygon.html#shapely.Polygon.centroid
[2]: https://geopandas.org/en/stable/docs/reference/api/geopandas.GeoSeries.unary_union.html
[3]: https://geopandas.org/en/stable/docs/reference/api/geopandas.GeoDataFrame.explode.html#
[4]: https://shapely.readthedocs.io/en/latest/reference/shapely.MultiPolygon.html
[5]: https://i.stack.imgur.com/gxn11.png |
> [my home screen showing like this](https://i.stack.imgur.com/abwOE.jpg)
I have used React Navigation, and my status bar is rendered inside the background image, but not the bottom navigation button that appears on my Android device.
```
import { View, Text, StyleSheet, Image, Animated } from 'react-native';
import { useWindowDimensions, TouchableOpacity } from 'react-native';
import React, { FC, useEffect, useState } from 'react';

interface SliderSwiperProps {
    item: any;
}

const SliderSwiper: FC<SliderSwiperProps> = ({ item }) => {
    const { width } = useWindowDimensions();
    return (
        <Image source={item.image} style={styles.image} />
    );
};

export default SliderSwiper;

const styles = StyleSheet.create({
    image: {
        flex: 1,
        justifyContent: 'center',
        resizeMode: 'cover',
        width: '100%',
    },
});
```
[here is the image](https://i.stack.imgur.com/FJj0o.jpg) |
Remove the `@Component` annotation from your `@interface` class and add it to the `@Aspect` class. Then, modify your `@Around` annotation to: `@Around(value="@annotation(fully.qualified.package.here.ValidateUser)")`. I've removed the `execution(* *(..))` part, as otherwise the aspect would be applied to *every* (public) method, which is not what you want.
By the way, instead of creating a custom aspect to authenticate and authorize a user, I strongly suggest using the Spring Security module instead, and putting your custom auth logic in a custom PermissionEvaluator. Or, given you mention OAuth in your snippet, verify whether a Spring Security OAuth2 setup will suffice.
More info on https://www.baeldung.com/spring-security-create-new-custom-security-expression and https://www.baeldung.com/spring-security-oauth |
**I'll try to provide some more context:**
`ultralytics_crop_objects` is a list of about 20 numpy.ndarray, each representing a picture, e.g. of shape (59, 381, 3): [ultralytics_crop_objects[5]](https://i.stack.imgur.com/x6QvJ.png).
I started by passing a single picture out of the list to recognize:

    pipeline.recognize([ultralytics_crop_objects[5]])
    --> ji856931

The result is "ji856931", so not all characters were detected.
But when I pass the whole list of pictures and look at the result for the 6th picture, the result is different. See: [Different Results][1]

    results = pipeline.recognize(ultralytics_crop_objects)
    results[5] --> ji8569317076
I don't get it at all, so I would be super happy if someone could provide a hint. My only explanation would be that keras_ocr uses a different detection threshold for a single picture than for a list of more than one picture. Could that be?
I checked a couple of times whether I accidentally used another pipeline or whether the input pictures are different. But they aren't. And I googled a lot.
Here's the complete code:
    import keras_ocr

    pipeline = keras_ocr.pipeline.Pipeline()
    results = pipeline.recognize([ultralytics_crop_objects[5]])
    print(results)
    results = pipeline.recognize(ultralytics_crop_objects)
    print(results[5])
|
Here's a better and more general way to handle this. In addition, it won't strip out "#" that are in the middle of a string, as yours would. This relies on the fact that a ### header must be followed by a space.
```
function render(md) {
    let code = "";
    let mdLines = md.split('\n');
    for (let i = 0; i < mdLines.length; i++) {
        if (mdLines[i][0] == "#") {
            // We have a header. How many are there?
            let s = mdLines[i].indexOf(" ")
            if (mdLines[i].slice(0, s) == '#'.repeat(s))
                code += "<h" + s + ">" + mdLines[i].slice(s + 1) + "</h" + s + ">";
            else
                code += mdLines[i];
        }
        else
            code += mdLines[i];
    };
    return code;
}

let text1 = "## he#llo \n there \n # yooo"
let text2 = "# he#llo \n there \n ## yooo"

console.log(render(text1));
console.log(render(text2));
```
Output:
```
timr@Tims-NUC:~/src$ node x.js
<h2>he#llo </h2> there # yooo
<h1>he#llo </h1> there ## yooo
timr@Tims-NUC:~/src$
``` |
{"Voters":[{"Id":1883316,"DisplayName":"Tim Roberts"}],"DeleteType":1} |
I found the solution that I was looking for. First, use a watch method to detect changes on the variable you are storing data in, like this:
    watch: {
        input: function () {
            if (isLocalStorage() /* function to detect if localstorage is supported */) {
                localStorage.setItem('storedData', this.input)
            }
        }
    }
This will update the stored value whenever the user adds new input.
Then assign the stored value back to the variable like this:
app.input = localStorage.getItem('storedData');
And that's it :) |
{"Voters":[{"Id":23825920,"DisplayName":"AlGM93"}],"DeleteType":1} |
The easiest way is to use [Howard Hinnant's free, open-source, header-only date.h][1]:
    #include "date/date.h"
    #include <iostream>
    #include <string>

    int
    main()
    {
        using namespace date;
        using namespace std::chrono;
        auto time = system_clock::now();
        std::string s = format("%FT%T", floor<seconds>(time));
        std::cout << s << '\n';
    }
This library is the prototype for the new C++20, chrono extensions. Though in C++20, the details of the formatting may change slightly to bring it in line with the expected C++20 `fmt` library.
C++20 version
---
    #include <chrono>
    #include <iostream>
    #include <format>
    #include <string>

    int
    main()
    {
        auto time = std::chrono::system_clock::now();
        std::string s = std::format("{:%FT%T}",
            std::chrono::floor<std::chrono::seconds>(time));
        std::cout << s << '\n';
    }
[Demo.][2]
[1]: https://github.com/HowardHinnant/date/blob/master/include/date/date.h
[2]: https://wandbox.org/permlink/XdNPuC5pYASfqTE1 |
I am creating a physics simulation engine in C++ and SFML. I encountered an issue where I want to draw a vector of objects; however, `draw` does not draw the first object from the vector. Drawing the subsequent objects works fine. Has anybody else had any similar issues?
`main.cpp`:
```
#include "../include/engine.hpp"

int main()
{
    auto engine = new Engine();
    while (engine->isRunning())
    {
        engine->update(0.25f);
        engine->draw();
    }
}
```
`engine.cpp`:
```
#include "../../include/engine.hpp"

Engine::Engine()
    : window("Collision Engine")
{
    generateObjects(totalObjects);
}

void Engine::update(float dt)
{
    window.update();
    releaseObject(dt);
    applyConstraint();
    applyGravity();
    for (int i = 0; i < objectReleaseCount; i++)
        objects[i].updatePosition(dt);
}

void Engine::draw()
{
    window.beginDraw();
    for (int i = 0; i < objectReleaseCount; i++)
        window.draw(objects[i].shape);
    window.endDraw();
}

bool Engine::isRunning() const
{
    return window.isOpen();
}

void Engine::generateObjects(int objectsCount)
{
    for (int i = 0; i < objectsCount; i++)
    {
        Sphere object;
        object.radius = 40.0;
        object.color.red = 255;
        object.color.green = 150;
        object.color.blue = 0;
        object.color.opacity = 255;
        objects.push_back(object);
    }
}

void Engine::releaseObject(float dt)
{
    totalTime += dt;
    if (totalTime == releaseTime)
        if (objectReleaseCount < totalObjects)
        {
            objectReleaseCount++;
            totalTime = 0;
        }
}

void Engine::applyGravity()
{
    for (int i = 0; i < objectReleaseCount; i++)
        objects[i].accelerate(gravity);
}

void Engine::applyConstraint()
{
    for (int i = 0; i < objectReleaseCount; i++)
    {
        sf::Vector2f position = objects[i].position.current;
        sf::Vector2f size(objects[i].radius * 2, objects[i].radius * 2);

        // Left border
        if (position.x - size.x < 0)
            position.x = size.x;
        // Right border
        if (position.x + size.x > window.screenWidth)
            position.x = window.screenWidth - size.x;
        // Top border
        if (position.y - size.y < 0)
            position.y = -size.y;
        // Bottom border
        if (position.y + size.y > window.screenHeight)
            position.y = window.screenHeight - size.y;

        objects[i].position.current = position;
    }
}
```
`engine.hpp`:
```
#ifndef ENGINE_HPP
#define ENGINE_HPP

#include <SFML/Graphics.hpp>
#include <math.h>

#include "window.hpp"
#include "workingDirectory.hpp"
#include "sphere.hpp"

class Engine
{
public:
    Engine();
    void update(float dt);
    void draw();
    bool isRunning() const;
    void generateObjects(int count);
    void releaseObject(float dt);
    void applyGravity();
    void applyConstraint();

private:
    Window window;
    WorkingDirectory workingDir;
    std::vector<Sphere> objects;
    sf::Vector2f gravity = {-1.0f, 1.0f};
    int objectReleaseCount = 1;
    int totalObjects = 2;
    float totalTime = 0;
    float releaseTime = 40.0f;
};

#endif
```
`sphere.cpp`:
```
#include "../../include/sphere.hpp"

Sphere::Sphere()
{
    shape.setRadius(radius);
    shape.setFillColor(sf::Color(color.red,
                                 color.green,
                                 color.blue,
                                 color.opacity));
    setInitialPosition(position.initial);
}

void Sphere::updatePosition(float dt)
{
    velocity = position.current - position.previous;
    position.previous = position.current;
    position.current = position.current + velocity + acceleration * dt * dt;
    shape.setPosition(position.current);
    std::cout << "Position = x: " << position.current.x << "y: " << position.current.y << std::endl;
    acceleration = {};
}

void Sphere::setInitialPosition(sf::Vector2f newPosition)
{
    position.current = position.initial;
    position.previous = newPosition;
    shape.setPosition(newPosition);
}

void Sphere::accelerate(sf::Vector2f acc)
{
    acceleration += acc;
}
```
`sphere.hpp`:
```
#ifndef SPHERE_HPP
#define SPHERE_HPP

#include <iostream>
#include <SFML/Graphics.hpp>

#include "window.hpp"

class Sphere
{
public:
    Sphere();
    void updatePosition(float dt);
    void setInitialPosition(sf::Vector2f newPosition);
    void accelerate(sf::Vector2f acc);

    // Custom shape
    sf::CircleShape shape;
    int radius;

    struct
    {
        sf::Uint8 red = 255;
        sf::Uint8 green = 0;
        sf::Uint8 blue = 255;
        sf::Uint8 opacity = 255;
    } color;

    struct
    {
        sf::Vector2f initial {400.0f, 100.0f};
        sf::Vector2f current;
        sf::Vector2f previous;
    } position;

    sf::Vector2f velocity;
    sf::Vector2f acceleration;
};

#endif
```
I tried making an object inside the `draw()` function and it worked; however, it does not work when I access the `objects` variable declared inside the `.hpp` file.
How do you specify if storage should be local or session when using VueUse?
The docs don't seem to mention it: https://vueuse.org/core/usestorage/
How do you specify if storage should be local or session when using VueUse? |
Swift UI custom layout occupies all horizontal space |
|swift|swiftui| |
I am trying to figure out the next term in the following sequence but I'm stumped.
```
1, 5, 2, 10, 3, 15, ....
```
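For what it's worth, one common reading of such puzzles is two interleaved subsequences: the odd positions count 1, 2, 3, ... and each even position is five times the term just before it. A tiny sketch of that hypothesis (an assumption about the intended rule, not the only possible one):

```python
def interleaved(n):
    """First n terms of 1, 5, 2, 10, 3, 15, ...:
    odd positions count up (1, 2, 3, ...) and each even
    position is 5 times the term just before it."""
    terms = []
    k = 1
    while len(terms) < n:
        terms.append(k)          # odd position: the counter itself
        if len(terms) < n:
            terms.append(5 * k)  # even position: five times it
        k += 1
    return terms

print(interleaved(8))  # [1, 5, 2, 10, 3, 15, 4, 20]
```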
Can someone explain the pattern and hence what the next term should be? |
Sequences - Find the next term in the sequence |
|sequence|series|discrete-mathematics| |