I came across this question again about a year after I asked it XD. I have managed to get it to work in RTL:
1) Edit scheme > App Language > Arabic (or your RTL language or 'Right-to-Left Pseudolanguage')
If you only do this step, the RTL layout will appear correctly in the simulator but not on a real device (unless the device's language is RTL)
2) Set the locale language
This will force the app's preferred language to always remain your RTL language.
@main
struct AppName: App {
init() {
UserDefaults.standard.set(["ar"], forKey: "AppleLanguages")
}
var body: some Scene {
WindowGroup {
ContentView()
}
}
} |
Version 5.1.0 has been available since January 2024.
In a Spring Boot application, for instance, I just needed to override the property
<querydsl.version>5.1.0</querydsl.version>
to get rid of the warnings |
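For reference, this is roughly where that property override goes in a Spring Boot `pom.xml` (a sketch, assuming the parent BOM manages `querydsl.version`):

```xml
<properties>
    <!-- overrides the QueryDSL version managed by the parent BOM -->
    <querydsl.version>5.1.0</querydsl.version>
</properties>
```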
So I found two similar posts on Stack Overflow that helped me out, namely the following two:
[https://stackoverflow.com/questions/73000249/user-claims-is-empty-for-every-page-outside-of-areas-identity][1]
[https://stackoverflow.com/questions/59550789/how-do-i-set-up-authentication-authorisation-for-asp-net-core-3-0-with-pagemo][2]
So I ended up changing my Program.cs and adding a specific authorization scheme in my handler, and that was it:
builder.Services.AddAuthentication(options =>
{
options.DefaultAuthenticateScheme = IdentityConstants.ApplicationScheme;
options.DefaultChallengeScheme = IdentityConstants.ApplicationScheme;
}).AddIdentityServerJwt();
And:
[Authorize(AuthenticationSchemes = "Identity.Application")]
And that was it!
[1]: https://stackoverflow.com/questions/73000249/user-claims-is-empty-for-every-page-outside-of-areas-identity
[2]: https://stackoverflow.com/questions/59550789/how-do-i-set-up-authentication-authorisation-for-asp-net-core-3-0-with-pagemo |
Has anyone tried using Langchain's AI21 integration AI21SemanticTextSplitter?
There is a mention of it on [Langchain's Text Splitters Page](https://stackoverflow.com).
This is [its documentation](https://stackoverflow.com).
I tried the examples given there and it says `ImportError: cannot import name 'AI21SemanticTextSplitter' from 'langchain_ai21' (/usr/local/lib/python3.10/dist-packages/langchain_ai21/__init__.py)
`.
I have installed the required package (`pip install langchain-ai21`). It was suggested to install llama index and update langchain, which I have done.
I was wondering if it has been deprecated, or whether it has just been moved in a package update?
Any help would be appreciated. |
I have a button called EXPORT, and I need to create a shortcut (Ctrl+Alt+E) that exports a file to Excel. When I press the shortcut, it should call the function "onCommandExecution", which checks the ID condition and executes "onExportToExcel". I have been debugging, and the ID check is never reached; in other words, the function onCommandExecution is never called. The problem has to be either in the view file or in the manifest file, but as far as I can see both are correct!
I also have this error in the console, but it remains even when I revert my changes. The error says:
Refused to execute script from **https://port8089-workspaces-ws-m88sh.eu10.applicationstudio.cloud.sap/com.volkswagen.ifdb.plg.hlplnk/Component-preload.js** because its MIME type ('text/html') is not executable, and strict MIME type checking is enabled.
Does someone have a solution please?
Thank you
//Controller.js file
onCommandExecution: function(oEvent) {
var mSId = oEvent.getSource().getId();
if (mSId === "CE_EXPORT") {
this.onExportToExcel();
}
},
onExportToExcel: function() {
var mSId = oEvent.getSource().getId();
var oSettings, oSheet, aProducts;
var aCols = [];
aCols = this.createColumnConfig();
aProducts = this.byId("messageTable").getModel("oMessageModel").getProperty('/');
var oDate = new Date();
var regex = new RegExp(",", "g");
var aDateArr = oDate.toISOString().split("T")[0].split("-");
var sDate = aDateArr.join().replace(regex, "");
var aTimeArr = oDate.toISOString().split("T")[1].split(":");
var sSeconds = oDate.toISOString().split("T")[1].split(":")[2].split(".")[0];
var sTime = aTimeArr[0] + aTimeArr[1] + sSeconds;
oSettings = {
workbook: {
columns: aCols
},
dataSource: aProducts,
fileName: "export_" + sDate + sTime
};
if (mSId === "CE_EXPORT") {
oSheet = new Spreadsheet(oSettings);
oSheet.build()
.then(function() {
MessageToast.show(this.getOwnerComponent().getModel("i18n").getResourceBundle().getText("excelDownloadSuccessful"));
})
.finally(function() {
oSheet.destroy();
});
}
},
<View.xml file>
<Page>
<footer>
<OverflowToolbar >
<Button icon="sap-icon://excel-attachment" text="{i18n>exportToExcelBtn}" press="onExportToExcel" tooltip="{i18n>exportToExcelBtnTooltip}"/>
</OverflowToolbar>
</footer>
<dependents>
<core:CommandExecution id="CE_EXPORT" command="Export" enabled="true" execute="onCommandExecution" />
</dependents>
</Page>
<manifest.json file>
"sap.ui5": {
"rootView": {
"viewName": "com.volkswagen.ifdb.cc.sa.view.Main",
"type": "XML"
},
"dependencies": {
"minUI5Version": "1.65.0",
"libs": {
"sap.m": {},
"sap.ui.comp": {},
"sap.ui.core": {},
"sap.ui.layout": {},
"sap.ushell": {}
}
},
"contentDensities": {
"compact": true,
"cozy": true
},
"commands": {
"Export":{
"shortcut": "Ctrl+Alt+E"
}
},
[1]: https://port8089-workspaces-ws-m88sh.eu10.applicationstudio.cloud.sap/com.volkswagen.ifdb.plg.hlplnk/Component-preload.js |
There seems to be no built-in cross-platform way in `Java` for retrieving the root path (e.g., the drive) of a file, so a third-party library or custom code are the only solutions.
[Apache Commons IO][1] has a [FilenameUtils][2] class and its [`getPrefix()`][3] method does exactly what the question asked.
[1]: https://commons.apache.org/proper/commons-io/
[2]: https://commons.apache.org/proper/commons-io/javadocs/api-release/
[3]: https://commons.apache.org/proper/commons-io/apidocs/org/apache/commons/io/FilenameUtils.html#getPrefix(java.lang.String) |
I use `netleague` in the netmeta package of R for my network meta-analysis ([screenshot][1]):
netleague(net2, digits = 2, ci = FALSE)
However, it showed the indirect effect in the lower triangle and the direct effect in the upper triangle.
I just want the league table to show the reciprocal number in the upper triangle instead of the direct effect.
Additionally, I used this order
netleague(net1, net1, seq = netrank(net1), ci = FALSE, backtransf = FALSE)
to combine the two and try to show the reciprocal number in the upper triangle, but it still did not work.
I have looked for many solutions but still cannot find one.
[1]: https://i.stack.imgur.com/9ys0f.jpg |
I have a database with more than 2 million records, and new records are added every day.
I'm using MySQL and I need full-text search, so I can't use partitioning, as full-text indexes are not supported on partitioned tables.
Some queries are quick, but others are extremely slow.
I would like a suggested solution for this.
Would it be worth switching to another type of database? Which one?
Thank you very much in advance, have a great day. |
Solution Indication - Database |
|database|full-text-search|innodb|partitioning|myisam| |
Qwik does not pollute the ServiceWorker with lots of code;
you have to implement the PWA offline ServiceWorker yourself.
Here is a repo that does just that:
- **GitHub**: https://github.com/qwikdev/pwa
- **Example**: https://pwa-a3b.pages.dev/ |
Note: I simplified the code for testing.
I have an express.js route with the following code:
```
// /auth/register/confirm/:confirmationToken
const registerConfirm = asyncHandler(async (req, res, next) => {
console.log('registerConfirm');
res.status(httpStatus.OK).json({ msg: 'Registration confirmed' });
});
```
I call it from Angular with the following code:
```
this.authService.registerConfirm('abcdef').subscribe();
```
In my node.js console, I always get two lines with `registerConfirm` instead of one:
```
registerConfirm
registerConfirm
```
It's as if it were making two `PUT` requests from Angular, except that in my network tab I only see one `OPTIONS` and one `PUT` call. For a console.log it's not an issue, but when I have code that does database operations, it is called twice and messes everything up.
I also tried calling this route from Postman and it worked properly. The duplication only happens when the call is made from Angular. |
For anyone getting this same issue: I simply needed to deploy the application locally for the first time with my regular (non-service) account, and all subsequent deployments on GitHub Actions with the service account started succeeding.
/Shrug |
Add the following into `/etc/mysql/my.cnf` file:
innodb_buffer_pool_size = 64M
example:
key_buffer = 16M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
innodb_buffer_pool_size = 64M
Note that the configuration file may have a different name. Other examples:
- `/etc/mysql/mariadb.conf.d/50-server.cnf` (recent Debian-based)
- `/etc/my.cnf` |
You can create a global variable `let app = {...}` and store all program data in there (this is how I usually organize programs). Then, store the state of your items under `app.data.checkboxState` or somewhere like that.
Below is an example program showing how to apply that:
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
const app = { // variable to hold info about program
data : {
checkboxState : {} // store checkbox state here
}
}
app.getState = function() { // method to get state and save it in memory
// get all elements with the class 'checkbox'
// you can use any class you like instead, just
// make sure all your checkbox elements have that class
let checkboxes = document.getElementsByClassName("checkbox");
for (let i=0; i<checkboxes.length; i++) { // loop through them
let checkbox = checkboxes[i];
app.data.checkboxState[checkbox.id] = checkbox.checked; // save this checkbox's state
}
}
app.showState = function() { // method to show user the state
alert("App state is: \n " + JSON.stringify(app.data.checkboxState).replaceAll('{', '').replaceAll('}','').replaceAll(':', ' = ').replaceAll('"', '').replaceAll(',', '\n '));
// you dont have to understand this, it's just
// formatting to make the message easier to read
}
app.run = function() { // method to execute program
app.getState();
app.showState();
}
<!-- language: lang-css -->
html, body {
font-family:arial;
}
h2 {
color:turquoise;
}
button {
color:black;
background:turquoise;
border:none;
padding:5px 10px;
border-radius:2px;
width:100px;
cursor:pointer;
margin:5px 0px;
}
button:hover {
box-shadow: 1px 1px 4px 0px black;
}
<!-- language: lang-html -->
<!DOCTYPE html>
<html>
<head>
</head>
<body>
<h2> Check options below: </h2>
<p>
<input class="checkbox" id="op1" type="checkbox"><span>Option 1</span><br>
<input class="checkbox" id="op2" type="checkbox"><span>Option 2</span><br>
<input class="checkbox" id="op3" type="checkbox"><span>Option 3</span><br>
<input class="checkbox" id="op4" type="checkbox"><span>Option 4</span><br>
<input class="checkbox" id="op5" type="checkbox"><span>Option 5</span><br>
</p>
<button onclick="app.run()">OK</button><br>
<button onclick="app.getState()">Save state</button><br>
<button onclick="app.showState()">Show state</button><br>
</body>
</html>
<!-- end snippet -->
Click "Run code snippet" (above) to run the code. If you click "OK", the state will be saved and a message will show the saved data. Click "Save state" to save without showing a message, and "Show state" to show the message without saving. Notice that when you click "Show state" without clicking "Save state" first, the message is not updated (even if you check or uncheck a box).
Just a reminder: for such a small program this is probably unnecessary and might just add complexity, but when you create larger programs, organizing your code like this really makes it easier to maintain :).
|
I have a GitHub action that does a conditional execution of a job where it builds and runs the tests for a project only if there was a change in one specific directory of the code. Seems like a common thing to do. I used this as a reference: https://github.com/marketplace/actions/paths-changes-filter
**Problem:** it always reports "changed" because it is comparing my branch against main, and of course they differ. What I want is for it to determine whether any files were changed in one specific directory ('libs/storage/Tsavorite/**') and, if so, run all the steps to build and test it.
Looking at the output of the step that checks for changes, it says:
"Change detection refs/remotes/origin/main..refs/remotes/origin/darrenge/TsavIntoCI", so that tells me it is comparing main to my branch.
**CI yml file:**
name: My CI
on:
workflow_dispatch:
push:
paths-ignore:
- 'website/**'
- '*.md'
branches:
- darrenge/TsavIntoCI
pull_request:
branches:
- main
env:
DOTNET_SKIP_FIRST_TIME_EXPERIENCE: 1
DOTNET_NOLOGO: true
jobs:
changes:
runs-on: windows-latest
outputs:
tsavorite: ${{ steps.filter.outputs.tsavorite }}
steps:
- uses: actions/checkout@v4
- name: Check if Tsavorite changed
uses: dorny/paths-filter@v3
id: filter
with:
filters: |
tsavorite:
- 'libs/storage/Tsavorite/**'
verify-changes:
needs: changes
runs-on: windows-latest
if: needs.changes.outputs.tsavorite == 'true'
steps:
- name: CHANGES MADE
run: echo "CHANGES were made"
verify-NO-changes:
needs: changes
runs-on: windows-latest
if: needs.changes.outputs.tsavorite == 'false'
steps:
- name: NO CHANGES MADE
run: echo "NO CHANGES were made"
I am expecting it to only run the "changes were made" part if the commit that triggered this run changed a file under libs/storage/Tsavorite/**. |
|android|push-notification|firebase-cloud-messaging| |
I have a Teams application which includes a bot that works across Teams, Outlook, and Microsoft 365. I am adding a search messaging extension to it. The search messaging extension works well within Teams, but in Outlook I receive the error message "Something went wrong".
[](https://i.stack.imgur.com/Eu3kk.png)
Looking at the traffic via Fiddler provides an unhelpful error message:
[](https://i.stack.imgur.com/nlmH8.png)
Looking at my webserver traffic, I don't believe this request ever reaches my bot.
I suspect the issue may be to do with the bot channels. [This guide](https://learn.microsoft.com/en-us/microsoftteams/platform/m365-apps/extend-m365-teams-message-extension?tabs=existing-app%2Csearch-based-message-extension#prerequisites) implies I need to add the "Microsoft 365 Extensions" channel. However this does not appear as an available channel for my bot via the Azure Bot Services portal:
[](https://i.stack.imgur.com/yC0Kv.png)
My bot does not appear in the [Bot Framework portal](https://dev.botframework.com/) which the guide mentions as an alternative to Azure Bot Services Portal. |
Search Message Extension Works in Teams but not in Outlook |
|botframework|microsoft-teams| |
|c|tty|ioctl| |
Solved by clicking another element:
const knowledge = document.querySelector('[value="knowledge"]')
if (knowledge && knowledge.checked == false) {
knowledge.parentNode.nextSibling.click()
} |
I've been trying to achieve the result in the table above with no luck so far. I have a table called Nodes consisting of (base doc, current doc and target):
BaseDocType . BaseDocID
DocType . DocID
TargetDocType . TargetDocID ..
I want to fetch all the related nodes for any specific node, if that's possible. If anyone can help, I would appreciate it a lot.
It's a SQL Server database.
`
With CTE1 (ID, BaseDocType, BaseDocID, DocType, DocID, TargetDocType, TargetDocID)
As
(
Select ID, BaseDocType, BaseDocID, DocType, DocID, TargetDocType, TargetDocID
From Doc.Nodes Where DocType=8 and DocID = 2
Union All
Select a.ID, a.BaseDocType, a.BaseDocID, a.DocType, a.DocID, a.TargetDocType, a.TargetDocID
From Doc.Nodes a
Inner Join CTE1 b ON
(a.BaseDocType = a.BaseDocType and a.BaseDocID = b.BaseDocID and a.DocType != b.DocType and a.DocID != b.DocID)
)
Select *
From CTE1`
This query is not working; it says:
Msg 530, Level 16, State 1, Line 8
The statement terminated. The maximum recursion 100 has been exhausted before statement completion.
[Example][1]
[1]: https://i.stack.imgur.com/t3q8u.jpg |
GitHub Actions conditional execution always shows change because comparing to main instead of private branch |
|github|github-actions| |
I have a button that renders correctly on Windows and in mobile simulations. However, on my iPhone, the button's color and position attributes are incorrect. Any and all help is greatly appreciated!
html:
<button class="more-projects-button" type="button" onclick="window.location.href='Projects.html';">More Projects</button>
css:
.more-projects-button {
position: relative;
top: 40px;
left: 50%;
transform: translateX(-50%);
background-color: #007bff;
color: white;
border: none;
border-radius: 10px;
padding: 10px 20px;
text-decoration: none;
}
Thanks for looking at my code, as I'm new to web development |
That was fixed for CCL in the repo, but not for ACL. Also note that Lisp-Stat won't load into the free version of ACL because of heap size restrictions (at least version 10 wouldn't). I don't have a commercial license any longer, but the fix is a one-liner you can try.
Would you please report this as an issue at https://github.com/Lisp-Stat/data-frame/issues ? |
You cannot use the `or` keyword in a `join` clause.
Instead, you can use `or` or `||` in the `where` clause to filter out rows that don't satisfy the condition.
See also: https://stackoverflow.com/questions/1159022/linq-join-with-or |
I have a large dataset that is difficult to investigate without analysis tools. Its general form is this, but with 16 "ItemType0" columns and 16 "ItemType1", "ItemType2", etc. columns.
It represents the properties (many of them) of up to 16 different items recorded at a single timestep, then properties of that timestep.
|Time|ItemType0[0].property|ItemType0[1].property|Property|
|:--:|:-------------------:|:-------------------:|:------:|
|1 |1 |0 |2 |
|2 |0 |1 |2 |
|3 |3 |3 |2 |
I'd like to receive:
|Time|ItemType0.property|Property|
|:--:|:----------------:|:------:|
|1 |1 |2 |
|2 |0 |2 |
|3 |3 |2 |
|1 |0 |2 |
|2 |1 |2 |
|3 |3 |2 |
```
import pandas as pd
wide_df = pd.DataFrame({
"Time": [1,2,3],
"ItemType0[0].property": [1,0,3],
"ItemType0[1].property": [0,1,3],
"Property": [2,2,2]})
```
What I've tried:
1.
```
ids = [col for col in wide_df.columns if "[" not in col]
inter_df = pd.melt(wide_df, id_vars=ids, var_name="Source")
```
MemoryError: Unable to allocate 28.3 GiB for an array with shape (15,506831712) and data type uint32
2. I wouldn't even know where to begin with `pd.wide_to_long`, as the columns don't all start with the same stub. |
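One way around both problems (a sketch against the sample frame above, not tested on the full dataset) is to first rename the bracketed columns into a single-stub-plus-numeric-suffix form that `pd.wide_to_long` understands, e.g. `ItemType0[1].property` becomes `ItemType0.property_1`:

```python
import pandas as pd

wide_df = pd.DataFrame({
    "Time": [1, 2, 3],
    "ItemType0[0].property": [1, 0, 3],
    "ItemType0[1].property": [0, 1, 3],
    "Property": [2, 2, 2],
})

def to_stub(col):
    # "ItemType0[1].property" -> "ItemType0.property_1"; id columns pass through
    if "[" not in col:
        return col
    stub, rest = col.split("[", 1)   # "ItemType0", "1].property"
    idx, prop = rest.split("]", 1)   # "1", ".property"
    return f"{stub}{prop}_{idx}"

renamed = wide_df.rename(columns=to_stub)

long_df = pd.wide_to_long(
    renamed,
    stubnames=["ItemType0.property"],  # one entry per ItemType in the real data
    i=["Time", "Property"],            # identifier columns
    j="item",                          # new column holding the item index 0..15
    sep="_",
).reset_index()
long_df["item"] = long_df["item"].astype(int)
```

Because `melt`/`wide_to_long` materialize every value column at once, processing one stub (or a few stubs) at a time and concatenating the pieces, and downcasting to smaller integer dtypes beforehand, should keep peak memory well below what the single full melt needed.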
Using either melt or wide_to_long for large dataset with inconsistent naming |
Is AI21SemanticTextSplitter (from langchain_ai21) Deprecated? |
|langchain|chunking| |
I have a problem with a Java future and a handler function. Code example:
```
public HealthCheckResponse call() {
String redisHost = this.configuration.getRedisHost();
log.info("connect to redis host: {}", redisHost);
Future<RedisConnection> redisConnectionFuture = Redis.createClient(Vertx.vertx(), redisHost).connect();
while (!redisConnectionFuture.isComplete()) {
log.debug("waiting for redis connection future complete: ({})", redisConnectionFuture.isComplete());
}
log.info("redis connection future completed, {} and succeded {}", redisConnectionFuture.isComplete(), redisConnectionFuture.succeeded());
if (redisConnectionFuture.isComplete() && redisConnectionFuture.succeeded()) {
return HealthCheckResponse.up("RedisCustomHealthCheck");
}
log.info("sending down RedisCustomHealthCheck");
return HealthCheckResponse.down("RedisCustomHealthCheck");
}
```
So my problem is that I have to check the Redis connection. The connect call is asynchronous, so I can set onSuccess and write my logic there, but then I cannot return the HealthCheckResponse. Question: I don't want to busy-wait in the while loop. What is a possible solution for this problem? |
How to avoid a while loop while waiting for a future to complete? |
|java|redis|quarkus|future|vert.x| |
Want descending order of data in Hive.
Requirement: we want the pnote column built so that the notes for a particular invoice number are in descending date order, with the most recent record at the top of the array (zeroth element) of the pnote column.
Current outputs:
1. selecting concat_dispute_notes
SELECT concat(to_date(t.modified_on),"-",b.named_user,"-",t.notes) as concat_dispute_notes
FROM Table
1. output of concat_dispute_notes that we are getting.
````
2023-05-31-LMINCU2-chased for pym plan
2023-08-16-LMINCU2-chased for pym plan
2023-12-07-LMINCU2-chased for pym plan
2024-01-11-LMINCU2-chased for pym plan
2024-01-22-LMINCU2-chased for pym plan
2023-05-16-LMINCU2-chased for pym plan
2023-05-02-LMINCU2-chased for pym plan
2023-03-22-LMINCU2-chased for pym plan
2022-11-22-LMINCU2-chased for pym plan
````
Time taken: 31.091 seconds, Fetched: 9 row(s)
------------------------------------------------------------------------------------------------------------------------------------------
2. selecting pnote from concat_dispute_note
Query->
SELECT concat_ws("... ",collect_set(case when trim(concat_dispute_notes) <>"" then concat_dispute_notes end)) as PNOTE
from (SELECT concat(to_date(t.modified_on),"-",b.named_user,"-",t.notes) as concat_dispute_notes
FROM Table
2. output of pnote that we are getting->
````
2023-05-31-LMINCU2-chased for pym plan...
2023-08-16-LMINCU2-chased for pym plan...
2023-12-07-LMINCU2-chased for pym plan...
2024-01-11-LMINCU2-chased for pym plan...
2024-01-22-LMINCU2-chased for pym plan...
2023-05-16-LMINCU2-chased for pym plan...
2023-05-02-LMINCU2-chased for pym plan...
2023-03-22-LMINCU2-chased for pym plan...
2022-11-22-LMINCU2-chased for pym plan
````
Time taken: 41.456 seconds, Fetched: 1 row(s)
------------------------------------------------------------------------------------------------------------------------------------------
**We have tried using ORDER BY but did not get the required order of output:**
SELECT concat_ws("... ",collect_set(case when trim(concat_dispute_notes) <>"" then concat_dispute_notes end)) as PNOTE
from (SELECT concat(to_date(t.modified_on),"-",b.named_user,"-",t.notes) as concat_dispute_notes
FROM Table) order by t.modified_on desc) as t2 order by PNOTE;
**Output with ORDER BY:**
````
2023-05-31-LMINCU2-chased for pym plan...
2023-08-16-LMINCU2-chased for pym plan...
2023-12-07-LMINCU2-chased for pym plan...
2024-01-11-LMINCU2-chased for pym plan...
2024-01-22-LMINCU2-chased for pym plan...
2023-05-16-LMINCU2-chased for pym plan...
2023-05-02-LMINCU2-chased for pym plan...
2023-03-22-LMINCU2-chased for pym plan...
2022-11-22-LMINCU2-chased for pym plan
````
Time taken: 38.52 seconds, Fetched: 1 row(s)
We tried sort_Array:
SELECT concat_ws("...", sort_array(collect_set(case when trim(concat_dispute_notes) <> "" then concat_dispute_notes end))) as PNOTE
FROM (
SELECT concat(to_date(t.modified_on), "-", b.named_user, "-", t.notes) as concat_dispute_notes
FROM Table) AS Tablepnote11;
Output of Sort_Array:
````
2022-11-22-LMINCU2-chased for pym plan...
2023-03-22-LMINCU2-chased for pym plan...
2023-05-02-LMINCU2-chased for pym plan...
2023-05-16-LMINCU2-chased for pym plan...
2023-05-31-LMINCU2-chased for pym plan...
2023-08-16-LMINCU2-chased for pym plan...
2023-12-07-LMINCU2-chased for pym plan...
2024-01-11-LMINCU2-chased for pym plan...
2024-01-22-LMINCU2-chased for pym plan
````
Time taken: 31.719 seconds, Fetched: 1 row(s)
The output of sort_array is in ascending order. We need descending order.
------------------------------------------------------------------------------------------------------------------------------------------
We have also tried sort by, cluster by, group by, ranking, reversing the array, and sort_array(..., false), but nothing worked to get the notes in descending order as a single record. |
Note the `Async` in `AsyncCallback`: you're making an async call to your server, but your code is trying to synchronously read the value. Until onSuccess or onFailure is called, no state has yet been returned to the server.
Long ago browsers supported synchronous calls to the server, but GWT-RPC never did, and browsers have since removed that feature _except_ when running on a web worker - the page itself can never do what you are asking.
Two brief options you can pursue (but it would be hard to make a concrete suggestion without more code - i.e. "besides logging the value, what are you going to do with it"):
* Structure the code that needs this server data such that it can guarantee that the value has already been returned from the server. For example, don't start any work that requires the value until onSuccess has been called (and be sure to handle the onFailure case as well).
* Guard the server data in some async pattern like a Promise or event, so that the code that needs this value can itself be asynchronous. |
While `$model->relation()->create([])` creates a related model in the context of that relation, there is no way to supply multiple contexts for multiple relations.
So you need to add the `user_id` yourself to the data passed to `create`:
``` php
#find post
$validated = $request->validated();
$validated['user_id'] = $request->user()->id; // user performing the request
$post = Post::find(1);
#Create comment and connect to post
$comment = $post->comments()->create($validated);
``` |
I am building an app in Next.js with no server. I get a video file via an `<input>` tag, then I convert it to a blob URL and save it in `localStorage` for cross-session use.
However, after reloading the page the blob URL becomes invalid (at least that's what I think). How can I use the blob after a reload, or is there an alternative?
**My Code:**
// convert file obj to blob url and save to localStorage
```
localStorage.setItem('temp', URL.createObjectURL(data.file[0]))
```
// get video blob url and set it as videoRef src
```
const vidUrl = localStorage.getItem('temp')
videoRef.current.src = vidUrl;
```
|
It can be a VPN issue; connect to a VPN to get started, and you can disconnect it later. |
I'm relatively new to HDR and I'm trying to understand the color transfer functions. The idea is to produce a YouTube-compatible HDR video.
So I managed to create a Direct2D surface with `DXGI_FORMAT_R16G16B16A16_FLOAT` and use `GUID_WICPixelFormat128bppPRGBAFloat` bitmaps, and my primitives have a full color range as well.
I can convert these bitmaps to 10-10-10-2 DWORD (10-bit) with WIC (from `GUID_WICPixelFormat128bppPRGBAFloat` to `GUID_WICPixelFormat32bppRGBA1010102`) and use the NVidia NvEnc H.265 with `NV_ENC_HEVC_PROFILE_MAIN10_GUID` and `NV_ENC_BUFFER_FORMAT_ABGR10` buffers.
So far, so good. My app produces a 10-bit YUV video which plays correctly in Windows and also can be correctly loaded with Media Foundation in my editor later on.
The troubles begin when I specify the color transfers in NvEnc. As far as I understand, the color transfer modes describe the way to translate color values into light for HDR displays.
When I put the PQ or HLG specs into NvEnc:
ne.encodeCodecConfig.hevcConfig.hevcVUIParameters.videoSignalTypePresentFlag = 1;
ne.encodeCodecConfig.hevcConfig.hevcVUIParameters.videoFormat = NV_ENC_VUI_VIDEO_FORMAT_UNSPECIFIED;
ne.encodeCodecConfig.hevcConfig.hevcVUIParameters.videoFullRangeFlag = 0;
ne.encodeCodecConfig.hevcConfig.hevcVUIParameters.colourDescriptionPresentFlag = 1;
ne.encodeCodecConfig.hevcConfig.hevcVUIParameters.colourPrimaries = NV_ENC_VUI_COLOR_PRIMARIES_BT2020;
ne.encodeCodecConfig.hevcConfig.hevcVUIParameters.transferCharacteristics = NV_ENC_VUI_TRANSFER_CHARACTERISTIC_SMPTE2084;
ne.encodeCodecConfig.hevcConfig.hevcVUIParameters.colourMatrix = NV_ENC_VUI_MATRIX_COEFFS_BT2020_NCL;
the resulting video is recognized by YouTube as HDR, but it plays badly: colors are heavily distorted (same in the Windows player).
The question is, how do I need to pre-process my RGB10 input in order to be correctly transferred with PQ or HLG?
I was pointed to [OpenColorIO][1], but I'm not very sure how to proceed from here.
[1]: https://opencolorio.readthedocs.io/en/latest/api/colorspace.html |
HDR video publishing |
|winapi|colors|ms-media-foundation|direct2d|hdr| |
How can I use a defined menu-type list to run a range of selections, i.e. conditionally tick off several selected items at once?
Conventionally, the configurator looks like this:
```shell
@echo off
chcp 1251 >nul
:begin
echo [1] Folder 1 [3] Folder 3
echo [2] Folder 2 [4] Folder 4
set /P op=Enter the number:
if "%op%"=="1" goto op1
if "%op%"=="2" goto op2
if "%op%"=="3" goto op3
if "%op%"=="4" goto op4
:op1
cls
call "Delete_Folder1.bat" >nul
echo Done
timeout /t 2 >nul
cls
goto begin
:op2
cls
call "Delete_Folder2.bat" >nul
echo Done
timeout /t 2 >nul
goto begin
:op3
cls
call "Delete_Folder3.bat" >nul
echo Done
timeout /t 2 >nul
cls
goto begin
:op4
cls
call "Delete_Folder4.bat" >nul
echo Done
timeout /t 2 >nul
cls
goto begin
```
I don't understand how to select multiple items at once, e.g. 1 and 3, or 2, 3 and 4. That is, how to list several items in a row and have them all executed. |
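One common approach (a sketch, untested against your exact delete scripts) is to read all the numbers at a single prompt and loop over them, since `for %%i in (%op%)` splits the typed input on spaces:

```shell
@echo off
chcp 1251 >nul
:begin
echo [1] Folder 1   [3] Folder 3
echo [2] Folder 2   [4] Folder 4
set /P op=Enter the numbers separated by spaces (e.g. 1 3 4):
rem %op% may hold several numbers; call the matching script for each one
for %%i in (%op%) do (
    call "Delete_Folder%%i.bat" >nul
    echo Done %%i
)
timeout /t 2 >nul
cls
goto begin
```

Note that `%%i` is the syntax inside a .bat file; at an interactive prompt it would be `%i`.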
I am using R for deep learning with the MNIST dataset.
I have written this code to store the training and testing data, and define and fit the model:
```
library(keras)
#Obtain data
mnist <- dataset_mnist()
train_data <- mnist$train$x
train_labels <- mnist$train$y
test_data <- mnist$test$x
test_labels <- mnist$test$y
#Reshape & normalize
train_data <- array_reshape(train_data,c(nrow(train_data), 784))
train_data <- train_data / 255
test_data <- array_reshape(test_data,c(nrow(test_data), 784))
test_data <- test_data / 255
#One hot encoding
train_labels <- to_categorical(train_labels, 10)
test_labels <- to_categorical(test_labels, 10)
#Model
model <- keras_model_sequential()
model %>% layer_dense(units=128,activation="relu", input_shape=c(784)) %>%
layer_dropout(rate=0.3) %>%
layer_dense(units=64,activation="relu") %>%
layer_dropout(rate=0.2) %>%
layer_dense(units=10,activation="softmax")
#Compile
model %>% compile(loss="categorical_crossentropy",
optimizer="rmsprop",
metrics="accuracy")
#Train
history <- model %>% fit(train_data,
train_labels,
epochs=10,
batch_size=784,
validation_split=0.2,
verbose=2)
#Evaluation and prediction
model %>% evaluate(test_data, test_labels)
pred <- model %>% predict(test_data)
print(table(Predicted=pred, Actual=test_labels))
```
When running it in R studio, the following error occurs:
```
ValueError: No gradients provided for any variable: (['dense_124/kernel:0', 'dense_124/bias:0', 'dense_123/kernel:0', 'dense_123/bias:0', 'dense_122/kernel:0', 'dense_122/bias:0'],). Provided `grads_and_vars` is ((None, <tf.Variable 'dense_124/kernel:0' shape=(784, 128) dtype=float32>), (None, <tf.Variable 'dense_124/bias:0' shape=(128,) dtype=float32>), (None, <tf.Variable 'dense_123/kernel:0' shape=(128, 64) dtype=float32>), (None, <tf.Variable 'dense_123/bias:0' shape=(64,) dtype=float32>), (None, <tf.Variable 'dense_122/kernel:0' shape=(64, 10) dtype=float32>), (None, <tf.Variable 'dense_122/bias:0' shape=(10,) dtype=float32>)).
```
I think the problem may be a conflict between the shape of the input data and the input layer, but I have no idea how to solve this.
Thanks for the help! |
I am using R for deep learning with the MNIST dataset.
I have written this code to store the training and testing data, and define and fit the model:
```
library(keras)
#Obtain data
mnist <- dataset_mnist()
train_data <- mnist$train$x
train_labels <- mnist$train$y
test_data <- mnist$test$x
test_labels <- mnist$test$y
#Reshape & normalize
train_data <- array_reshape(train_data,c(nrow(train_data), 784))
train_data <- train_data / 255
test_data <- array_reshape(test_data,c(nrow(test_data), 784))
test_data <- test_data / 255
#One hot encoding
train_labels <- to_categorical(train_labels, 10)
test_labels <- to_categorical(test_labels, 10)
#Model
model <- keras_model_sequential()
model %>% layer_dense(units=128,activation="relu", input_shape=c(784)) %>%
layer_dropout(rate=0.3) %>%
layer_dense(units=64,activation="relu") %>%
layer_dropout(rate=0.2) %>%
layer_dense(units=10,activation="softmax")
#Compile
model %>% compile(loss="categorical_crossentropy",
optimizer="rmsprop",
metrics="accuracy")
#Train
history <- model %>% fit(train_data,
train_labels,
epochs=10,
batch_size=784,
validation_split=0.2,
verbose=2)
#Evaluation and prediction
model %>% evaluate(test_data, test_labels)
pred <- model %>% predict(test_data)
print(table(Predicted=pred, Actual=test_labels))
```
When running it in RStudio, the following error occurs:
```
ValueError: No gradients provided for any variable: (['dense_124/kernel:0', 'dense_124/bias:0', 'dense_123/kernel:0', 'dense_123/bias:0', 'dense_122/kernel:0', 'dense_122/bias:0'],). Provided `grads_and_vars` is ((None, <tf.Variable 'dense_124/kernel:0' shape=(784, 128) dtype=float32>), (None, <tf.Variable 'dense_124/bias:0' shape=(128,) dtype=float32>), (None, <tf.Variable 'dense_123/kernel:0' shape=(128, 64) dtype=float32>), (None, <tf.Variable 'dense_123/bias:0' shape=(64,) dtype=float32>), (None, <tf.Variable 'dense_122/kernel:0' shape=(64, 10) dtype=float32>), (None, <tf.Variable 'dense_122/bias:0' shape=(10,) dtype=float32>)).
```
I think the problem may be with the conflicting shapes of the input data and the input, but I have no idea how to solve this.
Thanks for the help!
Got it by **removing the bidirectional relationship** and adding the `@JoinColumn` annotation to my private `List<Frequencia> frequencias`
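A minimal sketch of the resulting unidirectional mapping (the owning entity name `Turma` and the column name `turma_id` are my own assumptions, not from the original code; I use Jakarta Persistence imports here, older stacks use `javax.persistence` instead):

```java
import jakarta.persistence.*;
import java.util.List;

@Entity
public class Turma { // hypothetical owning entity name

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    // Unidirectional one-to-many: no mappedBy on this side and no
    // back-reference field in Frequencia. @JoinColumn tells JPA to use a
    // foreign-key column on the frequencia table instead of generating
    // a separate join table.
    @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true)
    @JoinColumn(name = "turma_id") // assumed FK column name
    private List<Frequencia> frequencias;
}
```

Without `@JoinColumn`, a plain unidirectional `@OneToMany` defaults to a join table, which is usually not what you want here.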
I have a server which has a bunch of routes which may take a json body as an input and may answer with a json response as an output, and I want to quickly test these routes and various combinations of them.
The commands I am using manually look like:
```bash
curl -H "Content-Type:application/json" -d @- http://$HOST:$PORT/route1 | jq
```
I want to be able to compose these routes, piping them. Currently, I use a small script `./jqr` which looks like:
```bash
case $1 in
route1)
jq 'filter_input_route1' |
curl -H "Content-Type:application/json" -d @- http://$HOST:$PORT/route1 |
jq
;;
# ...
```
And can be used like this:
```bash
./jqr route1 < share_example_a.json | jq 'some_manual_processing'
```
Which is better but far from perfect. Ideally, I would like to extract the curl invocation in a jq function rather than in a shell program, so as to be able to do
```bash
jq 'route1|some_manual_processing' < share_example_a.json
```
It may not look like much, but in addition to saving alternating calls of jq and ./jqr, piping and copying is a lot more powerful in jq than in bash. Say I have routes `add` and `disable` with no output and a route `list` with no input, I could do things like:
```bash
echo '{"name":"toto","enabled":true}' | jq 'route_add,route_list|.[]|select(.name=="toto")'
```
```bash
jq 'route_list|.[]|select(.name=="toto")|route_disable'
```
Where `def filter_input_route_disable: {"id":.id};`
For the same reasons, making a wrapper script around jq to augment its capabilities seems highly impractical.
jq's manual, while otherwise very thorough, is very sparse when it comes to custom functions and modules. I did manage to find some other sources to learn how to write them, but during my testing in `jq-1.6`, I didn't manage to shell out from a jq custom function.
It seems to be a popular feature request more than a decade old, approved by stedolan (https://github.com/jqlang/jq/issues/147), which still hadn't been implemented as of somewhat recently (https://github.com/jqlang/jq/issues/1101). I assume there are privilege issues involved: https://github.com/jqlang/jq/pull/1005. Additionally, there might be issues with order of operations and (a)synchronicity: maybe the jq language makes assumptions not compatible with what I want to do, and maybe it would not work well with awaiting results from a network request; I really have no clue.
One possible solution might be to write a jq function in C rather than in jq, like some built-in functions are, but I have no idea where to start.
Am I even going the right way about my issue? I know of Postman and such; I'm not much of a clickety kid myself, I'd rather use my `./jqr` solution, but I'm open to using some other curl frontend with good keyboard control and decent POSIX compatibility, preferably TUI.
Assuming a crude combination of curl and jq is the right way, is there anything I am missing? Like a jq version or fork capable of shelling out?
Assuming I didn't miss anything, is it worth attempting to write jq custom functions in C? Like, would it take more than a handful of easy lines per route, and less than an hour of constant overhead for learning and setting things up?
Can I shell out from a jq custom function |
|shell|jq| |
I've developed a note-taking app that allows users to create notes with tags. I want users to be able to add 5 additional tags for each newly created note. The main issue is that the tags are automatically included in the newly created note instead of being stored separately. Despite several hours of effort, I haven't been successful in solving the problem. I'm seeking a solution to ensure that the tags are stored and displayed separately from the notes.
Implemented logic to extract and store tags separately from notes upon note creation. Adjusted the code to ensure that tags are not automatically included in the newly created note. Utilized event listeners to capture tag input and display them separately from the notes. Employed localStorage to persistently store both notes and tags. Updated the display function to show tags independently of notes.

I've deleted the code now, as either the 'Add new Note' button didn't work, or when I pressed 'Enter' on the tags, they didn't get added anymore. I haven't been working with JS for long and I'm trying to master this language better through smaller projects, but I just can't seem to make progress here. Despite these attempts, the problem persists, and tags continue to be inadvertently included in the newly created notes.
HTML:
```
<div class="popup-box">
<div class="popup">
<div class="content">
<header>
<p></p>
<i class="uil uil-times"></i>
</header>
<form action="#">
<div class="row title">
<label>Title</label>
<input type="text" spellcheck="false" placeholder="Add a title...">
</div>
<div class="row description">
<label>Description</label>
<textarea spellcheck="false" placeholder="Add a description..."></textarea>
</div>
<div class="row wrapper-tag">
<div class="title-tags">
<label>Tags</label>
</div>
<div class="content-tags">
<p>Press enter or add a comma after each tag</p>
<ul><input type="text" spellcheck="false" class="tag-input" placeholder="Add a tag..."></ul>
</div>
<div class="details">
<p><span>5</span> tags are remaining</p>
</div>
</div>
<button></button>
</form>
</div>
</div>
</div>
<div class="wrapper">
<li class="add-box">
<div class="icon"><i class="uil uil-plus"></i></div>
<p>Add new note</p>
</li>
</div>
```
JS:
```
const addBox = document.querySelector(".add-box"),
popupBox = document.querySelector(".popup-box"),
popupTitle = popupBox.querySelector("header p"),
closeIcon = popupBox.querySelector("header i"),
titleTag = popupBox.querySelector("input"),
descTag = popupBox.querySelector("textarea"),
addBtn = popupBox.querySelector("button");
const months = ["January", "February", "March", "April", "May", "June", "July",
"August", "September", "October", "November", "December"];
const notes = JSON.parse(localStorage.getItem("notes") || "[]");
let isUpdate = false, updateId;
addBox.addEventListener("click", () => {
popupTitle.innerText = "Add a new Note";
addBtn.innerText = "Add Note";
popupBox.classList.add("show");
document.querySelector("body").style.overflow = "hidden";
if(window.innerWidth > 660) titleTag.focus();
});
closeIcon.addEventListener("click", () => {
isUpdate = false;
titleTag.value = descTag.value = "";
popupBox.classList.remove("show");
document.querySelector("body").style.overflow = "auto";
});
function showNotes() {
if(!notes) return;
document.querySelectorAll(".note").forEach(li => li.remove());
notes.forEach((note, id) => {
let filterDesc = note.description.replaceAll("\n", '<br/>');
let liTag = `<li class="note">
<div class="details">
<p>${note.title}</p>
<span>${filterDesc}</span>
</div>
<div class="bottom-content">
<span>${note.date}</span>
<div class="settings">
<i onclick="showMenu(this)" class="uil uil-ellipsis-h"></i>
<ul class="menu">
<li onclick="updateNote(${id}, '${note.title}', '${filterDesc}')"><i class="uil uil-pen"></i>Edit</li>
<li onclick="deleteNote(${id})"><i class="uil uil-trash"></i>Delete</li>
</ul>
</div>
</div>
</li>`;
addBox.insertAdjacentHTML("afterend", liTag);
});
}
showNotes();
function showMenu(elem) {
elem.parentElement.classList.add("show");
document.addEventListener("click", e => {
if(e.target.tagName != "I" || e.target != elem) {
elem.parentElement.classList.remove("show");
}
});
}
function deleteNote(noteId) {
let confirmDel = confirm("Are you sure you want to delete this note?");
if(!confirmDel) return;
notes.splice(noteId, 1);
localStorage.setItem("notes", JSON.stringify(notes));
showNotes();
}
function updateNote(noteId, title, filterDesc) {
let description = filterDesc.replaceAll('<br/>', '\r\n');
updateId = noteId;
isUpdate = true;
addBox.click();
titleTag.value = title;
descTag.value = description;
popupTitle.innerText = "Update a Note";
addBtn.innerText = "Update Note";
}
addBtn.addEventListener("click", e => {
e.preventDefault();
let title = titleTag.value.trim(),
description = descTag.value.trim();
if(title || description) {
let currentDate = new Date(),
month = months[currentDate.getMonth()],
day = currentDate.getDate(),
year = currentDate.getFullYear();
let noteInfo = {title, description, date: `${month} ${day}, ${year}`}
if(!isUpdate) {
notes.push(noteInfo);
} else {
isUpdate = false;
notes[updateId] = noteInfo;
}
localStorage.setItem("notes", JSON.stringify(notes));
showNotes();
closeIcon.click();
}
});
// TAG
document.addEventListener("DOMContentLoaded", function() {
const tagInput = document.querySelector('.tag-input');
const tagsList = document.querySelector('.content-tags ul');
const tagsRemaining = document.querySelector('.details span');
let tagsCount = 0;
tagInput.addEventListener('keydown', function(event) {
if ((event.key === 'Enter' || event.key === ',') && tagInput.value.trim() !== '') {
event.preventDefault();
const tagText = tagInput.value.trim();
if (tagsCount < 5) {
const tagItem = document.createElement('li');
tagItem.textContent = tagText;
tagsList.appendChild(tagItem);
tagInput.value = '';
tagsCount++;
tagsRemaining.textContent = 5 - tagsCount;
} else {
alert('You can only add up to 5 tags!');
}
}
});
tagsList.addEventListener('click', function(event) {
if (event.target.tagName === 'LI') {
event.target.remove();
tagsCount--;
tagsRemaining.textContent = 5 - tagsCount;
}
});
});
```
|
Trouble Separating Tags from Notes in JavaScript Notes App |
|javascript|html|css|tags| |
null |
[Windrose Diagram][1] I have set a higher zorder value for the bar plot compared to the gridlines, but the gridlines are still visible over the bars. I have also tried `ax.set_axisbelow(True)`, which is not working. Can anyone explain how to solve the issue?
```
from windrose import WindroseAxes
import pandas as pd
import matplotlib.pyplot as plt
# Sample data
data = {
'WD (Deg)': [45, 90, 135, 180, 225, 270, 315, 0, 45],
'WS (m/s)': [2, 3, 4, 5, 6, 7, 8, 9, 10]}
# Create a DataFrame
df = pd.DataFrame(data)
# Create a WindroseAxes object
ax = WindroseAxes.from_ax()
# Customize the grid
ax.grid(True, linestyle='--', linewidth=2.0, alpha=0.5,
color='grey', zorder = 0)
# Set the axis below the wind rose bars
ax.set_axisbelow(True) ## Not working
# Plot the wind rose bars
ax.bar(df['WD (Deg)'], df['WS (m/s)'], normed=True, opening=0.5, edgecolor='black',
cmap=plt.cm.jet, zorder = 3)
plt.show()
```
I don't understand why this is happening. I want to plot the gridlines below the bar plots. Thanks in advance.
[1]: https://i.stack.imgur.com/Y4wwe.jpg |
Same issue for me.
Try using

    return navigateTo("/sign-in", { external: true });

instead of

    return navigateTo("/sign-in");
For Blazor render mode `InteractiveServer` Microsoft says the way to avoid reading in data twice for a page (pre-render and again post-render) is to [save it the first time using PersistentComponentState and then the second time just use the data saved off to it][1].
Is this persisted data global to the app, or is it scoped to the session/circuit? Reading the docs, it sounds like it's global, but it suggests `nameof({VARIABLE})` as the key, and for multiple users hitting the same page, they would have the same key.
That will work if the data is per session/circuit. That will have key name conflicts if it is global.
And follow up question, if it's global, why not use MemoryCache? Which solves the re-read time hit not only for pre/post render, but for multiple people hitting the same page who should be shown the same data (like Events today).
[1]: https://learn.microsoft.com/en-us/aspnet/core/blazor/components/prerender?view=aspnetcore-8.0#persist-prerendered-state |
Is PersistentComponentState tied to the SignalR circuit or global to the app? |
|blazor|blazor-server-side| |
My problem was that I had previously installed a PostgreSQL server on my computer, so port 5432 wasn't free. I changed my port to 5438 and connected to my server successfully.
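A minimal sketch of the steps (a config fragment; paths, service names, and the default `postgres` user vary by install):

```shell
# Check whether something is already listening on 5432
# (on Windows: netstat -ano | findstr 5432)
netstat -an | grep 5432

# In the new server's postgresql.conf, change the listening port:
#   port = 5438
# then restart the PostgreSQL service.

# Connect explicitly on the new port
psql -h localhost -p 5438 -U postgres
```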
[Windrose Diagram][1] I have set a higher zorder value for the bar plot compared to gridlines. But the gridlines are still visible over the bars. I have also tried for 'ax.set_axisbelow(True)' which is not working. Can anyone explain me how to solve the issue ?
```
from windrose import WindroseAxes
import pandas as pd
import matplotlib.pyplot as plt
# Sample data
data = {
'WD (Deg)': [45, 90, 135, 180, 225, 270, 315, 0, 45],
'WS (m/s)': [2, 3, 4, 5, 6, 7, 8, 9, 10]}
# Create a DataFrame
df = pd.DataFrame(data)
# Create a WindroseAxes object
ax = WindroseAxes.from_ax()
# Customize the grid
ax.grid(True, linestyle='--', linewidth=2.0, alpha=0.5,
color='grey', zorder = 0)
# Set the axis below the wind rose bars
ax.set_axisbelow(True) ## Not working
# Plot the wind rose bars
ax.bar(df['WD (Deg)'], df['WS (m/s)'], normed=True, opening=0.5, edgecolor='black',
cmap=plt.cm.jet, zorder = 3)
plt.show()
```
I don't understand why this is happening. I want to plot the gridlines below the bar plots. Thanks in advance.
[1]: https://i.stack.imgur.com/Y4wwe.jpg |
I think the following link may be used for the detailed procedure.
https://www.bythedevs.com/post/securing-when-an-http-request-is-received-trigger-in-power-automate-part-2 |
I think you're on the right track; if you drop the first element of the 'breaks' vector in your 'integer_breaks' function it produces your desired outcome (if I've understood correctly), e.g.
``` r
library(tidyverse)
int_breaks_drop_zero <- function(n = 5, ...) {
fxn <- function(x) {
breaks <- floor(pretty(x, n, ...))[-1]
names(breaks) <- attr(breaks, "labels")
breaks[-1]
}
return(fxn)
}
xxx <- c(1, 2, 4, 1, 1, 4, 2, 4, 1, 1, 3, 3, 4 )
yyy <- c(11, 22, 64, 45, 76, 47, 23, 44, 65, 86, 87, 83, 56 )
data <- data.frame(xxx, yyy)
p <- ggplot(data = data, aes(x = data[ , 1], y = data[ , 2]), group = data[ , 1]) +
geom_count(color = "blue") +
expand_limits(x = 0) +
scale_x_continuous(breaks = int_breaks_drop_zero())
p
```
<!-- -->
<sup>Created on 2024-03-13 with [reprex v2.1.0](https://reprex.tidyverse.org)</sup>
Does that solve your problem? |
Copy From One Closed Workbook to Another (`PERSONAL.xlsb`!?)
-
Sub CopyRawData()
Const SRC_FOLDER_PATH As String = "U:\Documents\Macro Testing\Raw Data\"
Const SRC_FILE_PATTERN As String = "SLTEST_*.csv"
Const SRC_FIRST_ROW_RANGE As String = "A2:G2"
Const DST_FILE_PATH As String _
= "U:\Documents\Macro Testing\Data\Finished Data.xlsx"
Const DST_SHEET_NAME As String = "Banana"
Const DST_FIRST_CELL As String = "A2"
Dim sFileName As String: sFileName = Dir(SRC_FOLDER_PATH & SRC_FILE_PATTERN)
If Len(sFileName) = 0 Then
MsgBox "No file matching the pattern """ & SRC_FILE_PATTERN _
& """ found in """ & SRC_FOLDER_PATH & """!", vbExclamation
Exit Sub
End If
Dim sFilePath As String, sFilePathFound As String
Dim sFileDate As Date, sFileDateFound As Date
Do While Len(sFileName) > 0
sFilePathFound = SRC_FOLDER_PATH & sFileName
sFileDateFound = FileDateTime(sFilePathFound)
If sFileDate < sFileDateFound Then
sFileDate = sFileDateFound
sFilePath = sFilePathFound
End If
sFileName = Dir
Loop
Application.ScreenUpdating = False
Dim swb As Workbook: Set swb = Workbooks.Open(sFilePath, , True) ' , Local:=True)
Dim sws As Worksheet: Set sws = swb.Sheets(1)
Dim srg As Range, slcell As Range, rCount As Long
With sws.Range(SRC_FIRST_ROW_RANGE)
Set slcell = .Resize(sws.Rows.Count - .Row + 1) _
.Find("*", , xlValues, , xlByRows, xlPrevious)
If slcell Is Nothing Then
swb.Close SaveChanges:=False
MsgBox "No data found in workbook """ & sFilePath & """!", _
vbExclamation
Exit Sub
End If
rCount = slcell.Row - .Row + 1
Set srg = .Resize(rCount)
End With
Dim dwb As Workbook: Set dwb = Workbooks.Open(DST_FILE_PATH)
Dim dws As Worksheet: Set dws = dwb.Sheets(DST_SHEET_NAME)
Dim drg As Range: Set drg = dws.Range(DST_FIRST_CELL) _
.Resize(rCount, srg.Columns.Count)
srg.Copy Destination:=drg
swb.Close SaveChanges:=False
With drg
' Clear below.
.Resize(dws.Rows.Count - .Row - rCount + 1).Offset(rCount).Clear
' Format.
.HorizontalAlignment = xlCenter
.VerticalAlignment = xlCenter
'.EntireColumn.AutoFit
End With
'dwb.Close SaveChanges:=True
Application.ScreenUpdating = True
MsgBox "Raw data copied.", vbInformation
End Sub |
JavaScript: How to use blob URL saved in localStorage after reload |
|javascript|reactjs|next.js| |
1. Are you sure you did not accidentally update to 4.27.**3** or later? I got exactly your problem after I installed version 4.28.0 - see below...
2. You need Hyper-V enabled for this: is it working correctly on your machine? If you are using Windows Home Edition there is no chance: upgrade your Windows to Professional Edition - see maybe [tag:docker-for-windows]?
From my point of view, at this time Docker Desktop (at least version 4.28.0) seems to have a problem with some current Windows 10 setups and updates...
After I uninstalled 4.28.0 and replaced it with a fresh install of Docker Desktop version 4.27.2 (see [Docker Desktop release notes][2]), everything works fine for me with VS 2022 and ASP.NET 8.
... don't update Docker Desktop until this is fixed! ;)
In [GitHub, docker/for-win: ERROR: request returned Internal Server Error for API route and version...][1] there is a hint upgrading the WSL2 which might help too.
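If you want to try the WSL2 route first, the update itself is quick (run from an elevated prompt; assumes the Store-distributed WSL is installed):

```shell
# Update the WSL2 kernel/runtime, then check what is installed
wsl --update
wsl --status
```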
[1]: https://github.com/docker/for-win/issues/13909
[2]: https://docs.docker.com/desktop/release-notes/#4272 |
I have used the Eureka discovery server for setting up the microservices.
Initially, DiscoveryClient was registered under DiscoveryClient_API-GATEWAY/host.docker.internal:api-gateway. Now it has changed to the machine name DiscoveryClient_API-GATEWAY/SL-GGnanaseelan.mshome.net:api-gateway, and it's throwing the following error.
Could you please help with this?
<!-- begin snippet: js hide: false console: false babel: false -->
<!-- language: lang-html -->
2024-03-28T16:24:19.884+05:30 INFO 20080 --- [api-gateway] [ main] com.netflix.discovery.DiscoveryClient : Discovery Client initialized at timestamp 1711623259882 with initial instances count: 0
2024-03-28T16:24:19.886+05:30 INFO 20080 --- [api-gateway] [ main] o.s.c.n.e.s.EurekaServiceRegistry : Registering application API-GATEWAY with eureka with status UP
2024-03-28T16:24:19.887+05:30 INFO 20080 --- [api-gateway] [ main] com.netflix.discovery.DiscoveryClient : Saw local status change event StatusChangeEvent [timestamp=1711623259887, current=UP, previous=STARTING]
2024-03-28T16:24:19.889+05:30 INFO 20080 --- [api-gateway] [nfoReplicator-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_API-GATEWAY/SL-GGnanaseelan.mshome.net:api-gateway: registering service...
2024-03-28T16:24:19.970+05:30 INFO 20080 --- [api-gateway] [nfoReplicator-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_API-GATEWAY/SL-GGnanaseelan.mshome.net:api-gateway - registration status: 204
<!-- end snippet -->
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-html -->
java.net.UnknownHostException: Failed to resolve 'SL-GGnanaseelan.mshome.net' [A(1)] after 2 queries
at io.netty.resolver.dns.DnsResolveContext.finishResolve(DnsResolveContext.java:1125) ~[netty-resolver-dns-4.1.107.Final.jar:4.1.107.Final]
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Error has been observed at the following site(s):
<!-- end snippet -->
Now
[![enter image description here][1]][1]
Earlier
[![enter image description here][2]][2]
[1]: https://i.stack.imgur.com/5g1MO.png
[2]: https://i.stack.imgur.com/j6lkJ.png |
I have some JavaScript that finds all the checked items and arranges the data in memory.
I need to display a modal popup. And, when that popup is closed, I will again need the data I created above.
I'm trying to determine the best way to store that data so I won't need to recreate it. I used to think I could store it in an element using a `data-xxx` attribute. But I've found you cannot read and write these like a variable.
I supposed another approach is to create a variable at a page-level scope. But JavaScript isn't my main language and I'm not sure where a variable will live through different events.
What is a super reliable way to store data so that it will be available later? |
You cannot use the `or` keyword in a `join` clause.
Instead, you can use `||` in a `where` clause to filter out rows that don't satisfy the condition.
See also: https://stackoverflow.com/questions/1159022/linq-join-with-or |
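As a sketch (the `listA`/`listB` collections and their `Id`/`Name` properties are hypothetical): LINQ's `join` only supports equality on a single key or an anonymous composite key, so an OR condition is usually written as a cross join filtered by `where`:

```csharp
// Cross join both sequences, then keep only the pairs that satisfy
// either condition; this is the usual workaround for "join ... or ...".
var result = from a in listA
             from b in listB
             where a.Id == b.Id || a.Name == b.Name
             select new { a, b };
```

Note this enumerates every pair, so for large sequences two separate equijoins combined with `Union` may perform better.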
Your problem is in this for loop. In each iteration of the loop you are getting a random number between 0 and n-1, and there is nothing preventing you from getting the same number on a subsequent iteration. In theory you could end up picking the same element all 9 times.
for (int n = unshuffledCategories.Count; n > 0; --n)
{
int k = r.Next(n);
String temp = unshuffledCategories[k];
shuffledCategories.Add(temp);
}
You could try this instead:
    for (int n = labels.Count; n > 0; --n)
    {
        int k = r.Next(unshuffledCategories.Count);
        String temp = unshuffledCategories[k];
        unshuffledCategories.Remove(temp);
        shuffledCategories.Add(temp);
    }
This way you are removing the item from `unshuffledCategories` each time and picking a random index among the remaining elements, so you won't end up with duplicates.
To work with an `SDDL` (Security Descriptor Definition Language) you first need to know the structure.
From [MS Learn - Security Descriptor String Format](https://learn.microsoft.com/en-us/windows/win32/secauthz/security-descriptor-string-format)
> The format is a null-terminated string with tokens to indicate each of the four main components of a security descriptor:
> * owner (O:),
> * primary group (G:),
> * DACL (D:),
> * and SACL (S:)
**DACL (Discretionary Access Control List)**
A DACL is a list of Access Control Entries (ACEs) that dictate who can access a specific object and what actions they can perform with it.
The term "discretionary" implies that the object’s owner has control over granting access and defining the level of access.
**SACL (System Access Control List)**
A SACL is a set of access control entries (ACEs) that specify the security events to be audited for users or system processes attempting to access an object. These objects can include files, registry keys, or other system resources.
**Structure of the SDDL**
This is a simple example of a Security Descriptor String `SDDL`
~~~
"O:LAG:BUD:(A;;RPWPCCDCLCSWRCWDWOGA;;;S-1-5-32-545)"
~~~
* O:owner_sid
* G:group_sid (primary group)
* D:dacl_flags(string_ace1)(string_ace2)... (string_acen)
* S:sacl_flags(string_ace1)(string_ace2)... (string_acen)
When assigning permissions you are using the `DACL` part of the `SDDL`.
Every entry in a `SDDL` is called an `ACE` (Access Control Entry).
This particular example doesn't have a SACL (S: is missing).
Instead of SIDs for O: and G:, the constants `LA` (Local Administrator) and `BU` (Builtin Users) are used.
See [MS Learn - SID Strings](https://learn.microsoft.com/en-us/windows/win32/secauthz/sid-strings)
~~~
ConvertFrom-SddlString "O:LAG:BUD:(A;;RPWPCCDCLCSWRCWDWOGA;;;S-1-5-32-545)"
Owner : EXAMPLEHOST\Administrator
Group : BUILTIN\Users
DiscretionaryAcl : {BUILTIN\Users: AccessAllowed (ChangePermissions, CreateDirectories,
ExecuteKey, GenericAll, GenericExecute, GenericWrite, ListDirectory,
ReadExtendedAttributes, ReadPermissions, TakeOwnership, Traverse,
WriteData, WriteExtendedAttributes, WriteKey)}
SystemAcl : {}
~~~
Each `ACE-string` in the `DACL` follows the structure of
~~~
ace_type;ace_flags;rights;object_guid;inherit_object_guid;account_sid;(resource_attribute)
~~~
See [MS Learn - ACE Strings](https://learn.microsoft.com/en-us/windows/win32/secauthz/ace-strings)
**Constructing an ACE**
So, we want to add additional `ACE-strings` into the `DACL` (or change, remove or replace).
This might be done by changing the `SDDL` using string manipulation. But I don't know how to integrate the `SDDL` back into a .Net object in that way.
The relevant fields for adding an ACE to a DACL are:
* ace_type: Indicates the type of ACE (e.g., A for access allowed, D for access denied).
* ace_flags: Flags specifying inheritance and other properties.
* rights: Specifies the access rights granted or denied.
* account_sid: The Security Identifier (SID) of the user or group.
The order of these fields matters.
Example ACE string for granting read access to a specific user:
~~~
(A;;GA;;;S-1-5-32-545)
~~~
* A: Access allowed.
* GA: Grant all permissions.
* S-1-5-32-545 (Well known SID for BUILTIN\Users)
Now, when the basics are set, this is where the answers from above make an entry using some .NET magic.
The `RawDescriptor` is already available from the `ConvertFrom-SddlString` cmdlet.
~~~
$sddl = ConvertFrom-SddlString "O:LAG:BUD:(A;;RPWPCCDCLCSWRCWDWOGA;;;S-1-5-32-545)"
$sddl.RawDescriptor
IsContainer : False
IsDS : False
ControlFlags : DiscretionaryAclPresent, SelfRelative
Owner : S-1-5-21-XXXXXXX-500
Group : S-1-5-32-545
SystemAcl :
DiscretionaryAcl : {System.Security.AccessControl.CommonAce}
IsSystemAclCanonical : True
IsDiscretionaryAclCanonical : True
BinaryLength : 96
$sddl.RawDescriptor.DiscretionaryAcl
BinaryLength : 24
AceQualifier : AccessAllowed
IsCallback : False
OpaqueLength : 0
AccessMask : 269353023
SecurityIdentifier : S-1-5-32-545
AceType : AccessAllowed
AceFlags : None
IsInherited : False
InheritanceFlags : None
PropagationFlags : None
AuditFlags : None
~~~
Constructing a bare `ACE` object can be done as above, but it is incomplete, and my knowledge of .NET doesn't permit me to find out what's missing :/
The other option provided above is working with a `SecurityDescriptor` object instead. Which is already provided for us :)
~~~
$sddl.RawDescriptor.DiscretionaryAcl.AddAccess("Allow", "S-1-5-32-546", 268435456,"None","None")
$sddl.RawDescriptor.GetSddlForm([System.Security.AccessControl.AccessControlSections]::All)
O:LAG:BUD:(A;;CCDCLCSWRPWPRCWDWOGA;;;BU)(A;;GA;;;BG)
~~~
See [MS Learn - DiscretionaryAcl.AddAccess Method](https://learn.microsoft.com/en-us/dotnet/api/system.security.accesscontrol.discretionaryacl.addaccess?view=net-8.0)
The "only" thing missing for now in this answer is how to construct the access mask, which lurks in [MS Learn - ObjectAccessRule Class](https://learn.microsoft.com/en-us/dotnet/api/system.security.accesscontrol.objectaccessrule?view=net-8.0)
I will get back on this if I learn how...
(To be continued) |
The algo is the following:
1. Calculate total size of all items
2. Get size of a group
3. Iterate items and push to groups according to the current sum of item sizes
It doesn't handle the case where both groups share an equal amount of an item.
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
const groupNum = 5;
const items=[{title:"Item A",size:5},{title:"Item B",size:18},{title:"Item C",size:10},{title:"Item D",size:12},{title:"Item E",size:4},{title:"Item F",size:3},{title:"Item G",size:1},{title:"Item H",size:2},{title:"Item I",size:9},{title:"Item J",size:9},{title:"Item K",size:11},{title:"Item L",size:2},{title:"Item M",size:4},{title:"Item N",size:16},{title:"Item O",size:38},{title:"Item P",size:11},{title:"Item R",size:2},{title:"Item S",size:8},{title:"Item T",size:4},{title:"Item U",size:3},{title:"Item V",size:14},{title:"Item W",size:3},{title:"Item X",size:7},{title:"Item Y",size:4},{title:"Item Z",size:3},{title:"Item 1",size:1},{title:"Item 2",size:10},{title:"Item 3",size:2},{title:"Item 4",size:5},{title:"Item 5",size:1},{title:"Item 6",size:2},{title:"Item 7",size:10},{title:"Item 8",size:1},{title:"Item 9",size:4},];
console.log(distributeItems(items, groupNum));
function distributeItems(items, groupNum) {
const total = items.reduce((r, item) => r + item.size, 0);
const chunkTotal = total / groupNum | 0;
const groups = Array.from({ length: groupNum }, () => []);
let sum = 0, groupIdx = 0;
for (let i = 0; i < items.length; i++) {
const group = groups[groupIdx];
const item = items[i];
sum += item.size;
if (sum >= chunkTotal) {
const right = chunkTotal - (sum - item.size);
const left = sum = Math.min(chunkTotal, item.size - right);
groupIdx++;
// compare the right edge of the left chunk with the left edge of the right chunk (the next one)
if (right < left && group.length) {
groups[groupIdx].push(item);
} else {
group.push(item);
}
// just fill the last group
if (groupIdx === groupNum - 1) {
while (++i < items.length) groups[groupIdx].push(items[i]);
break;
}
continue;
}
group.push(item);
}
return groups;
}
<!-- end snippet -->
And benchmarking against the A* search suggested in the comments:
```
` Chrome/122
--------------------------------------------------
Alexander 1.00x | x100000 136 137 138 139 140
Mabel 35588.24x | x10 484 488 493 499 519
--------------------------------------------------
https://github.com/silentmantra/benchmark `
```
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
const groupNum = 5;
const items=[{title:"Item A",size:5},{title:"Item B",size:18},{title:"Item C",size:10},{title:"Item D",size:12},{title:"Item E",size:4},{title:"Item F",size:3},{title:"Item G",size:1},{title:"Item H",size:2},{title:"Item I",size:9},{title:"Item J",size:9},{title:"Item K",size:11},{title:"Item L",size:2},{title:"Item M",size:4},{title:"Item N",size:16},{title:"Item O",size:38},{title:"Item P",size:11},{title:"Item R",size:2},{title:"Item S",size:8},{title:"Item T",size:4},{title:"Item U",size:3},{title:"Item V",size:14},{title:"Item W",size:3},{title:"Item X",size:7},{title:"Item Y",size:4},{title:"Item Z",size:3},{title:"Item 1",size:1},{title:"Item 2",size:10},{title:"Item 3",size:2},{title:"Item 4",size:5},{title:"Item 5",size:1},{title:"Item 6",size:2},{title:"Item 7",size:10},{title:"Item 8",size:1},{title:"Item 9",size:4},];
function distributeItems(items, groupNum) {
const total = items.reduce((r, item) => r + item.size, 0);
const chunkTotal = total / groupNum | 0;
const groups = Array.from({ length: groupNum }, () => []);
let sum = 0, groupIdx = 0;
for (let i = 0; i < items.length; i++) {
const group = groups[groupIdx];
const item = items[i];
sum += item.size;
if (sum >= chunkTotal) {
const right = chunkTotal - (sum - item.size);
const left = sum = Math.min(chunkTotal, item.size - right);
groupIdx++;
// compare the right edge of the left chunk with the left edge of the right chunk (the next one)
if (right < left && group.length) {
groups[groupIdx].push(item);
} else {
group.push(item);
}
// just fill the last group
if (groupIdx === groupNum - 1) {
while (++i < items.length) groups[groupIdx].push(items[i]);
break;
}
continue;
}
group.push(item);
}
return groups;
}
// @benchmark Alexander
distributeItems(items, groupNum);
class Balancer {
cost(groupSizes) {
return groupSizes.reduce((cost, size) => cost + Math.pow(size - this.averageSize, 2), 0);
}
searchAStar(currentGroup, groupSizes, remainingItems) {
if (currentGroup === this.groupNum) {
return { cost: this.cost(groupSizes), groups: groupSizes };
}
let bestResult = { cost: Infinity, groups: null };
for (let i = 1; i <= remainingItems.length; i++) {
const newItem = remainingItems.slice(0, i);
const newGroupSizes = groupSizes.slice();
newGroupSizes[currentGroup] += newItem.reduce((total, item) => total + item.size, 0);
const restResult = this.searchAStar(currentGroup + 1, newGroupSizes, remainingItems.slice(i));
const totalResult = {
cost: restResult.cost + this.cost(newGroupSizes),
groups: [newItem].concat(restResult.groups),
};
if (totalResult.cost < bestResult.cost) {
bestResult = totalResult;
}
}
return bestResult;
}
distributeItems(items, groupNum) {
this.items = items;
this.groupNum = groupNum;
this.averageSize = items.reduce((total, item) => total + item.size, 0) / groupNum;
const result = this.searchAStar(0, Array(groupNum).fill(0), items);
return result.groups.slice(0, groupNum);
}
}
// @benchmark Mabel
new Balancer().distributeItems(items, groupNum);
/*@end*/eval(atob('e2xldCBlPWRvY3VtZW50LmJvZHkucXVlcnlTZWxlY3Rvcigic2NyaXB0Iik7aWYoIWUubWF0Y2hlcygiW2JlbmNobWFya10iKSl7bGV0IHQ9ZG9jdW1lbnQuY3JlYXRlRWxlbWVudCgic2NyaXB0Iik7dC5zcmM9Imh0dHBzOi8vY2RuLmpzZGVsaXZyLm5ldC9naC9zaWxlbnRtYW50cmEvYmVuY2htYXJrL2xvYWRlci5qcyIsdC5kZWZlcj0hMCxkb2N1bWVudC5oZWFkLmFwcGVuZENoaWxkKHQpfX0='));
<!-- end snippet -->
> Codebase was like this:
>
> ```
> Thread.Sleep(500);
> ```
> A colleague refactored it to be like this:
>
> ```
> Task.Delay(500).Wait();
> ```
Your colleague's refactoring constitutes neither an improvement nor a regression. Both commands block the current thread for 500 milliseconds. Functionally there is no difference between them. If one is more precise than the other (on the order of a millisecond), it's probably the `Thread.Sleep(500)`, but I haven't tested this theory experimentally.
The mechanism that blocks the current thread is not exactly the same. In the case of the `Thread.Sleep` there is a direct call to the `Interop.Kernel32.Sleep` API ([source code][1]). In the case of `Task.Delay`+`Wait` there is much more going on. It involves the .NET [`TimerQueue`][2] infrastructure, a `ManualResetEventSlim` for blocking the current thread until a signal comes from the said infrastructure, and eventually deep in this infrastructure there should be a `Interop.Kernel32.Sleep` as well that manages the activity of the dedicated .NET thread that ticks the timers. But in the grand scheme of things all these are happening very fast, and you shouldn't see any measurable negative effect in performance by using your colleague's approach.
It is mostly painful for the eyes of the reviewers though, to see a simple line of code replaced by a less simple line of code that does the same thing, but requires significantly more mental effort to comprehend. Including the mental effort associated with querying the intentions behind this unwarranted complexity.
---
**Caution:** As Marc Gravell mentioned [in a comment][3], the `Task.Delay(500).Wait();` relies on the availability of a `ThreadPool` thread to complete the `Task.Delay` task that signals the awakening of the blocked thread. So in a scenario where the `ThreadPool` is saturated, the `Task.Delay(500).Wait();` can become remarkably inaccurate, extending the delay time by many seconds. Theoretically it could even cause a deadlock in a scenario where the `ThreadPool` has reached its [maximum size][4], and the saturation incident can't be resolved by injecting more threads. The maximum size of the `ThreadPool` is normally pretty high (32,767 in my machine for a console app), so this scenario is quite unlikely to occur in practice.
[1]: https://github.com/dotnet/runtime/blob/v8.0.0/src/libraries/System.Private.CoreLib/src/System/Threading/Thread.Windows.cs#L20
[2]: https://github.com/dotnet/runtime/blob/v8.0.0/src/libraries/System.Private.CoreLib/src/System/Threading/Timer.cs
[3]: https://stackoverflow.com/questions/78060027/thread-sleep-vs-task-delay-wait/78064515?noredirect=1#comment137626488_78064515
[4]: https://learn.microsoft.com/en-us/dotnet/api/system.threading.threadpool.getmaxthreads |
Eureka Discovery client is not registered under API-GATEWAY\host.docker.internal
|spring-boot|docker|microservices| |
This is my code to read the csv file asynchronously using the ReadLineAsync() function from the [StreamReader][1] class, but it reads only the first line of the [csv file][2]
private async Task ReadAndSendJointDataFromCSVFileAsync(CancellationToken cancellationToken) {
Stopwatch sw = new Stopwatch();
sw.Start();
string filePath = @"/home/adwait/azure-iot-sdk-csharp/iothub/device/samples/solutions/PnpDeviceSamples/Robot/Data/Robots_data.csv";
using(StreamReader oStreamReader = new StreamReader(File.OpenRead(filePath))) {
string sFileLine = await oStreamReader.ReadLineAsync();
string[] jointDataArray = sFileLine.Split(',');
// Assuming the joint data is processed in parallel
var tasks = new List < Task > ();
// Process joint pose
tasks.Add(Task.Run(async () => {
var jointPose = jointDataArray.Take(7).Select(Convert.ToSingle).ToArray();
var jointPoseJson = JsonSerializer.Serialize(jointPose);
await SendTelemetryAsync("JointPose", jointPoseJson, cancellationToken);
}));
// Process joint velocity
tasks.Add(Task.Run(async () => {
var jointVelocity = jointDataArray.Skip(7).Take(7).Select(Convert.ToSingle).ToArray();
var jointVelocityJson = JsonSerializer.Serialize(jointVelocity);
await SendTelemetryAsync("JointVelocity", jointVelocityJson, cancellationToken);
}));
// Process joint acceleration
tasks.Add(Task.Run(async () => {
var jointAcceleration = jointDataArray.Skip(14).Take(7).Select(Convert.ToSingle).ToArray();
var jointAccelerationJson = JsonSerializer.Serialize(jointAcceleration);
await SendTelemetryAsync("JointAcceleration", jointAccelerationJson, cancellationToken);
}));
// Process external wrench
tasks.Add(Task.Run(async () => {
var externalWrench = jointDataArray.Skip(21).Take(6).Select(Convert.ToSingle).ToArray();
var externalWrenchJson = JsonSerializer.Serialize(externalWrench);
await SendTelemetryAsync("ExternalWrench", externalWrenchJson, cancellationToken);
}));
await Task.WhenAll(tasks);
}
sw.Stop();
_logger.LogDebug(String.Format("Elapsed={0}", sw.Elapsed));
}
Basically, the csv file has 10128 lines. I want to read the latest line which gets added to the csv file.
How do I do it?
Using File.ReadLines(filePath).Last() as the StreamReader path (as in the second code block below) throws this exception
> Unhandled exception. System.IO.PathTooLongException: The path
> '/home/adwait/azure-iot-sdk-csharp/iothub/device/samples/solutions/PnpDeviceSamples/Robot/-2.27625e-06,-0.78542,-3.79241e-06,-2.35622,5.66111e-06,3.14159,0.785408,0.00173646,-0.0015847,0.000962475,-0.00044469,-0.000247682,-0.000270337,0.000704195,0.000477503,0.000466693,-6.50664e-05,0.00112044,-2.47425e-06,0.000445592,-0.000685786,1.21642,-0.853085,-0.586162,-0.357496,-0.688677,0.230229' is too long, or a component of the specified path is too long.
private async Task ReadAndSendJointDataFromCSVFileAsync(CancellationToken cancellationToken) {
Stopwatch sw = new Stopwatch();
sw.Start();
string filePath = @"/home/adwait/azure-iot-sdk-csharp/iothub/device/samples/solutions/PnpDeviceSamples/Robot/Data/Robots_data.csv";
using(StreamReader oStreamReader = new StreamReader(File.ReadLines(filePath).Last())) {
string sFileLine = await oStreamReader.ReadLineAsync();
string[] jointDataArray = sFileLine.Split(',');
// Assuming the joint data is processed in parallel
var tasks = new List < Task > ();
// Process joint pose
tasks.Add(Task.Run(async () => {
var jointPose = jointDataArray.Take(7).Select(Convert.ToSingle).ToArray();
var jointPoseJson = JsonSerializer.Serialize(jointPose);
await SendTelemetryAsync("JointPose", jointPoseJson, cancellationToken);
}));
// Process joint velocity
tasks.Add(Task.Run(async () => {
var jointVelocity = jointDataArray.Skip(7).Take(7).Select(Convert.ToSingle).ToArray();
var jointVelocityJson = JsonSerializer.Serialize(jointVelocity);
await SendTelemetryAsync("JointVelocity", jointVelocityJson, cancellationToken);
}));
// Process joint acceleration
tasks.Add(Task.Run(async () => {
var jointAcceleration = jointDataArray.Skip(14).Take(7).Select(Convert.ToSingle).ToArray();
var jointAccelerationJson = JsonSerializer.Serialize(jointAcceleration);
await SendTelemetryAsync("JointAcceleration", jointAccelerationJson, cancellationToken);
}));
// Process external wrench
tasks.Add(Task.Run(async () => {
var externalWrench = jointDataArray.Skip(21).Take(6).Select(Convert.ToSingle).ToArray();
var externalWrenchJson = JsonSerializer.Serialize(externalWrench);
await SendTelemetryAsync("ExternalWrench", externalWrenchJson, cancellationToken);
}));
await Task.WhenAll(tasks);
}
sw.Stop();
_logger.LogDebug(String.Format("Elapsed={0}", sw.Elapsed));
}
[1]: https://learn.microsoft.com/en-us/dotnet/api/system.io.streamreader?view=net-8.0
[2]: https://github.com/addy1997/azure-iot-sdk-csharp/blob/main/iothub/device/samples/solutions/PnpDeviceSamples/Robot/Data/Robots_data.csv |
Your goal is to make dynamically created HTML elements compatible with the Swiper plugin. To accomplish this, you can detect and initialize Swiper elements as new elements are added to the DOM, using a MutationObserver and event delegation. That way, dynamically created items can also make use of Swiper's capabilities.
var targetNode = document.body;
var config = { childList: true, subtree: true };
var callback = function(mutationsList, observer) {
for(var mutation of mutationsList) {
if (mutation.type === 'childList') {
mutation.addedNodes.forEach(function(node) {
if (node.classList && node.classList.contains('swiper1')) {
var swiper = new Swiper(node, {
spaceBetween: 30,
pagination: {
el: node.parentElement.querySelector('.swiper-pagination1'),
clickable: true,
dynamicBullets: true,
},
navigation: {
nextEl: node.parentElement.querySelector('.swiper-button-next1'),
prevEl: node.parentElement.querySelector('.swiper-button-prev1'),
},
});
}
});
}
}
};
var observer = new MutationObserver(callback);
observer.observe(targetNode, config);
$(document).on('click', '.swiper1', function() {
if (this.swiper) return; // already initialized (e.g. by the observer) — avoid creating duplicate Swiper instances
var swiper = new Swiper(this, {
spaceBetween: 30,
pagination: {
el: $(this).parent().find('.swiper-pagination1'),
clickable: true,
dynamicBullets: true,
},
navigation: {
nextEl: $(this).parent().find('.swiper-button-next1'),
prevEl: $(this).parent().find('.swiper-button-prev1'),
},
});
});
|
I was working on a fullstack MEAN project and I decided to create separate local repos for client and server so that I didn't have to do a merge whenever I changed something on the backend to test it on the frontend. My folder structure is like this:
--MyProject
--client
--.git
--server
--.git
I now want to add the project to GitHub but I don't want to create two separate repositories there. I want a single repository which when cloned will pull the entire project, both frontend and backend. But the repos in client and server should be isolated from each other.
I read about a git feature called submodules but I am unsure if that is what I'm looking for. In every tutorial it's mentioned that submodules are generally used when we want to use some third party repo in our own version controlled project. Also it seems overly complex for this simple task of placing the two repos in a single container on GitHub. |
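For what it's worth, the submodule workflow can be tried out locally before touching GitHub. Below is a minimal sketch using throwaway repos under `/tmp/subdemo` (the paths and repo names are placeholders for your actual `client` and `server` repos):

```shell
# Sketch: a single "container" repo holding two existing repos as submodules.
set -e
rm -rf /tmp/subdemo && mkdir -p /tmp/subdemo && cd /tmp/subdemo

# stand-ins for your two existing local repos
for repo in client server; do
  git init -q "$repo"
  git -C "$repo" -c user.name=demo -c user.email=demo@example.com \
      commit -q --allow-empty -m "initial commit"
done

# the single repo you would push to GitHub
git init -q container
cd container
git -c protocol.file.allow=always submodule add -q /tmp/subdemo/client client
git -c protocol.file.allow=always submodule add -q /tmp/subdemo/server server
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "add client and server as submodules"
cat .gitmodules   # records where each submodule lives
```

Someone cloning the container would then run `git clone --recurse-submodules <url>` to pull everything in one go, while the two inner repos keep their own histories.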
Single Github repository for two local repos for a fullstack project |
As of v17.2, yes, deferrable views are only supported on standalone components.
The Angular team is looking to add support for non-standalone components in v18.
So wait & see. |
I am porting some code that defined `tty_ldisc_ops.ioctl()` as:
````
static int ...ldisc_ioctl(struct tty_struct *tty, struct file *file, unsigned int cmd, unsigned long arg)
````
But the [current spec](https://docs.kernel.org/driver-api/tty/tty_ldisc.html#c.tty_ldisc_ops) is:
````
static int ...ldisc_ioctl(struct tty_struct *tty, unsigned int cmd, unsigned long arg)
````
What happened to the "file" argument? I looked through the change logs and source but couldn't find where or why it was removed.
I've now created a python script to produce a csv file report from Thingsboard, and below I outline the key concepts needed to achieve this. I found the TB documentation a bit light so hope that some of this detail is of help to others.
The process I used was to make a REST API call using the TB python library. This worked very well.
But before cracking on with python, use the swagger interface to test your login and data calls:
https://thingsboard.cloud/swagger-ui/
(for the cloud instance of TB)
Make sure to log in via the Authorize button before making the REST calls.
It is useful to do a first test using a simple method, e.g. device-controller, GET device id:
https://thingsboard.cloud/swagger-ui/#/device-controller/getDeviceByIdUsingGET
This checks that your login and the desired device id are working ok.
By the way, the device id is that listed at the admin dashboard under Entities->Devices->[select device of interest]->[copy device id]
As I want data from a particular device for a timeseries, use the method under the section telemetry-controller, GET timeseries data:
https://thingsboard.cloud/swagger-ui/#/telemetry-controller/getTimeseriesUsingGET
Pay attention that the method asks for the entityType (DEVICE) and also the entityID (what you copied from device id above). You also need to provide the parameter key(s) for the telemetry you want. If you're not sure of the exact key names, go back to the admin dashboard as above and select the Latest Telemetry tab to view the keys that your device is using.
Using this in swagger will check that you can access the device and pull the data from the database for a time period that you want. It's better to do this first than later also having doubts about the python code operation.
Now onto Python:
First, refer to the TB documentation:
https://thingsboard.io/docs/reference/python-rest-client/
install the TB python library to your computer/server with:
pip3 install tb-rest-client
Now start writing a script. The code lines below are NOT a fully working python script, but the key lines you'll need. Recommend starting simple to make sure the login and GET methods are working for you.
# library modules you'll likely need are
from tb_rest_client.rest_client_pe import *
from tb_rest_client.rest import ApiException
import logging
import json
import datetime
import time
import csv
# provide the TB url to use
url = 'https://thingsboard.cloud'
# create a TB object for the code
rest_client = RestClientPE(base_url=url)
try:
    result = rest_client.login(username='your_username', password='your_password')
    current_user = rest_client.get_user()
    print('login result: {}'.format(current_user))
except ApiException as e:
    print('login error: {}'.format(e))
# with a successful login now get data
# annoyed note: the device id is NOT the entity ID!!
# have to provide python object that states the entity type and id
# refer to the swagger page for what is expected by each method
entity_id = EntityId(entity_type='DEVICE', id=TB_creds['device_id'])  # TB_creds is my own dict of credentials; substitute the device id you copied earlier
# make an object for the parameter keys you want to get
keys = "key1,key2,key3" # note: do NOT put spaces between the keys!
start_ts = 1704153600000 # an epoch value in ms to start from e.g. Jan 2 2024
finish_ts = 1704240000000 # an epoch value in ms to finish at e.g. Jan 3 2024
# now use the above with the TB rest api to get timeseries for the selected device, telemetry keys and defined time period
result_js = rest_client.get_timeseries(entity_id, keys, start_ts, finish_ts)
Some other observations to note:
There appears to be a limit on number of lines of data that can be retrieved in one call. That is, 100 time points can be downloaded. More than this is truncated.
Multiple calls may be made at sequential time blocks to overcome the 100 limit. But you will end up with multiple sets of data to parse in order to create a desired output format.
Data is returned in order of most recent first. This can be a bit unexpected if you want data in earliest time first.
Each parameter key is ordered separately, meaning that the resulting data is ordered by key then by timestamp. If you - as I did - want to list by time order then key value, then it is necessary to parse the returned data and create a new object that is ordered by timestamp and then key value.
I won't go into detail on how to parse the data block to create an object that can be saved as csv. But let's now assume you have a data dict object that consists of timestamp followed by a list of telemetry data for each key.
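As a rough sketch of that parsing step (the helper name and the sample payload are mine; the response shape - each key mapping to a list of {'ts': ..., 'value': ...} points, most recent first - matches what I described above):

```python
# Sketch only: pivot the TB telemetry response into rows keyed by timestamp.
def rows_by_timestamp(result_js, key_list):
    """result_js maps each key to a list of {'ts': ..., 'value': ...} points."""
    rows = {}
    for idx, key in enumerate(key_list):
        for point in result_js.get(key, []):
            # one row per timestamp, one column slot per key
            rows.setdefault(point['ts'], [None] * len(key_list))[idx] = point['value']
    # TB returns most recent first; sort so the earliest timestamp comes first
    return dict(sorted(rows.items()))

# hypothetical sample payload with two keys
sample = {
    'key1': [{'ts': 2000, 'value': '1.5'}, {'ts': 1000, 'value': '1.0'}],
    'key2': [{'ts': 2000, 'value': '7'}],
}
data_obj = rows_by_timestamp(sample, ['key1', 'key2'])
# data_obj: {1000: ['1.0', None], 2000: ['1.5', '7']}
```

The resulting `data_obj` is in the timestamp-plus-values shape assumed by the csv-writing step.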
How to write a csv file in python:
file_name = 'some_file_name.csv'
with open(file_name, 'w', newline='') as csvfile:
    try:
        file_obj = csv.writer(csvfile, delimiter=',', quotechar='\'', quoting=csv.QUOTE_MINIMAL)
        file_obj.writerow(["Thingsboard report", time.strftime('%d/%b/%Y %X', start.timetuple())])  # give the report a title line
        file_obj.writerow(["device id", TB_creds['device_id']])  # put device id in the file
        file_obj.writerow(['ts'] + key_list)  # write column headings for ts and key items
        for ts_point, values in data_obj.items():
            file_line = [ts_point] + values
            file_obj.writerow(file_line)
    except:
        print('unable to make csv file')
Feedback or comment on this is welcome and I'll try to answer questions if possible. An upvote is appreciated if you like this answer.
|
You can update your `compilerOptions` to include the specific rules for optional chaining. This will allow you to keep the strict mode on and keep validating the dependencies.
Your `compilerOptions` would look like this:
```
{
"compilerOptions": {
"target": "ESNEXT",
"module": "commonjs",
"strict": true,
"esModuleInterop": true,
"sourceMap": true,
"lib": ["ESNext", "DOM", "ES2020.OptionalChaining", "ES2020.NullishCoalescing"]
}
}
```
(You might need to rebuild your TypeScript project (with `npx tsc`) afterwards for the change to be applied)
|
The answer is unsatisfying: It depends.
You need to incorporate profiling into your development cycle to understand the best way forward — CPU and trace profiles work well for this. We can't easily predict your workload.
> And more importantly, if I had an infinite amount of Cores, there would be NO waiting in any thread/goroutine.
If you're saying the work is entirely CPU-bound, having a number of goroutines equal to the number of CPUs will likely perform best. Then your outer logic sends work to each worker over a channel.
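A minimal sketch of that worker-pool pattern (the squaring here is just a stand-in for your real CPU-bound work, and the worker count is pinned to `runtime.NumCPU()`):

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	workers := runtime.NumCPU() // one goroutine per CPU for CPU-bound work
	jobs := make(chan int)
	results := make(chan int)

	// workers: pull jobs off the channel, do the CPU-bound work
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for n := range jobs {
				results <- n * n // stand-in for the real work
			}
		}()
	}
	// close results once every worker has finished
	go func() { wg.Wait(); close(results) }()

	// outer logic: feed work to the pool
	go func() {
		for i := 1; i <= 100; i++ {
			jobs <- i
		}
		close(jobs)
	}()

	sum := 0
	for r := range results {
		sum += r
	}
	fmt.Println(sum) // sum of squares 1..100 = 338350
}
```

Profiling this kind of structure with a trace will show whether each CPU actually stays busy.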
Trace profiles will show how efficiently work is being scheduled for each CPU. CPU profiling could highlight where the hot-spots are.
There's a talk by Dave Cheney where he shows three profiling techniques, including one where too many goroutines slowed down a CPU-bound program. It's worth a watch: https://www.youtube.com/watch?v=nok0aYiGiYA
|
If the accepted answer does not work, please check the following:
- Check that your env variables are defined in Vercel as they should be
- If they are not, define them there
- Make sure to redeploy, as those env variables are resolved at build time
I've been working on this for hours with little success. Using jQuery, I want to set the subject field to concatenate the values of 2 other dropdown fields. In script.js, I have something like this going. When this dropdown field is updated, it runs through the if conditions to set the reasonforcontact variable, and then set the subject to equal that variable. That works so far. I want to do the same sort of thing with a different dropdown field, set a different variable, and then use the 2 variables to set the subject.
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
$('#request_custom_fields_20094966253837').change(function(){
if ($(this).attr('value') == 'customer_placing_an_order') {
let reasonforcontact = "Placing an Order";
$('#request_subject').val(reasonforcontact);
}
if ($(this).attr('value') == 'customer_website_issues') {
let reasonforcontact = "Website Issues";
$('#request_subject').val(reasonforcontact);
}
});
<!-- end snippet -->
I'm not sure if I need to call different functions or something else. I've tried referring to the other dropdown field value by adding this into the same function, but it doesn't work:
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
if ($('#request_custom_fields_20094966253837').attr('value') == 'orgproduct_collegeitem') {
let orgproduct = "College Item";
$('#request_subject').val(orgproduct);
}
<!-- end snippet -->
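One way to sketch what you're after is to keep the value-to-label mappings in plain objects and compute the subject from both dropdowns in a single helper. The reason-field ID below is the one from the question; the second dropdown's ID and the product values/labels are placeholders you'd swap for your own:

```javascript
// Map option values to the labels wanted in the subject line
const reasons = {
  customer_placing_an_order: 'Placing an Order',
  customer_website_issues: 'Website Issues',
};
const products = {
  orgproduct_collegeitem: 'College Item', // placeholder value/label
};

// Pure helper: combine whichever parts are currently selected
function computeSubject(reasonValue, productValue) {
  return [reasons[reasonValue], products[productValue]]
    .filter(Boolean)
    .join(' - ');
}

// Wire-up with jQuery (the second field ID is hypothetical):
// $('#request_custom_fields_20094966253837, #request_custom_fields_PRODUCT').change(function () {
//   $('#request_subject').val(computeSubject(
//     $('#request_custom_fields_20094966253837').val(),
//     $('#request_custom_fields_PRODUCT').val()
//   ));
// });
```

Binding one `change` handler to both selects avoids duplicating the if-chains per field.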
|
I have a SQL query
```
SELECT
M.EMPLOYEEID, LISTAGG(L.NAME, ',') WITHIN GROUP (ORDER BY L.NAME) AS Locations
FROM
EMPLOYEEOFFICES M
LEFT JOIN
OFFICELIST L ON M.OFFICELISTID = l.officelistid
GROUP BY
M.EMPLOYEEID
```
The Linq equivalent I am trying to run:
```
var empOfficeLocations = (from om in _dbContext.EMPLOYEEOFFICES
join ol in _dbContext.Officelists
on om.Officelistid equals ol.Officelistid into grps
from grp in grps
group grp by om.Employeeid into g
select new
{
EmployeeId = g.Key,
Locations = string.Join(",", g.Select(x => x.Name))
}).ToList();
```
I have tried multiple versions of the above LINQ but haven't had any luck.
The error I get:
> Processing of the LINQ expression 'GroupByShaperExpression:
> KeySelector: e.EMPLOYEEID,
> ElementSelector:ProjectionBindingExpression: EmptyProjectionMember
> by 'RelationalProjectionBindingExpressionVisitor' failed. This may indicate either a bug or a limitation in Entity Framework. See https://go.microsoft.com/fwlink/?linkid=2101433 for more detailed information.
The translation:
```
Compiling query expression:
'DbSet<EMPLOYEEOFFICES>()
.Join(
inner: DbSet<Officelist>(),
outerKeySelector: e => e.Officelistid,
innerKeySelector: o => o.Officelistid,
resultSelector: (e, o) => new {
e = e,
o = o
})
.GroupBy(
keySelector: <>h__TransparentIdentifier0 => <>h__TransparentIdentifier0.e.Employeeid,
elementSelector: <>h__TransparentIdentifier0 => <>h__TransparentIdentifier0.o.Name)
.Select(g => new {
EMPLOYEEID = g.Key,
Locations = string.Join(
separator: ", ",
values: g)
})
```
The result I want is simple:
| EmployeeId | Locations |
| -------- | -------- |
| emp1 | loc1,loc2 |
| emp2 | loc1,loc3,loc4 |
I have searched Stackoverflow, tried chat gpt; the in-memory version seems to work, but try the same code with a db Context and everything falls apart.
The database in use is an Oracle database, version 11g
Microsoft.EntityFrameworkCore v5.0.9
Oracle.EntityFrameworkCore v5.21.4
How can I get this working? I do not wish to do this in-memory. |
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/h1LY6.gif
Kindly refer to the following git project: https://github.com/praveeniroh/LiveActivity/tree/main/LiveActivity
Note: we can update and end a Live Activity without opening our app.
Points to remember:
1. The LiveActivityIntent file must be shared with both the main target and the widget target
2. You have to provide specific UI while updating the state