Looks like something has changed in Docker since services.build.yml was created, as the Dockerfile reference in settings.build.xml isn't being honoured, so rod_licensing_base and rod_licensing_builder weren't being created.
Directly building the two images with `docker build -t rod_licensing/builder - < Dockerfile.build` and `docker build -t rod_licensing/base - < Dockerfile.base` resolved the problem. |
Primitive values such as strings and ints can't easily be registered in the container, because their meaning is ambiguous. The following would not really work:
``` c#
// Would not really work
services.AddSingleton("SomeStringValue");
services.AddSingleton(typeof(IMyService<>), typeof(MyService<>));
```
But you can't register a delegate either with the open generic type:
``` c#
// Would certainly not work
services.AddSingleton(typeof(IMyService<>),
sp => new MyService<???>("SomeStringValue"));
```
So instead, you could register each closed type individually:
``` c#
services.AddSingleton<IMyService<A>>(new MyService<A>("SomeStringValue"));
services.AddSingleton<IMyService<B>>(new MyService<B>("SomeStringValue"));
```
But this might not be convenient, especially if you have many different closed versions, or when new versions are added regularly.
The best solution in this case is, IMO, to remove the ambiguity that strings cause and wrap that string in a wrapper class that can be registered:
``` c#
services.AddSingleton(new MyServiceSettings("SomeStringValue"));
// Here, MyService<T> now depends on MyServiceSettings rather than string
services.AddSingleton(typeof(IMyService<>), typeof(MyService<>));
```
This requires you to change the constructor of `MyService<T>` to accept a `MyServiceSettings` instead of a `string`:
``` c#
public class MyService<T> : IMyService<T>
{
private readonly MyServiceSettings settings;
public MyService(MyServiceSettings settings)
{
this.settings = settings;
}
// use settings.SomeStringValue in the class
}
```
Besides directly injecting the `MyServiceSettings`, ASP.NET Core also allows you to supply an `IOptions<MyServiceSettings>` to the constructor. This is what @Ralf refers to in the comments. As I see it, however, this only adds complexity, because you would be injecting an interface that merely provides access to the `MyServiceSettings` parameter object. That's an extra level of indirection that is in most cases not needed. Still, it's good to know this option exists and to understand the benefits, downsides and consequences of each approach. |
If you are in India and using Jio, then use a VPN.
After trying that, if you get an error something like
```
WARNING: Failed to write executable - trying to use .deleteme logic
ERROR: Could not install packages due to an OSError:
```
then use the `--user` flag:
```
pip install "package name" --user
```
|
I want to make the SPARQL query to the Wikidata Query Service using query via URL-encoded POST (method 2) rather than GET (method 1) since GET queries have a limited length and some queries may be long if a lot of VALUES data are sent. Based on my past experience using query via POST directly (method 3), it has problems with character encoding at the Wikidata Query Service. The three methods for performing SPARQL queries via HTTP are described in the W3C SPARQL 1.1 specification.
I want to use the Python `urllib3` library rather than `requests`, since this code will be part of an AWS Lambda and `requests` is no longer a supported library in the `boto3` SDK. I could import `requests` as a layer, but I would prefer to keep things simple by just using `urllib3`.
I have been making URL-encoded POST HTTP queries using the `requests` library for a long time with no problems. However, when I use the analogous code for the `urllib3` library, I get an error. I am mystified by this behavior, particularly since the `requests` library is just a wrapper over `urllib3`. There must be something that `requests` is adding to the HTTP request that `urllib3` isn't. I've read the docs and looked at examples for making POST requests with `urllib3` and can't see anything I'm missing. I tried URL encoding the query (commented out in the code below), but that didn't make any difference.
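For reference, the `application/x-www-form-urlencoded` body that `requests` builds from `data={'query': ...}` can be reproduced explicitly with the standard library; the pre-encoded string could then be passed to `urllib3` as a raw `body=` argument instead of `fields=` (a sketch of the wire format only, not a full request — note that `urllib3`'s `fields=` parameter defaults to multipart encoding for POST bodies):

```python
from urllib.parse import urlencode

# The same query string used in the examples below
query_string = 'SELECT ?item WHERE {?item wdt:P31 wd:Q146.}LIMIT 10'

# Encode the form field by hand, exactly as requests does for data={...}
body = urlencode({'query': query_string})
print(body)  # starts with 'query=SELECT+%3Fitem'
```

Passing a string built like this as `http.request('POST', url, body=body, headers=requestheader)` keeps the `Content-Type: application/x-www-form-urlencoded` header truthful.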
I queried the Wikidata Query Service SPARQL endpoint using the following Python code and the requests library:
```
import requests
query_string = 'SELECT ?item WHERE {?item wdt:P31 wd:Q146.}LIMIT 10'
requestheader = {
'User-Agent': 'TestAgent/0.1 (mailto:email@domain.com)',
'Accept': 'application/sparql-results+json',
'Content-Type': 'application/x-www-form-urlencoded'
}
response = requests.post('https://query.wikidata.org/sparql', data={'query' : query_string}, headers=requestheader)
print(response.status_code)
print(response.headers)
print(response.text)
```
As expected, I received the following response from the API:
```
200
{'server': 'nginx/1.18.0', 'date': 'Thu, 22 Feb 2024 21:18:02 GMT', 'content-type': 'application/sparql-results+json;charset=utf-8', 'x-first-solution-millis': '1', 'x-served-by': 'wdqs1015', 'access-control-allow-origin': '*', 'cache-control': 'public, max-age=300', 'content-encoding': 'gzip', 'vary': 'Accept, Accept-Encoding', 'age': '0', 'x-cache': 'cp1106 miss, cp1106 pass', 'x-cache-status': 'pass', 'server-timing': 'cache;desc="pass", host;desc="cp1106"', 'strict-transport-security': 'max-age=106384710; includeSubDomains; preload', 'report-to': '{ "group": "wm_nel", "max_age": 604800, "endpoints": [{ "url": "https://intake-logging.wikimedia.org/v1/events?stream=w3c.reportingapi.network_error&schema_uri=/w3c/reportingapi/network_error/1.0.0" }] }', 'nel': '{ "report_to": "wm_nel", "max_age": 604800, "failure_fraction": 0.05, "success_fraction": 0.0}', 'x-client-ip': '129.59.122.76', 'accept-ranges': 'bytes', 'content-length': '217'}
{
"head" : {
"vars" : [ "item" ]
},
"results" : {
"bindings" : [ {
"item" : {
"type" : "uri",
"value" : "http://www.wikidata.org/entity/Q378619"
}
}, {
"item" : {
"type" : "uri",
"value" : "http://www.wikidata.org/entity/Q498787"
}
}, {
"item" : {
"type" : "uri",
"value" : "http://www.wikidata.org/entity/Q677525"
}
}, {
"item" : {
"type" : "uri",
...
}
} ]
}
}
```
However, when I make the analogous request using the urllib3 library I get an error. Code:
```
import urllib3
#import urllib.parse
query_string = 'SELECT ?item WHERE {?item wdt:P31 wd:Q146.}LIMIT 10'
# Try url encoding the query string. I think this isn't necessary because I think urllib3 already does this.
#query_string = urllib.parse.quote(query_string)
#print(query_string)
http = urllib3.PoolManager()
requestheader = {
'User-Agent': 'TestAgent/0.1 (mailto:email@domain.com)',
'Accept': 'application/sparql-results+json',
'Content-Type': 'application/x-www-form-urlencoded'
}
response = http.request('POST', 'https://query.wikidata.org/sparql', fields={'query' : query_string}, headers=requestheader)
print(response.status)
print(response.headers)
print(response.data.decode('utf-8'))
```
Response:
```
405
HTTPHeaderDict({'server': 'nginx/1.18.0', 'date': 'Mon, 26 Feb 2024 13:17:45 GMT', 'content-type': 'text/plain;charset=iso-8859-1', 'x-served-by': 'wdqs1018', 'access-control-allow-origin': '*', 'vary': 'Accept-Encoding', 'age': '0', 'x-cache': 'cp1108 miss, cp1108 pass', 'x-cache-status': 'pass', 'server-timing': 'cache;desc="pass", host;desc="cp1108"', 'strict-transport-security': 'max-age=106384710; includeSubDomains; preload', 'report-to': '{ "group": "wm_nel", "max_age": 604800, "endpoints": [{ "url": "https://intake-logging.wikimedia.org/v1/events?stream=w3c.reportingapi.network_error&schema_uri=/w3c/reportingapi/network_error/1.0.0" }] }', 'nel': '{ "report_to": "wm_nel", "max_age": 604800, "failure_fraction": 0.05, "success_fraction": 0.0}', 'x-client-ip': '166.194.158.40', 'content-length': '13'})
Not writable.
```
I cannot see any problems with the `urllib3` request. The Wikidata Query Service is a public API and no authentication is required. |
Why doesn't a SPARQL POST query to the Wikidata SPARQL endpoint work with the Python urllib3 library when the corresponding requests query does? |
|post|python-requests|sparql|urllib3|wikidata-query-service| |
If you could simplify the problem statement and share what you are actually attempting to solve, you would most likely end up with a better and easier solution.
From what it seems, the following should do what you are looking for (barring the `merging`):
```
Function<Pair<String, String>, Integer> lengthOfKey =
        p -> p.getKey().length();

Collector<Pair<String, String>, ?, Map<String, String>> convertToValueMapWithUpperCase =
        Collectors.toMap(Pair<String, String>::getValue,
                e -> e.getValue().toUpperCase());

Map<Integer, Map<String, String>> groupingAndTransformation = list.stream()
        .collect(Collectors.groupingBy(lengthOfKey, convertToValueMapWithUpperCase));
``` |
An easier way to do this is to install the client package with `pip install superset-api-client` and then adapt this script to your needs:
```
from superset_api_client import Superset
import requests

# Superset API configuration
superset_url = 'http://your-superset-url'
username = 'admin'
password = 'admin_password'

# User details
new_user = {
    'username': 'new_user',
    'firstname': 'New',
    'lastname': 'User',
    'email': 'new_user@example.com',
    'password': 'new_user_password',
}

# Initialize Superset API client
client = Superset(superset_url, username, password)

# Check if the user exists
existing_user = client.get(f'/api/v1/user/?q={new_user["username"]}')

if existing_user:
    print(f"User '{new_user['username']}' already exists.")
else:
    # Create a new user
    response = client.post('/api/v1/user/', json=new_user)
    if response.status_code == requests.codes.created:
        print(f"User '{new_user['username']}' created successfully.")
    else:
        print(f"Failed to create user. Status code: {response.status_code}, Message: {response.text}")
```
|
```
FlutterCarousel(
  options: CarouselOptions(
    physics: const NeverScrollableScrollPhysics(),
    controller: _carouselController,
    onPageChanged: (index, reason) {
      currentView = index + 1;
      // setState is called to update the current page with respect to the current view
      setState(() {});
    },
    height: 50.0,
    indicatorMargin: 10.0,
    showIndicator: true,
    slideIndicator: CircularWaveSlideIndicator(),
    viewportFraction: 0.9,
  ),
  items: swipeList.map((i) {
    return const Text('');
  }).toList(),
),
```
The above code outputs this kind of carousel slider:
(https://i.stack.imgur.com/CY9IY.jpg)
But I would like to change the look of the carousel slider to something like the one below:
(https://i.stack.imgur.com/JcIOz.jpg)
|
I hate to say it, but ChatGPT for the win. Fixed my error. |
In my case, I initialize and use these (complex and expensive) objects in file1, import them into file2, and use them there. This avoids wasting time creating multiple objects in multiple functions across the two files, and it works for me.
```
# utils.py
from fancy_module import Fancy_class

expensive_obj = Fancy_class()

def func1():
    expensive_obj.do_stuff()
```
```
# work.py
from utils import func1, expensive_obj

def work_func():
    expensive_obj.do_other_stuff()
```
However, when I hand it over to my colleague to deploy, he points out that it takes quite some time to run `expensive_obj = Fancy_class()` during import, and it causes trouble in the prod framework we are using (and I cannot change that). He asks me to put it in a getter and use `@lru_cache` to avoid duplication.
```
# utils.py
from functools import lru_cache
from fancy_module import Fancy_class

@lru_cache
def get_expensive_obj():
    return Fancy_class()

def func1():
    expensive_obj = get_expensive_obj()
    expensive_obj.do_stuff()
```
```
# work.py
from utils import func1, get_expensive_obj

def work_func():
    expensive_obj = get_expensive_obj()
    expensive_obj.do_other_stuff()
```
Not knowing exactly how `lru_cache` works, I worry whether it would really avoid duplicating `expensive_obj`. Plus, I need to create a few objects like `expensive_obj` in a few dozen functions similar to `func1()` and `work_func()`. Kind of messy.
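For what it's worth, `@lru_cache` on a zero-argument function does memoize a single instance: every call after the first returns the very same object. A quick self-contained check (using a stand-in class, since `Fancy_class` is not available here):

```python
from functools import lru_cache

class StandInExpensive:
    """Stand-in for an expensive-to-build object."""
    pass

@lru_cache
def get_obj():
    # The body runs only once; the result is cached for all later calls
    return StandInExpensive()

a = get_obj()
b = get_obj()
print(a is b)  # True: both names point at the one cached instance
```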
Are there other solutions that allow me to:
1. share objects between functions across files
2. and avoid expensive initialization at import time
Thanks!! |
How to efficiently share complex and expensive objects between Python files |
|python|performance|object|initialization|python-lru-cache| |
|transformer-model|large-language-model| |
I need to figure out how to insert additional elements into the JSON being returned by SQL Server.
```
Create Table dbo.Project (Id int IDENTITY(1,1), [Description] NVARCHAR(100), [Note] NVARCHAR(100))
Insert Into Project ([Description],[Note]) VALUES('Daphne','Ocala county - Barn')
Insert Into Project ([Description],[Note]) VALUES('Sunny','Riverdon county - Prison')
Insert Into Project ([Description],[Note]) VALUES('Sasha','Sommer county - School')
SELECT (SELECT CAST(Id AS nvarchar) 'ExternalRefNbr', [Description], [Note] FOR JSON PATH, WITHOUT_ARRAY_WRAPPER) FROM Project
```
The Select query gives me this output.
```
{"ExternalRefNbr":"1","Description":"Daphne","Note":"Ocala county - Barn"}
{"ExternalRefNbr":"2","Description":"Sunny","Note":"Riverdon county - Prison"}
{"ExternalRefNbr":"3","Description":"Sasha","Note":"Sommer county - School"}
```
Now I would like to know how to insert another element in this JSON.
The output I would like is
```
{"ExternalRefNbr":{"value":"1"},"Description":{"value":"Daphne"},"Note":{"value":"Ocala county - Barn"}}
{"ExternalRefNbr":{"value":"2"},"Description":{"value":"Sunny"},"Note":{"value":"Riverdon county - Prison"}}
{"ExternalRefNbr":{"value":"3"},"Description":{"value":"Sasha"},"Note":{"value":"Sommer county - School"}}
```
**How do I get the "value" in there? Any suggestions are appreciated.**
|
How do I insert an additional element in the JSON output from SQL Server? |
I'm learning to write microservices in Go and I have created an API endpoint using Gin. My use case is that this endpoint receives an AWS role and then assumes it to access some AWS resource.
I found out that Gin handles each incoming request in a different goroutine, and my concern is whether this role assumption would work correctly in parallel.
I found the following example on how to assume a role on Stackoverflow:
```
stsClient := sts.NewFromConfig(cfg)
provider := stscreds.NewAssumeRoleProvider(stsClient, roleARN)
cfg.Credentials = aws.NewCredentialsCache(provider)
```
I'm currently loading `aws.Config` into a variable, `cfg`, at the start of the microservice in the `main` package and I could pass it to the service layer to share it. My questions are:
* Does every request need to load aws config? I'm concerned about the following line that will change the config credential for each request and leave it in the modified state which might cause issues if `cfg` is passed to API response logic.
```
cfg.Credentials = aws.NewCredentialsCache(provider)
```
* Is the role assumption operation thread safe?
Any guidance is appreciated.
TIA |
**Nuget Package**
https://www.nuget.org/packages/Askmethat.Aspnet.JsonLocalizer/
**Solution**
After some investigation, I finally found an example in the Asp/Localization GitHub repository.
I provide it here for people that want to use a flat JSON without breaking the default culture provider.
**Data :**
**The flat JSON:**
```
[
  {
    "Key": "Hello",
    "LocalizedValue": {
      "fr-FR": "Bonjour",
      "en-US": "Hello"
    }
  }
]
```
**The C# model:**
```
class JsonLocalization
{
    public string Key { get; set; }
    public Dictionary<string, string> LocalizedValue = new Dictionary<string, string>();
}
```
**The Middleware**
**The Factory**
*This is just to have access to the CultureInfo in the StringLocalizer.*
```
public class JsonStringLocalizerFactory : IStringLocalizerFactory
{
    public IStringLocalizer Create(Type resourceSource)
    {
        return new JsonStringLocalizer();
    }

    public IStringLocalizer Create(string baseName, string location)
    {
        return new JsonStringLocalizer();
    }
}
```
**The Localizer**
*The logic to get the data from the JSON file*
```
public class JsonStringLocalizer : IStringLocalizer
{
    List<JsonLocalization> localization = new List<JsonLocalization>();

    public JsonStringLocalizer()
    {
        // read the whole JSON file
        localization = JsonConvert.DeserializeObject<List<JsonLocalization>>(File.ReadAllText(@"localization.json"));
    }

    public LocalizedString this[string name]
    {
        get
        {
            var value = GetString(name);
            return new LocalizedString(name, value ?? name, resourceNotFound: value == null);
        }
    }

    public LocalizedString this[string name, params object[] arguments]
    {
        get
        {
            var format = GetString(name);
            var value = string.Format(format ?? name, arguments);
            return new LocalizedString(name, value, resourceNotFound: format == null);
        }
    }

    public IEnumerable<LocalizedString> GetAllStrings(bool includeParentCultures)
    {
        return localization
            .Where(l => l.LocalizedValue.Keys.Any(lv => lv == CultureInfo.CurrentCulture.Name))
            .Select(l => new LocalizedString(l.Key, l.LocalizedValue[CultureInfo.CurrentCulture.Name], true));
    }

    public IStringLocalizer WithCulture(CultureInfo culture)
    {
        return new JsonStringLocalizer();
    }

    private string GetString(string name)
    {
        var query = localization.Where(l => l.LocalizedValue.Keys.Any(lv => lv == CultureInfo.CurrentCulture.Name));
        var value = query.FirstOrDefault(l => l.Key == name);
        // return null when the key is missing so the indexers can fall back to the name
        return value?.LocalizedValue[CultureInfo.CurrentCulture.Name];
    }
}
```
With this solution you are able to use the basic **IStringLocalizer** in your **Views** and **Controllers**.
Of course if you have a big json file, you can use **IMemoryCache** or **IDistributedMemoryCache** to improve performance.
**EDIT :**
In the application Startup add this lines to use your own implementation :
```
services.AddSingleton<IStringLocalizerFactory, JsonStringLocalizerFactory>();
services.AddSingleton<IStringLocalizer, JsonStringLocalizer>();
services.AddLocalization(options => options.ResourcesPath = "Resources");
```
After that you can configure as you want your globalization preferences.
|
I have the following lua function that creates, connects, and sends information on a udp socket.
```lua
local udp = ngx.socket.udp
local function write_to_socket(conf, bytes)
local sock = udp()
sock:settimeout(conf.timeout)
sock:setpeername(conf.socket_host, conf.socket_port)
sock:send(bytes)
sock:close()
end
```
(I've omitted error handling)
I would like to increase the writing buffer size with an option similar to `so_sndbuf`, but as far as I can tell both [ngx socket](https://github.com/openresty/lua-nginx-module?tab=readme-ov-file#ngxsocketudp) and [lua socket](https://w3.impa.br/~diego/software/luasocket/udp) don't offer this ability. Is there a way to change writing buffer size in lua?
I did look into writing C functions to do the socket logic but this is a small part of an application written in lua so I can't change my entry point to run C with `lua_register`. |
How to change buffer size for lua socket? |
|unix|lua|udp|unix-socket|datagram| |
For SQLAlchemy==2.0.25
```
from sqlalchemy import create_engine
from sqlalchemy.engine import URL

url = URL.create(drivername=drivername,
                 username=youruser,
                 password=yourpass,
                 host=yourhost,
                 database=yourdbname)

engine = create_engine(url, echo=True)

if engine.dialect.has_table(table_name=YOURTABLENAME, connection=engine.connect()):
    print("table exists")
``` |
I have a NodeJS/Express/MongoDB application that allows users into add blog posts using TinyMCE rich text editor. The blog text is stored in a variable called blogPost, with the formatted text displayed on the 'show blog' page.
On the website homepage, I am trying to return a substring of the first blog post (e.g. first 500 characters) as a featured blog post. In my ejs template, if I use:
```
<p><%- blogs[0].blogPost %></p>
```
Then it returns the full blog post, including all the formatting. But if I add a substring to it like this:
```
<p><%- blogs[0].blogPost.substring(0,500) %></p>
```
Then it does not work correctly, as it returns the formatting data as part of the string (e.g. <p class="MsoNormal" style="margin: 0cm; font-size: 12pt; font-family: Calibri etc.).
How can I return only a substring of plain text from the stored value? Thanks |
TinyMCE Return Plain Text from Stored Database Value |
|node.js|mongodb|express|tinymce|ejs| |
I installed and configured SQL Server Reporting Services 2022 on a Windows Server 2022. However, upon accessing the report server portal (native mode configured), the same admin account used to install Reporting Services is denied access to the Report Server configuration page where I can add users and set permissions, i.e. [https://reportserver/ReportServer][1]. The following error is displayed on the web page when running Edge as an administrator and without administrator rights, and also in IE: **"The permissions granted to user 'Domain\UserName' are insufficient for performing this operation. (rsAccessDenied) Get Online Help"**. Please advise what the issue could be. The account is also a sysadmin on SQL Server, and it has RSExecRole for both ReportServer and ReportServerTempDB.
[1]: https://reportserver/ReportServer |
|json|sql-server| |
Text file:
```
{
"term": "ditech process solutions",
"country": "IN",
"action": "get_search_companies",
}
```
Code to read file:
```
import json
input_file_path = "input.txt"
with open(input_file_path) as json_data:
params = json.load(json_data)
```
Documentation:
- https://docs.python.org/3/library/json.html |
|c++|c++11|lambda| |
I am trying to split and rearrange the data in a CSV file. My data looks something like this:
```none
1:100011159-T-G,CDD3-597,GG
1:10002775-GA,CDD3-597,GG
1:100122796-C-T,CDD3-597,TT
1:100152282-CAAA-T,CDD3-597,CC
1:100011159-T-G,CDD3-598,GG
1:100152282-CAAA-T,CDD3-598,CC
```
and I want a table that looks like this:
| ID | 1:100011159-T-G | 1:10002775-GA | 1:100122796-C-T |1:100152282-CAAA-T |
|---------------|-----------------|---------------|------------------|-------------------|
| CDD3-597 | GG | GG | TT | CC |
| CDD3-598 | GG | | | CC |
I have written the following code:
```
import pandas as pd

input_file = "trail_berry.csv"
output_file = "trail_output_result.csv"

# Read the CSV file without header
df = pd.read_csv(input_file, header=None)
print(df[0].str.split(',', n=2, expand=True))

# Extract SNP Name, ID, and Alleles from the data
df[['SNP_Name', 'ID', 'Alleles']] = df[0].str.split(',', n=-1, expand=True)

# Create a new DataFrame with unique SNP_Name values as columns
result_df = pd.DataFrame(columns=df['SNP_Name'].unique(), dtype=str)

# Populate the new DataFrame with ID and Alleles data
for _, row in df.iterrows():
    result_df.at[row['ID'], row['SNP_Name']] = row['Alleles']

# Reset the index
result_df.reset_index(inplace=True)
result_df.rename(columns={'index': 'ID'}, inplace=True)

# Fill NaN values with an appropriate representation (e.g., 'NULL' or '')
result_df = result_df.fillna('NULL')

# Save the result to a new CSV file
result_df.to_csv(output_file, index=False)

# Print a message indicating that the file has been saved
print("Result has been saved to {}".format(output_file))
```
but this has been giving me the following error:
```none
Traceback (most recent call last):
  File "berry_trail.py", line 11, in <module>
    df[['SNP_Name', 'ID', 'Alleles']] = df[0].str.split(',', n=-1, expand=True)
  File "/nas/longleaf/home/svennam/.local/lib/python3.5/site-packages/pandas/core/frame.py", line 3367, in __setitem__
    self._setitem_array(key, value)
  File "/nas/longleaf/home/svennam/.local/lib/python3.5/site-packages/pandas/core/frame.py", line 3389, in _setitem_array
    raise ValueError('Columns must be same length as key')
ValueError: Columns must be same length as key
```
Can someone please help? I am having a hard time figuring this out. Thanks in advance!
|
```
function new_date() {
  var ss = SpreadsheetApp.getActive();
  var rg = ss.getRange("A2:A10");
  var vs = rg.getDisplayValues();
  Logger.log(JSON.stringify(vs));
}
```
DATA:
||A|
|:---:|:---|
|1|COL1|
|2|1/1/2024|
|3|1/2/2024|
|4|1/3/2024|
|5|1/4/2024|
|6|1/5/2024|
|7|1/6/2024|
|8|1/7/2024|
|9|1/8/2024|
|10|1/9/2024|
Execution log:
```
12:40:53 PM  Notice  Execution started
12:40:48 PM  Info    [["1/1/2024"],["1/2/2024"],["1/3/2024"],["1/4/2024"],["1/5/2024"],["1/6/2024"],["1/7/2024"],["1/8/2024"],["1/9/2024"]]
12:40:56 PM  Notice  Execution completed
```
It returns a 2-dimensional array even for just one column. So in your case, if you wish to refer to each cell value, it would be `vs[i][0]`. |
```
insert into `WorkerClone` select * from `Worker`;
``` |
I have set a higher zorder value for the bar plot compared to the gridlines, but the gridlines are still visible over the bars. I have also tried `ax.set_axisbelow(True)`, which is not working. Can anyone explain how to solve the issue?
```
from windrose import WindroseAxes
import pandas as pd
import matplotlib.pyplot as plt
# Sample data
data = {
'WD (Deg)': [45, 90, 135, 180, 225, 270, 315, 0, 45],
'WS (m/s)': [2, 3, 4, 5, 6, 7, 8, 9, 10]}
# Create a DataFrame
df = pd.DataFrame(data)
# Create a WindroseAxes object
ax = WindroseAxes.from_ax()
# Customize the grid
ax.grid(True, linestyle='--', linewidth=2.0, alpha=0.5,
color='grey', zorder = 0)
# Set the axis below the wind rose bars
ax.set_axisbelow(True) ## Not working
# Plot the wind rose bars
ax.bar(df['WD (Deg)'], df['WS (m/s)'], normed=True, opening=0.5, edgecolor='black',
cmap=plt.cm.jet, zorder = 3)
plt.show()
```
I don't understand why this is happening. I want to plot the gridlines below the bar plots. Thanks in advance. |
'zorder' is not working properly in a windrose diagram |
|pandas|dataframe|matplotlib|z-order|windrose| |
If you know that the column exists, you could proceed similarly to pandas:
```
df = pl.DataFrame({'ABC': [1,2,3], 'DEF': [4,5,6],
'XYZ': [7,8,9], 'GHI': [10,11,12]})
out = df[:, :df.columns.index('XYZ')+1]
# or
out = df[:, :df.find_idx_by_name('XYZ')+1]
```
Or, shorter (and more efficient):
```
out = df[:, :'XYZ']
```
Output:
```
shape: (3, 3)
βββββββ¬ββββββ¬ββββββ
β ABC β DEF β XYZ β
β --- β --- β --- β
β i64 β i64 β i64 β
βββββββͺββββββͺββββββ‘
β 1 β 4 β 7 β
β 2 β 5 β 8 β
β 3 β 6 β 9 β
βββββββ΄ββββββ΄ββββββ
``` |
I am using PyTorch's LSTM api, but have a bit of an issue. I'm using an LSTM for a dummy AI model. The task of the model is to return 1 if the previous number is less than the current one.
So for an array like `[0.7, 0.3, 0.9, 0.99]`, the expected outputs are `[1.0, 0.0, 1.0, 1.0]`. The first output should be `1.0` no matter what.
I designed the following network to try this problem:
```py
# network.py
import torch

N_INPUT = 1
N_STACKS = 1
N_HIDDEN = 3
LR = 0.001

class Network(torch.nn.Module):
    # params: self
    def __init__(self):
        super(Network, self).__init__()
        self.lstm = torch.nn.LSTM(
            input_size=N_INPUT,
            hidden_size=N_HIDDEN,
            num_layers=N_STACKS,
        )
        self.linear = torch.nn.Linear(N_HIDDEN, 1)
        self.relu = torch.nn.ReLU()
        self.optim = torch.optim.Adam(self.parameters(), lr=LR)
        self.loss = torch.nn.MSELoss()

    # params: self, predicted, expecteds
    def backprop(self, xs, es):
        # perform backprop
        self.optim.zero_grad()
        l = self.loss(xs, torch.tensor(es))
        l.backward()
        self.optim.step()
        return l

    # params: self, data (as a python array)
    def forward(self, dat):
        out, _ = self.lstm(torch.tensor(dat))
        out = self.relu(out)
        out = self.linear(out)
        return out
```
And I am calling this from this file:
```py
# main.py
import network
import numpy as np

# create a new network
n: network.Network = network.Network()

# create some data
def rand_array():
    # a bunch of random numbers
    a = [[np.random.uniform(0, 1)] for i in range(1000)]
    # now, our expected value is 0 if the previous number is greater, and 1 else
    expected = [0.0 if a[i - 1][0] > a[i][0] else 1.0 for i in range(len(a))]
    expected[0] = 1.0  # make the first element always just 1.0
    return [a, expected]

# a bunch of random arrays
data = [rand_array() for i in range(1000)]

# 100 epochs
for i in range(100):
    for i in data:
        pred = n(i[0])
        loss = n.backprop(pred, i[1])
        print("Loss: {:.5f}".format(loss))
```
Now, when I run this program, I'm just getting a loss around `0.25`, and it isn't really changing once it gets there. I think the model is just picking the average value of `0` and `1` (`0.5`) for each input.
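That plateau is consistent with a constant prediction of 0.5: for targets that are all 0 or 1, the squared error of predicting 0.5 is (0.5)² = 0.25 on every sample, so the MSE is exactly 0.25 regardless of how the targets are mixed. A quick check:

```python
import numpy as np

# Random 0/1 targets, like the expected outputs in the task above
targets = np.random.randint(0, 2, size=1000).astype(float)

# A model that always outputs 0.5 incurs squared error 0.25 on every sample
mse = np.mean((0.5 - targets) ** 2)
print(mse)  # 0.25
```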
This leads me to the belief that the model can't see the previous data; the data is just random numbers (the expected output is based on these random numbers, though), and the model can't remember what happened before.
What is my issue? |
I'm trying to use the auto-focus feature of Android phones to detect object distances using the LENS_FOCUS_DISTANCE parameter, but I'm facing accuracy issues.
- My device: Samsung S22
- LENS_FOCUS_DISTANCE_CALIBRATION parameter: APPROXIMATE
- Autofocus Mode: CONTROL_AF_MODE_CONTINUOUS_PICTURE
I have a few questions:
1. I've noticed that the further an object is from the camera, the greater the discrepancy between the LENS_FOCUS_DISTANCE value and the object's actual depth. I suspect this is due to the small focal length of the phone's camera, resulting in a larger Depth of Field (DoF). Could you share insights on the reasonable range of this discrepancy and how the auto-focus algorithm determines the focus distance within this DoF range?
2. How is the LENS_FOCUS_DISTANCE value calculated? I understand that in auto-focus mode, after focusing is complete and the lens position is fixed, the calculation might involve the imaging formula 1/f = 1/p + 1/q, where LENS_FOCUS_DISTANCE represents 1/p. Is this understanding correct?
According to the Android Developers Documentation:
- The desired distance to the plane of sharpest focus is measured from the lens's frontmost surface.
- This value should be zero for fixed-focus cameras.
- For APPROXIMATE and CALIBRATED devices, the focus metadata is reported in diopters (1/meter), where **`0.0f`** represents focusing at infinity, and increasing positive numbers indicate focusing closer to the camera device.
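Given the diopter convention quoted above, converting a reported focus distance to meters is just a reciprocal (a small helper written for illustration; the function name is mine, not part of the Camera2 API):

```python
def diopters_to_meters(diopters):
    """Convert a LENS_FOCUS_DISTANCE value (diopters, i.e. 1/meters) to meters.

    0.0 diopters means focus at infinity, per the Android docs quoted above.
    """
    if diopters == 0.0:
        return float('inf')
    return 1.0 / diopters

print(diopters_to_meters(2.0))  # 0.5 -> an object in sharpest focus at 50 cm
```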
3. Is using the LENS_FOCUS_DISTANCE parameter as a reference for distance measurement physically sound? Despite potential inaccuracies, this parameter essentially defines the distance from the lens's front to the sharpest focus plane, right?
I originally expected that the return value of LENS_FOCUS_DISTANCE would change according to the distance of my object, thereby providing the correct object depth. However, the outcome did not meet my expectations.
For example, when the object was placed at 50 cm, the average return value was 39.1.
Raw data:
| Wrist | mean | variance |
| --- | --- | --- |
| 10 | 10.2 | 0.171429 |
| 15 | 14.9 | 0.209524 |
| 20 | 18.1 | 0.123810 |
| 25 | 22.3 | 0.380952 |
| 30 | 25.7 | 0.380952 |
| 35 | 28.8 | 0.600000 |
| 40 | 32.5 | 1.695238 |
| 45 | 36.3 | 0.952381 |
| 50 | 39.1 | 0.838095 | |
Understanding LENS_FOCUS_DISTANCE Accuracy and Calculation for Distance Measurement on Samsung S22 |
|android|android-camera|android-camera2|autofocus| |
|java|spring-boot|jvm|instrumentation|javaagents| |
OK, I'll just post more info here in case anyone else ends up here:
* The model needs loading from disk to GPU. This takes CPU time and wall-clock time.
* You need enough GPU VRAM.
* On a bigger GPU, take away the `load_in_8bit` option so you can use all the memory and compute. |
Create an Azure AD application and grant User.Read API permission:

Generate the auth-code by using below endpoint and sign-in with the user account:
```
https://login.microsoftonline.com/TenantID/oauth2/v2.0/authorize?
&client_id=ClientID
&response_type=code
&redirect_uri=https://replyUrlNotSet
&response_mode=query
&scope=https://graph.microsoft.com/.default
&state=12345
```

**You can make use of the below code to get the signed-in user details:**
```csharp
using Microsoft.Graph;
using Azure.Identity;

class Program
{
    static async Task Main(string[] args)
    {
        var scopes = new[] { "User.Read" };
        var tenantId = "TenantID";
        var clientId = "ClientID";
        var clientSecret = "ClientSecret";
        var authorizationCode = "authcodefromabove";

        var options = new AuthorizationCodeCredentialOptions
        {
            AuthorityHost = AzureAuthorityHosts.AzurePublicCloud,
        };

        var authCodeCredential = new AuthorizationCodeCredential(
            tenantId, clientId, clientSecret, authorizationCode, options);

        var graphClient = new GraphServiceClient(authCodeCredential, scopes);

        try
        {
            // Fetch user details using a GET request to Microsoft Graph API
            var result = await graphClient.Me.GetAsync();

            // Output user details
            Console.WriteLine($"User ID: {result.Id}");
            Console.WriteLine($"Display Name: {result.DisplayName}");
            Console.WriteLine($"Email: {result.Mail}");
            Console.WriteLine($"Job Title: {result.JobTitle}");
            // Add more properties as needed
        }
        catch (Exception ex)
        {
            Console.WriteLine($"Error fetching user details: {ex.Message}");
        }
    }
}
```

***Modify the code and use the below to get the details you require:***
```csharp
try
{
    var result = await graphClient.Me
        .GetAsync((requestConfiguration) =>
        {
            requestConfiguration.QueryParameters.Select = new string[] { "displayName", "id", "officeLocation", "givenName", "businessPhones", "jobTitle", "mobilePhone", "preferredLanguage", "surname", "userPrincipalName", "mail" };
        });

    // Output user details
    Console.WriteLine($"User ID: {result.Id}");
    Console.WriteLine($"Display Name: {result.DisplayName}");
    Console.WriteLine($"Email: {result.Mail}");
    Console.WriteLine($"Job Title: {result.JobTitle}");
    Console.WriteLine($"Business Phones: {string.Join(",", result.BusinessPhones)}");
    Console.WriteLine($"Given Name: {result.GivenName}");
    Console.WriteLine($"Mobile Phone: {result.MobilePhone}");
    Console.WriteLine($"Office Location: {result.OfficeLocation}");
    Console.WriteLine($"Preferred Language: {result.PreferredLanguage}");
    Console.WriteLine($"Surname: {result.Surname}");
    Console.WriteLine($"User Principal Name: {result.UserPrincipalName}");
    // Add more properties as needed
}
catch (Exception ex)
{
    Console.WriteLine($"Error fetching user details: {ex.Message}");
}
```
And get a response like below:
[![enter image description here][1]][1]
**UPDATE: To use the interactive browser credential flow, use the code below:**
Enable Public client flow:
[![enter image description here][2]][2]
Add the redirect URI in **Mobile and desktop applications** platform:
[![enter image description here][3]][3]
```csharp
using Microsoft.Graph;
using Azure.Identity;

class Program
{
    static async Task Main(string[] args)
    {
        var scopes = new[] { "User.Read" };
        var tenantId = "TenantID";
        var clientId = "ClientID";

        var options = new InteractiveBrowserCredentialOptions
        {
            TenantId = tenantId,
            ClientId = clientId,
            AuthorityHost = AzureAuthorityHosts.AzurePublicCloud,
            // MUST be http://localhost or http://localhost:PORT
            // See https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/System-Browser-on-.Net-Core
            RedirectUri = new Uri("http://localhost"),
        };

        // https://learn.microsoft.com/dotnet/api/azure.identity.interactivebrowsercredential
        var interactiveCredential = new InteractiveBrowserCredential(options);

        var graphClient = new GraphServiceClient(interactiveCredential, scopes);

        try
        {
            // Fetch user details using a GET request to the Microsoft Graph API
            var result = await graphClient.Me.GetAsync();

            // Output user details
            Console.WriteLine($"User ID: {result.Id}");
            Console.WriteLine($"Display Name: {result.DisplayName}");
            Console.WriteLine($"Email: {result.Mail}");
            Console.WriteLine($"Job Title: {result.JobTitle}");
            // Add more properties as needed
        }
        catch (Exception ex)
        {
            Console.WriteLine($"Error fetching user details: {ex.Message}");
        }
    }
}
```
[![enter image description here][4]][4]
[1]: https://i.stack.imgur.com/bK7Ju.png
[2]: https://i.stack.imgur.com/VjtDS.png
[3]: https://i.stack.imgur.com/ql0W7.png
[4]: https://i.stack.imgur.com/DLHSy.png |
null |
null |
You can use `$setWindowFields` to get the previous and next document for each entry. Afterwards, you can decide whether a document should be included in the result by checking whether any of the dates is in your range:
    db.collection.aggregate([
      {
        "$setWindowFields": {
          "partitionBy": null,
          "sortBy": { "dDate": 1 },
          "output": {
            "prev": { "$first": "$dDate", "window": { "documents": [-1, -1] } },
            "next": { "$first": "$dDate", "window": { "documents": [1, 1] } }
          }
        }
      },
      {
        $set: {
          include: {
            $gt: [
              {
                $size: {
                  $filter: {
                    input: ["$dDate", "$prev", "$next"],
                    cond: {
                      $and: [
                        { $gte: ["$$this", ISODate("2024-01-22T00:00:00Z")] },
                        { $lte: ["$$this", ISODate("2024-01-24T00:00:00Z")] }
                      ]
                    }
                  }
                }
              },
              0
            ]
          }
        }
      },
      { $match: { include: true } },
      { $unset: ["include", "next", "prev"] }
    ])
The above aggregation pipeline first sets the `prev` and `next` fields to the corresponding dates; then it adds a temporary field `include` that is set to `true` if the document should be included in the output. After filtering the documents with a `$match` stage, the temporary fields are removed from the documents so that the result is ready:
    [
      { "_id": 2, "dDate": ISODate("2024-01-11T00:00:00Z") },
      { "_id": 3, "dDate": ISODate("2024-01-22T00:00:00Z") },
      { "_id": 4, "dDate": ISODate("2024-01-23T00:00:00Z") },
      { "_id": 5, "dDate": ISODate("2024-01-24T00:00:00Z") },
      { "_id": 6, "dDate": ISODate("2024-01-30T00:00:00Z") }
    ]
You can check the [mongoplayground here][1].
The pipeline could be optimized a bit by checking the include condition directly in the `$match` stage and omitting the temporary field; however, I hope that the query engine is able to sort this out. I've kept the field for demonstration purposes.
The data needs to be sorted only once, during the `$setWindowFields` stage.
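For intuition, here is the same neighbor-inclusion logic emulated in plain Python (a sketch, not MongoDB code; the `_id: 1` input document is reconstructed from the example, since it is the one the pipeline excludes):

```python
from datetime import datetime

def filter_with_neighbors(docs, lo, hi):
    """Keep a doc if its own date, or its immediate neighbor's date
    in sorted order, falls inside [lo, hi] - mirroring prev/next."""
    docs = sorted(docs, key=lambda d: d["dDate"])
    out = []
    for i, d in enumerate(docs):
        prev_d = docs[i - 1]["dDate"] if i > 0 else None
        next_d = docs[i + 1]["dDate"] if i + 1 < len(docs) else None
        if any(x is not None and lo <= x <= hi
               for x in (d["dDate"], prev_d, next_d)):
            out.append(d)
    return out

docs = [{"_id": i, "dDate": datetime(2024, 1, day)}
        for i, day in enumerate([5, 11, 22, 23, 24, 30], start=1)]
kept = filter_with_neighbors(docs, datetime(2024, 1, 22), datetime(2024, 1, 24))
print([d["_id"] for d in kept])  # [2, 3, 4, 5, 6]
```

Note how `_id: 2` and `_id: 6` survive only because a neighbor's date lies in the range, matching the pipeline output above.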
[1]: https://mongoplayground.net/p/ZHwPvU5eb3k |
I have a data table that looks like this:
| ID | Participação | Pessoa |
| --- | --- | --- |
| 01 | Comunicante | Lucas |
| 01 | Vitima | Lucas |
| 02 | Comunicante | Rafa |
| 02 | Vitima | Vitor |
I want to look like this:
| ID | Comunicante | Vítima |
| --- | --- | --- |
| 01 | Lucas | Lucas |
| 02 | Rafa | Vitor |
Sometimes Comunicante is different from Vítima; because of this I don't want to merge Comunicante and Vítima. I want them in separate columns.
This is what I have so far:
| ID | Comunicante | Vítima |
| --- | --- | --- |
| 01 | Lucas | |
| 01 | | Lucas |
| 02 | Rafa | |
| 02 | | Vitor | |
How to merge duplicate rows with the same ID but different data |
|merge|duplicates|openrefine| |
null |
I have a question that seems simple, but once I start thinking about it, it turns out to be a little complicated...
The well-known PyCaret package only uses pandas, but with the growing popularity of Polars, I'm starting to think about opening a pull request to support Polars. The question is how this should be done.
There are many `pd.DataFrame(...)` lines, and I can't just wrap each one in an if condition that checks the dataset type to decide between `pd.DataFrame(...)` and `pl.DataFrame(...)`.
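To illustrate the kind of boundary conversion I have in mind (a hypothetical helper, not PyCaret code): instead of touching every `pd.DataFrame(...)` call site, convert once at the public entry points so the internals stay pandas-only:

```python
def ensure_pandas(df):
    """Hypothetical helper: convert a Polars frame to pandas at the API
    boundary; pass anything else through unchanged. Duck-typed, so polars
    only needs to be installed when the caller actually passes one."""
    root = type(df).__module__.split(".")[0]
    if root == "polars":
        return df.to_pandas()  # Polars DataFrames expose to_pandas()
    return df
```

Each public entry point would call this once, and the many internal `pd.DataFrame(...)` lines would remain unchanged.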
Does anyone know how to do it? Any ideas? |
How to support Polars in PyCaret? |
|python|pandas|dataframe|python-polars|pycaret| |
I'm using ElasticCloud (also known as Elasticsearch Service) on Google Cloud [[1]](https://www.elastic.co/jp/partners/google-cloud) as a search engine.
In my architecture, an application server is deployed on Google Cloud Run, and it communicates to Elasticsearch deployed by ElasticCloud.
Currently, I'm utilizing Google Cloud Monitoring [[2]](https://cloud.google.com/monitoring/docs) for the observability of the application server, and Stack Monitoring [[3]](https://www.elastic.co/guide/en/kibana/current/xpack-monitoring.html) for Elasticsearch.
I wish to integrate the metrics of Elasticsearch to Google Cloud Monitoring to look into the metrics of the application server and Elasticsearch together on the same dashboard.
However, as far as I investigated, Google Cloud Monitoring offers only two ways to fetch the metrics from Elasticsearch.
1. Installing "Ops agent" to the VM where Elasticsearch is running [[4]](https://cloud.google.com/monitoring/agent/ops-agent/third-party/elasticsearch)
2. Construct a Kubernetes cluster with Elasticsearch Exporter and PodMonitoring custom resource [[5]](https://cloud.google.com/stackdriver/docs/managed-prometheus/exporters/elasticsearch)
The first option seems impossible because I'm not running Elasticsearch on my VM.
The second option seems reasonable but too complicated because my system does not have any Kubernetes cluster.
Is there any other good solution to ship Elasticsearch metrics to Google Cloud Monitoring?
Any knowledge or ideas would help me a lot, thanks! |
How do I ship the metrics of Elasticsearch (CPU utilization, cluster health, and others) on ElasticCloud to Google Cloud Monitoring? |
|elasticsearch|google-cloud-platform|google-cloud-monitoring|elastic-cloud| |
I have this function in amCharts 4 that creates a series for each year of data. Is there a way to add a combo box so that only 3 series are put on the graph, with each combo selecting which year to show? I have more than 10 years of data and I don't want to put it all on the graph.
I use PHP and JavaScript. I get the data from a JSON:

    chart.dataSource.url = "ejemplodatos.php";

Thanks.
    function createAxisAndSeries(field, name, opposite, bullet) {
        var valueAxis = chart.yAxes.push(new am4charts.ValueAxis());
        if (chart.yAxes.indexOf(valueAxis) != 0) {
            valueAxis.syncWithAxis = chart.yAxes.getIndex(0);
        }
        var series = chart.series.push(new am4charts.LineSeries());
        series.dataFields.valueY = field;
        series.dataFields.categoryX = "mesdia";
        series.strokeWidth = 2;
        series.name = name;
        series.tooltipText = "{name}: [bold]{valueY}[/]";
        series.minBulletDistance = 10;
        //series.disabled = true
        var interfaceColors = new am4core.InterfaceColorSet();
        switch (bullet) {
            case "triangle":
                valueAxis.renderer.line.disabled = true; // disables axis line
                valueAxis.renderer.labels.template.disabled = true; // disables labels
                valueAxis.renderer.grid.template.disabled = true; // disables grid
                var bullet = series.bullets.push(new am4charts.Bullet());
                bullet.width = 1;
                bullet.height = 1;
                bullet.horizontalCenter = "middle";
                bullet.verticalCenter = "middle";
                var triangle = bullet.createChild(am4core.Triangle);
                triangle.stroke = interfaceColors.getFor("background");
                triangle.strokeWidth = 2;
                triangle.direction = "top";
                triangle.width = 1;
                triangle.height = 1;
                break;
            default:
                var bullet = series.bullets.push(new am4charts.CircleBullet());
                bullet.circle.stroke = interfaceColors.getFor("background");
                bullet.width = 1;
                bullet.height = 1;
                bullet.circle.strokeWidth = 2;
                break;
        }
        valueAxis.renderer.line.strokeOpacity = 1;
        valueAxis.renderer.line.strokeWidth = 2;
        valueAxis.renderer.line.stroke = series.stroke;
        valueAxis.renderer.labels.template.fill = series.stroke;
        valueAxis.renderer.opposite = opposite;
        valueAxis.renderer.grid.template.disabled = true;
    }

    createAxisAndSeries("a2024", "2024", false, "default");
    createAxisAndSeries("a2023", "2023", false, "default");

    function toggleAxes(ev) {
        var axis = ev.target.yAxis;
        var disabled = true;
        axis.series.each(function(series) {
            if (!series.isHiding && !series.isHidden) {
                disabled = false;
            }
        });
        axis.disabled = disabled;
    }
|
amcharts create only 3 series from combo data |
|javascript|php|mysql|amcharts| |
In our system, we have around 16 microservices that communicate seamlessly using events through a message broker. These microservices use gRPC as their endpoint communication protocol, with an orchestrator facilitating data consensus as needed. Additionally, we have an API gateway responsible for mapping HTTP requests to gRPC and vice versa.
[](https://i.stack.imgur.com/OBy1v.png)
The specific challenge I am grappling with arises from a contractual requirement dictating that our entire service infrastructure must route through another service called "Async." This service receives HTTP requests, converts them into AmqpMessages, and dispatches them to a queue. After processing these messages, the results are returned to the Async service in the form of AmqpMessages, which are then delivered to end users.
[](https://i.stack.imgur.com/3EDFj.png)
The focal point of my inquiry is not the Async service itself, but rather establishing effective communication with it to retrieve messages and delivering processed results back to Async.
My idea is to write a converter as an external layer whose task is converting these messages, but in that case we have the problem of the middleware: how should it be implemented? |
SwiftData, Redundant ModelContainer Instances Causing Data Constraint Loss? |
|swiftui|swift-data| |
So I tried stretching the image, but it didn't work. I'm not sure if that's what I'm meant to do, and I can't find anything online that would help.

    public Image Background = new() { Source = new BitmapImage(new Uri("chessbackground2.jpg", UriKind.Relative)), Stretch = Stretch.Fill, StretchDirection = StretchDirection.Both };

After that I just displayed the image in the window through C# code. |
I have the following code; the RepositoryException will be handled by a global ExceptionHandler.
```
public Category update(Category entity) {
    try {
        em = emf.createEntityManager();
        em.getTransaction().begin();
        Category category = em.find(Category.class, entity.getId());
        if (category == null)
            throw new RepositoryException("category not found");
        category.setName(entity.getName());
        em.getTransaction().commit();
        return entity;
    } catch (PersistenceException e) {
        em.getTransaction().rollback();
        if (e.getCause().getCause() instanceof ConstraintViolationException
                && ((ConstraintViolationException) e.getCause().getCause()).getSQLException().getMessage()
                        .contains("UC_categories_name"))
            throw new RepositoryException("category already exists");
        throw e;
    } finally {
        em.close();
    }
}
```
If you know another way to update entities while minimizing the number of database queries, please share it in the comments. |
Is it bad practice to throw a custom exception that will not be caught by the try-catch block? |
|java|hibernate|exception|jpa| |
null |
IMO:
You need GitHub Actions to do CI for your project.
In this CI I am considering code checks, security checks, lint checks, etc., and packing everything into a zip (for artifact creation), which is then moved into S3 in AWS for audits.
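A minimal sketch of that CI half as a GitHub Actions workflow (the bucket name, region, secrets, and `make` targets are placeholders; adapt them to your project):

```yaml
name: ci
on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # code checks, security checks, lint checks, etc.
      - name: Checks
        run: |
          make lint
          make test

      # pack into a zip for artifact creation
      - name: Package
        run: zip -r app.zip . -x '.git/*'

      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      # this upload is what triggers CodePipeline downstream
      - name: Upload to S3
        run: aws s3 cp app.zip s3://my-deploy-bucket/app.zip
```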
Once it is checked into S3, you can use CodePipeline for CD (CodeBuild too, if you want to run some processing before deploying). Each time a zip is uploaded to S3 it will trigger CodePipeline. CodePipeline also allows you to add a manual approval action, which you can surface in Slack using a Lambda that includes the CodePipeline console link, the execution ID, and the CodeBuild plan so you can see what will be deployed. |
null |
I've set up a GitLab CI configuration intended to trigger pipelines only on the develop and main branches when the commit message starts with "deploy" and the event is a push. Despite this, the pipeline is unexpectedly executing on push events to any branch, including ones not specified in the configuration, such as feature/partially_fill. Here's my current .gitlab-ci.yml setup:
```
workflow:
  rules:
    - if: '$CI_COMMIT_BRANCH == "develop" && $CI_COMMIT_MESSAGE =~ /^deploy.*/ && $CI_PIPELINE_SOURCE == "push"'
    - if: '$CI_COMMIT_BRANCH == "main" && $CI_COMMIT_MESSAGE =~ /^deploy.*/ && $CI_PIPELINE_SOURCE == "push"'

include:
  - local: '/.gitlab-ci-develop.yml'
    rules:
      - if: '$CI_COMMIT_BRANCH == "develop"'
        when: always
  - local: '/.gitlab-ci-main.yml'
    rules:
      - if: '$CI_COMMIT_BRANCH == "main"'
        when: always
```
The intention is clear: pipelines should only run on develop and main branches under specific conditions. However, it's not behaving as expected.
Questions:
1. Why is the GitLab CI pipeline being triggered on branches other than develop and main, despite the explicit conditions set in the workflow rules?
2. Is there a mistake in my configuration that's causing the CI pipeline to ignore the specified branch conditions?
3. How can I adjust my GitLab CI configuration to ensure that pipelines only run on develop and main branches when the commit message starts with "deploy" and the source is a push event?
Any insights or suggestions to correct this behavior would be greatly appreciated. Thank you in advance for your help! |
Communication with "Async" Service in Complex System Architecture |
|c#|.net|c#-4.0|microservices|message-queue| |
null |
I have a set of buttons in a class, and I'm trying to run a function so a piece of HTML will change based on what button was pressed. How would I use if/else statement to change the outcome based on the id of the button?
```
document.querySelector(".pizza").addEventListener('click', function() {
    if (document.getElementById("small").clicked == true) {
        Size_Price = 5;
        document.getElementById("head1").textContent = "Small";
    } else if (document.getElementById("medium").clicked == true) {
        Size_Price = 10;
        document.getElementById("head1").textContent = "Medium";
    }
})
```
This is what I've tried, to no avail.
|
My buttons are in a class; how do I use an if statement based on id? |
|javascript|html|if-statement| |
null |
In .NET 5/.NET 6 we typically get our services from the ServiceProvider directly.
But in some cases this is not possible and we need to create a service factory. We inject the service factory into our services and then, within the service, the service factory creates instances on demand. A public example would be `HttpClientFactory` (for demonstrating the thought).
The usual use case would be if a service needs to have multiple instances of a service and the instances need to be (pre)configured within the using service.
----------
My way to implement this looks as follows:
    // appsettings.json
    "MyServiceFactory": {
      "Instances": {
        "Instance1": { "Option1": "Hi", "Option2": "there" },
        "Instance2": { "Option1": "Thanks", "Option2": "for helping" }
      }
    }

    // the options class to bind
    public class MyServiceFactoryOptions
    {
        public Dictionary<string, MyServiceOptions> Instances { get; set; } = new Dictionary<string, MyServiceOptions>();
    }

    // the factory implementation
    public class MyServiceFactory : IMyServiceFactory
    {
        public readonly ILoggerFactory LoggerFactory;
        public readonly ILogger<MyServiceFactory> Logger;
        public readonly MyServiceFactoryOptions Options;

        public MyServiceFactory(ILogger<MyServiceFactory> logger, IOptions<MyServiceFactoryOptions> options, ILoggerFactory loggerFactory)
        {
            LoggerFactory = loggerFactory ?? throw new ArgumentNullException(nameof(loggerFactory));
            Logger = logger ?? throw new ArgumentNullException(nameof(logger));
            Options = options.Value;
        }

        public IMyService CreateInstance(string name)
        {
            if (Options.Instances.TryGetValue(name, out MyServiceOptions options))
            {
                ILogger<MyService> logger = LoggerFactory.CreateLogger<MyService>();
                IOptions<MyServiceOptions> iOptions = Microsoft.Extensions.Options.Options.Create(options);
                return new MyService(logger, iOptions);
            }
            else
            {
                throw new Exception($"Config is missing instance configuration for {name}");
            }
        }
    }

    // the service registration
    services
        .AddOptions<MyServiceFactoryOptions>()
        .BindConfiguration("MyServiceFactory");
    services.TryAddTransient<MyServiceFactory>();
And the `CreateInstance(string name)` method actually works creating the instances with the given configuration as desired.
----------
The problem rises when `MyService` needs to be disposed when the ASP.NET server is closed.
Typically
1. the (my) service implements `IDisposable`
2. the instance is created via the `ServiceProvider`
3. the instance is injected via the constructor
4. `Dispose` gets called when the server is shut down
In my implementation, `Dispose` seems to never be called.
A hint as to why that might be comes from [Andrew Lock's article - Four ways to dispose IDisposables][1].
As I understand it right now, the problem is that the instances are not created via the `ServiceProvider`, and therefore `Dispose` is not called.
Am I right, and is there a way to get `Dispose` called?
[1]: https://andrewlock.net/four-ways-to-dispose-idisposables-in-asp-net-core/#automatically-disposing-services-leveraging-the-built-in-di-container |
> I wrote the below bash shell script to check whether the input value is a character string or a number (using a mathematical function):
>
>     #!/bin/bash
>     uniq_value=$1
>     if `$(echo "$uniq_value / $uniq_value" | bc)` ; then
>         echo "Given value is number"
>     else
>         echo "Given value is string"
>     fi
>
> The execution result is as follows:
>
>     $ sh -x test.sh abc
>     + uniq_value=abc
>     +++ echo 'abc / abc'
>     +++ bc
>     Runtime error (func=(main), adr=5): Divide by zero
>     + echo 'Given value is number'
>     Given value is number
>
> There is an error: "Runtime error (func=(main), adr=5): Divide by zero".
> Can anyone please suggest how to rectify this error?
> The expected result for the input "abc123xy" should be "Given value is string".
> The expected result for the input "3.045" should be "Given value is number".
> The expected result for the input "6725302" should be "Given value is number".
> After this I will assign a series of values to the "uniq_value" variable in a loop, hence getting the output for this script is very important. |
If I understand the question correctly, you can try it this way:
    const YourComponent = () => {
        const [memberData, setMemberData] = useState([
            {
                created_at: 'NA',
                email: 'test@test.com',
                firstname: 'molly',
                lastname: 'jones',
                groups: [
                    {
                        group_title: ' title',
                        group_id: 4,
                        group_description: 'description',
                    },
                ],
            },
            {
                created_at: 'NA',
                email: 'example@test.com',
                firstname: 'tim',
                lastname: 'smith',
                groups: [
                    {
                        group_title: ' test',
                        group_id: 5,
                        group_description: 'example',
                    },
                ],
            },
        ]);

        const groupId = 5;

        const handleCheckboxToggled = () => {
            setMemberData((prevMemberData) =>
                prevMemberData.map((memberInfo) => {
                    if (memberInfo.groups.find((group) => group.group_id === groupId)) {
                        return {
                            ...memberInfo,
                            groups: memberInfo.groups.filter((group) => group.group_id !== groupId),
                        };
                    }
                    return memberInfo;
                })
            );
        };

        return (
            <div>
                {memberData.map((member, index) => (
                    <div key={index}>
                        <p>{member.email}</p>
                        <input
                            type='checkbox'
                            aria-label='member of the group'
                            checked={member.groups.some((group) => group.group_id === groupId)}
                            onChange={handleCheckboxToggled}
                        />
                    </div>
                ))}
            </div>
        );
    };
|
I wish to hide the tooltip that is shown for the xAxis title, but still be able to show it when the user hovers over the axis labels and data points.
This is my xAxes config for reference:
    [
      {
        "id": "c1",
        "type": "CategoryAxis",
        "title": { "text": "am" },
        "renderer": {
          "line": { "isMeasured": false },
          "grid": { "template": { "disabled": true } },
          "labels": {
            "template": {
              "rotation": -45,
              "horizontalCenter": "right",
              "verticalCenter": "middle",
              "truncate": true,
              "maxWidth": 300,
              "dy": -10,
              "adapter": [ { "key": "text" } ]
            }
          },
          "minGridDistance": 20
        },
        "tooltipText": "",
        "dataFields": { "category": "am" },
        "cursorTooltipEnabled": true
      }
    ]
Initially, the behavior was that every time I hovered over the axis title, the previously hovered data point's or label's value was shown as the title's tooltip. I later changed my tooltipText value to an empty string, which seemed to work. But now, every time one of the data points is 0, either no tooltip is shown or it is shown over the next data point.
[current_behaviour][1]
[1]: https://i.stack.imgur.com/auJo1.png |
Has anyone been able to install https://github.com/HariSekhon/Nagios-Plugins on ARM-based machines? I am looking to install them on Ubuntu 22.04 LTS running on aarch64 architecture. |
Compile or install HariSekhon/Nagios-Plugins on ARM |
|arm|nagios| |
Here's something from https://stackoverflow.com/a/14515846/10348047, an answer to a question that linked here as a duplicate. I found it very useful for an LFS-related issue where none of `git reset --hard`, `git clean -df`, or `git restore --staged .` resolved things.
    ## create a stand-alone, tagged, empty commit
    true | git mktree | xargs git commit-tree | xargs git tag empty

    ## clear the working copy
    git checkout empty

    ## go back to where you were before
    git checkout master   # or whatever branch you were on |
I have found the answer
```
public void AddProduct()
{
    formModel.Items.Add(new FormProduct()
    {
        Id = AvailableProducts.FirstOrDefault().Id,
        Product = AvailableProducts.FirstOrDefault(),
        Quantity = 1,
    });

    // Notify EditContext of the change
    FormEditContext.NotifyFieldChanged(FieldIdentifier.Create(() => formModel.Items));
}
```
```
public void RemoveProduct(FormProduct formProduct)
{
    formModel.Items.Remove(formProduct);

    // Notify EditContext of the change
    FormEditContext.NotifyFieldChanged(FieldIdentifier.Create(() => formModel.Items));
}
``` |
Using this code snippet I create a viewpoint for the current view:
```
internal static Document doc = Autodesk.Navisworks.Api.Application.ActiveDocument;

internal static void CreateViewpoint()
{
    Viewpoint currentViewpoint = doc.CurrentViewpoint.Value;
    SavedViewpoint newViewpoint = new SavedViewpoint(currentViewpoint);
    newViewpoint.DisplayName = "_Вид";
    doc.SavedViewpoints.AddCopy(newViewpoint);
}
```
How to pass the last saved viewpoint to "lastViewpoint"?
```
internal static void GoToLastCreatedViewpoint()
{
    Viewpoint lastViewpoint = new Viewpoint();
    doc.CurrentViewpoint.CopyFrom(lastViewpoint);
}
```
I tried to represent "lastViewpoint" as SavedViewpoint but failed. |
Show last Viewpoint in Autodesk Navisworks |
|autodesk-navisworks| |
null |
To answer your final question; yes, if your agent doing the deployment is in another vNet, that vNet has to be peered to **finish** the deployment. (Or the agent needs to be able to connect to that network.)
You are probably running into the following scenario:
1. Terraform creates the resources
2. Terraform creates the Private Endpoints and they register in the DNS zones
3. All future requests to those resources fail/timeout because they are now getting a private DNS name for the resources and the agent can no longer communicate, even though it initially got a successful web response for the API call to create them.
A couple other points to note:
- The Private Endpoints do not need to be created in the same subscription as the resources. (They must be in the same region.)
- You can have multiple private endpoints for resources
If you didn't want to peer the vNets, you could deploy two private endpoints for each, one in your agent subscription/vNet and the second in the prod sub/vNet. You would need to specify dependencies though using `depends_on` to verify that the PEs in the agent vNet are created first.
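A rough sketch of that dual-endpoint wiring with the azurerm provider (all resource names here are hypothetical; a storage account stands in for "the resource"):

```hcl
resource "azurerm_private_endpoint" "agent" {
  name                = "pe-storage-agent"
  location            = azurerm_resource_group.agent.location
  resource_group_name = azurerm_resource_group.agent.name
  subnet_id           = azurerm_subnet.agent.id

  private_service_connection {
    name                           = "psc-storage-agent"
    private_connection_resource_id = azurerm_storage_account.prod.id
    subresource_names              = ["blob"]
    is_manual_connection           = false
  }
}

resource "azurerm_private_endpoint" "prod" {
  name                = "pe-storage-prod"
  location            = azurerm_resource_group.prod.location
  resource_group_name = azurerm_resource_group.prod.name
  subnet_id           = azurerm_subnet.prod.id

  private_service_connection {
    name                           = "psc-storage-prod"
    private_connection_resource_id = azurerm_storage_account.prod.id
    subresource_names              = ["blob"]
    is_manual_connection           = false
  }

  # ensure the agent-side endpoint exists first, so the agent keeps access
  depends_on = [azurerm_private_endpoint.agent]
}
```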
Otherwise your only other option is to connect your agent to the target vNet somehow.
|
I just compile with -static
/c/mingw64/bin/g++.exe -static -g -o tmp.exe tmp.cpp |
You can use the below `CASE` statement:
    SELECT
        product,
        CASE WHEN quantity_min = 1 THEN 1 ELSE quantity_min * 100 - 99 END AS quantity_min,
        CASE WHEN quantity_min = 1 THEN 99 ELSE quantity_min * 100 - 1 END AS quantity_max,
        price99 AS price
    FROM
        your_table
    UNION
    SELECT
        product,
        CASE WHEN quantity_min = 1 THEN 100 ELSE quantity_min * 100 END AS quantity_min,
        999999 AS quantity_max,
        price100 AS price
    FROM
        your_table; |
|python|django|pandas|dataframe|data-processing| |