``conan remote add``, and Conan remotes in general, only work with Conan repositories, not generic ones. You need to configure a Conan repository on your server and then pass its URL to ``conan remote add myrepo <url>`` (note that in the case of Artifactory this URL is provided by the Set Me Up dialog; it is not the browser URL). |
I have a few months of programming experience with Python and would like to undertake a larger project for academic purposes, where I collect and analyze injury histories of football players.
The data source is a website that tracks such data based on media reports. Here's an example of Kylian Mbappe: https://www.transfermarkt.com/kylian-mbappe/verletzungen/spieler/342229
Using Python code, I want to automate the retrieval of the tables on this website. On the internet there are many tutorials on how to obtain data from AJAX requests using the "requests" and "beautifulsoup" libraries. However, in this case I am unable to proceed because I cannot find the appropriate AJAX request using the developer console.
Can anyone give me some hints on how to start this project right? Do I have to dig deeper into the website's JavaScript?
Kind Regards
I checked for the Fetch/XHR requests in the developer console in my Chrome browser, but did not find the appropriate request.
I found a document request called ajax/, but I don't know if it contains useful information.
The following code returns <Response [404]>
```
import requests
url = "https://www.transfermarkt.de/kylian-mbappe/verletzungen/spieler/342229/"
r = requests.get(url)
print(r)
```
|
How do I scrape data from a JavaScript website using Python? |
|python|web-scraping| |
Jim is right. Option 2 is invalid. However, for some reason, many people prefer to show an _"object"_ in between two actions. This is allowed, but... it is a _notational_ option that _maps_ to the same model as option 1. So, one rectangle connected with arrows to two actions _maps_ in the model to two pins and one object flow. I don't know any tool that supports this, though. Of course, you can use a drawing tool. Just make sure that you don't mix the "object" notation with the pin notation. |
Browsers have some default styles for `input:focus`.
I think you're misinterpreting `outline` as `border`.
You can remove browser's default outline by doing something like this:
```css
.in:focus {
outline: none;
}
```
So the final code will look like this:
```css
.in:focus, .in:hover{
outline: none;
border-width: 1px;
border-color: #002f86 !important;
}
```
But be aware that you should never completely remove focus styles. You should always have distinct styles for `:focus` in order to respect accessibility concerns.
For more information please refer to [this article][1] on MDN website.
[1]: https://developer.mozilla.org/en-US/docs/Web/CSS/:focus#focus_outline_none |
Why does the webpage I created have whitespace on mobile view? |
|html|css| |
It seems like the issue you're encountering is related to the lack of Unicode mapping for a specific Character ID (CID) in the font used in your PDF file. One approach to handle this is to substitute the font or replace it with another font. Unfortunately, directly replacing fonts in a PDF can be a bit complex, and there might not be a straightforward solution in PDFBox or any other tool.
Instead, a common approach is to extract the text content from the PDF and then convert it to Excel. Python provides several libraries that can assist you in achieving this, such as PyPDF2, PyMuPDF (MuPDF), or pdfplumber for extracting text from PDFs, and pandas for working with Excel files.
Here's a basic example using PyPDF2 and pandas:
```
import PyPDF2
import pandas as pd

def extract_text_from_pdf(pdf_path):
    with open(pdf_path, 'rb') as file:
        # Note: PyPDF2 >= 3.0 renamed these APIs to PdfReader,
        # reader.pages and page.extract_text()
        pdf_reader = PyPDF2.PdfFileReader(file)
        text_content = []
        for page_num in range(pdf_reader.numPages):
            page = pdf_reader.getPage(page_num)
            text_content.append(page.extractText())
    return text_content

def save_to_excel(text_content, excel_path):
    df = pd.DataFrame(text_content, columns=['Page Content'])
    df.to_excel(excel_path, index=False)

if __name__ == "__main__":
    pdf_path = 'your_existing.pdf'
    excel_path = 'output_excel_file.xlsx'
    text_content = extract_text_from_pdf(pdf_path)
    save_to_excel(text_content, excel_path)
```
This script extracts text from each page of the PDF and saves it to an Excel file. Keep in mind that the formatting might not be perfect, and you may need to clean up the data in the resulting Excel file manually.
If you have a more complex PDF with tables or specific formatting, you might need a more sophisticated approach, such as using tools like Tabula or Camelot for table extraction. |
Rundeck Jobs fail with any code different from 0. You should "wrap" your deployment in a Rundeck inline-script based job that returns 0 on your condition. Check [this][1] answer.
[1]: https://stackoverflow.com/a/34338630/10426011 |
You can use `combine_first`:
```
In [79]: df['County'] = df['Code'].map(code_to_county).combine_first(df['County'])

In [80]: df
Out[80]:
      Code                          County
0  1202000      Powiat brzeski_Malopolskie
1  2402000          Powiat bielski_Slaskie
2   802000     Powiat krośnieński_Lubuskie
3  3017000  Powiat ostrowski_Wielkopolskie
4  3005000  Powiat grodziski_Wielkopolskie
5  9999999             Powiat ciechanowski
``` |
Access crosstab queries (``TRANSFORM``) require 3 columns for the transformation.
We can add the third column with a subquery.
Query:
```
TRANSFORM First(qrProduct.[ItemV]) AS [First-ItemV]
SELECT qrProduct.[ProductId]
FROM (
SELECT Product.ProductId, Product.Item
, Product.Item as ItemV
FROM Product
)qrProduct
GROUP BY qrProduct.[ProductId]
PIVOT qrProduct.[Item];
```
The output is what you want with the test data above.
But your actual problem is a different one (Too many crosstab column headers (2645)).
Probably, your table looks like this:
|ProductID| Item|
|:--------|:--------|
|Product 1| Item 1-1|
|Product 1| Item 1-2|
|Product 1| Item 1-3|
|Product 2| Item 2-1|
|Product 2| Item 2-2|
|Product 3| Item 3-1|
|Product 3| Item 3-2|
|Product 3| Item 3-3|
Here we add a row_number()-style column, "ItemN..", to pivot on. The crosstab column count is then limited to the maximum number of items per ProductId (not the number of distinct item values).
```
TRANSFORM First(qrProductRanged.[Item]) AS [Min-Item]
SELECT qrProductRanged.[ProductId]
FROM
( SELECT Product.ProductId, Product.Item
,'ItemN' & count(*) as rn
FROM Product LEFT JOIN Product as Product_1
ON Product_1.ProductId = Product.ProductId
and Product_1.Item <= Product.Item
GROUP By Product.ProductId,Product.Item
)as qrProductRanged
GROUP BY qrProductRanged.[ProductId]
PIVOT qrProductRanged.[rn];
```
The subquery yields:
|ProductId| rn| Item|
|:--------|:------|:--------|
|Product 1| ItemN1| Item 1-1|
|Product 1| ItemN2| Item 1-2|
|Product 1| ItemN3| Item 1-3|
|Product 2| ItemN1| Item 2-1|
|Product 2| ItemN2| Item 2-2|
|Product 3| ItemN1| Item 3-1|
|Product 3| ItemN2| Item 3-2|
And the final output:
|ProductId| ItemN1| ItemN2| ItemN3|
|:--------|:--------|:-------|:-------|
|Product 1| Item 1-1|Item 1-2|Item 1-3|
|Product 2| Item 2-1|Item 2-2| |
|Product 3| Item 3-1|Item 3-2| | |
I have created a button to insert a row. Each row I add says "Insert An Expense…".
The problem is that when I click the button and insert the row, the new row only populates where it says B7. If I were to delete "test" in B7, leave it blank, and then click the button, it would get generated under B5 where it says expenses.
I just want a new row to be added so I can list my expenses accordingly. I also attached a screenshot.
Here is my code below:
```
Sub Button17_Click()
    Range("B" & Rows.Count).End(xlUp).Select
    ActiveCell.EntireRow.Insert
    ActiveCell.FormulaR1C1 = "Insert An Expense…"
End Sub
```
["Insert An Expense..." only gets added below the cell that has "test" populated. If the cell is blank, "Insert an Expense" gets populated above the cell that says "expenses" ][1]
[1]: https://i.stack.imgur.com/GYUKf.png |
Creating a budgeting expense sheet - Adding a row with VBA |
Not exactly sure how to ask so I'll give my concrete example.
We have approximately 50 dll files being built by a project, but for a particular application **at least 2** of them are required, but we don't know which files are needed. We want to minimize the number of dll files bundled with the project.
If there was exactly one file needed, we could use divide and conquer. Split the list in half and keep only one half. If it works we can repeat the process with this half. If it doesn't work we can repeat the process on the other half. Divide and conquer works **very** quickly to identify one file.
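That single-file bisection can be sketched as follows (a minimal sketch; `works` is a hypothetical oracle that reports whether the app runs with only the given subset of files present):

```python
def find_single_needed(items, works):
    """Bisect to the one item that is required.

    `works(subset)` is a hypothetical oracle: True if the application
    runs with only that subset of files present.
    """
    candidates = list(items)
    while len(candidates) > 1:
        half = candidates[:len(candidates) // 2]
        # Keep whichever half still satisfies the oracle
        candidates = half if works(half) else candidates[len(candidates) // 2:]
    return candidates[0]
```

With n files this takes about log2(n) trials instead of n, which is why it identifies one file so quickly.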
This doesn't work if 2 (or more) files are required (because if you split the list and one file from each half is needed, it will fail with both halves of the list).
Is there a similar algorithm to divide and conquer which works like this and works quicker than "Eliminate one file. If it works, it's not needed, if it fails, it's needed"?
(I know there are dll dependency checkers and tools and things, but this question is more about the process - another example is finding particular checkins to source control which caused a breakage - finding one breaking change is easy and quick with divide and conquer. If two or more checkins contributed then you can no longer divide and conquer). |
Is there an equivalent to divide and conquer when trying to identify multiple items? |
|divide-and-conquer| |
I was searching for, I believe, the same answer, and with this being the top Google result, the answer I eventually came to is both brilliant and incredibly simple.
Add an extra `\` to the `\n`.
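For illustration, a minimal sketch of this (with a hypothetical `write_row_escaped` helper, using Python's `csv` module):

```python
import csv
import io

def write_row_escaped(row):
    """Write one CSV record, replacing literal newlines with the
    two-character text '\\n' so the record stays on one physical line."""
    buf = io.StringIO()
    csv.writer(buf).writerow([str(field).replace("\n", "\\n") for field in row])
    return buf.getvalue()

record = write_row_escaped(["id-1", "first line\nsecond line"])
# The only newline left in `record` is the record terminator itself
```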
Essentially, any time you add a `\n` to the row, it goes into the output file as a real newline and breaks the CSV format. By changing it to `\\n` you escape the `\` in the `\n`, allowing it to be inserted into the CSV field as text rather than a newline. |
When you don't know the number of columns in the Excel file beforehand and want to use the same column names that Excel uses (A, B, etc.), you can use this option. Inspired from [this answer][1]
```
import string
import pandas as pd

df = pd.read_excel(wb_path, header=None)

def get_excel_col_name(col: int):
    result = []
    while col:
        col, rem = divmod(col - 1, 26)
        result[:0] = string.ascii_uppercase[rem]
    return ''.join(result)

df.columns = [get_excel_col_name(x) for x in range(1, len(df.columns) + 1)]
```
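As a standalone sanity check, the helper reproduces Excel's column lettering:

```python
import string

def get_excel_col_name(col: int) -> str:
    """Convert a 1-based column number to Excel letters (1 -> 'A', 27 -> 'AA')."""
    result = []
    while col:
        col, rem = divmod(col - 1, 26)
        result[:0] = string.ascii_uppercase[rem]
    return ''.join(result)

labels = [get_excel_col_name(n) for n in (1, 26, 27, 52, 703)]
# labels == ['A', 'Z', 'AA', 'AZ', 'AAA']
```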
[1]: https://stackoverflow.com/a/19169180/1019156 |
Hello everyone, I am trying to build an Android application using Jetpack Compose. However, when I try to run the app, an error appears like this:
> Process: com.example.bangkit_recycleview, PID: 7393
java.lang.RuntimeException: Cannot create an instance of class com.example.pokedex.MainViewModel
I am not using Hilt; I just want to use Jetpack Compose, but I am facing this error anyway. I put the details of my code below so you can see it more clearly.
Here is my code:
//MainActivityScreen
```
@Composable
fun MainActivityScreen(viewModel: MainViewModel = viewModel()) {
val context = LocalContext.current
val coroutineScope = rememberCoroutineScope()
val foods = viewModel.foods
LaunchedEffect(key1 = context) {
viewModel.getAllFoods()
}
}
```
//MainViewModel
```
class MainViewModel(private val foodDao: FoodDao) : ViewModel() {
private var _foods: List<FoodEntity> = emptyList()
val foods: List<FoodEntity>
get() = _foods
fun getAllFoods() {
viewModelScope.launch {
_foods = withContext(Dispatchers.IO) {
foodDao.getAllFoods()
}
}
}
fun insertFood(newFood: FoodEntity) {
viewModelScope.launch {
withContext(Dispatchers.IO) {
foodDao.insert(newFood)
}
}
}
}
```
Can anyone help me to find the solution of this problem? |
Cannot create an instance of com.example.project.mainViewModel when using Jetpack Compose |
|android|kotlin|android-jetpack-compose|viewmodel|android-viewmodel| |
I'm just playing around with quickly amending JS settings in Chrome on the fly, without installing add-ons (company policy). In the browser, when I visit the settings page "chrome://settings/content/javascript?search=javascript", the settings load fine and I can toggle.
When I try this headless in Chrome, the settings page doesn't render. My headless settings do use --no-sandbox. Not sure if this has any bearing. Has anyone tried this and overcome it?
I'm using:
Capybara 3.39,
Selenium Webdriver 4.14,
Cucumber 9.1.2,
Ruby 3.2.2 |
Chrome settings don't load in headless chrome - Selenium Ruby |
|ruby|selenium-webdriver|selenium-chromedriver|capybara| |
I want the last item in the 1st column to just have the set gap of the Grid, and not be pushed down so much because of the length of the last item in the 2nd column. How do I achieve this? [Grid column of my services where I show my desired output](https://i.stack.imgur.com/Repyr.png) |
How to use the set gap of the Grid column and not based on the length of an item in the Grid |
|html|css|css-grid| |
For a dataframe like this:
```
df = pd.DataFrame({"CARRIER": [1, 1, 2, 3, 3, 3]})
```
```
CARRIER
1 2
2 1
3 3
```
Use `value_counts`, then reset the index and change the column name:
```
df = df.value_counts().reset_index().rename(columns={'count': 'Count'})
```
```
CARRIER Count
0 1 2
1 2 1
2 3 3
``` |
I have a requirement that consists of, every time the user changes a unit of measure of a material using `MM02`, I have to change the info record. After this, I have to select all purchase orders that contain this material and that are not delivered or partially delivered and update Order unit <--> SKU.
The first part is done, that is update the info record (`EINA` table), but now I'm stuck in the second part of the requirement.
I will show what the user manually does to achieve that.
First, the user enters in `MM02` and makes the change on the field marked in red color:
[![MM02][1]][1]
After he saves, changes are reflected in the info record (`EINA` table) `ME13` transaction.
[![ME13][2]][2]
Now, to have this done in the purchase order, the user enters `ME22N` and opens a purchase order. On the items, he locates that one containing the material changed before, go to info record field, erase its content and press enter and the field Order **unit <--> SKU** is updated. Then, the user enters the info record again and saves the purchase order.
Here is the field before the user makes changes to the item.
[![ME22N][3]][3]
Now the user locates the info record field.
[![Info record][4]][4]
Clear the field and press Enter.
[![Info record][5]][5]
Now how you can see, the field **Order unit <--> SKU** was updated.
[![Field updated][6]][6]
Now, I ask you guys if there is a way to do the same thing the user does in `ME22N`, but using `BAPI_PO_CHANGE`?
I would not like to do this using a batch input, because `ME22N` is an Enjoy transaction and this kind of transaction does not work well with batch input. Additionally, in the old transaction `ME22`, this particular field (info record) is not open for input.
I appreciate any help.
Best regards.
Ronaldo S. Vieira
[1]: https://i.stack.imgur.com/xUzTN.png
[2]: https://i.stack.imgur.com/S9X9M.png
[3]: https://i.stack.imgur.com/oebhs.png
[4]: https://i.stack.imgur.com/FY7KH.png
[5]: https://i.stack.imgur.com/gTnIX.png
[6]: https://i.stack.imgur.com/CODGa.png |
Update info record in purchase orders |
|abap|sap-erp| |
Well, this might not be a foolproof answer, but you can certainly try this. When we use "batching", even though the number of Redis calls is reduced, each batch is expected to take more time to process.
In our case increasing (Fine tune as per your infra) the `RedisConnectionTimeout` and `RedisSyncTimeout` values fixed the issue. FYR - we are using 7000 ms timeout. |
I have a Spring MVC application currently deployed on Tomcat 9, and I'm looking to enhance its session management by integrating Redis. The application currently uses Spring Session with Tomcat's default session management, but we want to switch to Redis for better scalability and persistence.
Here's an overview of our current setup and the changes we're aiming to implement:
**Current Setup:**
```
<!-- Dependencies for Spring Session with Redis -->
<dependency>
    <groupId>org.springframework.session</groupId>
    <artifactId>spring-session-core</artifactId>
    <version>2.7.4</version>
</dependency>
<dependency>
    <groupId>org.springframework.session</groupId>
    <artifactId>spring-session-data-redis</artifactId>
    <version>2.7.4</version>
</dependency>
<dependency>
    <groupId>io.lettuce</groupId>
    <artifactId>lettuce-core</artifactId>
    <version>6.3.1.RELEASE</version>
</dependency>
```
**Objective:**
- Implement Redis-based session management in the Spring MVC application.
- Ensure scalability and persistence of session data using Redis.

**What I've Done So Far:**
- Added necessary dependencies for Spring Session with Redis (using Lettuce as the Redis client).
- Configured Spring Session to use Redis for session management.
**Problem Encountered:**
While I've made progress in configuring Redis for session management, I'm encountering issues with setting up the session ID resolver and configuring the Redis connection properly. When I check the Redis data using redis-cli, I can see the session being created. I'm unsure if my current configuration is correct or if there are additional steps I need to take to ensure everything works smoothly.
**Code Snippet:**
```
@PropertySource(value = "file:${APP_CONFIG}/redis.properties", ignoreResourceNotFound = false)
@EnableRedisHttpSession(redisNamespace = "spring:session:demo", cleanupCron = "0 */5 * * * ?", flushMode = FlushMode.IMMEDIATE, saveMode = SaveMode.ALWAYS)
@Order(Ordered.HIGHEST_PRECEDENCE)
public class SessionConfig extends AbstractHttpSessionApplicationInitializer {

    @Value("${redis.port}")
    private Integer redisPort;

    @Value("${redis.host}")
    private String redisHost;

    @Value("${redis.pass}")
    private String redisPassword;

    private static final Logger log = LoggerFactory.getLogger(SessionConfig.class);

    @Bean
    @Primary
    public LettuceConnectionFactory redisConnectionFactory() {
        log.error("Creating LettuceConnectionFactory with Redis host: {} and port: {}", redisHost, redisPort);
        final RedisStandaloneConfiguration redisStandaloneConfig = new RedisStandaloneConfiguration();
        redisStandaloneConfig.setHostName(redisHost);
        redisStandaloneConfig.setPort(redisPort);
        redisStandaloneConfig.setPassword(redisPassword);
        final LettuceConnectionFactory lettuceConnectionFactory = new LettuceConnectionFactory(redisStandaloneConfig);
        // lettuceConnectionFactory.setValidateConnection(true);
        lettuceConnectionFactory.setEagerInitialization(true);
        lettuceConnectionFactory.afterPropertiesSet();
        return lettuceConnectionFactory;
    }

    @Bean
    @Order(Ordered.HIGHEST_PRECEDENCE)
    public HttpSessionIdResolver httpSessionIdResolver() {
        CookieHttpSessionIdResolver resolver = new CookieHttpSessionIdResolver();
        DefaultCookieSerializer cookieSerializer = new DefaultCookieSerializer();
        cookieSerializer.setCookieName("JSESSIONID");
        cookieSerializer.setCookiePath("/");
        cookieSerializer.setDomainNamePattern("^.+?\\.(\\w+\\.[a-z]+)$");
        cookieSerializer.setUseBase64Encoding(true);
        resolver.setCookieSerializer(cookieSerializer);
        return resolver;
    }
}
```
**Observations:**
- **Multiple JSESSIONID values:** Upon loading the login page in the browser, I notice two JSESSIONID values being generated. One appears to be created by Redis, while the other seems to be associated with Tomcat's session management mechanism.
- **Consistency in Redis session ID:** Upon successful authentication, I've verified that the session ID retrieved from Redis matches the one stored in the database.
- **Inconsistent session retrieval:** In certain sections of my code, when attempting to access the session using RequestContextHolder, I'm obtaining the Tomcat session ID (JSESSIONID value), rather than the one managed by Redis.
In the code below, I get the Redis-generated session ID:
```
@Override
public void onAuthenticationSuccess(HttpServletRequest request, HttpServletResponse response, Authentication authentication) throws IOException, ServletException {
    HttpSession session = request.getSession(true);
    //other codes
}
```
While in the code below, I get the Tomcat session JSESSIONID:
```
ServletRequestAttributes requestAttributes = (ServletRequestAttributes) RequestContextHolder.getRequestAttributes();
HttpSession session = requestAttributes.getRequest().getSession();
```
|
Optimum Time step for Verlet's method to solve Damped Simple Harmonic Motion ODE |
To do it I would use the `roll...` functions from the `{zoo}` package.
``` r
library(dplyr)
library(zoo)
```
First let’s generate some data. Each day in January will be present at least once.
``` r
set.seed(123)
january_dates <- seq(as.Date("2024-01-01"), length.out = 31, by = "day")
duplicate_dates <- sample(
x = seq(as.Date("2024-01-01"), length.out = 31, by = "day"),
size = 29,
replace = TRUE
)
data <- data.frame(
ticket_id = 1:60,
date = c(january_dates, duplicate_dates),
reply_time = sample(1:300, size = 60, replace = TRUE)
)
head(data)
#> ticket_id date reply_time
#> 1 1 2024-01-01 137
#> 2 2 2024-01-02 254
#> 3 3 2024-01-03 211
#> 4 4 2024-01-04 78
#> 5 5 2024-01-05 81
#> 6 6 2024-01-06 43
```
Now let’s calculate the `total_reply_time` and the `number_of_tickets` by day.
``` r
summary <- data |>
arrange(date) |>
summarise(
total_reply_time = sum(reply_time),
number_of_tickets = n(),
.by = date
)
```
The last step is to get the weighted rolling average:
``` r
summary |>
mutate(
rollsum_reply_time = zoo::rollsum(total_reply_time, k = 7, fill = NA, align = "right"),
rollsum_tickerts = zoo::rollsum(number_of_tickets, k = 7, fill = NA, align = "right"),
rolling_average = rollsum_reply_time / rollsum_tickerts
)
#> date total_reply_time number_of_tickets rollsum_reply_time
#> 1 2024-01-01 137 1 NA
#> 2 2024-01-02 254 1 NA
#> 3 2024-01-03 521 3 NA
#> 4 2024-01-04 78 1 NA
#> 5 2024-01-05 380 3 NA
#> 6 2024-01-06 43 1 NA
#> 7 2024-01-07 279 2 1692
#> 8 2024-01-08 117 2 1672
#> 9 2024-01-09 308 2 1726
#> 10 2024-01-10 554 3 1759
#> 11 2024-01-11 27 2 1708
#> 12 2024-01-12 135 1 1463
#> 13 2024-01-13 224 1 1644
#> 14 2024-01-14 448 3 1813
#> 15 2024-01-15 452 2 2148
#> 16 2024-01-16 290 1 2130
#> 17 2024-01-17 69 1 1645
#> 18 2024-01-18 281 2 1899
#> 19 2024-01-19 321 3 2085
#> 20 2024-01-20 132 2 1993
#> 21 2024-01-21 141 1 1686
#> 22 2024-01-22 522 3 1756
#> 23 2024-01-23 153 1 1619
#> 24 2024-01-24 294 1 1844
#> 25 2024-01-25 540 4 2103
#> 26 2024-01-26 231 3 2013
#> 27 2024-01-27 502 3 2383
#> 28 2024-01-28 381 2 2623
#> 29 2024-01-29 83 2 2184
#> 30 2024-01-30 116 1 2147
#> 31 2024-01-31 356 2 2209
#> rollsum_tickerts rolling_average
#> 1 NA NA
#> 2 NA NA
#> 3 NA NA
#> 4 NA NA
#> 5 NA NA
#> 6 NA NA
#> 7 12 141.0000
#> 8 13 128.6154
#> 9 14 123.2857
#> 10 14 125.6429
#> 11 15 113.8667
#> 12 13 112.5385
#> 13 13 126.4615
#> 14 14 129.5000
#> 15 14 153.4286
#> 16 13 163.8462
#> 17 11 149.5455
#> 18 11 172.6364
#> 19 13 160.3846
#> 20 14 142.3571
#> 21 12 140.5000
#> 22 13 135.0769
#> 23 13 124.5385
#> 24 13 141.8462
#> 25 15 140.2000
#> 26 15 134.2000
#> 27 16 148.9375
#> 28 17 154.2941
#> 29 16 136.5000
#> 30 16 134.1875
#> 31 17 129.9412
```
<sup>Created on 2024-03-26 with [reprex v2.0.2](https://reprex.tidyverse.org)</sup>
|
Looking at the raw response, you have stringified JSON, i.e. a JSON string inside JSON.
So you first need to decode the payload as a String, then decode that string according to your own model:
```
do {
let string = try JSONDecoder().decode(String.self, from: data)
let object = try JSONDecoder().decode(UserUndertakingResponse.self, from: Data(string.utf8))
print(object)
} catch {
print("Error decoding JSON: \(error)")
}
```
Your response also has a typo: it's written "htmlDecription" instead of "htmlDescription" (missing an "s"). Either it gets fixed on the API side, or you can map it in your CodingKeys:
```
enum CodingKeys: String, CodingKey {
case errorCode, status, message, utID, showUserUndertaking
case htmlDescription = "htmlDecription"
}
```
Now, unrelated to your issue, but name your variables starting with a lowercase letter:
```
let SchCode = "&SchCode=\(strSchoolCodeSender)"
```
-->
```
let schCode = "&SchCode=\(strSchoolCodeSender)"
```
Also, for constructing the URL, I'd suggest using `URLQueryItem`:
```
let baseURL = APIConstants.basePath + "/Undertaking" // assuming an APIConstants.basePath string constant
var components = URLComponents(string: baseURL)
components?.queryItems = [URLQueryItem(name: "SchCode", value: strSchoolCodeSender),
                          URLQueryItem(name: "UserID", value: String(intUserID))
                          ...
]
guard let url = components?.url else {
    print("Invalid URL")
    return
}
``` |
There are a few things you can do to resolve this; the most viable to me involves an additional data-processing layer.
If you can, create an AWS Glue ETL job to read the Parquet files, rename the columns as needed, and write the data back to S3 in a new location with a compliant schema. You can then create an Athena table pointing to this new location.
This approach involves an additional data processing step and potentially duplicated storage (original and transformed datasets), but it offers the most flexibility in terms of schema management. |
I encountered the same error. If you are using the VS Code terminal, put your tar.gz inside your project directory. It seems VS Code cannot read a file outside the project directory.
I used this command to import:
```
sanity dataset import production.tar.gz development --replace --allow-assets-in-different-dataset
```
|
I need help creating the pulsing circle inside this animation. I've come pretty close with the rest of it, I think; any other help with the code is appreciated!
The gradient I tried didn't make it pulse, but I think the color is good.
[Animation i have to make](https://giphy.com/gifs/RybwWJlLT7r8syjYoF)
[My code so far](https://codepen.io/Central-Attack/pen/XWGyYar)
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-css -->
body {
margin: 0;
padding: 0;
display: flex;
justify-content: center;
align-items: center;
height: 100vh;
background-color: #050210;
}
.circle {
position: relative;
width: 300px;
height: 300px;
border-radius: 50%;
background: linear-gradient(#fb5dad, #55fb9f, #b97aff);
animation: my-animation 2s linear infinite;
}
@keyframes my-animation {
0% {
transform: rotate(0deg);
}
100% {
transform: rotate(360deg);
}
}
.circle span {
position: absolute;
width: 100%;
height: 100%;
border-radius: 50%;
background: linear-gradient(#fb5dad, #55fb9f, #b97aff);
animation-name: my-animation;
animation-duration: infinite;
animation-timing-function: linear;
}
.circle span:nth-child(1) {
filter: blur(5px);
}
.circle span:nth-child(2) {
filter: blur(10px);
}
.circle span:nth-child(3) {
filter: blur(25px);
}
.circle span:nth-child(4) {
filter: blur(50px);
}
.circle::after {
content: "";
position: absolute;
top: 20px;
left: 20px;
right: 20px;
bottom: 20px;
background: radial-gradient(black, rgb(11, 11, 88));
border-radius: 50%;
}
<!-- language: lang-html -->
<div class="circle">
<span></span>
<span></span>
<span></span>
<span></span>
</div>
<!-- end snippet -->
I tried doing things with a 2D transform scale on the gradient, but to no success. |
The "cb() never called!" error was due to a mismatch between the npm package versions in package.json and package-lock.json. Mostly, in cases where package.json had a lower version and package-lock.json a higher one, it threw this error.
To solve the issue, I had to look for such packages and align the versions in both package.json and package-lock.json.
Also, [npm@8.6.0][1] validates package.json and package-lock.json for such mismatches. It is advisable to use npm@8.6.0 or later.
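To find such packages without eyeballing both files, here is a rough sketch (a hypothetical helper, not an npm feature) that compares major versions between package.json and an npm v7+ package-lock.json, both loaded as dicts with `json.load`:

```python
def find_major_mismatches(pkg: dict, lock: dict) -> dict:
    """Rough heuristic: flag direct dependencies whose major version in
    package.json differs from the version pinned in package-lock.json
    (npm v7+ lockfile layout with a top-level "packages" map)."""
    def major(version: str) -> str:
        # Strip common range operators (^, ~, >=, ...) and keep the major part
        return version.lstrip("^~<>=").split(".")[0]

    declared = {**pkg.get("dependencies", {}), **pkg.get("devDependencies", {})}
    pinned = lock.get("packages", {})
    mismatches = {}
    for name, wanted in declared.items():
        entry = pinned.get(f"node_modules/{name}")
        if entry and major(wanted) != major(entry["version"]):
            mismatches[name] = (wanted, entry["version"])
    return mismatches
```

This only catches major-version disagreements; full semver range checking is more involved, which is why letting npm@8.6.0+ do the validation is the safer route.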
[1]: https://github.com/npm/cli/issues/5113 |
If you must use a position, you can capture an index variable by reference in your lambda. Of course, the usual caveats about `pow` and floating point numbers apply, and this is unnecessary to accomplish the task, but it is _possible_.
```
#include <cmath>    // std::pow
#include <numeric>  // std::accumulate

int idx = 0;
// Note: std::accumulate requires an initial value (0 here) before the binary op
int number = std::accumulate(
    arr.begin(), arr.end(), 0,
    [&idx](const int num, const int bit) {
        return num + bit * static_cast<int>(std::pow(2, idx++));
    }
);
```
Sidenote: `arr` is a misleading name for a `std::vector` variable.
Keep in mind also that the lambda you pass to `std::accumulate` should not mutate either of its two arguments, but rather just generate a new value. |
|python|python-3.x|dictionary|sorting| |
I'm having a small problem with my code.
The case is: I have an `<img>` element and a button to browse for a file. When I click the button I can select an image (jpg or png) with a max size of 2 MB.
The "strange" part is this: when I click the button for the first time, nothing shows in the `<img>`, but if I click a second time and choose the same image, the image appears. If I click again and choose another image, the same thing happens.
The question is: how can I resolve this problem?
```
$(".brw").on("click", function() {
var dtid = $(this).attr("data-id");
var fileDialog = $('<input type="file" accept=".png, .jpg">');
fileDialog.click().on("change",function() {
var oName = $(this)[0].files[0].name;
var lastDot = oName.lastIndexOf('.');
var oExt = oName.substring(lastDot + 1);
var oSize = Math.round($(this)[0].files[0].size / 1024);
if(oName == undefined || oName == 0) return;
if(oExt != "jpg" && oExt != "png") return;
if(oSize > 2000) {
$('#in'+dataId).show().delay(3000).fadeOut(300); // Show alert about image size larger than 2mbs
return;
}
var reader = new FileReader();
reader.onload = function(event) {
// Add image to the corresponding <img> element
$('#' + dtid).attr("src", event.target.result);
// Get the size of the selected image, adjust the aspect ratio and then modify the size of the corresponding <img> element
var newimg = new Image();
newimg.src = event.target.result;
var xheight = newimg.height;
var xwidth = newimg.width;
var ratio = Math.min(1, 320 / xwidth, 300 / xheight);
$('#'+dtid).css({width:xwidth * ratio, height:xheight * ratio});
// -----------------------------------
};
reader.onerror = function(event) {
alert("I AM ERROR: " + event.target.error.code);
};
reader.readAsDataURL($(this)[0].files[0]);
console.log('Trigger : ' + dtid);
console.log('Nome : ' + oName);
console.log('Extensão : ' + oExt);
console.log('Tamanho : ' + oSize);
});
return false;
});
```
The `<img>` element:
```
<center><img src="imgs/n.png" width="350px" height="300px" style="border: 2px dashed red" id="i1" alt=""/></center>
<input type="button" value="Browse..." class="imgs brw" data-id="i1" />
```
Thank you
So, the code works, but only when I select the image the second time.
What I expect is to click the button the first time and have the image appear in the `<img>` element. |
I'm trying to use [i-Code CNES][1] for static code analysis on Fortran projects. After downloading the CLI tool from the [4.1.2 release][2] and attempting to run the `icode.bat` script, I encounter the following error:
```
Unrecognized option: -
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
```
I'm not very familiar with debugging cmd-batch scripts, and the error message does not specify the problematic line or option. The batch script is from their GitHub repository, available [here][3].
<!-- language-all: lang-bat -->
~~~
@REM i-Code CNES Startup Script for Windows
@REM
@REM Required ENV vars:
@REM JAVA_HOME - location of a JDK home dir
@echo off
set ERROR_CODE=0
@REM set local scope for the variables with windows NT shell
@setlocal
set "scriptdir=%~dp0"
if #%scriptdir:~-1%# == #\# set scriptdir=%scriptdir:~0,-1%
set "ICODE_HOME=%scriptdir%"
@REM ==== START VALIDATION ====
@REM *** JAVA EXEC VALIDATION ***
if not "%JAVA_HOME%" == "" goto foundJavaHome
for %%i in (java.exe) do set JAVA_EXEC=%%~$PATH:i
if not "%JAVA_EXEC%" == "" (
set JAVA_EXEC="%JAVA_EXEC%"
goto OkJava
)
if not "%JAVA_EXEC%" == "" goto OkJava
echo.
echo ERROR: JAVA_HOME not found in your environment, and no Java
echo executable present in the PATH.
echo Please set the JAVA_HOME variable in your environment to match the
echo location of your Java installation, or add "java.exe" to the PATH
echo.
goto error
:foundJavaHome
if EXIST "%JAVA_HOME%\bin\java.exe" goto foundJavaExeFromJavaHome
echo.
echo ERROR: JAVA_HOME exists but does not point to a valid Java home
echo folder. No "\bin\java.exe" file can be found there.
echo.
goto error
:foundJavaExeFromJavaHome
set JAVA_EXEC="%JAVA_HOME%\bin\java.exe"
:OkJava
goto run
@REM ==== START RUN ====
:run
set PROJECT_HOME=%CD%
@REM remove trailing backslash, see https://groups.google.com/d/msg/sonarqube/wi7u-CyV_tc/3u9UKRmABQAJ
IF %PROJECT_HOME:~-1% == \ SET PROJECT_HOME=%PROJECT_HOME:~0,-1%
%JAVA_EXEC% -Djava.awt.headless=true -XX:-UseGCOverheadLimit -Xms1024M -Xmx1024M -cp %ICODE_HOME%\*;%ICODE_HOME%\plugins\* fr.cnes.icode.application.ICodeApplication %*
if ERRORLEVEL 1 goto error
goto end
:error
set ERROR_CODE=1
@REM ==== END EXECUTION ====
:end
@REM set local scope for the variables with windows NT shell
@endlocal & set ERROR_CODE=%ERROR_CODE%
@REM see http://code-bear.com/bearlog/2007/06/01/getting-the-exit-code-from-a-batch-file-that-is-run-from-a-python-program/
goto exit
:returncode
exit /B %1
:exit
call :returncode %ERROR_CODE%
~~~
### What I've Tried:
- Ensuring that `JAVA_HOME` is set correctly and points to a valid JDK installation.
- Verifying that `java.exe` is accessible from the command line and the correct version is being used.
- Reading through the batch script to identify any obvious syntax issues, especially around JVM options.
The line that seems to be causing the issue involves setting JVM options for memory and classpath:
```
%JAVA_EXEC% -Djava.awt.headless=true -XX:-UseGCOverheadLimit -Xms1024M -Xmx1024M -cp %ICODE_HOME%\*;%ICODE_HOME%\plugins\* fr.cnes.icode.application.ICodeApplication %*
```
I suspect the error might be related to the JVM options, particularly `-XX:-UseGCOverheadLimit`, but I'm not sure.
### Questions:
1. How can I determine the exact cause of the "Unrecognized option" error in this context?
2. Is there a way to debug cmd-batch scripts to pinpoint where the error occurs?
3. Has anyone encountered a similar issue with i-Code CNES or JVM options in batch scripts and knows how to resolve it?
Any insights or suggestions on troubleshooting and fixing this issue would be greatly appreciated!
[1]: https://github.com/cnescatlab/i-CodeCNES
[2]: https://github.com/cnescatlab/i-CodeCNES/releases/download/4.1.2/icode-4.1.2.zip
[3]: https://github.com/cnescatlab/i-CodeCNES/blob/5ce536ed06538832089490ec10320ca046c1b83a/icode-app/src/main/scripts/icode.bat#L4 |
Error Running i-Code CNES Batch Script for Fortran Analysis: "Unrecognized option" |
|java|windows|batch-file|cmd| |
**[stackprotector's helpful answer](https://stackoverflow.com/a/78171751/45375) is definitely the best solution**: it wraps `Remove-Item` in _all_ aspects, including tab-completion, streaming pipeline-input support, and integration with [`Get-Help`](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/get-help):
To offer a **simpler - but limited - alternative**, which is **easier to author**, however:
```
Remove-Item -ErrorAction Ignore Alias:rm # remove the built-in `rm` alias
# Simple (non-advanced) wrapper function
function rm {
<#
.SYNOPSIS
Wrapper for Remove-Item with support for -rf to mean -Recurse -Force
#>
param([switch] $rf)
$extraArgs = if ($rf) { @{ Recurse=$true; Force=$true } } else { @{} }
if ($MyInvocation.ExpectingInput) { # pipeline input present
$input | Remove-Item @args @extraArgs
} else {
Remove-Item @args @extraArgs
}
}
```
Limitations:
* No tab-completion support for `Remove-Item`'s parameters (you do get it for `-rf`, and by default the files and directories in the current directory also tab-complete)
* Only simple `Get-Help` / `-?` output, focused on `-rf` only (conversely, however, forwarding the help to `Remove-Item`, as in the proxy-function approach, doesn't describe the custom `-rf` parameter at all).
* While pipeline input is supported, it is collected in full up front, unlike with the proxy-function approach.
Note:
* The solution relies on [splatting](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Splatting) to pass arguments.
* In *non*-[advanced](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Functions_Advanced) functions, such as in this case, the [automatic `$args` variable](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Automatic_Variables#args) contains all arguments that weren't bound to declared parameters, and `@args` passes them through via splatting.<sup>[1]</sup>
* `$MyInvocation.ExpectingInput` indicates whether pipeline input is present, which can then be enumerated via the [automatic `$input` variable](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Automatic_Variables#input); as noted, doing so requires collecting all input objects in memory first, given that functions without a `process` block are only executed _after_ all pipeline input has been received (i.e., they execute as if their body were in an `end` block).
---
<sup>[1] Note that the automatic `$args` variable contains an _array_, and while splatting with an array - as opposed to with a [hashtable](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Hash_Tables) - normally doesn't support passing _named_ arguments through (those preceded with their target parameter name, e.g. `-Path *.txt`), `$args` has magic built into it that supports that too. For the details of this magic, see the footnote of [this answer](https://stackoverflow.com/a/71037345/45375).</sup> |
It ended up being caused by the Java SDK being too old. After updating the SDK to version 21, the issue was resolved. |
My approach at the start of the question was wrong. list/listkeys are not implemented in IPGroup.
I solved this by creating a new Bicep file and referencing it as a module to get the existing IP addresses. I then concatenated the new address with the existing IP addresses.
**Get existing IP addresses module**
```
param ipGroupName string

resource existingIpGroup 'Microsoft.Network/ipGroups@2020-08-01' existing = {
  name: ipGroupName
}

output existingIpAddresses array = existingIpGroup.properties.ipAddresses
```
**main file**
```
module extractipgroup 'extractipgroup.bicep' = {
  name: 'extractipgroup'
  params: {
    ipGroupName: ipGroups_all_spokes_subnets_name
  }
}

var ipGroupAddresses = concat([
  '10.6.6.0/27'
], extractipgroup.outputs.existingIpAddresses)

resource ipGroup 'Microsoft.Network/ipGroups@2020-05-01' = {
  name: ipGroups_all_spokes_subnets_name
  location: location
  properties: {
    ipAddresses: ipGroupAddresses
  }
}
```
Thanks to @notfound for directing me down the right path.
|
I have one dataset in R where I have data about the date of hospital admission and date of death.
For instance, let's take this code:
```
set.seed(1)
tep <- data.frame(Date_of_birth= sample(c("11-12-1987", "11-10-1999", "19-01-1977", "20-12-1950"), 20, T),
Hospital_admission= sample(c("11-02-2019", "11-03-2019", "10-02-2019", "11-03-2019", "10-03-2019"), 20, T),
Death_date= sample(c("10-03-2019", "10-06-2019", "12-01-2020", "05-03-2019", "01-02-2020"), 20, T))
```
I have to classify the deaths over three different timelines, as follows:
1. I need to identify people who died within 30 days, and I have to make a variable with values 1 and 0: 1 = those who died within 30 days, 0 = those who did not.
2. Everything is similar; here I have to identify those who died within 6 months of hospital admission.
3. Again, everything is similar; here I have to identify those who died within one year of hospital admission.
Later, I have to make a table for all three of these together, with percentages.
Lastly, I need to use the gender variable to make a table with these three.
Can anyone help with this?
|
Try it like this:
WITH RECURSIVE EmployeeHierarchy AS (
( SELECT DISTINCT
ee.id,
generated_dates::date AS date,
ud.title AS designation_name,
CONCAT(ou.first_name, ' ', ou.last_name) AS name,
epp.employee_image,
CASE
WHEN extract(dow FROM date(dates)) IN (0, 6) THEN 'W'
WHEN ua.created_at IS NULL AND ele.start_date IS NOT NULL AND
(generated_dates::date BETWEEN ele.start_date::date AND ele.end_date::date) AND
ele.status = 'approved' AND elt.is_half_leave = false THEN 'L'
WHEN ua.created_at IS NOT NULL AND ele.start_date IS NOT NULL AND
(generated_dates::date BETWEEN ele.start_date::date AND ele.end_date::date) AND
ele.status = 'approved' AND elt.is_half_leave = false THEN 'L'
WHEN ele.start_date IS NOT NULL AND ua.created_at IS NOT NULL AND ele.status = 'approved' AND
elt.is_half_leave = true THEN 'PL'
WHEN ele.start_date IS NOT NULL AND ua.created_at IS NULL AND ele.status = 'approved' AND
elt.is_half_leave = true THEN 'AL'
WHEN ua.created_at IS NULL THEN 'A'
WHEN ua.created_at IS NOT NULL THEN 'P'
ELSE 'A'
END AS status,
0 AS level
FROM generate_series(
DATE_TRUNC('MONTH', CURRENT_DATE)::DATE,
CURRENT_DATE,
'1 day'
) AS generated_dates(dates)
JOIN employee_employee ee ON true = true
LEFT JOIN enterprise_designation ud ON ud.id = ee.designation_id
LEFT JOIN em_attendances ua ON ua.employee_id = ee.id AND ua.created_at::date = generated_dates::date
LEFT JOIN em_leaves ele ON ele.employee_id = ee.id AND (ele.start_date::date <= generated_dates::date AND ele.end_date::date >= generated_dates::date) AND ele.status = 'approved'
LEFT JOIN em_leavetypes elt ON elt.id = ele.leave_type_id
LEFT JOIN oms_user ou ON ou.id = ee.user_id
LEFT JOIN employee_profile_profile epp ON ee.id = epp.employee_id
LEFT JOIN oms_user_roles ous ON ee.user_id = ous.user_id
LEFT JOIN oms_role omr ON omr.id = ous.role_id
WHERE ee.id = 'f2c1f939-a9d6-49b4-a880-2bf6b6f4b3e2' AND ee.is_active = true AND ee.is_deleted = false
ORDER BY ee.id)
UNION ALL
(SELECT DISTINCT
employee.id,
generated_dates::date AS date,
ud.title AS designation_name,
CONCAT(ou.first_name, ' ', ou.last_name) AS name,
epp.employee_image,
CASE
WHEN extract(dow FROM date(dates)) IN (0, 6) THEN 'W'
WHEN ua.created_at IS NULL AND ele.start_date IS NOT NULL AND
(generated_dates::date BETWEEN ele.start_date::date AND ele.end_date::date) AND
ele.status = 'approved' AND elt.is_half_leave = false THEN 'L'
WHEN ua.created_at IS NOT NULL AND ele.start_date IS NOT NULL AND
(generated_dates::date BETWEEN ele.start_date::date AND ele.end_date::date) AND
ele.status = 'approved' AND elt.is_half_leave = false THEN 'L'
WHEN ele.start_date IS NOT NULL AND ua.created_at IS NOT NULL AND ele.status = 'approved' AND
elt.is_half_leave = true THEN 'PL'
WHEN ele.start_date IS NOT NULL AND ua.created_at IS NULL AND ele.status = 'approved' AND
elt.is_half_leave = true THEN 'AL'
WHEN ua.created_at IS NULL THEN 'A'
WHEN ua.created_at IS NOT NULL THEN 'P'
ELSE 'A'
END AS status,
0 AS level
FROM generate_series(
DATE_TRUNC('MONTH', CURRENT_DATE)::DATE,
CURRENT_DATE,
'1 day'
) AS generated_dates(dates)
JOIN employee_employee employee ON true = true
LEFT JOIN enterprise_designation ud ON ud.id = employee.designation_id
LEFT JOIN em_attendances ua ON ua.employee_id = employee.id AND ua.created_at::date = generated_dates::date
LEFT JOIN em_leaves ele ON ele.employee_id = employee.id AND (ele.start_date::date <= generated_dates::date AND ele.end_date::date >= generated_dates::date) AND ele.status = 'approved'
LEFT JOIN em_leavetypes elt ON elt.id = ele.leave_type_id
LEFT JOIN oms_user ou ON ou.id = employee.user_id
LEFT JOIN employee_profile_profile epp ON employee.id = epp.employee_id
LEFT JOIN oms_user_roles ous ON employee.user_id = ous.user_id
LEFT JOIN oms_role omr ON omr.id = ous.role_id
JOIN EmployeeHierarchy eh ON employee.supervisor_id = eh.id)
)
SELECT
jsonb_build_object(
'id', employee_info.id,
'name', employee_info.name,
'designation_name', employee_info.designation_name,
'employee_image', employee_info.employee_image,
'attendance_list', jsonb_agg(
jsonb_build_object(
'status', employee_info.status,
'date', employee_info.date::text
) ORDER BY employee_info.date
)
) AS employee_info
FROM (
SELECT
eh.id,
eh.name,
eh.designation_name,
eh.employee_image,
eh.date,
eh.status
FROM EmployeeHierarchy eh
ORDER BY eh.id
) AS employee_info
GROUP BY employee_info.id, employee_info.name, employee_info.designation_name, employee_info.employee_image; |
I modified the retention policy in `/etc/barman.d/<server>.conf` but it doesn't change the number of backup copies.
How can I reload the barman config file without restarting the barman process?
How to reload barman config file |
|barman| |
On the right of your screenshot, all templates are shown because the "All Applicable File Templates" menu is opened. To open the quick list, please right-click on the project name, then select Add | New from Template.
[![New from Template][1]][1]
[1]: https://i.stack.imgur.com/4NwDs.png |
I have this code
```
public class DoubleSensor: ObservableObject {
private var cancellables: Set<AnyCancellable> = Set<AnyCancellable>()
private(set) var sensor: any VEntity
@Published public var state: doubleState = DefaultState.forDouble()
public init(withSensor sensor: any VEntity) {
self.sensor = sensor
}
private func updateData(fromState newState: doubleState) async {
self.state = newState
}
}
public final class TemperatureSensor: DoubleSensor {
public var measurement: Measurement<UnitTemperature> {
return Measurement(value: state.value, unit: .celsius)
}
public override init(withSensor sensor: any VEntity) {
super.init(withSensor: sensor)
}
}
public final class PercentageSensor: DoubleSensor {
public var measurement: Measurement<UnitPercentage> {
return Measurement(value: state.value, unit: UnitPercentage.percentage)
}
public override init(withSensor sensor: any VEntity) {
super.init(withSensor: sensor)
}
}
public final class EnergySensor: DoubleSensor {
public var measurement: Measurement<UnitEnergy> {
return Measurement(value: state.value, unit: .wattHours)
}
public override init(withSensor sensor: any VEntity) {
super.init(withSensor: sensor)
}
}
```
As you can see, I have 3 classes inheriting from the `DoubleSensor` class (a reduced version of it is shown above). All of the inheriting classes have a measurement variable specific to a measurement unit.
How can I create a single generic class instead of specific ones?
I can create something like
```
public final class MeasurementSensor<T: Dimension>: DoubleSensor {
public var measurement: Measurement<T> {
return Measurement(value: state.value, unit: ???)
}
public override init(withSensor sensor: any VEntity) {
super.init(withSensor: sensor)
}
}
```
but I don't know how to set the unit value |
The error `The type or namespace name ‘Tensor’ could not be found (are you missing a using directive or an assembly reference?)` means that you are missing a nuget package or assembly reference.
Unfortunately I cannot determine from your question alone which nuget package is missing, but you might try adding a nuget package reference to:
https://www.nuget.org/packages/TensorFlowLite.iOS
However, I can only find iOS versions of this library on nuget.org and you may want to avoid this package unless you know for certain you will only need to support iOS (seems unusual.) This package also does not list any dependencies which I find suspect, but, I'm not familiar enough with it to know if that is true or not. If it has dependencies and they are not listed you will have the struggle of figuring out which dependencies need to be added yourself.
Not sure if `Tensorflow.NET` nuget package would be usable from Unity, or compatible with the model you are trying to use, I lack experience with these packages to know.
The error `‘Interpreter.GetOutputTensor(int)’ is inaccessible due to its protection level` may be a symptom of incorrect or missing packages being used. When I look at the current reference docs for tensorflow lite this particular method is decorated with a `public` accessor, and should not be giving this error. |
|java|java-10| |
Another possibility would be to take the list of columns that you want to change and build a dictionary from it, with the items in upper case:

```
l = ['Col 2']
L = [x.upper() for x in l]
df.rename(columns=dict(zip(l, L)))
```
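To see just the dictionary-building step in isolation, here is a tiny standalone sketch (the second column name is a hypothetical example; only `'Col 2'` comes from the question):

```python
# Build a rename mapping from original names to upper-cased names.
cols = ['Col 2', 'other col']  # 'other col' is a made-up extra column
mapping = dict(zip(cols, [c.upper() for c in cols]))
# mapping now maps each original name to its upper-cased form,
# ready to be passed to df.rename(columns=mapping)
```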
I'm new to coding and want to write code in Wolfram Mathematica.
What I want to do is decompose a perfect number into fractions for each digit.
Example for the 2nd perfect number:
I want 28 to be written as 20/100 and 8/100. I want to do this for the bigger perfect numbers as well, so I need general code.
Next I want to find the number of common divisors of 20 and 100 (which is 6) and the number of common divisors of 8 and 100 (which is 3).
This seems simple, but for bigger numbers I get discrepancies between the Wolfram Alpha calculations and the Wolfram Mathematica calculations.
I can't really find the problem but the list I found was:
1st (x) 3
2nd (y) 6 3
3rd (z) 12 4 2
4th (g) 16 9 6 4
5th (t) 64 49 42 30 0 9 4 2
6th (r) 110 90 88 49 54 30 16 0 6 2
but I get a different output in Mathematica. It seems that Mathematica doubles some numbers and sometimes it just adds a few (for the 3rd number I get 12, 4, and 2, but Mathematica gives me 15, 12, and 4).
The code I have made up until now is:
```
(* Define the perfect number you want to analyze *)
perfectNumber = PerfectNumber[6];

(* Convert the perfect number to a list of its digits *)
digits = IntegerDigits[perfectNumber];

(* Determine the number of digits in the perfect number *)
numDigits = Length[digits];

(* Position the digits based on their place in the perfect number *)
positionedDigits = MapIndexed[#1 10^(numDigits - #2[[1]]) &, digits];

(* Divide each positioned digit by the number of digits in the perfect number *)
dividedDigits = Map[#/10^numDigits &, positionedDigits];

(* Determine the divisors of each positioned digit *)
numeratorDivisors = Map[Divisors, positionedDigits];

(* Determine the number of divisors of each positioned digit *)
numDivisors = Map[Length, numeratorDivisors];
```
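As a cross-check of the counts I expect for 28 (this is my own sketch, not Mathematica output; it assumes "common divisors of a and b" means the divisors shared by both, which are exactly the divisors of gcd(a, b)):

```python
from math import gcd

def common_divisor_count(a, b):
    # Divisors common to a and b are exactly the divisors of gcd(a, b).
    g = gcd(a, b)
    return sum(1 for d in range(1, g + 1) if g % d == 0)

c20 = common_divisor_count(20, 100)  # divisors common to 20 and 100
c8 = common_divisor_count(8, 100)    # divisors common to 8 and 100
```

For 28 decomposed as 20/100 and 8/100, this gives 6 and 3, matching the values stated above.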
|
Actually, you can achieve the same result with some basic math:
```python
from pyspark.sql import functions as F
df.withColumn("rand", F.rand() * (F.col("max") - F.col("min")) + F.col("min"))
```
The new column will be a float, but you can either truncate or round it depending on your use case.
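The same scaling idea in plain Python, for intuition (the bounds here are example values standing in for the `min`/`max` columns):

```python
import random

def rand_between(lo, hi):
    # Scale a uniform [0, 1) draw into [lo, hi), mirroring
    # the F.rand() * (max - min) + min expression above.
    return random.random() * (hi - lo) + lo

samples = [rand_between(10, 20) for _ in range(1000)]
```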
_______________
If you want to use the random package, you need a UDF. You almost did it. I just fixed your code:
```python
import random
from pyspark.sql import functions as F, types as T
randoUDF = F.udf(random.randrange, T.IntegerType())
df.withColumn("rand", randoUDF(F.col("min"), F.col("max"))).display()
``` |
Is there any chance of data leakage when splitting a dataset this way?

```
def split_dataset(ds, train_ratio=0.8, val_ratio=0.1, test_ratio=0.1, shuffle=True):
    # Get dataset size
    dataset_size = len(ds)

    # Calculate split sizes
    train_size = int(train_ratio * dataset_size)
    val_size = int(val_ratio * dataset_size)
    test_size = dataset_size - train_size - val_size

    # Shuffle dataset if required
    if shuffle:
        ds = ds.shuffle(dataset_size)

    # Split dataset
    train_dataset = ds.take(train_size)
    val_dataset = ds.skip(train_size).take(val_size)
    test_dataset = ds.skip(train_size + val_size).take(test_size)

    return train_dataset, val_dataset, test_dataset
```

I am training a deep learning model and am just surprised by the model's accuracy: it's over 98%.
On Linux you can set [`PR_SET_CHILD_SUBREAPER`](https://man7.org/linux/man-pages/man2/prctl.2.html) so that when descendent processes are orphaned they get reparented to the current process.
But other than polling, how do I know when that happens? Is a signal sent? |
Detect when a new orphan process is reparented to the current process on Linux |
|linux|signals|orphan| |
|sql-server|ssms| |
In some legacy code I have seen the following extension method to facilitate adding a new key-value item or updating an existing value:
Method-1 (legacy code):
```
public static void CreateNewOrUpdateExisting<TKey, TValue>(
    this IDictionary<TKey, TValue> map, TKey key, TValue value)
{
    if (map.ContainsKey(key))
    {
        map[key] = value;
    }
    else
    {
        map.Add(key, value);
    }
}
```
However, I have checked that `map[key] = value` does exactly the same job. That is, Method-1 could be replaced with Method-2 below.
Method-2:
```
public static void CreateNewOrUpdateExisting<TKey, TValue>(
    this IDictionary<TKey, TValue> map, TKey key, TValue value)
{
    map[key] = value;
}
```
Now, my question is...
Could there be any problem if I replace Method-1 by Method-2?
Will it break in any possible scenario?
Also, I think this used to be the difference between Hashtable and Dictionary: Hashtable allows updating an item, or adding a new item, by using the indexer, while Dictionary does not!
Has this difference been eliminated in C# versions later than 3.0?
The objective of this method is too not throw an exception if the user sends the same key-value again. The method should, if the key is:
- **found — update** the key's value, and
- **not found — create a new** key-value.
|
Generic measurement variable in Swift |
|swift|generics| |
null |
Here is one answer that I just tried, which worked, though I am not sure it is the best way to do this:
I did what I described in the "Update" to my question, namely `cd /usr/local/opt/libgit2/lib/` followed by `ln -s libgit2.1.7.2.dylib libgit2.1.6.dylib`. Now `ls` is working again.
But is that the right way to fix this issue? It feels a bit like a hack that might cause other problems. |
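As a sketch of the mechanics, the same symlink trick can be tried with throwaway files in a scratch directory (the file names mirror the real ones above, but the real fix of course targets `/usr/local/opt/libgit2/lib/`):

```shell
# Recreate the fix in a temp directory instead of the real Homebrew path.
tmpdir=$(mktemp -d)
cd "$tmpdir"
touch libgit2.1.7.2.dylib                    # stands in for the installed library
ln -s libgit2.1.7.2.dylib libgit2.1.6.dylib  # expose it under the old name
readlink libgit2.1.6.dylib
```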
```
Error: Could not find the correct Provider above this...
This happens because you used a BuildContext that does not include the provider of your choice. There are a few common scenarios:
You added a new provider in your main.dart and performed a hot-reload. To fix, perform a hot-restart.
The provider you are trying to read is in a different route.
```
**You get this error because the provider was not defined before use, or because the provider is used improperly.**
There are two ways to solve it.
**First**
You must use **ChangeNotifierProvider, Provider, or something similar** above every stateful/stateless widget in which you use the provider.
```
@override
Widget build(BuildContext context) {
final size = MediaQuery.of(context).size;
return ChangeNotifierProvider(
create: (context) => PrescriptionProvider(),
builder: (context, child) {
return YourWidget();
});
}
```
But this is not good when you have multiple providers.
We can use [MultiProvider][1] for multiple providers, or even a single one.
Advantages of [MultiProvider][1] over the previous (first) way:
1. It needs to be declared only once
2. All providers are in one list
3. No need to use ChangeNotifierProvider with every widget that uses a provider
4. It works for a single provider as well as multiple providers
**How to use [MultiProvider][1]?**
```
void main() async {
WidgetsFlutterBinding.ensureInitialized();
await sheredPref.initPref();
runApp(
MultiProvider(
providers: [
ChangeNotifierProvider(
create: (_) => Adverter(),
),
... // other provider
],
child: const MyApp(), //. as usual
),
);
}
class MyNewWidget extends StatelessWidget {
@override
Widget build(BuildContext context) {
// no need ChangeNotifierProvider after once you were declare in providers list
final provider = context.watch<Adverter>(); // hope you know read vs watch in provider
return Column(
children: [],
);
}
}
```
**Hope this will help you**
[1]: https://pub.dev/documentation/provider/latest/provider/MultiProvider-class.html |
How about using `Dictionary<string, object> dic = new Dictionary<string, object>();`?

```
Generic<object> g1 = new Generic<object>();
g1.Data = r1;

Generic<object> g2 = new Generic<object>();
g2.Data = r2;

dic.Add("A", g1);
dic.Add("B", g2);
```
I need to control the opening and closing of a SliderDrawer in a Flutter web application.
I am trying to control the opening and closing of a SliderDrawer in a Flutter web application. If you can help me with this, please leave your answer here.
How to control the opening and closing of a SliderDrawer
|flutter|dart|web| |
This is the error I'm getting:
```none
2024-03-15T10:05:58.263-04:00 INFO 12408 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 1828 ms
2024-03-15T10:05:58.356-04:00 ERROR 12408 --- [ main] com.zaxxer.hikari.HikariConfig : Failed to load driver class oracle.jdbc.OracleDriver from HikariConfig class classloader jdk.internal.loader.ClassLoaders$AppClassLoader@76ed5528
2024-03-15T10:05:58.361-04:00 WARN 12408 --- [ main] ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'dataSourceScriptDatabaseInitializer' defined in class path resource [org/springframework/boot/autoconfigure/sql/init/DataSourceInitializationConfiguration.class]: Unsatisfied dependency expressed through method 'dataSourceScriptDatabaseInitializer' parameter 0: Error creating bean with name 'dataSource' defined in class path resource [org/springframework/boot/autoconfigure/jdbc/DataSourceConfiguration$Hikari.class]: Failed to instantiate [com.zaxxer.hikari.HikariDataSource]: Factory method 'dataSource' threw exception with message: Failed to load driver class oracle.jdbc.OracleDriver in either of HikariConfig class loader or Thread context classloader
2024-03-15T10:05:58.366-04:00 INFO 12408 --- [ main] o.apache.catalina.core.StandardService : Stopping service [Tomcat]
2024-03-15T10:05:58.382-04:00 INFO 12408 --- [ main] .s.b.a.l.ConditionEvaluationReportLogger :
Error starting ApplicationContext. To display the condition evaluation report re-run your application with 'debug' enabled.
2024-03-15T10:05:58.411-04:00 ERROR 12408 --- [ main] o.s.boot.SpringApplication : Application run failed
org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'dataSourceScriptDatabaseInitializer' defined in class path resource [org/springframework/boot/autoconfigure/sql/init/DataSourceInitializationConfiguration.class]: Unsatisfied dependency expressed through method 'dataSourceScriptDatabaseInitializer' parameter 0: Error creating bean with name 'dataSource' defined in class path resource [org/springframework/boot/autoconfigure/jdbc/DataSourceConfiguration$Hikari.class]: Failed to instantiate [com.zaxxer.hikari.HikariDataSource]: Factory method 'dataSource' threw exception with message: Failed to load driver class oracle.jdbc.OracleDriver in either of HikariConfig class loader or Thread context classloader
at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:798) ~[spring-beans-6.1.4.jar:6.1.4]
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:542) ~[spring-beans-6.1.4.jar:6.1.4]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1335) ~[spring-beans-6.1.4.jar:6.1.4]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1165) ~[spring-beans-6.1.4.jar:6.1.4]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:562) ~[spring-beans-6.1.4.jar:6.1.4]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:522) ~[spring-beans-6.1.4.jar:6.1.4]
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:325) ~[spring-beans-6.1.4.jar:6.1.4]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-6.1.4.jar:6.1.4]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:323) ~[spring-beans-6.1.4.jar:6.1.4]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199) ~[spring-beans-6.1.4.jar:6.1.4]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:312) ~[spring-beans-6.1.4.jar:6.1.4]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199) ~[spring-beans-6.1.4.jar:6.1.4]
at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1231) ~[spring-context-6.1.4.jar:6.1.4]
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:949) ~[spring-context-6.1.4.jar:6.1.4]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:624) ~[spring-context-6.1.4.jar:6.1.4]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:146) ~[spring-boot-3.2.3.jar:3.2.3]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:754) ~[spring-boot-3.2.3.jar:3.2.3]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:456) ~[spring-boot-3.2.3.jar:3.2.3]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:334) ~[spring-boot-3.2.3.jar:3.2.3]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1354) ~[spring-boot-3.2.3.jar:3.2.3]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1343) ~[spring-boot-3.2.3.jar:3.2.3]
at com.example.socrates.SocratesApplication.main(SocratesApplication.java:9) ~[classes/:na]
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'dataSource' defined in class path resource [org/springframework/boot/autoconfigure/jdbc/DataSourceConfiguration$Hikari.class]: Failed to instantiate [com.zaxxer.hikari.HikariDataSource]: Factory method 'dataSource' threw exception with message: Failed to load driver class oracle.jdbc.OracleDriver in either of HikariConfig class loader or Thread context classloader
at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:651) ~[spring-beans-6.1.4.jar:6.1.4]
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:639) ~[spring-beans-6.1.4.jar:6.1.4]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1335) ~[spring-beans-6.1.4.jar:6.1.4]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1165) ~[spring-beans-6.1.4.jar:6.1.4]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:562) ~[spring-beans-6.1.4.jar:6.1.4]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:522) ~[spring-beans-6.1.4.jar:6.1.4]
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:325) ~[spring-beans-6.1.4.jar:6.1.4]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-6.1.4.jar:6.1.4]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:323) ~[spring-beans-6.1.4.jar:6.1.4]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199) ~[spring-beans-6.1.4.jar:6.1.4]
at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:254) ~[spring-beans-6.1.4.jar:6.1.4]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1443) ~[spring-beans-6.1.4.jar:6.1.4]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1353) ~[spring-beans-6.1.4.jar:6.1.4]
at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:907) ~[spring-beans-6.1.4.jar:6.1.4]
at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:785) ~[spring-beans-6.1.4.jar:6.1.4]
... 21 common frames omitted
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.zaxxer.hikari.HikariDataSource]: Factory method 'dataSource' threw exception with message: Failed to load driver class oracle.jdbc.OracleDriver in either of HikariConfig class loader or Thread context classloader
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:177) ~[spring-beans-6.1.4.jar:6.1.4]
at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:647) ~[spring-beans-6.1.4.jar:6.1.4]
... 35 common frames omitted
Caused by: java.lang.RuntimeException: Failed to load driver class oracle.jdbc.OracleDriver in either of HikariConfig class loader or Thread context classloader
at com.zaxxer.hikari.HikariConfig.setDriverClassName(HikariConfig.java:488) ~[HikariCP-5.0.1.jar:na]
at org.springframework.boot.jdbc.DataSourceBuilder$MappedDataSourceProperty.set(DataSourceBuilder.java:479) ~[spring-boot-3.2.3.jar:3.2.3]
at org.springframework.boot.jdbc.DataSourceBuilder$MappedDataSourceProperties.set(DataSourceBuilder.java:373) ~[spring-boot-3.2.3.jar:3.2.3]
at org.springframework.boot.jdbc.DataSourceBuilder.build(DataSourceBuilder.java:183) ~[spring-boot-3.2.3.jar:3.2.3]
at org.springframework.boot.autoconfigure.jdbc.DataSourceConfiguration.createDataSource(DataSourceConfiguration.java:59) ~[spring-boot-autoconfigure-3.2.3.jar:3.2.3]
at org.springframework.boot.autoconfigure.jdbc.DataSourceConfiguration$Hikari.dataSource(DataSourceConfiguration.java:117) ~[spring-boot-autoconfigure-3.2.3.jar:3.2.3]
at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:580) ~[na:na]
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:140) ~[spring-beans-6.1.4.jar:6.1.4]
... 36 common frames omitted
Process finished with exit code 1
```
Here is my pom.xml:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>3.2.3</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>
<groupId>com.example</groupId>
<artifactId>Socrates</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>Socrates</name>
<description>Socrates</description>
<properties>
<java.version>21</java.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</project>
```
Maybe it has to do with my version of Spring? This is my first time really working with it so I don't have a lot of background knowledge to go off of.
```ini
spring.datasource.url=jdbc:oracle:thin:@socratesdb_high?TNS_ADMIN=C:/Oracle/Wallet_SocratesDB
spring.datasource.username=ENS
spring.datasource.password=@wwnfK2&x#VpnPY7
spring.jpa.hibernate.ddl-auto=update
```
I've been trying to change and add things in my application.properties file, but it doesn't seem to help.
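For reference, the stack trace complains that `oracle.jdbc.OracleDriver` cannot be loaded, and my pom.xml above declares no JDBC driver at all. The kind of dependency I suspect is missing would look something like this (coordinates assumed from Oracle's Maven Central publishing; the version is just an example):

```xml
<!-- Oracle JDBC driver (coordinates and version assumed, not verified against my setup) -->
<dependency>
    <groupId>com.oracle.database.jdbc</groupId>
    <artifactId>ojdbc11</artifactId>
    <version>23.3.0.23.09</version>
</dependency>
```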
I'm trying to access an Impala DB via SQLAlchemy - I have configured a DSN that allows me to connect to the DB when using directly pyodbc.
However when using SQLAlchemy I get an error:
When using a db called datamart_x in the DSN:
pyodbc.Error: ('HY000', '[HY000] [Cloudera][ImpalaODBC] (370) Query analysis error occurred during query execution: [HY000] : AnalysisException: **datamart_x.schema_name()** unknown for database datamart_x. Currently this db has 0 functions.\n (370) (SQLExecDirectW)')
sqlalchemy.exc.DBAPIError: (pyodbc.Error) ('HY000', '[HY000] [Cloudera][ImpalaODBC] (370) Query analysis error occurred during query execution: [HY000] : AnalysisException: datamart_x.schema_name() unknown for database datamart_x. Currently this db has 0 functions.\n (370) (SQLExecDirectW)')
**[SQL: SELECT schema_name()]**
It is not a permission issue, as I can connect directly to the DB with pyodbc using the same DSN.
I suspect the issue is the SQL statement `SELECT schema_name()` that is executed when the SQLAlchemy engine is accessed (e.g. in my case with `pandas.read_sql`).
Any ideas if there are connection parameters to get this to work?
Below is the code I use to create the SQLAlchemy engine:
```
connection_string = 'mssql+pyodbc://DataLake'
SQL = 'SHOW DATABASES'
args = {'autocommit': True}
engine = create_engine(connection_string, connect_args=args)
df = pd.read_sql(SQL, engine)
```
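As a sanity check on where `SELECT schema_name()` comes from: SQLAlchemy picks the dialect from the URL scheme, not from whatever the DSN actually points at, so `mssql+pyodbc://` makes it treat the server as SQL Server. A small sketch (no connection is made):

```python
from sqlalchemy.engine.url import make_url

# The part before '://' selects the dialect; the DSN name after it does not.
url = make_url("mssql+pyodbc://DataLake")
print(url.get_backend_name())  # mssql
print(url.get_driver_name())   # pyodbc
```

If that is the cause, pointing pandas at a plain pyodbc connection (which already works for me) or at an Impala-aware dialect (impyla appears to ship an `impala://` one, as far as I can tell) would avoid the SQL Server-specific probe.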
Kind Regards,
Ernst
I have tried different kinds of connection strings.
Issue with SQLAlchemy accessing Impala database via cloudera ODBC DSN |
|sqlalchemy|odbc|pyodbc|cloudera|impala| |
null |
I just had to restart my machine and it started working |
|python|sql-server|sqlalchemy| |
I am attempting to use Apache SSHD to start local port forwarding and then read data from the local port in Java.
I can do this without Apache SSHD by using OpenSSH to start local port forwarding and then just create and read from a Socket in Java. This works as expected.
When I try to replicate this with Apache SSHD instead of using OpenSSH, I get no errors but cannot read from the Socket. Why can't I read from the Socket in the same way I can when using OpenSSH? Am I missing something with Apache SSHD?
Example using OpenSSH:
```
ssh -i keyfile -l username -L 12345:127.0.0.1:20130 sshServerAddress
```
Then leave that connection open and run some simple Socket reading code in Java:
```
Socket socket = new Socket("127.0.0.1", 12345);
socket.setSoTimeout(0);
try (BufferedReader reader = new BufferedReader(new InputStreamReader(new BufferedInputStream(socket.getInputStream())))) {
String line = reader.readLine();
System.out.println(line);
}
```
When I do this, it prints the expected data being delivered over the tunnel.
I then tried to replicate this in Apache SSHD, and the connection completed without error, but when I try to read from the socket, it just returns null. Does anyone have any clues as to what is missing?
```
SshClient sshClient = SshClient.setUpDefaultClient();
sshClient.setForwardingFilter(AcceptAllForwardingFilter.INSTANCE);
sshClient.start();
ClientSession clientSession = sshClient.connect(username, sshServerAddress, 22).verify(10000, TimeUnit.MILLISECONDS).getSession();
FileKeyPairProvider keyPairProvider = new FileKeyPairProvider(keyfile.toPath());
keyPairProvider.setPasswordFinder(FilePasswordProvider.of(passphrase));
clientSession.setKeyIdentityProvider(keyPairProvider);
clientSession.setUsername(username); // is this redundant?
clientSession.auth().verify(10000, TimeUnit.MILLISECONDS);
SshdSocketAddress sshdSocketAddress = clientSession.startLocalPortForwarding(12345, new SshdSocketAddress("127.0.0.1", 20130));
Socket socket = new Socket("127.0.0.1", 12345);
socket.setSoTimeout(0);
try (BufferedReader reader = new BufferedReader(new InputStreamReader(new BufferedInputStream(socket.getInputStream())))) {
String line = reader.readLine();
System.out.println(line);
}
```
So Apache SSHD didn't throw any errors. It authenticated and executed the port forwarding code. But why can't I then read from the local port?
For reference, I ran this test using org.apache.sshd sshd-core and sshd-common artifacts version 2.7.0.
UPDATE:
I discovered that if I also open a shell channel, I can see the same output I would see if I were instead starting the port forwarding with OpenSSH, and I can also then read data on the forwarded port. But I still need to determine how to set this up properly. Example (I put this code prior to starting local port forwarding):
```
ChannelShell cshell = clientSession.createShellChannel();
cshell.setOut(new NoCloseOutputStream(System.out));
cshell.setErr(new NoCloseOutputStream(System.err));
cshell.open().verify(10000, TimeUnit.MILLISECONDS);
```
It's possible the server wouldn't actually fully initialize until I had accepted its output. Now I need to figure out how to implement this properly. I don't really care about the shell; I really just want to ignore it and consume data from the local port that the forwarding was set up for.
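Since SSHD writes the shell's output to whatever stream is passed to `setOut`, the simplest consume-and-ignore option may be `cshell.setOut(OutputStream.nullOutputStream())` (JDK 11+). If I instead end up reading the channel's streams myself, a small background drainer would do the same job; a plain-JDK sketch (`ShellDrainer` is just an illustrative name, not an SSHD class):

```java
import java.io.*;

public class ShellDrainer {
    // Consume and discard everything from a stream on a daemon thread,
    // so the remote shell's output is accepted without printing it.
    public static Thread drainInBackground(InputStream in) {
        Thread t = new Thread(() -> {
            try {
                in.transferTo(OutputStream.nullOutputStream());
            } catch (IOException ignored) {
                // stream/channel closed: nothing left to drain
            }
        });
        t.setDaemon(true);
        t.start();
        return t;
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for the shell's output stream:
        ByteArrayInputStream fake = new ByteArrayInputStream("remote shell banner".getBytes());
        drainInBackground(fake).join();
        System.out.println("bytes left: " + fake.available()); // prints "bytes left: 0"
    }
}
```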
|
I'm trying to deploy Django with the dist folder generated by the `npm run build` command. I tested the site with the serve npm package and it works just fine, but when I copy that folder into the Django project directory and run `python manage.py runserver`, the page renders blank and the browser console shows these errors:
`The resource from “http://localhost:8000/assets/index-422b98df.css” was blocked due to MIME type (“text/html”) mismatch (X-Content-Type-Options: nosniff).`
and
`Loading module from “http://localhost:8000/assets/index-4435c3df.js” was blocked because of a disallowed MIME type (“text/html”).`
I just wrote the simplest view to render the index.html:

```
def index(request):
    return render(request, 'dist/index.html')
```
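From the errors, it looks like Django is answering the `/assets/...` requests with an HTML page (hence the text/html MIME type) instead of the built JS/CSS files. A sketch of the static-files settings I believe would be needed, assuming `dist/` is copied into the project base directory (paths are illustrative):

```python
from pathlib import Path

# settings.py fragment (sketch) -- BASE_DIR is Django's usual project root
BASE_DIR = Path(__file__).resolve().parent.parent

# Serve Vite's hashed bundles at the URL the built index.html references
STATIC_URL = '/assets/'
STATICFILES_DIRS = [BASE_DIR / 'dist' / 'assets']
```

With `django.contrib.staticfiles` enabled and `DEBUG = True`, the dev server should then serve `dist/assets/index-*.js` and `.css` under `/assets/` with the correct MIME types.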
Any help will be useful, thanks in advance. |
Deploying Django and Vue doesn't load the site
|django|vue.js|deployment| |
We are migrating from Azure ML Python SDK V1 to V2.
I have a requirement to train a model, containerize it using a custom Dockerfile, and push it to a container registry.
Previously, with the V1 SDK, I used the docker-in-docker build option to achieve this. I created a RunConfiguration(), exposed docker.sock inside the container by setting the volume-mount arguments ```["-v", "/var/run/docker.sock:/var/run/docker.sock"]``` in ```docker.arguments```, and ran the docker build command inside the container.
But it looks like with the V2 SDK, RunConfiguration() has been removed and the command() component has been introduced.
This command component has a docker_args parameter for passing extra arguments to the ```docker run``` command.
But when I set the same volume mount in this parameter, I get the error:
create /var/run/docker.sock: \" /var/run/docker.sock\" includes invalid characters for a local volume name, only \"[a-zA-Z0-9][a-zA-Z0-9_.-]\" are allowed. If you intended to pass a host directory, use absolute path"}. Container using custom Docker arguments failed to start. Try reviewing and adjusting the Docker arguments.
There is also a similar bug that has been open for some time: https://github.com/Azure/azure-sdk-for-python/issues/30466
This got me wondering what the best or recommended way is to build container images from Azure ML pipelines.
Any input and ideas on how others are handling such requirements would be greatly appreciated.
**Sample Code:**
The following is a sample model-packaging command component.
```python
model_package_cmd = command(
name="Package model",
display_name="Package model",
description="Package model as container image",
code="./src",
command="./build/docker-build.sh ${{inputs.model_path}}",
compute="training-cluster",
environment=get_package_runtime_env(),
environment_variables=get_runtime_env_vars(),
docker_args="-v /var/run/docker.sock:/var/run/docker.sock",
is_deterministic=False,
inputs={
"model_path": Input(type=AssetTypes.URI_FOLDER),
"test_reports": Input(type=AssetTypes.URI_FOLDER),
},
)
```
When the pipeline runs for this model packaging node, it throws up the error:
Failed to start Docker container 132f7af967554b3188faed6a1b791434-execution-wrapper due to: API queried with a bad parameter: {"message":"create /var/run/docker.sock: \" /var/run/docker.sock\" includes invalid characters for a local volume name, only \"[a-zA-Z0-9][a-zA-Z0-9_.-]\" are allowed. If you intended to pass a host directory, use absolute path"}.
Container using custom Docker arguments failed to start. Try reviewing and adjusting the Docker arguments.
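One detail that stands out in the error: the rejected volume name is quoted as `" /var/run/docker.sock"` with a leading space, which suggests (my reading, not confirmed) that the service splits `docker_args` on something other than whitespace, leaving the space attached to the token. A small illustration of the difference:

```python
docker_args = "-v /var/run/docker.sock:/var/run/docker.sock"

# A plain whitespace split yields clean tokens:
print(docker_args.split())
# ['-v', '/var/run/docker.sock:/var/run/docker.sock']

# Splitting on the flag itself keeps the leading space, matching the
# " /var/run/docker.sock" quoted in the error message:
print(docker_args.split("-v", 1)[1])
# ' /var/run/docker.sock:/var/run/docker.sock'
```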
|
null |
How can I prevent Google Sheets from updating NOW() on historical records? It seems to randomly update some of the dates and times when the sheet is opened or a new record is added.
The sheet is used by a small business to clock in and keep track of project hours; records are added manually throughout a workday.
Col A is filled with the record's date when the Tech selects their name from a dropdown list in Col B, using the formula `=IFS(B43="","",A43="",NOW(),TRUE,A43)`. Once they select the relevant task and the project, they select START in Col F, which sets the starting time in Col H with `=IF(H43,H43,IF(F43="START",NOW(),IF(F43="STOP","","")))`.
Then, once the task is complete, they select STOP in Col G, which sets the end time in Col I with `=IF(I43,I43,IF(G43="STOP",NOW(),IF(G43="START","","")))`.
I am not familiar with Apps Script and its functions; can someone please assist?
I tried various LAMBDA functions to no avail, and data validation also failed.
As indicated above, I am not familiar with Apps Script or coding, so I tried wrapping my formula in a LAMBDA, but it kept giving me an error because the dates and times have to be set based on input in other cells.
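For what it's worth, the usual way around NOW() recalculating is to write a literal timestamp once from an Apps Script onEdit trigger instead of a formula. A minimal sketch for the date column (A = date, B = Tech name, as described above; the column numbers are assumptions from my own layout):

```javascript
// Pure helper: only stamp when a name was entered and the date cell is still empty.
function shouldStamp(nameValue, existingDate) {
  return Boolean(nameValue) && !existingDate;
}

// Apps Script simple trigger (runs inside Google Sheets, not standalone):
function onEdit(e) {
  const range = e.range;
  if (range.getColumn() !== 2) return;                // only react to Col B edits
  const sheet = range.getSheet();
  const dateCell = sheet.getRange(range.getRow(), 1); // Col A
  if (shouldStamp(e.value, dateCell.getValue())) {
    dateCell.setValue(new Date());                    // static value, never recalculates
  }
}
```

The same pattern would presumably cover the START/STOP times in Cols H and I by branching on edits to columns 6 and 7 instead.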
Prevent the Google Sheets NOW() function from updating when it is set based on another cell's input
|function|google-sheets|aws-lambda|setvalue| |
null |