R: Using a variable to pass multiple values for a single dynamically-defined parameter into a function |
|r|parameter-passing|microbenchmark| |
Using Selenium in Python, I would like to load the entirety of a JS generated list from this webpage: https://partechpartners.com/companies. There is a 'LOAD MORE' button at the bottom.
The code I've written to press the button (it only does it once currently; I know I'll need to extend it with a `while` loop to press it multiple times):
```
from selenium import webdriver #The Selenium webdriver
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.chrome.options import Options
from selenium.common.exceptions import NoSuchElementException, StaleElementReferenceException, WebDriverException
from time import sleep
chrome_options = Options()
chrome_options.add_argument("--headless")
driver = webdriver.Chrome(options=chrome_options)
url = 'https://partechpartners.com/companies'
driver.get(url)
sleep(2)
load_more = driver.find_element('xpath','//*[ text() = "LOAD MORE"]')
sleep(2)
try:
    ActionChains(driver).move_to_element(load_more).click(load_more).perform()
    print("Element was clicked")
except Exception as e:
    print("Element wasn't clicked")
```
The code prints `Element was clicked`. However, when I add the following line to the bottom of the script above, I still get only 30 items returned, which is the count as if the button hadn't been clicked. The relative XPath is the same for the elements before and after the button click, so I know that isn't the cause:
```
len(driver.find_elements('xpath','//h2'))
```
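A polling pattern like the following could replace the fixed sleeps, so the count check waits until the new items are actually rendered instead of racing the page. This is only a Selenium-independent sketch; the `driver` expression in the comment shows how it would plug in:

```python
import time

def wait_for_more(get_count, old_count, timeout=10.0, poll=0.5):
    """Poll get_count() until it returns more than old_count, or time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        new_count = get_count()
        if new_count > old_count:
            return new_count
        time.sleep(poll)
    raise TimeoutError(f"count never grew past {old_count}")

# Usage idea: wait_for_more(lambda: len(driver.find_elements('xpath', '//h2')), 30)
```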
I've also tried commenting out `chrome_options.add_argument("--headless")` to see if it works without a headless browser and to follow the clicks visually. An accept-cookies button appears that I can't get rid of, but that doesn't seem to matter, because the script above still returns elements. What could I do to ensure the webdriver browser is actually loading the new content? |
Python Selenium button click allegedly successful but not returning newly generated elements |
|python|selenium-webdriver| |
I have a textarea where users can paste text from the clipboard.
The pasted text (`clipboardData.getData('text')`) gets modified.
However, I need a switch so that when the user presses <kbd>CTRL</kbd> <kbd>SHIFT</kbd> <kbd>V</kbd> instead of <kbd>CTRL</kbd> <kbd>V</kbd>, the pasted text does not get modified.
I tried to catch the shift key with keydown/keyup:
```
$('textarea').on('keyup', function(e)
{
shiftkey_down = e.shiftKey;
});
```
and then try to read the boolean variable in the paste event handler:
```
$('textarea').on('paste', function(e)
{
if (shiftkey_down)
{
// ...
}
});
```
but the `paste` event comes after keydown and keyup. So I cannot read what key has been pressed. And `shiftkey_down` is always `false` inside the paste handler.
What would be the right way to handle this?
The only idea I got is to save the last key combination pressed inside the keydown event and then check the last one pressed inside the paste handler. But it does not seem to work either.
<br>
**Update:**
I tried to use a tiny timeout so the keydown boolean variable is not overwritten immediately:
```
var shiftkey_paste = false;
$('textarea').on('keydown', function(e)
{
if (!shiftkey_paste)
{
shiftkey_paste = e.shiftKey && e.ctrlKey;
console.log('> '+shiftkey_paste);
setTimeout( function()
{
// set false again after timeout so paste event has chance to pick up a true value
shiftkey_paste = false;
}, 10);
}
console.log('>> ' + shiftkey_paste);
});
```
And inside the paste handler, I print the value with `console.log('>>> ' + shiftkey_paste);`.
Result:
```
> false
>> false
> true
>> true
> false
>> false
>>> false
```
Also `false` even though the timeout should have helped.
<br>
**Update 2:**
βοΈ Wow, it works with a timeout of `100ms` instead of `10ms` (!)
<br>
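A timeout-free alternative I'm considering: keep the flag set until a `keyup` actually clears the modifiers, so the paste handler can read it no matter how late it fires. This is only a sketch; the handlers rely solely on the `ctrlKey`/`shiftKey` fields of the event:

```javascript
// Remember whether Ctrl+Shift was held, clearing the flag on keyup
// instead of on a timer, so a late 'paste' event can still read it.
function createModifierTracker() {
  let ctrlShift = false;
  return {
    keydown(e) { if (e.ctrlKey && e.shiftKey) ctrlShift = true; },
    keyup(e) { if (!e.ctrlKey || !e.shiftKey) ctrlShift = false; },
    isCtrlShift() { return ctrlShift; }
  };
}

// Wiring it up would look roughly like:
// const tracker = createModifierTracker();
// $('textarea').on('keydown', tracker.keydown).on('keyup', tracker.keyup);
// $('textarea').on('paste', e => { if (tracker.isCtrlShift()) { /* skip modification */ } });
```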
If someone finds a better solution, please post your answer. |
null |
I have recently authored a new hierarchical clustering algorithm specifically for 1D data. I think it is well suited to the case in your question. It is called `hclust1d` and it is written in `R`. It is available under [this link](https://CRAN.R-project.org/package=hclust1d).
The algorithm addresses @Anony-Mousse's concern: it takes advantage of the particular structure of 1D data and runs in O(n log n) time, which is much faster than general-purpose hierarchical clustering algorithms.
To segment your data into 3 bins (clusters) as you require in your question, you could run the following code:
```
library(hclust1d)
dendrogram <- hclust1d(c(1, 1, 2, 3, 10, 11, 13, 67, 71))
plot(dendrogram)
```
Now, having a look at the dendrogram tree (which I cannot attach here) one can see that cutting at the height of 10 results in the segmentation required.
```
cutree(dendrogram, h=10)
# 1 1 2 3 10 11 13 67 71
# 1 1 1 1 2 2 2 3 3
```
The clustering is hierarchical, meaning that what you get is a hierarchy of clusters (a dendrogram), and it is up to you to decide which number of clusters fits your particular data best. That is both an advantage (flexibility) and a disadvantage (you have to decide something) of this method. For instance, another good cut of the dendrogram, as can be seen on the plot, is located somewhere near the height of 20, resulting in two clusters:
```
cutree(dendrogram, h=20)
# 1 1 2 3 10 11 13 67 71
# 1 1 1 1 1 1 1 2 2
```
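For readers working in Python rather than R, SciPy's general-purpose hierarchical clustering reproduces the same two cuts on this data (at O(n^2) cost rather than hclust1d's O(n log n); complete linkage is the default in both):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

data = np.array([1, 1, 2, 3, 10, 11, 13, 67, 71], dtype=float)

# 1D observations must be passed as a single-feature column matrix.
Z = linkage(data.reshape(-1, 1), method="complete")

# Cut the dendrogram at height 10: three clusters, {1,1,2,3}, {10,11,13}, {67,71}.
three = fcluster(Z, t=10, criterion="distance")

# Cut at height 20: two clusters, {1,...,13} and {67,71}.
two = fcluster(Z, t=20, criterion="distance")
print(three, two)
```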
For more complex data, you could also experiment with other linkage functions using the `method` argument, like this:
```
dendrogram <- hclust1d(c(1, 1, 2, 3, 10, 11, 13, 67, 71), method="ward.D2")
```
Generally, the Ward linkage method gives you a clustering similar to K-Means (the loss function in both methods is the same, with Ward hierarchical clustering being a greedy implementation), but hierarchical clustering lets you decide on an appropriate number of clusters with a dendrogram in front of you, while you have to provide that number to K-Means up front.
The list of all supported linkage methods can be obtained with:
```
> supported_methods()
# [1] "complete" "average" "centroid" "true_median" "median" "mcquitty" "ward.D" "ward.D2" "single"
``` |
I am working on an Azure Function that interacts with the Azure Active Directory Graph API (@azure/graph). The function is designed to verify whether an email is registered and verified in Azure AD (Entra ID). However, I am encountering an issue with the access token being missing or malformed.
When the function tries to make a request to the Graph API to get the user by email (`graphClient.users.list`), it throws the following error:
```
Error verifying email: RestError: {"odata.error":{"code":"Authentication_MissingOrMalformed","codeForMetrics":"Authentication_MissingOrMalformed","message":{"lang":"en","value":"Access Token missing or malformed."}}}
```
Note: I am using Postman to test the API at `http://localhost:7071/api/checkEmail`.
This occurs despite using DefaultAzureCredential from @azure/identity to acquire the token.
I have checked my Azure AD configuration and environment variables `(AZURE_TENANT_ID, AZURE_CLIENT_ID, AZURE_CLIENT_SECRET)` which seem to be correct.
Here's the relevant part of my Azure Function code:
```
const { GraphRbacManagementClient } = require("@azure/graph");
const { DefaultAzureCredential } = require("@azure/identity");

const tenantId = process.env.AZURE_TENANT_ID;

module.exports = async function (context, req) {
    context.log('Checking if email is verified...');
    const { email } = req.body;

    if (!email) {
        context.res = {
            status: 400,
            body: "Please provide the email"
        };
        return;
    }

    try {
        const credential = new DefaultAzureCredential();
        const graphClient = new GraphRbacManagementClient(
            credential,
            tenantId
        );

        const verifyEmail = async (email) => {
            try {
                const users = await graphClient.users.list({ filter: `mail eq '${email}'` });
                const user = users.next().value;
                if (user) {
                    if (user.mailVerified) {
                        return true;
                    } else {
                        return false;
                    }
                } else {
                    return false;
                }
            } catch (error) {
                console.error("Error verifying email:", error);
                return false;
            }
        };

        const isEmailVerified = await verifyEmail(email);

        if (isEmailVerified) {
            context.res = {
                status: 200,
                body: "Email is verified"
            };
        } else {
            context.res = {
                status: 400,
                body: "Email is not verified or does not exist"
            };
        }
    } catch (error) {
        console.error("Error:", error);
        context.res = {
            status: 500,
            body: "Internal Server Error"
        };
    }
};
```
Also, I have already tried using ClientSecretCredential directly instead of DefaultAzureCredential, but the issue persists.
Azure AD permissions seem to be configured correctly for the app registration.
What could be causing this "Access Token missing or malformed" error in my Azure Function? How can I ensure the access token is properly acquired and used for authentication with the Graph API?
Any insights or suggestions would be greatly appreciated. Thank you!
|
If I understand your requirement correctly:
[![enter image description here][1]][1]
In the JS part, if we use the default comment shortcut, it gives us `// content`, which makes you worried about having too many plain-text comments in the code, so you want `@* content *@` as the default comment style for cshtml files.
However, I didn't find such a setting in my VS 2022 Community version, so I'm afraid we can't set it directly.
[![enter image description here][2]][2]
[1]: https://i.stack.imgur.com/hH9DL.png
[2]: https://i.stack.imgur.com/ndPY5.png |
I have an EditText:
```
<EditText
    android:id="@+id/selectvalue"
    style="@style/searchList"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:background="@color/white"
    android:focusable="true"
    android:hint="@string/search_order_hint"
    android:imeOptions="actionSearch"
    android:padding="10dp"
    android:singleLine="true"
    android:textColorHint="@color/common_gray"
    android:textCursorDrawable="@null"
    android:visibility="invisible" />
```
I have attached an OnTouchListener and a TextWatcher to it:
```
selectedValue.setOnTouchListener(new View.OnTouchListener() {
    @Override
    public boolean onTouch(View view, MotionEvent motionEvent) {
        showKeyBoard(mContext, selectedValue);
        return false;
    }
});

selectedValue.addTextChangedListener(new TextWatcher() {
    @Override
    public void onTextChanged(CharSequence s, int start, int before, int count) {}

    @Override
    public void beforeTextChanged(CharSequence s, int start, int count, int after) {}

    @Override
    public void afterTextChanged(Editable s) {
        if (!selectedAssetType.equals(OrderBookTypes.MUTUAL_FUND_TYPE)) {
            loadSearchFilterData();
        }
    }
});

public void showKeyBoard(Context context, EditText edit) {
    InputMethodManager m = (InputMethodManager) context.getSystemService(Context.INPUT_METHOD_SERVICE);
    if (m != null) {
        edit.requestFocus();
        m.toggleSoftInput(0, InputMethodManager.SHOW_IMPLICIT);
    }
}
```
But I cannot type there; characters are not shown when I type. If I remove the `orderBookModel.addAll(filterList);` line then I can type, but removing the line will break my functionality. Can anyone help? |
EditText not taking input in Android? |
|android|android-edittext|android-textwatcher| |
I have the basic Auth0 setup with app/api/[auth0]/route.ts, so users can log in and out, but there is no validation that checks and redirects a user to the start page if they haven't logged in.
So I've been trying to integrate some sort of user-authentication on my profile page. But my profile (app/user-dashboard/[user_id]/page.tsx) page is configured with dynamic routing like so:
```typescript
export async function generateMetadata({
params,
}: {
params: { user_id: string };
}) {
return {
title: 'User Dashboard',
description: `Dashboard for User: ${params.user_id}`,
};
}
const UserDashboard = ({ params }: { params: { user_id: string } }) => {
... rest of page
```
Very similar to how the Next.js docs explain it at [nextjs.org](https://nextjs.org/docs/app/building-your-application/routing/dynamic-routes#:~:text=functions.-,Example,-For%20example%2C%20a).
And I've tried to implement `withPageAuthRequired` on this page, like so:
```typescript
... rest of user page above.
export default withPageAuthRequired(UserDashboard, {
returnTo: ({ params }: AppRouterPageRouteOpts) => {
if (params) {
const user_id = params['user_id'];
if (typeof user_id === 'string') {
return `${process.env.NEXT_PUBLIC_BASE_URL}/user-dashboard/${user_id}`
}
}
}
});
```
But I get some typescript error on "UserDashboard":
```
Argument of type '({ params }: { params: { user_id: string; };}) => React.JSX.Element' is not assignable to parameter of type 'AppRouterPageRoute'.
Types of parameters '__0' and 'obj' are incompatible.
Type 'AppRouterPageRouteOpts' is not assignable to type '{ params: { user_id: string; }; }'.
Types of property 'params' are incompatible.
Type 'Record<string, string | string[]> | undefined' is not assignable to type '{ user_id: string; }'.
Type 'undefined' is not assignable to type '{ user_id: string; }'.ts(2345)
const UserDashboard: ({ params }: {
params: {
user_id: string;
};
}) => React.JSX.Element
```
I'm very new to Auth0 and so I'm not sure if I should try to use "withPageAuthRequired" to check whether a user is logged in or not. I find the [documentation](https://auth0.github.io/nextjs-auth0/types/helpers_with_page_auth_required.WithPageAuthRequiredAppRouter.html) very abstract.
Has anyone had this issue before and knows what to do?
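For what it's worth, the narrowing the compiler is asking for can be isolated in a small helper. This is only a sketch of the type logic (`buildReturnTo` and the `baseUrl` parameter are hypothetical names), accepting the wide `Record<string, string | string[]> | undefined` type that the error says is actually passed:

```typescript
type RouteParams = Record<string, string | string[]> | undefined;

// Accept the loosely-typed params and narrow user_id down to a string,
// returning undefined when it is missing or not a plain string.
function buildReturnTo(baseUrl: string, params: RouteParams): string | undefined {
  const userId = params?.["user_id"];
  if (typeof userId === "string") {
    return `${baseUrl}/user-dashboard/${userId}`;
  }
  return undefined;
}
```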
In Summary, tried this:
```typescript
export default withPageAuthRequired(UserDashboard, {
returnTo: ({ params }: AppRouterPageRouteOpts) => {
if (params) {
const user_id = params['user_id'];
if (typeof user_id === 'string') {
return `${process.env.NEXT_PUBLIC_BASE_URL}/user-dashboard/${user_id}`
}
}
}
});
```
And I expected some validation to happen that checks whether the user is logged in and returns them to the user page or the login page. |
This is to avoid writing multiple states in a composable.
Here's an example. The manager:
```
val manager by remember { mutableStateOf(downloadManager, neverEqualPolicy()) }
```
`manager` contains the info we want to show in the UI, and it has a listener:
```
DisposableEffect(true) {
    val listener = object : DownloadManager.Listener {
        override fun onDownloadChanged(downloadManager: DownloadManager, download: Download) {
            manager = downloadManager
        }

        override fun onDownloadsPausedChanged(downloadManager: DownloadManager, downloadsPaused: Boolean) {
            manager = downloadManager
        }

        override fun onDownloadRemoved(downloadManager: DownloadManager, download: Download) {
            manager = downloadManager
        }
    }
    manager.addListener(listener)
    onDispose { manager.removeListener(listener) }
}
```
And the rendering:
```
LazyColumn {
    itemsIndexed(manager.currentDownloads) { index, item ->
        ListItem(headlineContent = { Text("${item.percentDownloaded}") })
    }
}
```
So every time there's an event, the manager state is updated, which triggers recomposition. But I want to know: is this a suitable way to update the UI, or do I have to bind every state in the listener for updating the UI? In that case, is there a better way to update the state?
|
Is it good to use a single object (player, manger, etc) as state in Composable for listener? |
|android|android-jetpack-compose| |
I'm trying to get matches by date from a football API and show them grouped by league.
So, I've made a mutable list in the ViewModel to store every match item, with the fixtureId as its unique identifier. The problem is that items are added to the list on every API call without overwriting the old ones, so the list keeps growing with each call, and only one league (which contains two games that day) is shown, repeated infinitely. When I change the date, the new data is added to the list along with the old. I want to get the results shown in the following image.
If anyone can help, thank you.
```
@Composable
fun TodayMatchesLazy(
matchesList: List<TodayResponseItem?>
) {
val viewModel: MainViewModel = hiltViewModel()
matchesList.forEach {
viewModel.mainList
.add(MatchesByLeague(
leagueId = it!!.league!!.id!!,
leagueName = it.league!!.name!!,
leagueLogo = it.league.logo!!,
matchId = it.fixture!!.id!!,
teamHomeName = it.teams!!.home!!.name!!,
teamHomeLogo = it.teams.home!!.logo!!,
teamAwayName = it.teams.away!!.name!!,
teamAwayLogo = it.teams.away.logo!!,
teamHomeR = it.goals!!.home.toString(),
teamAwayR = it.goals.away.toString(),
time = it.fixture.date!!)
)
}
val groupedList = viewModel.mainList.groupBy { it.leagueId }
if (groupedList .isNotEmpty()) {
LazyColumn(
modifier = Modifier
.padding(20.dp)
.fillMaxSize()
) {
groupedList .forEach { (league, items) ->
item {
Row {
TodayMatchesHeader(leagueItem = matchesList, league = league)
}
}
itemsIndexed(items,
itemContent = { index, item ->
TodayMatchesRow(
teamHomeName = item.teamHomeName,
teamHomeLogo = item.teamHomeLogo,
teamHomeR = item.teamHomeR,
teamAwayName = item.teamAwayName,
teamAwayLogo = item.teamAwayLogo,
teamAwayR = item.teamAwayR,
time = item.time
)
}
)
}
}
}
}
```
The ViewModel:
```
@HiltViewModel
class MainViewModel @Inject constructor(private val liveMatchesRepository: MainRepository): ViewModel() {
private var _mainList = mutableStateListOf<MatchesByLeague>()
val mainList: MutableList<MatchesByLeague> = _mainList
}
data class MatchesByLeague(
val leagueId: Int, val leagueName: String, val leagueLogo: String,
val matchId: Int, val teamHomeName: String, val teamHomeLogo: String,
val teamAwayName: String, val teamAwayLogo: String,
val teamHomeR: String, val teamAwayR: String, val time: String
)
```
[1]: https://i.stack.imgur.com/ILU2M.jpg
UPDATE:
Based on the answer by BenjyTec, I've made some updates to the code. Now, it's working perfectly.
```
@Composable
fun TodayMatchesLazy(
matchesList: List<TodayResponseItem?>
) {
val viewModel: MainViewModel = hiltViewModel()
viewModel.mainList.value = matchesList
val groupedList = viewModel.mainList.value.groupBy { it!!.league!!.id }
if (groupedList.isNotEmpty()) {
LazyColumn(
modifier = Modifier
.padding(20.dp)
.fillMaxSize()
) {
groupedList.forEach { (league, items) ->
item {
TodayMatchesHeader(leagueItem = items[0]!!, league = league!!)
}
itemsIndexed(items,
itemContent = { index, item ->
TodayMatchesRow(
teamHomeName = item!!.teams!!.home!!.name!!,
teamHomeLogo = item.teams!!.home!!.logo!!,
teamHomeR = item!!.goals!!.home!!.toString(),
teamAwayName = item.teams.away!!.name!!,
teamAwayLogo = item.teams.away.logo!!,
teamAwayR = item.goals!!.away!!.toString(),
time = item.fixture!!.date!!
)
}
)
}
}
}
}
```
And the ViewModel, based on BenjyTec's answer:
```
@HiltViewModel
class MainViewModel @Inject constructor(private val liveMatchesRepository: MainRepository): ViewModel() {
private var _mainList = mutableStateOf<List<TodayResponseItem?>>(listOf())
var mainList: MutableState<List<TodayResponseItem?>> = _mainList
}
``` |
This is my code:
```
const redux = require("redux");
const thunkMiddleware = require("redux-thunk").default;
const axios = require("axios");
// -------------------------------
const createStore = redux.legacy_createStore;
const applyMiddleware = redux.applyMiddleware;
// action type ( intention )
const AXIOS_USER_REQUEST = "AXIOS_USER_REQUEST";
const AXIOS_USER_SUCCESS = "AXIOS_USER_SUCCESS";
const AXIOS_USER_ERROR = "AXIOS_USER_ERROR";
// action object creator ( function that creates action object )
const fetchUsers = () => {
return {
type: AXIOS_USER_REQUEST,
};
};
const fetchUsersSuccess = (users) => {
return {
type: AXIOS_USER_SUCCESS,
payload: users,
};
};
const fetchUsersError = (error) => {
return {
type: AXIOS_USER_ERROR,
payload: error,
};
};
// default state object
const intialState = {
loading: false,
users: [],
error: "",
};
// reducer
const reducer = (state = intialState, action) => {
switch (action.type) {
case AXIOS_USER_REQUEST:
return {
...state,
loading: true,
};
case AXIOS_USER_SUCCESS:
return {
...state,
loading: false,
users: action.payload,
};
case AXIOS_USER_ERROR:
return {
...state,
loading: false,
error: action.payload,
};
default:
return state;
}
};
// thunk actions
const thunkFetchUsers = () => {
return function (dispatch) {
dispatch(fetchUsers());
};
};
const thunkAjaxFetchUsersResponse = () => {
return function (dispatch) {
axios
.get("https://reqres.in/vijay/vijay?page=1")
.then((res) => dispatch(fetchUsersSuccess(res.data.data)))
.catch((error) => dispatch(fetchUsersError(error.response.statusText)));
};
};
// store
const store = createStore(reducer, applyMiddleware(thunkMiddleware));
console.log(store.getState());
console.log("********* subscribed ********* ");
// subscribe
store.subscribe(() => {
console.log(store.getState());
});
// dispatch
store.dispatch(thunkFetchUsers());
store.dispatch(thunkAjaxFetchUsersResponse());
```
and this is my error:
```
D:\Sadiq\Full Stack\ReactJS\day-11\plain-redux\node_modules\redux\dist\cjs\redux.cjs:407
const chain = middlewares.map((middleware) => middleware(middlewareAPI));
^
TypeError: middleware is not a function
at D:\Sadiq\Full Stack\ReactJS\day-11\plain-redux\node_modules\redux\dist\cjs\redux.cjs:407:51
at Array.map (<anonymous>)
at D:\Sadiq\Full Stack\ReactJS\day-11\plain-redux\node_modules\redux\dist\cjs\redux.cjs:407:31
at createStore (D:\Sadiq\Full Stack\ReactJS\day-11\plain-redux\node_modules\redux\dist\cjs\redux.cjs:133:33)
at legacy_createStore (D:\Sadiq\Full Stack\ReactJS\day-11\plain-redux\node_modules\redux\dist\cjs\redux.cjs:258:10)
at Object.<anonymous> (D:\Sadiq\Full Stack\ReactJS\day-11\plain-redux\step4.js:79:15)
at Module._compile (node:internal/modules/cjs/loader:1368:14)
at Module._extensions..js (node:internal/modules/cjs/loader:1426:10)
at Module.load (node:internal/modules/cjs/loader:1205:32)
at Module._load (node:internal/modules/cjs/loader:1021:12)
Node.js v21.7.1
```
How do I solve this issue?
I first checked the versions of redux, redux-thunk, and axios and updated them, but it still doesn't work. Then I reinstalled the node packages to make sure my code runs properly and gives me the output in JSON format, but it still doesn't work and I don't know exactly how to solve it. |
Azure Function: "Access Token missing or malformed" when using Graph API |
|javascript|azure|authentication|azure-active-directory| |
I've been hating the Azure Portal because they never cared about how intuitive their UI, menus, and options are. It's very hard to navigate. I just want to rant. Here's how you can enable the Release option:
**Go to organization settings**
- Click the Azure DevOps logo
- At the lower left
[![enter image description here][1]][1]
**Turn off Disable classic release**
[![enter image description here][2]][2]
No idea why they disabled it by default. It says classic and not deprecated, so it should be visible.
Edit (11/03/23): I've read somewhere that Release will be deprecated in favor of multi-staged pipelines. That explains it. They could have added some notes, though.
[1]: https://i.stack.imgur.com/JII2c.png
[2]: https://i.stack.imgur.com/mtnCz.png |
null |
Assuming that the legal document will not be just a single-line sentence, one can use a loop: first use a minimum line height, generate the PDF, and record the total number of pages; then increase the line height by, say, 0.05 each time (line height = line height + 0.05) and repeat until the total number of pages suddenly increases by one; then fall back to the previous line height (the one before the page count increased) and generate the final PDF.
So, please run the following script (e.g. pre-genpdf.php) to initialize the settings:
```
<?php
session_start();
unset($_SESSION["pageCount"]);
unset($_SESSION["initialpageCount"]);
unset($_SESSION["line-height"]);
unset($_SESSION["good-line-height"]);
?>
Initialization completed, please run genpdf.php by clicking <A href=genpdf.php>HERE</a>
```
After that, click the hyperlink to trigger the following genpdf.php
```
<?php
session_start();
$initiallineheight=1.0;
if (!isset( $_SESSION["pageCount"] )) {
$_SESSION["pageCount"]=0;
}
if (!isset( $_SESSION["initialpageCount"] )) {
$_SESSION["initialpageCount"]=0;
}
if (!isset( $_SESSION["line-height"] )) {
$_SESSION["line-height"]=$initiallineheight;
}
if (!isset( $_SESSION["good-line-height"] )) {
$_SESSION["good-line-height"]=$initiallineheight;
}
$teststring0="
This section will guide you through the general configuration and installation of PHP on Unix systems. Be sure to investigate any sections specific to your platform or web server before you begin the process.
As our manual outlines in the General Installation Considerations section, we are mainly dealing with web centric setups of PHP in this section, although we will cover setting up PHP for command line usage as well.
There are several ways to install PHP for the Unix platform, either with a compile and configure process, or through various pre-packaged methods. This documentation is mainly focused around the process of compiling and configuring PHP. Many Unix like systems have some sort of package installation system. This can assist in setting up a standard configuration, but if you need to have a different set of features (such as a secure server, or a different database driver), you may need to build PHP and/or your web server. If you are unfamiliar with building and compiling your own software, it is worth checking to see whether somebody has already built a packaged version of PHP with the features you need.
Prerequisite knowledge and software for compiling:
Basic Unix skills (being able to operate make and a C compiler)
An ANSI C compiler
A web server
Any module specific components (such as GD, PDF libs, etc.)
When building directly from Git sources or after custom modifications you might also need:
autoconf: 2.59+ (for PHP >= 7.0.0), 2.64+ (for PHP >= 7.2.0)
automake: 1.4+
libtool: 1.4.x+ (except 1.4.2)
re2c: 0.13.4+
bison:
PHP 7.0 - 7.3: 2.4 or later (including Bison 3.x)
End TEST.
";
$teststring=$teststring0;
$teststring.=$teststring0;
$teststring.=$teststring0;
require_once __DIR__ . '/vendor/autoload.php';
$mpdf = new \Mpdf\Mpdf();
$mpdf->useFixedNormalLineHeight = true;
$mpdf->useFixedTextBaseline = true;
$mpdf->normalLineheight = $_SESSION["line-height"];
$teststring=str_replace(chr(13),'<br>',$teststring);
$mpdf->WriteHTML($teststring);
$mpdf->Output('temp1.pdf', \Mpdf\Output\Destination::FILE);
$pageCount = count($mpdf->pages);
$_SESSION["pageCount"]=$pageCount;
if ($_SESSION["line-height"]==$initiallineheight) {
$_SESSION["initialpageCount"]=$pageCount;
}
if ($_SESSION["initialpageCount"]==$_SESSION["pageCount"]){
$_SESSION["good-line-height"]=$_SESSION["line-height"];
$_SESSION["line-height"]=$_SESSION["line-height"]+0.05;
?>
<script>
location.reload();
</script>
<?php
} else{
$mpdf = new \Mpdf\Mpdf();
$mpdf->useFixedNormalLineHeight = true;
$mpdf->useFixedTextBaseline = true;
$mpdf->normalLineheight = $_SESSION["good-line-height"];
$teststring=str_replace(chr(13),'<br>',$teststring);
$mpdf->WriteHTML($teststring);
$mpdf->Output('final.pdf', \Mpdf\Output\Destination::FILE);
?>
<script>
alert("Done ! Please use the final.pdf");
</script>
<?php
}
?>
```
Please note that I have used some PHP documentation as `$teststring0` for testing, for real case please use the actual textual data of your legal document.
It will generate temp1.pdf in the loop (just ignore it; each iteration overwrites the previous file since they have the same name), but finally it will generate final.pdf, which is the one with the best line-height, and then the process will STOP.
See result (initial line-height):
[![enter image description here][1]][1]
and final.pdf (best line-height)
[![enter image description here][2]][2]
Note: for a better result you may reduce the line-height increment from 0.05 to, say, 0.01 (a smaller step each time); the result may be even better, but of course the iteration may then take longer to complete.
[1]: https://i.stack.imgur.com/SZxjs.jpg
[2]: https://i.stack.imgur.com/xCqab.jpg |
{"OriginalQuestionIds":[47355038],"Voters":[{"Id":1458738,"DisplayName":"Sal"}]} |
**tl;dr:** scroll down to the last section on squashing commits
**Long:**
When you have conflicts, the 2 normal ways to resolve them are via merge or rebase. Let's assume your remote is called `origin`.
Resolve conflicts with merge:
```
git fetch
git switch branch-A
git merge origin/branch-F
# resolve the conflicts, stage and commit
git push
```
With a merge, you will add one merge commit to your branch which resolves the conflicts, and when you push your PR will be updated without any conflicts.
Resolve conflicts with rebase:
```
git fetch
git switch branch-A
git rebase origin/branch-F
# resolve the conflicts, stage them, and
git rebase --continue
# repeat resolving conflicts for all commits that have them, stage and continue
# once finished:
git push --force-with-lease
```
With a rebase you aren't adding new commits, but you are rewriting your commits to resolve the conflicts, so you must force push your branch to update the PR.
Note that with merge there is only one place you can have conflicts, but with rebase it's possible to have conflicts on multiple commits. If this is your case and you don't wish to resolve conflicts multiple times, you could first squash your commits down to a smaller number and then do the rebase. (Or, as suggested in your question, if you would rather squash your commits anyway.)
**If you wish to squash your commits**, I recommend doing this first because it's not possible to have conflicts when squashing commits as long as you maintain the same order. There are multiple ways to squash commits, here's one:
```
git switch branch-A
git rebase -i --keep-base origin/branch-F
# now pick the top commit, squash the rest, save and exit the editor
# When the rebase finishes it will prompt you to write a combined commit message.
# now you have 1 commit on your branch instead of multiple
# now repeat the steps above for rebasing to resolve merge conflicts
```
Note that the options to rebase:
* `-i`: Interactive rebase. This enables you to control the rebase TODO list.
* `--keep-base`: Instead of rebasing onto `origin/branch-F` it will keep the base you started from, so you don't have to deal with the conflicts yet.
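To try the merge-based flow end to end without touching a real project, here is a throwaway scratch-repo walkthrough (branch names match the ones above; the conflict and its "hand" resolution are simulated):

```shell
set -eu
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.name demo
git config user.email demo@example.com
main=$(git symbolic-ref --short HEAD)

# common base commit
echo "line one" > file.txt
git add file.txt
git commit -qm "base"

# branch-F changes the file
git checkout -qb branch-F
echo "from F" > file.txt
git commit -qam "F change"

# branch-A changes the same line, guaranteeing a conflict
git checkout -q "$main"
git checkout -qb branch-A
echo "from A" > file.txt
git commit -qam "A change"

# the merge stops with a conflict; resolve by hand, stage, and commit
git merge branch-F || true
echo "resolved" > file.txt
git add file.txt
git commit -qm "merge branch-F (conflict resolved)"
cat file.txt   # prints "resolved"
```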
|
I have just resolved this issue. It appears that you may have used a MacBook M1 for the pip install process, so the layer was built for a different architecture/runtime than the one your Lambda function runs on.
The solution is to avoid building the layer with a different runtime.
Use Docker to create an environment identical to that of your Lambda function. For me, this resolved the issue.
My virtual environment suddenly stopped working. I found this message when I tried to add a new package to my virtual environment. My problem is as follows:
I ran this command: `pip install <package_name>`
But I got an error message:
`ERROR: Could not install packages due to an OSError: Missing dependencies for SOCKS support.`
How can I fix the error? I need detailed information.
What did I try?
1. Upgraded my pip version.
2. Created a new venv (for the Python project).
[1]: https://i.stack.imgur.com/gHaHt.jpg |
{"Voters":[{"Id":23239746,"DisplayName":"Lew Xin Yan"}]} |
I have been trying to code collision for a school project. However, I quickly noticed that when I draw a bitmap using `canvas.DrawBitmap(currentImage, position.X, position.Y, paint)`, it did not match the size of the radius: the image appeared bigger than the actual circle. This is not an issue when I use `canvas.DrawCircle(position.X, position.Y, size.Width / 2, new Paint() { Color = Color.Black })`, as that is scaled properly. How do I draw a bitmap that is scaled correctly to the actual circle?
- Relevant code:
```
public Sprite(float x, float y, float size, Context context) : base(x, y)
{
    int id = (int)typeof(Resource.Drawable).GetField(General.IMG_BALL).GetValue(null);
    image = BitmapFactory.DecodeResource(context.Resources, id);
    this.size = new SizeF(size, size);
    paint = new Paint(PaintFlags.AntiAlias);
    paint.AntiAlias = false;
    paint.FilterBitmap = false;
    paint.Dither = true;
    paint.SetStyle(Paint.Style.Fill);
}

public void DisplayImage(Canvas c)
{
    Matrix matrix = new Matrix();
    matrix.PostTranslate(position.X, position.Y);
    matrix.PreScale(size.Width, size.Height);
    c.DrawBitmap(image, matrix, paint);
}
```
- results using `canvas.DrawCircle(position.X, position.Y, size.Width / 2, new Paint() { Color = Color.Black }) ` (correct scaling, no bitmap, radius = 100):
[screenshot][1]
- results using the `DisplayImage(canvas)` function I made (wrong scaling, using bitmap, radius = 1 so it can fit in the image)
[screenshot][2]
[1]: https://i.stack.imgur.com/tppxP.png
[2]: https://i.stack.imgur.com/khDTR.png |
On tiles you can set either an Android resource id or an image byte array. In your case, if the image is dynamic (not set as an internal resource), you need to convert it to a byte array and then set it.
(I recommend converting the image to a byte array outside the tile service, for example in the editor activity, to save resources and battery.)
Here is some code for converting a drawable into a byte array, in case you want it:
```
private byte[] toByteArray(Drawable d) {
try {
final Bitmap b = Bitmap.createBitmap(d.getIntrinsicWidth(), d.getIntrinsicHeight(), Bitmap.Config.ARGB_8888);
final Canvas canvas = new Canvas(b);
d.setBounds(0, 0, d.getIntrinsicWidth(), d.getIntrinsicHeight());
d.draw(canvas);
final ByteArrayOutputStream stream = new ByteArrayOutputStream();
b.compress(Bitmap.CompressFormat.PNG, 100, stream);
return stream.toByteArray();
} catch (Exception e) {
return null;
}
}
```
To convert the byte[] to a string (and back) you can:
```
private String byteArrayToString(byte[] b) {
    return Base64.getEncoder().encodeToString(b);
}

private byte[] stringToByteArray(String s) {
    return s == null ? new byte[0] : Base64.getDecoder().decode(s);
}
```
Then you can set it on the tile in `onResourcesRequest`:
```
@NonNull
@Override
protected ListenableFuture<ResourceBuilders.Resources> onResourcesRequest(@NonNull RequestBuilders.ResourcesRequest requestParams) {
final ResourceBuilders.Resources.Builder resources = new ResourceBuilders.Resources.Builder();
//Your code to get the byte array
resources.addIdToImageMapping(
ID,
new ResourceBuilders.ImageResource.Builder()
.setInlineResource(
new ResourceBuilders.InlineImageResource.Builder()
.setData(
//BYTE ARRAY HERE <<
).setFormat(ResourceBuilders.IMAGE_FORMAT_UNDEFINED) //This is required unless you know the format, I recommend leaving this way
.setHeightPx(SIZE * 2) //2 for better quality
.setWidthPx(SIZE * 2) //2 for better quality
.build()
).build()
);
return Futures.immediateFuture(resources.setVersion(VERSION).build());
}
```
Then you set the image using its ID... |
TypeError: middleware is not a function, how to fix this issue? |
|typeerror|middleware| |
null |
`GestureDetector` requires a child widget; the gesture behaviour executes on that child widget.
Whatever widget you want to be clickable, just wrap it with `GestureDetector` and add **onTap** and the other required properties.
Basic example:
```
import 'package:flutter/material.dart';
void main() {
runApp(MyApp());
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
appBar: AppBar(
title: GestureDetector(
onTap: () {
print('AppBar title tapped!');
},
child: Text('Gesture Detector Example'),
),
),
body: Center(
child: GestureDetector(
onTap: () {
print('Container tapped!');
},
child: Container(
width: 200,
height: 200,
color: Colors.blue,
child: Center(
child: Text(
'Tap me!',
style: TextStyle(color: Colors.white, fontSize: 20),
),
),
),
),
),
),
);
}
}
``` |
No, it is not because of the cache's max size (you would get the same results with `maxsize=3`). You get these statistics because you make life easier for the call to `fib(9)` by already having made the call to `fib(8)` before the loop. Realize also that these statistics are not only about *one* of your calls, but about *all* your calls thus far. In your main program, the call of `fib(9)` does not make as many hits as you suggest. It just has a hit for `fib(8)` and for `fib(7)` and that's it.
If you would not have made the call to `fib(8)` before the loop, you would get one less hit in the statistics in the first iteration of the loop. This is expected.
In your version, you made one more call to `fib(8)` in total: by the time you make the call `fib(9)`, there is an extra hit counted for `fib(8)`, while if you would not have made `fib(8)` before the loop, then it is not a hit when `fib(9)` is called (and it will have all the misses that `fib(8)` would have had, plus one).
This effect is similar when you go to the next iteration of the loop: since the previous iteration already cached the result for an argument that is one less, the current call will have a hit in its first recursive call. |
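This can be seen concretely with `cache_info()`. A short sketch (the question's exact `fib` isn't shown here, so a standard memoized definition is assumed):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(8)                   # warm-up call, caches fib(0)..fib(8)
print(fib.cache_info())  # hits=6, misses=9
fib(9)                   # only two extra hits: fib(8) and fib(7)
print(fib.cache_info())  # hits=8, misses=10
```

Without the warm-up call, a fresh `fib(9)` would instead report `hits=7, misses=10`: one fewer hit, because `fib(8)` is then a miss inside the recursion rather than a hit at the top.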
I am running a simple piece of code on Replit.com, but I get an error. This code worked a week ago, but it doesn't work now. The URL is accessible and works from the browser without problems. Searching for solutions on the Internet did not lead to results. Please help me: how can I fix the error?
```python
import requests

url = 'https://iss.moex.com/iss/statistics/engines/stock/markets/index/analytics.csv'
print(url)
r = requests.get(url).text
print(r)
```

```
Traceback (most recent call last):
...
requests.exceptions.SSLError: HTTPSConnectionPool(host='iss.moex.com', port=443): Max retries exceeded with url: /iss/statistics/engines/stock/markets/index/analytics.csv (Caused by SSLError(SSLEOFError(8, '[SSL: UNEXPECTED_EOF_WHILE_READING] EOF occurred in violation of protocol (_ssl.c:1007)')))
```
I tried the solution from here:
https://stackoverflow.com/questions/16230850/httpsconnectionpool-max-retries-exceeded
but there was no result. |
For a multi-container pod on minikube, this is how the Service and Deployment YAMLs for ZooKeeper and Kafka will look:
```yaml
#---------------------------broker-conf---------------------
apiVersion: v1
kind: ConfigMap
metadata:
  name: broker-conf
data:
  KAFKA_BROKER_ID: "1"
  ZOOKEEPER_CONNECT: 127.0.0.1:2181
  #ZOOKEEPER_CONNECT: kafka-service:2181
  BOOTSTRAP_SERVER: 127.0.0.1:9092
  #BOOTSTRAP_SERVER: kafka-service:9092
  KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
  KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-service:9092,PLAINTEXT_HOST://localhost:29092
  KAFKA_OPTS: -javaagent:/confluent-7.0.1/share/java/kafka/jmx_prometheus_javaagent-0.16.1.jar=9393:/confluent-7.0.1/etc/kafka/kafkaMetricConfig.yaml
  listeners: PLAINTEXT://0.0.0.0:9092
  advertised.listeners: PLAINTEXT://kafka-service:9092
---
#----------------------------------------zookeeper-service-----------------------
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-service
spec:
  type: ClusterIP
  ports:
    - name: client
      port: 2181
      protocol: TCP
  selector:
    component: kafka-svc
---
#----------------------------------kafka-service-------------------
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
spec:
  #type: ClusterIP
  type: NodePort
  ports:
    - name: plaintext
      port: 9092
      protocol: TCP
    - name: plaintext-host
      port: 29092
      protocol: TCP
    - name: jmx-prom-port
      port: 9393
      protocol: TCP
      nodePort: 31500
  selector:
    component: kafka-svc
---
# ----------------------------Zookeeper Deployment----------------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka
spec:
  replicas: 1
  selector:
    matchLabels:
      component: kafka-svc
  template:
    metadata:
      labels:
        component: kafka-svc
    spec:
      containers:
        - name: zookeeper
          image: localhost:5000/zookeeper
          resources:
            requests:
              memory: "128Mi"
              cpu: "125m"
            limits:
              memory: "256Mi"
              cpu: "250m"
          command: [ "/bin/sh", "-c" ]
          args: ["/confluent-7.0.1/bin/zookeeper-server-start /confluent-7.0.1/etc/kafka/zookeeper.properties; sleep infinity"]
          ports:
            - containerPort: 2181
          env:
            - name: ZOOKEEPER_ID
              value: "1"
            - name: ZOOKEEPER_CLIENT_PORT
              value: "2181"
            - name: ZOOKEEPER_TICK_TIME
              value: "2000"
            - name: ALLOW_ANONYMOUS_LOGIN
              value: "yes"
        # -------------------------------------------kafka-------------------
        - name: kafka
          image: localhost:5000/kafka
          imagePullPolicy: IfNotPresent
          resources:
            requests:
              memory: "1Gi"
              cpu: "500m"
            limits:
              memory: "2Gi"
              cpu: "1000m"
          envFrom:
            - configMapRef:
                name: broker-conf
          command: [ "/bin/sh", "-c" ]
          args:
            - /confluent-7.0.1/bin/kafka-server-start /confluent-7.0.1/etc/kafka/server.properties --override zookeeper.connect=$(ZOOKEEPER_CONNECT);
            - /confluent-7.0.1/bin/kafka-topics --create --topic microTopic --bootstrap-server $(BOOTSTRAP_SERVER);
            - mkdir /prometheus/
            - sleep infinity;
          ports:
            - containerPort: 9092
              name: kafka-port
            - containerPort: 29092
              name: kafka-ad-port
            - containerPort: 9393
              name: jmx-export-port
```
|
You have to create an extension and upload it to the marketplace. The extension will be a node application and in your case a custom control. Once you've loaded the extension, you can share it with your organization and then add it to the form page for your process. When the form is displayed, the extension will run and can read and write fields for your Work Item.
This document will get you started: https://learn.microsoft.com/en-us/azure/devops/extend/develop/custom-control
I found that rather than building the extension from scratch, it's easier to download the samples and strip out the sample extensions you don't need. Just delete the folders in src/samples you don't need, build it, and upload it into DevOps. The browser console will be your friend for debugging your extension.
You can get the samples here: https://github.com/Microsoft/azure-devops-extension-sample |
I have to create an application for a doctor, where the doctor can see past records of all prescriptions, which were captured as images (physical paper prescriptions photographed with a mobile camera). I can call the APIs to fetch them from the server when the network is available. But there is no way to save that huge number of images in local storage and access them when there is no network.
I'm looking for the solution. Your help will be highly appreciated. |
How to effectively store tons of images in a local database in Flutter/Android |
|android|flutter|kotlin|dart|offline| |
I have a User entity with a many-to-many relationship to the Authority table, a many-to-one to UserGroup, and another many-to-one to Company.
I am using a projection interface named UserDto and trying to fetch all users via an entity graph with attributePaths authorities, company and userGroup. Before Spring 3 it always gave me distinct parents, but now it does a cartesian product.
I read somewhere that Hibernate 6 automatically de-duplicates the result set. So I tried writing a JPA query myself using join fetch and adding the distinct keyword, and used the User entity instead of the projection, but got the same result.
```java
@Entity
@Table(name = "USERS")
public class User extends Cacheable<Integer, User> implements AuditableEntity {

    private static final long serialVersionUID = -4759265801462008942L;

    @Id
    @Column(name = "USER_ID", nullable = false)
    @TableGenerator(name = "USER_ID", table = "ID_GENERATOR", pkColumnName = "GEN_KEY", valueColumnName = "GEN_VALUE",
            pkColumnValue = "USER_ID", allocationSize = 10)
    @GeneratedValue(strategy = GenerationType.TABLE, generator = "USER_ID")
    private Integer id;

    @Column(name = "EMAIL_ID", unique = true, nullable = false, length = 254)
    private String emailId;

    @JsonIgnore
    @Column(name = "PASSWORD", nullable = false, length = 60)
    private String password;

    @Column(name = "ENABLED", nullable = false)
    private boolean enabled = false;

    @Column(name = "LOCKED", nullable = false)
    private boolean locked = false;

    @ManyToMany(fetch = FetchType.EAGER)
    @JoinTable(
            name = "USER_AUTHORITY",
            joinColumns = {@JoinColumn(name = "USER_ID", referencedColumnName = "USER_ID")},
            inverseJoinColumns = {@JoinColumn(name = "AUTHORITY_NAME", referencedColumnName = "NAME")})
    @NotNull
    private Set<Authority> authorities = new HashSet<>();

    @NotEmpty
    @Size(max = 60)
    @Column(name = "FIRST_NAME", length = 60)
    private String firstName;

    @Column(name = "MIDDLE_NAME", length = 60)
    private String middleName;

    @NotEmpty
    @Size(max = 60)
    @Column(name = "LAST_NAME", length = 60)
    @FieldDescription(name = "Last Name", order = 1, type = ExcelColumnType.STRING, required = true)
    private String lastName;

    @NotNull
    @ManyToOne(fetch = FetchType.EAGER)
    @JoinColumn(name = "COMPANY_ID")
    @FieldDescription(name = "Company", order = 4, type = ExcelColumnType.COMPANY, required = true)
    private Company company;

    @ManyToOne
    @JoinColumn(name = "USER_GROUP_ID")
    private UserGroup userGroup;

    //hashcode and equals based on id
}
```
Here is the projection interface:
```java
public interface UserDto {

    Integer getId();

    String getEmailId();

    String getFirstName();

    String getLastName();

    @Value("#{target.firstName + ' ' + target.lastName}")
    String getFullName();

    Company getCompany();

    @Value("#{target.userGroup.id}")
    Integer getUserGroupId();

    @Value("#{target.userGroup.name}")
    String getUserGroupName();
}
```
This is the repository:
```java
@Repository
public interface UserRepository extends JpaRepository<User, Integer>, CacheableRepository<User, Integer> {

    //other methods..

    @EntityGraph(attributePaths = { "authorities", "company", "userGroup" })
    <E> List<E> findBy(Class<E> type);

    //another way
    //@Query("Select u from User u left join fetch u.authorities a left join fetch u.company c left join fetch u.userGroup")
    //@QueryHints(value = { @QueryHint(name = org.hibernate.jpa.QueryHints.HINT_PASS_DISTINCT_THROUGH, value = "false")})
    //List<User> findBy();
}
```
And the service:
```java
@Transactional
public List<UserDto> getAll() {
    return userRepository.findBy(UserDto.class);
}
```
|
null |
Here is a minimal snippet to perform interpolation:
```python
import numpy as np
from scipy import interpolate

def model(x, y, z):
    return np.sqrt(x ** 2 + y ** 2 + z ** 2)

x = np.linspace(-1, 1, 100)
y = np.linspace(-1, 1, 101)
z = np.linspace(-1, 1, 102)

X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
F = model(X, Y, Z)

interpolator = interpolate.RegularGridInterpolator((x, y, z), F)

points = np.array([
    [0, 0, 0],
    [0, 0, 1],
    [1, 0, 0],
    [0, 1, 0],
])
interpolator(points)
# array([0.01414426, 1.00005101, 1.00004901, 1.00010003])
```
The memory issue is due to the cubic growth of the rectangular 3D grid:
```python
F.nbytes / 2**20  # 7.8598 MiB
```
Increasing the resolution by a factor of 10 will consume 1000 times more memory; in this scenario it would consume approximately 8 GiB. |
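As a rough back-of-the-envelope sketch (the grid sizes here are illustrative, not taken from the question):

```python
# A regular 3D grid of float64 values holds n**3 * 8 bytes, so memory
# grows cubically with the per-axis resolution n.
n = 100
small = n**3 * 8          # comparable to the grid above
large = (10 * n)**3 * 8   # 10x the resolution -> 1000x the memory
print(small / 2**20)      # ~7.63 MiB
print(large / 2**30)      # ~7.45 GiB
```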
I'm trying to write an xUnit test for `HomeController`, and some important configuration information is stored in **Nacos**.
**The problem now is that I can't get the configuration information in nacos.**
# Here is my test class for HomeController
```
using Xunit;
namespace ApiTestProject;
public class HomeControllerTest
{
public HomeControllerTest()
{
Init(); // using DI to mock register services
controller = new HomeController();
}
//in this method, I can not access the nacos config strings
private void Init()
{
var builder = WebApplication.CreateBuilder();
//add appsettings.json
builder.Host.ConfigureHostConfiguration(cbuilder =>
{
cbuilder.AddJsonFile("appsettings.Test.json", optional: false, reloadOnChange: true);
});
//get nacosconfig in appsettings
var nacosconfig = builder.Configuration.GetSection("NacosConfig");
builder.Host.ConfigureAppConfiguration((context, builder) =>
{
// add nacos
builder.AddNacosV2Configuration(nacosconfig);
});
//try to get the "DbConn" in nacos, but connstr is null
string connstr = builder.Configuration["DbConn"];
//other register logic...
}
}
```
# and the appsettings.Test.json file:
```
{
"NacosConfig": {
"Listeners": [
{
"Optional": false,
"DataId": "global.dbconn",
"Group": "DEFAULT_GROUP"
},
{
"Optional": false,
"DataId": "global.redisconfig",
"Group": "DEFAULT_GROUP"
},
{
"Optional": false,
"DataId": "global.baseurlandkey",
"Group": "DEFAULT_GROUP"
}
],
"Namespace": "my-dev",
"ServerAddresses": [
"http://mynacos.url.address/"
],
"UserName": "dotnetcore",
"Password": "123456",
"AccessKey": "",
"SecretKey": "",
"EndPoint": "",
"ConfigUseRpc": false,
"NamingUseRpc": false
}
}
```
**Update:** I've checked in detail to make sure there aren't any spelling mistakes or case-sensitivity issues. The code in the **Init()** function works well in the **Program.cs** file of the API project under test, but in this xUnit project it doesn't work at all. |
Fetch Nacos Config in Unit Test in .NET Core 6 |
I have the following data
| Partition key | Sort key | value |
| ------------- | -------- | ----- |
| aa | str1 | 1 |
| aa | str2 | 123 |
| aa | str3 | 122 |
| ab | str1 | 12 |
| ab | str3 | 111 |
| ac | str1 | 112 |
And by using `QueryRequest` I want to select entries where the partition key is "aa" and the sort key is either "str1" or "str3". I have tried to create a `Condition`, but nothing works (each attempt throws a different exception), so could someone point out how this query should be written?
```cs
var pkCondition = new Condition
{
ComparisonOperator = ComparisonOperator.EQ,
AttributeValueList =
{
new AttributeValue { S = "aa" },
},
};
// Exception One or more parameter values were invalid: Condition parameter type does not match schema type
var skCondition1 = new Condition
{
ComparisonOperator = ComparisonOperator.EQ,
AttributeValueList =
{
new AttributeValue { SS = { "str1", "str3" } },
},
};
// Exception: One or more parameter values were invalid: Invalid number of argument(s) for the EQ ComparisonOperator
var skCondition2 = new Condition
{
ComparisonOperator = ComparisonOperator.EQ,
AttributeValueList =
{
new AttributeValue { S = "str1" },
new AttributeValue { S = "str3" },
},
};
// Well this one is clear because IN cannot be performed on keys
// Exception: One or more parameter values were invalid: ComparisonOperator IN is not valid for SS AttributeValue type
var skCondition3 = new Condition
{
ComparisonOperator = ComparisonOperator.IN,
AttributeValueList =
{
new AttributeValue { SS = { "str1", "str3" } },
},
};
// Works, but not what I need
var skCondition4 = new Condition
{
ComparisonOperator = ComparisonOperator.EQ,
AttributeValueList =
{
new AttributeValue { S = "str1" },
},
};
return new QueryRequest
{
TableName = "My table",
KeyConditions =
{
{ "Partition key", pkCondition },
{ "Sort key", skCondition },
},
};
```
After some research I have found out that `KeyConditionExpression` can also be used, but I have not found any combination that could work for my case.
```cs
// Exception: Invalid operator used in KeyConditionExpression: OR
const string keyConditionExpression1 = "#pk = :pkval AND (#sk = :sk1 OR #sk = :sk2)";
// Exception: Invalid operator used in KeyConditionExpression: IN
const string keyConditionExpression2 = "#pk = :pkval AND #sk IN (:sk1, :sk2)";
new QueryRequest
{
TableName = "My table",
KeyConditionExpression = keyConditionExpression,
ExpressionAttributeNames = new Dictionary<string, string>
{
{ "#pk", "Partition key" },
{ "#sk", "Sort key" },
},
ExpressionAttributeValues = new Dictionary<string, AttributeValue>
{
{ ":pkval", new AttributeValue { S = "aa" } },
{ ":sk1", new AttributeValue { S = "str1" } },
{ ":sk2", new AttributeValue { S = "str3" } },
},
};
```
Next, I have also tried using `FilterExpression` where IN and OR are allowed, but again there was another exception: `Filter Expression can only contain non-primary key attributes`.
```cs
new QueryRequest
{
TableName = "My table",
KeyConditionExpression = "#pk = :pkval",
FilterExpression = "#sk IN (:sk1, :sk2)",
ExpressionAttributeNames = new Dictionary<string, string>
{
{ "#pk", "Partition key" },
{ "#sk", "Sort key" },
},
ExpressionAttributeValues = new Dictionary<string, AttributeValue>
{
{ ":pkval", new AttributeValue { S = "aa" } },
{ ":sk1", new AttributeValue { S = "str1" } },
{ ":sk2", new AttributeValue { S = "str3" } },
},
};
``` |
requests.exceptions.SSLError: HTTPSConnectionPool on Replit.com, |
|ssl|python-requests| |
null |
Does the `CdiCenterOutput` mode work on Lipari-mid and Kiska-mid (55 ppm) machines when a GB finisher is attached?

```
enum CdiOffsetModes
{
    cdiCenterOutput = 0x04
```

I need to know whether the Lipari-mid and Kiska-mid (55 ppm) machines support `CdiCenterOutput` or not. |
Does CdiCenterOutput work on Lipari-mid & Kiska-mid (55 ppm)? |
|iot| |
null |
I'm trying to create a google sheet that sorts new entries automatically in the "Name" column (see image1).
[MySheet][1]
I found the sorting script online, and it works as needed regarding the "Name" sorting.
However, if I type a new name in the "Name" column (for example "N"), it sorts it automatically and creates a new row for "S1", but not for "S2" (see image2); likewise, if I delete a name (for example "K"), it shifts up the data below it and moves the related "Name" and "S1" cells to the bottom in a new row, but it doesn't do that for "S2" (see image3).
So the data gets messed up whenever an entry is added or deleted.
How can I make the code create a new row for S1 and S2 whenever a name is added, and automatically delete the whole row when a name is deleted?
Here is my script:
```lang-js
function autoSort(e){
const row = e.range.getRow();
const column = e.range.getColumn() ;
const ss = e.source;
const currentSheet = ss.getActiveSheet();
const currentSheetName = currentSheet.getSheetName();
if(!(currentSheetName === "Scores" && column === 2 && row >=2)) return
const range = currentSheet.getRange(2,2, currentSheet.getLastRow()-1,2);
range.sort({column: 2, ascending: true});
}
function onEdit(e){
autoSort(e)
}
```
[image1]![][2]
[image2]![][3]
[image3]![][4]
[1]: https://docs.google.com/spreadsheets/d/1RlR47T9Fw2eZl0fT-NQvjHfRmtJPaL0kWCUbgE8KMQs/edit?usp=sharing
[2]: https://i.stack.imgur.com/hixee.png
[3]: https://i.stack.imgur.com/dZVsF.png
[4]: https://i.stack.imgur.com/uCw4x.png |
|google-sheets|google-apps-script|insert|addition| |
Another way to do it is using the field path, which works with any Q type:
```
public static OrderSpecifier<?> getSortColumn(EntityPathBase<?> entity, Order order, String fieldName) {
final var entityPath = new PathBuilder<>(entity.getType(), entity.getMetadata().getName());
final var sortColumnPath = entityPath.getComparable(fieldName, Comparable.class); //All sorting columns should be implementing Comparable
return switch (order) {
case ASC -> sortColumnPath.asc();
case DESC -> sortColumnPath.desc();
};
}
```
I'm making use of some Java 17+ features, but you can still do without.
Then use it with any Q type, e.g.:
```
getSortColumn(QPerson.person, Order.ASC, "name");
getSortColumn(QCat.cat, Order.DESC, "dateOfBirth");
```
and you don't have to manually construct new instance of `OrderSpecifier` and deal with 'use of raw type' warnings. |
|c#|.net-core|nacos| |
null |
You could reshape with [`.melt()`](https://docs.pola.rs/docs/python/dev/reference/dataframe/api/polars.DataFrame.melt.html)
```python
df.melt()
```

```python
shape: (12, 2)
ββββββββββββββββ¬ββββββββββββββ
β variable β value β
β --- β --- β
β str β str β
ββββββββββββββββͺββββββββββββββ‘
β sub-category β tv β
β sub-category β mobile β
β sub-category β tv β
β sub-category β wm β
β sub-category β micro β
β β¦ β β¦ β
β category β mobile β
β category β electronics β
β category β electronics β
β category β kitchen β
β category β electronics β
ββββββββββββββββ΄ββββββββββββββ
```
The count is then the length of each group:
```python
df.melt().group_by(pl.all()).len()
```
```python
shape: (7, 3)
ββββββββββββββββ¬ββββββββββββββ¬ββββββ
β variable β value β len β
β --- β --- β --- β
β str β str β u32 β
ββββββββββββββββͺββββββββββββββͺββββββ‘
β category β kitchen β 1 β
β sub-category β tv β 2 β
β sub-category β mobile β 1 β
β category β mobile β 1 β
β sub-category β wm β 2 β
β sub-category β micro β 1 β
β category β electronics β 4 β
ββββββββββββββββ΄ββββββββββββββ΄ββββββ
```
[`.pivot()`](https://docs.pola.rs/docs/python/dev/reference/dataframe/api/polars.DataFrame.pivot.html#polars.DataFrame.pivot) can be used to reshape into individual columns if required.
```python
(df.melt()
.pivot(
index = "value",
columns = "variable",
values = "value",
aggregate_function = pl.len()
)
)
```
```python
shape: (6, 3)
βββββββββββββββ¬βββββββββββββββ¬βββββββββββ
β value β sub-category β category β
β --- β --- β --- β
β str β u32 β u32 β
βββββββββββββββͺβββββββββββββββͺβββββββββββ‘
β tv β 2 β null β
β mobile β 1 β 1 β
β wm β 2 β null β
β micro β 1 β null β
β electronics β null β 4 β
β kitchen β null β 1 β
βββββββββββββββ΄βββββββββββββββ΄βββββββββββ
``` |
It looks like the original developer wanted these files to be common to both projects, so added them to `Project1`. If you want to keep them separate then do the following:
- Right click each alias file in `Project1` in the Solution Explorer window of VS and select `Remove`. Do not select the `Delete` option, as that deletes the physical file from `Project10`.
If you want `Project10/Utility.h` and `Project10/Utility.cpp` linked with `Project1`, you should create copies of those files and put them inside `Project1`. It's not possible to link object files across projects.
While you can create shortcuts to `Project10/Utility.h` and `Project10/Utility.cpp` and put them inside `Project1`, this sidesteps Visual Studio's file tracking (VS will treat them as copies in File Explorer).
The sensible way to share .cpp files is to create a library project that you can link against. |
null |
Serenity is unable to find the `msedgedriver` executable from the `serenity.properties` file. My properties file looks like this:
```
#webdriver.driver=chrome
#webdriver.chrome.driver = src/test/resources/webdriver/windows/chromedriver-win64/chromedriver.exe
webdriver.driver=edge
webdriver.edge.driver = src/test/resources/webdriver/windows/edgedriver-win64/msedgedriver.exe
#webdriver.driver=firefox
#webdriver.geckodriver.driver = src/test/resources/webdriver/windows/firefoxdriver-win64/geckodriver.exe
...
```
When using `chromedriver` or `geckodriver`, Serenity has no issue finding their executables and instantiating the webdrivers as expected. The path to the `msedgedriver` is correct, and I've also tried absolute path with no luck.
My Edge Browser version is compatible with the downloaded msedgedriver version (123).
I cannot understand how Serenity is unable to successfully instantiate the driver, when I have correctly set the system property exactly like the other 2 drivers that work as expected.
Full error:
`Caused by: net.thucydides.core.webdriver.DriverConfigurationError: Could not instantiate new WebDriver instance of type class org.openqa.selenium.edge.EdgeDriver (The path to the driver executable must be set by the webdriver.edge.driver system property; for more information, see https://github.com/SeleniumHQ/selenium/wiki/MicrosoftWebDriver. The latest version can be downloaded from http://go.microsoft.com/fwlink/?LinkId=619687). See below for more details.` |
Daily temperature problem, with sample I/O:
Input: temperatures = [73,74,75,71,69,72,76,73]
Output: [1,1,4,2,1,1,0,0]
I'm a beginner and was trying to do this without a stack. Is it possible?
My code:
```
class Solution {
public int[] dailyTemperatures(int[] temperatures)
{
int l=temperatures.length;
int answer[]=new int[l];
int i;
for(i=0;i<l;i++)
{
int c=0;int j;int f=0;
if(i==l-1)
answer[i]=0;
else
{
for(j=i+1;j<l;j++)
{
if(temperatures[i]<temperatures[j])
{
c++;
f=1;
break;
}
if(temperatures[i]>=temperatures[j])
{
c++;
f=0;
}
}
if(f!=0)
answer[i]=c;
else
answer[i]=0;
}
}
return answer;
}
}
```
The code failed one test case, resulting in a 'Time limit exceeded' error. |
How do i overcome 'Time limit exceeded' error on leetcode on the daily temperatures problem? |
|java|arrays| |
null |
Does having a large number of parquet files cause memory overhead while reading using Spark? |
I really hope someone will be able to help me. This is a new Debian 12 installation, with the latest DBeaver Community installed, used to maintain various network databases.
In order for the DBeaver dump to work, I had to install mariadb-client locally on the laptop to get mysqldump and be able to configure the local client section of DBeaver (previous versions did not require that).
The primary issue is that I cannot dump databases. The error that I am getting:
mysqldump: Couldn't execute 'select column_name, extra, generation_expression, data_type from information_schema.columns where table_schema=database() and table_name=table_name order by ordinal_position': Unknown column 'generation_expression' in 'field list' (1054)
After some searching it seems that the issue is column-statistics.
I sense that I am missing something really basic, but cannot figure out what I need to do. Is this a mismatch between DBeaver and mysqldump, or a mismatch between the locally installed mariadb-client and the actual SQL server?
DBeaver version: 24.0.1
mysqldump version (from mariadb-client): 1:10.11.6-0+deb12u1
Please push me in the right direction. Thank you in advance for any pointers.
What I have tried:
1. Adding the switch `[mysqldump] column-statistics=0`, but this gives me an error: mysqldump: unknown variable 'column-statistics=0'.
2. Running the command in the terminal to dump the database: same issue. |
dbeaver community version cannot dump database |
|mysql|dbeaver| |
null |
[](https://i.stack.imgur.com/hCuqI.png)
This is my roulette game project, and I am a new learner in Unity. In this project, I am facing an issue with creating an undo button.
This is my code for the undo button:
```csharp
public void Undo() {
if (Bets.Count != 0 && BetSpaces.Count != 0) {
var lastBet = Bets.Last();
var lastSpace = BetSpaces.Last();
//UserDetail.Balance += lastBet.chip;
UserDetail.Balance += BetSpaces.Last().stack.value;
UpdateBalance(UserDetail.Balance);
lastSpace.SetAllMeshes(false);
lastSpace.Clear();
Bets.Remove(lastBet);
BetSpaces.Remove(lastSpace);
var totalBets = int.Parse(TotalBetCount.text);
if (totalBets > 0) TotalBetCount.text = (totalBets--).ToString();
TakeButton.interactable = false;
}
var indexesToRemove = new List<int>();
BetSpaces.ForEach(X => {
GlobalFunctions.PrintLog($"X: {X.stack.value}");
if (X.stack.value == 0) {
indexesToRemove.Add(BetSpaces.IndexOf(X));
} else {
X.SetAllMeshes(true);
}
});
for (int i = 0; i < indexesToRemove.Count; i++) {
Bets.RemoveAt(indexesToRemove[i]);
BetSpaces.RemoveAt(indexesToRemove[i]);
}
UndoButton.interactable = Bets.Count != 0;
SceneRoulette._Instance.clearButton.interactable = Bets.Count != 0;
SceneRoulette._Instance.rollButton.interactable = Bets.Count != 0;
}
```
To test this code, I place three 1 Rs bets on the same spot, then press the undo button. It removes the whole 3 Rs bet at once, but I want to remove each bet separately. Can anyone help me? |
Since your package is shared by multiple projects, you may consider installing this package into your Python environment. To achieve this, you can create a file named `setup.py` in the same directory as my_project and the content looks like this:
```python
from setuptools import find_packages, setup
if __name__ == '__main__':
setup(
name='my_package',
version='0.0.1',
description='Short description for your package.',
long_description='Long description for your package.',
long_description_content_type='text/markdown',
author='Your name',
author_email='Your email',
packages=find_packages(),
include_package_data=True,
)
```
Then run `python setup.py install` to install this package. After doing this, all of your import statements like `from my_package.module.script import foo` are no longer relative imports, thus avoiding the problem. |
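As a side note, `find_packages()` only picks up directories that contain an `__init__.py`. A quick self-contained demo (the directory names are made up for illustration):

```python
import os
import tempfile
from setuptools import find_packages

# Build a throwaway tree: one real package (with __init__.py files)
# and one plain directory that find_packages() should ignore.
with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "my_package", "module"))
    open(os.path.join(root, "my_package", "__init__.py"), "w").close()
    open(os.path.join(root, "my_package", "module", "__init__.py"), "w").close()
    os.makedirs(os.path.join(root, "not_a_package"))  # no __init__.py
    found = sorted(find_packages(where=root))
    print(found)  # ['my_package', 'my_package.module']
```

So make sure every subdirectory of your package has an `__init__.py`, or `setup.py` will silently leave it out of the install.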
It is hard to determine from that image alone, but there are some common issues that you can test for.
1. Page Width (Horizontal Overflow)
2. Page Height (Vertical Overflow)
3. Duplicate Page Breaks
4. Overlapping Containers
## Page Width (Horizontal Overflow)
The first issue is page width. Check that the width of the container that holds your tablix is less than the printable page width, minus the margins. To do this, we first need to know the current margins. One way to do this is to right click the grey canvas outside of the report in the designer and select `Report Properties` from the context menu:
[![enter image description here][1]][1]
You can see that this report has A4 page size, 21cm width and left and right margins of 1cm:
[![enter image description here][2]][2]
You could also have left-clicked the same grey canvas area and pressed `F4` to view the Properties grid for the report. There you will find both the `PageSize` and `Margins` properties:
[![enter image description here][3]][3]
Now we can go back to the report, right-click any element in it, go to the `Select` menu, and click on the `body` container:
[![find the root body container][4]][4]
If the _Properties_ explorer is open you will see the properties for the body, otherwise press `F4` to open the properties explorer.
In the `Size` property you will see the `Width`:
[![enter image description here][5]][5]
- We need the `Body` width, in this case `~17.13cm` to be less than the page width (21cm) minus the margins (2cm):
21 - 2 = 19
17.13 < 19 [true]
so this particular report will not overflow horizontally!
## Page Height (Vertical Overflow)
As with the page width, if your content exceeds the vertical height of the page minus the margins, a new page will be added. If your content already reaches close to the bottom edge of the page, you may not need to add deliberate page breaks at all.
To detect this, I often set the background colour of containers that might be overflowing; then, in the print preview or PDF export output, you can identify containers that unexpectedly span multiple pages.
## Duplicate Page Breaks
It is pretty easy to set multiple page breaks that will fall at the same point. Check that your tablix and Row Groups do not both have page breaks set:
[![enter image description here][6]][6]
[![enter image description here][7]][7]
Using the temporary background colour hack here will also help, but set different colours for different containers or groups to distinguish between them.
## Overlapping Containers
If the previous solutions do not help, the last common issue with export to PDF is that containers cannot overlap in the same _extended horizontal plane_.
This is hard to explain, but in the preview and HTML views, the following alignment of textboxes or rectangles will work:
[![enter image description here][8]][8]
But if the layout was like this:
[![enter image description here][9]][9]
It can end up rendering like this if the content in the right box is allowed to grow (`Can Grow = true`) and the content makes it expand:
[![enter image description here][10]][10]
- I don't think this is the issue in your case, but it's hard to tell. Use the background colour hack again to help identify these issues.
[1]: https://i.stack.imgur.com/9ERkL.png
[2]: https://i.stack.imgur.com/Eh3U6.png
[3]: https://i.stack.imgur.com/KhTf8.png
[4]: https://i.stack.imgur.com/wCxDL.png
[5]: https://i.stack.imgur.com/wHSM5.png
[6]: https://i.stack.imgur.com/iiT4Z.png
[7]: https://i.stack.imgur.com/F1Xhb.png
[8]: https://i.stack.imgur.com/eG1Kq.png
[9]: https://i.stack.imgur.com/y6G84.png
[10]: https://i.stack.imgur.com/tDWkA.png |
Could not instantiate class org.openqa.selenium.edge.EdgeDriver [Serenity] |
|selenium-webdriver|serenity-bdd|selenium-edgedriver| |
null |
I think that you have something like [Javers][1] in mind. Note that change-tracking libraries typically require access to the original version of the object, so make sure that you keep it around without making changes.
[1]: https://javers.org |
On the project I am working on, we use Azure services. Everything worked fine on my old computer.
I recently had to switch computers and cloned the code from DevOps, so the code should work. I installed the Azure Developer CLI, and now when I try to build the project I get the following error:
[![Azure no developer environments found][1]][1]
My code is organized in the following way; as you can see, there is a `.azure` folder containing the env file (I have since renamed the folder, but I got the same error with the original name as well):
[![Folder structure][2]][2]
[1]: https://i.stack.imgur.com/47FsP.png
[2]: https://i.stack.imgur.com/LVB1J.png |
Azure Developer CLI does not recognize environment |
|azure|visual-studio-code|environment| |
I believe the types are coming not from Pandas itself, but [from `pandas-stubs`][1]:
```py
AxisInt: TypeAlias = int
AxisIndex: TypeAlias = Literal["index", 0]
AxisColumn: TypeAlias = Literal["columns", 1]
Axis: TypeAlias = AxisIndex | AxisColumn
```
There's also this disclaimer at [the top of the project's README][2]:
> The stubs are likely incomplete in terms of covering the published API of pandas. NOTE: The current 2.0.x releases of pandas-stubs do not support all of the new features of pandas 2.0.
[1]: https://github.com/pandas-dev/pandas-stubs/blob/ae7e4737070f42658b0d684456f0abd4f2d1ac5f/pandas-stubs/_typing.pyi#L486-L489
[2]: https://github.com/pandas-dev/pandas-stubs#readme
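These aliases only matter to the static type checker, but `Literal` members can still be introspected at runtime with `typing.get_args`. A small stdlib-only sketch mirroring the aliases above (no pandas required; the `normalize_axis` helper is hypothetical, just for illustration):

```python
from typing import Literal, get_args

# Simplified mirror of the pandas-stubs aliases quoted above
AxisIndex = Literal["index", 0]
AxisColumn = Literal["columns", 1]

def normalize_axis(axis) -> int:
    """Map an Axis-style value ('index'/0 or 'columns'/1) to its integer form."""
    if axis in get_args(AxisIndex):
        return 0
    if axis in get_args(AxisColumn):
        return 1
    raise ValueError(f"invalid axis: {axis!r}")

print(normalize_axis("columns"))  # 1
```

This is roughly why a type checker using these stubs accepts both `axis=1` and `axis="columns"` but rejects, say, `axis="rows"`.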
I need to run this cmd command:
```
"C:\Program Files (x86)\Windows Kits\10\bin\10.0.22621.0\x64\signtool.exe" verify /pa "C:\Program Files (x86)\myprogram\bin\aaa.exe"
```
with `subprocess.run` in Python. I tried:
```
subprocess.run("cmd", "/c","C:\Program Files (x86)\Windows Kits\10\bin\10.0.22621.0\x64\signtool.exe"verify /pa "C:\Program Files (x86)\myprogram\bin\aaa.exe")
```
but it does not work. What is the correct form for this command to work?
How to use subprocess.run with an external cmd program in Python
|python|cmd|subprocess| |
This is my `message` table. I am trying to write a query that groups records by `collection_id` (keeping only the first row of each group) when `type` is `multi-store`, and otherwise by `message`.`id`.
```
id | collection_id | type | affiliation_id | status | scheduled_for_date
--------------------------------------+--------------------------------------+----------------+----------------+--------------+--------------------
1143c066-01ed-4eb5-a146-68487de702a9 | bce85e31-4d2f-43b1-b263-57fca356856f | multi-store | 12091 | draft |
e1183732-e91d-42eb-9998-110e039cfc25 | bce85e31-4d2f-43b1-b263-57fca356856f | multi-store | 12092 | draft |
8f962a49-7da1-46d0-87b9-595788767dfe | bce85e31-4d2f-43b1-b263-57fca356856f | multi-store | 12097 | draft |
6e4dee09-7a4e-47be-bdd8-935a67bb2063 | 740f6b42-bbf1-4aeb-8fe9-6874635d9e29 | multi-store | 12091 | draft |
79afab0e-14e7-4d1b-9a15-358763743c3e | 740f6b42-bbf1-4aeb-8fe9-6874635d9e29 | multi-store | 12092 | draft |
7bc78bee-074a-4031-9492-954e7c4eeb09 | 740f6b42-bbf1-4aeb-8fe9-6874635d9e29 | multi-store | 12097 | draft |
3bb38fbd-d411-4f78-9c42-c858bf57b784 | | standard-store | 10511 | draft |
fbb3b175-1a3b-4515-b0b3-0ce6d6d0145f | | standard-store | 10511 | draft |
84004999-d2cf-4af4-bfaa-c1077d1d8621 | | standard-store | 10511 | sent | 2017-05-21
cbea0789-6886-431a-a8e1-723d5aafc7b9 | | standard-store | 10511 | scheduled | 2019-02-12
ec8988ff-5136-4b81-b448-cd456dc487a4 | | standard-store | 10511 | review | 2019-01-13
0e119440-5fbc-4afe-a784-a6bcfe3a6e4d | | standard-store | 10511 | draft |
98503a20-4396-4809-b3ec-8e330c15afa9 | | standard-store | 10511 | needs_action | 2018-12-11
d33a9173-dc64-464f-8e58-49b4c9c2fdae | | standard-store | 10511 | draft |
bee0dc72-acca-44e2-82ea-d18e830f91a2 | | standard-store | 10511 | sent | 2016-03-12
```
So the output will be like
```
id | collection_id | type | affiliation_id | status | scheduled_for_date
--------------------------------------+--------------------------------------+----------------+----------------+--------------+--------------------
1143c066-01ed-4eb5-a146-68487de702a9 | bce85e31-4d2f-43b1-b263-57fca356856f | multi-store | 12091 | draft |
6e4dee09-7a4e-47be-bdd8-935a67bb2063 | 740f6b42-bbf1-4aeb-8fe9-6874635d9e29 | multi-store | 12091 | draft |
3bb38fbd-d411-4f78-9c42-c858bf57b784 | | standard-store | 10511 | draft |
fbb3b175-1a3b-4515-b0b3-0ce6d6d0145f | | standard-store | 10511 | draft |
84004999-d2cf-4af4-bfaa-c1077d1d8621 | | standard-store | 10511 | sent | 2017-05-21
cbea0789-6886-431a-a8e1-723d5aafc7b9 | | standard-store | 10511 | scheduled | 2019-02-12
ec8988ff-5136-4b81-b448-cd456dc487a4 | | standard-store | 10511 | review | 2019-01-13
0e119440-5fbc-4afe-a784-a6bcfe3a6e4d | | standard-store | 10511 | draft |
98503a20-4396-4809-b3ec-8e330c15afa9 | | standard-store | 10511 | needs_action | 2018-12-11
d33a9173-dc64-464f-8e58-49b4c9c2fdae | | standard-store | 10511 | draft |
bee0dc72-acca-44e2-82ea-d18e830f91a2 | | standard-store | 10511 | sent | 2016-03-12
```
One way to do it is by `union`:
```
SELECT DISTINCT ON (collection_id)
collection_id,
id,
type,
affiliation_id,
status,
scheduled_for_date
from
message
where
type = 'multi-store'
union
SELECT
collection_id,
id,
type,
affiliation_id,
status,
scheduled_for_date
from
message
where
type = 'standard-store'
```
But I feel that is less efficient. Another option could be case-based grouping, but that adds complexity because all the selected fields would need to be added to the grouping.
What is the most efficient way to write this query?
This is another solution to this gaps-and-islands problem. It can be resolved by calculating the difference between ranks (`DENSE_RANK` is used because different customers may place orders on the same day), which assigns a constant identifier to each sequence of consecutive orders for a customer:
WITH cte AS (
SELECT
*, DENSE_RANK() over(ORDER BY orderdate) - DENSE_RANK() over (PARTITION BY customerid ORDER BY orderdate) as grp
FROM orders
)
SELECT customerid, COUNT(DISTINCT orderdate) AS consecutive_days
FROM cte
WHERE grp > 0
GROUP BY customerid, grp
HAVING consecutive_days > 1;
Results :
customerid consecutive_days
3 2
4 4
4 2
5 3
[Demo here][1]
[1]: https://dbfiddle.uk/L-SrNzTI |
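For intuition, the same islands idea can be sketched in plain Python (hypothetical data): the key `date - index` stays constant within a run of consecutive days, just like the rank difference does in the query above.

```python
from datetime import date, timedelta
from itertools import groupby

def consecutive_runs(orders):
    """orders: mapping of customerid -> iterable of order dates.
    Returns (customerid, run_length) for each run of 2+ consecutive days."""
    out = []
    for cust, dates in orders.items():
        # (date - index) is constant exactly within a run of consecutive days
        for _, run in groupby(
            enumerate(sorted(set(dates))),
            key=lambda p: p[1] - timedelta(days=p[0]),
        ):
            n = len(list(run))
            if n > 1:
                out.append((cust, n))
    return out

print(consecutive_runs({
    4: [date(2023, 1, 1), date(2023, 1, 2), date(2023, 1, 3),
        date(2023, 1, 4), date(2023, 1, 10), date(2023, 1, 11)],
    5: [date(2023, 1, 7), date(2023, 1, 8), date(2023, 1, 9)],
}))  # [(4, 4), (4, 2), (5, 3)]
```

Note the sample output matches the shape of the SQL results: customer 4 has two separate islands of 4 and 2 consecutive days.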
I have 2 lights in my scene. They work fine until I look in the opposite direction, at which point they just stop working. Even weirder, one of the walls becomes darker when the light disappears and starts glitching against the other walls. It only happens in a specific part of the scene.
I tried playing with the light settings and nothing worked. I even tried copying and pasting lights that work elsewhere, but in that area the same thing happens, and I'm extremely frustrated.
I have a light in my scene that stops working and I have no idea how to fix it. Can anyone help?
|unity-game-engine|lighting|light| |
null |
[](https://i.stack.imgur.com/ujcMb.jpg)
I need to choose a hierarchical database management system for my project. Which hierarchical DBMS would you recommend I use? Thanks in advance.
I haven't tried anything yet.
Which database management system should I use for this task? |
|nosql|hierarchical-data| |
null |
I am running a simple script on Replit.com, but I get an error. This code worked a week ago, but it doesn't work now. The URL is accessible and works from the browser without problems. Searching for solutions on the internet did not lead to any results. How can I fix this error?
```
import requests
url = 'https://iss.moex.com/iss/statistics/engines/stock/markets/index/analytics.csv'
print(url)
r = requests.get(url).text
print(r)
```
The traceback:
```
Traceback (most recent call last):…
requests.exceptions.SSLError: HTTPSConnectionPool(host='iss.moex.com', port=443): Max retries exceeded with url: /iss/statistics/engines/stock/markets/index/analytics.csv (Caused by SSLError(SSLEOFError(8, '[SSL: UNEXPECTED_EOF_WHILE_READING] EOF occurred in violation of protocol (_ssl.c:1007)')))
```
I tried the solution from https://stackoverflow.com/questions/16230850/httpsconnectionpool-max-retries-exceeded but there was no result.
|jquery|events|clipboard| |
I'd like to provide a clear explanation of my scenario, which should be easy to reproduce.
I've been attempting to facilitate ROS2 topic communication between containers. One container resides on Machine A, while another is on Machine B. Both machines are Windows hosts connected via the same local network.
Requirements:
Two Windows host machines, version 10 or above.
Docker installed on both machines.
Cyclone DDS.
ROS2 Humble Docker image.
Thank you.
When I publish data from Machine A, I don't receive anything on the subscriber container located on Machine B. I've tried running both containers with the `--net=host` option, one on Machine A and the other on Machine B. Additionally, setting the `ROS_DOMAIN_ID` environment variable hasn't resolved the issue.
I can also ping the subscriber container from the publisher container.
I would greatly appreciate a working solution for this.
Thanks in advance.