instruction |
|---|
I have a folder containing `delta` files. I try to create an external table as follows:
```python
spark.sql("""
CREATE TABLE IF NOT EXISTS my_table
(
    id INT,
    name STRING
)
USING DELTA
LOCATION 'abfss://xxx@yyy.dfs.core.windows.net/aaa/cc' """)
```
However, I get the following error:
> The specified schema does not match the existing schema at
> abfss://xxx@yyy.dfs.core.windows.net/aaa/cc
The problem is that the `delta` files contain an additional column (`age`) which I don't want to have in my table. How can I achieve that? |
Schema Modification when creating External Tables |
|databricks-unity-catalog| |
The code:
```lua
function randomNumbers (str)
    -- collect one entry per "#" placeholder, just to count them
    local matches = {}
    for match in str:gmatch("#") do
        table.insert(matches, match)
    end
    -- replace the placeholders one at a time, each with its own random digit
    for i = #matches, 1, -1 do
        local n = tostring(math.random(0, 9))
        str = string.gsub(str, "#", n, 1)
    end
    return str
end
```
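For comparison, a compact sketch of the same behavior in Python (`random_numbers` is my name for it, not from the question): `re.sub` with a function replacement draws one independent digit per `#`, just like the Lua loop's one-replacement-at-a-time `gsub`:

```python
import re
import random

def random_numbers(s):
    # each "#" gets its own independently drawn digit
    return re.sub("#", lambda m: str(random.randint(0, 9)), s)
```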
Examples:
```lua
print(randomNumbers("a#bcd")) -- a7bcd
print(randomNumbers("a#b#cd")) -- a6b5cd
print(randomNumbers("a#b#c#d")) -- a7b0c6d
print(randomNumbers("a#b#c#d#")) -- a3b7c4d5
print(randomNumbers("a#b#c#d##")) -- a1b0c9d71
``` |
Upon executing the commands `composer require laravel/ui` and `php artisan ui vue --auth`, Laravel throws an error. I have been searching for a solution but have yet to find one.
> Call to undefined method
> App\Http\Controllers\Auth\LoginController::middleware()
|
Laravel 11 login system error: undefined method |
|php|laravel|authentication|laravel-11| |
|amazon-web-services|amazon-sqs|message-bus| |
null |
This is how WebView2 sample lets us get the user data folder:
```cpp
auto environment7 = m_webViewEnvironment.try_query<ICoreWebView2Environment7>();
CHECK_FEATURE_RETURN(environment7);
wil::unique_cotaskmem_string userDataFolder;
environment7->get_UserDataFolder(&userDataFolder);
```
But, is it possible to get the **user data folder** without creating an instance of the browser control?
I know you can do this with the SDK / Runtime values:
```cpp
std::wstring CWebBrowser::GetRuntimeVersion()
{
    wil::unique_cotaskmem_string runtimeVersion;
    GetAvailableCoreWebView2BrowserVersionString(nullptr, &runtimeVersion);
    return runtimeVersion.get();
}

std::wstring CWebBrowser::GetSdkBuild()
{
    auto options = Microsoft::WRL::Make<CoreWebView2EnvironmentOptions>();
    wil::unique_cotaskmem_string targetVersion;
    CHECK_FAILURE(options->get_TargetCompatibleBrowserVersion(&targetVersion));
    // The full version string A.B.C.D
    const wchar_t* targetVersionMajorAndRest = targetVersion.get();
    // Should now be .B.C.D
    const wchar_t* targetVersionMinorAndRest = wcschr(targetVersionMajorAndRest, L'.');
    CHECK_FAILURE((targetVersionMinorAndRest != nullptr && *targetVersionMinorAndRest == L'.') ? S_OK : E_UNEXPECTED);
    // Should now be .C.D
    const wchar_t* targetVersionBuildAndRest = wcschr(targetVersionMinorAndRest + 1, L'.');
    CHECK_FAILURE((targetVersionBuildAndRest != nullptr && *targetVersionBuildAndRest == L'.') ? S_OK : E_UNEXPECTED);
    // Return + 1 to skip the first . so just C.D
    return targetVersionBuildAndRest + 1;
}
```
I would also like to get the **runtime path** without creating a browser instance (if possible). |
I am learning Spark, so as a task we had to create a wheel locally, later install it in Databricks (I am using Azure Databricks), and test it by running it from a Databricks notebook. This program involves reading a CSV file (timezones.csv) included inside the wheel file. The file *is* inside the wheel (I checked) and the wheel also works properly when I install it and run it from a local Jupyter notebook on my PC. However, when I install it in a Databricks notebook it gives this error, as you can see below in the snapshot:
```lang-py
[PATH_NOT_FOUND] Path does not exist: dbfs:/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/motor_ingesta/resources/timezones.csv. SQLSTATE: 42K03
File <command-3771510969632751>, line 7
3 from pyspark.sql import SparkSession
5 spark = SparkSession.builder.getOrCreate()
----> 7 flights_with_utc = aniade_hora_utc(spark, flights_df)
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/motor_ingesta/agregaciones.py:25, in aniade_hora_utc(spark, df)
23 path_timezones = str(Path(__file__).parent) + "/resources/timezones.csv"
24 #path_timezones = str(Path("resources") / "timezones.csv")
---> 25 timezones_df = spark.read.options(header="true", inferSchema="true").csv(path_timezones)
27 # Join the data from the timezones_df columns ("iata_code", "iana_tz", "windows_tz") to the right of
28 # the columns of the original df, copying only into the rows where the origin airport (Origin) matches
29 # the value of the iata_code column of timezones_df. If some Origin airport does not appear in timezones_df,
30 # the 3 columns will be left with null values (NULL)
32 df_with_tz = df.join(timezones_df, df["Origin"] == timezones_df["iata_code"], "left_outer")
File /databricks/spark/python/pyspark/instrumentation_utils.py:47, in _wrap_function.<locals>.wrapper(*args, **kwargs)
45 start = time.perf_counter()
46 try:
---> 47 res = func(*args, **kwargs)
48 logger.log_success(
49 module_name, class_name, function_name, time.perf_counter() - start, signature
50 )
51 return res
File /databricks/spark/python/pyspark/sql/readwriter.py:830, in DataFrameReader.csv(self, path, schema, sep, encoding, quote, escape, comment, header, inferSchema, ignoreLeadingWhiteSpace, ignoreTrailingWhiteSpace, nullValue, nanValue, positiveInf, negativeInf, dateFormat, timestampFormat, maxColumns, maxCharsPerColumn, maxMalformedLogPerPartition, mode, columnNameOfCorruptRecord, multiLine, charToEscapeQuoteEscaping, samplingRatio, enforceSchema, emptyValue, locale, lineSep, pathGlobFilter, recursiveFileLookup, modifiedBefore, modifiedAfter, unescapedQuoteHandling)
828 if type(path) == list:
829 assert self._spark._sc._jvm is not None
--> 830 return self._df(self._jreader.csv(self._spark._sc._jvm.PythonUtils.toSeq(path)))
831 elif isinstance(path, RDD):
833 def func(iterator):
File /databricks/spark/python/lib/py4j-0.10.9.7-src.zip/py4j/java_gateway.py:1322, in JavaMember.__call__(self, *args)
1316 command = proto.CALL_COMMAND_NAME +\
1317 self.command_header +\
1318 args_command +\
1319 proto.END_COMMAND_PART
1321 answer = self.gateway_client.send_command(command)
-> 1322 return_value = get_return_value(
1323 answer, self.gateway_client, self.target_id, self.name)
1325 for temp_arg in temp_args:
1326 if hasattr(temp_arg, "_detach"):
File /databricks/spark/python/pyspark/errors/exceptions/captured.py:230, in capture_sql_exception.<locals>.deco(*a, **kw)
226 converted = convert_exception(e.java_exception)
227 if not isinstance(converted, UnknownException):
228 # Hide where the exception came from that shows a non-Pythonic
229 # JVM exception message.
--> 230 raise converted from None
231 else:
232 raise
```
Databricks Error Snapshot 1:

Databricks Error Snapshot 2:

Has anyone experienced this problem before? Is there any solution?
I tried installing the file both with pip and from Library, and got the same error; I also rebooted the cluster several times.
Thanks in advance for your help.
I am using Python 3.11, PySpark 3.5 and Java 8, and created the wheel locally from PyCharm. If you need more details to answer, just ask and I'll provide them.
I explained all the details above. I was expecting to be able to use the wheel I created locally from a Databricks Notebook.
Sorry about my English; it is not my native tongue and I am a bit rusty.
-----
Edited to answer comment:
> Can u navigate to %sh ls
> /dbfs/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/motor_ingesta/resources and share the folder content as image? β Samuel Demir 11 hours ago
I just did what you asked and I got this result (the file is actually there, even though Databricks says it can't find it):
[Snapshot of suggestion result][1]
[1]: https://i.stack.imgur.com/SnJ6M.png |
I am trying to maintain a duplicate repository of my source, but the duplicate has certain modifications that the source repo won't have. I make these modifications using clean/smudge filters. I want my smudge filter to run once per commit, i.e. if a pull contains four commits then I want the smudge filter to run four times (once per commit being pulled/checked out). The smudge filter currently runs only once per pull, even if the pull contains multiple commits.
I want certain information about the commits written to a log file. My current smudge filter script looks like this:
```shell
full_commit=$(git log -1 --pretty=format:"%an|<%ae>|%ad|%H%n%B") # %n for new-line
commit_info=$(echo "$full_commit" | head -1)
commit_msg=$(echo "$full_commit" | tail -n +2)
IFS=$'|' read -r name mail dt hash <<< "$(echo $commit_info)" # get everything separated out
echo commit hash: $hash >> smudge-file.messages
echo commit author-name: $name, author-mail: $mail, commit-datetime: "$dt" >> smudge-file.messages
echo commit message: "$commit_msg" >> smudge-file.messages
echo "" >> smudge-file.messages
cat
```
Here I assume that, since the smudge filter would run once per commit being pulled, `git log -1` will output the current commit's details, which I append to a log file.
It makes sense that the smudge filter should run once per commit being pulled but this doesn't seem to be the case in reality as only the most recent commit information is being written to my log file.
Any help to make the smudge filter run once per commit being checked out (or similar) is appreciated. |
Unlike imported modules/packages, `get_a` is an ordinary identifier that points to the function you want to patch without looking it up in `mypackage.mymodule`, as that lookup already happened during the import. So in this case you have to patch it directly via `f"{__name__}.get_a"`, like in your first test. This is not special behavior of `pytest-mock`. |
**1st solution** - automatic.
The PyCharm plugin **"PyCharm Help"** allows you to automatically download web help for offline use: when help is invoked, pages are delivered via a built-in Web server.
This solution has drawbacks - for me, it downloaded help only for one version of Python, but not for newer versions which I use in parallel.
Also, in that version of Python help, the local search doesn't work.
**2nd solution** - better, very flexible, but manual.
1. Download [Python's HTML help][1] and unpack it into a folder named after the corresponding version, e.g., on Windows "C:\py_help_server\3.12".
*The folder "py_help_server" will become the root folder for our server, and the "3.12" name should match the online help's URL format.*
2. Run cmd as admin and run such commands:
cd C:\py_help_server\3.12
python -m http.server 80 --bind 127.0.0.1
3. For Chrome/Brave, download the plugin "Requestly - Intercept, Modify & Mock HTTP Requests". In its settings, go to "HTTP Rules", then "My Rules", click "New Rule" with the type "Replace String".
And create a rule like this:
If the URL contains "https://docs.python.org/3.12/", replace "https://docs.python.org/" with "http://127.0.0.1/".
Now, all pages of the Python 3.12 help will be redirected to our local server, which we started in step 2.
This works for me like a charm. I tried editing the hosts file too, but that didn't work at all.
Also, this last method has an advantage over the "PyCharm Help" plugin: the local web help's search function works well!
[1]: https://docs.python.org/3/download.html
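For reference, step 2 can also be done from Python code instead of the command line. This is a sketch: the directory is the hypothetical path from the walkthrough, and port 0 picks a free port (use 80 as above if running as admin):

```python
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve the unpacked help folder; equivalent to running
# "python -m http.server" from inside that directory.
handler = partial(SimpleHTTPRequestHandler, directory=r"C:\py_help_server\3.12")
server = HTTPServer(("127.0.0.1", 0), handler)
print("serving on port", server.server_address[1])
# server.serve_forever()  # uncomment to keep serving
```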
|
Accessing Secret Variables in Classic Pipelines through Java app in Azure DevOps |
|java|maven|variables|azure-devops|azure-pipelines| |
null |
I spent a while stuck on this question. Let me explain why an incorrect answer doesn't work, then give the solution.
I first tried to solve this problem "imperatively." I tried to sketch a solution that said, "if the first element is TRUE, use the list (and we're done), otherwise call itself with skip-the-first-element." This incorrect solution looks like
```
t: IF (FST t) t (Y (x: SND t))
```
Look what happens with the test cases:
Case 1 (list=FALSE FALSE TRUE TRUE FALSE): ... failure.
Case 2 (list=TRUE FALSE): The first element in the list is TRUE, so the `IF` statement returns the entire list. Success.
Case 3 (list=FALSE TRUE): The `IF` test is false, so this reduces to
```
(f: f (PAIR FALSE (PAIR TRUE FALSE))) t: IF (FST t) t (Y (x: SND t))
...
Y (x: SND (PAIR FALSE (PAIR TRUE FALSE)))
...
SND (PAIR FALSE (PAIR TRUE FALSE))
...
PAIR TRUE FALSE # single element list containing TRUE
```
Case 3 succeeds, but only by accident. The Y combinator isn't doing anything helpful.
-----
Let me copy some text from the problem:
> Suppose you want to write a function that receives two arguments and applies *itself* to both arguments in inverted order. You can write: `F = Y (f: x:y: f y x)`.
Pondering the example above together with case 3 finally made it clear how to use the Y combinator. You need to start with the Y combinator and give it a function that you want to apply to itself. This function will take a list argument, but will more or less follow the same logic as my first attempt. Correct solution:
```
Y (f: x: IF (FST x) x (f (SND x)))
```
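As a sanity check, the same solution can be sketched in strict Python (names mirror the primitives above; `Z` is the call-by-value variant of `Y`, and the `IF` branches are wrapped in thunks so only the taken branch evaluates):

```python
TRUE  = lambda a: lambda b: a
FALSE = lambda a: lambda b: b
PAIR  = lambda x: lambda y: lambda f: f(x)(y)
FST   = lambda p: p(TRUE)
SND   = lambda p: p(FALSE)
IF    = lambda c: lambda t: lambda e: c(t)(e)

# Z combinator: Y for call-by-value languages
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Y (f: x: IF (FST x) x (f (SND x))), with thunked branches
find = Z(lambda f: lambda x:
         IF(FST(x))(lambda _: x)(lambda _: f(SND(x)))(None))

# Case 3: list = FALSE TRUE  ->  single-element list containing TRUE
lst = PAIR(FALSE)(PAIR(TRUE)(FALSE))
result = find(lst)
print(FST(result)("TRUE")("FALSE"))  # TRUE
```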
This solution says:
Apply this function to itself. If the first element in the list is TRUE, return the entire list and we're done (the `IF (FST x) x` part).
Otherwise, apply itself on the skip-the-first-element list (the `f (SND x)` part). |
Basically I have an authentication application with an authentication page, where my user is sent every time the system does not recognize them as authenticated. Authentication occurs when they have a valid token registered in their localStorage under "authorization". Through "RequireAuth" I created a mechanism where, when a valid token is not identified in localStorage, the user is sent to the authentication page when trying to access "/"; if they are authenticated, they go straight to "/" and cannot access the auth page. This way:
```
import { useContext } from "react"
import { AuthContext } from "./AuthContext"
import Authpage from "../pages/Authpage";
import { Navigate } from "react-router-dom";
export const RequireAuth = ({ children }: { children: JSX.Element }) => {
const auth = useContext(AuthContext);
if (!auth.user) {
return <Navigate to="/auth" replace />;
}
return children;
}
```
As you can see, the "/" page is protected
```
import { Routes, Route, useNavigate, useLocation } from 'react-router-dom';
import './App.css';
import Homepage from './pages/Homepage';
import Authpage from './pages/Authpage';
import Signuppage from './pages/Signuppage/Signuppage';
import { RequireAuth } from './context/RequireAuth';
import { RequireNotAuth } from './context/RequireNotAuth';
import React from 'react';
import GoogleAuthRedirect from './pages/GoogleAuthRedirect';
export function InvalidRoute() {
const navigate = useNavigate();
React.useEffect(() => {
navigate('/');
}, [navigate]);
return null;
}
function App() {
return (
<Routes>
<Route path='/' element={<RequireAuth><Homepage /></RequireAuth>} />
<Route path='/auth' element={<Authpage />} />
<Route path='/signup' element={<Signuppage />} />
<Route path='/googleauth/:id' element={<GoogleAuthRedirect />} />
</Routes>
);
}
export default App;
```
This is the part where my user gets thrown after authentication before being sent to "/"
```
import * as C from './styles';
import { useNavigate, useParams } from 'react-router-dom';
import { useContext, useEffect, useState } from 'react';
import { useApi } from '../../hooks/useApi';
import { AuthContext } from '../../context/AuthContext';
type Props = {}
const GoogleAuthRedirect = (props: Props) => {
const { id } = useParams();
const api = useApi();
const auth = useContext(AuthContext);
const navigate = useNavigate();
const [loading, setLoading] = useState(false);
useEffect(() => {
const getToken = async() => {
try {
if (id) {
const token = await api.verifyToken(id);
setLoading(true);
setLoading(false);
if (token.success) {
localStorage.setItem('authorization', id);
setLoading(true);
setLoading(false);
navigate('/');
}
} else {
console.log("Token nΓ£o definido.");
}
} catch (err) {
console.log(err);
}
}
getToken();
}, []);
return (
<div>
<h3>Autenticando...</h3>
</div>
)
}
export default GoogleAuthRedirect;
```
The problem is that after authenticating and being sent to the "/" page, nothing is rendered, and it only renders if I press F5 and refresh the page. I thought it could be because the user is sent there before authentication finishes in useEffect, so I tried using the loading state to extend this time, without success.
ps: Homepage
```
import { useContext, useEffect, useState } from 'react';
import * as C from './styles';
import { useNavigate } from 'react-router-dom';
import { useApi } from '../../hooks/useApi';
import { AuthContext } from '../../context/AuthContext';
type Props = {}
const Homepage = (props: Props) => {
const navigate = useNavigate();
const auth = useContext(AuthContext);
const api = useApi();
const [loading, setLoading] = useState(true);
useEffect(() => {
const verifyAuthTK = async () => {
const authorization = localStorage.getItem('authorization');
if (authorization) {
const isAuth = await api.verifyToken(authorization);
if (isAuth.success) {
setLoading(false);
}
}
};
verifyAuthTK();
}, []);
if (loading) {
return <div>Carregando...</div>;
}
return (
<C.Container>
homepage
</C.Container>
)
}
export default Homepage;
```
It sends me exactly to the "/" page as I would like, but the problem is that the screen is completely white and I need to press F5 to render the elements and components. |
{"Voters":[{"Id":3001761,"DisplayName":"jonrsharpe"},{"Id":10867454,"DisplayName":"A Haworth"},{"Id":2802040,"DisplayName":"Paulie_D"}]} |
|html|css| |
I use Storybook 7.6.7 for my project. I want to be able to use `*.module.scss` and `*.jpeg` files in my stories. However, when I import a `module.scss` file, I get this error:
Cannot find module './*.module.scss' or its corresponding type declarations.ts(2307)
This is just a TypeScript error and the styles work fine in stories in production. Here is my Storybook config in main.ts:
import type { StorybookConfig } from "@storybook/react-webpack5";
const path = require("path");
function getAbsolutePath(value) {
return path.dirname(require.resolve(path.join(value, "package.json")));
}
const config: StorybookConfig = {
stories: ["../src/**/*.stories.mdx", "../src/**/*.stories.@(js|jsx|ts|tsx)"],
addons: [
"@storybook/addon-links",
"@storybook/addon-essentials",
"@storybook/addon-interactions",
"@storybook/addon-styling-webpack",
"@storybook/manager-api",
"@storybook/theming",
"storybook-dark-mode",
],
framework: getAbsolutePath("@storybook/react"),
core: {
builder: {
name: "@storybook/builder-webpack5",
options: {
fsCache: false,
lazyCompilation: true,
},
},
},
webpackFinal: async (config, { configType }) => {
config.module?.rules?.push({
test: /\.module\.s(a|c)ss$/,
use: [
"style-loader",
{
loader: "css-loader",
options: {
modules: {
localIdentName:
configType === "PRODUCTION"
? "[local]__[hash:base64:5]"
: "[name]__[local]__[hash:base64:5]",
exportLocalsConvention: "camelCase",
},
sourceMap: true,
},
},
{
loader: "sass-loader",
options: {
sourceMap: true,
},
},
],
include: path.resolve(__dirname, "../"),
});
return config;
},
};
export default config;
I have no idea how to fix it. I have already declared an `index.d.ts` file with module declarations included, but no luck.
Any help is appreciated. |
null |
You can use a guard to prevent recursive calls:
``` vb
Private m_updating As Boolean
```
Then, in the TextChanged event handlers, check, set and reset the guard. Here is
`HEX_TextChanged` as an example:
``` vb
Private Sub HEX_TextChanged(sender As Object, e As EventArgs) Handles HEX.TextChanged
If Not m_updating Then
m_updating = True
Try
' TODO: put your conversion and updating logic here.
' (Don't remove or add event handlers)
Finally
m_updating = False
End Try
End If
End Sub
```
The Try-Finally statement makes sure the guard is reset in any case, even if an exception occurs or the code is left prematurely with a Return statement.
Implement this in `HEX_TextChanged`, `DECDEC_TextChanged`, `bin_TextChanged` and `oct2_TextChanged`. The `KeyPress` methods do not require a guard or removing/adding the event handlers, because they are only filtering keys and don't update other TextBoxes (this is what raises the TextChanged events and caused the recursive calls).
You can also slightly simplify them by directly assigning the Boolean value:
``` vb
Private Sub bin_KeyPress(sender As Object, e As KeyPressEventArgs) Handles bin.KeyPress
e.Handled = Not (e.KeyChar = "0"c OrElse e.KeyChar = "1"c OrElse Char.IsControl(e.KeyChar))
End Sub
```
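The same guard pattern outside WinForms, as a minimal Python sketch (the class and method names are made up; the two methods simulate TextChanged handlers that re-fire each other):

```python
class Converter:
    def __init__(self):
        self.updating = False   # the guard
        self.log = []

    def hex_changed(self, text):
        # re-entrant call caused by our own update: do nothing
        if self.updating:
            self.log.append("suppressed")
            return
        self.updating = True
        try:
            # updating the other "box" re-fires its changed handler
            self.dec_changed(str(int(text, 16)))
        finally:
            self.updating = False  # always reset, even on error

    def dec_changed(self, text):
        self.log.append(("dec", text))
        self.hex_changed(format(int(text), "x"))  # simulated recursive event
```

Without the guard, `hex_changed` and `dec_changed` would call each other forever; with it, the re-entrant `hex_changed` call returns immediately.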
|
I have a ViewPager2 inside a plannerFragment, which hosts 2 fragments: one (localeChoosingFragment) contains a RecyclerView for the user to choose a locale, and another to choose places within that locale (localeExploringFragment). A ViewModel (japanGuideViewModel) stores the data (as MutableLiveData), including the int currentLocaleBeingViewed. When the user chooses a locale, this is updated, and an observer in the plannerFragment causes one of the tabs in the PlannerFragment to update with the name of that locale. Clicking that tab then loads the localeExploringFragment for that locale.
When the observer in the plannerFragment is triggered, the tab is updated. This causes the recyclerview in the localeChoosingFragment to reset to the first position. As a workaround I have tried using a Handler to automatically scroll the recyclerview to the correct place (according to the ViewModel) but I am confused and concerned about why this is happening. Before using the ViewPager2, I added both (localeChoosingFragment and localeExploringFragment) to a framelayout manually, and tried show/hide and attach/detach, but had the same problem (the recyclerview resetting to the first position).
Does anyone know why this could be? It seems a small thing, but I am concerned that there is something going on I don't and should know about, especially this early in the project.
That ViewPager 2 is actually part of a fragment that is selected from another ViewPager 2, but I don't think that's what's causing the problem.
I will attach a cleaned up version of my code.
Here is the plannerFragment:
public class PlannerFragment extends Fragment {
Context mContext;
JapanGuideViewModel japanGuideViewModel;
View plannerFragmentView;
ViewPager2 plannerFragmentViewPager2;
TabLayout tabLayout;
public static final String TAG = "JapanGuideTAG";
public PlannerFragment() {
Toast.makeText(mContext, "New Planner Fragment, empty constructor", Toast.LENGTH_SHORT).show();
}
public PlannerFragment(Context mContext, JapanGuideViewModel japanGuideViewModel) {
this.japanGuideViewModel = japanGuideViewModel;
this.mContext = mContext;
Log.d(TAG, "New Planner Fragment, arguments passed");
}
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
Log.d(TAG, "oncreate in planner fragment");
}
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
Log.d(TAG, "oncreateview in planner fragment");
plannerFragmentView = inflater.inflate(R.layout.fragment_planner, container, false);
return plannerFragmentView;
}
@Override
public void onViewCreated(@NonNull View view, @Nullable Bundle savedInstanceState) {
super.onViewCreated(view, savedInstanceState);
Log.d(TAG, "onViewCreated in PlannerFragment");
plannerFragmentViewPager2 = plannerFragmentView.findViewById(R.id.plannerFragmentViewPager2);
tabLayout = plannerFragmentView.findViewById(R.id.plannerFragmentTabLayout);
PlannerFragmentViewPager2Adaptor plannerFragmentViewPager2Adaptor = new PlannerFragmentViewPager2Adaptor(getChildFragmentManager(), getLifecycle(), mContext, japanGuideViewModel);
plannerFragmentViewPager2.setAdapter(plannerFragmentViewPager2Adaptor);
plannerFragmentViewPager2.setUserInputEnabled(false);
TabLayoutMediator.TabConfigurationStrategy tabConfigurationStrategy = new TabLayoutMediator.TabConfigurationStrategy() {
@Override
public void onConfigureTab(@NonNull TabLayout.Tab tab, int position) {
Log.d(TAG, "onConfigure in tabConfigurationStrategy");
if (position == 0) {
Log.d(TAG, "Chooser");
tab.setText("Explore Prefectures");
} else if (position == 1) {
tab.setText("Explore Chosen Prefecture");
}
}
};
Log.d(TAG, "attachingTabLayoutMediator");
new TabLayoutMediator(tabLayout, plannerFragmentViewPager2, tabConfigurationStrategy).attach();
Observer dataDownloadedObserver = new Observer() {
@Override
public void onChanged(Object o) {
if (japanGuideViewModel.getDataDownloaded().getValue() > 0) {
Log.d(TAG, "onChanged called in data downloaded observer. Value is " + japanGuideViewModel.getDataDownloaded().getValue());
japanGuideViewModel.getCurrentLocaleBeingViewed().setValue(0);
}
}
};
Observer localeToExploreChangedObserver = new Observer() {
@Override
public void onChanged(Object o) {
Log.d(TAG, "localeChangedObserver in planner fragment");
TabLayout.Tab tab = tabLayout.getTabAt(1);
if (tab != null) {
Log.d(TAG, "the tab is" + tab.getPosition());
tab.setText(japanGuideViewModel.getLocaleNamesArray().getValue().get(japanGuideViewModel.getCurrentLocaleBeingViewed().getValue()));
}
}
};
japanGuideViewModel.getCurrentLocaleBeingViewed().observe(getViewLifecycleOwner(), localeToExploreChangedObserver);
japanGuideViewModel.getDataDownloaded().observe(getViewLifecycleOwner(), dataDownloadedObserver);
} //onViewCreated
public class PlannerFragmentViewPager2Adaptor extends FragmentStateAdapter {
Context mContext;
JapanGuideViewModel japanGuideViewModel;
public PlannerFragmentViewPager2Adaptor(FragmentManager fragmentManager, Lifecycle lifecycle, Context mContext, JapanGuideViewModel japanGuideViewModel) {
super(fragmentManager, lifecycle);
this.japanGuideViewModel = japanGuideViewModel;
this.mContext = mContext;
Log.d(TAG, "full constructor in planner fragment");
}
@NonNull
@Override
public Fragment createFragment(int position) {
Log.d(TAG, "createFragment called in plannerFragment. Position is ");
switch (position) {
case 0:
Log.d(TAG, "planner fragment, case 0, localechoosingfragment");
LocaleChoosingFragment localeChoosingFragment = new LocaleChoosingFragment(mContext, japanGuideViewModel);
return localeChoosingFragment;
case 1:
Log.d("planner fragment localeExploringFragmentTAG", "case 1");
LocaleExploringFragment localeExploringFragment = new LocaleExploringFragment(mContext, japanGuideViewModel);
return localeExploringFragment;
default:
Log.d(TAG, "default constructor so returning localeChoosingFragment");
localeChoosingFragment = new LocaleChoosingFragment(mContext, japanGuideViewModel);
return localeChoosingFragment;
}
}
@Override
public int getItemCount() {
return 2;
}
}
}
And here is the localeChoosingFragment:
public class LocaleChoosingFragment extends Fragment {
JapanGuideViewModel japanGuideViewModel;
Context mContext;
View localeChoosingFragmentView;
public static final String TAG = "JapanGuideTAG";
RecyclerView localeChoosingRecyclerView;
LinearLayoutManager recyclerViewLayoutManager;
LocaleChoosingRecyclerViewAdaptor localeChoosingAdaptor;
SnapHelperOneByOne SnapHelper;
public LocaleChoosingFragment() {
Log.d(TAG, "EMPTY constructor called for localeChoosingFragment");
}
public LocaleChoosingFragment(Context mContext, JapanGuideViewModel japanGuideViewModel) {
this.japanGuideViewModel = japanGuideViewModel;
this.mContext = mContext;
Log.d(TAG, "constructor called for localeChoosingFragment");
Toast.makeText(mContext, "Creating a new localeChoosingFragment", Toast.LENGTH_SHORT).show();
}
@Override
public void onCreate(Bundle savedInstanceState) {
Log.d(TAG, "onCreate in localeChoosingFragment");
super.onCreate(savedInstanceState);
Log.d(TAG, "creating a NEW LOCALECHOOSING RECYCLERVIEW");
}
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState) {
// Inflate the layout for this fragment
Log.d(TAG, "onCreateView called in LocalChoosingFragment");
localeChoosingFragmentView = inflater.inflate(R.layout.fragment_locale_choosing, container, false);
localeChoosingRecyclerView = localeChoosingFragmentView.findViewById(R.id.localeChoosingRecyclerView);
return localeChoosingFragmentView;
}
@Override
public void onViewCreated(@NonNull View view, @Nullable Bundle savedInstanceState) {
super.onViewCreated(view, savedInstanceState);
Log.d(TAG, "onviewcreated in locale choosing fragment.");
localeChoosingAdaptor = new LocaleChoosingRecyclerViewAdaptor(mContext, japanGuideViewModel);
SnapHelper = new SnapHelperOneByOne();
recyclerViewLayoutManager = new LinearLayoutManager(mContext, LinearLayoutManager.HORIZONTAL, false);
localeChoosingRecyclerView.setLayoutManager(recyclerViewLayoutManager);
localeChoosingRecyclerView.setAdapter(localeChoosingAdaptor);
SnapHelper.attachToRecyclerView(localeChoosingRecyclerView);
// localeChoosingRecyclerView.scrollToPosition(japanGuideViewModel.getCurrentLocaleBeingViewed().getValue());
}
public class LocaleChoosingRecyclerViewAdaptor extends RecyclerView.Adapter {
Context mContext;
JapanGuideViewModel japanGuideViewModel;
Button exploreNowButton;
Button wontGoHereButton;
TextView localeNameTextView;
TextView localeDescriptionTextView;
ImageView localePhotoImageView;
TextView localePhotoCaptionTextView;
public LocaleChoosingRecyclerViewAdaptor() {
Log.d(TAG, "localeChoosingAdaptor created with empty constructor");
}
public LocaleChoosingRecyclerViewAdaptor(Context mContext, JapanGuideViewModel japanGuideViewModel) {
this.mContext = mContext;
this.japanGuideViewModel = japanGuideViewModel;
Log.d(TAG, "localeChoosingAdaptor created with full constructor");
}
@NonNull
@Override
public LocaleChoosingAdaptorHolder onCreateViewHolder(@NonNull ViewGroup parent, int viewType) {
Log.d(TAG, "oncreateviewholder in localechoosingfragment");
View view = LayoutInflater.from(mContext).inflate(R.layout.locale_recyclerview_holder, parent, false);
return new LocaleChoosingAdaptorHolder(view);
}
@Override
public void onBindViewHolder(@NonNull RecyclerView.ViewHolder holder, int position) {
Log.d(TAG, "onBindViewHolder called in localeChoosingFragment. The position is " + position + " and the total size is " + getItemCount());
localeNameTextView = holder.itemView.findViewById(R.id.localeRecyclerviewHolderHeadingTextview);
localeNameTextView.setText(japanGuideViewModel.getLocalesDetailsArray().getValue().get(position).getLocaleName());
localeDescriptionTextView = holder.itemView.findViewById(R.id.localeRecyclerviewHolderDescriptionTextView);
if (japanGuideViewModel.getLocalesDetailsArray().getValue().get(holder.getAdapterPosition()).getLocaleDescription() != null) {
localeDescriptionTextView.setText(japanGuideViewModel.getLocalesDetailsArray().getValue().get(holder.getAdapterPosition()).getLocaleDescription());
}
localePhotoImageView = holder.itemView.findViewById(R.id.localeRecyclerviewHolderImageView);
if ((japanGuideViewModel.getLocalesDetailsArray().getValue().get(holder.getAdapterPosition()).getPhotoSet().size() != 0)) {
if ((japanGuideViewModel.getLocalesDetailsArray().getValue().get(holder.getAdapterPosition()).getPhotoSet().get(0).getURL() != null) && (!(japanGuideViewModel.getLocalesDetailsArray().getValue().get(holder.getAdapterPosition()).getPhotoSet().get(0).getURL().equals("")))) {
Glide.with(mContext).load(japanGuideViewModel.getLocalesDetailsArray().getValue().get(holder.getAdapterPosition()).getPhotoSet().get(0).getURL()).into(localePhotoImageView);
}
localePhotoCaptionTextView = holder.itemView.findViewById(R.id.localeRecyclerviewHolderPhotoCaptionTextView);
if ((japanGuideViewModel.getLocalesDetailsArray().getValue().get(holder.getAdapterPosition()).getPhotoSet().get(0).getCaption() != null) && (japanGuideViewModel.getLocalesDetailsArray().getValue().get(holder.getAdapterPosition()).getPhotoSet().get(0).getCaption() != "")) {
localePhotoCaptionTextView.setText(japanGuideViewModel.getLocalesDetailsArray().getValue().get(holder.getAdapterPosition()).getPhotoSet().get(0).getCaption());
}
}
exploreNowButton = holder.itemView.findViewById(R.id.LocaleRecyclerviewHolderExploreNowButton);
wontGoHereButton = holder.itemView.findViewById(R.id.LocaleRecyclerviewHolderWontGoHereButton);
exploreNowButton.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
Log.d(TAG, "Explore now button clicked. Setting the locale to explore to " + japanGuideViewModel.getCurrentLocaleBeingViewed().getValue());
japanGuideViewModel.getCurrentLocaleBeingViewed().setValue(holder.getAdapterPosition());
}
});
} //bind
@Override
public int getItemCount() {
return japanGuideViewModel.getLocalesDetailsArray().getValue().size();
}
} //recyclerview adaptor
public class LocaleChoosingAdaptorHolder extends RecyclerView.ViewHolder {
TextView localeNameTextView;
TextView localeDescriptionTextView;
Button exploreNowButton;
Button wontGoHereButton;
public LocaleChoosingAdaptorHolder(@NonNull View itemView) {
super(itemView);
Log.d(TAG, "localechoosing recyclerview holder constructor called");
localeNameTextView = itemView.findViewById(R.id.localeRecyclerviewHolderHeadingTextview);
localeDescriptionTextView = itemView.findViewById(R.id.localeRecyclerviewHolderDescriptionTextView);
exploreNowButton = itemView.findViewById(R.id.LocaleRecyclerviewHolderExploreNowButton);
wontGoHereButton = itemView.findViewById(R.id.LocaleRecyclerviewHolderWontGoHereButton);
}
}
}
I was not expecting the RecyclerView in LocaleChoosingFragment to reset to the first position when the observer in the plannerFragment is triggered by the observer on currentLocaleBeingViewed in the ViewModel. Everything else works as expected. |
When doing data wrangling, how should I deal with missing values in categorical columns?
I am a newbie in Python and want to learn more. I want to clean and tidy up the 'deck' column in the "Titanic" dataset, which contains a lot of NaN values. |
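The two usual strategies for missing categorical values — treating "missing" as its own category, or imputing the most frequent category — can be sketched with plain Python. The `deck` list below is made-up data standing in for the Titanic column; in pandas the equivalents would be `fillna("Unknown")` and `fillna(df["deck"].mode()[0])`.

```python
from collections import Counter

# Toy stand-in for a categorical column with missing values
# (hypothetical data, not the real Titanic "deck" column)
deck = ["C", "C", None, "E", None, "C", "B", None]

# Strategy 1: treat "missing" as its own category
filled_unknown = [d if d is not None else "Unknown" for d in deck]

# Strategy 2: fill with the most frequent category (the mode)
mode = Counter(d for d in deck if d is not None).most_common(1)[0][0]
filled_mode = [d if d is not None else mode for d in deck]

print(filled_unknown)
print(filled_mode)
```

Which strategy is appropriate depends on the analysis: an explicit "Unknown" category preserves the fact that the value was missing, while mode imputation keeps the original category set.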
dealing with missing values of categorical columns in Python |
|python|statistics|data-science|data-wrangling| |
null |
|docker|dockerfile| |
I'm trying to downgrade libffi=3.3 because I have read that the current version may be the reason for a bug I'm encountering.
conda install libffi==3.3 -n mismatch
Channels:
- defaults
Platform: linux-64
Collecting package metadata (repodata.json): done
Solving environment: \ warning libmamba Added empty dependency for problem type SOLVER_RULE_UPDATE
failed
LibMambaUnsatisfiableError: Encountered problems while solving:
- package python-3.11.8-h955ad1f_0 requires libffi >=3.4,<3.5, but none of the providers can be installed
Could not solve for environment specs
The following packages are incompatible
ββ libffi 3.3 is requested and can be installed;
ββ matplotlib is installable with the potential options
β ββ matplotlib [3.6.2|3.7.1|3.7.2] would require
β β ββ matplotlib-base [>=3.6.2,<3.6.3.0a0 |>=3.7.1,<3.7.2.0a0 |>=3.7.2,<3.7.3.0a0 ] with the potential options
β β ββ matplotlib-base [3.6.2|3.7.1|3.7.2] would require
β β β ββ python >=3.11,<3.12.0a0 , which requires
β β β ββ libffi >=3.4,<3.5 , which conflicts with any installable versions previously reported;
β β ββ matplotlib-base [3.6.2|3.7.1|3.7.2] would require
β β β ββ python >=3.10,<3.11.0a0 , which can be installed;
β β ββ matplotlib-base [3.2.1|3.2.2|...|3.7.2] would require
β β β ββ python >=3.8,<3.9.0a0 , which can be installed;
β β ββ matplotlib-base [3.3.2|3.6.2|3.7.1|3.7.2] would require
β β ββ python >=3.9,<3.10.0a0 , which can be installed;
β ββ matplotlib 3.8.0 would require
β β ββ python >=3.11,<3.12.0a0 , which cannot be installed (as previously explained);
β ββ matplotlib [2.0.2|2.1.0|...|2.2.3] would require
β β ββ python >=2.7,<2.8.0a0 , which can be installed;
β ββ matplotlib [2.0.2|2.1.0|...|3.0.0] would require
β β ββ python >=3.5,<3.6.0a0 , which can be installed;
β ββ matplotlib [2.0.2|2.1.0|...|3.3.4] would require
β β ββ python >=3.6,<3.7.0a0 , which can be installed;
β ββ matplotlib [2.2.2|2.2.3|...|3.5.3] would require
β β ββ python >=3.7,<3.8.0a0 , which can be installed;
β ββ matplotlib [3.1.1|3.1.2|...|3.7.2] would require
β β ββ python >=3.8,<3.9.0a0 , which can be installed;
β ββ matplotlib [3.2.1|3.2.2|3.3.1] would require
β β ββ matplotlib-base [>=3.2.1,<3.2.2.0a0 |>=3.2.2,<3.2.3.0a0 |>=3.3.1,<3.3.2.0a0 ] with the potential options
β β ββ matplotlib-base [3.2.1|3.2.2|...|3.7.2], which can be installed (as previously explained);
β β ββ matplotlib-base [3.2.1|3.2.2|3.3.1|3.3.2] would require
β β β ββ python >=3.6,<3.7.0a0 , which can be installed;
β β ββ matplotlib-base [3.2.1|3.2.2|3.3.1|3.3.2] would require
β β ββ python >=3.7,<3.8.0a0 , which can be installed;
β ββ matplotlib 3.3.2 would require
β β ββ matplotlib-base >=3.3.2,<3.3.3.0a0 , which can be installed (as previously explained);
β ββ matplotlib [3.3.4|3.4.2|...|3.8.0] would require
β β ββ python >=3.9,<3.10.0a0 , which can be installed;
β ββ matplotlib [3.5.0|3.5.1|...|3.8.0] would require
β β ββ python >=3.10,<3.11.0a0 , which can be installed;
β ββ matplotlib 3.8.0 would require
β ββ python >=3.12,<3.13.0a0 , which can be installed;
ββ pin-1 is not installable because it requires
ββ python 3.11.* , which conflicts with any installable versions previously reported.
(mismatch) :$ conda list python -n mismatch
# packages in environment at /home/x/anaconda3/envs/mismatch:
#
# Name Version Build Channel
python 3.11.8 h955ad1f_0
As you can see, I have already downgraded Python to 3.11 -- why is the conda solver complaining about "pin-1" and its requirement for Python 3.11? |
Unknown dependency "pin-1" prevents conda installation |
|python|anaconda|libffi| |
As a quick aside, we can cast back and forth from one table type to another, with restrictions:
/** create a TYPE by using a temp table**/
CREATE TABLE tyNewNames AS SELECT 0 a, 0 b , 0 c ;
SELECT
/** note: order, type & # of columns must match exactly**/
ROW((rec).*)::tyNewNames AS "rec newNames" -- f1,f2,f3 --> a,b,c
, (ROW((rec).*)::tyNewNames).* -- expand the new names
,'<new vs. old>' AS "<new vs. old>"
,*
FROM
(
SELECT
/** inspecting rec: PG assigned stand-in names f1, f2, f3, etc... **/
rec /* a record*/
,(rec).* -- expanded fields f1, f2, f3
FROM (
SELECT ( 1, 2, 3 ) AS rec -- an anon type record
) cte0
)cte1
;
+------------+-+-+-+-------------+------------+--+--+--+
|rec newnames|a|b|c|<new vs. old>|rec oldnames|f1|f2|f3|
+------------+-+-+-+-------------+------------+--+--+--+
|(1,2,3)     |1|2|3|<new vs. old>|(1,2,3)     |1 |2 |3 |
+------------+-+-+-+-------------+------------+--+--+--+
|
|javascript|reactjs|json|remix.run| |
I'm deleting your post because this seems like a question, rather than a conversation starter. However, this question is not on topic for Stack Overflow, but it may be on topic on some other site in the network https://stackexchange.com/sites
Please read the site's help center before posting to find whether your question is suitable for that site. |
null |
You could try using the following formulas; this assumes there are no Excel constraints, as per the tags posted:
[![enter image description here][1]][1]
----------
=TEXT(MAX(--TEXTAFTER(B$2:B$7,"VUAM")*($A2=A$2:A$7)),"V\U\A\M\00000")
----------
Or, using the following:
=TEXT(MAX((--RIGHT(B$2:B$7,5)*($A2=A$2:A$7))),"V\U\A\M\00000")
----------
Or, you could use the following as well using `XLOOKUP()` & `SORTBY()`:
[![enter image description here][2]][2]
----------
=LET(
x, SORTBY(A2:B7,--RIGHT(B2:B7,5),-1),
y, TAKE(x,,1),
XLOOKUP(A2:A7, y, TAKE(x,,-1)))
----------
The above can be made bit shorter:
=LET(_z, A2:A7, XLOOKUP(_z,_z, TAKE(SORTBY(A2:B7,--RIGHT(B2:B7,5),-1),,-1)))
----------
<sup> Notes On **`Escape Characters`**: The use of `backslash` before & after the `V`, `U`, `A` & `M` is an **`escape character`**. Because the `V`, `U`, `A` & `M` on its own serves a different purpose, we are escaping it meaning hence asking Excel to **`literally form text`** with that character. </sup>
[1]: https://i.stack.imgur.com/K0j2K.png
[2]: https://i.stack.imgur.com/ovCkj.png
|
In my Android app, I want to create files preserved in case of app uninstallation.
I can create hidden file in
```
File(Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOCUMENTS), ".SecretFolder")
```
My goal is to change read and write permission for this file.
I tried uninstalling my app -> the file is preserved. After reinstalling, at read time I get an AccessDeniedException.
Now if I want to change read and write permissions using ```File.setReadable(true, false)``` and ```File.setWritable(true, false)```, I can observe no change in the Android Device File Explorer (it remains -rw-------) even though the API returns true for both.
Note: I'm using a Virtual Device based on Android 11, API 30. I'm not targeting this version specifically.
here is my code:
```
private fun writeFileOnInternalStorage(remainingToken : Int) {
val dir = File(Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOCUMENTS), ".SecretFolder")
if(!dir.exists()){
if(dir.mkdirs()){
var allowed = dir.setReadable(true, false)
println("DEBUG dir ${dir.toPath()} is world-readable ? $allowed")
allowed = dir.setWritable(true, false)
println("DEBUG dir ${dir.toPath()} is world-writable ? $allowed")
}else{
println("ERROR")
return
}
}
val f = File(dir, ".secretFile")
if(!f.exists()){
f.createNewFile()
var allowed = f.setReadable(true, false)
println("DEBUG ${f.toPath()} is world-readable ? $allowed")
allowed = f.setWritable(true, false)
println("DEBUG ${f.toPath()} is world-writable ? $allowed")
}
f.writeText(remainingToken.toString())
println("DEBUG: written Internal Storage \"${remainingToken}\" in " + f.toPath().toString())
}
```
|
You can use PyInstaller. It generates a build dist so you can execute it as a single "binary" file.
http://pythonhosted.org/PyInstaller/#using-pyinstaller
Python 3 also has a native option to create a build dist:
https://docs.python.org/3.10/library/distutils.html |
Will the following work? And if so, will it provide a significant performance improvement.
My AppUser object includes:
public class AppUser
{
public int Id { get; private set; }
// lots of other properties
public List<Tag>? Tags { get; set; }
}
The total number of tags is presently 22 and is unlikely to grow beyond 200. So I read all of them in and cache them for any call where I need the tag(s).
Would it work to have a singleton service that creates a `DbContext` on first use and keeps that DbContext for the life of my application? This is a DbContext with tracking on, and on startup it reads in all the tags, as follows:
async Task Startup() {
dbContext = await TrackingDbFactory.CreateDbContextAsync();
tags = await dbContext.Tags.ToListAsync();
}
Then when I need to read in an AppUser, I do:
async Task<AppUser?> GetUserAsync(int id){
return await dbContext.Set<AppUser>().Include(u => u.Tags).FirstOrDefaultAsync(u => u.Id == id);
}
In the above case, will it re-read the AppUser.Tags from the database? Or will the DbContext use the tags list it read in earlier and re-use those already-read objects?
I do know it will need to read the AppUserTags join table. But not also reading the Tags table again would be a performance improvement. And I have 3 other list properties I would do this for, so the total performance savings would be decent.
This seems to work, but I don't know Entity Framework well enough to test this thoroughly. So:
1. Will this work consistently?
2. Are there any problems I can hit if I do this?
3. Is there any downside to having 1 DbContext that is retained for the life of the application?
4. Is there any downside to having all this in a singleton service?
|
Can I share a List<T> property across multiple queries via a tracking DbContext? |
Thanks to hints in the comments, I came up with this solution:
``` rust
trait ReadAndSeek: Read+Seek {}
impl <R: Read+Seek> ReadAndSeek for R {}
fn open_file<P: AsRef<Path> + Debug>(path: P) -> Result<Reader<Box<dyn ReadAndSeek>>>
{
if path.as_ref().extension().unwrap_or_default() == "gz"
{
let in_file = BGZFReader::new(File::open(path)?)?;
Ok(Reader::new(Box::new(in_file)))
}
else
{
let in_file = File::open(path)?;
Ok(Reader::new(Box::new(in_file)))
}
}
```
and downstream:
``` rust
let read_file = open_file(read_bed)?;
let coord_file = open_file(coord_bed)?;
let processor = Processor::new(read_file, coord_file)?;
```
The two things I didn't understand initially were:
1. Creating a compound trait as described [here](https://stackoverflow.com/questions/71905183/how-to-have-a-vec-of-boxes-which-implement-2-traits)
2. `Box`ing the inner file reader with a `dyn` trait
|
{"Voters":[{"Id":724039,"DisplayName":"Luuk"},{"Id":1191247,"DisplayName":"user1191247"},{"Id":10138734,"DisplayName":"Akina"}],"SiteSpecificCloseReasonIds":[11]} |
storybook 7 does not recognize module declarations |
|typescript|webpack|storybook|tsconfig| |
I am trying to write a little program that is resistant to buffer overflows and similar vulnerabilities.
Since we cannot trust user input, I thought it would be a good idea to concatenate all the input strings into one string and then pass it to a static path - a bash script in the same folder - adding all the parameters/flags/arguments (e.g. ./script.sh test1 test2 test3 test4).
My logic on paper is the following:
1. Check whether argc is exactly 5 (program name + the 4 arguments); if not, exit immediately.
2. A char array for 5 strings (array elements), each with a max length of 4096 bytes, is initialized.
3. Since `argv[0]` is equal to the program name, we need to skip it. Hence the loop starts at 1 (first argument) and ends at argc-1. So far so good.
4. We append the ending `.key` to the argv[1] string in memory.
5. We `memcpy` all the strings to make them safe to use.
6. For each iteration we don't forget to add the null terminator at the end of the string.
7. Once all the arguments are safe to use, we concatenate them in the function parse_output, call the bash script called script.sh, and add all the required arguments.
8. We return the output from `script.sh 4argshere` to the user.
I tried freeing the memory like in the comments but it seems to not work or I have errors somewhere else.
My segfaulting Proof of Concept:
```
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#define BUFSIZE 1000
char *concatenate(size_t size, char *array[size], const char *joint);
char *concatenate(size_t size, char *array[size], const char *joint){
size_t jlen, lens[size];
size_t i, total_size = (size-1) * (jlen=strlen(joint)) + 1;
char *result, *p;
for(i=0;i<size;++i){
total_size += (lens[i]=strlen(array[i]));
}
p = result = malloc(total_size);
for(i=0;i<size;++i){
memcpy(p, array[i], lens[i]);
p += lens[i];
if(i<size-1){
memcpy(p, joint, jlen);
p += jlen;
}
}
*p = '\0';
return result;
}
int parse_output(char *safeargv[]) {
char safeargs = *concatenate(5, safeargv, " ");
char cmd[BUFSIZE];
snprintf(cmd, BUFSIZE, "./script.sh %s", safeargs);
char buf[BUFSIZE] = {0};
FILE *fp;
if ((fp = popen(cmd, "r")) == NULL) {
printf("Error opening pipe!\n");
//free(safeargs);
return -1;
}
while (fgets(buf, BUFSIZE, fp) != NULL) {
printf("OUTPUT: %s", buf);
}
if (pclose(fp)) {
printf("Command not found or exited with error status\n");
//free(safeargs);
return -1;
}
//free(safeargs);
return 0;
}
int main(int argc, char *argv[]) {
if(argc != 5) {
exit(1);
}
char *safeargv[5][4096];
for (int i = 1; i < argc - 1; i++) {
if (i == 1)
strcat(argv[1], ".key");
for (int x = 0; x < strlen(argv[i]); x++) {
char *unsafe_string = argv[i];
size_t max_len = 4096;
size_t len = strnlen(unsafe_string, max_len) + 1;
char *x = malloc(len * sizeof(char));
if (x != NULL) {
strncpy(x, unsafe_string, len);
x[len-1] = '\0'; // Ensure null-termination
strcpy(safeargv[i], x);
free(x);
}
}
}
parse_output(safeargv);
return 0;
}
```
The warnings I get while compiling:
```
chal.c: In function 'main':
chal.c:78:32: warning: passing argument 1 of 'strcpy' from incompatible pointer type [-Wincompatible-pointer-types]
78 | strcpy(safeargv[i], x);
| ~~~~~~~~^~~
| |
| char **
In file included from chal.c:2:
/usr/include/string.h:141:39: note: expected 'char * restrict' but argument is of type 'char **'
141 | extern char *strcpy (char *__restrict __dest, const char *__restrict __src)
| ~~~~~~~~~~~~~~~~~^~~~~~
chal.c:83:18: warning: passing argument 1 of 'parse_output' from incompatible pointer type [-Wincompatible-pointer-types]
83 | parse_output(safeargv);
| ^~~~~~~~
| |
| char * (*)[4096]
chal.c:32:24: note: expected 'char **' but argument is of type 'char * (*)[4096]'
32 | int parse_output(char *safeargv[]) {
| ~~~~~~^~~~~~~~~~
```
It seems only my argc check works, because if I call ./programname abc abc abc abc it segfaults. Also, what's the proper way to detect an error, in case there's a typo in the argument for the script?
What mistake(s) did I make? |
User input sanitization program, which takes a specific amount of arguments and passes the execution to a bash script |
|arrays|c|string|strcpy|strlen| |
I'm attempting to implement the case change feature available in Microsoft Word with Shift + F3 into a TinyMCE React editor. The problem I'm running into is the last part where it should keep the same text selected/highlighted. The below works fine, as long as I haven't highlighted the last word of a node. If I have selected the end of a line, I get an error: `Uncaught DOMException: Index or size is negative or greater than the allowed amount`
So far I have the following:
```typescript
const handleCaseChange = (ed: Editor) => {
ed.on("keydown", (event: KeyboardEvent) => {
if (event.shiftKey && event.key === "F3") {
event.preventDefault();
const selection = ed.selection.getSel();
const selectedText = selection?.toString();
const startOffset = selection?.getRangeAt(0).startOffset;
const endOffset = selection?.getRangeAt(0).endOffset;
if (selectedText !== undefined && selectedText.length > 0) {
let transformedText;
if (selectedText === selectedText.toUpperCase()) {
transformedText = selectedText.toLowerCase();
} else if (selectedText === selectedText.toLowerCase()) {
transformedText = capitalizeEachWord(selectedText);
} else {
transformedText = selectedText.toUpperCase();
}
ed.selection.setContent(transformedText);
const range = ed.getDoc().createRange();
// This is what's currently erroring
range.setStart(selection.anchorNode, startOffset);
if (endOffset === selection?.anchorNode?.textContent?.length) {
range.setEndAfter(selection.anchorNode);
} else {
range.setEnd(selection.anchorNode, endOffset);
}
selection.removeAllRanges();
selection.addRange(range);
}
}
}
}
const capitalizeEachWord = (str: String) => str.replace(/\b\w/g, (char: string) => char.toUpperCase());
```
What else could I try in the `range.setStart` to get this to work correctly? |
How to re-select the same text after modifying it in TinyMCE React |
|tinymce|selection|tinymce-react| |
Here the base class creates the `node` object that it later returns:
node = new BSTree(value);
This object is *always* of type `BSTree`, so it can never be downcast to a derived type.
If you want to vary this, you could consider making use of the [Factory Method pattern][1]. Something like this:
node = CreateNode(value);
where `CreateNode` is a `virtual` method that a deriving class can override.
[1]: https://en.wikipedia.org/wiki/Factory_method_pattern |
I'm trying to populate a 2D array of objects and get a result like this example:
```
Guid guid = Guid.NewGuid();
var data = new[] {
new object[] { 22, "cust1_fname","cust1_lname",guid },
new object[] { 23, "cust2_fname","cust2_lname",guid },
new object[] { 24, "cust3_fname","cust3_lname",guid },
};
```
[enter image description here](https://i.stack.imgur.com/IyIVn.png)
I tried this way:
[enter image description here](https://i.stack.imgur.com/nu6Fp.png)
But the objects are not added as direct children under the 2D array as in the first example |
When opening a folder using Remote SSH in Visual Studio Code with Microsoft's Python extension, the "discovering Python interpreters" step goes on forever. On the remote (Ubuntu), it can be observed that vscode-server is taking 100% of the CPU. Also, the Python extension functionality is not available. |
Visual Studio Code keeps discovering python interpreters forever and vscode-server on remote is busy 100% |
|visual-studio-code|vscode-extensions| |
{"Voters":[{"Id":341994,"DisplayName":"matt"},{"Id":7325599,"DisplayName":"Fedor"},{"Id":11841571,"DisplayName":"Lamanus"}],"SiteSpecificCloseReasonIds":[18]} |
I am trying to incorporate a 3D model in my project, but for some reason the canvas renders two 3D models instead of one. I'm fairly new to three.js, so I'd appreciate it if someone could point me in the right direction. I have tried removing the scene and altering canvas and camera details, but it doesn't seem to work.
As you can see in the image, there are two 3D models. Is it tiling for some reason? I checked the 3D software scene; it doesn't have two models in it.
```js
function Mobile() {
useEffect(() => {
const scene = new THREE.Scene();
// Add lights to the scene
const ambientLight = new THREE.AmbientLight(0xffffff, 0.5);
scene.add(ambientLight);
const directionalLight = new THREE.DirectionalLight(0xffffff, 0.5);
directionalLight.position.set(5, 5, 5);
scene.add(directionalLight);
const camera = new THREE.PerspectiveCamera(cameraProps.fov, cameraProps.aspect, cameraProps.near, cameraProps.far);
camera.position.copy(cameraProps.position);
const renderer = new THREE.WebGLRenderer({ alpha: true }); // Set alpha to true for transparent background
// Adjust renderer size to fit within the container
const containerWidth = window.innerWidth;
const containerHeight = window.innerHeight;
renderer.setSize(containerWidth, containerHeight);
renderer.setClearColor(0x000000, 0); // Set clear color to transparent
containerRef.current.appendChild(renderer.domElement);
// Load the phone model
const gltfLoader = new GLTFLoader();
gltfLoader.load("/phone_port.gltf", (gltfScene) => {
const phoneModel = gltfScene.scene;
phoneModel.scale.set(0.5, 0.5, 0.5);
scene.add(phoneModel);
});
// Set camera position and orientation
camera.lookAt(0.5, 0, 0);
// Update camera aspect ratio based on container size
camera.aspect = containerWidth / containerHeight;
camera.updateProjectionMatrix();
// Clean up Three.js resources on unmount
return () => {
renderer.dispose();
};
}, []);
return (
<div ref={containerRef}>
<Canvas
camera ={{
fov: cameraProps.fov,
aspect: cameraProps.aspect,
near: cameraProps.near,
far: cameraProps.far,
position: cameraProps.position,
}}
gl={{ alpha: true }}
onCreated={({ camera }) => {
controls.current.enabled = true;
}}
onPointerEnter={() => {
// Enable OrbitControls when the mouse enters the canvas
controls.current.enabled = true;
}}
onPointerLeave={() => {
// Disable OrbitControls when the mouse leaves the canvas
controls.current.enabled = false;
}}
/>
</div>
);
}
export default Mobile;
```

|
Change file permissions in an Android app programmatically |
|android|file-permissions| |
null |
`=VLOOKUP(A1:A6,SORT(A1:B6,2,-1),2,0)`
Or if already sorted as in example:
`=XLOOKUP(A1:A6,A1:A6,B1:B6,,,-1)` |
```
#first block: calculating last purchase date
from datetime import timedelta
last_purchase_date = (sales_data['TRANSAC_DATE'].max()) + timedelta(days=1)
print("Last purchase Date: ", sales_data['TRANSAC_DATE'].max())
print("Recency/Last purchase Date: ", last_purchase_date)
#Second block: calculating Recency of last purchase in RFM analysis
RFM = sales_data.groupby(['CLIENT_ID']).agg({
'CLIENT_ID': lambda x: (last_purchase_date - x.max()).days,
'Transaction_ID': 'count',
'NET': 'sum'
})
#Error line: lambda x: (last_purchase_date - x.max()).days
RFM.rename(columns={'CLIENT_ID': 'Recency', 'Transaction_ID': 'Frequency', 'NET': 'MonetaryValue'}, inplace= True)
display(RFM)
```
**Problem:** I want to have the recency in days, but I can't subtract the output of x.max(), which is an array of integers, from last_purchase_date (a Timestamp).
**#Error line:** lambda x: (last_purchase_date - x.max()).days
**#Error msg:** *Addition/subtraction of integers and integer-arrays with Timestamp is no longer supported. Instead of adding/subtracting `n`, use `n * obj.freq`* |
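The error is about types: a Timestamp can be subtracted from another datetime-like value, but not from plain integers. Note that the failing lambda is applied to the `CLIENT_ID` column (integers) rather than `TRANSAC_DATE`, which matches the error message. A minimal stdlib sketch of the datetime arithmetic that does work (the dates here are made up):

```python
from datetime import datetime, timedelta

# Hypothetical transaction dates for one client (stand-ins for TRANSAC_DATE)
dates = [datetime(2024, 1, 5), datetime(2024, 2, 1), datetime(2024, 2, 20)]

# Reference point: one day after the newest transaction
last_purchase_date = max(dates) + timedelta(days=1)

# The subtraction works because both operands are datetimes, not integers
recency_days = (last_purchase_date - max(dates)).days
print(recency_days)
```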
How can I add/subtract integers and integer-arrays with a Timestamp |
Use this code instead of your code:
```js
import type { LinksFunction } from "@remix-run/node";
import stylesheet from "~/styles/tailwind.css?url";
export const links: LinksFunction = () => [
{ rel: "stylesheet", href: stylesheet },
];
``` |
We used to not be able to access all properties at once from a union in TypeScript? |
I would like to convert a `flax.linen.Module`, taken from [here](https://colab.research.google.com/drive/1SeXMpILhkJPjXUaesvzEhc3Ke6Zl_zxJ?usp=sharing) and replicated below this post, to a `torch.nn.Module`.
However, I find it extremely hard to figure out how I need to replace
1. The `flax.linen.Dense` calls;
2. The `flax.linen.Conv` calls;
3. The custom class `Dense`.
For (1.), I guess I need to use `torch.nn.Linear`. But what do I need to specify as `in_features` and `out_features`?
For (2.), I guess I need to use `torch.nn.Conv2d`. But, again, what do I need to specify as `in_channels` and `out_channels`.
I guess I know how to port the `GaussianFourierProjection` class and how to mimic the "swish" activation function. Obviously, it would be extremely helpful if someone familiar with both frameworks could provide the corresponding `torch.nn.Module` as an answer. But it would also already be helpful if someone could at least answer how (1.) - (3.) need to be replaced. Any help is highly appreciated!
---
#@title Defining a time-dependent score-based model (double click to expand or collapse)
import jax.numpy as jnp
import numpy as np
import flax
import flax.linen as nn
from typing import Any, Tuple
import functools
import jax
class GaussianFourierProjection(nn.Module):
"""Gaussian random features for encoding time steps."""
embed_dim: int
scale: float = 30.
@nn.compact
def __call__(self, x):
# Randomly sample weights during initialization. These weights are fixed
# during optimization and are not trainable.
W = self.param('W', jax.nn.initializers.normal(stddev=self.scale),
(self.embed_dim // 2, ))
W = jax.lax.stop_gradient(W)
x_proj = x[:, None] * W[None, :] * 2 * jnp.pi
return jnp.concatenate([jnp.sin(x_proj), jnp.cos(x_proj)], axis=-1)
class Dense(nn.Module):
"""A fully connected layer that reshapes outputs to feature maps."""
output_dim: int
@nn.compact
def __call__(self, x):
return nn.Dense(self.output_dim)(x)[:, None, None, :]
class ScoreNet(nn.Module):
"""A time-dependent score-based model built upon U-Net architecture.
Args:
marginal_prob_std: A function that takes time t and gives the standard
deviation of the perturbation kernel p_{0t}(x(t) | x(0)).
channels: The number of channels for feature maps of each resolution.
embed_dim: The dimensionality of Gaussian random feature embeddings.
"""
marginal_prob_std: Any
channels: Tuple[int] = (32, 64, 128, 256)
embed_dim: int = 256
@nn.compact
def __call__(self, x, t):
# The swish activation function
act = nn.swish
# Obtain the Gaussian random feature embedding for t
embed = act(nn.Dense(self.embed_dim)(
GaussianFourierProjection(embed_dim=self.embed_dim)(t)))
# Encoding path
h1 = nn.Conv(self.channels[0], (3, 3), (1, 1), padding='VALID',
use_bias=False)(x)
## Incorporate information from t
h1 += Dense(self.channels[0])(embed)
## Group normalization
h1 = nn.GroupNorm(4)(h1)
h1 = act(h1)
h2 = nn.Conv(self.channels[1], (3, 3), (2, 2), padding='VALID',
use_bias=False)(h1)
h2 += Dense(self.channels[1])(embed)
h2 = nn.GroupNorm()(h2)
h2 = act(h2)
h3 = nn.Conv(self.channels[2], (3, 3), (2, 2), padding='VALID',
use_bias=False)(h2)
h3 += Dense(self.channels[2])(embed)
h3 = nn.GroupNorm()(h3)
h3 = act(h3)
h4 = nn.Conv(self.channels[3], (3, 3), (2, 2), padding='VALID',
use_bias=False)(h3)
h4 += Dense(self.channels[3])(embed)
h4 = nn.GroupNorm()(h4)
h4 = act(h4)
# Decoding path
h = nn.Conv(self.channels[2], (3, 3), (1, 1), padding=((2, 2), (2, 2)),
input_dilation=(2, 2), use_bias=False)(h4)
## Skip connection from the encoding path
h += Dense(self.channels[2])(embed)
h = nn.GroupNorm()(h)
h = act(h)
h = nn.Conv(self.channels[1], (3, 3), (1, 1), padding=((2, 3), (2, 3)),
input_dilation=(2, 2), use_bias=False)(
jnp.concatenate([h, h3], axis=-1)
)
h += Dense(self.channels[1])(embed)
h = nn.GroupNorm()(h)
h = act(h)
h = nn.Conv(self.channels[0], (3, 3), (1, 1), padding=((2, 3), (2, 3)),
input_dilation=(2, 2), use_bias=False)(
jnp.concatenate([h, h2], axis=-1)
)
h += Dense(self.channels[0])(embed)
h = nn.GroupNorm()(h)
h = act(h)
h = nn.Conv(1, (3, 3), (1, 1), padding=((2, 2), (2, 2)))(
jnp.concatenate([h, h1], axis=-1)
)
# Normalize output
h = h / self.marginal_prob_std(t)[:, None, None, None]
return h
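One difference worth noting: `flax.linen.Conv` and `flax.linen.Dense` infer their input size from the data they receive, while `torch.nn.Conv2d` and `torch.nn.Linear` require `in_channels`/`in_features` explicitly. A minimal bookkeeping sketch of the `(in_channels, out_channels)` pairs for the encoder convs above — the 1-channel input is my assumption (as for grayscale images such as MNIST), not something stated in the code:

```python
# Sketch only: derive (in_channels, out_channels) pairs for torch.nn.Conv2d
# from the flax channels tuple. The 1-channel input is an assumption.
channels = (32, 64, 128, 256)
in_ch = 1  # assumed grayscale input
conv_args = []
for out_ch in channels:
    conv_args.append((in_ch, out_ch))  # arguments for one torch.nn.Conv2d
    in_ch = out_ch  # next layer's input channels = this layer's output channels
print(conv_args)
```

The same bookkeeping applies to `torch.nn.Linear`: `in_features` is the size of the tensor feeding the layer (e.g. `embed_dim` for the layers consuming the time embedding), and `out_features` corresponds to the single size argument passed to `flax.linen.Dense`.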
|
How can I convert a flax.linen.Module to a torch.nn.Module? |
|python|machine-learning|pytorch|neural-network|flax| |
My aim is to make an interactive bar chart. The reader sees summary statistics of a variable, and can then choose to see the distribution of that variable conditioned on other variables. For example, they see the average total_bill for restaurant guests at lunch and dinner time. They can then choose to see how that varies by sex OR by age. Below is an example, which I modified from [here](https://rpubs.com/eshel_s/plotlytutorial).
```
library(ggplot2)
library(plotly)
#Sex
dat1 <- data.frame(
sex = c("Female", "Female", "Male", "Male", 'Any', 'Any'),
time = c("Lunch", "Dinner", "Lunch", "Dinner", 'Lunch', 'Dinner'),
total_bill = c(13.53, 16.81, 16.24, 17.42, 14.5, 17.3)
)
p <- ggplot(data=dat1, aes(x=time, y=total_bill, fill=sex)) +
geom_bar(stat="identity", position=position_dodge())
fig1 <- ggplotly(p)
fig1
#Age
dat2 <- data.frame(
age = c('Old', 'Old', 'Young', 'Young', 'Any', 'Any'),
time = c("Lunch","Dinner","Lunch","Dinner", 'Lunch', 'Dinner'),
total_bill = c(14.53, 15.81, 18.24, 19.42, 14.5, 17.3)
)
p <- ggplot(data=dat2, aes(x=time, y=total_bill, fill=age)) +
geom_bar(stat="identity", position=position_dodge())
fig2 <- ggplotly(p)
fig2
```
Here are the problems with this approach:
1) in both plots, the total_bill for all three groups is shown by default.
2) conditioning on sex and age occurs in two separate plots. It would be nice with a dropdown menu to the left, where the user could choose to condition on sex/age.
I am open to other interactive solutions. The issue with `shiny` is that it requires me to host it on a server, and I need to be able to send the report to people so they can also have it when they are offline. There is [this solution](https://quarto.org/docs/interactive/#observable-js), but I am hoping to achieve this without having to learn a new programming language (i.e., OJS). |
How to populate two dimensional array |
|c#|arrays|multidimensional-array| |
null |
Foreign keys of the join table are linked incorrectly - `TagId` to `Articles.ArticleId` and `ArticleId` to `Tags.TagId`. This can also be seen in the error message and the generated migration - and of course in the model, if you look carefully. One reason I don't like the `ForeignKey` attribute is that it is multipurpose, with different meanings depending on where you apply it, and thus very error prone.
You need to correct the model and generate/apply new migration:
```cs
public class Article
{
[ForeignKey("ArticleId")] // <-- was [ForeignKey("TagId")]
[InverseProperty("Articles")]
public virtual ICollection<Tag> Tags { get; set; }
}
public class Tag
{
[ForeignKey("TagId")] // <-- was [ForeignKey("ArticleId")]
[InverseProperty("Tags")]
public virtual ICollection<Article> Articles { get; set; }
}
``` |
Recently I ran into an issue with CUDA programming.
I have an array `a`, and I want to do inter-element subtraction, like a[0]-a[2], a[1]-a[3], ..., and so on. Later, I need to multiply these results, or in other words: (a[0]-a[2])*(a[1]-a[3]), (a[4]-a[6])*(a[5]-a[7]), ... and so on. All of the above instructions should happen in GPU kernel(s).
So far, my kernels give me the correct result and look like this:
```
__global__ void subtractKernel(short* a, __int64 numElements)
{
int index = blockDim.x * blockIdx.x + threadIdx.x;
int stride = blockDim.x * gridDim.x;
#pragma unroll
for (int i = index; i < numElements / 4; i += stride)
{
a[i * 4] = (a[i * 4] - a[i * 4 + 2]);
a[i * 4 + 1] = (a[i * 4 + 1] - a[i * 4 + 3]);
}
}
```
```
__global__ void multiplyKernel(short* a, int* dev_a, __int64 numElements)
{
int index = blockDim.x * blockIdx.x + threadIdx.x;
int stride = blockDim.x * gridDim.x;
#pragma unroll
for (int i = index; i < numElements /4; i+=stride)
{
dev_a[i] = (int)a[i * 4 ] * (int)a[i * 4 + 1];
}
}
```
However, the efficiency is poor. There is too much memory traffic in `subtractKernel`, and compute throughput is only 3.15% (whereas memory throughput is 47%). I know I should probably use shared memory for this task, but I don't know how. Could anyone help me with this? Or any other thoughts? Thanks.
It's not a reduction-type problem, so I think I don't need to do a full warp-level reduction. Since I am new to CUDA, I don't know how to extend the reduction notion to my case.
Subtraction and multiplication of an array with compute-bound in CUDA C program |
|c|cuda| |
null |
I'm fairly new to .NET Core and I'm working on a miniproject, an apartment booking website.
I've already made my data models and data tables for the site. When I scaffold a new controller with Razor views, the create form just doesn't work when there is a foreign key in the table. The basic create for simple tables works. I tried everything, I even gave ChatGPT all of my code, and it still does not work. What am I doing wrong?
There are 2 data tables named *Apartments* and *Bookings*.
Here are the 2 data classes which are needed in the form (they inherit their ID from BaseEntity):
```
using System.ComponentModel.DataAnnotations.Schema;
using System.ComponentModel.DataAnnotations;
namespace Vendeghazak.Data
{
public class Booking : BaseEntity
{
public int GuestID { get; set; }
public DateTime CheckInDate { get; set; }
public DateTime CheckOutDate { get; set; }
public string? Comment { get; set; }
public string? Status { get; set; }
public bool Cancelled { get; set; }
[ForeignKey("ApartmentId")]
public int ApartmentId { get; set; }
public Apartment Apartment { get; set; }
public int NumberOfPersons { get; set; }
public int PersonsUnder18 { get; set; }
public string BillingAddress { get; set; }
public string BillingEmail { get; set;}
}
}
namespace Vendeghazak.Data
{
public class Apartment : BaseEntity
{
public string Name { get; set; }
public int Capacity { get; set; }
}
}
namespace Vendeghazak.Data
{
public partial class BaseEntity
{
public int Id { get; set; }
public DateTime DateCreated { get; set; }
public DateTime DateModified { get; set; }
}
}
```
I tried removing the "Id" from the foreign key annotation, but it didn't help. When I submit the form, the "Apartment" navigation property is always null and the ModelState is invalid because of that. There is a dropdown list for the apartments and I tried different methods to populate it correctly, without success. I also tried hard-coding the ID value of the apartment into the view and then manually assigning it in the controller, but the "Apartment" property is always null after submission.
What am I doing wrong?
|
coded one, but renderer renders two 3D models in three.js |
|javascript|next.js|three.js|3d| |
I would personally add `://` after `http` and also add an optional `s`
for the SSL URLs. This way, you won't match "*httpd is Apache daemon*"
and will only match URLs.
Instead of using `.*?`, you can match any char which isn't a comma.
Commas are, however, allowed in URLs at specific places (typically not
in the domain but allowed in the path, see
[section 3.3 of RFC 2396](https://www.ietf.org/rfc/rfc2396.txt)), so the
use of `\bhttps?://[^,]+` may not be a fully correct solution, but it
will work in most cases.
### A) No comma in the URL
The regular expression would become: `(?<=,)\bhttps?:\/\/[^,]+(?=,)`
See it in action here: https://regex101.com/r/yKHDiC/1
### B) No comma in the URL, but spaces before allowed
But it's a bit of a shame that we cannot use `(?<=,\s*)` as a positive
lookbehind, which would let us also match URLs when some spaces follow
the comma. This is because adding `\s*` makes the lookbehind of
undefined length, which is not allowed in lookbehinds by most regex
engines.
But depending on your use case, we could replace the lookarounds by
some capturing groups, which are more flexible, because they can have
an unfixed length:
`(,\s*)\b(https?:\/\/[^,]+)(\s*,)`
Version 2 with groups: https://regex101.com/r/yKHDiC/2
Your URL would be in group number 2.
### C) Comma allowed in the URL, spaces before and after
Using groups, like before, we can also improve it by accepting commas
in the URL, but searching for lines starting with some text without
commas, followed by a comma, then the URL and finally the end of line
which should be a comma and any chars not beeing commas.
This would become: `^([^,]+,\s*)\b(https?:\/\/.*?)(\s*,[^,]+)$`
Version 3: https://regex101.com/r/yKHDiC/3
But this can only work if you always have 3 items per line.
So you might have to adapt and choose the best regex depending on your
real use case. |
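To illustrate, here is a small Python sketch (my own; the sample lines are made up) applying variants A and C:

```python
import re

# Variant A: no commas in the URL, URL sits directly between two commas
pattern_a = re.compile(r"(?<=,)\bhttps?://[^,]+(?=,)")
line_a = "site,https://example.com/page,description"
url_a = pattern_a.search(line_a).group(0)

# Variant C: exactly 3 comma-separated fields; commas allowed inside the URL
pattern_c = re.compile(r"^([^,]+,\s*)\b(https?://.*?)(\s*,[^,]+)$")
line_c = "site, https://example.com/a,b, description"
url_c = pattern_c.match(line_c).group(2)

print(url_a)  # https://example.com/page
print(url_c)  # https://example.com/a,b
```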
For elevation you'd need some data source / provider. The `elevatr` package can potentially provide values for point locations, for example through *Amazon Web Service Terrain Tiles*, but you'd probably need to evaluate those results and work out acceptable resolution / zoom levels.
Here I'm assuming that "relative elevation estimation" means an elevation range, though it could be any of a number of metrics when it comes to a linestring. Linestring length is fairly trivial when working with `sf`: `st_length()`. For building those linestrings, perhaps try dplyr-style `group_by()` + `summarise()` first; you can look into alternatives if that proves impractical due to the number of points.
``` r
library(elevatr)
library(dplyr)
library(sf)
#> Linking to GEOS 3.11.2, GDAL 3.6.2, PROJ 9.2.0; sf_use_s2() is TRUE
aus_sf <-
aus |>
  # make sure points are correctly sorted
arrange(country, id, point_id) |>
# convert to sf object
st_as_sf(coords = c("lon", "lat"), crs = "WGS84") |>
# get elevation from Amazon Web Service Terrain Tiles
get_elev_point(src = "aws")
aus_sf
#> Simple feature collection with 17 features and 5 fields
#> Geometry type: POINT
#> Dimension: XY
#> Bounding box: xmin: 129.251 ymin: -20.06129 xmax: 137.2865 ymax: -18.30723
#> Geodetic CRS: WGS 84
#> First 10 features:
#> id country point_id geometry elevation elev_units
#> 1 1 Australia 0 POINT (130.1491 -19.5752) 382 meters
#> 2 1 Australia 1 POINT (129.9958 -19.4876) 425 meters
#> 3 1 Australia 2 POINT (129.7156 -19.25788) 444 meters
#> 4 1 Australia 3 POINT (129.7104 -19.20223) 441 meters
#> 5 2 Australia 0 POINT (129.251 -18.59016) 376 meters
#> 6 2 Australia 1 POINT (129.5436 -18.30723) 398 meters
#> 7 3 Australia 0 POINT (137.284 -20.06129) 229 meters
#> 8 3 Australia 1 POINT (137.2865 -20.04308) 234 meters
#> 9 3 Australia 2 POINT (137.1915 -20.00782) 237 meters
#> 10 3 Australia 3 POINT (137.122 -19.97166) 234 meters
```
Build linestrings from ordered points:
``` r
aus_lines <-
aus_sf |>
# group by `country` to keep it as attribute and by `id`
# to create 4 multipoints, one per id
group_by(country, id) |>
# points to multipoints,
# do not use do_union as it will likely change point order,
# add elevation range
summarise(elev_range = max(elevation) - min(elevation), do_union = FALSE, .groups = "drop") |>
# multipoints to linestrings
st_cast("LINESTRING") |>
# add length column
mutate(length = st_length(geometry))
```
Resulting linestrings with elevation range and length:
``` r
aus_lines
#> Simple feature collection with 4 features and 4 fields
#> Geometry type: LINESTRING
#> Dimension: XY
#> Bounding box: xmin: 129.251 ymin: -20.06129 xmax: 137.2865 ymax: -18.30723
#> Geodetic CRS: WGS 84
#> # A tibble: 4 Γ 5
#> country id elev_range geometry length
#> * <chr> <int> <dbl> <LINESTRING [Β°]> [m]
#> 1 Australia 1 62 (130.1491 -19.5752, 129.9958 -19.4876, 129.β¦ 63941.
#> 2 Australia 2 22 (129.251 -18.59016, 129.5436 -18.30723) 44072.
#> 3 Australia 3 8 (137.284 -20.06129, 137.2865 -20.04308, 137β¦ 48462.
#> 4 Australia 4 4 (136.8961 -19.85932, 136.8791 -19.88669, 13β¦ 10190.
```
Dataset example:
``` r
aus <- read.table(header = TRUE, text =
"id country point_id lon lat
1 Australia 0 130.1491 -19.57520
1 Australia 1 129.9958 -19.48760
1 Australia 2 129.7156 -19.25788
1 Australia 3 129.7104 -19.20223
2 Australia 0 129.2510 -18.59016
2 Australia 1 129.5436 -18.30723
3 Australia 0 137.2840 -20.06129
3 Australia 1 137.2865 -20.04308
3 Australia 2 137.1915 -20.00782
3 Australia 3 137.1220 -19.97166
3 Australia 4 137.0650 -19.91363
3 Australia 5 136.8961 -19.85932
4 Australia 0 136.8961 -19.85932
4 Australia 1 136.8791 -19.88669
4 Australia 2 136.8594 -19.91227
4 Australia 3 136.8454 -19.92507
4 Australia 4 136.8360 -19.92976")
```
|
Trying to add an AD user account to an AD group using Python with ldap3, using the following script:
```
# Import necessary modules and libraries
import requests
from flask import json
from ldap3 import Server, Connection, ALL_ATTRIBUTES, SUBTREE, NTLM
from ldap3.extend.microsoft.addMembersToGroups import ad_add_members_to_groups
# Test API data
testuser = r"TS\testuser"
# Define LDAP server details
Server_ip = '192.168.2.3'
# Define bind user credentials
#BIND_Username = 'CN=Automation,CN=Users,DC=testnetwerk,DC=com'
BIND_Username = 'TESTNETWERK\\Automation'
BIND_Password = 'Welkom123!'
# Define LDAP paths
Base_DN = "dc=testnetwerk,dc=com"
Filter = "(sAMAccountName={0}*)" # LDAP filter to search for users based on sAMAccountName
Group_DN = "CN=testgroup,CN=Users,DC=testnetwerk,DC=com" # DN of the group to which users will be added
# Function to create an LDAP Server object
def server_ldap():
return Server(Server_ip)
# Function to establish connection to LDAP server
def connect_ldap():
server = server_ldap()
# return Connection(server, user=BIND_Username, password=BIND_Password, auto_bind=True)
return Connection(server, user=BIND_Username, password=BIND_Password, authentication=NTLM)
# Function to search for a user in LDAP directory based on sAMAccountName
def find_user(username):
with connect_ldap() as c:
print("Connected to LDAP server")
# Perform LDAP search operation
c.search(search_base=Base_DN, search_filter=Filter.format(username[3:]), search_scope=SUBTREE,
attributes=ALL_ATTRIBUTES, get_operational_attributes=True)
# Return search results in JSON format
print(json.loads(c.response_to_json()))
return json.loads(c.response_to_json())
# Function to add the found user to the specified LDAP group
def add_user_to_group(username):
# Retrieve the DN (Distinguished Name) of the user from search results
user = find_user(username)["entries"][0]["dn"]
print(user)
# Add user to the specified group
ad_add_members_to_groups(connect_ldap(), user, Group_DN)
# Return confirmation message
return "Added " + user + " to the group!"
print(find_user(testuser))
try:
# Attempt to add test user to the group and print confirmation
print(add_user_to_group(testuser))
except Exception as e:
# Print error message if an exception occurs
print("ai ai ai")
print(e)
```
However, when printing out the value that should be returned using `print(json.loads(c.response_to_json()))`, it responds; when returning it, it does not, and gives me the following error: `TypeError: the JSON object must be str, bytes or bytearray, not NoneType`.
If I uncomment `#BIND_Username = 'CN=Automation,CN=Users,DC=testnetwerk,DC=com'` and `# return Connection(server, user=BIND_Username, password=BIND_Password, auto_bind=True)` and comment out the other two lines, it works.
Any ideas?
|
> Or some kind of **direct indexing** and have a map with possibly **many empty entries** in it?
After researching the topic - this seems to have been the case before the new **Virtual Stub Dispatch** mechanism was introduced ([BookOftheRuntime link][1])
One of the reasons mentioned -> [Working Set Reduction][2]
> Interface dispatch was previously implemented using a **large**, somewhat **sparse** vtable lookup map dealing with process-wide interface identifiers.
And from the Introduction
> This requirement meant that **all interfaces and all classes that implemented interfaces had to be restored at runtime** in NGEN scenarios, causing significant startup working set increases.
So, it looks like there was a Class-Interface table with a fixed order of interfaces for each class.
This would enable the following example (from the excellent "Pro .NET Performance" book by Sasha Goldshtein et al.):
mov ecx, dword ptr [ebp-64] ; object reference
mov eax, dword ptr [ecx] ; method table pointer
mov eax, dword ptr [eax+12] ; interface map pointer
mov eax, dword ptr [eax+48] ; compile time offset for this interface in the map
call dword ptr [eax] ; first method at EAX, second method at EAX+4, etc.
At offset 12 we had a pointer to the offset for class A in the global interface map. Then, to access the specific interface table, we had a fixed offset at +48. The address there pointed back to the beginning of the interface map for the specific interface in our own method table.
As the order of the methods of the interface is fixed, we can use an offset for the specific method slot.
The above assembly works without walking any hierarchies, just by dereferencing and relying on a large class x interface matrix.
This also is shown in Figure 9 from the 2005 Article about the [.NET CLR Internals][3]
[![enter image description here][4]][4]
Another confirmation for this is the following paragraph from "Essential .NET, Vol. 1 The common language runtime" by Don Box and Chris Sells (2002):
> the interface offset table is an array of offsets into the type's
> method table. There is one entry in this table for every interface type that has been initialized
> by the CLR independent of whether or not the type supports the interface. As the CLR
> initializes interface types, it assigns them a zero-based index into this table. When the CLR
> initializes a concrete type, the CLR allocates a new interface offset table for the type. The
> interface offset table will be sparsely populated, but it must be at least as long as the index of
> any of its declared interfaces. When the CLR initializes a concrete type, the CLR populates its
> interface offset table by storing the appropriate method table offsets into the entries for
> supported interfaces. Because the CLR's verifier ensures that interface-based references refer
> only to objects that support the declared type, interface offset table entries for unsupported
> interfaces are never used and their contents are immaterial.
[1]: https://github.com/dotnet/runtime/blob/main/docs/design/coreclr/botr/virtual-stub-dispatch.md
[2]: https://github.com/dotnet/runtime/blob/main/docs/design/coreclr/botr/virtual-stub-dispatch.md#working-set-reduction
[3]: https://learn.microsoft.com/en-us/archive/msdn-magazine/2005/may/net-framework-internals-how-the-clr-creates-runtime-objects#S9
[4]: https://i.stack.imgur.com/FiQcr.png |
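A toy Python sketch of the sparse interface-offset-table idea described above (my own model, not actual CLR code): every initialized interface gets a process-wide index, and each concrete type owns a sparse table indexed by it, with real entries only for the interfaces it implements.

```python
# Process-wide interface IDs, assigned as interfaces are initialized
interface_ids = {}

def register_interface(name):
    interface_ids.setdefault(name, len(interface_ids))
    return interface_ids[name]

class TypeLayout:
    """Per-type method table plus a sparse interface offset table."""
    def __init__(self, methods, implemented):
        # methods: flat "method table", a list of callables
        # implemented: {interface_name: offset of its first slot in methods}
        self.methods = methods
        size = max((register_interface(i) for i in implemented), default=-1) + 1
        self.iface_offsets = [None] * size  # sparse: None = unused entry
        for iface, offset in implemented.items():
            self.iface_offsets[register_interface(iface)] = offset

def dispatch(layout, iface, slot):
    # Constant-time: two table lookups, no hierarchy walking
    return layout.methods[layout.iface_offsets[interface_ids[iface]] + slot]

register_interface("IDisposable")       # id 0; Widget's entry stays None
widget = TypeLayout(
    methods=[lambda: "MoveNext", lambda: "Current"],
    implemented={"IEnumerator": 0},     # IEnumerator's slots start at offset 0
)
print(dispatch(widget, "IEnumerator", 1)())  # Current
```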
Set `allowUnreachableCode: true` in `tsconfig.json`. |
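For reference, a minimal `tsconfig.json` sketch (the option goes under `compilerOptions`):

```json
{
  "compilerOptions": {
    "allowUnreachableCode": true
  }
}
```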
I need to replace .env variables in the Azure DevOps release pipeline. I have used the 'Replace Tokens' task for this.
Adding screenshots below for more understanding.
Here is my CI build pipeline
[![enter image description here][1]][1]
.env file from the code
[![enter image description here][2]][2]
I use replace tokens to replace the env variables from the artifact drop
[![enter image description here][3]][3]
Task to show content post the replacement of variables
[![enter image description here][4]][4]
Logs confirming tokens replaced in the task
[![enter image description here][5]][5]
show content logs confirming the replaced variables
[![enter image description here][6]][6]
But when I run the application to make an API call, I still see the unreplaced token variable in the console log.
I'm reading the variables in the code using `process.env.{VariableName}`.
[![enter image description here][7]][7]
I think the variables are getting replaced in the .env file but as the .env file is not a part of the build folder which gets deployed, the changes are not reflecting in the target environment. Could someone help here?
[1]: https://i.stack.imgur.com/0kC2y.png
[2]: https://i.stack.imgur.com/ha2DD.png
[3]: https://i.stack.imgur.com/ccrGy.png
[4]: https://i.stack.imgur.com/5xQC6.png
[5]: https://i.stack.imgur.com/vBQp7.png
[6]: https://i.stack.imgur.com/Se2er.png
[7]: https://i.stack.imgur.com/mxVZX.png |
Hi, I'm kind of new to PHP. May I ask why it doesn't show any records from my database?
I hope someone can help me show the records of my database in an HTML page.
I double-checked everything and still can't find the error.
How can I fix this?
Here's the source code of my work:
```
<table class="table">
<thead>
<tr>
<th class="font-weight-bold">No</th>
<th class="font-weight-bold">Branch</th>
<th class="font-weight-bold">Shift</th>
<th class="font-weight-bold">Nationality</th>
<th class="font-weight-bold">Classification</th>
<th class="font-weight-bold">Local Male</th>
<th class="font-weight-bold">Local Female</th>
<th class="font-weight-bold">Foreign Male</th>
<th class="font-weight-bold">Foreign Female</th>
<th class="font-weight-bold">Admission Date</th>
<th class="font-weight-bold">Status</th>
<th class="font-weight-bold">Action</th>
</tr>
</thead>
<tbody>
<?php
if (isset($_GET['pageno'])) {
$pageno = $_GET['pageno'];
} else {
$pageno = 1;
}
// Formula for pagination
$no_of_records_per_page = 15;
$offset = ($pageno-1) * $no_of_records_per_page;
$ret = "SELECT ID FROM tblstudent";
$query1 = $dbh -> prepare($ret);
$query1->execute();
$results1=$query1->fetchAll(PDO::FETCH_OBJ);
$total_rows=$query1->rowCount();
$total_pages = ceil($total_rows / $no_of_records_per_page);
$sql="SELECT
tblstudent.StuID,tblstudent.ID as sid,
tblstudent.Branch,
tblstudent.Shift,
tblstudent.Nationality,
tblstudent.Classification,
tblstudent.LocalMale,
tblstudent.LocalFemale,
tblstudent.ForeignMale,
tblstudent.ForeignFemale,
tblstudent.DateofAdmission,
tblstudent.Status
LIMIT $offset, $no_of_records_per_page";
$cnt=1;
if($query->rowCount() > 0)
{
foreach($results as $row)
{
?>
<tr>
<td><?php echo htmlentities($cnt);?></td>
<td><?php echo htmlentities($row->Branch);?></td>
<td><?php echo htmlentities($row->Shift);?></td>
<td><?php echo htmlentities($row->Nationality);?></td>
<td><?php echo htmlentities($row->Classification);?></td>
<td><?php echo htmlentities($row->LocalMale);?></td>
<td><?php echo htmlentities($row->LocalFemale);?></td>
<td><?php echo htmlentities($row->ForeignMale);?></td>
<td><?php echo htmlentities($row->ForeignFemale);?></td>
<td><?php echo htmlentities($row->DateofAdmission);?></td>
<td><?php echo htmlentities($row->Status);?></td>
<td>
<a href="edit-student-detail.php?editid=<?php echo htmlentities ($row->sid);?>" class="btn btn-primary btn-sm"><i class="icon-eye"></i></a>
<a href="manage-students.php?delid=<?php echo ($row->sid);?>" onclick="return confirm('Do you really want to Delete ?');" class="btn btn-danger btn-sm"> <i class="icon-trash"></i></a>
</td>
</tr><?php $cnt=$cnt+1;}} ?>
</tbody>
</table>
</div>
<div align="left">
<ul class="pagination" >
<li><a href="?pageno=1"><strong>First></strong></a></li>
<li class="<?php if($pageno <= 1){ echo 'disabled'; } ?>">
<a href="<?php if($pageno <= 1){ echo '#'; } else { echo "?pageno=".($pageno - 1); } ?>"><strong style="padding-left: 10px">Prev></strong></a>
</li>
<li class="<?php if($pageno >= $total_pages){ echo 'disabled'; } ?>">
<a href="<?php if($pageno >= $total_pages){ echo '#'; } else { echo "?pageno=".($pageno + 1); } ?>"><strong style="padding-left: 10px">Next></strong></a>
</li>
<li><a href="?pageno=<?php echo $total_pages; ?>"><strong style="padding-left: 10px">Last</strong></a></li>
</ul>
</div>
</div>
</div>
</div>
</div>
</div>
```
I hope someone can help me with this, much appreciated.
Records not showing php |
|php|mysqli| |
null |
I figured it out by messing around with the settings.
1. Open the Edge file location.
2. Open Properties -> Compatibility. Click 'Change settings for all users'.
3. Check 'Run this program as an administrator'.
4. Apply, OK.
Now, if we open Edge, it opens with Elevated status 'Yes'.
If it's still opening with status 'No', as highlighted by @YuZhou:
1. Open Task Manager
2. Go to Details
3. End all Microsoft Edge instances.
4. Open Edge
Works in Selenium via existing code, no changes. |
I'm interested in integrating phone call communication between my app users. I'm trying to gather information on implementation solutions and guidelines.
From what I've gathered, there is an option to use in-app call services, where the application would handle the communication using VoIP while potentially masking the actual personal data.
Another option is to use some sort of phone proxy to secure personal data.
An alternative, simple solution is asking the user for authorization and providing his phone number to the other user.
I'm not sure which methodology to use, or whether there are use cases where one is okay and another is not. For instance, Uber has an in-app communication system which secures the users' data, while on the other hand Google Maps, which connects users with businesses, uses the phone's calling service.
All my searches on Google came up empty-handed. If anyone has gone through this process or has access to any development principles and guidelines on this topic, I'd be grateful. Thanks!
Pyhton ldap3 NTLM unable to return json.loads data |
|python|ntlm|ldap3| |
null |
I am trying to add overlay text to images using the GD library in PHP. While it works perfectly for characters in English, I am having rendering issues when the text is in Hindi, even though the font used supports Hindi. The text is read correctly from the database, but upon transformation and rendering on the image using the font, the result is not as desired.
I am using the font downloaded from here: [RozhaOne](https://fonts.google.com/specimen/Rozha+One)
This is the code which I am executing
```
function createImageWithText()
{
$image = imagecreatefromjpeg($this->asset_url.'/images/input.jpeg');
$fontSize = 32;
list($r, $g, $b) = sscanf("#00bfff", "#%02x%02x%02x");
$fontColor = imagecolorallocate($image, $r, $g, $b);
$font = $this->asset_url.'/fonts/RozhaOne-Regular.ttf';
$text = "ΰ€ΰ₯ΰ€€ΰ€Ώ";
imagettftext(
$image,
$fontSize,
0,
500,
500,
$fontColor,
$font,
$text
);
$save_path = $this->asset_url.'/images/output.png';
imagepng($image, $save_path);
imagedestroy($image);
}
```
Actual Output
[](https://i.stack.imgur.com/MzHf7.png)
Expected Output
[](https://i.stack.imgur.com/MTTPH.png)
I have tried the solutions suggested [here](https://stackoverflow.com/questions/21188046/writing-hindi-fonts-with-gd-library-do-not-render-as-desired) and [here](https://stackoverflow.com/questions/16075553/gd-library-create-image-with-dynamic-text-on-it), but they did not work with the specified font or with other Hindi-supporting fonts.
This seems to be an issue with the GD library (using PHP 7.4) handling Hindi script; it would be great to get some insights on how this can be solved/supported.