# Issue

The error you see is caused by importing and using the low-level `Router` component, which has a couple of required props. See [Router][1]:

```typescript
declare function Router(
  props: RouterProps
): React.ReactElement | null;

interface RouterProps {
  basename?: string;
  children?: React.ReactNode;
  location: Partial<Location> | string; // <-- required!
  navigationType?: NavigationType;
  navigator: Navigator; // <-- required!
  static?: boolean;
}
```

It's the missing `location` prop that has the `pathname` property that the router needs. You can see in the [source][2] where the low-level `Router` attempts to destructure from the undefined `location` prop.

```typescript
export function Router({
  basename: basenameProp = "/",
  children = null,
  location: locationProp, // <-- no default/fallback value
  navigationType = NavigationType.Pop,
  navigator,
  static: staticProp = false,
  future,
}: RouterProps): React.ReactElement | null {
  .....

  if (typeof locationProp === "string") {
    locationProp = parsePath(locationProp);
  }

  let {
    pathname = "/", // <-- attempted destructure from undefined!
    search = "",
    hash = "",
    state = null,
    key = "default",
  } = locationProp;

  ....
}
```

You are also missing other key parts of the React Router v6 API in the other components, i.e. all `Route` components need to be rendered into a `Routes` component which holds the path-matching logic, and the `Route` component API changed significantly as well.

# Solution

You'll pretty much never need to use the low-level `Router` component. You should instead use one of the high-level routers, e.g. `BrowserRouter`, `HashRouter`, `MemoryRouter`, etc., which all implement and encapsulate the base required props of the `Router` component.
main.jsx

```jsx
import React from 'react'
import ReactDOM from 'react-dom/client'
import App from './App.jsx'
import './index.css'
import { BrowserRouter as Router } from 'react-router-dom'

ReactDOM.createRoot(document.getElementById('root')).render(
  <React.StrictMode>
    <Router>
      <App />
    </Router>
  </React.StrictMode>
);
```

Update the rendering of routes in your `App` component to wrap all `Route` components in a `Routes` so they can be matched, and update your `Route` components to use the correct `element` prop to render the routed content. The `element` prop expects a `React.ReactNode`, or more colloquially, JSX.

App.jsx

```jsx
import { Routes, Route } from "react-router-dom"
import Home from "./Pages/Home"

function App() {
  return (
    <Routes>
      <Route path="/" element={<Home />} />
      ... other routes ...
    </Routes>
  );
}

export default App;
```

For more complete details see:

* [Migrating from v5 guide][3]
* [Main Concepts][4]
* [Picking a Router][5]
* [Routes][6]
* [Route][7]

[1]: https://reactrouter.com/en/main/router-components/router
[2]: https://github.com/remix-run/react-router/blob/9e7486b89e712b765d947297f228650cdc0c488e/packages/react-router/lib/components.tsx#L448
[3]: https://reactrouter.com/en/main/upgrading/v5
[4]: https://reactrouter.com/en/main/start/concepts
[5]: https://reactrouter.com/en/main/routers/picking-a-router
[6]: https://reactrouter.com/en/main/components/routes
[7]: https://reactrouter.com/en/main/route/route
I have several apps running with Docker Compose. All the applications are accessible except my Next.js app, which is suddenly inaccessible after a new deployment. If I send a curl request to http://localhost:8100 I get:

> curl: (52) Empty reply from server

If I send a curl request to http://localhost:8082 I receive the actual page response.

It might be due to changes I've made to the Next.js app. However, during development everything runs fine and the build also runs as expected. The Next.js app seems to be deployed correctly; I at least get this output after starting the Next.js container:

> ▲ Next.js 14.1.0
> - Local: http://localhost:8100
> - Network: http://0.0.0.0:8100
> ✓ Ready in 219ms

docker-compose file:

```yaml
version: '3'
services:
  admin:
    image: 'registry.gitlab.com/...react-app:latest'
    container_name: admin
    restart: always
    expose:
      - 8082
    ports:
      - 8082:80
    networks:
      - app-network
  nextjs-app:
    image: 'registry.gitlab.com/...nextjs-14-app:latest'
    container_name: nextjs-app
    restart: always
    expose:
      - 8100
    ports:
      - 8100:80
    networks:
      - app-network
    environment:
      - NODE_ENV=production
networks:
  app-network:
    driver: bridge
```

Dockerfile:

```dockerfile
FROM node:18-alpine AS base

FROM base AS deps
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
RUN \
  if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
  elif [ -f package-lock.json ]; then npm ci; \
  elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \
  else echo "Lockfile not found." && exit 1; \
  fi

FROM base AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
ENV NEXT_TELEMETRY_DISABLED 1
RUN npx prisma generate
RUN yarn build

FROM base AS runner
WORKDIR /app
ENV NODE_ENV production
ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder /app/public ./public
RUN mkdir .next
RUN chown nextjs:nodejs .next
RUN chown nextjs:nodejs public
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 8100
ENV PORT 8100
ENV HOSTNAME "0.0.0.0"
CMD ["node", "server.js"]
```
ERR_CONNECTION_REFUSED NextJS 14 app in docker
|docker|next.js|docker-compose|deployment|dockerfile|
I know how to fix it: `tpool::post -nowait` here should be used without the `-nowait` option. See the tpool documentation: https://www.tcl.tk/man/tcl/ThreadCmd/tpool.htm#M10
Hello, great question. Personally I haven't come across such a tool, but what I usually do, since I know I use the same framework, is set up and reuse a template I created earlier, change the placeholders before deployment, and test to see whether I encounter bugs. I rarely encounter bugs since I have already refined the template. I hope this helps, and if you have any questions feel free to reach out.
|java|agents-jade|multi-agent|
If you really want to style the `<option>`s within a `<select>`, consider switching to a JavaScript/CSS based drop down such as http://getbootstrap.com/2.3.2/components.html#dropdowns or https://silviomoreto.github.io/bootstrap-select/examples/. This is because browsers such as IE [do not allow styling of options within `<select>` elements][1]. Chrome/OSX also has this problem: you cannot style options. However, a warning is attached to that approach. These types of menus work very differently on mobile platforms because native elements aren't used, and they can have annoying quirks on desktop as well. My advice is 1) don't write your own, and 2) find a library that's been really well tested.

[1]: https://stackoverflow.com/questions/6655625/styling-options-in-bold-in-internet-explorer
``` flutter clean flutter pub get pod install ```
I have an Azkaban executor server process, which is a Java service. I noticed that when running a random sleep script, the CPU usage becomes very high, consistently exceeding 2000%, and the "top" command shows high sys usage. I captured a jstack file hoping to analyze the cause, but I found that many of the stack traces show normal calls. For example, there are over 60 instances stuck at "at azkaban.execapp.JobRunner.run(JobRunner.java:652)", [![enter image description here][1]][1] where it hangs at "Thread.currentThread().setName", and 96 instances stuck at "at java.util.concurrent.ConcurrentHashMap.putVal(ConcurrentHashMap.java:1019)". These are supposed to be quick operations and should not be causing a bottleneck.

----

The same program, when run on a KVM machine (created by myself) with 10 cores and 86GB of memory, uses around 200% CPU and handles around 700 concurrent tasks. However, when run on an Alibaba Cloud instance with 32 cores and 128GB of memory, the CPU usage goes over 2000% and it seems to handle only about 400 concurrent tasks. This makes me suspect there might be a performance issue with the cloud instance. How should I go about troubleshooting this problem?

This is my jstack file from the Alibaba Cloud server: https://drive.google.com/file/d/1FXPfndCuhVHFKjQUKZYomvaRoZQ5Q5aP/view?usp=drive_link

[1]: https://i.stack.imgur.com/kupx8.png
Azkaban Executor Java Process CPU usage very high, The "top" command shows high sys usage
|java|cpu-usage|kvm|azkaban|
I have tried multiple approaches to move my Flutter web app from `http://localhost` to `https://localhost`. I created a certificate as follows.

Create the private key file:

```
openssl genrsa -out localhost.key 2048
```

Generate a certificate signing request (CSR):

```
openssl req -new -key localhost.key -out localhost.csr -subj "/CN=localhost"
```

Generate a self-signed certificate using the CSR and key:

```
openssl x509 -req -days 365 -in localhost.csr -signkey localhost.key -out localhost.crt
```

I put the .key and .csr files in a folder called cert, along with the certificate created above. Then I tried `flutter run --web-launch-url=https://localhost:8444`, with no luck; I get the error `https://localhost:8444` is not a valid HTTP URL.

I created a "serve.json" file set up like this:

```json
{
  "request_handlers": [
    {
      "url_pattern": "/",
      "delegate": {
        "require_trusted_origin": false,
        "delegate_name": "flutter-tools/web-server",
        "secure": true,
        "web-port": 8444,
        "key_path": "cert/localhost.key",
        "certificate_path": "cert/localhost.crt"
      }
    }
  ]
}
```

and ran the command `flutter run -d web-server --web-port 8444`; it still opens http://localhost:8444.

Then I tried creating a "webdev.yaml" file with this configuration:

```yml
serve:
  hostname: localhost
  port: 8444
  use_https: true
  certificate_path: cert/localhost.crt
  key_path: cert/localhost.key
  https_port: 8444
```

activated webdev with `flutter pub global activate webdev`, and ran `webdev serve` to open Chrome with https. It is still going to "http", not "https". No luck by the end of the day. I need to open my localhost over https as a secure service.
Here's my issue: I'm using SvelteKit + Lucia v3 + MongoDB, and I followed the instructions for email + password authentication. Signup works fine; the user and session are added in the database. Sign-in also works fine; createSession and createSessionCookie both return correct values on sign-in. Cookies are also stored and I can see them in devtools. However, validateSession always returns null.

hooks.server.js:

```
import { lucia } from '$lib/server/auth';

export async function handle({ event, resolve }) {
    const sessionId = event.cookies.get(lucia.sessionCookieName);
    if (!sessionId) {
        event.locals.user = null;
        event.locals.session = null;
        return resolve(event);
    }

    const { session, user } = await lucia.validateSession(sessionId);
    //console.log(session, sessionId);
    if (session && session.fresh) {
        const sessionCookie = lucia.createSessionCookie(session.id);
        event.cookies.set.bind(event.cookies)(sessionCookie.name, sessionCookie.value, {
            path: '.',
            ...sessionCookie.attributes
        });
    }
    // if (!session) {
    //     const sessionCookie = lucia.createBlankSessionCookie();
    //     event.cookies.set.bind(event.cookies)(sessionCookie.name, sessionCookie.value, {
    //         path: '.',
    //         ...sessionCookie.attributes
    //     });
    // }
    event.locals.user = user;
    event.locals.session = session;
    return resolve(event);
}
```

sessionId is also correct; the only problem is with validateSession.

auth.js:

```
import { Lucia } from 'lucia';
import { dev } from '$app/environment';
import { adapter } from './MongoClient';

export const lucia = new Lucia(adapter, {
    sessionCookie: {
        attributes: {
            secure: !dev
        }
    },
    getUserAttributes: (attributes) => {
        return {
            // attributes has the type of DatabaseUserAttributes
            username: attributes.username
        };
    }
});
```

MongoClient:

```
const client = new MongoClient(uri);
await client.connect();

const db = client.db();
const User = db.collection('user');
const Session = db.collection('sessions');
const adapter = new MongodbAdapter(Session, User);

export { db, adapter };
```
lucia.validateSession() returns null
|mongodb|authentication|sveltekit|
Commit count for one revision (`HEAD`, `master`, or a commit hash):

```
git rev-list --count <revision>
```

Commit count for all branches:

```
git rev-list --count --all
```

Update: this counts commits in your local repository.
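For example, here is the pair of commands in action. The throwaway repository below is only scaffolding so the commands run anywhere; in a real project you would run the `git rev-list` lines directly in your clone:

```shell
# Set up a throwaway repo with two commits (demo scaffolding only)
cd "$(mktemp -d)"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "first"
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "second"

# Commits reachable from one revision
git rev-list --count HEAD   # prints: 2

# Commits across all local branches
git rev-list --count --all  # prints: 2
```

Note that `--all` only covers refs your local repository knows about; fetch first if you want remote branches included.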
I'm encountering an issue with fetching data from my database in a production environment. The code works perfectly fine on localhost, but when deployed to production, it fails to retrieve updated data. Here's the relevant code snippet:

```
import {connect} from "@/dbConnection/dbConnection";
import Post from "@/modals/postModal";

export const GET = async () => {
    console.log('Request Made on Get Posts...');
    try {
        await connect();
        const posts = await Post.find().sort({createdAt: -1}).limit(20);
        if (!posts) {
            return Response.json({error: "Posts not found"});
        }
        return Response.json({posts});
    } catch (e) {
        return Response.json({error: "Database error"});
    }
}
```

Code for fetching data:

```
const [posts, setPosts] = useState([]);
const [loading, setLoading] = useState(false);
const [error, setError] = useState(null);

useEffect(() => {
    setLoading(true)
    setTimeout(() => {
        fetchPosts().then((r) => {
            setPosts(r.posts)
            setLoading(false)
        }).catch((e) => {
            setError("Error fetching posts")
            setLoading(false)
        });
    }, 2000)
}, []);

async function fetchPosts() {
    try {
        const response = await axios.get('/api/post/getblogs');
        return response.data;
    } catch (error) {
        setError("Error fetching posts")
    }
}
```

Production link: https://blogging-website-nextjs-blue.vercel.app/

**Steps taken:**

- Checked database connection configuration, which seems correct.
- Tested the code on localhost, where it works perfectly.
- Reviewed logs for any errors or warnings, but found none related to this issue.

**Expected behavior:**

When a request is made to fetch posts, the code should retrieve the latest data from the database.
In my case the issue was caused by running `sudo vim`. I had the colorscheme applied without problems before, but now it refused to take effect at all. It turns out that by adding `sudo` before `vim`, the program loads `/root/.vimrc` instead of `~/.vimrc`. Just copy the file from your home directory to `/root/` and voilà!
That assembly language is MASM, not NASM. For starters, NASM segments are defined differently. Instead of

```
Code segment word public 'CODE'
```

we write

```
section .text
```

And that "ASSUME" declaration... You must have an ancient book; that is old, old MASM code. It brings back memories from the early 1980s for me! There are many differences between NASM and MASM, and your code needs quite a bit of work to port. If you want to port that MASM code to NASM, see https://stackoverflow.com/questions/2035747/masm-nasm-differences, the NASM documentation, or Google "NASM vs MASM".

TL;DR: you are writing MASM code, not NASM.
Faced this problem and solved it after installing the below version of SQLAlchemy:

```
pip install SQLAlchemy==1.4.45
```
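If you want the pin to persist across environments, the same version constraint can go in a `requirements.txt` (assuming a pip-based workflow):

```
SQLAlchemy==1.4.45
```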
|javascript|jquery|riot.js|
|javascript|jquery|angularjs|riot.js|
I've tried multiple times to import a library to my React project, but every time I try, an error like this pops up:

```
# npm resolution error report

While resolving: react-server-dom-webpack@0.0.1
Found: react@18.2.0
node_modules/react
  peer react@">= 16" from @heroicons/react@2.0.18
  node_modules/@heroicons/react
    @heroicons/react@"^2.0.18" from the root project
  peerOptional react@"^18.0.0" from framer-motion@11.0.8
  node_modules/framer-motion
    framer-motion@"^11.0.8" from the root project
  peer react@"^18.2.0" from next@14.0.2
  node_modules/next
    next@"^14.0.2" from the root project
  peer react@"^18.2.0" from react-dom@18.2.0
  node_modules/react-dom
    peerOptional react-dom@"^18.0.0" from framer-motion@11.0.8
    node_modules/framer-motion
      framer-motion@"^11.0.8" from the root project
    peer react-dom@"^18.2.0" from next@14.0.2
    node_modules/next
      next@"^14.0.2" from the root project
    react-dom@"^18.2.0" from the root project
  peer react@">= 16.8.0 || 17.x.x || ^18.0.0-0" from styled-jsx@5.1.1
  node_modules/styled-jsx
    styled-jsx@"5.1.1" from next@14.0.2
    node_modules/next
      next@"^14.0.2" from the root project
  react@"^18.2.0" from the root project
  peer react@"*" from react-icons@5.0.1
  node_modules/react-icons
    react-icons@"*" from the root project

Could not resolve dependency:
peer react@"^17.0.0" from react-server-dom-webpack@0.0.1
node_modules/react-server-dom-webpack
  react-server-dom-webpack@"^0.0.1" from the root project

Conflicting peer dependency: react@17.0.2
node_modules/react
  peer react@"^17.0.0" from react-server-dom-webpack@0.0.1
  node_modules/react-server-dom-webpack
    react-server-dom-webpack@"^0.0.1" from the root project

Fix the upstream dependency conflict, or retry this command with --force or --legacy-peer-deps
to accept an incorrect (and potentially broken) dependency resolution.
```

I have no idea how to fix it, and it seems like ChatGPT doesn't know either. I tried to import the react-icons library and expected it to be installed without an error.
This script is used to send mail via SendGrid and is written in PowerShell. I am trying to add CC and BCC in the code below, but it is not working as expected.

```powershell
param (
    [Parameter(Mandatory=$false)]
    [object] $WebhookData
)

write-output "start"
write-output ("object type: {0}" -f $WebhookData.gettype())
write-output $WebhookData
write-output "`n`n"

if ($WebhookData.RequestBody) {
    $details = (ConvertFrom-Json -InputObject $WebhookData.RequestBody)
    foreach ($x in $details) {
        $destEmailAddress = $x.destEmailAddress
        Write-Output "To - $destEmailAddress"
        $fromEmailAddress = $x.fromEmailAddress
        Write-Output "From - $fromEmailAddress"
        $subject = $x.subject
        Write-Output "Subject Line - $subject"
        $content = $x.content
        Write-Output "Mail Body - $content"
        # Optional CC and BCC as arrays
        $ccEmailAddresses = $x.ccEmailAddress
        $bccEmailAddresses = $x.bccEmailAddress
    }
}
else {
    Write-Output "No details received"
}

Disable-AzContextAutosave -Scope Process

$SENDGRID_API_KEY = "XYZ"
$headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
$headers.Add("Authorization", "Bearer " + $SENDGRID_API_KEY)
$headers.Add("Content-Type", "application/json")

$body = @{
    personalizations = @(
        @{
            to = @(
                @{
                    email = $destEmailAddress
                }
            )
        }
    )
}

if ($ccEmailAddresses) {
    $cc = @()
    foreach ($ccAddress in $ccEmailAddresses) {
        $cc += @{ email = $ccAddress }
    }
    $body.personalizations[0].cc = $cc
}

if ($bccEmailAddresses) {
    $bcc = @()
    foreach ($bccAddress in $bccEmailAddresses) {
        $bcc += @{ email = $bccAddress }
    }
    $body.personalizations[0].bcc = $bcc
}

$body.from = @{
    email = $fromEmailAddress
}
$body.subject = $subject
$body.content = @(
    @{
        type  = "text/html"
        value = $content
    }
)

$bodyJson = $body | ConvertTo-Json -Depth 4

$response = Invoke-RestMethod -Uri https://api.sendgrid.com/v3/mail/send -Method Post -Headers $headers -Body $bodyJson
```

But below is the error I get. I can't seem to figure out what needs to be changed. TIA.
```
At line:60 char:11 + if ($ccEmailAddress) { + ~ Missing '=' operator after key in hash literal.
At line:71 char:9 + } + ~ Missing closing ')' in subexpression.
At line:72 char:5 + } + ~ Missing closing ')' in subexpression.
At line:73 char:1 + ) + ~ Unexpected token ')' in expression or statement.
At line:74 char:1 + } + ~ Unexpected token '}' in expression or statement.
At line:76 char:1 + from = @{ + ~~~~ The 'from' keyword is not supported in this version of the language.
At line:86 char:1 + } + ~ Unexpected token '}' in expression or statement.
```

The JSON I am passing to this script is as below. CC and BCC are passed as arrays.

```
{'destEmailAddress' : 'to@abc.com',
'fromEmailAddress' : 'from@abc.com',
'subject' : 'mailSubject',
'content' : 'mailContent',
'ccEmailAddresses' : '['cc1@abc.com','cc2@abc.com']',
'bccEmailAddresses' : '['']'
}
```
Powershell - Error while adding CC and BCC - Missing '=' after key in hash literal
|powershell|azure-powershell|
I fixed it by putting the image in the `src` folder; when I tried to load it from another folder, nothing happened.
If I were to answer, I would try to be as efficient as possible and not create a "one-liner," as it likely doesn't do what you expect. I know you mention not looping, but for those who see your question and don't mind looping, I would do:

```delphi
function RemoveExcessiveSpaces(const Input: string): string;
var
  i, StartIndex, EndIndex: Integer;
  sb: TStringBuilder;
begin
  StartIndex := 1;
  EndIndex := Length(Input);

  // Find the first non-space character
  while (StartIndex <= EndIndex) and (Input[StartIndex] = ' ') do
    Inc(StartIndex);

  // Find the last non-space character
  while (EndIndex >= StartIndex) and (Input[EndIndex] = ' ') do
    Dec(EndIndex);

  // Exit if the string is empty or all spaces
  if StartIndex > EndIndex then
    Exit('');

  sb := TStringBuilder.Create(EndIndex - StartIndex + 1);
  try
    for i := StartIndex to EndIndex do
    begin
      // Append current character if it's not a space or if it's a single space between words
      if (Input[i] <> ' ') or ((i > StartIndex) and (Input[i - 1] <> ' ')) then
        sb.Append(Input[i]);
    end;
    Result := sb.ToString;
  finally
    sb.Free;
  end;
end;
```
Turns out `IOPCIDevice::_CopyDeviceMemoryWithIndex` was indeed the function needed to implement this (but the fact that it's private is still an inconvenience).

### Sample code

Below is some sample code showing how this could be implemented (the code uses `MyDriver` for the driver class name and `MyDriverUserClient` for the user client).

Relevant sections from the `MyDriver.cpp` implementation:

```cpp
struct MyDriver_IVars {
    IOPCIDevice* pciDevice = nullptr;
};

// MyDriver::init/free/Start/Stop/NewUserClient implementation omitted for brevity

IOMemoryDescriptor* MyDriver::copyBarMemory(uint8_t barIndex)
{
    IOMemoryDescriptor* memory;
    uint8_t barMemoryIndex, barMemoryType;
    uint64_t barMemorySize;

    // Warning: error handling is omitted for brevity
    ivars->pciDevice->GetBARInfo(barIndex, &barMemoryIndex, &barMemorySize, &barMemoryType);
    ivars->pciDevice->_CopyDeviceMemoryWithIndex(barMemoryIndex, &memory, this);

    return memory;
}
```

Relevant sections from the `MyDriverUserClient.cpp` implementation:

```cpp
struct MyDriverUserClient_IVars {
    MyDriver* myDriver = nullptr;
};

// MyDriverUserClient::init/free/Start/Stop implementation omitted for brevity

kern_return_t IMPL(MyDriverUserClient, CopyClientMemoryForType)
//(uint64_t type, uint64_t *options, IOMemoryDescriptor **memory)
{
    *memory = ivars->myDriver->copyBarMemory(kPCIMemoryRangeBAR0);
    return kIOReturnSuccess;
}
```

### Additional Resources

A complete implementation that uses this pattern can be found in the [ivshmem.dext](https://github.com/vially/ivshmem.dext) project (which implements a macOS driver for [IVSHMEM](https://github.com/qemu/qemu/blob/5012e522aca161be5c141596c66e5cc6082538a9/docs/specs/ivshmem-spec.rst)).
I am creating an MLM (multi-level marketing) system in PHP with a MySQL database. I want to fetch child user ids based on a parent id. I found a solution at https://stackoverflow.com/questions/45444391/how-to-count-members-in-15-level-deep-for-each-level-in-php but I am getting errors. I have created a class:

```
<?php
Class Team extends Database
{
    private $dbConnection;

    function __construct($db)
    {
        $this->dbConnection = $db;
    }

    public function getDownline($id, $depth=5)
    {
        $stack = array($id);
        for($i=1; $i<=$depth; $i++) {
            // create an array of levels, each holding an array of child ids for that level
            $stack[$i] = $this->getChildren($stack[$i-1]);
        }
        return $stack;
    }

    public function countLevel($level)
    {
        // expects an array of child ids
        settype($level, 'array');
        return sizeof($level);
    }

    private function getChildren($parent_ids = array())
    {
        $result = array();
        $placeholders = str_repeat('?,', count($parent_ids) - 1). '?';
        $sql="select id from users where pid in ($placeholders)";
        $stmt=$this->dbConnection->prepare($sql);
        $stmt->execute(array($parent_ids));
        while($row=$stmt->fetch()) {
            $results[] = $row->id;
        }
        return $results;
    }
}
```

And I'm using the class like this:

```
$id = 4;
$depth = 2;
// get the counts of his downline, only 2 deep.
$downline_array = $getTeam->getDownline($id, $depth=2);
```

I am getting these errors:

> Fatal error: Uncaught TypeError: count(): Argument #1 ($value) must be of type Countable|array, int given

and second:

> Warning: PDOStatement::execute(): SQLSTATE[HY093]: Invalid parameter number: number of bound variables does not match number of tokens

I want to fetch child user ids down to 5 levels.
Welcome to Stack Overflow! The error you are seeing comes from the expected type (`Record<string, string | string[]> | undefined`) not matching your specified type `user_id: string`. I agree the documentation is a bit lacking in this regard.

My suggestion is to not type your params in your `Page.tsx`:

```typescript
const UserDashboard = ({ params }) => {
  // ...
}
```

And then, before using the params, check that they are valid, either manually or by using a validation library such as [Zod](https://zod.dev/). That could look like this:

```typescript
const UserDashboard = ({ params: rawParams }) => {
  const params = z
    .object({ user_id: z.string() })
    .safeParse(rawParams);
  if (!params.success) redirect('/');
  // ...
}

export default withPageAuthRequired(UserDashboard, {
  returnTo: ({ params: rawParams }) => {
    const params = z
      .object({ user_id: z.string() })
      .safeParse(rawParams);
    if (params.success)
      return `${process.env.NEXT_PUBLIC_BASE_URL}/user-dashboard/${params.data.user_id}`;
  }
});
```
I get a file from a third party. The file seems to contain both ANSI and UTF-8 encoded characters (not sure if my terminology is correct). Changing the encoding in Notepad++ yields the following:

[![Notepad++ screenshot][1]][1]

[1]: https://i.stack.imgur.com/G0hcd.png

So when using ANSI encoding, Employee2 is incorrect. And when using UTF-8 encoding, Employee1 is incorrect. Is there a way in C# to set 2 encodings for a file? Whichever encoding I set in C#, one of the two employees is incorrect:

```csharp
string filetext = "";

// Employee1 is correct, Employee2 is wrong
filetext = File.ReadAllText(@"C:\TESTFILE.txt", Encoding.GetEncoding("ISO-8859-1"));
filetext = File.ReadAllText(@"C:\TESTFILE.txt", Encoding.GetEncoding("Windows-1252"));
filetext = File.ReadAllText(@"C:\TESTFILE.txt", Encoding.UTF7);
filetext = File.ReadAllText(@"C:\TESTFILE.txt", Encoding.Default);

// Employee1 is wrong, Employee2 is correct
filetext = File.ReadAllText(@"C:\TESTFILE.txt", Encoding.UTF8);
```

Has anyone else encountered this and found a solution?
So my code is basically copied from a walkthrough for making a Discord bot use ChatGPT. I'm using PyCharm for the project. This is the main code:

```
from typing import Final
import os
import discord
from dotenv import load_dotenv
from chatgpt_ai.openai import chatgpt_response

load_dotenv()
TOKEN: Final[str] = os.getenv('DISCORD_TOKEN')

class MyClient(discord.Client):
    async def on_ready(self):
        print("Successfully logged in as: ", self.user)

    async def on_message(self, message):
        print(message.content)
        if message.author == self.user:
            return

        command, user_message = None, None
        for text in ['/ai', '/bot', 'chatgpt']:
            if message.content.startswith(text):
                command = message.content.split('')[0]
                user_message = message.content.replace(text, '')
        print(command, user_message)

        if command == '/ai' or command == '/bot' or command == '/chatgpt':
            bot_response = chatgpt_response(prompt=user_message)
            await message.channel.send(f"Answer: {bot_response}")

intents = discord.Intents.default()
intents.message_content = True
client = MyClient(intents=intents)
```

Terminal error:

```
File "C:\Users\Flavi\PycharmProjects\discord_flavio_bot\discord_bot\main.py", line 28
    await message.channel.send(f"Answer: {bot_response}")
IndentationError: unexpected indent
```

What's also weird: from the line `for text in ['/ai', '/bot', 'chatgpt']:` onward, every line containing "message" is marked as an error, with suggestions to import from "email". I tried various things: checked my code for errors, spellchecking, and whether I missed something in the walkthrough. This is also my very first attempt at a Python project, so I might just be too inexperienced to even know where to look for the error. [enter image description here](https://i.stack.imgur.com/EKMjY.png)
Why does this error keep showing? What am I missing? `await message.channel.send(f"Answer: {bot_response}")` IndentationError: unexpected indent
|python|discord|artificial-intelligence|openai-api|
Remove your `Column` widget and add a `GridView.builder` like this:

```dart
return GridView.builder(
  physics: NeverScrollableScrollPhysics(),
  gridDelegate: SliverGridDelegateWithFixedCrossAxisCount(
    crossAxisCount: 2,
    crossAxisSpacing: 40,
    mainAxisSpacing: 20,
    childAspectRatio: 0.7,
  ),
  shrinkWrap: true,
  padding: EdgeInsets.symmetric(vertical: 30, horizontal: 25),
  itemCount: state.drinksList.length,
  itemBuilder: (context, index) {
    DrinksModel drink = state.drinksList[index];
```
My Firebase saves each user with 4 fields: name, email, password, and confirm password. The user class: [enter image description here](https://i.stack.imgur.com/pF3M1.png)

The Firebase data that I saved: [enter image description here](https://i.stack.imgur.com/TPCFf.png)

In my app I have an option to change the current password to a new password. It requires you to enter your previous password (the current one that Firebase saved) and then enter a new password (the user enters the passwords in `EditText` fields in the UI). See here how it looks: [enter image description here](https://i.stack.imgur.com/xvLcQ.png)

The user then has to click the 'confirm' button to change the old password to the new one. But before it changes, ***the app needs to check whether the old password entered matches the current password saved in Firebase.*** I don't know how to do that; please help me! I tried the code below, but I'm not sure I'm getting access to the variable I want. [enter image description here](https://i.stack.imgur.com/gMWla.png)

It always changes even when the previous password doesn't match the current one. Code of the database (for one user): [enter image description here][1]

[1]: https://i.stack.imgur.com/kHroc.png
The `end_of_string` flag is required. Consider, for example, inserting "1234" and "123456". The two strings share the prefix "1234", so the trie looks like:

```
(root)
  |
 '1'
  |
 '2'
  |
 '3'
  |
 '4'  (end_of_string)
  |
 '5'
  |
 '6'  (end_of_string)
```

[Demo][1]

The data structure is:

```cpp
// Trie__
// Structure representing a node in the trie.
struct Node final {
    std::unordered_map<char, std::unique_ptr<Node>> child_node{};  // Map of child nodes.
    bool end_of_string{ false };  // Flag to indicate end of a string
};

Node root_{};  // Root node of the trie (Represents a null prefix.)
// __Trie
```

[1]: https://onlinegdb.com/68EHiIuuM
So I have been having issues with printing all columns in a CSV file. The data within is laid out strangely in my opinion, however that is not relevant. I have been using pandas to see trends within a data CSV file. I was trying to print out the data as well as showing it graphically with pandas. When I try to print the filtered variable with all the data (the data spans many columns with the dates at the top; I will attach a photo of the code in the terminal and the data file so it is not too confusing), **it does not seem to want to print all the data.**

I have tried different methods such as changing it to the list() function and trying to convert it with .to_string or .to_html, however they don't work. The attached photo was taken after using .to_string; it just outputs "<bound method DataFrame.to_html of" at the start and then prints some column dates, but not the data under them.

```python
def region_check(region, startdate, enddate, house_type):
    # region, startdate, enddate
    df1 = df.loc[:, startdate:enddate]
    df2 = df.loc[:, 'Region Code':'Rooms']
    prop_filter = df["Property_Type"] == house_type
    result = pd.concat([df2, df1], axis=1, join='inner').where(df2["Region"] == region).where(prop_filter)
    result = pd.DataFrame(result)
    result.dropna(inplace=True)
    print(result)  # This is where it prints the data as text. At the end of result is where I was adding .to_string/html
    ave = df1.mean()
    ave.plot()
    plt.show()
    return result
```

[Picture of problem with .to_html][1]

[Picture of problem with only printing data][2]

[Picture of data file.][3]

[1]: https://i.stack.imgur.com/or9nw.png
[2]: https://i.stack.imgur.com/So0dg.png
[3]: https://i.stack.imgur.com/ki9bp.png
I have a table which one column is full of comments. There are multiple rows added and deleted each day. Each Comment is set to "do not move or size with cells" so when I click view comment it appears next to the box which is great. This seems to be worse when I have a filter applied to the table. However, if I click Edit Comment it jumps down the page, way after the actual bottom of the sheet. I actually tried all three of the options in Format Comment and it doesn't change. I have Page Breaks on which hasn't helped. I've searched and searched online and I can't find anything that relates to the edit comment jumping.
Edit Comment jumping to bottom of page when filter applied
|excel|
How do I import a library into my React project without getting this error?
|javascript|reactjs|import|react-icons|
null
One workaround is using window.Resize(width, height)
The comment posted by jasonharper is "on point." Unless the work done by `function` is sufficiently CPU-intensive, the time saved by running it in parallel will not make up for the additional overhead incurred by creating child processes.

If you are to use multiprocessing, I would simply break up the half-open interval [0, 100_000) into N smaller, non-overlapping half-open intervals where N is the number of CPU processors you have. I have chosen the `f(x)` function (`x ** 2`) such that the value of `x` that satisfies a specific value of `f(x)` (`9801198001`) is skewed so that a serial solution will not find a result (`99001`) until it has checked almost all possible values of `x`. Even so, for such a simple function a multiprocessing solution runs 10 times more slowly than the serial solution.

If the `f(x)` function is monotonically increasing or decreasing, then the serial solution can be further sped up using a binary search, which I have also included:

```python
from multiprocessing import Pool, cpu_count

def search(r):
    for x in r:
        if x ** 2 == 9801198001:
            return x
    return None

def main():
    import time

    # Parallel processing:
    pool_size = cpu_count()
    interval_size = 100_000 // pool_size
    lower_bound = 0
    args = []
    for _ in range(pool_size - 1):
        args.append(range(lower_bound, lower_bound + interval_size))
        lower_bound += interval_size
    # Final interval:
    args.append(range(lower_bound, 100_000))

    t = time.time()
    with Pool(pool_size) as pool:
        for result in pool.imap_unordered(search, args):
            if result is not None:
                break
                # An implicit call to pool.terminate() will be called
                # to terminate any remaining submitted tasks
    elapsed = time.time() - t
    print(f'result = {result}, parallel elapsed time = {elapsed}')

    # Serial processing:
    t = time.time()
    result = search(range(100_000))
    elapsed = time.time() - t
    print(f'result = {result}, serial elapsed time = {elapsed}')

    # Serial processing using a binary search
    # for monotonically increasing function:
    t = time.time()
    lower = 0
    upper = 100_000
    result = None
    while lower < upper:
        x = lower + (upper - lower) // 2
        sq = x * x
        if sq == 9801198001:
            result = x
            break
        if sq < 9801198001:
            lower = x + 1
        else:
            upper = x
    elapsed = time.time() - t
    print(f'result = {result}, serial binary search elapsed time = {elapsed}')

if __name__ == '__main__':
    main()
```

Prints:

```lang-None
result = 99001, parallel elapsed time = 0.2470550537109375
result = 99001, serial elapsed time = 0.02435159683227539
result = 99001, serial binary search elapsed time = 0.0
```
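As a side note, the hand-rolled binary search can also be driven by the standard library's `bisect` module by handing it a lazy "sequence" whose items are the f(x) values. This sketch is an addition for illustration, not part of the original timing comparison:

```python
from bisect import bisect_left

TARGET = 9801198001
N = 100_000

class Squares:
    """Lazy 'sorted sequence' whose i-th element is i*i; never materialized."""
    def __getitem__(self, i):
        return i * i
    def __len__(self):
        return N

# bisect_left only ever indexes into the sequence O(log N) times.
i = bisect_left(Squares(), TARGET)
result = i if i < N and i * i == TARGET else None
print(result)  # 99001
```

The membership check after the search is needed because `bisect_left` returns an insertion point whether or not the target is actually present.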
I have a SliverAppBar, but I want to make it so that when the user starts scrolling, the title becomes like a regular AppBar title (centered on top); before scrolling it sits on the left, not in the center. I don't quite understand how to do this. I also want the text style to be different for the two title states, but right now the text just scales up through `expandedTitleScale`, which is not quite the same.

```
SliverAppBar(
  pinned: true,
  snap: false,
  floating: false,
  surfaceTintColor: Colors.transparent,
  backgroundColor: Colors.transparent,
  collapsedHeight: 50,
  expandedHeight: 150.0,
  automaticallyImplyLeading: false,
  flexibleSpace: FlexibleSpaceBar(
    centerTitle: true,
    title: Text(
      title,
      style: context.appStyles.paragraphP1Medium,
    ),
    expandedTitleScale: 2,
    titlePadding: EdgeInsets.zero,
  ),
),
```
When I use TensorRT to run inference on my model, it seems that CPU memory is leaked! My `common` execution code is the official NVIDIA TensorRT sample code, with a few small modifications by me.

```
class HostDeviceMem:
    """Pair of host and device memory, where the host memory is wrapped in a numpy array"""

    def __init__(self, size: int, dtype: np.dtype):
        nbytes = size * dtype.itemsize
        host_mem = cuda_call(cudart.cudaMallocHost(nbytes))
        pointer_type = ctypes.POINTER(np.ctypeslib.as_ctypes_type(dtype))
        self._host = np.ctypeslib.as_array(ctypes.cast(host_mem, pointer_type), (size,))
        self._device = cuda_call(cudart.cudaMalloc(nbytes))
        self._nbytes = nbytes

    @property
    def host(self) -> np.ndarray:
        return self._host

    @host.setter
    def host(self, arr: np.ndarray):
        if arr.size > self.host.size:
            raise ValueError(
                f"Tried to fit an array of size {arr.size} into host memory of size {self.host.size}"
            )
        np.copyto(self.host[:arr.size], arr.flat, casting='safe')

    @property
    def nbytes(self) -> int:
        return self._nbytes

    @nbytes.setter
    def nbytes(self, nbytes: int):
        self._nbytes = nbytes

    @property
    def device(self) -> int:
        return self._device

    # @property
    # def nbytes(self) -> int:
    #     return self._nbytes

    def __str__(self):
        return f"Host:\n{self.host}\nDevice:\n{self.device}\nSize:\n{self.nbytes}\n"

    def __repr__(self):
        return self.__str__()

    def free(self):
        cuda_call(cudart.cudaFree(self.device))
        cuda_call(cudart.cudaFreeHost(self.host.ctypes.data))

# Allocates all buffers required for an engine, i.e. host/device inputs/outputs.
# If engine uses dynamic shapes, specify a profile to find the maximum input & output size.
# Frees the resources allocated in allocate_buffers
def free_buffers(inputs: List[HostDeviceMem], outputs: List[HostDeviceMem], stream: cudart.cudaStream_t):
    for mem in inputs + outputs:
        mem.free()
    cuda_call(cudart.cudaStreamDestroy(stream))

# Wrapper for cudaMemcpy which infers copy size and does error checking
def memcpy_host_to_device(device_ptr: int, host_arr: np.ndarray):
    nbytes = host_arr.size * host_arr.itemsize
    cuda_call(cudart.cudaMemcpy(device_ptr, host_arr, nbytes, cudart.cudaMemcpyKind.cudaMemcpyHostToDevice))

# Wrapper for cudaMemcpy which infers copy size and does error checking
def memcpy_device_to_host(host_arr: np.ndarray, device_ptr: int):
    nbytes = host_arr.size * host_arr.itemsize
    cuda_call(cudart.cudaMemcpy(host_arr, device_ptr, nbytes, cudart.cudaMemcpyKind.cudaMemcpyDeviceToHost))

def _do_inference_base(inputs, outputs, stream, execute_async):
    # Transfer input data to the GPU.
    kind = cudart.cudaMemcpyKind.cudaMemcpyHostToDevice
    [cuda_call(cudart.cudaMemcpyAsync(inp.device, inp.host, inp.nbytes, kind, stream)) for inp in inputs]
    # Run inference.
    execute_async()
    # Transfer predictions back from the GPU.
    kind = cudart.cudaMemcpyKind.cudaMemcpyDeviceToHost
    [cuda_call(cudart.cudaMemcpyAsync(out.host, out.device, out.nbytes, kind, stream)) for out in outputs]
    # Synchronize the stream
    cuda_call(cudart.cudaStreamSynchronize(stream))
    # Return only the host outputs.
    return [out.host for out in outputs]

# This function is generalized for multiple inputs/outputs.
# inputs and outputs are expected to be lists of HostDeviceMem objects.
def do_inference(context, bindings, inputs, outputs, stream, batch_size=1):
    def execute_async():
        context.execute_async(batch_size=batch_size, bindings=bindings, stream_handle=stream)
    return _do_inference_base(inputs, outputs, stream, execute_async)

# This function is generalized for multiple inputs/outputs for full dimension networks.
# inputs and outputs are expected to be lists of HostDeviceMem objects.
def do_inference_v2(context, bindings, inputs, outputs, stream):
    def execute_async():
        context.execute_async_v2(bindings=bindings, stream_handle=stream)
    return _do_inference_base(inputs, outputs, stream, execute_async)
```

My inference code is below; the input is an image in NCHW shape. The model has 1 input and 2 outputs.

```
import common
import os
import sys
from common import cuda_call, HostDeviceMem
import ctypes
import numpy as np
import tensorrt as trt
from cuda import cudart

class TensorRTInfer:
    """
    Implements inference for the TensorRT engine.
    """

    def __init__(self, engine_path):
        """
        :param engine_path: The path to the serialized engine to load from disk.
        """
        # Load TRT engine
        print("Loading engine from file {}".format(engine_path))
        self.logger = trt.Logger(trt.Logger.ERROR)
        trt.init_libnvinfer_plugins(self.logger, namespace="")
        with open(engine_path, "rb") as f, trt.Runtime(self.logger) as runtime:
            assert runtime
            self.engine = runtime.deserialize_cuda_engine(f.read())
        assert self.engine
        self.context = self.engine.create_execution_context()
        assert self.context
        self.context.active_optimization_profile = 0
        self.inputs, self.outputs, self.bindings, self.stream = \
            self.allocate_buffers()

    def infer(self, batch):
        rt_input_shape = batch.shape
        rt_size = trt.volume(rt_input_shape)
        rt_nbytes = rt_size * batch.dtype.itemsize
        batch = np.ascontiguousarray(batch)
        self.inputs[0].host = batch
        self.inputs[0].nbytes = rt_nbytes
        tensor_names = [self.engine.get_tensor_name(i) for i in range(self.engine.num_io_tensors)]
        output_tensor_names = [itm for itm in tensor_names if self.engine.get_tensor_mode(itm) == trt.TensorIOMode.OUTPUT]
        input_tensor_names = [itm for itm in tensor_names if self.engine.get_tensor_mode(itm) == trt.TensorIOMode.INPUT]
        self.context.set_input_shape(input_tensor_names[0], rt_input_shape)
        ## set output shape
        rt_output_shapes = []
        for i, binding in enumerate(output_tensor_names):
            rt_output_shape = self.context.get_tensor_shape(binding)
            rt_output_shapes.append(rt_output_shape)
            dtype = np.dtype(trt.nptype(self.engine.get_tensor_dtype(binding)))
            self.outputs[i].nbytes = trt.volume(rt_output_shape) * dtype.itemsize
        trt_outs = common.do_inference_v2(self.context, bindings=self.bindings,
                                          inputs=self.inputs, outputs=self.outputs, stream=self.stream)
        # trt_outs = [out.reshape(shape) for out, shape in zip(trt_outs, rt_output_shapes)]
        size_0 = trt.volume(rt_output_shapes[0])
        trt_out = trt_outs[0][:size_0].reshape(rt_output_shapes[0])
        return trt_out
        # for rt_out_shape in rt_output_shapes:
        #     print("output shape: ", rt_out_shape)
        # self.outputs[0].nbytes = trt.volume(rt_out_shape) * trt.nptype(self.engine.get_tensor_dtype(tensor_names[1])).itemsize

    def allocate_buffers(self):
        inputs = []
        outputs = []
        bindings = []
        stream = cuda_call(cudart.cudaStreamCreate())
        tensor_names = [self.engine.get_tensor_name(i) for i in range(self.engine.num_io_tensors)]
        output_tensor_names = [itm for itm in tensor_names if self.engine.get_tensor_mode(itm) == trt.TensorIOMode.OUTPUT]
        input_tensor_names = [itm for itm in tensor_names if self.engine.get_tensor_mode(itm) == trt.TensorIOMode.INPUT]
        max_profile_shape = self.engine.get_tensor_profile_shape(input_tensor_names[0], 0)[-1]
        self.context.set_input_shape(input_tensor_names[0], max_profile_shape)
        for binding in input_tensor_names:
            size = trt.volume(max_profile_shape)
            dtype = np.dtype(trt.nptype(self.engine.get_tensor_dtype(binding)))
            bindingMemory = HostDeviceMem(size, dtype)
            bindings.append(int(bindingMemory.device))
            inputs.append(bindingMemory)
        for binding in output_tensor_names:
            output_shape = self.context.get_tensor_shape(binding)
            size = trt.volume(output_shape)
            dtype = np.dtype(trt.nptype(self.engine.get_tensor_dtype(binding)))
            bindingMemory = HostDeviceMem(size, dtype)
            bindings.append(int(bindingMemory.device))
            outputs.append(bindingMemory)
        return inputs, outputs, bindings, stream
```

This question has been bugging me for days; I would feel so much better if someone could
give me a little boost. I have tried to debug the whole pipeline and have located the problem in this block, but I cannot solve it.
tensorrt inference problem: CPU memory leak
|memory|tensorrt|
null
|docker|apache-kafka|docker-compose|
I have the following configuration with Spring Cloud "2021.0.9": Spring gateway with defined rotes registered into Eureka server with name `gateway_1`: cloud: gateway: routes: - id: microservice_1 uri: lb://microservice_1 predicates: - Path=/api/microservice_1/dashboard/users filters: - RemoveRequestHeader=Cookie - name: CircuitBreaker args: name: microservice_1 - RewritePath=/api/dashboard/users, /dashboard/users Microservice_1 registered into Eureka server with name `microservice_1` Endpoint which listen on endpoint `dashboard/users`. When I make a POST http://123.123.123.123:30057/api/microservice_1/dashboard/users?startDate=2024-02-28T15:01:22.258Z&endDate=2024-03-28T15:01:22.597Z I get error: 2024-03-29 14:35:40.688 DEBUG 1 --- [ parallel-1] o.s.c.g.handler.FilteringWebHandler : Sorted gatewayFilterFactories: [[GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.RemoveCachedBodyFilter@66bacdbc}, order = -2147483648], [GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.AdaptCachedBodyGlobalFilter@55f8669d}, order = -2147482648], [GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.NettyWriteResponseFilter@453d496b}, order = -1], [GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.ForwardPathFilter@4b54af3d}, order = 0], [GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.GatewayMetricsFilter@203c20cf}, order = 0], [[RemoveRequestHeader name = 'Cookie'], order = 1], [[SpringCloudCircuitBreakerResilience4JFilterFactory name = 'microservice_1', fallback = [null]], order = 2], [[RewritePath /api/dashboard/users = '/dashboard/users'], order = 3], [GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.RouteToRequestUrlFilter@2c6ee758}, order = 10000], [GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.ReactiveLoadBalancerClientFilter@10667848}, order = 10150], 
[GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.LoadBalancerServiceInstanceCookieFilter@191a709b}, order = 10151], [GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.WebsocketRoutingFilter@7bb35cc6}, order = 2147483646], [GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.NettyRoutingFilter@77c7ed8e}, order = 2147483647], [GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.ForwardRoutingFilter@640dc4c6}, order = 2147483647]] 2024-03-29 14:35:40.691 TRACE 1 --- [ parallel-1] o.s.c.g.filter.RouteToRequestUrlFilter : RouteToRequestUrlFilter start 2024-03-29 14:35:40.692 TRACE 1 --- [ parallel-1] s.c.g.f.ReactiveLoadBalancerClientFilter : ReactiveLoadBalancerClientFilter url before: lb://microservice_1/api/microservice_1/dashboard/users?startDate=2024-02-29T14:35:19.172Z&endDate=2024-03-29T14:35:40.516Z 2024-03-29 14:35:40.693 TRACE 1 --- [ parallel-1] s.c.g.f.ReactiveLoadBalancerClientFilter : LoadBalancerClientFilter url chosen: http://10.123.122.116:8138/api/microservice_1/dashboard/users?startDate=2024-02-29T14:35:19.172Z&endDate=2024-03-29T14:35:40.516Z 2024-03-29 14:35:40.789 TRACE 1 --- [or-http-epoll-2] o.s.c.gateway.filter.NettyRoutingFilter : outbound route: fc7ef1f1, inbound: [ea46ab43-29] 2024-03-29 14:35:41.605 TRACE 1 --- [or-http-epoll-2] o.s.c.g.filter.NettyWriteResponseFilter : NettyWriteResponseFilter start inbound: fc7ef1f1, outbound: [ea46ab43-29] 2024-03-29 14:35:41.607 TRACE 1 --- [or-http-epoll-2] o.s.c.g.filter.GatewayMetricsFilter : spring.cloud.gateway.requests tags: [tag(httpMethod=GET),tag(httpStatusCode=404),tag(outcome=CLIENT_ERROR),tag(routeId=microservice_1),tag(routeUri=lb://microservice_1),tag(status=NOT_FOUND)] 2024-03-29 14:36:07.009 DEBUG 1 --- [freshExecutor-0] o.s.c.g.r.RouteDefinitionRouteLocator : RouteDefinition microservice_1 applying {_genkey_0=/api/microservice_1/dashboard/users} to Path I think that Eureka address is not resolved for some 
reason. Both microservices are deployed as pods in a Kubernetes cluster and are registered in the Eureka server with their names after startup. Do you know what might be the issue?
Resolve address using Eureka Server in Kubernetes
|kubernetes|spring-cloud-gateway|
This problem was originally a bug in our class-path isolation that is only triggered on Windows. The bug was fixed in 23.1.1, but with 23.1.2 also the class-path isolation code is gone, so you should no longer have any issues. In other words, just use these dependencies instead: <dependency> <groupId>org.graalvm.polyglot</groupId> <artifactId>polyglot</artifactId> <version>23.1.2</version> </dependency> <dependency> <groupId>org.graalvm.polyglot</groupId> <artifactId>js</artifactId> <version>23.1.2</version> <type>pom</type> </dependency>
    -- e.g. scenario for demo
    -- Creating a rollup monthly customer invoice
    -- for customers that buy individual items on credit.
    -- What I am demonstrating here is that you can update
    -- a table based on some joins etc. from a complex query
    WITH cte AS (
        SELECT c.customerId          -- get customers
             , SUM(o.Cost) AS cost   -- sum all the individual orders
        FROM customers c
        LEFT JOIN orders o ON c.customerId = o.customerId
        WHERE 1=1
          AND o.orderId IN (
                115254
               ,115270
               ,115285
               ,115291
               ,115319
               ,115324
               ,115325
          )
        GROUP BY c.customerId
    )
    -- updating based on the sum of all orders by customer
    UPDATE invoice
    SET Customer_Invoice_Amount = cte.cost
    FROM cte
    WHERE invoice.customerId = cte.customerId
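The same "update from an aggregate" pattern can be exercised end to end with Python's built-in sqlite3. The table names and data below are made up for the demo, and a portable correlated subquery is used instead of `UPDATE ... FROM`, which older SQLite builds lack:

```python
import sqlite3

# Toy schema standing in for the customers/orders/invoice tables above.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders  (orderId INTEGER, customerId INTEGER, cost REAL);
    CREATE TABLE invoice (customerId INTEGER, Customer_Invoice_Amount REAL);
    INSERT INTO orders  VALUES (1, 100, 10.0), (2, 100, 15.0), (3, 200, 7.5);
    INSERT INTO invoice VALUES (100, 0), (200, 0);
""")

# Correlated subquery: each invoice row receives the sum of that
# customer's orders, the same rollup the CTE computes above.
con.execute("""
    UPDATE invoice
    SET Customer_Invoice_Amount = (
        SELECT SUM(o.cost) FROM orders o
        WHERE o.customerId = invoice.customerId
    )
""")

print(con.execute(
    "SELECT customerId, Customer_Invoice_Amount FROM invoice ORDER BY customerId"
).fetchall())  # [(100, 25.0), (200, 7.5)]
```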
I'm having an issue with JS (with which my knowledge base is not large). Excuse me if this seems like a simple fix, but I cannot figure out what my problem is. I'm pulling from an array, and displaying images with links tied to the images, that lead to an external site. My issue is, I want to display the "genre" underneath the images and JS isn't responding to any of it. Here's my JS: ``` async function getData (type) { const url = 'https://itunes.apple.com/search?country=us&media=podcast&term=' + type; const containerElement=window.document.querySelector("."+type+"-content") const options = { method: 'GET', } try { const response = await fetch(url, options); const result = await response.json(); result.results.map((item) =>{ console.log(item) const {collectionName, artworkUrl600, trackViewUrl, collectionViewUrl, primaryGenreName}=item; console.log(collectionName, artworkUrl600, trackViewUrl, collectionViewUrl, primaryGenreName); const itemElement=window.document.createElement("a"); itemElement.setAttribute("href", collectionViewUrl); const collectionNameElement=window.document.createElement("h2"); const imageElement=window.document.createElement("img"); const genreElement = window.document.createElement("small"); itemElement.style.marginRight = "20px"; imageElement.classList.add('podcast-image'); genreElement.classList.add('small-genre'); imageElement.setAttribute("src", artworkUrl600); collectionNameElement.textContent=collectionName; genreElement.textContent=primaryGenreName; imageElement.after(genreElement); itemElement.append(imageElement); containerElement.append(itemElement); }); } catch (error) { console.error(error); } } getData("popular") getData("trending") ``` I want the "primaryGenreName" to be present on the site underneath each image pulled. Just can't figure it out, any ideas?
Using this code: data.frame(Group = LETTERS[1:6], Value = c(10,20,30,40,50,60), Shade = c("A","A","B","B","C","C"), UCI = c(1,2,3,4,5,6), LCI = c(1,2,3,4,5,6)) |> plot_ly(x =~Group, y=~Value, color = ~Shade, type = 'bar', error_y=~list(type="data", symmetric = FALSE, array=UCI, arrayminus = LCI)) I can create a plotly bar chart with error bars coloured by a vector [![enter image description here][1]][1] However if I use some other strings to define my colours the error bars do not correspond to the data data.frame(Group = LETTERS[1:6], Value = c(10,20,30,40,50,60), Shade = c("similar","similar","higher","higher","lower","lower"), UCI = c(1,2,3,4,5,6), LCI = c(1,2,3,4,5,6)) |> plot_ly(x =~Group, y=~Value, color = ~Shade, type = 'bar', error_y=~list(type="data", symmetric = FALSE, array=UCI, arrayminus = LCI)) [![enter image description here][2]][2] Note the tool tips indicates that the error bars are now +/- 1 not 3 on column C. [1]: https://i.stack.imgur.com/V0MzN.png [2]: https://i.stack.imgur.com/hwEdb.png Could someone explain what is happening here and how to solve it?
Colour plotly bar chart with error bars by group
|r|plotly|
I've installed my program, but if I try to install it again, the installer simply runs and the existing program is replaced. I saw this question https://stackoverflow.com/questions/26333869/inno-setup-how-to-display-notifying-message-while-installation-if-application-i Can I create a certain registry entry so I can check it and prevent a new installation? There is some related information in this question: https://stackoverflow.com/questions/5908651/inno-setup-skip-installation-if-other-program-is-not-installed.
I have a requirement to create a screen flow on the Account object where the first screen shows two user choices: 1. Share Account and 2. Remove Access.

When the running user selects the "1. Share Account" option, the flow creates an Account Team Member record after collecting the appropriate data from the user through screens.

When the user selects the "2. Remove Access" option, the flow shows a data table with the existing Account Team Member records related to that account. In order to achieve this, I used a Get Records element in the flow that fetches all Account Team Member records with the condition AccountId = recordId (the Id of the Account on which the flow is triggered).

It throws an error if a partner user runs the flow, but works fine if a standard user runs it. I posted this in the community, and there I got the actual error from the Salesforce support team:

"java.lang.IllegalStateException: User.Title: alias not found: t_spn0_USER_CUSTOMIZATION"

But I am not seeing where I am going wrong. Could anyone please help me?
An unexpected error occurred. Please include this ErrorId if you contact support: 1878486530-323938 (1541428280)
SliverAppBar and title
|flutter|flutter-sliver|
I have a C# game emulator which uses TcpListener and originally had a TCP based client. A new client was introduced which is HTML5 (WebSocket) based. I wanted to support this without modifying too much of the existing server code, still allowing `TcpListener` and `TcpClient` to work with WebSocket clients connecting.

Here is what I have done, but I feel like I'm missing something, as I am not getting the usual order of packets, therefore the handshake never completes.

1. Implement the protocol upgrade mechanism:

```cs
public static byte[] GetHandshakeUpgradeData(string data)
{
    const string eol = "\r\n"; // HTTP/1.1 defines the sequence CR LF as the end-of-line marker
    var response = Encoding.UTF8.GetBytes("HTTP/1.1 101 Switching Protocols" + eol
        + "Connection: Upgrade" + eol
        + "Upgrade: websocket" + eol
        + "Sec-WebSocket-Accept: " + Convert.ToBase64String(
            System.Security.Cryptography.SHA1.Create().ComputeHash(
                Encoding.UTF8.GetBytes(
                    new Regex("Sec-WebSocket-Key: (.*)").Match(data).Groups[1].Value.Trim()
                    + "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"
                )
            )
        ) + eol + eol);
    return response;
}
```

This is then used like so:

```cs
private async Task OnReceivedAsync(int bytesReceived)
{
    var data = new byte[bytesReceived];
    Buffer.BlockCopy(_buffer, 0, data, 0, bytesReceived);
    var stringData = Encoding.UTF8.GetString(data);
    if (stringData.Length >= 3 && Regex.IsMatch(stringData, "^GET"))
    {
        await _networkClient.WriteToStreamAsync(WebSocketHelpers.GetHandshakeUpgradeData(stringData), false);
        return;
    }
```

2.
Encode all messages after the Switching Protocols response:

```cs
public static byte[] EncodeMessage(byte[] message)
{
    byte[] response;
    var bytesRaw = message;
    var frame = new byte[10];
    var indexStartRawData = -1;
    var length = bytesRaw.Length;

    frame[0] = 129;
    if (length <= 125)
    {
        frame[1] = (byte)length;
        indexStartRawData = 2;
    }
    else if (length >= 126 && length <= 65535)
    {
        frame[1] = 126;
        frame[2] = (byte)((length >> 8) & 255);
        frame[3] = (byte)(length & 255);
        indexStartRawData = 4;
    }
    else
    {
        frame[1] = 127;
        frame[2] = (byte)((length >> 56) & 255);
        frame[3] = (byte)((length >> 48) & 255);
        frame[4] = (byte)((length >> 40) & 255);
        frame[5] = (byte)((length >> 32) & 255);
        frame[6] = (byte)((length >> 24) & 255);
        frame[7] = (byte)((length >> 16) & 255);
        frame[8] = (byte)((length >> 8) & 255);
        frame[9] = (byte)(length & 255);
        indexStartRawData = 10;
    }

    response = new byte[indexStartRawData + length];

    int i, reponseIdx = 0;
    // Add the frame bytes to the reponse
    for (i = 0; i < indexStartRawData; i++)
    {
        response[reponseIdx] = frame[i];
        reponseIdx++;
    }
    // Add the data bytes to the response
    for (i = 0; i < length; i++)
    {
        response[reponseIdx] = bytesRaw[i];
        reponseIdx++;
    }
    return response;
}
```

Used here:

```cs
public async Task WriteToStreamAsync(byte[] data, bool encode = true)
{
    if (encode)
    {
        data = WebSocketHelpers.EncodeMessage(data);
    }
```

3. Decoding all messages:

```cs
public static byte[] DecodeMessage(byte[] bytes)
{
    var secondByte = bytes[1];
    var dataLength = secondByte & 127;
    var indexFirstMask = dataLength switch
    {
        126 => 4,
        127 => 10,
        _ => 2
    };
    var keys = bytes.Skip(indexFirstMask).Take(4);
    var indexFirstDataByte = indexFirstMask + 4;
    var decoded = new byte[bytes.Length - indexFirstDataByte];
    for (int i = indexFirstDataByte, j = 0; i < bytes.Length; i++, j++)
    {
        decoded[j] = (byte)(bytes[i] ^ keys.ElementAt(j % 4));
    }
    return decoded;
}
```
C#: Use TcpListener & TcpClient with WebSocket client?
|c#|
null
I found a quite counterintuitive example of behavior of `@DependsOn` annotations. I'm using Spring Boot 3.2.0. In the following toy example, I would expect that `ConditionalConfig` is matched only when `FalseConfig` is also matched, but actually `ConditionalConfig` is always matched. It seems to me that my expectation should be exactly the intended behavior of `@DependsOn` (obviously!): wait for the initialization of a `OtherBean`, then, if necessary, initialize a `MyBean`. What is wrong with my understanding? Adding explicit component scanning did not solve the issue. @Configuration public class Config { public static class MyBean {} @AllArgsConstructor public static class OtherBean { Object o; } @Configuration @DependsOn("otherBean") @ConditionalOnMissingBean(MyBean.class) public static class ConditionalConfig { @Bean public MyBean myBean() { return new MyBean(); } } @Configuration @ConditionalOnProperty(value = "org.example.condition", havingValue = "false") public static class FalseConfig { @Bean public OtherBean otherBean() { return new OtherBean(null); } } @Configuration @ConditionalOnProperty(value = "org.example.condition", havingValue = "true", matchIfMissing = true) public static class TrueConfig { @Bean public OtherBean otherBean(MyBean myBean) { return new OtherBean(myBean); } @Bean public MyBean myBean() { return new MyBean(); } } }
Wait for processing of whole configuration using @DependsOn annotations
|java|spring-boot|
I'm trying to use a triple nested ```for loop``` to: 1) loop over a list of dataframes 2) for each dataframe, loop over the each unique value in the "treatment" column 3) for each unique treatment value, perform a statistical test (```shapiro.test```) on the 4 possible variables ("QY_max","FvFo","NPQ_Lss","QY_Lss") of the dataframe. Therefore, I've written following function: ``` treatment_normality<-function(df_list){ for (df in df_list){ #df.name <- deparse(substitute(df)) #print(df.name) for (treat in unique(df$treatment)){ print(treat) for (parameter in c("QY_max","FvFo","NPQ_Lss","QY_Lss")) print(parameter) print(shapiro.test(df[[parameter]])) } } } ``` I regularly print the variables here to know where I'm at in the ```for loop``` when I check the output. However, I'm experiencing several issues which I cannot fix. My results look like: [1] "A" [1] "QY_max" [1] "FvFo" [1] "NPQ_Lss" [1] "QY_Lss" Shapiro-Wilk normality test data: df[[parameter]] W = 0.93088, p-value = 0.4566 [1] "B" [1] "QY_max" [1] "FvFo" [1] "NPQ_Lss" [1] "QY_Lss" Shapiro-Wilk normality test data: df[[parameter]] W = 0.93088, p-value = 0.4566 [1] "C" [1] "QY_max" [1] "FvFo" [1] "NPQ_Lss" [1] "QY_Lss" Shapiro-Wilk normality test data: df[[parameter]] W = 0.93088, p-value = 0.4566 [1] "D" [1] "QY_max" [1] "FvFo" [1] "NPQ_Lss" [1] "QY_Lss" I cannot figure out why this for loop does not print my treatment (```print(treat)```), then print the parameter (```print(parameter)```) and then the test output (```print(shapiro...)```). Why does it print all parameters after one another? And why does it show ```data: df[[parameter]]``` and not the name of my name of my dataframe + variable name e.g. df1[["QY_max"]]? I'd also like to print NAME of the dataframe which is being used with ```df.name <- deparse(substitute(df))``` and then ```print(df.name)```, which should do the trick, but messes up all the results (therefore commented out). Any idea how can I fix these issues? 
Below, you can find a MRE (ChatGPT): ``` # Set seed for reproducibility set.seed(123) # Create a list to store dataframes list_of_dataframes <- list() # Define treatments and levels treatments <- c("A", "B", "C", "D", "E") num_levels <- length(treatments) # Number of rows in each dataframe num_rows <- 2 * num_levels # Generate data for each dataframe for (i in 1:3) { treatment <- rep(treatments, each = 2) # Each treatment has 2 occurrences QY_max <- runif(num_rows, min = 0, max = 100) # Random QY_max values FvFo <- runif(num_rows, min = 0, max = 1) # Random FvFo values NPQ_Lss <- runif(num_rows, min = 0, max = 2) # Random NPQ_Lss values QY_Lss <- runif(num_rows, min = 0, max = 1) # Random QY_Lss values # Create dataframe df_name <- paste0("df", i) list_of_dataframes[[df_name]] <- data.frame(treatment, QY_max, FvFo, NPQ_Lss, QY_Lss) } ``` Thanks!
Newb with class project issue
|javascript|arrays|
null
I have a DataFrame that looks like this:

    OrdNo      year
    1      20059999
    2      20070830
    3      20070719
    4      20030719
    5      20039999
    6      20070911
    7      20050918
    8      20070816
    9      20069999

How do I replace the last 4 digits in the Pandas DataFrame by 0101 if they are 9999?

Thanks
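One possible sketch, using the column name from the question and assuming `year` is an integer column: a value ends in 9999 exactly when `year % 10000 == 9999`, and replacing that tail with 0101 is the same as subtracting 9898.

```python
import pandas as pd

df = pd.DataFrame({"OrdNo": range(1, 10),
                   "year": [20059999, 20070830, 20070719, 20030719, 20039999,
                            20070911, 20050918, 20070816, 20069999]})

# Arithmetic version: ...9999 -> ...0101 is a fixed offset of -9898.
mask = df["year"] % 10000 == 9999
df.loc[mask, "year"] -= 9898

# Equivalent string version, shown for comparison (operates on a copy):
# df["year"].astype(str).str.replace(r"9999$", "0101", regex=True).astype(int)

print(df["year"].tolist())
```

The arithmetic form keeps the column as integers; the string form would be the way to go if the column were stored as text.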
How to replace last 4 digits in Pandas Dataframe by 0101 if they are 9999 (Python)
|python|pandas|
Suppose I have many models I would like to summarize in separate tables. One way to do this would be to write out each model in a `modelsummary` call as follows ```r library(modelsummary) fit_1 <- lm(vs ~ am, data=mtcars) fit_2 <- lm(mpg ~ am, data=mtcars) modelsummary( models = list('First Model' = fit_1), statistic = c( 'Std.Error' = 'std.error', 'P val' = 'p.value', 'Lower' = 'conf.low', 'Upper' = 'conf.high'), gof_map = NA, shape= term ~ statistic ) modelsummary( models = list('Second Model' = fit_2), statistic = c( 'Std.Error' = 'std.error', 'P val' = 'p.value', 'Lower' = 'conf.low', 'Upper' = 'conf.high'), gof_map = NA, shape= term ~ statistic ) ``` When I render my pdf using quarto, this produces two model summaries as expected. However, one can imagine that this gets repetitive for more than a few models, especially if I would like to change the tables slightly. Hence, I can write a function to make these tables ```r models<- list( 'First Model'=fit_1, 'Secpond Model' = fit_2 ) do_modelsummary <- function(model, model_name){ mod <- list(model) names(mod) <- model_name modelsummary( models = mod, statistic = c( 'Std.Error' = 'std.error', 'P val' = 'p.value', 'Lower' = 'conf.low', 'Upper' = 'conf.high'), gof_map = NA, shape= term ~ statistic ) } ``` I can then loop through the list as follows ```r for(i in length(models)) { fit <- models[[i]] nm <- names(models)[[i]] do_modelsummary(fit, nm) } ``` This _does not_ display the table upon rendering the pdf. Additionally, `purrr::imap` -- which does this more succinctly -- errors when attempting to render the pdf. ``` purrr::imap(models, do_modelsummary) ERROR: compilation failed- error File ended while scanning use of \@xverbatim. <inserted text> \par <*> Untitled.tex see Untitled.log for more information. ``` If I want to display multiple model summaries individually and render to pdf, what is the preferred way which is _not_ doing each table indivudually.
How can I render multiple modelsummary calls to my pdf?
|r|quarto|modelsummary|
How to get data from a markdown file and send it to a component using Nuxt Content
I have an httpd service on EC2 that hosts a WordPress site under the /wordpress path, so the site is currently live at https://example.com/wordpress. Now I want the WordPress site to be served from the root, i.e. https://example.com should point to it. What should be done? Is it enough to copy the contents of /var/www/html/wordpress to /var/www/html?
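Copying the files is only one part (WordPress also stores its URL in the database, so the `siteurl` and `home` options must be updated to `https://example.com` afterwards). A sketch of the copy step, demonstrated on temporary directories standing in for the real `/var/www/html` paths so it is self-contained:

```shell
# Stand-in for /var/www/html so the sketch is runnable anywhere.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/wordpress"
echo "<?php // wp-config" > "$ROOT/wordpress/wp-config.php"

# cp -a with "dir/." copies everything, including dotfiles like .htaccess,
# and preserves permissions. On the real server: cp -a /var/www/html/wordpress/. /var/www/html/
cp -a "$ROOT/wordpress/." "$ROOT/"

ls "$ROOT/wp-config.php"
```

After the copy, update the site URL (for example with `wp option update siteurl "https://example.com"` via WP-CLI, if installed) and reload httpd.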
I have a class that has a field which in turn is an abstract class. I would like MapStruct to be able to map from the incoming DTO to the internal model, including all the subfields of the abstract class, but I only get the empty object in return (rightfully so, since that's the way I've defined the mapping function). How can I make it so that MapStruct not only maps the top class, but also the field's classes? Do I have to create mappers for the `Cat` and `Dog` classes too, and then bring them in to the `OwnerMapper` class? Thank you in advance!

# Example setup #

## Mapper interface ##

```java
@Mapper(componentModel = MappingConstants.ComponentModel.CDI)
public interface OwnerMapper {

    @Mapping(source = "pet", target = "pet", qualifiedByName = "mapPet")
    Owner toEntity(OwnerDTO owner);

    @Named("mapPet")
    default Pet mapPet(final PetDTO pet) {
        if (pet instanceof CatDTO) {
            return new Cat();
        } else {
            return new Dog();
        }
    }
}
```

## Internal classes ##

```java
public class Owner {
    private String name;
    private Pet pet;

    public Owner() {
    }

    // Omitting getters and setters.
}
```

```java
public abstract class Pet {
    public abstract String makeNoise();
}
```

```java
public final class Dog extends Pet {
    private UUID dogTag;
    private String name;

    public Dog() {
    }

    @Override
    public String makeNoise() {
        return "Bark!";
    }

    // Omitting getters and setters.
}
```

```java
public final class Cat extends Pet {
    private boolean bell;

    public Cat() {
    }

    @Override
    public String makeNoise() {
        return "Meow!";
    }

    // Omitting getters and setters.
}
```

## DTO classes ##

```java
public class OwnerDTO {
    private String name;
    private PetDTO pet;

    public OwnerDTO() {
    }

    // Omitting getters and setters.
}
```

```java
public abstract class PetDTO {
}
```

```java
public final class CatDTO extends PetDTO {
    private UUID catTag;
    private boolean bell;

    public CatDTO() {
    }

    // Omitting getters and setters.
}
```

```java
public final class DogDTO extends PetDTO {
    private UUID dogTag;
    private String name;

    public DogDTO() {
    }

    // Omitting getters and setters.
}
```
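One way to get the subfields across is to copy them yourself inside the `@Named` method instead of returning an empty object (MapStruct 1.5+ also offers `@SubclassMapping` for exactly this case). A self-contained sketch of the copying pattern, using trimmed-down stand-ins for the question's classes:

```java
// Trimmed stand-ins for the question's classes, just to show the pattern.
public class PetMappingSketch {
    abstract static class PetDTO { }
    static final class CatDTO extends PetDTO { boolean bell; CatDTO(boolean b) { bell = b; } }
    static final class DogDTO extends PetDTO { String name; DogDTO(String n) { name = n; } }

    abstract static class Pet { }
    static final class Cat extends Pet { boolean bell; }
    static final class Dog extends Pet { String name; }

    // Instead of returning an empty Cat/Dog, copy each subclass's fields.
    static Pet mapPet(PetDTO dto) {
        if (dto instanceof CatDTO) {
            Cat cat = new Cat();
            cat.bell = ((CatDTO) dto).bell;
            return cat;
        } else if (dto instanceof DogDTO) {
            Dog dog = new Dog();
            dog.name = ((DogDTO) dto).name;
            return dog;
        }
        return null;
    }

    public static void main(String[] args) {
        Pet pet = mapPet(new CatDTO(true));
        System.out.println(pet instanceof Cat && ((Cat) pet).bell); // prints "true"
    }
}
```

With `@SubclassMapping` you would instead declare per-subclass abstract mapping methods and let MapStruct generate the field copies, which avoids hand-written code drifting out of sync with the DTOs.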
MapStruct maps the model's field as empty when the field is of an abstract type
|field|abstract|mapstruct|
I am creating a plot and have a few issues:

a. Legends are duplicated
b. Legends are shown incorrectly on the left, instead of the right
c. The same legend value is shown incorrectly for different values and colors
d. Plots are overlapped

Below is my DF:

[![enter image description here][1]][1]

- I am plotting svIonoStdev vs iTOW when sourceId = 0 and 1.
- With this I need to group them based on svGnssId and svSvId.
- I am able to group them, but the legends are not shown correctly.

**Current Output:**

[![enter image description here][2]][2]

**Expected Output:**

[![enter image description here][3]][3]

**Below is the code I tried:**

```python
new_gk_stddev = list(new_msg_df.groupby(['svGnssId', 'svSvId', 'sourceId', 'svIonoStdev']))

fig, ax = plt.subplots(2, constrained_layout=True)
for i in new_gk_stddev:
    if i[0][2] == 1.0:
        ax[0].plot(i[1]['iTOW'], i[1]['svIonoStdev'], label=f"{satellite} {i[0][1]}")
        ax[0].set_ylabel("Standard deviation (TECU)")
        ax[0].set_xlabel("Itows (mSec)")
        ax[0].grid(True)
        ax[0].legend(fontsize=4)
        ax[0].set_title('Correction Handler Output')
    elif i[0][2] == 2.0:
        ax[1].plot(i[1]['iTOW'], i[1]['svIonoStdev'], label=f"{satellite} {i[0][1]}")
        ax[1].set_ylabel("Standard deviation (TECU)")
        ax[1].set_xlabel("Itows (mSec)")
        ax[1].grid(True)
        ax[1].legend(fontsize=4)
        ax[1].set_ylim([0, 0.05])
        ax[1].set_title('HPG Filter Output')
```

[1]: https://i.stack.imgur.com/UkyZC.png
[2]: https://i.stack.imgur.com/Xabys.png
[3]: https://i.stack.imgur.com/lfDkq.png
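For the duplicated-legend and placement issues specifically, a hedged sketch (made-up data and labels, not the question's DataFrame): collapse duplicate labels through a dict before calling `legend`, and use `bbox_to_anchor` to place the legend outside the axes on the right.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# Several lines sharing the same label, as happens when plotting per group.
for y in range(3):
    ax.plot([0, 1], [y, y], color="C0", label="GPS 12")
ax.plot([0, 1], [3, 4], color="C1", label="GPS 15")

# Keep one handle per unique label (dict keys are unique), then
# anchor the legend just outside the right edge of the axes.
handles, labels = ax.get_legend_handles_labels()
by_label = dict(zip(labels, handles))
ax.legend(by_label.values(), by_label.keys(),
          loc="center left", bbox_to_anchor=(1.02, 0.5), fontsize=6)
```

In the question's loop this would mean calling `legend` once per subplot after the loop finishes, rather than inside every iteration.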
When I run my command `yarn start` to launch the development server, the build reports no errors, but in Logcat in Android Studio I get the following error, and I can't find the source of the problem.

"react-native": "0.67.5",
"react-native-device-info": "^10.10.0",

[enter image description here](https://i.stack.imgur.com/arq3T.png)

```
Unhandled SoftException
java.lang.RuntimeException: Catalyst Instance has already disappeared: requested by DeviceInfo
    at com.facebook.react.bridge.ReactContextBaseJavaModule.getReactApplicationContextIfActiveOrWarn(ReactContextBaseJavaModule.java:66)
    at com.facebook.react.modules.deviceinfo.DeviceInfoModule.invalidate(DeviceInfoModule.java:114)
    at com.facebook.react.bridge.ModuleHolder.destroy(ModuleHolder.java:110)
    at com.facebook.react.bridge.NativeModuleRegistry.notifyJSInstanceDestroy(NativeModuleRegistry.java:108)
    at com.facebook.react.bridge.CatalystInstanceImpl$1.run(CatalystInstanceImpl.java:368)
    at android.os.Handler.handleCallback(Handler.java:942)
    at android.os.Handler.dispatchMessage(Handler.java:99)
    at com.facebook.react.bridge.queue.MessageQueueThreadHandler.dispatchMessage(MessageQueueThreadHandler.java:27)
    at android.os.Looper.loopOnce(Looper.java:201)
    at android.os.Looper.loop(Looper.java:288)
    at com.facebook.react.bridge.queue.MessageQueueThreadImpl$4.run(MessageQueueThreadImpl.java:226)
```

I'm expecting the home page of my app to be displayed, but I get a blank page; nothing is displayed.
You can add it as a regular index:

```
db.search_results_data.createIndex({ flag_send_to_kafka: 1 })
```

A regular index covers every document in the collection: documents that have the field are indexed by its value, and documents that lack it are indexed under a null entry, so a query with `$exists: false` can still make use of the index.

The intended purpose of a partial index is often to create a constraint rather than focusing solely on performance optimization. A common scenario is when you want to prevent certain pairs of values from existing more than once, but only under specific conditions defined by the partial filter expression. For example, consider the following case where you want to prohibit two children of the same parent from having the same name, but only in some special cases that match the partial filter expression:

```
db.tree.createIndex(
  // prohibit two children of the same parent from having the same name
  { parentId: 1, name: 1 },
  // but only if they have (for example) the unique flag enabled
  { unique: true, partialFilterExpression: { uniqueFlag: true } }
)
```

So, while regular indexes cover all documents, partial indexes are useful when you want to impose constraints on a subset of documents based on specific conditions, ensuring that the index only includes documents that meet those criteria.
|salesforce|salesforce-service-cloud|