The snippet below shows a struct (`ckmsgq`) with a function pointer as a member (`func`). Then there is a function (`create_ckmsgq`) that assigns a value to the function pointer. However, the function isn't using the usual syntax for a function pointer parameter; it's just accepting what looks like a normal pointer to a variable, and it has the `const` keyword (`const void *func`).

```
struct ckmsgq {
    ...
    void (*func)(ckpool_t *, void *);
    ...
};

ckmsgq_t *create_ckmsgq(ckpool_t *ckp, const char *name, const void *func) {
    ...
    ckmsgq->func = func;
    ...
}

static void *ckmsg_queue(void *arg) {
    ...
    ckmsgq->func(ckp, msg->data);
    ...
}
```

I think I know the answer to what will be my first question, but I would like to hear someone explain it to me (I can't find an explanation anywhere). Why is the syntax of the 3rd function parameter for `create_ckmsgq` not `const void (*func)(ckpool_t *, void *)`? My guess is that we are only assigning the value and not actually calling the function from this context. However, how would I have known to do that if I were writing this code? My other question is: what is the effect of the `const` keyword on that 3rd parameter for `create_ckmsgq` if it's just a pointer to a function?
You can use the `classes` prop to add a class to the component and then use that class to style the component with CSS. ``` <FileUploader classes="custom-fileUploader" multiple={true} handleChange={handleChange} name="file" types={fileTypes} /> ``` Now just give `.custom-fileUploader` the `height` (and any other style) you want to make the component taller.
I have a simple Django API served with gunicorn on a Kubernetes cluster, and after a long time of running, the pod seems to CPU throttle. So I tried to investigate: I used Locust to swarm my API with requests to see how it handles an unusual amount of requests. Here is a recap of normal activity for the API:

- 60 requests per hour
- 10 lines of log per request
- 600 lines of logs per hour

So it's not an intensively requested API. Internally the API checks the body, makes a request to a CosmosDB server to retrieve some data, and then formats it to send back to the caller. Requests take less than 200ms with very little memory needed.

When doing a swarm with Locust, I see that the CPU throttles, and when using the `top` command in the pod, I see that `filebeat` uses 40-60% of the CPU, while in normal activity it stays at 0.1-0.5% CPU. I ran `kill -9` on the filebeat PID, did the same swarm, and everything was smooth. I thought that my log file was too big and filebeat had issues reading the file. Here is how my Django logger is defined:

```
"app_json_file": {
    "level": "INFO",
    "class": "logging.handlers.TimedRotatingFileHandler",
    "filename": APP_LOG_FILE,
    "when": "D",
    "interval": 7,
    "backupCount": 3,
    "formatter": "app_json_formatter",
}
```

I tried to modify the strategy to have a more frequent rotation to make files smaller; it does help a bit, but later the filebeat CPU usage ramps up again. Here are the resources for the k8s pod:

- cpu request: 250m
- cpu limit: 500m
- memory request: 125Mi
- memory limit: 250Mi

I cannot enable Horizontal Pod Autoscaling (HPA), because the K8s cluster does not have this option enabled.
Here is the filebeat configuration file:

```
#================================ Logging =====================================
logging:
  level: error
  json: true
  to_files: true
  to_syslog: false
  files:
    path: ${path.logs}
    keepfiles: 2
    permissions: 0600
  metrics:
    enabled: false
    period: 30s

#================================ Inputs =====================================
filebeat.inputs:
  - type: filestream
    enabled: true
    id: "access-log"
    paths:
      - ${LOG_DIR}/${ACCESS_LOG_FILE_NAME}
```

I did not find a way to limit the CPU usage of filebeat directly from the configuration. So I don't know how to handle this situation where, when we get more logs, filebeat eats all the CPU and makes the API suffocate without dying, causing timeouts for all requests. How could I limit the filebeat CPU usage under a larger workload, maybe with a queue in filebeat?
I have a dataset which shows the religious adherence of Party A and Party B in country X, in addition to the percentage of religious adherents in each country.

```r
df <- data.frame(
  PartyA = c("Christian","Muslim","Muslim","Jewish","Sikh"),
  PartyB = c("Jewish","Muslim","Christian","Muslim","Buddhist"),
  ChristianPop = c(12,1,74,14,17),
  MuslimPop = c(71,93,5,86,13),
  JewishPop = c(9,2,12,0,4),
  SikhPop = c(0,0,1,0,10),
  BuddhistPop = c(1,0,2,0,45)
)

#     PartyA    PartyB ChristianPop MuslimPop JewishPop SikhPop BuddhistPop
# 1 Christian    Jewish           12        71         9       0           1
# 2    Muslim    Muslim            1        93         2       0           0
# 3    Muslim Christian           74         5        12       1           2
# 4    Jewish    Muslim           14        86         0       0           0
# 5      Sikh  Buddhist           17        13         4      10          45
```

With this, I want to add together the total sum of "involved" religious adherents. So row one would get a variable equal to 12 + 9, row two only 93 (no addition since Party A and Party B are the same), etc.

```r
#     PartyA    PartyB ChristianPop MuslimPop JewishPop SikhPop BuddhistPop PartyRel
# 1 Christian    Jewish           12        71         9       0           1       21
# 2    Muslim    Muslim            1        93         2       0           0       93
# 3    Muslim Christian           74         5        12       1           2       79
# 4    Jewish    Muslim           14        86         0       0           0       86
# 5      Sikh  Buddhist           17        13         4      10          45       55
```

I'm having a hard time even finding where to begin, and any help would be much appreciated.
How to summarize values depending on category of other variable in R?
|r|database|dataframe|
Recently we released the first release candidate of a [high-performance JSON masker library in Java with no runtime dependencies](https://github.com/Breus/JSON-masker) which you might want to use to do this. It works on any JSON value type, has an extensive and configurable API, and adds minimal computation overhead both in terms of CPU time and memory allocations. Additionally, it has limited JsonPath support. In case the masking happens in a flow where performance is critical, or you don't want to have in-house code (depending on the APIs of Jackson) for this, you might want to use this library instead. The README should contain plenty of examples to show how to use it for your use case. Additionally, all methods part of the API are documented using JavaDoc.
With Python I had to do this to set the right billing ID when building a GCP client (service account and service API in different projects):

```python
client = translate.TranslationServiceClient(
    credentials=google.auth.default(quota_project_id=project_id)[0],
)
```

What is the equivalent of `google.auth.default()` for PHP? I can't seem to find a corresponding method.

Edit
====

I'm unsure if I should be using getMiddleware or getCredentials directly. I haven't seen any official docs calling getCredentials directly, because getMiddleware calls getCredentials and all the examples use that: https://github.com/googleapis/google-auth-library-php/blob/main/src/ApplicationDefaultCredentials.php#L127

```
# Call via getMiddleware
$credentials = ApplicationDefaultCredentials::getMiddleware(
    'quotaProject' => 'my-quota-project'
);

# Or just call getCredentials myself?
$credentials = ApplicationDefaultCredentials::getCredentials(
    'quotaProject' => 'my-quota-project'
);
```

Also, do I need to specify a scope if I don't want to restrict at the scope level? I just want all scopes ("cloud-platform").
Filebeat CPU throttle in kubernetes with django logging
|django|kubernetes|python-logging|
To achieve something like this, you will want to conditionally skip your tests. This will need to be added to your test block (`test.describe`) if you want entire blocks to be skipped, or to each test individually if you want each one to be skipped independently. This is all from a quick mock-up, so tailor it to your needs!

In your test file, create a boolean variable called something like `passed`, and assign it a default of `true`. Make sure you declare this outside your tests.

```
let passed: boolean = true;
```

Then, at the very beginning of each test, add something like:

```
test.skip(passed === false, 'Test failed. Skipping.');
```

Like:

```
test('do stuff', async ({ page }) => {
  test.skip(passed === false, 'Skipping');
});
```

This means each test checks the result variable and skips if a previous test failed. All that is left is to add logic to each test so that if it meets your desired condition, you set `passed` to true; otherwise, set it to false. That can just be a simple if statement or something more complex. Your test goal will dictate this, so I can't write that for you!

As mentioned, this is all just an idea and I'm doing this on mobile, so ignore any syntax issues. This should steer you in a good direction!

If you want to keep the test going after a failure, but still mark it as failed, look at adding soft assertions, which are designed to do just that. Read up on this: https://playwright.dev/docs/test-assertions#soft-assertions
This is a weird layout issue I recently ran into. I have tried many different ways to construct this, but all fail. Here is what I want:

- in a row;
- the text should expand to the row;
- the text has a Container as a background, and I want the container to always fit the text;
- it should expand or soft-wrap, but when the text is not long, the container should fit the text.

I am now trying like this:

```
return Padding(
    padding: const EdgeInsets.symmetric(vertical: 8, horizontal: 16),
    child: Row(
      mainAxisSize: MainAxisSize.min,
      mainAxisAlignment: MainAxisAlignment.center,
      children: [
        Flexible(
          child: Align(
            alignment: Alignment.center,
            child: Container(
              padding:
                  const EdgeInsets.symmetric(vertical: 0, horizontal: 8),
              decoration: BoxDecoration(
                color: PlatformTheme.getMainBkColorSection(context),
                borderRadius: BorderRadius.circular(9),
                boxShadow: [
                  BoxShadow(
                      color: isDarkModeOnContext(context)
                          ? const Color.fromARGB(68, 27, 27, 27)
                          : const Color.fromARGB(159, 245, 245, 245),
                      blurRadius: 9,
                      spreadRadius: 5)
                ],
              ),
              child: Text(
                (msg as types.SystemMessage).text,
                style: NORMAL_S_TXT_STYLE.copyWith(
                    color: Colors.grey.shade500),
                textAlign: TextAlign.center,
                softWrap: true,
              ),
            ),
          ),
        ),
      ],
    ));
```

The problem is: **to expand the text, I have to use Expanded or Flexible in a Row, but every time I do, I need a Container wrapping the text to provide a background, and it always expands to the whole width.**

Does anyone know how to make this work?
I'm starting a project with .NET 8 and GraphQL (Hot Chocolate) and I use the Visual Studio editor. Every time I create a new mutation for GraphQL, I need to remember to create the class as sealed, and to add two attributes, for example:

```
[ExtendObjectType(typeof(GraphMutation))]
[Authorize]
public sealed class CreateMutation
{
    public async Task<TodoItemDto> CreateTodo(IMediator _mediator, CreateTodoItemCommand createTodoItem)
        => await _mediator.Send(createTodoItem);
}
```

For the queries the template is a little bit different: I need to extend another class, and in other folders it may be different again. I'd like to avoid missing anything, so I would like all files created under a certain path, e.g. src/Web/GraphQL/Mutations, to use a certain template (the one shown), then all files created under src/Web/GraphQL/Queries another template, and so on. Is that kind of personalization possible in Visual Studio? Not sure if it matters, but it's .NET 8.

I know there is a main C# template class somewhere in C/Programs/.. and I already edited it to make the class 'public' instead of 'internal', but I would like to know if it's possible to have different templates, all of them related to just that project, so that everyone who clones the repository and generates a new class under the Mutations folder will have the class generated like that!
If you're using `docker compose` to manage your docker containers you can simply connect via the convenient image name specified in the `docker-compose.yaml`-file. For instance, I have two named containers running called `postgres` and `pgadmin` with the following setup ```yaml services: postgres: env_file: .env image: postgres:latest container_name: postgres ports: - "5432:5432" volumes: - db:/var/lib/postgresql/data pgadmin: image: dpage/pgadmin4 container_name: pgadmin ports: - "5050:80" env_file: .env volumes: db: ``` Since the postgres process is named `postgres` and is running inside a docker container, on the same network as the pgadmin-process, I can simply refer to it as `postgres` in the connection settings in the pgadmin gui. Like so: [![adding postgres server in a docker network][1]][1] [1]: https://i.stack.imgur.com/4rcMW.png
Can't index arrays in assembly
I got this problem where Django returns "No Books matches the given query", even though the object exists in the database. For context, I want to create an update page without relying on the use of a URL. First, here is my model:

```
User = get_user_model()

class Books(models.Model):
    owner = models.ForeignKey(User, on_delete=models.CASCADE, blank=True, null=True)
    title = models.CharField(max_length=50)
    author = models.CharField(
        max_length=40,
    )
    author_nationality = models.CharField(max_length=20, default='Japanese')
    author_medsos = models.CharField(max_length=30, default='None')
    book_type = models.CharField(
        max_length=10, null=True, choices=BOOKS_TYPES
    )
    tl_type = models.CharField(
        max_length=20, null=True, choices=TRANSLATION_BY
    )
    series_status = models.CharField(
        max_length=20, choices=SERIES_STATUS, default=ONG
    )
    source = models.CharField(max_length=100)  # source can refer to a download page, example = https://www.justlightnovels.com/2023/03/hollow-regalia/
    reading_status = models.CharField(
        max_length=20, choices=READING_STATUS, default=TOR
    )
    current_progress = models.CharField(  # For tracking how many chapters have been read
        max_length=30, null=True
    )
    cover = models.FileField(upload_to='cover/', default='cover/default.png')
    genre = models.CharField(max_length=40, null=True)


class Review(models.Model):
    books = models.ForeignKey('Books', on_delete=models.CASCADE, null=True)
    volume = models.IntegerField()
    review = models.TextField()
```

Before the user gets to the update page, they first have to visit the `/library` page:

```
{% for book in user_books %}
    <div class='lib-item'>
        <form method='POST' action="{% url 'core:update_library' %}">
            {% csrf_token %}
            <input type='hidden' name='book_id' value='{{ book.id }}'>
            <button type='submit'>
                <img src={{ book.cover.url }} width='185'>
                <span> {{ book.title }} </span>
            </button>
        </form>
    </div>
{% endfor %}
```

Notice the hidden input? The idea is to pass book.id to the next page, which is `/library/update`.
Here is the page for `/library/update`:

```
{% block content %}
<div class='content'>
    <h3>{{ book.title }} by {{ book.author }}</h3>
    <small> _id : {{ book_id }} </small>
    <small> id : {{ book }} </small>
    <form method='POST'>
        {% csrf_token %}
        {{ form.as_p }}
        <button type='submit'>submit</button>
    </form>
{% endblock %}
```

And here is the views.py for both pages:

```
@login_required
def library(request):
    user = request.user
    user_books = user.books_set.all()

    context = {
        'user' : user,
        'user_books' : user_books
    }
    return render(request, 'user_library.html', context)


@login_required
def update_library(request):
    user = request.user
    book_id = request.POST.get('book_id')
    print(f"book id : {book_id}")
    book = get_object_or_404(user.books_set, pk=book_id)

    if request.method == 'POST':
        form = UpdateBooks(request.POST, instance=book)
        if form.is_valid():
            form.save()

        context = {
            'book_id' : book_id,
            'book' : book,
            'form' : form
        }
        # messages.success(request, 'Library have been updated')
        # return HttpResponseRedirect('/library/update/')
        return render(request, 'user_update.html', context)
    else :
        context = {
            'book_id' : book_id,
            'book' : book
        }
        return render(request, 'user_update.html', context)
```

And the forms, using ModelForm:

```
class AddBooks(forms.ModelForm):
    class Meta:
        model = Books
        exclude = ["owner"]


class UpdateBooks(forms.ModelForm):
    class Meta:
        model = Review
        exclude = ["books"]
```

Now, when the user loads the `/library` page, the page works fine. When the user chooses a book, they are redirected to `/library/update` with the book's id being passed via the hidden `input`. And `/library/update` is able to display the information about the chosen book correctly, including the id, object, title, and author. So the book's id should be present, right? But when the user submits the form, the page returns an error: "Page not found (404) No Books matches the given query". I am confused as to why the error says there is "No Books matches the given query".
I have checked the Django admin page and the object exists. Even `/library/update` is correctly displaying the information about the book, so the book object should exist. But when submitting the form, the book does not exist. Does anyone have any ideas about why this error happens? Thank you.

EDIT: I found a new clue about the missing book_id.

So when `/library/update` is rendered, book_id exists; that's why the HTML is able to display book_id with `{{ book_id }}`. Now, I was checking the terminal when the page is rendered. Notice there is a `print` statement in `views.py`? When the page is rendered, my terminal outputs `book id : 14`. So `book_id` exists when the page is rendered. But when the form is submitted, `print` is triggered again and prints out `book id : None`. I don't know why, but `book_id` exists when the page is rendered, yet is missing when the form is submitted. Below is a screenshot of my terminal: red arrow when the page is rendered, green arrow when the form is submitted.

[![enter image description here][1]][1]

[1]: https://i.stack.imgur.com/kNTrj.png
I'm currently trying to deploy a Next.js app on GitHub Pages using GitHub Actions, but I get a page 404 error even after it successfully deploys. I've looked around a bunch of similarly named questions and am having trouble figuring this out. Here is my GitHub repo: https://github.com/Mctripp10/mctripp10.github.io Here is my website: https://mctripp10.github.io I used the *Deploy Next.js site to Pages* workflow that GitHub provides. Here is the `nextjs.yml` file: ```lang-yaml # Sample workflow for building and deploying a Next.js site to GitHub Pages # # To get started with Next.js see: https://nextjs.org/docs/getting-started # name: Deploy Next.js site to Pages on: # Runs on pushes targeting the default branch push: branches: ["dev"] # Allows you to run this workflow manually from the Actions tab workflow_dispatch: # Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages permissions: contents: read pages: write id-token: write # Allow only one concurrent deployment, skipping runs queued between the run in-progress and latest queued. # However, do NOT cancel in-progress runs as we want to allow these production deployments to complete. 
concurrency: group: "pages" cancel-in-progress: false jobs: # Build job build: runs-on: ubuntu-latest steps: - name: Checkout uses: actions/checkout@v4 - name: Detect package manager id: detect-package-manager run: | if [ -f "${{ github.workspace }}/yarn.lock" ]; then echo "manager=yarn" >> $GITHUB_OUTPUT echo "command=install" >> $GITHUB_OUTPUT echo "runner=yarn" >> $GITHUB_OUTPUT exit 0 elif [ -f "${{ github.workspace }}/package.json" ]; then echo "manager=npm" >> $GITHUB_OUTPUT echo "command=ci" >> $GITHUB_OUTPUT echo "runner=npx --no-install" >> $GITHUB_OUTPUT exit 0 else echo "Unable to determine package manager" exit 1 fi - name: Setup Node uses: actions/setup-node@v4 with: node-version: "20" cache: ${{ steps.detect-package-manager.outputs.manager }} - name: Setup Pages uses: actions/configure-pages@v4 with: # Automatically inject basePath in your Next.js configuration file and disable # server side image optimization (https://nextjs.org/docs/api-reference/next/image#unoptimized). # # You may remove this line if you want to manage the configuration yourself. static_site_generator: next - name: Restore cache uses: actions/cache@v4 with: path: | .next/cache # Generate a new cache whenever packages or source files change. key: ${{ runner.os }}-nextjs-${{ hashFiles('**/package-lock.json', '**/yarn.lock') }}-${{ hashFiles('**.[jt]s', '**.[jt]sx') }} # If source files changed but packages didn't, rebuild from a prior cache. 
restore-keys: | ${{ runner.os }}-nextjs-${{ hashFiles('**/package-lock.json', '**/yarn.lock') }}- - name: Install dependencies run: ${{ steps.detect-package-manager.outputs.manager }} ${{ steps.detect-package-manager.outputs.command }} - name: Build with Next.js run: ${{ steps.detect-package-manager.outputs.runner }} next build - name: Static HTML export with Next.js run: ${{ steps.detect-package-manager.outputs.runner }} next export - name: Upload artifact uses: actions/upload-pages-artifact@v3 with: path: ./out # Deployment job deploy: environment: name: github-pages url: ${{ steps.deployment.outputs.page_url }} runs-on: ubuntu-latest needs: build steps: - name: Deploy to GitHub Pages id: deployment uses: actions/deploy-pages@v4 ``` I got this on the build step: ```lang-none Route (app) Size First Load JS ┌ ○ /_not-found 875 B 81.5 kB ├ ○ /pages/about 2.16 kB 90.2 kB ├ ○ /pages/contact 2.6 kB 92.5 kB ├ ○ /pages/experience 2.25 kB 90.3 kB ├ ○ /pages/home 2.02 kB 92 kB └ ○ /pages/projects 2.16 kB 90.2 kB + First Load JS shared by all 80.6 kB ├ chunks/472-0de5c8744346f427.js 27.6 kB ├ chunks/fd9d1056-138526ba479eb04f.js 51.1 kB ├ chunks/main-app-4a98b3a5cbccbbdb.js 230 B └ chunks/webpack-ea848c4dc35e9b86.js 1.73 kB ○ (Static) automatically rendered as static HTML (uses no initial props) ``` Full image: [Build with Next.js][1] I read in https://stackoverflow.com/questions/58039214/next-js-pages-end-in-404-on-production-build that perhaps it has something to do with having sub-folders inside the `pages` folder, but I'm not sure how to fix that as I wasn't able to get it to work without sub-foldering `page.js` files for each page. [1]: https://i.stack.imgur.com/wSlPq.png
404 error on deploying Next.js app with GitHub Actions
|reactjs|next.js|github-pages|
You can use `theme(aspect.ratio = ...)` to give your whole plot a particular aspect ratio. This will make your title come down closer to the bar. It makes it easier to reason about plots if you avoid flipping co-ordinates. In addition, I would use a super-thick segment instead of a bar here. By setting `expand = FALSE` inside `coord_cartesian` you can get a tightly fitting x axis. The breaks can be set exactly as desired with `scale_x_continuous` ```r library(ggplot2) data.frame(min = 7, value = 15, max = 33) |> ggplot(aes(y = 1)) + geom_linerange(aes(xmin = min, xmax = max), lwd = 100, color = "salmon") + geom_vline(aes(xintercept = value), lwd = 10, alpha = 0.5) + coord_cartesian(ylim = c(0, 2), expand = FALSE) + scale_x_continuous(breaks = c(7, 10, 15, 20, 25, 30, 33)) + ggtitle("Anxiety Score") + theme_void(base_size = 25) + theme(aspect.ratio = 1/5, axis.ticks.x = element_line(), axis.ticks.length.x = unit(5, "mm"), axis.text.x = element_text(), axis.line.x = element_line(), plot.title = element_text(hjust = 0.5, margin = margin(20, 20, 20, 20)), plot.margin = margin(20, 20, 20, 20)) ``` [![enter image description here][1]][1] [1]: https://i.stack.imgur.com/NHQO3.png
I was hunting for an answer (but for C#) and landed here; found a clue in edwabr123's post. This is the relevant option (2022-17.8.6): Text_Editor/C#/Advanced/Fix_text_pasted_into_string_literals_(experimental). HUGE headache . . . gone.
I have a databricks notebook that uses chrome webdriver. I had been using the same notebook and code for several months, yet all of a sudden I get this error: ``` ImportError: cannot import name 'deprecated' from 'typing_extensions' (/databricks/python/lib/python3.10/site-packages/typing_extensions.py) ``` For the line: ``` from selenium import webdriver ``` Is anyone able to guide me to a solution? Thanks in advance!
ImportError: cannot import name 'deprecated' from 'typing_extensions' Chrome Webdriver
|python|selenium-webdriver|databricks|
You want `skip-worktree`. `assume-unchanged` is designed for cases where it is expensive to check whether a group of files have been modified; when you set the bit, `git` (of course) assumes the files corresponding to that portion of the index have not been modified in the working copy. So it avoids a mess of `stat` calls. This bit is lost whenever the file's entry in the index changes (so, when the file is changed upstream). `skip-worktree` is more than that: even where `git` *knows* that the file has been modified (or needs to be modified by a `reset --hard` or the like), it will pretend it has not been, using the version from the index instead. This persists until the index is discarded. There is a good summary of the ramifications of this difference and the typical use cases here: [http://fallengamer.livejournal.com/93321.html][1]. From that article: * `--assume-unchanged` assumes that a developer **shouldn’t** change a file. This flag is meant for **improving performance** for not-changing folders like SDKs. * `--skip-worktree` is useful when you instruct git not to touch a specific file ever because developers **should** change it. For example, if the main repository upstream hosts some production-ready **configuration files** and you don’t want to accidentally commit changes to those files, `--skip-worktree` is exactly what you want. [1]: https://web.archive.org/web/20200604104042/http://fallengamer.livejournal.com/93321.html
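A minimal shell walkthrough of the `skip-worktree` behavior described above (file names are illustrative):

```shell
# set up a throwaway repo with one tracked file
git init demo && cd demo
git config user.email "demo@example.com" && git config user.name "demo"
echo "prod-config" > settings.conf
git add settings.conf
git commit -m "add config"

# mark the file skip-worktree: local edits are hidden from status
git update-index --skip-worktree settings.conf
git ls-files -v settings.conf   # prints "S settings.conf"

echo "local override" >> settings.conf
git status --porcelain          # no output: the change is ignored

# undo it: the local modification becomes visible again
git update-index --no-skip-worktree settings.conf
git ls-files -v settings.conf   # prints "H settings.conf"
```

(`git ls-files -v` shows `S` for skip-worktree entries, lowercase `h` for assume-unchanged ones, and `H` for normal cached files.)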
The agent does not have any profiling settings, so it cannot record data. If you specify the "nowait" option, also specify the `config=<PATH>` and the `id=<ID>` parameters to refer to a particular session configuration. The total parameter will look like

```
-agentpath:/opt/jprofiler14/bin/linux-x64/libjprofilerti.so=port=8849,nowait,config=/path/to/config,id=123
```

The session ID can be seen in the top-right corner of the application settings tab of the session settings. I would recommend configuring a session in JProfiler on your desktop machine and exporting it with Session->Export Session Settings. Then you can omit the "id" parameter, because the config just contains a single session. If you do that, jpcontroller can start recording.
It looks like this code works for me. I put it into [Directory.Packages.props][1].

```
<GlobalPackageReference ExcludeAssets="build" Include="Transitive library Y">
  <IncludeAssets>runtime; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
  <PrivateAssets>all</PrivateAssets>
</GlobalPackageReference>
```

Effectively, I just included that `Transitive library Y` as a direct dependency, globally (via Central Package Management), but I forbade the `compile` [action][2]. As a result, I can still use it at runtime through other dependencies, but I cannot add code of my own that uses it.

[1]: https://learn.microsoft.com/en-us/nuget/consume-packages/central-package-management
[2]: https://learn.microsoft.com/en-us/nuget/consume-packages/package-references-in-project-files
How to make Container wrap to text while Expanded to Row, and fit it's own width?
|android|flutter|algorithm|mobile|layout|
You can use it as a mask like below. Note the use of `80px 80px` which will control the size of the SVG <!-- begin snippet: js hide: false console: true babel: false --> <!-- language: lang-css --> img { border-radius: 5px 0 5px 5px; -webkit-mask: linear-gradient(#000 0 0), url('data:image/svg+xml;utf8,<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 43.45 43.45"><path fill="black" fill-rule="evenodd" d="M0 0v.05c1.15.11 2.24.6 3.06 1.42l38.93 39.06c.79.79 1.26 1.83 1.4 2.92h.06V0H0Z"/></svg>') 100% 0/80px 80px no-repeat; -webkit-mask-composite: xor; mask-composite: exclude; } <!-- language: lang-html --> <img src="https://picsum.photos/id/1/300/250" > <!-- end snippet -->
I am using ECS with the EC2 launch type, where my four services are running: frontend in Angular, backend in Python/Django, Redis, and RabbitMQ. My backend must connect with RabbitMQ; in the backend service, Celery provides workers that talk to RabbitMQ via the AMQP protocol. I need to make the connection between the two, and I have made it work via a .env file.

I have enabled Service Discovery, where I made a namespace named "local"; it automatically creates a private hosted zone named "local" in Route 53, and I created the service with service discovery. On my local machine it works with the environment value "amqp://guest:**@127.0.0.1:5671//", but in ECS, when I declare this environment variable in the task definition using the service discovery namespace, it gives me an error like this: "consumer: Cannot connect to amqp://guest:**@rbmq.local:5671//: [Errno 111] Connection refused." - it cannot resolve the DNS name.

> Note: without the .env file I am not able to connect any service via the mentioned credentials.

Please help me with this. I want to make it work using only the task definition environment variable, not the .env file, and also have it resolve service discovery through the private DNS zone.
consumer: Cannot connect to amqp://awaazde:**@rbmq.local:5671/awaazde: [Errno 111] Connection refused
|django|amazon-web-services|amazon-ec2|rabbitmq|amazon-ecs|
I am trying to fetch an OS image with pycurl and write the decompressed data to disk. With gzip it is straightforward; only with the lz4 format do I face issues. It seems `write_lz4(buf)` decompresses and writes to disk, but when I try to resize the partition, I get an error:

> entries is 0 bytes, but this program supports only 128-byte entries. Adjusting accordingly, but partition table may be garbage. Warning: Partition table header claims that the size of partition table entries is 0 bytes, but this program supports only 128-byte entries. Adjusting accordingly, but partition table may be garbage. Creating new GPT entries in memory. The operation has completed successfully. Error: Partition doesn't exist

I could also manage it with io.BytesIO:

```
if url.endswith('.lz4'):
    with io.BytesIO() as output_buffer:
        curl.setopt(pycurl.WRITEDATA, output_buffer)
        curl.perform()
        output_buffer.seek(0)
        decompressed_data = lz4.frame.decompress(output_buffer.read())
        disk.write(decompressed_data)
```

But it seems this step is unnecessary. I tried the direct approach but it didn't work. Here is the code:

```
def write_to_disk(self, url, dev, proxy=None):
    if os.path.isfile(dev):
        size = os.path.getsize(dev)
    with open(os.path.realpath(dev), 'wb') as disk:
        disk.seek(0)

        def write_gz(buf):
            disk.write(d.decompress(buf))

        def write_lz4(buf):
            disk.write(lz4.decompress(buf))

        try:
            curl = pycurl.Curl()
            curl.setopt(pycurl.URL, url)
            if proxy is not False:
                curl.setopt(pycurl.PROXY, proxy)
            curl.setopt(pycurl.BUFFERSIZE, 1024)
            if url.endswith('.lz4'):
                curl.setopt(pycurl.WRITEFUNCTION, write_lz4)
            elif url.endswith('.gz'):
                d = zlib.decompressobj(zlib.MAX_WBITS | 32)
                curl.setopt(pycurl.WRITEFUNCTION, write_gz)
            curl.perform()
        except pycurl.error:
            return False
        if os.path.isfile(dev):
            disk.seek(size - 1)
            disk.write(b"\0")
    return True
```

Thanks
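For context, the gzip branch above works because `zlib.decompressobj` keeps decompressor state between chunks, while a whole-frame `decompress()` call expects the complete stream at once. A stateful decompressor object (for lz4, `lz4.frame.LZ4FrameDecompressor` plays the analogous role) is the usual fix. Here is a hedged sketch of the chunked pattern using stdlib `zlib` so it runs anywhere; the callback shape mirrors pycurl's `WRITEFUNCTION`:

```python
import io
import zlib

def make_chunk_writer(sink):
    """Return a callback that decompresses incoming chunks into `sink`,
    keeping decompressor state across calls (the pycurl WRITEFUNCTION shape)."""
    d = zlib.decompressobj(zlib.MAX_WBITS | 32)  # accepts gzip or zlib headers

    def write(buf):
        sink.write(d.decompress(buf))

    return write

# Simulate pycurl delivering the download in small chunks
payload = b"OS image bytes " * 1000
compressed = zlib.compress(payload)

disk = io.BytesIO()
write = make_chunk_writer(disk)
for i in range(0, len(compressed), 37):   # arbitrary chunk size
    write(compressed[i:i + 37])

assert disk.getvalue() == payload
```

For lz4, the same structure would use `lz4.frame.LZ4FrameDecompressor()` in place of the zlib object (an assumption to verify against the lz4 package docs).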
|flutter|firebase|dart|firebase-authentication|flutter-getx|
I've been trying to get a pie chart working and was following the instructions from here: [https://livecharts.dev/docs/WPF/2.0.0-rc2/Overview.Installation](https://livecharts.dev/docs/WPF/2.0.0-rc2/Overview.Installation) [https://livecharts.dev/docs/WPF/2.0.0-rc2/PieChart.Pie%20chart%20control](https://livecharts.dev/docs/WPF/2.0.0-rc2/PieChart.Pie%20chart%20control) But when I get to adding a pie chart, it gives me this error (https://i.stack.imgur.com/jiTaq.png). Does anyone know how I can fix this?

```xml
<lvc:PieChart x:Name="BudgetChart" Width="250" Height="250" Margin="0,150,0,0" LegendPosition="Bottom"/>
```

This is my current code for the pie chart; both `lvc:PieChart` and `LegendPosition` give me the same error. I installed the package through the NuGet package manager in Visual Studio, if that helps.
I have a list of field names from a database table (my actual list is 15+ in length and can vary)

```
flds = ["foo", "bar", "foobar"]
```

I also have variables in a function whose names correspond to those values

```
foo = "red"
bar = ""
foobar = "green"
```

I have code below that is the same for each variable

```
sql = "UPDATE table SET a = 1 "

if foo == "":
    sql = sql + ", foo = NULL"
elif foo is not None:
    sql = sql + ",foo = " + foo

if bar == "":
    sql = sql + ", bar = NULL"
elif bar is not None:
    sql = sql + ",bar = " + bar

if foobar == "":
    sql = sql + ", foobar = NULL"
elif foobar is not None:
    sql = sql + ",foobar = " + foobar
```

The code is exactly the same for each variable in the list, and I don't want to have to write this segment of code for each variable every time the list changes; I want a way to create it dynamically based on the current values in the list. I have tried using the exec() command

```
for fld in flds:
    tmp_fld = f"{fld}"  # the actual value of tmp_fld is now "foo"
    exec(f"tmp_fld_val = {tmp_fld}")  # I want the value of tmp_fld_val to be "red", but no joy here
    if tmp_fld_val == "":
        sql = f"{sql} ,{tmp_fld} = NULL"
    elif tmp_fld_val is not None:
        sql = f"{sql} ,{tmp_fld} = '{tmp_fld_val}'"
```
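For what it's worth, one way to avoid both `exec()` and the per-variable blocks is to keep the values in a dict keyed by field name (the names below mirror the question and are illustrative) and build the SET clause in a loop, with placeholders so values stay out of the SQL string:

```python
flds = ["foo", "bar", "foobar"]

# Collect the current values in a dict instead of separate local variables
values = {"foo": "red", "bar": "", "foobar": "green"}

set_parts = ["a = 1"]
params = []
for fld in flds:
    val = values.get(fld)
    if val == "":
        set_parts.append(f"{fld} = NULL")
    elif val is not None:
        set_parts.append(f"{fld} = %s")  # placeholder keeps values parameterized
        params.append(val)

sql = "UPDATE table SET " + ", ".join(set_parts)
print(sql)     # UPDATE table SET a = 1, foo = %s, bar = NULL, foobar = %s
print(params)  # ['red', 'green']
```

The driver then executes `cursor.execute(sql, params)`, so adding or removing fields only means editing `flds` and `values`.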
I want to dynamically create python code based on values in a list
|python|
null
You may remove the first occurrence of a character from a string with something like the following:

```typescript
const removeChar = (str: string, charToBeRemoved: string) => {
  const charIdx: number = str.indexOf(charToBeRemoved);
  const part1 = str.slice(0, charIdx);
  const part2 = str.slice(charIdx + 1, str.length);
  return part1 + part2;
};
```

Note that `indexOf` returns `-1` when the character is absent, so you may want to guard for that case. Remove the type annotations if you prefer to use plain JavaScript.
null
I have a problem when syncing the build.gradle.kts file:

```kotlin
plugins {
    id("com.android.application")
    id("org.jetbrains.kotlin.android")
    id("com.google.devtools.ksp")
    id("kotlin-kapt")
}

android {
    namespace = "com.example.contactapp"
    compileSdk = 34

    defaultConfig {
        applicationId = "com.example.contactapp"
        minSdk = 21
        targetSdk = 34
        versionCode = 1
        versionName = "1.0"
        testInstrumentationRunner = "androidx.test.runner.AndroidJUnitRunner"
    }

    buildTypes {
        release {
            isMinifyEnabled = false
            proguardFiles(
                getDefaultProguardFile("proguard-android-optimize.txt"),
                "proguard-rules.pro"
            )
        }
    }

    compileOptions {
        sourceCompatibility = JavaVersion.VERSION_1_8
        targetCompatibility = JavaVersion.VERSION_1_8
    }

    kotlinOptions {
        jvmTarget = "1.8"
    }

    buildFeatures {
        viewBinding = true
    }
}

dependencies {
    val room_version = "2.6.1"

    implementation("androidx.room:room-runtime:$room_version")
    annotationProcessor("androidx.room:room-compiler:$room_version")
    kapt("androidx.room:room-compiler:$room_version")
    ksp("androidx.room:room-compiler:$room_version")
    implementation("androidx.room:room-ktx:$room_version")
    implementation("androidx.room:room-rxjava2:$room_version")
    implementation("androidx.room:room-rxjava3:$room_version")
    implementation("androidx.room:room-guava:$room_version")
    testImplementation("androidx.room:room-testing:$room_version")
    implementation("androidx.room:room-paging:$room_version")

    implementation("androidx.core:core-ktx:1.12.0")
    implementation("androidx.appcompat:appcompat:1.6.1")
    implementation("com.google.android.material:material:1.11.0")
    implementation("androidx.constraintlayout:constraintlayout:2.1.4")
    testImplementation("junit:junit:4.13.2")
    androidTestImplementation("androidx.test.ext:junit:1.1.5")
    androidTestImplementation("androidx.test.espresso:espresso-core:3.5.1")
}
```

It said "org.gradle.internal.event.ListenerNotificationException: Failed to notify project evaluation listener."
I'm using Gradle 8.2 for this project. I tried to use `classpath` to change the version, but the project didn't even recognize that keyword for changing the Gradle version.
|python|windows-task-scheduler|
I'm working on a project in React Native Expo and using "expo-router" to navigate between pages. I need to send a nested object array from one screen to another. My object structure is something like this:

```
item: {
  id: '123',
  name: 'name',
  nestedObjectArray: [{id: 1, name: 'name'}, {id: 2, name: 'name'}, {id: 3, name: 'name'}],
}
```

I've used `router.navigate({pathname: 'path/to/screen', params: item})`, but in the other screen I'm receiving this:

```
item: {
  id: '123',
  name: 'name',
  nestedObjectArray: [object Object], [object Object], [object Object],
}
```

Any suggestions?
How to send nested objects with router.navigate()?
|typescript|react-native|expo|expo-router|
null
I want to pass dynamic values in `params` from an input.txt file.

**input.txt file:**

```
term=ditech process solutions,country=IN,action=get_search_companies
```

**For this code:**

```python
import requests
from bs4 import BeautifulSoup

def read_params(file_path):
    params = {}
    with open(file_path, 'r') as file:
        for line in file:
            key, value = line.strip().split('=')
            params[key] = value
    return params

api_url = "https://lei-registrations.in/wp/wp-admin/admin-ajax.php"
input_file_path = "input.txt"

params = read_params(input_file_path)

# manual params for comparison
# params = {
#     "term": "ditech process solutions",  # <-- search term
#     "country": "IN",
#     "action": "get_search_companies",
# }

headers = {
    "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:122.0) Gecko/20100101 Firefox/122.0"
}

data = requests.get(api_url, params=params, headers=headers).json()

if data["success"]:
    soup = BeautifulSoup(data["data"], "html.parser")
    for r in soup.select(".searchResults_title"):
        name = r.select_one(".searchResults_name").text
        number = r.select_one(".searchResults_number").text
        print(f"{name:<50} {number}")
```
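Note that the single line in input.txt holds several comma-separated `key=value` pairs, while `read_params` above splits each *line* on `=`, so the two formats don't match. A sketch of a parser matching the one-line format shown (assuming keys and values themselves contain no commas):

```python
def read_params_from_line(line):
    """Parse one line of 'k1=v1,k2=v2,...' into a dict."""
    params = {}
    for pair in line.strip().split(','):
        key, value = pair.split('=', 1)  # split on the first '=' only
        params[key] = value
    return params

line = "term=ditech process solutions,country=IN,action=get_search_companies"
print(read_params_from_line(line))
# {'term': 'ditech process solutions', 'country': 'IN', 'action': 'get_search_companies'}
```

With a file handle, `read_params_from_line(file.readline())` would produce the same `params` dict that the manual version builds by hand.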
It's a known issue with the latest release; it has now been resolved and the fix will come in the next release.

One way to fix it is to roll back to a previous version, i.e. 10.7.1, or, if the issue persists like mine did, you can just add this line of code:

```
window.navigator.userAgent = "ReactNative";
```

I pasted mine inside the App.js file, before any Firebase-related import.

Edited to add credits: https://github.com/firebase/firebase-js-sdk/issues/7962#issuecomment-1902290249
## 2024 update Even startxwin is no longer present. But `XWin` is (note the uppercase which is not tab-completion friendly when you're trying to find it!).
I have a function which takes two vectors and computes a numeric value (like `cor` would for correlation). However, I have two datasets with around 6000 columns each (the two datasets have the same dimensions), and the function should return one vector with the correlation values. The code with a loop would look like this:

```r
set.seed(123)
m = matrix(rnorm(9), ncol = 3)
n = matrix(rnorm(9, 10), ncol = 3)

colNumber = dim(m)[2]
ReturnData = rep(NA, colNumber)
for (i in 1:colNumber) {
  ReturnData[i] = cor(m[, i], n[, i])
}
```

This works fine, but for efficiency reasons I want to use the apply family — obviously, the `mapply` function. However, `mapply(cor, m, n)` returns a vector of 9 `NA`s, where it should return:

```
> ReturnData
[1]  0.1247039 -0.9641188  0.5081204
```

**EDIT/SOLUTION**

The solution, as given by @akrun, was to use data frames instead of matrices. Furthermore, a speed test between the two proposed solutions showed that the `sapply` version is marginally faster than the `mapply` version:

```r
require(rbenchmark)
set.seed(123)
# initiate the two data frames for the comparison
m = data.frame(matrix(rnorm(10^6), ncol = 100))
n = data.frame(matrix(rnorm(10^6), ncol = 100))
# indx is needed for the sapply function to get the column numbers
indx = seq_len(ncol(m))

benchmark(s1 = mapply(cor, m, n),
          s2 = sapply(indx, function(i) cor(m[, i], n[, i])),
          order = "elapsed", replications = 100)
#   test replications elapsed relative user.self sys.self user.child sys.child
# 2   s2          100    4.16    1.000      4.15        0         NA        NA
# 1   s1          100    4.33    1.041      4.32        0         NA        NA
```
I'm encountering an issue with notification delivery in my application, specifically utilizing FCM. Notifications on my device are experiencing delays when the app is closed. Interestingly, notifications are promptly delivered when the app is running or in the background, even when the phone is locked. However, the problem arises when the app is completely closed, leading to delayed notification reception. This delay is particularly problematic for the call function within my app, as I need to display the call bar to the user instantly when the app is closed. Unfortunately, due to this notification delivery issue, I'm unable to achieve this functionality. Could anyone provide assistance with resolving this issue? Your help would be greatly appreciated. Thank you in advance! I've explored various approaches to handle notifications, including utilizing FCM and direct sending from the web. However, despite my efforts, I've found that neither approach has been successful. My preference is to continue using FCM for notification delivery.
Notification Delivery Issue: Seeking Assistance for Instant Call Functionality in Flutter
|flutter|firebase|agora.io|
null
I've been writing a Jolt spec for a JSON transformation and I'm halfway there. I need to modify a JSON object in an array if it is present, and create a new one if it is not present at all. The object can appear at any position in the JSON.

```
{
  "id": "id_1",
  "targetId": "targetId42",
  "externalId": "1extid",
  "textArray": [
    {
      "name": "attribute1",
      "value": "value1"
    },
    {
      "name": "attribute2",
      "value": "value2"
    },
    {
      "name": "attribute3",
      "value": "to be modified"
    }
  ],
  "createdDate": "2020-09-10",
  "description": "Value Pack",
  "id": "20020",
  "state": "Complete",
  "requestedCompletionDate": "2022-09-13"
}
```

Expected output:

```
{
  "id": "id_1",
  "targetId": "targetId42",
  "externalId": "1extid",
  "textArray": [
    {
      "name": "attribute1",
      "value": "value1"
    },
    {
      "name": "attribute2",
      "value": "value2"
    },
    {
      "name": "attribute3",
      "value": "modified value"
    }
  ],
  "createdDate": "2020-09-10",
  "description": "Value Pack",
  "id": "20020",
  "state": "Complete",
  "requestedCompletionDate": "2022-09-13"
}
```

And if the attribute3 block is not present, we create it with a value that can be passed through custom attributes.

Note: the attribute3 object can appear at any position in the textArray array. I tried several different Jolt specs, but none produced the desired output.
Modify json array object with jolt
|json|apache-nifi|jolt|
null
For example, there are two tables, t1 and t2.

In t1, the only value in the table is either 'Prime', 'Non-Prime' or **'All'**. So it's something like this below; shipment_type can be one of the three values (it's 'Prime' in this example). 'All' means it can be either Prime or Non-Prime.

```
shipment_type
Prime
```

and t2 as:

```
shipment_type
Prime
Non-Prime
```

I want to filter table t2 based on the value in the t1 table, but I don't know how to select all of the values in t2 if shipment_type in t1 is 'All'. I tried to filter t2 like this; however, this only includes Prime or Non-Prime results. I want to obtain all of the values in t2 if t1.shipment_type = 'All':

```sql
WHERE t2.shipment_type = (SELECT CASE WHEN t1.shipment_type = 'Prime' THEN 'Prime'
                                      WHEN t1.shipment_type = 'Non-Prime' THEN 'Non-Prime'
                                 END AS customer_type
                          FROM t1)
```

I don't think adding a third condition to the CASE WHEN will help, since there is no 'All' value in table t2.

Thanks, and I appreciate your response.
|spring|spring-security|
null
With my Quarto extension [mathjax3eqno](https://github.com/ute/mathjax3eqno), you can use MathJax capabilities also within HTML, but unfortunately not for books, because those would require keeping track of labelled equations across several rendered files. It does not work for the Word format, either. If you want to use `\tag` within a single document that you only plan to render as HTML or PDF, add the extension and you should be good to go, as long as you use LaTeX commands to refer to equations.
I'm just building an (older) GCC version from source. During the make step, I get the error message

```
"WARNING: `makeinfo' is missing on your system.  You should only need it if
you modified a `.texi' or `.texinfo' file, or any other file
indirectly affecting the aspect of the manual.  The spurious
call might also be the consequence of using a buggy `make' (AIX,
DU, IRIX).  You might want to install the `Texinfo' package or
the `GNU make' package.  Grab either from any GNU archive site."
```

texinfo was installed with the command `sudo apt install texinfo`. What can I do about the problem?
makeinfo is not found, although installed. How can I fix this problem?
|gcc|
null
You use the [**`|json_script`** template tag&nbsp;<sup>\[Django-doc\]</sup>](https://docs.djangoproject.com/en/stable/ref/templates/builtins/#json-script) for this: <pre><code>{{ actual_data<mark><strong>|json_script:'actual_data'</strong></mark> }} {{ predicted_data<mark><strong>|json_script:'predicted_data'</strong></mark> }} &lt;select id=&quot;visualSelector&quot; name=&quot;model&quot; onchange=&quot;printSelectedValue(<mark><strong>JSON.parse(document.getElementById('actual_data').textContent</strong></mark>), <mark><strong>JSON.parse(document.getElementById('predicted_data').textContent</strong></mark>))&quot;&gt; &lt;option value=&quot;0&quot;&gt;---&lt;/option&gt; &lt;option value=&quot;1&quot;&gt;SCATTERPLOT&lt;/option&gt; &lt;option value=&quot;2&quot;&gt;LINEPLOT&lt;/option&gt; &lt;/select&gt;</code></pre>
I am a newbie in bash on Linux. I have already made a boot menu in Batch and PowerShell. I want to make it in bash. In bash, when I type efibootmgr, the output:

```
BootCurrent: 0003
Timeout: 2 seconds
BootOrder: 0011,0000,0001,0013,0014,0015,0004,0003,0016,0017
Boot0000* ThrottleStop UEFI
Boot0001* rEFInd Boot Manager
Boot0003* MX23 LinuX
Boot0004* Windows Boot Manager
etc...
```

I tried:

```
#!/bin/bash
cd /home/mrkey7/Desktop/
sudo efibootmgr > file.txt
echo "$(grep "Boot00" file.txt)"
echo [R] Reboot
echo [S] Shutdown
echo [E] Exit
read -n 1 -p "Choose:" ans;
case $ans in
    r|R)
    sudo reboot;;
    s|S)
    sudo poweroff;;
    *)
    exit;;
esac
```

Output:

```
Boot0000* ThrottleStop UEFI
Boot0001* rEFInd Boot Manager
Boot0003* MX23 LinuX
Boot0004* Windows Boot Manager
[R] Reboot
[S] Shutdown
[E] Exit
Choose:
```

I want output like:

```
[1] ThrottleStop UEFI
[2] rEFInd Boot Manager
[3] MX23 LinuX
[4] Windows Boot Manager
[R] Reboot
[S] Shutdown
[E] Exit
Choose:
```

Then when I press "1", the PC will boot to "ThrottleStop UEFI", pressing "2" boots to "rEFInd Boot Manager", etc.

I tried [@Grobu][1]'s code:

```
#!/bin/bash
sudo efibootmgr | awk '/^Boot[0-9]/{
    gsub(/[Bot*0]/, "", $1)
    Index = ++ $1;
    sub(/^\S+ /, "", $0)
    printf("[% 2u] %s\n", Index, $0);
}'
echo [R] Reboot
echo [S] Shutdown
echo [E] Exit
read -n 1 -p "Choose:" ans;
case $ans in
    r|R)
    sudo reboot;;
    s|S)
    sudo poweroff;;
    *)
    exit;;
esac
```

The output:

```
[ 1] ThrottleStop UEFI
[ 2] rEFInd Boot Manager
[ 4] MX23 LinuX
[ 5] Windows Boot Manager
[ 2] Setup
[12] Boot Menu
[13] Diagnostic Splash
[14] USB FDD:
[15] ATA HDD: ADATA SU650
[16] USB HDD:
[17] USB CD:
[18] PCI LAN:
[R] Reboot
[S] Shutdown
[E] Exit
Choose:
```

New problem, new question (maybe my first question was wrong): I don't want numbers greater than 9. So I want to replace the output numbers with an alphabet sequence (A, B, C, D, E, ...) like this:

```
[A] ThrottleStop UEFI
[B] rEFInd Boot Manager
[C] MX23 LinuX
[D] Windows Boot Manager
[E] Setup
etc... 
[1] Reboot [2] Shutdown [3] Exit Choose: ``` [1]: https://stackoverflow.com/users/22126467/grobu
How do I prompt an input on two different lines at the same time?
|python|python-3.x|input|
null
I have an error with the PHP function chmod (https://www.php.net/manual/en/function.chmod.php). Any solutions? I'm running Linux Debian 13.05 with root access and PHP 8.2.

```
PHP Warning: chmod(): No such file or directory in file: index.php on line: 8
```

After I added an empty file, I get:

```
PHP Warning: chmod(): Operation not permitted in file: index.php on line: 8
PHP Warning: fopen(./access.log): Failed to open stream: Permission denied in file: index.php on line: 9
```

```php
$root_path = './';

define('LOG_FILE', $root_path . 'access.log');

function add_log_entry($access = '')
{
    if (LOG_FILE) {
        chmod(LOG_FILE, 0755);
        $fopen = fopen(LOG_FILE, 'ab');
    }
}
```
Php Warning: chmod(): Operation not permitted
I'm trying to read some environment variables that I set in my nuxt.config.ts, like this:

```
const config = {
  space: useRuntimeConfig().public.CTF_SPACE_ID,
  accessToken: useRuntimeConfig().public.CTF_CDA_ACCESS_TOKEN,
  host: useRuntimeConfig().public.CTF_HOST,
  environment: useRuntimeConfig().public.CTF_ENVIRONMENT
};
```

But when I run `npm run generate`, I get an error:

```
Error [nuxt] instance unavailable
```

From what I understand, I can't access `useRuntimeConfig()` with `ssr: true`. But how should I do it?
Error [nuxt] instance unavailable when using useRuntimeConfig in Nuxt 3
|vue.js|nuxt.js|nuxt3|
There is no straightforward way for an Android app to detect whether another app is using the camera. Android's security model prohibits one app from accessing the resources or monitoring the behavior of another app without explicit permission and appropriate APIs.

While Android does provide APIs for accessing certain system information, such as the list of running processes or the usage statistics of other apps, these APIs have limitations and privacy considerations. They may not provide fine-grained information about whether a specific app is actively using the camera at any given moment.

Furthermore, even if you could detect that another app is using the camera, accessing or interfering with its operation would likely violate the Android platform's security principles and may lead to your app being flagged as potentially harmful or intrusive.

If you have concerns about privacy or resource usage related to the camera, it is best to address them within your own app's functionality rather than attempt to monitor or interfere with other apps' behavior. Additionally, if you believe an app is misbehaving or violating user privacy, you can report it to the appropriate platform or store for investigation.

Reference link - https://stackoverflow.com/questions/15862621/how-to-check-if-camera-is-opened-by-any-application
[![enter image description here][1]][1] I get this error whenever I want to access my frontend and I don't know what to do Author Controller cs ``` // Controllers/AuthorController.cs using Microsoft.AspNetCore.Mvc; using Microsoft.EntityFrameworkCore; using System.Collections.Generic; using System.Threading.Tasks; namespace Pencraft.Controllers { [Route("api/[controller]")] [ApiController] public class AuthorController : ControllerBase { private readonly AppDbContext _context; public AuthorController(AppDbContext context) { _context = context; } [HttpGet] public async Task<ActionResult<IEnumerable<Author>>> GetAuthors() { var authors = await _context.Authors.ToListAsync(); return Ok(authors); } [HttpGet("{id}")] public async Task<ActionResult<Author>> GetAuthor(int id) { Author? author = await _context.Authors.FindAsync(id); if (author == null) { // Return NotFound result if author is null return NotFound(); } // Return the author value return Ok(author); } // Other actions for creating, updating, and deleting authors // Additional action method examples [HttpPost] public ActionResult<Author> CreateAuthor(Author newAuthor) { // Your logic to create a new author // ... // Return the newly created author return Ok(newAuthor); } [HttpPut("{id}")] public IActionResult UpdateAuthor(int id, Author updatedAuthor) { // Your logic to update the author with the specified id // ... // Return NoContent if successful return NoContent(); } [HttpDelete("{id}")] public IActionResult DeleteAuthor(int id) { // Your logic to delete the author with the specified id // ... 
// Return NoContent if successful return NoContent(); } } } ``` Startup.cs ``` using System.ComponentModel.DataAnnotations; using Microsoft.AspNetCore.Builder; using Microsoft.AspNetCore.Hosting; using Microsoft.EntityFrameworkCore; using Microsoft.Extensions.Configuration; using Microsoft.Extensions.DependencyInjection; using Microsoft.Extensions.Hosting; namespace Pencraft_app.pencraft_backend { public class Startup { public Startup(IConfiguration configuration) { Configuration = configuration; } public IConfiguration Configuration { get; } public void ConfigureServices(IServiceCollection services) { services.AddDbContext<AppDbContext>(options => options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection"), b => b.MigrationsAssembly("Pencraft_app.pencraft_backend"))); // Other services can be added here... services.AddCors(options => { options.AddPolicy("AllowOrigin", builder => { builder.AllowAnyOrigin() .AllowAnyHeader() .AllowCredentials() .AllowAnyMethod(); }); }); services.AddControllers(); } public void Configure(IApplicationBuilder app, IWebHostEnvironment env) { if (env.IsDevelopment()) { app.UseDeveloperExceptionPage(); } else { // Production configuration goes here } app.UseRouting(); app.UseDefaultFiles(); app.UseStaticFiles(); app.UseCors("AllowOrigin"); // Make sure this is before UseRouting() app.UseEndpoints(endpoints => { endpoints.MapControllers(); endpoints.MapRazorPages(); // Conventional routing for API controllers endpoints.MapControllerRoute( name: "default", pattern: "api/{controller}/{action=Index}/{id?}"); // If you still want a specific route for "AuthorApi," you can add it separately endpoints.MapControllerRoute( name: "AuthorApi", pattern: "api/author", defaults: new { controller = "Author", action = "Index" }); }); } } public class AppDbContext : DbContext { public DbSet<YourEntity> YourEntities { get; set; } public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { } protected override void 
OnModelCreating(ModelBuilder modelBuilder) { // Configure your entities and relationships here // Example configuration for YourEntity modelBuilder.Entity<YourEntity>(entity => { entity.HasKey(e => e.Id); entity.Property(e => e.Name).IsRequired(); // Add other property configurations as needed }); } } public class YourEntity { public int Id { get; set; } [Required] public string Name { get; set; } = string.Empty; // Add other properties as needed } } ``` I tried this and despite every suggestion I find online, I cannot seem to fix this. I don't know what I'm doing wrong as the localhost/api/authors also doesn't exist despite me creating an endpoint for it [1]: https://i.stack.imgur.com/uxt1x.png
null
I have a question regarding the PyTorch framework, which I started using recently (I have always used Keras/TF in the past). I would like to convert a simple pandas `DataFrame` to a PyTorch `Dataset`.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.array([[1, 2], [4, 5], [7, 8]]), columns=['A', 'B'])
```

To apply the desired transformation, I create a custom Dataset class and inherit from the Dataset in the following way:

```python
from datasets import Dataset

class CustomDataset(Dataset):
    def __init__(self, src_file):
        df = pd.read_csv(src_file)
        self.A = df['A']
        self.B = df['B']

    def __len__(self):
        return len(self.A)

    def __getitem__(self, idx):
        A_py = self.A.iloc[idx]
        B_py = self.B.iloc[idx]
        return A_py, B_py
```

When I try to execute the code:

```python
data = CustomDataset('src_file')
data
```

I receive an error of the following kind:

```
AttributeError: 'CustomDataset' object has no attribute '_info'
```

What is wrong here, and how should I change my approach?
CASE WHEN Statement in the Filter
|sql|mysql|case|
I want my extension to suggest a `CompletionList` to users, who can then run "editor.action.triggerSuggest". The process of my extension is as follows:

1. The user writes some text.
2. The user presses the "completion command".
3. The VS Code extension provides a completion,
4. and calls executeCommand("editor.action.triggerSuggest").

But I encounter an issue: when a specific character has already been entered, no matter how many CompletionItems I create, they won't appear unless that specific character is included in the suggestions.

For instance, if the cursor is positioned right after the letter 'r' in the word 'for', the suggestion list won't appear, but if the cursor is one space after 'r', the suggestion list appears as expected.

[when the cursor is right behind the text](https://i.stack.imgur.com/JJbEv.png)

[When there is a space](https://i.stack.imgur.com/9tGqq.png)

Is there a way to solve this issue? I apologize if my English proficiency makes it difficult for you to understand.

https://code.visualstudio.com/docs/getstarted/settings

Following this page, I tried to modify settings.json, but I couldn't resolve the errors.
Safari's stricter autofill regulations, especially around private data like credit card numbers, are probably to blame.

Additionally, you might try triggering autofill programmatically with JavaScript. This can be achieved by focusing and then blurring the form's first input field, which might help Safari identify the form and fill it in. Also, make sure you use the `name` attribute when working with inputs in HTML.

```javascript
function triggerAutofill() {
    let firstInput = document.querySelector("iframe").contentDocument.querySelector("input");
    if (firstInput) {
        firstInput.focus();
        firstInput.blur();
    }
}
```
The core issue lies in the `@Lock(LockModeType.PESSIMISTIC_READ)` annotation being incorrectly applied to the `findAll()` method. Spring Data JPA's `@Lock` annotation is intended for locking individual entity instances, not for locking entire query results. The framework is attempting to interpret `findAll` as a property of the `SeatDTO` entity, leading to the "No property 'findAll' found" error.

**Remove the @Lock annotation from the findAll() method:**

```java
public interface SeatDAO extends CrudRepository<SeatDTO, Integer> {
    Collection<SeatDTO> findAll();
}
```

**Alternative for pessimistic locking — lock individual entities within a transaction:**

```java
// Inside a transactional context
for (SeatDTO seat : seatDAO.findAll()) {
    // Lock each seat individually via the EntityManager
    entityManager.lock(seat, LockModeType.PESSIMISTIC_READ);
    // Perform operations on the locked seat
}
```
I don't need to do any validation of users, etc. I only need the client to send a certificate and the server (me in this case) to validate the certificate through a truststore or similar (apply mTLS, in other words). I have one endpoint, let's say "/authentication", that needs to be mTLS; the other ones only need server auth (TLS).

This is a possible solution; I haven't tried it yet, but I think there's an even easier approach: [https://stackoverflow.com/questions/76187946/how-to-guard-some-endpoints-with-mtls-and-some-with-jwt/78030895#78030895](https://stackoverflow.com/questions/76187946/how-to-guard-some-endpoints-with-mtls-and-some-with-jwt/78030895#78030895)

For more context, if it helps, it's for 3DS EMVCo.

UPDATED STATUS: I made a filter which validates that the request contains at least one certificate, and combined the filter with `ssl.client-auth=want` for a specific route. (With WANT, the certificate will be validated through the truststore.)

This is the filter, if someone asks:

```kotlin
@Suppress("UNCHECKED_CAST")
override fun doFilterInternal(
    request: HttpServletRequest,
    response: HttpServletResponse,
    filterChain: FilterChain
) {
    try {
        if (isMutualTLSEnabled()) {
            val certificates: Array<X509Certificate>? = request
                .getAttribute("jakarta.servlet.request.X509Certificate") as Array<X509Certificate>?
            if (certificates.isNullOrEmpty()) {
                throw CertificateException()
            }
        }
        filterChain.doFilter(request, response)
    } catch (e: CertificateException) {
        resolver?.resolveException(request, response, null, e)
    }
}
```
I'm encountering a specific problem with Angular routing when deploying my application on WildFly, especially when packaged as a `.war` using the command `jar -cfv`. The primary concern is that while I can successfully navigate to the base URL (e.g., [http://127.0.0.1:8982/oferta-app](https://i.stack.imgur.com/AMPG8.png)), attempting to access specific routes like `http://127.0.0.1:8982/oferta-app/pageError401` results in navigation failure.

[![enter image description here](https://i.stack.imgur.com/voYFO.png)](https://i.stack.imgur.com/voYFO.png)

## Key Configuration Details

Undertow in WildFly is configured to handle all requests and redirect them to `index.html`. The `<base>` element in my `index.html` is set as `<base href="/oferta-app/">`. Angular routing is defined in my application with routes like `/pageError401`, `/pageError402`, etc.

Despite adjusting configurations, the routing issue persists. I'm seeking detailed guidance or recommendations on resolving this routing problem specifically within a WildFly deployment packaged as a `.war` file using the `jar -cfv` command. Any insights would be highly appreciated.

* I attempted to adjust the Undertow configuration in WildFly based on the suggestions provided in the official documentation.
* I modified the standalone.xml file to redirect all requests to index.html.
* I changed the value of the href attribute in the `<base>` element in index.html to reflect the route structure in my Angular application.
null
You should try setting your HTML document's language to Urdu by adding the `lang` attribute to the `<html>` element:

```
<html lang="ur">
```
|c#|.net|
I would like to ask how I could build a DataFrame inside a loop, starting from the values of a column. The initial DataFrame is:

```python
data = {'Name': ['Computer', 'Tablet', 'Monitor', 'Printer'],
        'Price': [900, 300, 450, 150],
        'Identifier': ['11$10qw-IDAA', '2222-IL$DB123-ABC', '33-$IDCCC-12345',
                       '1as3211-223$eww3']
        }
```

First, I would like to split the "Identifier" column on the '-' character. Secondly, I would like to create a DataFrame made up of the columns:

- Name, from the df DataFrame
- String_02, holding each part of the "Identifier" string.

Example:

```
Name       String_02
Computer   11$10qw
Computer   IDAA
Tablet     2222
Tablet     IL$DB123
Tablet     ABC
```

Thank you, and hello to everyone.
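A minimal pure-Python sketch of the split-and-pair step described above (no pandas here, so the column names `Name` and `String_02` are just dict keys; with pandas, `Series.str.split` plus `explode` would express the same idea):

```python
data = {'Name': ['Computer', 'Tablet', 'Monitor', 'Printer'],
        'Identifier': ['11$10qw-IDAA', '2222-IL$DB123-ABC', '33-$IDCCC-12345',
                       '1as3211-223$eww3']}

rows = []
for name, identifier in zip(data['Name'], data['Identifier']):
    for part in identifier.split('-'):  # one output row per '-'-separated part
        rows.append({'Name': name, 'String_02': part})

for row in rows[:5]:
    print(row['Name'], row['String_02'])
# Computer 11$10qw
# Computer IDAA
# Tablet 2222
# Tablet IL$DB123
# Tablet ABC
```

The `rows` list of dicts could then be fed straight to a DataFrame constructor to get the two-column table shown in the example.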
Pandas: create a DataFrame from a loop
|pandas|string|dataframe|list|function|