Q: rant_category() got an unexpected keyword argument 'slug'

Running into a "rant_category() got an unexpected keyword argument 'slug'" error on my Django project. Basically, I just need to get the slug of the #category in my app to show it in the URL. Here's my code:

views.py

class RantListView(ListView):
    model = Rant
    context_object_name = "rants"
    template_name = "rants/rant_list.html"

class RantDetailView(DetailView):
    model = Rant
    template_name = "rants/rant_detail.html"

def rant_category(request, category):
    rants = Rant.objects.filter(categories__slug__contains=category)
    context = {"category": category, "rants": rants}
    return render(request, "rants/rant_category.html", context)

models.py

class Category(models.Model):
    title = models.CharField(max_length=50)
    slug = AutoSlugField(populate_from="title", slugify_function=to_slugify)

    class Meta:
        get_latest_by = "-date_added"
        verbose_name = _("Category")
        verbose_name_plural = _("Categories")

    def get_absolute_url(self):
        return reverse("rants:rant-category", kwargs={"slug": self.slug})

    def __str__(self):
        return self.slug

class Rant(BaseModel, models.Model):
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    post = models.TextField(blank=False)
    slug = AutoSlugField(populate_from="post", slugify_function=to_slugify)
    categories = models.ManyToManyField(Category, related_name="rants")

    class Meta:
        get_latest_by = "-date_added"
        verbose_name = _("rant")
        verbose_name_plural = _("rants")

    def get_absolute_url(self):
        return reverse("rants:rant-detail", kwargs={"slug": self.slug})

    def __str__(self):
        return self.slug

HTML code:

{% for rant in rants %}
    {{ rant.post }}
    {% for category in rant.categories.all %}
        <a href="{% url 'rants:rant-category' category.slug %}">#{{ category.title }}</a>
    {% endfor %}
{% endfor %}

I'm getting:

TypeError at /rants/category/category 1/
rant_category() got an unexpected keyword argument 'slug'

I haven't coded in a while, so I based everything on my old tutorial
https://github.com/reyesvicente/cookiecutter-blog-tutorial-learnetto but it seems not to be working.

EDIT: Here's my urls.py in the app:

path("", RantListView.as_view(), name="rant"),
path("<str:slug>/", RantDetailView.as_view(), name="rant-detail"),
path("category/<str:slug>/", rant_category, name="rant-category"),

A: It should be slug in the rant_category view, not category, like so:

def rant_category(request, slug):
    rants = Rant.objects.filter(categories__slug__contains=slug)
    context = {"category": slug, "rants": rants}
    return render(request, "rants/rant_category.html", context)

A: Figured it out with the help of @sunderam dubey:

def rant_category(request, slug):
    rants = Rant.objects.filter(categories__slug__contains=slug)
    context = {"slug": slug, "rants": rants}
    return render(request, "rants/rant_category.html", context)

{% for rant in rants %}
    {{ rant.title }}
    {% for category in rant.categories.all %}
        {{ category.title }}
    {% endfor %}
{% endfor %}

Now I have this data:

A: To explain the issue:

def rant_category(request, category):

category here is actually the category slug from the URL:

path("category/<str:slug>/", rant_category, name="rant-category"),

So the view function is receiving slug but only has a parameter called category. As the previous answer notes, you need to change the function parameter for the view to reference slug instead. Then, if you specifically need to reference the category object itself in the template, you need to get that object first and pass it in the context rather than the slug. So:

def rant_category(request, slug):
    rants = Rant.objects.filter(categories__slug__contains=slug)
    category = Category.objects.get(slug=slug)
    context = {"category": category, "rants": rants}
    return render(request, "rants/rant_category.html", context)

Hope this gives a little explanation as to why it was wrong and the different options you have to fix it.
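The mismatch can be reproduced outside Django with a plain function call, since Django passes captured URL parameters to the view as keyword arguments named after the converter in path() (a minimal sketch; the "category-1" value is a stand-in for the real slug):

```python
# Django resolves "category/<str:slug>/" and calls the view with slug="category-1",
# but the view's parameter is named "category", so Python raises the TypeError.
def rant_category(request, category):
    return f"rants tagged {category}"

try:
    rant_category(None, slug="category-1")  # what URL resolution effectively does
except TypeError as exc:
    print(exc)  # rant_category() got an unexpected keyword argument 'slug'

# Renaming the parameter to match the URL pattern fixes the call:
def rant_category_fixed(request, slug):
    return f"rants tagged {slug}"

print(rant_category_fixed(None, slug="category-1"))  # rants tagged category-1
```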
{ "language": "en", "url": "https://stackoverflow.com/questions/75633807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I remove recent projects from Embarcadero C++Builder? How do I remove recent projects in Embarcadero C++Builder 10.4? I tried regedit, removing files; all to no avail. A: Either File - Reopen - Properties or Tools - Options - User Interface - Reopen Menu
{ "language": "en", "url": "https://stackoverflow.com/questions/75633810", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Power BI DAX measure issue

I am having a challenge writing a measure for the following. I have a table called Products, and each row is a different sale. I can work out the number of sales for each product, but my challenge is this: from this table I am after a measure that will tell me the % of products with more than 1000 sales. Really confused about how I do the following in a measure:

* Number of products
* Number of sales per product
* Number of products with more than 1000 sales
* % of products with more than 1000 sales

I have tried the following, which returns a (blank) value:

% products Plus 1000 sales =
CALCULATE (
    DISTINCTCOUNT ( 'Products'[PRODUCT NAME] ),
    FILTER ( 'Products', [Total Sales] > 1000 )
)

Total sales = COUNT ( 'Products'[PRODUCT NAME] )

A:

Number Of Products = DISTINCTCOUNT ( 'Products'[PRODUCT NAME] )

Number Of Products with High Sales =
CALCULATE (
    DISTINCTCOUNT ( 'Products'[PRODUCT NAME] ),
    SUM ( 'Products'[TOTAL SALES] ) > 1000
)

% = DIVIDE ( [Number Of Products with High Sales], [Number of Products] )

A: Here are some measures. First off we write a measure that counts sales. From your attempts it looks like one row in the Products table is one sale, so to count sales we just need to count rows:

Total Sales = COUNTROWS ( 'Products' )

Then we make a simple measure that calculates the number of products. Apparently you have duplicates in your product table, so you have used DISTINCTCOUNT, which is fine:

# Products = DISTINCTCOUNT ( 'Products'[PRODUCT NAME] )

After this, we can make a third measure that counts products with more than 1000 sales. We reference the measure we have written previously here, and use a trick with IF within the COUNTX iterator to count. (For the curious, I tried COUNTAX with just the predicate, but COUNTAX counts both true and false. I would expect a false to be skipped, but that is from spending too much time in Python lately!)

Since the count of products is an aggregate, we also need to calculate the number of sales from a similar starting point, which is why we feed the iterator with a one-column table of the distinct product names via VALUES:

# Products with Sales > 1000 =
COUNTX (
    VALUES ( 'Products'[PRODUCT NAME] ),
    IF ( [Total Sales] > 1000, 1 )
)

At this point, the final calculation is trivial, since we just invoke the previously defined measures within a DIVIDE:

% of Products with Sales > 1000 =
DIVIDE ( [# Products with Sales > 1000], [# Products] )
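The measure logic can be sanity-checked outside Power BI. A rough plain-Python equivalent (the sample sales data is made up; Counter stands in for the row counts per product):

```python
from collections import Counter

# Each list element is one sale of a product, mirroring one table row per sale.
sales = ["A"] * 1200 + ["B"] * 500 + ["C"] * 1001

sales_per_product = Counter(sales)   # "Total Sales" per product
n_products = len(sales_per_product)  # "# Products"
n_high = sum(1 for n in sales_per_product.values() if n > 1000)  # "# Products with Sales > 1000"
pct = n_high / n_products            # "% of Products with Sales > 1000"

print(n_products, n_high, round(pct, 3))  # 3 2 0.667
```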
{ "language": "en", "url": "https://stackoverflow.com/questions/75633811", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: window.open() preventing the rest of the function from running

PROBLEM: I have a function which first sends users to a new page and then renders information on it. The issue is that using window.open() reloads the page and prevents the rest of the function from running.

Intended behaviour: window.open() will open up the page but will not refresh it once it gets opened.

CODE:

const ControlNavigationLinks = async function () {
  try {
    // 1) Redirect users to the main page
    window.open("main.html", "_self"); // RELOADS THE PAGE, prevents function from continuing

    // 2) Prevent the search bar from reloading the page
    event.preventDefault();

    // 3) If clicked on logo, do nothing
    const element = event.target;
    if (element.classList.contains("logo")) return;

    // 4) Collect information about what country has been searched
    const form = document.querySelector(".search-bar-section");
    const search = form.elements["search"].value;
    if (search === null || "") return;
    model.state.search = await search;

    // 5) Clear DOM
    document.querySelector(".container-search").innerHTML = "";

    // 6) Get information from external API about countries
    const _ = await model.setSearchResults();

    // 7) Trigger loading the page by calling separate function
    ControlRenderSearchResults();
  } catch (e) {
    console.error(e.message);
  }
};

If I remove window.open() the function performs as intended (renders information), but I need it to switch to another page and render information there.

What I've tried:

// 1) Redirect users to the main page
window.open("main.html", "_self"); // RELOADS THE PAGE, prevents function from continuing
window.preventDefault(); // Prevents window.open() from working

A: Try removing the "_self" from the window.open method.

A: That's just your idea of how a browser's JavaScript should work. In reality, unless it's a sub-frame, your script is only allowed to operate on the page which loaded said script. If you need that script on another page, move it to that other page.

A: Simply move all your JS code (except window.open()) to the HTML file that will be opened in the new window. That way, once the new window is opened, the HTML file, which contains your JS code, will be rendered there.

Code to adapt:

window.open("main.html", "_self");

<script>
  const ControlNavigationLinks = async function () {
    try {
      // 1) Redirect users to the main page
      // 2) Prevent the search bar from reloading the page
      event.preventDefault();

      // 3) If clicked on logo, do nothing
      const element = event.target;
      if (element.classList.contains("logo")) return;

      // 4) Collect information about what country has been searched
      const form = document.querySelector(".search-bar-section");
      const search = form.elements["search"].value;
      if (search === null || "") return;
      model.state.search = await search;

      // 5) Clear DOM
      document.querySelector(".container-search").innerHTML = "";

      // 6) Get information from external API about countries
      const _ = await model.setSearchResults();

      // 7) Trigger loading the page by calling separate function
      ControlRenderSearchResults();
    } catch (e) {
      console.error(e.message);
    }
  };

  ControlNavigationLinks();
</script>

(PS: the <script> tag is in main.html, don't forget this.)
{ "language": "en", "url": "https://stackoverflow.com/questions/75633812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: matplotlib trick to have unequal space between each integer on x axis

Here is code that mimics a distribution and makes a stacked histogram plot:

import numpy as np
import matplotlib.pyplot as plt

f1f2 = [(i**6 - int(i**6)) / 5 for i in np.random.exponential(0.8, size=100)]
widz = [i * (1 / max(f1f2)) for i in f1f2]
pos = [i for i in range(100)]
f1 = [np.random.uniform() * i for i in f1f2]
f2 = [s - i for s, i in zip(f1f2, f1)]

fig = plt.figure(figsize=(20, 2), constrained_layout=False)
ax = fig.add_gridspec(nrows=1, ncols=1, hspace=0).subplots(sharex=True)
ax.bar(pos, f1, color="red", edgecolor="none", width=widz)
ax.bar(pos, f2, bottom=f1, color="blue", edgecolor="none", width=widz)

And here is the result:

I would like the exact same width for each bar (proportional to the sum of the 2 values) but without space between each bar. So the x axis will be hard to read... yes.

A: "I would like the exact same width for each bar (proportional to the sum of the 2 values) but without space between each bar."

If I understand correctly, setting the position of each bar so that it falls after the width of the previous one should do what you want. Specifically:

* The first position should be zero,
* The second position should be the width of the first bar,
* The third position should be the cumulative width of the first and second bars,
* and so on...

# [...]
cs_widths = np.cumsum(widz)
pos = np.concatenate([ [0], cs_widths[:-1] ])  # instead of range(100)
# [...]

Changing the pos variable that way and setting the alignment mode to "edge" (align="edge" in the bar() calls) should give you the following figure. Note that the remaining white spaces are because of very small bars. As pointed out by @Tranbi, the axis xticks now range from 0 to sum(widz). If needed, you can relabel the xticks to keep the original [0, 100] range:

# Example with ticks every 5 bars, ranging from 0 to 100
ticks_labels = range(100)
ax.set_xticks(pos[::5], labels=ticks_labels[::5])

Although, as you mention, "the x axis will be hard to read". Some tick labels will probably overlap.
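The positioning trick in the answer is just a shifted cumulative sum, which can be checked without matplotlib (itertools.accumulate standing in for np.cumsum; the widths below are made-up values):

```python
from itertools import accumulate

widz = [0.5, 1.0, 0.25, 2.0]        # hypothetical bar widths
cs_widths = list(accumulate(widz))  # cumulative widths: [0.5, 1.5, 1.75, 3.75]
pos = [0.0] + cs_widths[:-1]        # each bar starts exactly where the previous one ends

print(pos)  # [0.0, 0.5, 1.5, 1.75]
```

With align="edge", the bar at pos[i] spans [pos[i], pos[i] + widz[i]], so consecutive bars touch with no gap.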
{ "language": "en", "url": "https://stackoverflow.com/questions/75633813", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Multiple Order By on same column in Laravel, does any way exist?

I want data from both queries, one after another:

->when($todayDate, function ($query) use ($todayDate) {
    $query->whereDate('startdate', '>=', $todayDate)->orderBy('batches.startdate', 'desc');
})->when($todayDate, function ($query) use ($todayDate) {
    $query->whereDate('startdate', '<=', $todayDate)->orderBy('batches.startdate', 'desc');
})

A:

$todayDate = now()->toDateString();

$upcoming = DB::table('table_name')
    ->when($todayDate, function ($query) use ($todayDate) {
        $query->whereDate('startdate', '>=', $todayDate)
            ->orderBy('batches.startdate', 'desc');
    });

$past = DB::table('table_name')
    ->when($todayDate, function ($query) use ($todayDate) {
        $query->whereDate('startdate', '<=', $todayDate)
            ->orderBy('batches.startdate', 'desc');
    });

$results = $upcoming->union($past)->get();

Alternatively:

->when($todayDate, function ($query) use ($todayDate) {
    $query->orderBy('startdate', 'desc')
        ->orderByRaw("startdate >= '{$todayDate}' desc");
})

A: In Laravel, you can use the union() method to combine the results of two queries. Here's an example:

$todayDate = date('Y-m-d');

$query1 = DB::table('table_name')
    ->when($todayDate, function ($query) use ($todayDate) {
        $query->whereDate('startdate', '>=', $todayDate)
            ->orderBy('batches.startdate', 'desc');
    });

$query2 = DB::table('table_name')
    ->when($todayDate, function ($query) use ($todayDate) {
        $query->whereDate('startdate', '<=', $todayDate)
            ->orderBy('batches.startdate', 'desc');
    });

$results = $query1->union($query2)->get();

A:

$packages1 = Package::select('packages.*')
    ->join('package_prices', 'packages.id', '=', 'package_prices.package_id')
    ->join('package_itineraries', 'packages.id', '=', 'package_itineraries.package_id')
    ->leftJoin('batches', 'packages.id', '=', 'batches.package_id')
    ->when($states, function ($query) use ($states) {
        $states = Collection::make(explode(",", $states));
        $query->whereIn('state_id', $states);
    })
    ->when($countries, function ($query) use ($countries) {
        $countries = Collection::make(explode(",", $countries));
        $query->whereIn('country_id', $countries);
    })
    ->when($difficulty, function ($query) use ($difficulty) {
        $difficulty = Collection::make(explode(",", $difficulty));
        $query->whereIn('trek_difficulty_id', $difficulty);
    })
    ->when($category, function ($query) use ($category) {
        $query->where('category_id', $category);
    })
    ->when($prices, function ($query) use ($prices) {
        $query->priceBetween($prices);
    })
    ->when($batches, function ($query) use ($batches) {
        $query->batchesBetween($batches);
    })
    ->when($duration, function ($query) use ($duration) {
        $query->DurationBetween($duration);
    })
    ->when($startingLocations, function ($query) use ($startingLocations) {
        $query->Startinglocations($startingLocations);
    })
    ->when($columnName, function ($query) use ($columnName, $columnSortOrder) {
        if ($columnName == "price") {
            $query->where('package_prices.is_default', 1)->orderBy('package_prices.price', $columnSortOrder);
        } elseif ($columnName == "duration") {
            $query->orderBy('package_itineraries.duration', $columnSortOrder);
        }
        $query->orderBy($columnName, $columnSortOrder);
    })
    ->when($todayDate, function ($query) use ($todayDate) {
        $query->whereDate('batches.startdate', '>=', $todayDate)->orderBy('batches.startdate', 'asc');
    });
{ "language": "en", "url": "https://stackoverflow.com/questions/75633814", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can we use the TanStack Query client in my React app to get the blog list

How can we use the TanStack Query client in my React app to get the blog list? Could someone please advise!

import React, { useState, useEffect, useCallback } from 'react';
import { useQuery, UseQueryOptions } from '@tanstack/react-query'

const [popularResults, setPopularResults] = useState([]);

useEffect(() => {
  const fetchData = async () => {
    try {
      const res = await axios.get(`${appUrl}/service/listofblogs`);
      setPopularResults(res.data.blogData);
    } catch (e) {
      console.log(e);
    }
  }
  fetchData();
}, []);

const MostPopularBlogs = () =>
  <div className='row'>
    <div className='trendingArea'>
      {popularResults.map(({ id, blogdetails, tags, views, createdAt }) => (
        <a key={id}>
          <div
            key={id}
            onClick={() => navigate("popularBlogDetails", { state: { id, blogdetails, views, createdAt } })}
            className='popularArea'
          >
            <ReactMarkdown
              children={blogdetails}
              className='dataDate renderElipsis tags readmoreLink views'
              remarkPlugins={[remarkGfm]}
            >
            </ReactMarkdown>
            <div className='blogDate'>
              {moment(createdAt).format('DD-MMM-YYYY')}
              <a onClick={() => getClickCount(id)} className='readmoreLink'>
                Read more →
              </a>
              <div className='blogViews'>
                {views > 999 ? (views / 1000).toFixed(2) + "K" : views}
              </div>
            </div>
          </div>
        </a>
      ))}
    </div>
  </div>
{ "language": "en", "url": "https://stackoverflow.com/questions/75633815", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Inputs changing on parent altering (HTML5)

I have an issue. I am doing a school project; I am in charge of the back-end/front-end software. I am currently making an interactive list of values that can get removed and added, like little cache tabs. It is for a piece of software named Rimi, short for "Remind Me", working from an alarm-clock type system; the info for the alarm is stored in the tabs. But whenever a tab is created in the dropdown, all the cached data gets erased from the other tabs. I think it is due to the JavaScript accessing the HTML, reading the information by tag name instead of the original object, and not reading the "value" attribute.

List.innerHTML = List.innerHTML + "*Tab html*";

How do I read the object as HTML and add tag-based HTML to it? Help. I am expecting a simple tab system.
{ "language": "en", "url": "https://stackoverflow.com/questions/75633817", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-3" }
Q: React Hook Form passing value to onSubmit

How do I correctly pass data to the onSubmit function's data property? When I click "+" I see the numbers changing, but when clicking the Submit button it does not pass data to the onSubmit function; this can be seen in the console.log, which always shows 0.

import { useState } from 'react';
import { InputAdornment, TextField } from '@mui/material';
import { Controller, useForm } from 'react-hook-form';

export const InputNumberStepper = () => {
  const {
    control,
    formState: { errors },
    handleSubmit,
    register,
  } = useForm();

  const [val, setVal] = useState<number>(0);
  console.log(' ~ val:', val);

  const onSubmit = async (data: any) => {
    console.log(' ~ data:', data);
  };

  return (
    <form onSubmit={handleSubmit(onSubmit)}>
      <Controller
        name="stepper"
        control={control}
        render={({ field: { onChange, value } }) => {
          console.log(' ~ value:', value);
          return (
            <TextField
              {...register('stepper')}
              id="stepper"
              value={val}
              InputProps={{
                startAdornment: (
                  <InputAdornment
                    position="start"
                    onClick={() => {
                      setVal(val - 1);
                      // onChange(value - 1);
                    }}
                  >
                    -
                  </InputAdornment>
                ),
                endAdornment: (
                  <InputAdornment
                    position="end"
                    onClick={() => {
                      setVal(val + 1);
                      // onChange(value + 1);
                    }}
                  >
                    +
                  </InputAdornment>
                ),
              }}
            />
          );
        }}
      />
      <button type="submit">Submit</button>
    </form>
  );
};

Example: https://stackblitz.com/edit/react-ts-xmyqch?file=App.tsx

A: You aren't really using React Hook Form properly. You are managing the field state in your component instead of letting React Hook Form store and manage the form state.

import * as React from 'react';
import { InputAdornment, TextField } from '@mui/material';
import { Controller, useForm } from 'react-hook-form';

export default function App() {
  const {
    control,
    formState: { errors },
    handleSubmit,
    setValue,
    register,
  } = useForm({ defaultValues: { stepper: 0 } });

  const onSubmit = async (data: any) => {
    console.log(' ~ data:', data);
  };

  return (
    <form onSubmit={handleSubmit(onSubmit)}>
      <Controller
        name="stepper"
        control={control}
        render={({ field }) => {
          return (
            <TextField
              {...register('stepper')}
              id="stepper"
              {...field}
              InputProps={{
                startAdornment: (
                  <InputAdornment
                    position="start"
                    onClick={() => {
                      setValue('stepper', parseInt(field.value) - 1);
                    }}
                  >
                    -
                  </InputAdornment>
                ),
                endAdornment: (
                  <InputAdornment
                    position="end"
                    onClick={() => {
                      setValue('stepper', parseInt(field.value) + 1);
                    }}
                  >
                    +
                  </InputAdornment>
                ),
              }}
            />
          );
        }}
      />
      <button type="submit">Submit</button>
    </form>
  );
}
{ "language": "en", "url": "https://stackoverflow.com/questions/75633818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to use MDX custom elements in Contentlayer

I'm building a site that makes use of Next.js and Contentlayer to render a bunch of MDX files. I basically followed this tutorial to get it set up, except I'm going with MDX instead of plain Markdown. I'm using Contentlayer because the standard way of supporting MDX in Next.js just doesn't fit my use case very well. The Next.js MDX docs have a section on custom elements. It lets you, for example, decide that all your h1s will be rendered with your custom component or whatever. My question is: is there a way to set up custom elements when using Contentlayer? I assume there is some way to configure rehype or remark; I'm new to both of these tools.
{ "language": "en", "url": "https://stackoverflow.com/questions/75633819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to check for possible double bookings in a PHP/JavaScript appointment planner?

I've built an appointment planner in PHP and JavaScript which is functioning well. However, there is the possibility of double bookings when multiple visitors reach the page containing the appointment planner at more or less the same time. Obviously, in this case they all have the same options for picking a free time slot. So if one user selects time slot "A" and the next one does the same shortly after, there will be a double booking. What's the best practice to tackle this problem? Perform a check after a visitor has selected a time slot and notify them in case it has just been taken by another visitor? Many thanks for your support.
{ "language": "en", "url": "https://stackoverflow.com/questions/75633820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: Getting error after clicking the Send button; not getting access token and refresh token

Getting a bad request error:

{ "error": "invalid_grant" }

It shows something like this on click of the Send button. I want it to show the access and refresh tokens.

A: If you are using the Auth Code Grant with Postman, you'll need the following:

* Integration Key (clientId)
* Secret Key (clientSecret)
* redirectUri

These must be configured on the Apps and Keys page correctly.

Then there are two calls you need to make. One is done via a web browser, where you need to log in to DocuSign (unless already logged on) and where you will provide consent. That first call gives you a code, which you exchange for an access token in the second call, which is this one:

POST https://account-d.docusign.com/oauth/token?code={Code}&grant_type=authorization_code

Note the code you got in the browser must be here in the request. One of the headers for this call is called "Authorization" and should have the word "Basic " and then the two values (IK and secret) encoded together. For example, if your integration key is 7c2b8d7e-xxxx-xxxx-xxxx-cda8a50dd73f and the secret key is d7014634-xxxx-xxxx-xxxx-6842b7aa8861, you can get the base64 value in a JavaScript console with the following method call:

btoa('7c2b8d7e-xxxx-xxxx-xxxx-cda8a50dd73f:d7014634-xxxx-xxxx-xxxx-6842b7aa8861')
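The same Basic header can be built in Python, for anyone scripting the token exchange instead of using Postman (a sketch using the placeholder keys from the answer):

```python
import base64

# Placeholder values copied from the answer; substitute your own keys.
integration_key = "7c2b8d7e-xxxx-xxxx-xxxx-cda8a50dd73f"
secret_key = "d7014634-xxxx-xxxx-xxxx-6842b7aa8861"

# Equivalent of btoa('ik:secret'): base64-encode "integrationKey:secretKey".
token = base64.b64encode(f"{integration_key}:{secret_key}".encode("ascii")).decode("ascii")
headers = {"Authorization": f"Basic {token}"}
```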
{ "language": "en", "url": "https://stackoverflow.com/questions/75633821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: How to show results of union query side by side?

I have two select statements with a union and I want to show the result in multiple columns, or side by side. I tried this:

SELECT COUNT(emp.id) num1, dep.departmentname
FROM tblemployees emp
JOIN tbldepartments dep ON dep.id = emp.OriginalDepartment
WHERE emp.OriginalDepartment IN (208,168,209,189,157)
  AND emp.JobType IN (41,51,52,53,54)
  AND emp.`Status` NOT IN (11,20,21,26,17)
  AND emp.retiredate > CURDATE()
GROUP BY dep.departmentname
UNION
SELECT COUNT(emp.id) num2, dep.departmentname
FROM tblemployees emp
JOIN tbldepartments dep ON dep.id = emp.OriginalDepartment
WHERE emp.OriginalDepartment IN (208,168,209,189,157)
  AND emp.JobType IN (1,6,7,8,9,11,26,32,33,34,36,43,45,46,47,48,49,55)
  AND emp.`Status` NOT IN (11,20,21,26,17)
  AND emp.retiredate > CURDATE()
GROUP BY dep.departmentname

The result shows like this:

num1  departmentname
4     dep 2
5     dep 3
20    dep 4
50    dep 5
53    dep 2
56    dep 3
30    dep 4
16    dep 5
19    dep 6
40    dep 7

and I want to show the results like this:

num1  num2  departmentname
4     53    dep 2
5     56    dep 3
20    30    dep 4
50    16    dep 5
      19    dep 6
      40    dep 7

A: The above query can be re-written using subqueries and an inner join:

select t1.empId1, t2.empId2, t1.departmentname
from (
    select count(emp.id) empId1, dep.departmentname
    from tblemployees emp
    join tbldepartments dep on dep.id = emp.OriginalDepartment
    where emp.OriginalDepartment IN (208, 168, 209, 189, 157)
      and emp.JobType IN (41, 51, 52, 53, 54)
      and emp.`Status` NOT IN (11, 20, 21, 26, 17)
      and emp.retiredate > CURDATE()
    group by dep.departmentname
) t1
inner join (
    select count(emp.id) empId2, dep.departmentname
    from tblemployees emp
    join tbldepartments dep on dep.id = emp.OriginalDepartment
    where emp.OriginalDepartment in (208, 168, 209, 189, 157)
      and emp.JobType IN (1, 6, 7, 8, 9, 11, 26, 32, 33, 34, 36, 43, 45, 46, 47, 48, 49, 55)
      and emp.`Status` NOT IN (11, 20, 21, 26, 17)
      and emp.retiredate > CURDATE()
    group by dep.departmentname
) t2 ON t1.departmentname = t2.departmentname
order by t1.departmentname;

I use an inner join here, as I see you are filtering only for the selected departments (208, 168, 209, 189, 157) in both queries. If the departments differ between the two queries, you can use a left join or right join as per your requirement.

A: Conditionally aggregate instead. For example:

DROP TABLE IF EXISTS T;
CREATE TABLE T (dept VARCHAR(20), type int);
INSERT INTO T VALUES
('one', 41), ('one', 41), ('one', 41), ('one', 41),
('two', 52), ('one', 1), ('one', 6), ('one', 1),
('two', 1), ('three', 6);

select dept,
       sum(case when type in (41,52) then 1 else 0 end) num1,
       sum(case when type in (1,6) then 1 else 0 end) num2
from t
group by dept;

+-------+------+------+
| dept  | num1 | num2 |
+-------+------+------+
| one   |    4 |    3 |
| three |    0 |    1 |
| two   |    1 |    1 |
+-------+------+------+
3 rows in set (0.002 sec)

If you need more help, search for "mysql pivot". If you're still struggling, publish REPRESENTATIVE sample data and the desired outcome as text.
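The conditional-aggregation approach from the second answer can be verified with SQLite's in-memory database using the same sample rows (sqlite3 standing in for MySQL here; the CASE WHEN pattern is identical in both dialects):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (dept TEXT, type INTEGER)")
con.executemany(
    "INSERT INTO t VALUES (?, ?)",
    [("one", 41)] * 4 + [("two", 52), ("one", 1), ("one", 6), ("one", 1), ("two", 1), ("three", 6)],
)

# One output row per department, with each SUM counting only the rows
# whose type falls in that column's bucket.
rows = con.execute("""
    SELECT dept,
           SUM(CASE WHEN type IN (41, 52) THEN 1 ELSE 0 END) AS num1,
           SUM(CASE WHEN type IN (1, 6)  THEN 1 ELSE 0 END) AS num2
    FROM t
    GROUP BY dept
    ORDER BY dept
""").fetchall()

print(rows)  # [('one', 4, 3), ('three', 0, 1), ('two', 1, 1)]
```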
{ "language": "en", "url": "https://stackoverflow.com/questions/75633822", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Django filter two models for ID

I am working on a Django application where registered users can have a deposit added by staff users. I want to know whether a user has had a deposit added in the current month, and also check in the HTML, on a button URL, whether the user has a deposit or not, then decide whether to display the button. I have tried the code below, but here is the error I am getting:

Cannot use QuerySet for "Account": Use a QuerySet for "Profile".

Here are my models:

class Account(models.Model):
    customer = models.OneToOneField(User, on_delete=models.CASCADE, null=True)
    account_number = models.CharField(max_length=10, null=True)
    date = models.DateTimeField(auto_now_add=True, null=True)

    def __str__(self):
        return f' {self.customer} - Account No: {self.account_number}'

class Deposit(models.Model):
    customer = models.ForeignKey(Profile, on_delete=models.CASCADE, null=True)
    transID = models.CharField(max_length=12, null=True)
    acct = models.CharField(max_length=6, null=True)
    staff = models.ForeignKey(User, on_delete=models.CASCADE, null=True)
    deposit_amount = models.PositiveIntegerField(null=True)
    date = models.DateTimeField(auto_now_add=True)

    def get_absolute_url(self):
        return reverse('create_account', args=[self.id])

    def __str__(self):
        return f'{self.customer} Deposited {self.deposit_amount} by {self.staff.username}'

Here is my view function:

def create_account(request):
    customer = Account.objects.all()
    deposited_this_month = Deposit.objects.filter(
        customer__profile=customer,
        date__year=now.year,
        date__month=now.month
    ).aggregate(deposited_this_month=Sum('deposit_amount')).get('deposited_this_month') or 0
    context = {
        'deposited_this_month': deposited_this_month,
    }
    return render(request, 'dashboard/customers.html', context)

In my HTML, below is my code:

{% if deposited_this_month %}
    <a class="btn btn-success btn-sm" href="{% url 'account-statement' customer.id %}">Statement</a>
{% else %}
    <a class="btn btn-success btn-sm" href="">No Transaction</a>
{% endif %}

A: When you want to check whether an object is in a list of objects or a queryset, you should use __in. Update this code:

deposited_this_month = Deposit.objects.filter(
    customer__profile=customer,
    date__year=now.year,
    date__month=now.month
).aggregate(deposited_this_month=Sum('deposit_amount')).get('deposited_this_month') or 0

to this:

deposited_this_month = Deposit.objects.filter(
    customer__profile__in=customer,
    date__year=now.year,
    date__month=now.month
).aggregate(deposited_this_month=Sum('deposit_amount')).get('deposited_this_month') or 0

This assumes your Profile model has a profile field, and that the profile field is a foreign key to the Account model.
{ "language": "en", "url": "https://stackoverflow.com/questions/75633823", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Convert the Postgres query to a JPA native query

Actual query:

select COALESCE(sum(infra_evt_count), 0)
     + COALESCE(sum(infra_res_count), 0)
     + COALESCE(sum(user_req_count), 0)
     + COALESCE(sum(user_res_count), 0)
from dashboard.application_incident_summary_t
where date between '2023-01-23' and '2023-01-29'
  and app_id = 7

My attempt:

@Query(value = "SELECT COALESCE(SUM(a.infraEvtCount), 0) "
        + "+ COALESCE(SUM(a.infraResCount), 0) "
        + "+ COALESCE(SUM(a.userReqCount), 0) "
        + "+ COALESCE(SUM(a.userResCount), 0) "
        + "FROM dashboard.APPLICATION_INCIDENT_SUMMARY_T a "
        + "WHERE a.date BETWEEN :startDate AND :endDate AND a.app_id = ?3", nativeQuery = true)
public int findReportedCount(@Param("startDate") Date startDate,
        @Param("endDate") Date endDate,
        @Param("appId") int appId);

This is the exception when I'm trying to run the code:

org.springframework.beans.factory.UnsatisfiedDependencyException:
{ "language": "en", "url": "https://stackoverflow.com/questions/75633824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Cloudbuild unable to decrypt using key 404 I am attempting to set up a simple Rails application using GCP Cloudbuild, and am running into a error message that indicates a failure to decrypt a variable due to the kms key not being found. Currently, when running the build gcloud builds submit --config cloudbuild.yaml, I get the following error: ERROR: build step 2 "gcr.io/cloud-builders/docker" failed: Failed to decrypt "DB_PWD" using key "projects/onlydrams/locations/us-central1/keyRings/onlydrams/cryptoKeys/db_pwd_key/cryptoKeyVersions/1": googleapi: got HTTP response code 404 with body: <!DOCTYPE html> The Google Cloud Build IAM role that is generated from authorizing the API in the account has the following roles assigned to it: Cloud Build Service Account Cloud KMS Admin Cloud KMS CryptoKey Decrypter The part that is most confusing to me, is if it were a problem with the role assignments and their permissions are missing some role - I would expect a 403 or 401, but in this case it is a 404. In the codebuild.yaml file, under availableSecrets, the kmsKeyName is being copied directly from the Cloud Console, but it seems with that link that is auto generated a 404 occurs. steps: # Build image with tag 'latest' and pass decrypted Rails DB password as argument - name: 'gcr.io/cloud-builders/docker' args: ['build', '--tag', 'gcr.io/onlydrams/onlydrams:latest', '--build-arg', 'DB_PWD', '.'] secretEnv: ['DB_PWD'] # Push new image to Google Cloud Registry - name: 'gcr.io/cloud-builders/docker' args: ['push', 'gcr.io/onlydrams/onlydrams:latest'] availableSecrets: inline: - kmsKeyName: projects/onlydrams/locations/us-central1/keyRings/onlydrams/cryptoKeys/db_pwd_key/cryptoKeyVersions/1 envMap: DB_PWD: "CiQAYGWAVuMg5wxnkgWjKH07iWxR+GBD/wYE1YAcgYDa5nAPADwSOQDtVRn4Aj5LAMl5V0YiEnwJ48cd3RqG3lk4MN4IzhUyPIvKIZUtj5uKOVA86VbnzOaPxKNDPFUGIw==" Is there a particular reason this Cloudbuild run might not have access or be able to find the key being references in the step calling it? 
A: I think it's because you're passing the key version when it's asking for the key name. Try using just projects/onlydrams/locations/us-central1/keyRings/onlydrams/cryptoKeys/db_pwd_key
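One way to sanity-check the value before pasting it into cloudbuild.yaml is to strip any trailing version segment off the resource path. This is a rough Python sketch; the helper name is made up for illustration:

```python
def kms_key_name(resource_path: str) -> str:
    """Drop a trailing /cryptoKeyVersions/N so only the key name remains."""
    parts = resource_path.split("/")
    if "cryptoKeyVersions" in parts:
        parts = parts[: parts.index("cryptoKeyVersions")]
    return "/".join(parts)

full = ("projects/onlydrams/locations/us-central1/keyRings/onlydrams"
        "/cryptoKeys/db_pwd_key/cryptoKeyVersions/1")
print(kms_key_name(full))
# projects/onlydrams/locations/us-central1/keyRings/onlydrams/cryptoKeys/db_pwd_key
```

A path that already ends at the key name passes through unchanged.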
{ "language": "en", "url": "https://stackoverflow.com/questions/75633825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: I have a question about Mat-Select in Angular I've imported all the necessary modules: import { MatSelectModule } from "@angular/material/select"; import { FormsModule, ReactiveFormsModule } from "@angular/forms"; import { MatFormFieldModule } from "@angular/material/form-field"; And created this sample code: <mat-form-field appearance="fill"> <mat-label>Select an option</mat-label> <mat-select> <mat-option>None</mat-option> <mat-option value="option1">Option 1</mat-option> <mat-option value="option2">Option 2</mat-option> <mat-option value="option3">Option 3</mat-option> </mat-select> </mat-form-field> When I test it, the rendering is off (screenshots of the closed and opened states omitted). The problem is that when I select a value, the panel with the available options doesn't close, and when I click outside, the behavior is the same. Also, if I scroll the page, the options panel scrolls with the screen. In addition, when I select a value or click outside, pointer-events: none; is added to the style of the options panel's div (screenshot with pointer events disabled omitted).
{ "language": "en", "url": "https://stackoverflow.com/questions/75633828", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to get package hints in PowerShell when you want to download a package I saw that when you want to download a package, a hint is shown: you press Tab and you don't need to write out the whole command. How can I do this in PowerShell?
{ "language": "en", "url": "https://stackoverflow.com/questions/75633829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: POST request from Python to Node.js server I am trying to send a POST request from Python to my Node.js server. I can successfully do it from client-side JS to the Node.js server using the fetch API, but how can I achieve this with Python? What I tried below sends the POST request successfully, but the data/body attached to it is not reaching the server. What am I doing wrong and how can I fix it? Thanks in advance. NOTE: All my Node.js routes are set up correctly and work fine! //index.js 'use strict'; const express = require('express') const app = express() const PORT = 5000 app.use('/js', express.static(__dirname + '/public/js')) app.use('/css', express.static(__dirname + '/public/css')) app.set('view engine', 'ejs') app.set('views', './views') app.use(cookie()) app.use(express.json({ limit: '50mb' })); app.use('/', require('./routes/pages')) app.use('/api', require('./controllers/auth')) app.listen(PORT, '127.0.0.1', function(err) { if (err) console.log("Error in server setup") console.log("Server listening on Port", '127.0.0.1', PORT); }) //server file //served on http://127.0.0.1:5000/api/server const distribution = async(req, res) => { //prints an empty object console.log(req.body) } module.exports = distribution; //auth const express = require('express') const server = require('./server') const router = express.Router() router.post('/server', server) module.exports = router; //routes const express = require('express') const router = express.Router() router.get('/', loggedIn, (req, res) => { res.render('test', { status: 'no', user: 'nothing' }) }) #python3 import requests API_ENDPOINT = "http://127.0.0.1:5000/api/server" data = '{"test": "testing"}' response = requests.post(url = API_ENDPOINT, data = data) print(response) A: Since you are manually passing the request body as a string, you may also need to specify the Content-Type so that the Express middleware can recognise and parse it as JSON.
See the express.json documentation: Returns middleware that only parses JSON and only looks at requests where the Content-Type header matches the type option. This parser accepts any Unicode encoding of the body and supports automatic inflation of gzip and deflate encodings. You could do it like this: headers = {'Content-type': 'application/json'} data = '{"test": "testing"}' response = requests.post(url = API_ENDPOINT, data = data, headers = headers) A better idea is to use (if your requests version supports it) the json parameter instead of data, as shown in How to POST JSON data with Python Requests?, and let the requests framework set the correct header for you: data = {'test': 'testing'} response = requests.post(url = API_ENDPOINT, json = data) A: Have you tried passing the request payload directly as a JSON object instead of converting it to a string? As @pqnet has mentioned, Python's requests library will automatically add the Content-Type header to your POST request. import requests API_ENDPOINT = "http://127.0.0.1:5000/api/server" data = {"test": "testing"} response = requests.post(url = API_ENDPOINT, json = data) print(response)
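The same point holds without the requests library: if you serialize the body yourself with the standard library, you must attach the Content-Type header yourself too. A minimal sketch that only builds the request (the endpoint from the question is not assumed to be reachable here, so nothing is actually sent):

```python
import json
import urllib.request

payload = json.dumps({"test": "testing"}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:5000/api/server",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# express.json() will only parse the body because this header is present.
# urllib stores header names capitalized, hence the lookup key below.
print(req.get_header("Content-type"))  # application/json
```

Sending it would then be a matter of `urllib.request.urlopen(req)` once the server is running.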
{ "language": "en", "url": "https://stackoverflow.com/questions/75633830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to Specify Source Time Zone When Converting Between Two Time Zones I'm working with the following time zone code: using System.Globalization; DateTime sourceDt = Convert.ToDateTime("2023-09-05T08:00:00"); DateTime sourceUtc = sourceDt.ToUniversalTime(); var tz = TimeZoneInfo.FindSystemTimeZoneById("Central Standard Time"); var tzTime = TimeZoneInfo.ConvertTimeFromUtc(sourceUtc, tz); Console.WriteLine("Time1= " + tzTime.ToString("hh:mm tt", CultureInfo.InvariantCulture)); var tz2 = TimeZoneInfo.FindSystemTimeZoneById("Eastern Standard Time"); var tz2Time = TimeZoneInfo.ConvertTimeFromUtc(sourceUtc, tz2); Console.WriteLine("Time2= " + tz2Time.ToString("hh:mm tt",CultureInfo.InvariantCulture)); Time1 outputs 07:00 AM and Time2 outputs 08:00 AM. I'm assuming this is because I'm in EST. Is there a way I can specify what time zone sourceDt is in? For example, I would like sourceDt to be in CST and have Time1 output 08:00 AM and Time2 output 09:00 AM. A: Use the TimeZoneInfo.ConvertTimeToUtc method to convert a local time to UTC time while specifying the source time zone. var tz = TimeZoneInfo.FindSystemTimeZoneById("Central Standard Time"); var tz2 = TimeZoneInfo.FindSystemTimeZoneById("Eastern Standard Time"); DateTime sourceDt = Convert.ToDateTime("2023-09-05T08:00:00"); DateTime sourceUtc = TimeZoneInfo.ConvertTimeToUtc(sourceDt, tz); var tzTime = TimeZoneInfo.ConvertTimeFromUtc(sourceUtc, tz); Console.WriteLine("Time1= " + tzTime.ToString("hh:mm tt", CultureInfo.InvariantCulture)); var tz2Time = TimeZoneInfo.ConvertTimeFromUtc(sourceUtc, tz2); Console.WriteLine("Time2= " + tz2Time.ToString("hh:mm tt", CultureInfo.InvariantCulture)); Console.ReadLine(); Output: Time1= 08:00 AM Time2= 09:00 AM A: A DateTime doesn't contain time zone information; it only has a Kind property, which can be UTC, Local or Unspecified. Is there a way I can specify what time zone sourceDt is in?
So the answer is no. If you want Time1 to output 08:00 AM, just create a datetime like this: var tzTime = Convert.ToDateTime("2023-09-05T08:00:00"); What you can do is assume that this datetime is in CST, and then derive the local time in the reverse direction: var sourceUtc = TimeZoneInfo.ConvertTimeToUtc(tzTime, tz); var sourceDt = sourceUtc.ToLocalTime(); In fact, if the Kind property of a DateTime object is Unspecified, you can treat the datetime as being in any time zone you wish.
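For comparison, the same fix expressed with Python's standard-library zoneinfo (Python 3.9+): the naive wall-clock time is explicitly tagged with its source zone before converting, with IANA zone names standing in for the Windows time-zone IDs used in the C# code:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Interpret the naive wall-clock time as US Central, then view it in Eastern.
source = datetime(2023, 9, 5, 8, 0, 0, tzinfo=ZoneInfo("America/Chicago"))
eastern = source.astimezone(ZoneInfo("America/New_York"))
print(source.strftime("%I:%M %p"), "->", eastern.strftime("%I:%M %p"))
# 08:00 AM -> 09:00 AM
```

The key step is the same in both languages: attach the source zone to the timestamp before converting, rather than letting the runtime assume the machine's local zone.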
{ "language": "en", "url": "https://stackoverflow.com/questions/75633833", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Can't add folders with [] in their names to gitignore I have a folder structure with names like this: [exampel] / [example2] / folder / file.lua In GitHub Desktop, when I right-click this file and choose to gitignore it, GitHub adds this to .gitignore: +\[exampel\]/\[example2\]/folder /json/data.json When I try to add a rule manually, like **\folder, it doesn't work; none of them do. I can't ignore anything in folders that have [] in their names. A: You can use a backslash before the square brackets to escape them. For example, to ignore the "[example]" folder with square brackets in its name, you can add the following line to your .gitignore file: \[example\]/ If you want to ignore all folders with square brackets in their names, you can use this: *\[*/
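The reason the escaping is needed: .gitignore patterns use glob syntax, where square brackets form a character class that matches a single character rather than literal brackets. Python's fnmatch uses the same bracket syntax, so it can illustrate the behaviour:

```python
import fnmatch

# '[example]' is a character class: it matches any ONE of e, x, a, m, p, l,
# not the nine-character literal name "[example]".
print(fnmatch.fnmatchcase("e", "[example]"))          # True
print(fnmatch.fnmatchcase("[example]", "[example]"))  # False
```

That is why the unescaped pattern silently matches nothing useful, while `\[example\]/` matches the actual folder name.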
{ "language": "en", "url": "https://stackoverflow.com/questions/75633834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Oracle UPDATE statement targets 4 columns but only 2 get updated. Why? I am trying to create a PL/SQL package, SQL_LOGGING_PKG, that contains an "execute_sql" public procedure that executes a SQL string passed to it using dynamic SQL. The intent is for the procedure to log the SQL before it is executed and then, after the SQL completes, update the log record with the number of records impacted and the execution end time. I want the logging DML operations to be completely independent of the execution of the SQL. If there is any issue with logging, I just want to disregard the error. I also want the log operations to be committed to the database independently of the main transaction, which is associated with the execution of the SQL passed in. I want the caller of the procedure to control the committing of the SQL that they requested to be executed. In my example below, I intentionally pass in invalid SQL by referencing a column that doesn't exist. When I examine the SQL_LOG table, I see that the original SQL_LOG record is inserted and then an update is performed on the record that was inserted. I know that the SQL_LOG record was updated because I see that ROWS_AFFECTED was set to -1 and LAST_UPDT_TS was updated, but the BACK_TRACE and ERROR_MESSAGE columns remain NULL. I tried to hard-code a literal in for these fields just to simplify the update, but the column values remain null. All the code is below. Please also feel free to make other suggestions, especially if I am handling my transactions/commits incorrectly. I am a noob.
CREATE TABLE SQL_LOG ( SQL_LOG_ID NUMBER, SQL_TEXT CLOB, SESSION_ID VARCHAR2(256 BYTE), ROWS_AFFECTED NUMBER, ERROR_MESSAGE VARCHAR2(4000 BYTE), BACK_TRACE CLOB, CRET_OPER_ID VARCHAR2(20 BYTE), CRET_TS TIMESTAMP(6) DEFAULT CURRENT_TIMESTAMP, UPDT_OPER_ID VARCHAR2(20 BYTE), LAST_UPDT_TS TIMESTAMP(6), PRIMARY KEY (SQL_LOG_ID) USING INDEX TABLESPACE FDS_DATA) TABLESPACE FDS_DATA STORAGE ( INITIAL 64K MAXEXTENTS UNLIMITED ) LOGGING; CREATE SEQUENCE SQL_LOGS_SEQ START WITH 81 INCREMENT BY 1; DECLARE l_P_SQL CLOB; BEGIN -- Variable initializations l_P_SQL := 'UPDATE COUNTRIES SET INVALIDCOL = COUNTRY_NAME'; -- Call SQL_LOGGING_PKG.EXECUTE_SQL (P_SQL => l_P_SQL); COMMIT; END; CREATE OR REPLACE PACKAGE SQL_LOGGING_PKG IS PROCEDURE execute_sql (p_sql IN CLOB); END SQL_LOGGING_PKG; / CREATE OR REPLACE PACKAGE BODY SQL_LOGGING_PKG AS FUNCTION insert_log ( p_sql_text IN CLOB ) RETURN INTEGER IS PRAGMA autonomous_transaction; v_row_count INTEGER; v_is_log_all_sql CHAR(1); v_sql_logs_seq INTEGER; v_inserted_log CHAR(1) := 'N'; BEGIN v_sql_logs_seq := sql_logs_seq.nextval; INSERT INTO SQL_LOG ( SQL_LOG_ID, SQL_TEXT, SESSION_ID, ROWS_AFFECTED, ERROR_MESSAGE, BACK_TRACE ) VALUES ( v_sql_logs_seq, p_sql_text, SYS_CONTEXT('USERENV', 'SID'), NULL, NULL, NULL ); COMMIT; RETURN v_sql_logs_seq; EXCEPTION WHEN OTHERS THEN NULL; --Eat logging errors END; PROCEDURE update_log ( p_sql_log_id IN INTEGER ,p_row_count IN INTEGER ) IS PRAGMA autonomous_transaction; v_row_count INTEGER; v_is_log_all_sql CHAR(1); v_sql_logs_seq INTEGER; v_inserted_log CHAR(1) := 'N'; BEGIN UPDATE SQL_LOG L SET L.ROWS_AFFECTED = p_row_count, L.LAST_UPDT_TS = current_timestamp WHERE L.SQL_LOG_ID = p_sql_log_id; COMMIT; EXCEPTION WHEN OTHERS THEN NULL; --Eat logging errors END; PROCEDURE log_error ( p_sql_log_id IN INTEGER ,error_message IN VARCHAR2 ,back_trace IN CLOB ) IS PRAGMA autonomous_transaction; v_row_count INTEGER; v_is_log_all_sql CHAR(1); v_sql_logs_seq INTEGER; v_inserted_log CHAR(1) := 'N'; BEGIN UPDATE 
SQL_LOG L SET L.ROWS_AFFECTED = -1, L.ERROR_MESSAGE = error_message, L.BACK_TRACE = back_trace, L.LAST_UPDT_TS = current_timestamp WHERE L.SQL_LOG_ID = p_sql_log_id; v_row_count := SQL%rowcount; DBMS_OUTPUT.PUT_LINE ('log_error.v_row_count=' || v_row_count); COMMIT; EXCEPTION WHEN OTHERS THEN NULL; --Eat logging errors END; PROCEDURE execute_sql (p_sql IN CLOB) IS v_row_count INTEGER; v_is_log_all_sql CHAR(1); v_sql_log_id INTEGER; BEGIN v_is_log_all_sql := 'Y'; --For now IF v_is_log_all_sql = 'Y' THEN BEGIN v_sql_log_id := insert_log(p_sql); EXCEPTION WHEN OTHERS THEN NULL; --Eat logging errors END; END IF; EXECUTE IMMEDIATE p_sql; v_row_count := SQL%rowcount; -- Get the number of rows affected IF v_is_log_all_sql = 'Y' THEN update_log(v_sql_log_id, v_row_count); END IF; EXCEPTION WHEN OTHERS THEN --log_error(v_sql_log_id, dbms_utility.format_error_stack(), dbms_utility.format_error_backtrace()); log_error(v_sql_log_id, 'stack', 'back'); RAISE; END; END; --SQL_LOGGING_PKG; / Thanks for your help. A: I had to better differentiate the column name from the value being assigned. Time to go to bed! Sorry. But I am still open to other suggestions! I will consider all comments and be grateful for them. PROCEDURE log_error ( p_sql_log_id IN INTEGER ,p_error_message IN VARCHAR2 ,p_back_trace IN CLOB ) IS PRAGMA autonomous_transaction; v_row_count INTEGER; v_is_log_all_sql CHAR(1); v_sql_logs_seq INTEGER; v_inserted_log CHAR(1) := 'N'; BEGIN UPDATE SQL_LOG L SET L.ROWS_AFFECTED = -1, L.ERROR_MESSAGE = p_error_message, L.BACK_TRACE = p_back_trace, L.LAST_UPDT_TS = CURRENT_TIMESTAMP WHERE L.SQL_LOG_ID = p_sql_log_id; A: This will provide a functional starting point for you. I added a few additional fields you probably want. Of course you can take the same concept and break them out into separate named procedures with parameter lists if you prefer. This is simply the most compact way to demonstrate how to do it. 
Also I am inserting into the log only at the end rather than inserting at the start and then updating. You could do it either way. create table tab1(col1 integer,col2 integer); -- test table / CREATE TABLE SQL_LOG ( SQL_LOG_ID NUMBER, SQL_TEXT CLOB, SESSION_ID VARCHAR2(256 BYTE), ROWS_AFFECTED NUMBER, error_number integer, error_message VARCHAR2(255 BYTE), error_stack varchar2(4000 BYTE), error_backtrace varchar2(4000 BYTE), call_stack varchar2(4000 BYTE), execution_duration_interval interval day to second, CRET_OPER_ID VARCHAR2(20 BYTE), CRET_TS TIMESTAMP(6) DEFAULT CURRENT_TIMESTAMP, UPDT_OPER_ID VARCHAR2(20 BYTE), LAST_UPDT_TS TIMESTAMP(6), PRIMARY KEY (SQL_LOG_ID) ) / create sequence s_sql_log_id / DECLARE PROCEDURE p_exec_dml (in_clob IN CLOB) AS var_start_time timestamp with time zone; var_end_time timestamp with time zone; var_error_number sql_log.error_number%TYPE; var_error_message sql_log.error_message%TYPE; var_error_stack sql_log.error_stack%TYPE; var_error_backtrace sql_log.error_backtrace%TYPE; var_call_stack sql_log.call_stack%TYPE; var_rows_affected sql_log.rows_affected%TYPE; var_execution_duration_interval sql_log.execution_duration_interval%TYPE; PROCEDURE pl_log AS PRAGMA AUTONOMOUS_TRANSACTION; -- perform the logging insert in a separate transaction so we can commit it without impacting the original transaction BEGIN INSERT INTO sql_log (sql_log_id, sql_text, session_id, rows_affected, error_number, error_message, error_stack, error_backtrace, call_stack, execution_duration_interval) VALUES (s_sql_log_id.NextVal, in_clob, SYS_CONTEXT('USERENV', 'SID'), var_rows_affected, var_error_number, var_error_message, var_error_stack, var_error_backtrace, var_call_stack, var_execution_duration_interval); COMMIT; END pl_log; BEGIN BEGIN var_start_time := current_timestamp; EXECUTE IMMEDIATE in_clob; -- execute the DML var_end_time := current_timestamp; var_execution_duration_interval := var_end_time - var_start_time; var_rows_affected := SQL%ROWCOUNT; 
pl_log; -- log success EXCEPTION WHEN OTHERS THEN var_end_time := current_timestamp; var_rows_affected := -1; var_execution_duration_interval := var_end_time - var_start_time; var_error_number := SQLCODE; var_error_message := SQLERRM; var_error_stack := dbms_utility.format_error_stack; var_error_backtrace := dbms_utility.format_error_backtrace; var_call_stack := dbms_utility.format_call_stack; pl_log; -- log failure RAISE; -- must cause the initiating call to fail END; END p_exec_dml; BEGIN /* test random DML calls */ p_exec_dml('insert into tab1 values (1,1)'); p_exec_dml('insert into tab1 values (1,2)'); p_exec_dml('update tab1 set col1 = 3 where col2 = 2'); p_exec_dml('update tab1 set col5 = 3'); -- this one will fail COMMIT; EXCEPTION WHEN OTHERS THEN ROLLBACK; -- this will undo all the above work, but not the logging RAISE; END; The main disadvantage to encapsulating SQL like this is the inability to handle RETURNING clauses, and the difficulty of handling bind variables. You can enhance this for bind variables, but the typing is tricky... you'd need to pass in a collection type of records with multiple datatype fields so it can support native types without casting everything to string, then transform that into a USING clause and execute the whole thing dynamically (execute immediate within an execute immediate). It will get ugly, but is doable. I typically use this kind of logging mechanism for DDL, not for DML, so these complications don't arise for me. Another thing to consider is whether you even need successful DMLs logged at all. If the errors are the main thing you're after, you would be more successful using a database-level system error trigger (CREATE OR REPLACE TRIGGER .. AFTER SERVERERROR ON DATABASE...) which can log all the same kind of error info, including the SQL (in most cases), session identity, etc... then you don't restrict the flexibility of the DMLs... 
RETURNING and binds would be unhampered by the constraints of any centralized dynamic SQL.
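The autonomous-transaction pattern above can be mimicked in other databases by giving the logger its own independent connection, so that rolling back the main work does not roll back the log rows. A rough Python/sqlite3 sketch of that idea (the table names are made up, and sqlite3 is only a stand-in for Oracle here):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# A second, independent connection plays the role of the autonomous
# transaction: its commits are unaffected by the main transaction.
log_conn = sqlite3.connect(path)
log_conn.execute("CREATE TABLE sql_log (sql_text TEXT, rows_affected INTEGER)")
log_conn.commit()

main = sqlite3.connect(path)  # the caller's own transaction
main.execute("CREATE TABLE tab1 (col1 INTEGER)")
main.commit()

sql = "INSERT INTO tab1 VALUES (1)"

# Log first on the independent connection and commit it immediately...
log_conn.execute("INSERT INTO sql_log VALUES (?, ?)", (sql, None))
log_conn.commit()

# ...then run the caller's SQL and roll it back, simulating a failed batch.
main.execute(sql)
main.rollback()

# The work is gone, but the log row survives.
print(main.execute("SELECT COUNT(*) FROM tab1").fetchone()[0])         # 0
print(log_conn.execute("SELECT COUNT(*) FROM sql_log").fetchone()[0])  # 1
```

Logging before executing, as both the question and the answers do, has the nice property that even a statement that crashes the session still leaves a record behind.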
{ "language": "en", "url": "https://stackoverflow.com/questions/75633835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Release date for OpenPDF 1.3.31? Does anyone know when OpenPDF 1.3.31 will be released? I am running into this bug, which has a fix scheduled to be in that release: Bug 823
{ "language": "en", "url": "https://stackoverflow.com/questions/75633836", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: ASP.NET Core Identity multiple user types I'm trying to build an application where there are two types of users, staff and customers. I need the staff data and customer data to be separate. What is the best approach for creating the tables? Do I need to create one table for all the user types, or separate tables for staff and customers? I haven't started to build the app, as I'm not sure of the approach.
{ "language": "en", "url": "https://stackoverflow.com/questions/75633839", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Change id of child elements from a list using jQuery I am adding options to a select tag from a dictionary using jQuery, and now I want to set the title attribute of each element according to the keys of the dictionary. Can anyone tell me the solution for doing so? JQuery function addOptions(){ var dict = { key1: "val1", key2: "val2", key3: "val3", key4: "val4" } $.each(dict, function(val, text){ $('#mySelect').append($('<option></option>').val(val).html(text)); }); HTML <div class="row" style="padding-top: 20px;"> <div class="col-3"> <select id="mySelect"> <option value="select">--Select--</option> </select> <br><br> <button class="btn btn-sm btn-primary" type="button" id="add" onclick="addOptions()">Add Values in Dropdown </button> </div> </div> I tried to select the children of the select and then set the title attribute to the keys, but all the elements are getting the last key as their title in the same loop. I want each option to have its key as its title. $('#mySelect').children('option').attr({"title":val}); A: For this you can use the addOptions function like this; it sets the title attribute of each option element from the keys of the dictionary: function addOptions(){ var dict = { key1: "val1", key2: "val2", key3: "val3", key4: "val4" } $.each(dict, function(val, text){ $('#mySelect').append($('<option></option>').val(val).html(text).attr('title', val)); }); } A: You'll have less DOM traversal if you start with your select or at least externalize the variable. End result might look something like let dict = { key1: "val1", key2: "val2", key3: "val3", key4: "val4" } let elem = $('#mySelect'); Object.entries(dict).forEach(([key, val]) => elem.append($('<option></option>') .val(key).html(val).attr('title', key))) <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <select id="mySelect"></select>
{ "language": "en", "url": "https://stackoverflow.com/questions/75633840", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Seeking assistance with the integration of the Emotiv Cortex V3 Unity plugin into the Unity3D platform I'm new to the programming field and would like to seek assistance or guidance on the necessary steps for integrating the Emotiv Cortex V3 Unity plugin, which can be found on Emotiv's GitHub, into my Unity3D platform. After reading the README located in Emotiv's repo, I'm still extremely lost on what to do to integrate them. As such, if there's someone who understands it or has experience working with it, please illustrate the implementation procedure. Something to note is that ever since the Cortex API was renewed in 2021, a lot of functionality appears overcomplicated to beginners like me, and the support materials provided are out of date. Even though I have reached out to the Emotiv team, I have yet to receive any response from them to date. As I'm not knowledgeable about the version control mechanisms that Unity allows, I am afraid to test out the integration without adequate knowledge or understanding, as any unnecessary testing may break my development. I have also gone through the example that Emotiv has provided, but didn't get any clear guidance on how it can be integrated into my workspace.
{ "language": "en", "url": "https://stackoverflow.com/questions/75633841", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: GTKTerm and ELM327 interface not communicating I am trying to send serial data to a USB-connected ELM327 interface. The port setup is correct, since when I unplug and then plug the interface into my Ubuntu system, the initial message from the interface is displayed correctly. However, any ELM command I send is answered with a "?", which in the ELM protocol means it did not understand the message. The same thing happens with Minicom. However, I am able to send and receive using Python pyserial, and I also have no issues when using YAT on Windows. Any clues on what is happening?
{ "language": "en", "url": "https://stackoverflow.com/questions/75633843", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Shared Storage Between Browser and Windows Worker Service I was looking for a way to share data between my browser and a Windows worker service. I want to have a shared storage place where a Blazor WASM application writes a simple GUID and a C# Windows worker service reads it. Both the worker service and the Blazor WASM app are running on the same machine. How can I do it?
{ "language": "en", "url": "https://stackoverflow.com/questions/75633844", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is there an efficient way to determine if a sum of floats will be order invariant? Due to precision limitations in floating point numbers, the order in which numbers are summed can affect the result. >>> 0.3 + 0.4 + 2.8 3.5 >>> 2.8 + 0.4 + 0.3 3.4999999999999996 This small error can become a bigger problem if the results are then rounded. >>> round(0.3 + 0.4 + 2.8) 4 >>> round(2.8 + 0.4 + 0.3) 3 I would like to generate a list of random floats such that their rounded sum does not depend on the order in which the numbers are summed. My current brute force approach is O(n!). Is there a more efficient method? import random import itertools import math def gen_sum_safe_seq(func, length: int, precision: int) -> list[float]: """ Return a list of floats that has the same sum when rounded to the given precision regardless of the order in which its values are summed. """ invalid = True while invalid: invalid = False nums = [func() for _ in range(length)] first_sum = round(sum(nums), precision) for p in itertools.permutations(nums): if round(sum(p), precision) != first_sum: invalid = True print(f"rejected {nums}") break return nums for _ in range(3): nums = gen_sum_safe_seq( func=lambda :round(random.gauss(3, 0.5), 3), length=10, precision=2, ) print(f"{nums} sum={sum(nums)}") For context, as part of a programming exercise I'm providing a list of floats that model a measured value over time to ~1000 entry-level programming students. They will sum them in a variety of ways. Provided that their code is correct, I'd like for them all to get the same result to simplify checking their code. I do not want to introduce the complexities of floating point representation to students at this level. A: Not that I know of, but a practical approach is to use math.fsum() instead. While some platforms are perverse nearly beyond repair, on most platforms fsum() returns the infinitely-precise result subject to a single rounding error at the end. 
Which means the final result is independent of the order in which elements are given. For example, >>> from math import fsum >>> from itertools import permutations >>> for p in permutations([0.3, 0.4, 2.8]): ... print(p, fsum(p)) (0.3, 0.4, 2.8) 3.5 (0.3, 2.8, 0.4) 3.5 (0.4, 0.3, 2.8) 3.5 (0.4, 2.8, 0.3) 3.5 (2.8, 0.3, 0.4) 3.5 (2.8, 0.4, 0.3) 3.5 Python's fsum() docs go on to point to slower ways that are more robust against perverse platform quirks. Arguably silly Here's another approach: fiddle the numbers you generate, clearing enough low-order bits so that no rounding of any kind is ever needed no matter how an addition tree is arranged. I haven't thought hard about this - it's not worth the effort ;-) For a start, I haven't thought about negative inputs at all. def crunch(xs): from math import floor, ulp, ldexp if any(x < 0.0 for x in xs): raise ValueError("all elements must be >= 0.0") target_ulp = ldexp(ulp(max(xs)), len(xs).bit_length()) return [floor(x / target_ulp) * target_ulp for x in xs] Then, e.g., >>> xs = crunch([0.3, 0.4, 2.8]) >>> for x in xs: ... print(x, x.hex()) 0.29999999999999893 0x1.3333333333320p-2 0.3999999999999986 0x1.9999999999980p-2 2.799999999999999 0x1.6666666666664p+1 The decimal values are "a mess", because, from the hex values, you can see that the binary values reliably have enough low-order 0 bits to absorb any shifts that may be needed during a sum. The order of summation makes no difference then: >>> for p in permutations(xs): ... 
print(p, sum(p)) (0.29999999999999893, 0.3999999999999986, 2.799999999999999) 3.4999999999999964 (0.29999999999999893, 2.799999999999999, 0.3999999999999986) 3.4999999999999964 (0.3999999999999986, 0.29999999999999893, 2.799999999999999) 3.4999999999999964 (0.3999999999999986, 2.799999999999999, 0.29999999999999893) 3.4999999999999964 (2.799999999999999, 0.29999999999999893, 0.3999999999999986) 3.4999999999999964 (2.799999999999999, 0.3999999999999986, 0.29999999999999893) 3.4999999999999964 and >>> import random, math >>> xs = [random.random() * 1e3 for i in range(100_000)] >>> sum(xs) 49872035.43787267 >>> math.fsum(xs) # different 49872035.43787304 >>> sum(sorted(xs, reverse=True)) # and different again 49872035.43787266 >>> ys = crunch(xs) # now fiddle the numbers >>> sum(ys) # and all three ways are the same 49872035.43712826 >>> math.fsum(ys) 49872035.43712826 >>> sum(sorted(ys, reverse=True)) 49872035.43712826 The good news is that this is obviously linear-time in the number of inputs. The bad news is that more and more trailing bits have to be thrown away, the higher the dynamic range across the inputs, and the more inputs there are. A: You are asking for a numeric error analysis; there is a rich literature on this. In your example you found the relative error was unacceptably large. Plus, you're cramming infinite repeating fractions into a 53-bit mantissa, with predictable truncation issues. Adding numbers of different magnitudes tends to cause trouble. Here, 2.8 is more than 8x 0.3, so we risk losing three bits of precision. You're making this problem much too hard. Simply use decimal values. Or do the equivalent: scale your random floats by some large number, perhaps 1e9, and truncate to integer. Now you're summing integers, with no repeating fractional digits, so we're back to being able to rely on commutativity and associativity. Remember to scale sums appropriately when reporting the results. What you want is "math". 
What you'll get is a "machine representation". So choose a representation that is a good fit for your use case. EDIT Oh, the educational context is illuminating. Avoiding negative round-off errors is crucial. Simply add epsilon = 2 ** -50 to all those round( ..., 3) figures being summed. With epsilon big enough to turn even the negative rounding errors into positive terms, for N numbers with average of mean, your FP sum will be approximately N * mean + N * epsilon, and then the final rounding operation trims the accumulated error. We exploit the facts that input values are in a defined range and N is small, so lots of zero bits separate those two terms. The naive sum of three-digit quantities is d1 + err1 + d2 + err2 + ... + dN + errN, where the errors are + or - rounding errors that come from truncating a repeating fraction at 53 bits. Separating them gives d1 + ... + dN + N * random_var_with_zero_mean. I am proposing d1 + err1 + eps + d2 + err2 + eps + ... which is d1 + ... + dN + N * eps + small_random_error. In particular, eps ensures that we only add positive errors as we accumulate, and by "small" I mean small_random_error < N * eps. from itertools import permutations eps = 2 ** -50 # -52 suffices, but -50 is more obvious during debugging nums = [round(random.gauss(3, 0.5), 3) + eps for _ in range(10)] print(expected := round(sum(nums), 3)) for perm in permutations(nums): assert round(sum(perm), 3) == expected Assume positive values, e.g. uniform draws from the unit interval. Then standard Numeric Analysis advice is to first order the values from smallest to largest, and then sum them. If your students will distribute the summation over K hosts, they should walk the sorted values with stride of K. If we don't need very tight error bounds, then histogram estimates can save a big-Oh log N factor, or can even let you begin computations after a "taste the prefix" operation which takes constant time. 
A: Faster (0.3 seconds instead of your 8 seconds for length 10, and 3.4 seconds for length 12) and considers more ways to sum (not just linear like ((a+b)+c)+d, but also divide&conquer summation like (a+b)+(c+d)). The core part is the sums function, which computes all possible sums. First it enumerates the numbers, so it can use sets without losing duplicate numbers. Then its inner helper sums does the actual work. It tries all possible splits of the given numbers into a left subset and a right subset, computes all possible sums for each, and combines them. import random import itertools import math import functools def sums(nums): @functools.cache def sums(nums): if len(nums) == 1: [num] = nums return {num[1]} result = set() for k in range(1, len(nums)): for left in map(frozenset, itertools.combinations(nums, k)): right = nums - left left_sums = sums(left) right_sums = sums(right) for L in left_sums: for R in right_sums: result.add(L + R) return result return sums(frozenset(enumerate(nums))) def gen_sum_safe_seq(func, length: int, precision: int) -> list[float]: """ Return a list of floats that has the same sum when rounded to the given precision regardless of the order in which its values are summed. """ while True: nums = [func() for _ in range(length)] rounded_sums = { round(s, precision) for s in sums(nums) } if len(rounded_sums) == 1: return nums print(f"rejected {nums}") for _ in range(3): nums = gen_sum_safe_seq( func=lambda :round(random.gauss(3, 0.5), 3), length=10, precision=2, ) print(f"{nums} sum={sum(nums)}") Attempt This Online! A: The easiest way is to create random integers, and then divide (or multiply) them all by the same power of 2. As long as the sum of the absolute values of the original integers fits into 52 bits, then you can add the resulting floats without any rounding errors. 
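A rough sketch of that integer-scaling idea, under the assumption that the scaled values and all partial sums stay well inside the 53 significand bits of a float64, so every addition is exact and the order genuinely cannot matter:

```python
import random
from itertools import permutations

# Each value is a random multiple of 2**-10, small enough that every value
# and every partial sum is exactly representable in a float64, so addition
# here is genuinely associative.
nums = [random.randrange(10_000) / 1024 for _ in range(7)]

target = sum(nums)
assert all(sum(p) == target for p in permutations(nums))
print("every ordering of", len(nums), "values sums to exactly", target)
```

Because the check runs over all 5040 orderings with exact equality (not a tolerance), this is a direct demonstration that the generated list is order-invariant by construction rather than by rejection sampling.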
A: Faster variant of my first answer, this one only checking linear summation like ((a+b)+c)+d, as you do yourself, not also divide&conquer summation like (a+b)+(c+d). Takes me 0.03 seconds for length 10 and 0.12 seconds for length 12.

import random
import functools

def sums(nums):
    @functools.cache
    def sums(nums):
        if len(nums) == 1:
            [num] = nums
            return {num[1]}
        result = set()
        for last in nums:
            before = nums - {last}
            before_sums = sums(before)
            _, R = last
            for L in before_sums:
                result.add(L + R)
        return result
    return sums(frozenset(enumerate(nums)))

def gen_sum_safe_seq(func, length: int, precision: int) -> list[float]:
    """
    Return a list of floats that has the same sum when rounded to the
    given precision regardless of the order in which its values are summed.
    """
    while True:
        nums = [func() for _ in range(length)]
        rounded_sums = {round(s, precision) for s in sums(nums)}
        if len(rounded_sums) == 1:
            return nums
        print(f"rejected {nums}")

for _ in range(3):
    nums = gen_sum_safe_seq(
        func=lambda: round(random.gauss(3, 0.5), 3),
        length=10,
        precision=2,
    )
    print(f"{nums} sum={sum(nums)}")

Attempt This Online!
{ "language": "en", "url": "https://stackoverflow.com/questions/75633851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Extract first "extension" tag in nested xml using pyspark

Spark version = 3.0, with Python.

I am using pyspark and want to read an XML file. There are multiple extension tags and I need only the first one. Extension is of type array; if I explode it, I get multiple rows with nulls. I need only the tags below:

<extension>
<docClass>USCOURTS</docClass>
<accessId>USCOURTS-txsb-2_05-bk-21207</accessId>
<courtType>Bankruptcy</courtType>
<courtCode>txsb</courtCode>
<courtCircuit>5th</courtCircuit>
<courtState>Texas</courtState>
<courtSortOrder>3483</courtSortOrder>
<caseNumber>2:05-bk-21207</caseNumber>
<caseOffice>Corpus Christi</caseOffice>

and

<relatedItem type="constituent" ID="id-USCOURTS-txsb-2_05-bk-21207-1" xlink:href="https://www.govinfo.gov/metadata/granule/USCOURTS-txsb-2_05-bk-21207/USCOURTS-txsb-2_05-bk-21207-1/mods.xml">
<titleInfo>
<title>ASARCO LLC and Official Committee of Asbestos Claimants</title>
<subTitle>Memorandum Opinion And Order of Bankruptcy Judge On Motion For Summary Judgment Regarding Proof Of Claim Number 9464 Filed By Jerome Davis Signed on 8/19/2009. Proof of Claim Number 9464 is hereby DISALLOWED in its entirety. (Related document(s):7542 Objection to Claim, 8142 Generic Motion) (gjon)</subTitle>
<partNumber>1</partNumber>
</titleInfo>

Pyspark code:

df = spark.read.format("xml").option("rowTag", "mods").load("/Users/a/Desktop/USCOURTS-akb-3_15-ap-90018-0.xml")
first_extension = df.select(explode("extension").alias("first_extension"))
first_extension.show(2, False)

Complete XML: https://www.govinfo.gov/metadata/granule/USCOURTS-txsb-2_05-bk-21207/USCOURTS-txsb-2_05-bk-21207-10/mods.xml

A: It seems that there are multiple extensions, so filter them with some condition, explode the related columns and select the distinct rows.
df = spark.read.format('xml').option('rowTag', 'mods').load('mods.xml') df.select('extension', 'relatedItem') \ .withColumn('extension', f.explode('extension')) \ .filter('extension.accessId is not null') \ .withColumn('relatedItem', f.explode('relatedItem')) \ .select( 'extension.docClass', 'extension.accessId', 'extension.courtType', 'extension.courtCode', 'extension.courtCircuit', 'extension.courtState', 'extension.courtSortOrder', 'extension.caseNumber', 'extension.caseOffice', 'relatedItem.titleInfo.*' ) \ .distinct() \ .orderBy('partNumber') \ .show(100, truncate=False) +--------+---------------------------+----------+---------+------------+----------+--------------+-------------+--------------+----------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------
--------------------+ |docClass|accessId |courtType |courtCode|courtCircuit|courtState|courtSortOrder|caseNumber |caseOffice |partNumber|subTitle |title | +--------+---------------------------+----------+---------+------------+----------+--------------+-------------+--------------+----------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------+ |USCOURTS|USCOURTS-txsb-2_05-bk-21207|Bankruptcy|txsb |5th |Texas |3483 |2:05-bk-21207|Corpus Christi|0 |Memorandum Opinion And Order of Bankruptcy Judge Richard Schmidt On Motion To File Proofs Of Claim Out Of Time Signed on 10/3/2008 (gluc, ) |ASARCO LLC and Official Committee of Asbestos Claimants| |USCOURTS|USCOURTS-txsb-2_05-bk-21207|Bankruptcy|txsb |5th |Texas |3483 |2:05-bk-21207|Corpus 
Christi|1 |Memorandum Opinion And Order of Bankruptcy Judge On Motion For Summary Judgment Regarding Proof Of Claim Number 9464 Filed By Jerome Davis Signed on 8/19/2009. Proof of Claim Number 9464 is hereby DISALLOWED in its entirety. (Related document(s):7542 Objection to Claim, 8142 Generic Motion) (gjon) |ASARCO LLC and Official Committee of Asbestos Claimants| |USCOURTS|USCOURTS-txsb-2_05-bk-21207|Bankruptcy|txsb |5th |Texas |3483 |2:05-bk-21207|Corpus Christi|2 |Certified copy of Memorandum Opinion, Order of Confirmation, and Injunction entered. The Court adopts the findings of fact and conclusion of Law subject to any rejections or revisions noted in this opinion. This Court agrees that the Parent's Plan is both feasible and confirmable. It offers the creditors full payment and is more likely to close than the Debtor's Plan. While this court does not minimize the damage that a strike could do should one occur, the Court has a record which supports the findings that a strike is unlikely, and it hopes that reason will prevail and that both sides will decide that mining cooper, while the price remains high, makes money for equity holders and pays good wages for both labor and management. Finally, this Court finds the Original Report and Recommendation not only accurately describes the requirements placed upon it by the Bankruptcy Code, but also finds that the Bankruptcy Court complied with those requirements signed by District Court Judge Andrew S. Hanen in Civil Case No. 
2:09-cv-177 on 11/13/2009 (Related document(s):11884 Order Approving Disclosure Statement, 12040 Report and Recommendation) (Attachments: 1 Continuation of Order2 Continuation of Order3 Continuation of Order) (gjon)|ASARCO LLC and Official Committee of Asbestos Claimants| |USCOURTS|USCOURTS-txsb-2_05-bk-21207|Bankruptcy|txsb |5th |Texas |3483 |2:05-bk-21207|Corpus Christi|3 |Memorandum Opinion and Order on Application of Majority Bondholders Under 11 USC Sections 503(B)(3)(D) and (B)(4) for Payment of Fees and Reimbursement of Expenses for Substantial Contribution Signed on 9/28/2010 (Related document(s):13897 Application for Administrative Expenses) (bcor) |ASARCO LLC and Official Committee of Asbestos Claimants| |USCOURTS|USCOURTS-txsb-2_05-bk-21207|Bankruptcy|txsb |5th |Texas |3483 |2:05-bk-21207|Corpus Christi|4 |Memorandum Opinion on Application of The United States and The States of Texas, Montana and Washington Under 11 USC Sections 503(B)(3)(D) and (B)(4) for Payment of Fees and Reimbursement of Expenses for Substantial Contrinution Signed on 9/29/2010 (Related document(s):13872 Application for Administrative Expenses, 13893 Application for Administrative Expenses, 13912 Application for Administrative Expenses, 13916 Application for Administrative Expenses) (bcor) |ASARCO LLC and Official Committee of Asbestos Claimants| |USCOURTS|USCOURTS-txsb-2_05-bk-21207|Bankruptcy|txsb |5th |Texas |3483 |2:05-bk-21207|Corpus Christi|5 |Memorandum Opinion And Order On Fee Application And Fee Enhancement Motion Of Barclays Capital Inc of Bankruptcy Judge Signed on 12/2/2010 (Related document(s):13389 Generic Application, 13408 Generic Application, 13850 Application for Compensation) (gcha) |ASARCO LLC and Official Committee of Asbestos Claimants| |USCOURTS|USCOURTS-txsb-2_05-bk-21207|Bankruptcy|txsb |5th |Texas |3483 |2:05-bk-21207|Corpus Christi|6 |Memorandum Opinion on Final Fee Application of Baker Botts L.L.P. 
Signed on 7/20/2011 (Related document(s):13915 Application for Compensation) (Attachments: 1 continuation2 continuation) (vrio) |ASARCO LLC and Official Committee of Asbestos Claimants| |USCOURTS|USCOURTS-txsb-2_05-bk-21207|Bankruptcy|txsb |5th |Texas |3483 |2:05-bk-21207|Corpus Christi|7 |Memorandum Opinion on Final Fee Application of Jordan, Hyden, Womble, Culbreth & Holzer, P.C. Signed on 7/20/2011 (Related document(s):13917 Application for Compensation) (vrio) |ASARCO LLC and Official Committee of Asbestos Claimants| |USCOURTS|USCOURTS-txsb-2_05-bk-21207|Bankruptcy|txsb |5th |Texas |3483 |2:05-bk-21207|Corpus Christi|8 |Memorandum Opinion with Respect to: (1) Final Application of Oppenheimer, Blend, Harrison & Tate, Inc. (2) Final Application of Robert C. Pate (3) Joint Application of Robert C. Pate and Oppenheimer, Blend, Harrison & Tate, Inc. (4) All Supplements Thereto. Signed on 7/20/2011 (Related document(s):13883 Application for Compensation, 13886 Application for Compensation) (Attachments: 1 continuation) (vrio) |ASARCO LLC and Official Committee of Asbestos Claimants| |USCOURTS|USCOURTS-txsb-2_05-bk-21207|Bankruptcy|txsb |5th |Texas |3483 |2:05-bk-21207|Corpus Christi|9 |Memorandum Opinion on Final Application of Stutzman, Bromberg, Esserman & Plifka, a Professional Corporation, for Approval of Attorneys' Fees and Expenses Incurred as Counsel for the Official Committee of Asbestos Claimants for the Period from April 11, 2005 Through January 31, 2010, as Amended and Supplemented. 
Signed on 7/20/2011 (Related document(s):13881 Application for Compensation) (vrio) |ASARCO LLC and Official Committee of Asbestos Claimants| |USCOURTS|USCOURTS-txsb-2_05-bk-21207|Bankruptcy|txsb |5th |Texas |3483 |2:05-bk-21207|Corpus Christi|10 |Memorandum Opinion And Order On State Of Missouri's Motion For Summary Judgment In The Contested Matter Of Debtor's Motion To Withhold Signed on 11/13/2013 (Related document(s):16420 Motion for Approval) (gcha) |ASARCO LLC and Official Committee of Asbestos Claimants| +--------+---------------------------+----------+---------+------------+----------+--------------+-------------+--------------+----------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------+
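If only the first extension block is needed and Spark isn't a hard requirement, the same extraction can be done with the standard library. The document below is a trimmed, namespace-free stand-in for the linked mods.xml (the real file declares XML namespaces, which would need to be handled in the find() path):

```python
import xml.etree.ElementTree as ET

# Trimmed stand-in for the linked mods.xml (namespaces omitted for brevity).
doc = """
<mods>
  <extension>
    <docClass>USCOURTS</docClass>
    <accessId>USCOURTS-txsb-2_05-bk-21207</accessId>
    <courtCode>txsb</courtCode>
  </extension>
  <extension>
    <searchTitle>unrelated second extension</searchTitle>
  </extension>
</mods>
"""

root = ET.fromstring(doc)
first = root.find("extension")  # find() returns only the first match
fields = {child.tag: child.text for child in first}
print(fields)
```

find() stops at the first matching child, which is exactly the "first extension tag only" behaviour asked for, with no exploded null rows.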
{ "language": "en", "url": "https://stackoverflow.com/questions/75633853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to go to AWS RDS instance for some work in the directory of RDS server

I want to work directly on the AWS RDS server (MySQL). All I can find is to use the mysql client, but I literally want to work on the RDS server itself, like:

ssh user@{rds ip}

How can I do this?

A: Amazon RDS is a fully-managed database service. You only have access to the Amazon RDS management console, Amazon RDS API calls (that can launch/stop instances, take snapshots, etc.) and the SQL endpoint. You do not have access to the 'server', nor are you given full 'superuser' permissions when connecting to the SQL endpoint.
{ "language": "en", "url": "https://stackoverflow.com/questions/75633854", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: can't scan into dest[2]: cannot scan int4 (OID 23) in binary format into *models.User

type Order struct {
	Id         uint `json:"id"`
	Items      []Item
	Total      float64
	CustomerID int32
	Address    Address
	CreatedAt  time.Time
	UpdatedAt  time.Time
	Status     orderstatus
}

query := `INSERT INTO orders(items, total, address, customer_id, status, created_at)
          VALUES(@orderItems, @orderTotal, @orderAddress, @orderCustomerID, @orderStatus, @orderCreatedAt)
          returning id, total, customer_id;`

trans := pgx.NamedArgs{
	"orderItems":     items,
	"orderTotal":     float64(grandtotal),
	"orderAddress":   address,
	"orderCustomer":  id,
	"orderStatus":    "Pending",
	"orderCreatedAt": time.Now(),
}

err := database.DB.QueryRow(context.Background(), query, trans).Scan(&order.Id, &order.Total, &order.CustomerID)

Whenever I scan the customer ID I get the error saying it can't be converted into a struct. The customer ID references Users(id). I tried writing a struct that declared CustomerID as type User and that failed too. Any help is highly appreciated. #golangbeginner
{ "language": "en", "url": "https://stackoverflow.com/questions/75633855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to have a service worker for a PWA if your 3rd party push notifications supplier already has a service worker?

I currently use Pushwoosh for push notifications in my web app, which I've successfully set up. I'm in the process of converting the web app to a PWA, and want to set up my own service worker so that I can control caching and a few other things. The Pushwoosh service worker needs to be at the root level, and from what I understand there can only be one service worker per scope. If that's the case, what's the done thing when you need your own service worker but you already have one via a third-party service? Would a solution be to download the Pushwoosh one from their CDN, add my own code to the end of it and serve it from my own server, or is there a better solution? Thank you for your time and help.
{ "language": "en", "url": "https://stackoverflow.com/questions/75633856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I fix this React Hook example to have the same functionality as this one written in Classes?

I'm trying to set a minimum width for my columns dynamically when the window resizes. The example on the Kendo website using classes works fine, but the one using hooks does not. Can someone help me figure out what I'm missing, or what the example is missing?

Here's the one working (classes): https://stackblitz.com/edit/react-wwto1g?file=app%2Fmain.jsx

Here's the one not working (hooks): https://codesandbox.io/s/6cs17q?file=/index.js

Try resizing the browser side of the page to see the difference in functionality. Thank you.
{ "language": "en", "url": "https://stackoverflow.com/questions/75633857", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Sequelize js findAll: why does my loop cycle only once?

More precisely, it outputs only one record from the database; it picks the first entry and that's it.

const task = cron.schedule('* * * * *', async () => {
  try {
    let u_sers = await User.findAll();
    for (let i = 0; u_sers.length; i++) {
    }
    console.log('running a task every minute');
  } catch (err) {
    console.log(err);
  }
});

A: You should correct the for condition, which is missing the i < comparison:

for (let i = 0; i < u_sers.length; i++) {
}

And of course you can check u_sers.length and output it to the console to make sure you have more than one record. I'd recommend using for...of for such cases as a safer alternative:

for (const user of u_sers) {
}
{ "language": "en", "url": "https://stackoverflow.com/questions/75633858", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Spherical Coordinates to Cartesian conversion according to the viewing angle (FOV, zoom)

I have a spherical projection of a sky texture made in WebGL, which takes CameraAngleX, CameraAngleY and CameraZoom parameters and calculates the coordinates with this ray-direction function:

vec3 GetRayDirection(vec2 uv, vec3 ro, float zoom) {
    vec3 cf = normalize(-ro);
    vec3 cr = normalize(cross(vec3(0, 1, 0), cf));
    vec3 cu = cross(cf, cr);
    vec3 c = ro + cf * zoom;
    vec3 i = c + uv.x * cr + uv.y * cu;
    vec3 rd = normalize(i - ro);
    return rd;
}

I need to bind some JavaScript elements on top of this projection. When Zoom = 1.0 and there is no distortion, binding with the standard spherical-to-Cartesian formulas works well:

x = r * sin(θ) * cos(φ);
y = r * sin(θ) * sin(φ);
z = r * cos(θ);

But when Zoom gets smaller than 1.0 and wide-angle field-of-view distortion appears, it doesn't work, because under the distorted FOV projection elements in the center of the screen should move much slower than those in the corners of the screen. I am looking for a formula which can do the same binding with a wide-angle FOV. Please advise me some formulas or articles on converting wide-angle spherical coordinates to Cartesian! Thanks.
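For illustration, one way to get the screen position of a world direction under this projection is to invert GetRayDirection itself: since i - ro = zoom*cf + uv.x*cr + uv.y*cu and (cf, cr, cu) is an orthonormal basis, projecting a direction onto that basis recovers the screen coordinates for any zoom, distortion included. A Python sketch (helper names are hypothetical; it assumes, as the shader does, a camera at ro looking at the origin):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def camera_basis(ro):
    cf = normalize(tuple(-c for c in ro))        # forward, toward origin
    cr = normalize(cross((0.0, 1.0, 0.0), cf))   # right
    cu = cross(cf, cr)                           # up
    return cf, cr, cu

def ray_direction(uv, ro, zoom):
    # Forward mapping, mirroring the GLSL GetRayDirection above.
    cf, cr, cu = camera_basis(ro)
    i = tuple(ro[k] + cf[k]*zoom + uv[0]*cr[k] + uv[1]*cu[k] for k in range(3))
    return normalize(tuple(i[k] - ro[k] for k in range(3)))

def screen_uv(rd, ro, zoom):
    # Inverse mapping: rd is proportional to zoom*cf + u*cr + v*cu,
    # so projecting onto the camera basis recovers uv for any zoom.
    cf, cr, cu = camera_basis(ro)
    w = dot(rd, cf)  # must be > 0: direction in front of the camera
    return (zoom * dot(rd, cr) / w, zoom * dot(rd, cu) / w)

ro = (0.0, 0.0, 3.0)
uv = (0.4, -0.2)
rd = ray_direction(uv, ro, zoom=0.7)
print(screen_uv(rd, ro, zoom=0.7))  # recovers (0.4, -0.2) up to float error
```

Because this is the exact algebraic inverse of the shader's mapping, it reproduces whatever distortion the shader applies at any zoom; it is only valid for directions in front of the camera (positive w).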
{ "language": "en", "url": "https://stackoverflow.com/questions/75633859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Loop Pagination on Collection Products Into Small And Big Product Image Sizes

I am new to Liquid (Shopify). I am trying to modify the theme's collection grid, but it is coded using a loop. Can anyone help me edit my code to display my collection page in the following repeating pattern?

Row 1 output: 1st product grid size 1/3-lap-and-up, 2nd product grid size 1/2-lap-and-up
Row 2 output: 3rd product grid size 1/4-lap-and-up, 4th product grid size 1/4-lap-and-up, 5th product grid size 1/4-lap-and-up, 6th product grid size 1/4-lap-and-up
Row 3 output: 7th product grid size 1/2-lap-and-up, 8th product grid size 1/3-lap-and-up
Row 4 output: 9th product grid size 1/4-lap-and-up, 10th product grid size 1/4-lap-and-up, 11th product grid size 1/4-lap-and-up, 12th product grid size 1/4-lap-and-up
Row 5 output: repeat from row 1 again, and so on...

I can only change to one size and apply it to all products in the collection, which is not what I expected. Below is my main-collection Liquid code:

{%- if section.settings.show_layout_switch -%}
{%- assign desktop_items_per_row = cart.attributes.collection_desktop_items_per_row | default: section.settings.grid_desktop_items_per_row | times: 1 -%}
{%- assign mobile_items_per_row = cart.attributes.collection_mobile_items_per_row | default: section.settings.grid_mobile_items_per_row | times: 1 -%}
{%- if desktop_items_per_row >= 3 and desktop_items_per_row != section.settings.grid_desktop_items_per_row -%}
{%- assign desktop_items_per_row = section.settings.grid_desktop_items_per_row -%}
{%- endif -%}
{%- else -%}
{%- assign desktop_items_per_row = section.settings.grid_desktop_items_per_row | times: 1 -%}
{%- assign mobile_items_per_row = section.settings.grid_mobile_items_per_row | times: 1 -%}
{%- endif -%}
{%- if desktop_items_per_row == 4 -%}
{%- assign tablet_items_per_row = 3 -%}
{%- else -%}
{%- assign tablet_items_per_row = 2 -%}
{%- endif -%}
{%- capture collection_inner -%}
{%- comment -%}This is the common part to both
template{%- endcomment -%} {%- if collection.products_count > 0 -%} {%- paginate collection.products by section.settings.grid_items_per_page -%} <div class="ProductListWrapper"> <div class="ProductList ProductList--grid {% if paginate.pages > 1 %}ProductList--removeMargin{% endif %} Grid" data-mobile-count="{{ mobile_items_per_row }}" data-desktop-count="{{ desktop_items_per_row }}"> {%- for product in collection.products -%} <div class="Grid__Cell 1/{{ mobile_items_per_row }}--phone 1/{{ tablet_items_per_row }}--tablet-and-up 1/{{ desktop_items_per_row }}--{% if section.settings.filter_position == 'drawer' %}lap-and-up{% else %}desk{% endif %}"> {%- render 'product-item', product: product, show_product_info: true, show_vendor: section.settings.show_vendor, show_color_swatch: section.settings.show_color_swatch, show_labels: true -%} </div> {%- endfor -%} </div> </div> {%- render 'pagination', paginate: paginate -%} {%- endpaginate -%} {%- else -%} <div class="EmptyState"> <div class="Container"> <h1 class="EmptyState__Title Heading u-h5">{{ 'collection.general.no_products' | t }}</h1> <button class="EmptyState__Action Button Button--primary" data-action="clear-filters" data-url="{{ collection.url }}?sort_by={{ sort_by }}">{{ 'collection.general.reset' | t }}</button> </div> </div> {%- endif -%} {%- endcapture -%} {%- assign sort_by = collection.sort_by | default: collection.default_sort_by -%} {%- assign active_filters_count = 0 -%} {%- for filter in collection.filters -%} {%- if filter.type == 'list' -%} {%- assign active_filters_count = active_filters_count | plus: filter.active_values.size -%} {%- elsif filter.type == 'price_range' and filter.min_value.value or filter.max_value.value -%} {%- assign active_filters_count = active_filters_count | plus: 1 -%} {%- endif -%} {%- endfor -%} {%- capture section_settings -%} { "sectionId": {{ section.id | json }}, "filterPosition": {{ section.settings.filter_position | json }} } {%- endcapture -%} <section 
data-section-id="{{ section.id }}" data-section-type="collection" data-section-settings='{{ section_settings }}'> {%- comment -%} -------------------------------------------------------------------------------------------------------------------- COLLECTION INFO -------------------------------------------------------------------------------------------------------------------- {%- endcomment -%} {%- if collection.all_products_count > 0 -%} {%- if section.settings.show_collection_image and collection.image and collection.template_suffix != 'no-image' -%} <div class="FlexboxIeFix"> <header class="PageHeader PageHeader--withBackground {% if section.settings.collection_image_size != 'normal' %}PageHeader--{{ section.settings.collection_image_size }}{% endif %}" style="background: url({{ collection.image | img_url: '1x1', format: 'jpg' }})"> <div class="PageHeader__ImageWrapper Image--lazyLoad Image--fadeIn {% if section.settings.apply_overlay %}Image--contrast{% endif %}" data-optimumx="1.2" data-bgset="{{ collection.image | img_url: 'x600' }} 600w, {{ collection.image | img_url: '800x' }} 800w, {{ collection.image | img_url: '1200x' }} 1200w, {{ collection.image | img_url: '1400x' }} 1400w, {{ collection.image | img_url: '1600x' }} 1600w"> </div> <noscript> <div class="PageHeader__ImageWrapper {% if section.settings.apply_overlay %}Image--contrast{% endif %}" style="background-image: url({{ collection.image | img_url: '800x' }})"></div> </noscript> {%- if section.settings.show_collection_info -%} <div class="Container"> <div class="SectionHeader SectionHeader--center"> <h1 class="SectionHeader__Heading Heading u-h1">{{ collection.title }}</h1> {%- if collection.description != blank -%} <div class="SectionHeader__Description Rte"> {{- collection.description -}} </div> {%- endif -%} </div> </div> {%- endif -%} </header> </div> {%- elsif section.settings.show_collection_info -%} <header class="PageHeader"> <div class="Container"> <div class="SectionHeader 
SectionHeader--center"> <h1 class="SectionHeader__Heading Heading u-h1">{{ collection.title }}</h1> {%- if collection.description != blank -%} <div class="SectionHeader__Description Rte"> {{- collection.description -}} </div> {%- endif -%} </div> </div> </header> {%- endif -%} {%- endif -%} {%- comment -%} -------------------------------------------------------------------------------------------------------------------- COLLECTION TOOLBAR -------------------------------------------------------------------------------------------------------------------- {%- endcomment -%} {%- if collection.all_products_count > 0 -%} {%- assign show_filters = false -%} {%- assign quick_links = linklists[section.settings.filter_menu] -%} {%- if quick_links != blank or collection.filters != empty and section.settings.show_filters -%} {%- assign show_filters = true -%} {%- endif -%} {%- capture collection_toolbar -%} {%- if show_filters or section.settings.show_sort_by or section.settings.show_layout_switch -%} <div class="CollectionToolbar CollectionToolbar--{{ section.settings.toolbar_position }} {% unless section.settings.show_layout_switch and show_filters == false and section.settings.show_sort_by == false %}CollectionToolbar--reverse{% endunless %}"> {%- if show_filters or section.settings.show_sort_by -%} <div class="CollectionToolbar__Group"> {%- if show_filters -%} <button class="CollectionToolbar__Item CollectionToolbar__Item--filter Heading {% if active_filters_count == 0 %}Text--subdued{% endif %} u-h6 {% if section.settings.filter_position != 'drawer' %}hidden-lap-and-up{% endif %}" data-action="open-drawer" data-drawer-id="collection-filter-drawer" aria-label="{{ 'collection.filter.show_filter' | t }}"> {{ 'collection.filter.title' | t }} {% if active_filters_count > 0 %}<span class="Text--subdued">({{ active_filters_count }})</span>{% endif %} </button> {%- endif -%} {%- if section.settings.show_sort_by -%} <button class="CollectionToolbar__Item 
CollectionToolbar__Item--sort Heading Text--subdued u-h6" aria-label="{{ 'collection.sorting.show_sort' | t }}" aria-haspopup="true" aria-expanded="false" aria-controls="collection-sort-popover"> {{ 'collection.sorting.title' | t }} {% render 'icon' with 'select-arrow' %} </button> {%- endif -%} </div> {%- endif -%} {%- if section.settings.show_layout_switch -%} <div class="CollectionToolbar__Item CollectionToolbar__Item--layout"> <div class="CollectionToolbar__LayoutSwitch hidden-tablet-and-up"> <button aria-label="{{ 'collection.layout.show_one_per_row' | t }}" class="CollectionToolbar__LayoutType {% if mobile_items_per_row == 1 %}is-active{% endif %}" data-action="change-layout-mode" data-grid-type="mobile" data-count="1">{% render 'icon' with 'wall-1' %}</button> <button aria-label="{{ 'collection.layout.show_two_per_row' | t }}" class="CollectionToolbar__LayoutType {% if mobile_items_per_row == 2 %}is-active{% endif %}" data-action="change-layout-mode" data-grid-type="mobile" data-count="2">{% render 'icon' with 'wall-2' %}</button> </div> <div class="CollectionToolbar__LayoutSwitch hidden-phone"> <button aria-label="{{ 'collection.layout.show_two_per_row' | t }}" class="CollectionToolbar__LayoutType {% if desktop_items_per_row == 2 %}is-active{% endif %}" data-action="change-layout-mode" data-grid-type="desktop" data-count="2">{% render 'icon' with 'wall-2' %}</button> <button aria-label="{{ 'collection.layout.show_four_per_row' | t }}" class="CollectionToolbar__LayoutType {% if desktop_items_per_row >= 3 %}is-active{% endif %}" data-action="change-layout-mode" data-grid-type="desktop" data-count="{{ section.settings.grid_desktop_items_per_row }}">{% render 'icon' with 'wall-4' %}</button> </div> </div> {%- endif -%} </div> {%- endif -%} {%- endcapture -%} {%- comment -%} -------------------------------------------------------------------------------------------------------------------- FILTERS AND SORT BY POPOVER 
-------------------------------------------------------------------------------------------------------------------- {%- endcomment -%} {%- if show_filters -%} {%- comment -%} Implementation note: the filters can be displayed in two different ways: in a sidebar menu, always visible, or in a drawer. Due to that, we are setting the general code here, and then we will output it twice. {%- endcomment -%} {%- assign quick_links = linklists[section.settings.filter_menu] -%} {%- capture filters_content -%} {%- if quick_links != empty -%} <div class="Collapsible Collapsible--padded {% if section.settings.expand_filters %}Collapsible--autoExpand{% endif %}" data-filter-index="{% increment filter_index %}"> <button type="button" class="Collapsible__Button Heading u-h6" data-action="toggle-collapsible" aria-expanded="false"> {{- quick_links.title | escape -}} <span class="Collapsible__Plus"></span> </button> <div class="Collapsible__Inner"> <div class="Collapsible__Content"> <ul class="Linklist"> {%- for link in quick_links.links -%} <li class="Linklist__Item {% if link.active %}is-selected{% endif %}"> <a href="{{ link.url }}" class="Link Link--primary Text--subdued {% if link.active %}is-active{% endif %}">{{ link.title | escape }}</a> </li> {%- endfor -%} </ul> </div> </div> </div> {%- endif -%} {%- if section.settings.show_filters and collection.filters != empty -%} {%- assign color_label = 'color,colour,couleur,colore,farbe,색,色,färg,farve' | split: ',' -%} {%- for filter in collection.filters -%} {%- assign downcase_filter_label = filter.label | downcase -%} <div class="Collapsible Collapsible--padded {% if section.settings.expand_filters %}Collapsible--autoExpand{% endif %}" data-filter-index="{% increment filter_index %}"> {%- if filter.type == 'boolean' -%} <div class="Collapsible__Button BooleanFilter"> <label for="{{ filter.param_name }}" class="Heading u-h6">{{- filter.label -}}</label> <input id="{{ filter.param_name }}" type="checkbox" class="switch-checkbox" 
name="{{ filter.param_name }}" value="1" {% if filter.true_value.active %}checked{% endif %}> </div> {%- else -%} <button type="button" class="Collapsible__Button Heading u-h6" data-action="toggle-collapsible" aria-expanded="false"> {{- filter.label -}} <span class="Collapsible__Plus"></span> </button> {%- endif -%} <div class="Collapsible__Inner"> <div class="Collapsible__Content"> <ul class="{% if section.settings.show_filter_color_swatch and color_label contains downcase_filter_label %}ColorSwatchList HorizontalList HorizontalList--spacingTight{% else %}Linklist{% endif %}"> {%- if section.settings.show_filter_color_swatch and color_label contains downcase_filter_label -%} {%- assign color_swatch_config = settings.color_swatch_config | newline_to_br | split: '<br />' -%} {%- for filter_value in filter.values -%} <li class="HorizontalList__Item"> {%- capture filter_value_id -%}@@@@-{{ filter_value.param_name | append: filter_value.value | handle }}{%- endcapture -%} <input id="{{ filter_value_id | escape }}" class="ColorSwatch__Radio" type="checkbox" name="{{ filter_value.param_name }}" value="{{ filter_value.value }}" {% if filter_value.active %}checked="checked"{% endif %}> <label for="{{ filter_value_id | escape }}" class="ColorSwatch" data-tooltip="{{ filter_value.label | escape }}" style="{% render 'color-swatch-style', color_swatch_config: color_swatch_config, value: filter_value.label %}"> <span class="u-visually-hidden">{{ filter_value.label }}</span> </label> </li> {%- endfor -%} {%- else -%} {%- if filter.type == 'list' -%} {%- for filter_value in filter.values -%} {%- capture filter_value_id -%}@@@@-{{ filter_value.param_name | append: filter_value.value | handle }}{%- endcapture -%} <li class="Linklist__Item"> <input class="Linklist__Checkbox u-visually-hidden" id="{{ filter_value_id | escape }}" type="checkbox" name="{{ filter_value.param_name }}" value="{{ filter_value.value }}" {% if filter_value.active %}checked{% endif %}> <label for="{{ 
filter_value_id | escape }}" class="Text--subdued Link Link--primary"> {{- filter_value.label }} ({{ filter_value.count -}}) </label> </li> {%- endfor -%} {%- elsif filter.type == 'price_range' -%} <price-range class="price-range"> {%- assign min_value = filter.min_value.value | default: 0.0 | divided_by: 100.0 -%} {%- assign max_value = filter.max_value.value | default: filter.range_max | divided_by: 100.0 -%} {%- assign range_max = filter.range_max | divided_by: 100.0 | ceil -%} {% assign lower_bound_progress = min_value | divided_by: range_max | times: 100.0 %} {% assign higher_bound_progress = max_value | divided_by: range_max | times: 100.0 %} <div class="price-range__range-group range-group" style="--range-min: {{ lower_bound_progress }}%; --range-max: {{ higher_bound_progress }}%"> <input type="range" aria-label="{{ 'collection.filter.price_filter_from' | t }}" class="range" min="0" max="{{ range_max | ceil }}" value="{{ min_value | ceil }}"> <input type="range" aria-label="{{ 'collection.filter.price_filter_to' | t }}" class="range" min="0" max="{{ range_max | ceil }}" value="{{ max_value | ceil }}"> </div> <div class="price-range__input-group"> <div class="price-range__input input-prefix text--xsmall"> <span class="input-prefix__value text--subdued">{{ cart.currency.symbol }}</span> <input aria-label="{{ 'collection.filter.price_filter_from' | t }}" class="input-prefix__field" type="number" inputmode="numeric" {% if filter.min_value.value %}value="{{ min_value | ceil }}"{% endif %} name="{{ filter.min_value.param_name }}" id="{{ filter.min_value.param_name | handle }}" min="0" max="{{ max_value | ceil }}" placeholder="0"> </div> <span class="price-range__delimiter text--small">-</span> <div class="price-range__input input-prefix text--xsmall"> <span class="input-prefix__value text--subdued">{{ cart.currency.symbol }}</span> <input aria-label="{{ 'collection.filter.price_filter_to' | t }}" class="input-prefix__field" type="number" inputmode="numeric" {% if 
filter.max_value.value %}value="{{ max_value | ceil }}"{% endif %} name="{{ filter.max_value.param_name }}" id="{{ filter.max_value.param_name | handle }}" min="{{ min_value | ceil }}" max="{{ range_max | ceil }}" placeholder="{{ range_max | ceil }}"> </div> </div> </price-range> {%- endif -%} {%- endif -%} </ul> </div> </div> </div> {%- endfor -%} {%- endif -%} <input type="hidden" name="sort_by" value="{{ sort_by }}"> {%- if collection.current_type != blank or collection.current_vendor != blank -%} <input type="hidden" name="q" value="{{ collection.current_vendor | default: collection.current_type | escape }}"> {%- endif -%} {%- endcapture -%} <div id="collection-filter-drawer" class="CollectionFilters Drawer Drawer--secondary Drawer--fromRight" aria-hidden="true"> <header class="Drawer__Header Drawer__Header--bordered Drawer__Header--center Drawer__Container"> <span class="Drawer__Title Heading u-h4">{{ 'collection.filter.all' | t }}</span> <button class="Drawer__Close Icon-Wrapper--clickable" data-action="close-drawer" data-drawer-id="collection-filter-drawer" aria-label="{{ 'header.navigation.close_sidebar' | t }}"> {%- render 'icon' with 'close' -%} </button> </header> <div class="Drawer__Content"> <div class="Drawer__Main" data-scrollable> <form id="collection-filters-drawer-form" class="collection-filters-form"> {{ filters_content | replace: '@@@@', 'drawer' }} </form> </div> <div class="Drawer__Footer Drawer__Footer--padded" data-drawer-animated-bottom> <div class="ButtonGroup"> <button type="button" class="ButtonGroup__Item ButtonGroup__Item--expand Button Button--primary" data-action="close-drawer" data-drawer-id="collection-filter-drawer">{{ 'collection.filter.apply' | t }}</button> </div> </div> </div> </div> {%- endif -%} {%- if section.settings.show_sort_by -%} <div id="collection-sort-popover" class="Popover" aria-hidden="true"> <header class="Popover__Header"> <button class="Popover__Close Icon-Wrapper--clickable" data-action="close-popover" 
aria-label="{{ 'general.popup.close' | t }}">{% render 'icon' with 'close' %}</button> <span class="Popover__Title Heading u-h4">{{ 'collection.sorting.title' | t }}</span> </header> <div class="Popover__Content"> <div class="Popover__ValueList" data-scrollable> {% assign collection_sort_by = collection.sort_by | default: collection.default_sort_by %} {%- for option in collection.sort_options -%} <button class="Popover__Value {% if option.value == collection_sort_by %}is-selected{% endif %} Heading Link Link--primary u-h6" data-value="{{ option.value }}" data-action="select-value"> {{ option.name }} </button> {%- endfor -%} </div> </div> </div> {%- endif -%} {%- comment -%} -------------------------------------------------------------------------------------------------------------------- COLLECTION PRODUCTS -------------------------------------------------------------------------------------------------------------------- {%- endcomment -%} <div class="CollectionMain"> {%- if section.settings.toolbar_position == 'top' -%} {{- collection_toolbar -}} {%- endif -%} <div class="CollectionInner"> {%- if show_filters and section.settings.filter_position == 'sidebar' -%} <div class="CollectionInner__Sidebar {% if collection_toolbar != blank and section.settings.toolbar_position == 'top' %}CollectionInner__Sidebar--withTopToolbar{% endif %} hidden-pocket"> <div class="CollectionFilters"> <form id="collection-filters-sidebar-form" class="collection-filters-form"> {{ filters_content | replace: '@@@@', 'sidebar' }} {%- if active_filters_count > 0 -%} <button type="button" class="CollectionFilters__ClearButton Button Button--secondary" data-action="clear-filters" data-url="{{ collection.url }}?sort_by={{ sort_by }}">{{ 'collection.filter.reset' | t }}</button> {%- endif -%} </form> </div> </div> {%- endif -%} <div class="CollectionInner__Products"> {{ collection_inner }} </div> </div> {%- if section.settings.toolbar_position == 'bottom' -%} {{- collection_toolbar -}} {%- 
endif -%} </div> {%- else -%} <div class="EmptyState"> <div class="Container"> <h3 class="EmptyState__Title Heading u-h5">{{ 'collection.general.empty' | t: collection_title: collection.title }}</h3> <a href="{{ routes.root_url }}" class="EmptyState__Action Button Button--primary">{{ 'collection.general.empty_button' | t }}</a> </div> </div> {%- endif -%} </section> {% schema %} { "name": "Collection page", "class": "shopify-section--bordered", "settings": [ { "type": "checkbox", "id": "show_collection_info", "label": "Show collection info", "default": true }, { "type": "checkbox", "id": "show_collection_image", "label": "Show collection image", "default": false }, { "type": "checkbox", "id": "apply_overlay", "label": "Apply overlay on image", "info": "This can improve text visibility.", "default": true }, { "type": "checkbox", "id": "show_color_swatch", "label": "Show color swatch", "info": "Some colors appear white? [Learn more](http://support.maestrooo.com/article/80-product-uploading-custom-color-for-color-swatch).", "default": false }, { "type": "checkbox", "id": "show_vendor", "label": "Show vendor", "default": false }, { "type": "select", "id": "collection_image_size", "label": "Collection image size", "options": [ { "value": "small", "label": "Small" }, { "value": "normal", "label": "Normal" }, { "value": "large", "label": "Large" } ], "default": "normal" }, { "type": "header", "content": "Toolbar" }, { "type": "checkbox", "id": "show_sort_by", "label": "Show sort by", "default": true }, { "type": "checkbox", "id": "show_layout_switch", "label": "Show layout switch" }, { "type": "select", "id": "toolbar_position", "label": "Position", "options": [ { "value": "top", "label": "Top" }, { "value": "bottom", "label": "Bottom" } ], "default": "top" }, { "type": "header", "content": "Filters" }, { "type": "checkbox", "id": "show_filters", "label": "Show filters", "info": "[Customize filters](/admin/menus)", "default": true }, { "type": "checkbox", "id": 
"show_filter_color_swatch", "label": "Show filter color swatch", "info": "Transform color filters to swatches.", "default": true }, { "type": "checkbox", "id": "expand_filters", "label": "Expand filters on desktop", "default": true }, { "type": "select", "id": "filter_position", "label": "Desktop position", "options": [ { "value": "sidebar", "label": "Sidebar" }, { "value": "drawer", "label": "Drawer" } ], "default": "sidebar" }, { "type": "link_list", "id": "filter_menu", "label": "Quick links", "info": "This menu won't show dropdown items." }, { "type": "header", "content": "Grid" }, { "type": "range", "id": "grid_items_per_page", "label": "Products per page", "min": 4, "max": 48, "step": 4, "default": 16 }, { "type": "select", "id": "grid_mobile_items_per_row", "label": "Products per row (mobile)", "options": [ { "value": "1", "label": "1" }, { "value": "2", "label": "2" } ], "default": "2" }, { "type": "range", "min": 2, "max": 4, "id": "grid_desktop_items_per_row", "label": "Products per row (desktop)", "default": 4 } ] } {% endschema %}
{ "language": "en", "url": "https://stackoverflow.com/questions/75633862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to highlight active (variable) link in ReactJs? I know how to 'highlight' an active link in React.js, I do it like this: <Link className={splitted[1] === 'Details' ? "bg-red-800 rounded-full hover:text-white" : "hover:bg-blue-800 rounded-full hover:text-white" } key={'xxx'} to={`/Details/${id}`} > This is my link </Link> (ok, maybe a bit overkill, but it works) But I have problems doing the same with a bunch of links that are rendered via a mapping. I try to do the same but it doesn't work at all. What is wrong in the code below? const renderedLinks = links.map((link) => { return ( <Link className={splitted[1] === `${link.path}` ? "bg-red-800 rounded-full hover:text-white" : "hover:bg-blue-800 rounded-full hover:text-white" } key={link.label} to={link.path} > {link.label} </Link> ) }); So, how do I get the variable link.path into this? Links are generated by this: const links = [ { label: 'Details', path: '/Details' }, ... ] And splitted is the first part of the pathname obtained via useLocation. I am using react-router-dom@6. A: Instead of the Link component it would be better to use the NavLink component, as it has logic baked in to handle matching an active link against the current URL path. Use the className callback to access the isActive prop and conditionally apply the appropriate CSS classes. Example: <NavLink key={'xxx'} to={`/Details/${id}`} className={({ isActive }) => [ "rounded-full hover:text-white", isActive ? "bg-red-800" : "hover:bg-blue-800" ].join(" ") } > This is my link </NavLink> const renderedLinks = links.map((link) => ( <NavLink key={link.path} to={link.path} className={({ isActive }) => [ "rounded-full hover:text-white", isActive ? "bg-red-800" : "hover:bg-blue-800" ].join(" ") } > {link.label} </NavLink> )); If you render links to both parent and child/descendant paths, i.e. "/details" and "/details/specificDetails", then you can also specify the end prop on the NavLink so the link is only considered active when the path matches exactly.
If the end prop is used, it will ensure this component isn't matched as "active" when its descendant paths are matched. For example, to render a link that is only active at the website root and not any other URLs, you can use: <NavLink to="/" end> Home </NavLink> Applied to your code: const renderedLinks = links.map((link) => ( <NavLink key={link.path} to={link.path} className={({ isActive }) => [ "rounded-full hover:text-white", isActive ? "bg-red-800" : "hover:bg-blue-800" ].join(" ") } end // <-- > {link.label} </NavLink> )); A: You can try using NavLink instead of Link. NavLink is specifically meant for knowing the state (active or not) of a link. You can find its documentation here: https://reactrouter.com/en/main/components/nav-link
{ "language": "en", "url": "https://stackoverflow.com/questions/75633863", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Installing linux mint damages windows boot I have made a bootable flash drive for Linux Mint 21.1. My Windows 10 is installed on the UEFI side of the BIOS and I want to install Linux on the legacy side. I changed the setting and installed Linux in legacy mode, and it worked. But when I switched the BIOS back to UEFI in order to load Windows, the Linux Mint boot manager appeared again, and now on both the legacy and UEFI settings only Linux loads and I can't boot Windows. Is there any way to solve this problem? I have done these steps before when installing Ubuntu and Linux Mint and everything was fine, but this time I ran into this problem.
{ "language": "en", "url": "https://stackoverflow.com/questions/75633864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: React setState not persisting when switching between tabs in Material UI. Applied div with display none but not rendering When I navigate from one Material UI Tab to another, the state does not persist and is cleared. https://codesandbox.io/s/quiet-meadow-6pw3dk?file=/demo.tsx I've tried: * *Using React Context to pass in the React state, resulting in the same issue This is a known issue and there exist workarounds found at: * *Material UI Tabs: After switch between tabs, changes in tabs discarded *React MUI Tab component cause re-render and remove child states My issue is I've added the tabs div but the page is still not keeping the inactive tabs from unmounting. export default function LabTabs() { const [value, setValue] = React.useState("1"); const handleChange = (event: React.SyntheticEvent, newValue: string) => { setValue(newValue); }; return ( <Box sx={{ width: "100%", typography: "body1" }}> <TabContext value={value}> <Box sx={{ borderBottom: 1, borderColor: "divider" }}> <TabList onChange={handleChange} aria-label="lab API tabs example"> <Tab label="Item One" value="1" /> <Tab label="Item Two" value="2" /> <Tab label="Item Three" value="3" /> </TabList> </Box> <TabPanel value="1"> <div style={{ display: "0" === value ? "block" : "none" }}> <div> [Step 1] Click on "ADD" and note the counter then navigate to TAB 1. </div> <Panel /> </div> </TabPanel> <TabPanel value="2"> <div style={{ display: "1" === value ? "block" : "none" }}> [Step 2] Return to Tab 0 and see that the Counter reverted to zero. </div> </TabPanel> <TabPanel value="3">Item Three</TabPanel> </TabContext> </Box> ); } A: When you use MUI Tabs, only the active tab exists in the DOM. The others aren't hidden; they're destroyed, along with the unsaved state within. In your example, you're storing the counter value with React.useState inside your <Panel /> component, which is inside tab #1. When you click another tab, the Panel is unmounted and fully removed from the DOM, along with the state value within.
To resolve the issue and keep your state, you'll need to lift it up above the tabs so the state doesn't live in an unmounted component. Try adding this to your demo.tsx file on line 11: const [counter, setCounter] = useState(0); Then add those as props to your Panel component on line 33: <Panel counter={counter} setCounter={setCounter} /> Finally, in your panel.tsx file, use the props instead of the local useState values. interface Props { counter: number; setCounter: (num: number) => void; } const Panel = ({ counter, setCounter }: Props) => ( <> <div>Counter: {counter}</div> <Button onClick={() => setCounter(counter + 1)} variant="contained"> ADD </Button> </> ); export default Panel;
{ "language": "en", "url": "https://stackoverflow.com/questions/75633868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I catalogue product names and prices using CSS selectors I have been trying to create a program to learn Kotlin + webscraping with Jsoup. The goal of the program is to enter in a product name, go to the search page for that product: class MainActivity : AppCompatActivity() { private val baseUrl = "https://www.woolworths.com.au/shop/search/products?searchTerm=coffee" override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) setContentView(R.layout.activity_main) val btnClick = findViewById<Button>(R.id.button) btnClick.setOnClickListener{ val scraper = CoffeeScraper() scraper.execute() } } and return the name and price of all items within that first search page. inner class CoffeeScraper : AsyncTask<Void, Void, List<Pair<String, String>>>() { override fun doInBackground(vararg params: Void?): List<Pair<String, String>> { val document: Document = Jsoup.connect(baseUrl).get() val products: MutableList<Pair<String, String>> = mutableListOf() val productElements: List<Element> = document.select("#search-content > div > wow-product-search-container > shared-grid > div > div:nth-child(1)") if (productElements.isNotEmpty()) { for (product in productElements) { val name = product.select(".#search-content > div > wow-product-search-container > shared-grid > div > div:nth-child(1) > shared-product-tile > shared-product-tile-v2 > section > div.product-title-container > shared-product-tile-title > div > a").text() val price = product.select("#search-content > div > wow-product-search-container > shared-grid > div > div:nth-child(1) > shared-product-tile > shared-product-tile-v2 > section > div.product-information-container > div.product-tile-v2--prices.ng-star-inserted > shared-product-tile-price > div > div.primary").text() Log.d("In loop", "Item added") } } else { Log.d("Element Contents", "Empty") } Log.d("After loop", "$products") return products } override fun onPostExecute(result: List<Pair<String, String>>) { 
super.onPostExecute(result) for (product in result) { val name = product.first val price = product.second val textView = findViewById<TextView>(R.id.textView) textView.setText("$name: $price\n") } } } } The problem is that I can't seem to get the program to return any values at all. Any help or guidance with this would be greatly appreciated! I've tried getting the data by using just class selectors but that also seems to return nothing.
{ "language": "en", "url": "https://stackoverflow.com/questions/75633869", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Angular Can't Access Spring Boot Endpoint Here are my Spring Boot controllers. The first one is authenticate; this endpoint is the one I call when logging in. After authenticating, the response is stored in the session. AuthenticationResponse is an object that has a String token property; @PostMapping("/authenticate") public ResponseEntity<AuthenticationResponse> authenticate(@RequestBody AuthenticationRequest request, HttpSession session) { AuthenticationResponse auth = service.authenticate(request); session.setAttribute("jwtToken", auth); return ResponseEntity.ok(auth); } Next is get all courses; this endpoint gets all courses by user id from the database if there is a session, otherwise it is forbidden @GetMapping("/forInstructors") public ResponseEntity<List<Course>>findForInstructor(HttpSession session){ AuthenticationResponse auth = (AuthenticationResponse) session.getAttribute("jwtToken"); if (auth == null) {return new ResponseEntity<>(HttpStatus.UNAUTHORIZED);} String jwt = auth.getToken(); String email = jwtService.extractUserName(jwt); User user = userService.findByEmail(email); String password = user.getPassword(); return new ResponseEntity<>(courseService.getAllCoursesForInstructors(user.getUserId()), HttpStatus.OK); } I've tried this on Postman and it works; get courses only works if a session exists or after a user logs in. But when I call this from Angular it says 101; the authentication endpoint works, but after that, get all courses does not work even if there is a session. Here are my Angular services that call those two endpoints: Authenticate: public loginAccount(login: AuthenticationRequest): Observable<AuthenticationResponse> { const url = 'http://localhost:8080/e-classroom/auth/authenticate'; return this.http.post<AuthenticationResponse>(url, login); } Get Courses: public getAllCourses(): Observable<Course[]> { const url = 'http://localhost:8080/e-classroom/course/forInstructors'; return this.http.get<Course[]>(url);
}
{ "language": "en", "url": "https://stackoverflow.com/questions/75633874", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: OpenAiService duplicated Does anyone know how to fix the OpenAiService duplication? I used gpt-3.5-turbo via the OpenAI API and currently OpenAiService works. OpenAiService service = new OpenAiService("API_KEY"); Thanks, the message is below. Creates a new OpenAiService that wraps OpenAiApi But I don't know how to fix this.
{ "language": "en", "url": "https://stackoverflow.com/questions/75633875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What does auto and zero do in margin: 50px auto 0;? What does auto and zero do in margin: 50px auto 0;? I know it has the effect of centring objects. A: It means that there is a margin of 50 pixels on the top, 0 pixels on the bottom, and the left and right margins are set to "auto". The "auto" value for the left and right margins will automatically adjust to center the element within its container. So, the element will be centered horizontally within its parent container, and have a margin of 50 pixels at the top and no margin at the bottom.
{ "language": "en", "url": "https://stackoverflow.com/questions/75633876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: While solving today's LeetCode daily challenge I wrote this code: (not the actual solution) class Solution { public: long long countSubarrays(vector<int>& nums, int minK, int maxK) { int j = -1; int mintillnow = INT_MAX; int maxtillnow = INT_MIN; long long count = 0; while(j<nums.size()){ cout<<"why not running"<<endl; j++; mintillnow = min(nums[j],mintillnow); maxtillnow = max(nums[j],maxtillnow); if(mintillnow==minK && maxtillnow == maxK){ count++; } if(mintillnow<minK || maxtillnow>maxK){ mintillnow = INT_MAX; maxtillnow = INT_MIN; } } return count; } }; The problem is when I initialize j = -1, the loop doesn't run and just returns count. But the loop works fine when I initialize j = 0. Why does this happen? A: The problem is that nums.size() returns a value that is unsigned. That means j will also be converted to an unsigned integer type. And -1 as unsigned is very large. So the condition is simply never true. A: Since j and nums.size() are of different types, they are subject to implicit conversion so that they become the same type. Since both are integers, the usual arithmetic conversions apply: here you most likely have int and size_t, and the signed int is converted to the unsigned size_t. Hence -1 is converted to size_t and ends up being the maximum possible value for size_t (thus your while guard is always false).
{ "language": "en", "url": "https://stackoverflow.com/questions/75633877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Using Constraint Programming to model preemptive task with energy allocation over periods I have a linear model that works fine but has a huge resolution time that I want to reduce. One approach is to convert it to constraint programming. Suppose I have a specific task that requires 5 hours of work. There are many possible ways to allocate energy: I can work 5 hours and finish the task in one day, or work 2.5 hours a day and finish in 2 days. Suppose we allow this task to be interrupted at most once, such that either we allocate hours up to the final day, or we allocate 1 hour for example and then come back after a 3-day break (no allocated energy there) to allocate the rest of the required energy on consecutive days after the break. Can anyone suggest some logic for modeling this in compact form using constraint programming? I don't mind if the solution is CPLEX or MiniZinc
{ "language": "en", "url": "https://stackoverflow.com/questions/75633878", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: negative numbers in python I'm trying to get this rawstr = input('enter a number: ') try: ival = abs(rawstr) except: ival = 'boob' if ival.isnumeric(): print('nice work') else: print('not a number') to recognize negative numbers, but I can't figure out why it always returns 'not a number' no matter what the input is. The original purpose of this was a learning exercise for try/except, but I wanted it to recognize negative numbers as well, instead of returning 'not a number' for anything less than 0 rawstr = input('enter a number: ') try: ival = int(rawstr) except: ival = -1 if ival > 0: print('nice work') else: print('not a number') ^ this was the original code that I wanted to make recognize negative numbers A: The abs function expects a number, but in the first code block you pass it a string. In the second code block, you convert the string to an int -- this is missing in the first code block. So combine the two: ival = abs(int(rawstr)) A second issue is that isnumeric is a method for strings, not for numbers, so don't use that as you did in the first code block, and use ival >= 0 as the if condition. So: rawstr = input('enter a number: ') try: ival = abs(int(rawstr)) except: ival = -1 if ival >= 0: print('nice work') else: print('not a number') The downside is that with abs you really have a non-negative number and lost the sign. Merge the two parts and do: rawstr = input('enter a number: ') try: ival = int(rawstr) print('nice work') except: print('not a number') # Here you can exit a loop, or a function,... A: You may change the abs function to the int function to convert the input string to an integer rawstr = input('enter a number: ') try: ival = int(rawstr) except: ival = 'boob' if isinstance(ival, int): print('nice work') else: print('not a number') A: Python inputs are strings by default; we need to convert them to the relevant data types before consuming them. 
Here in your code, at the line ival = abs(rawstr), rawstr is always a string, but abs() expects a number. So you get an exception at this line, and ival.isnumeric() always receives ival as 'boob', which is not numeric, so you get the 'not a number' output. Updated code to fix this: rawstr = input('enter a number: ') try: ival = str(abs(int(rawstr))) except: ival = 'boob' if ival.isnumeric(): print('nice work') else: print('not a number')
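For reference, a minimal sketch of the validation the answers converge on: just attempt the int() conversion and treat failure as 'not a number' (the function name here is only for illustration):

```python
def classify(rawstr):
    """Return 'nice work' if rawstr parses as an integer, else 'not a number'.

    int() accepts an optional leading sign, so negative numbers pass,
    unlike str.isnumeric(), which rejects the '-' character.
    """
    try:
        int(rawstr)
        return 'nice work'
    except ValueError:
        return 'not a number'

print(classify('-5'))   # nice work
print(classify('abc'))  # not a number
```

This keeps the sign intact (no abs()) and avoids isnumeric() entirely, since the exception already tells you whether the string was a valid integer.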
{ "language": "en", "url": "https://stackoverflow.com/questions/75633880", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Source paths being preserved in Stacktrace in release build after .NET 6 upgrade We have recently upgraded our C# application from .NET Core 3.0 to .NET 6 and we are seeing an issue where our error logs contain source paths in the errors (the paths contain build machine information). We are using the following commands in CMD to build and publish the application: @SET _config=%1 @IF '%_config%'=='' (SET _config=Release) dotnet build ..\name.sln -c:%_config% -v:minimal @IF NOT %ERRORLEVEL%==0 (GOTO lbl_error) dotnet test ..\source\name.Tests @IF NOT %ERRORLEVEL%==0 (GOTO lbl_error) dotnet publish ..\source\namecore\name.csproj -c:%_config% --self-contained:true -r:win-x64 -f:net6.0 -p:PublishTrimmed=true -o:..\source\name\bin\Release\Publish\ @IF NOT %ERRORLEVEL%==0 (GOTO lbl_error) I have read that these settings are preserved if we are building in debug mode, but that isn't the case here. I have been ripping my hair out over this error but can't figure it out. Any help will be appreciated; if I have missed any info please let me know.
{ "language": "en", "url": "https://stackoverflow.com/questions/75633881", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is there any way to externally detect if a syntax error occurred in VScode? I am working on commission (so I can't tell you much without breaking agreements). I need to create a program to run a Python file whenever the VScode console prints a syntax error. I don't even know where to start, so any help would be greatly appreciated A: Use the subprocess module in Python to run the Python file whenever a syntax error is printed in the VScode console. The script below continuously reads lines from standard input (so the console output has to be piped into it) and checks if each line contains the string "SyntaxError". If a syntax error is found, the run_python_file() function is called, which executes the Python file (replace "file_path" with the actual path) using the subprocess.call() method. import subprocess def run_python_file(): subprocess.call(["python", "file_path"]) while True: try: line = input() if "SyntaxError" in line: run_python_file() except EOFError: break
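A related sketch, under the assumption that you control how the target script is launched: run it yourself with subprocess and inspect its stderr for a SyntaxError, instead of scraping console text (both file paths here are hypothetical arguments):

```python
import subprocess
import sys

def run_if_syntax_error(target: str, handler: str) -> bool:
    """Run `target` with the current interpreter; if its stderr reports a
    SyntaxError, run `handler`. Returns True if the handler was triggered."""
    result = subprocess.run(
        [sys.executable, target],
        capture_output=True,  # collect stdout/stderr instead of printing them
        text=True,
    )
    if "SyntaxError" in result.stderr:
        subprocess.run([sys.executable, handler])
        return True
    return False
```

If you only want to check for syntax errors without executing the target, `python -m py_compile target.py` does the compile step alone and exits non-zero on a SyntaxError.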
{ "language": "en", "url": "https://stackoverflow.com/questions/75633883", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-3" }
Q: Cross Account S3 Bucket Object copy gives (403) when calling the HeadObject operation: Forbidden I have a Lambda in the destination account that copies S3 objects from source_A to destination_B. For the source bucket I have attached the permissions { ## permission for source bucket "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::destination_B:root" }, "Action": "s3:*" ## Also I have tried s3:Get* and s3:List*, "Resource": [ "arn:aws:s3:::source_A", "arn:aws:s3:::source_A/*" ] } ] } For the destination Lambda function, I have attached a policy which is also fairly simple and nothing complex here, and have changed the bucket ownership. { "Statement": [ { "Action": "s3:*", "Effect": "Allow", "Resource": [ "arn:aws:s3:::source_A", "arn:aws:s3:::source_A/*", "arn:aws:s3:::destination_B", "arn:aws:s3:::destination_B/*" ], "Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" } } } ] } I know this question has been asked before but I am unable to locate the mistake. It is likely something very small in some policy or permission. Even giving '*' permission doesn't solve the issue. A small hint would be great. Thanks
{ "language": "en", "url": "https://stackoverflow.com/questions/75633888", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: folder.appendMessages takes more time in java mail api while sending email After sending an email I am storing it into a folder using java mail API. The folder.appendMessages takes 8 to 10.005 seconds to store a single message with attachment of 3.26MB. Below is the code, Folder folder = store.getFolder("Sent"); folder.open(Folder.READ_WRITE); msg.setFlag(Flag.SEEN, true); long previousTime3 = System.currentTimeMillis(); folder.appendMessages(new Message[] { msg }); long currentTime3 = System.currentTimeMillis(); double elapsedTime3 = (currentTime3 - previousTime3) / 1000.0; System.out.println("Time in seconds--folder.appendMessages :" + elapsedTime3); How can I reduce this time?
{ "language": "en", "url": "https://stackoverflow.com/questions/75633889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: File size is getting progressively larger as I continue to download data I am downloading financial data using to_hdf and I have noticed that each file gets larger and larger as it keeps downloading. What is happening? The first file was saved as 223 KB and the most recent where I stopped (67) was saved as 14,609 KB. The following is the code (some sections that are irrelevant have been removed): import pandas as pd import datetime as dt import yfinance as yf from pandas.tseries.holiday import USFederalHolidayCalendar import yahoo_fin.stock_info as si from pathlib import Path import os.path def main(): end = dt.datetime.now() start = end + dt.timedelta(days=-5) dr = pd.date_range(start=start, end=end) cal = USFederalHolidayCalendar() holidays = cal.holidays(start=dr.min(), end=dr.max()) a = dr[~dr.isin(holidays)] # not US holiday b = a[a.weekday != 5] b = b[b.weekday != 6] for year in set(b.year): tmp = b[b.year == year] for week in set(pd.Index(tmp.isocalendar().week)): temp = tmp[pd.Index(tmp.isocalendar().week) == week] start = temp[temp.weekday == temp.weekday.min()][0] # beginning of week end = temp[temp.weekday == temp.weekday.max()][0] # ending of week # get list of all index tickers ticker_strings = si.tickers_sp500() data_dir = 'data' x = 1 tickers_dir = './tickers' Index = '^GSPC' # initialize list for the following f(x) Df_list = list() ticker_data(ticker_strings, start, end, Df_list, data_dir, x) print("Complete") def ticker_data(ticker_strings, start, end, Df_list, data_dir, x): # find values for individual stocks for ticker in ticker_strings: loc_start = start while loc_start <= end: period_end = loc_start + dt.timedelta(days=1) intra_day_data = yf.download(ticker, loc_start, period_end, period="1d", interval="1m") extra_day_data = yf.download(ticker, loc_start, period_end, period="1d", interval="1m", prepost=True) Df_list.append(intra_day_data) Df_list.append(extra_day_data) loc_start = loc_start + dt.timedelta(days=1) df = pd.concat(Df_list) # 
creates file name filename = end.strftime('%F') + " " + ticker + ".h5" # saves file name to folder df.to_hdf(os.path.join(data_dir, filename), mode='w', key='df') #df.to_csv(os.path.join(data_dir, filename)) print(x, ticker) x += 1 if __name__ == "__main__": main() A: This could be because it is constantly downloading new data. A: You are appending new data to Df_list at every iteration of for ticker in ticker_strings and you are saving all of that every time. That means every file also contains all of the previous files' data. You should use a variable local to the for ticker in ticker_strings loop instead of using a list passed in as a parameter.
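A runnable sketch of that fix, with the yf.download() calls replaced by pre-built frames (the tickers AAA/BBB are made up for illustration, since the real code needs network access): the point is simply that the list must be created inside the per-ticker loop, not passed in from outside.

```python
import pandas as pd

def save_ticker_frames(ticker_strings, frames_by_ticker):
    """Sketch of the fix: collect frames in a list that is local to each
    ticker, so one ticker's data never leaks into the next file."""
    sizes = {}
    for ticker in ticker_strings:
        df_list = []  # fresh list per ticker, not a shared parameter
        for frame in frames_by_ticker[ticker]:  # stands in for the yf.download() loop
            df_list.append(frame)
        df = pd.concat(df_list)
        sizes[ticker] = len(df)  # each "file" now holds only this ticker's rows
    return sizes

# With the shared-list bug, "BBB" would have reported 2 + 3 = 5 rows.
frames = {
    "AAA": [pd.DataFrame({"close": [1.0, 2.0]})],
    "BBB": [pd.DataFrame({"close": [3.0]}), pd.DataFrame({"close": [4.0, 5.0]})],
}
print(save_ticker_frames(["AAA", "BBB"], frames))  # {'AAA': 2, 'BBB': 3}
```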
{ "language": "en", "url": "https://stackoverflow.com/questions/75633897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to get the day number of a date from a month in momentjs? I am trying to get the day-of-month number of a chosen date using momentjs. My scenario: I choose a date such as 4-3-2023 and want the output 4, since it is the 4th day of the month. I searched the web but didn't find the answer, as every solution was about moment().day or moment().daysInMonth(). I am expecting the day number of the date I have chosen. A: You can achieve your expected result with moment('your_date').format('D') A: You can get it with the following code: // Get the day of the month moment().date(); // Output assuming March 4th, 2023: 4 You can find more documentation and examples at https://momentjs.com/docs/#/get-set/date/
{ "language": "en", "url": "https://stackoverflow.com/questions/75633900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Unable to get opentelemetry's trace_id through MDC, even though MDC was working through logging.pattern.level I have integrated OpenTelemetry in my application and after following this https://github.com/open-telemetry/opentelemetry-java-instrumentation/blob/main/docs/logger-mdc-instrumentation.md, I am able to inject the trace_id and other info in logs through logging.pattern.level = trace_id=%mdc{trace_id} span_id=%mdc{span_id} trace_flags=%mdc{trace_flags} %5p But when trying to get the trace_id in my code through MDC.get("trace_id") I am getting null. This is my configuration. ` plugins { id 'org.springframework.boot' version '2.7.0' id 'io.spring.dependency-management' version '1.0.11.RELEASE' id 'java' id 'jacoco' id "org.sonarqube" version "3.4.0.2513" id 'de.undercouch.download' version '4.1.1' } version = '0.0.1-SNAPSHOT' sourceCompatibility = '17' configurations { compileOnly { extendsFrom annotationProcessor } } ext { fasterxmlJackson = '2.9.7'; } dependencies { implementation 'org.springframework.boot:spring-boot-starter-actuator' implementation 'org.springframework.boot:spring-boot-starter-data-jpa' implementation 'org.springframework.boot:spring-boot-starter-jdbc' implementation 'org.springframework.boot:spring-boot-starter-security' implementation 'org.springframework.boot:spring-boot-starter-web' implementation('org.springframework.boot:spring-boot-starter-data-rest') compileOnly 'org.projectlombok:lombok' implementation 'org.postgresql:postgresql:42.4.0' implementation 'com.vladmihalcea:hibernate-types-55:2.16.2' annotationProcessor 'org.projectlombok:lombok' implementation('com.querydsl:querydsl-jpa:4.1.3') implementation('com.amazonaws:aws-java-sdk-s3:1.11.704') implementation('com.mashape.unirest:unirest-java:1.4.9') implementation('org.springframework.boot:spring-boot-starter-cache:2.7.1') implementation('org.springframework.boot:spring-boot-starter-data-redis:2.7.1')
implementation("com.fasterxml.jackson.datatype:jackson-datatype-json-org:${fasterxmlJackson}", "com.fasterxml.jackson.datatype:jackson-datatype-joda:${fasterxmlJackson}", "com.fasterxml.jackson.datatype:jackson-datatype-jsr310:${fasterxmlJackson}") implementation('org.apache.commons:commons-lang3:3.4') implementation('org.apache.commons:commons-collections4:4.3') implementation 'org.springdoc:springdoc-openapi-ui:1.6.14' implementation 'org.flywaydb:flyway-core' implementation(platform("io.opentelemetry:opentelemetry-bom:1.18.0")) implementation("io.opentelemetry:opentelemetry-api") testImplementation 'org.springframework.boot:spring-boot-starter-test' testImplementation 'org.mockito:mockito-inline:4.6.1' testImplementation 'org.mockito:mockito-junit-jupiter:4.6.1' } configurations { compile.all*.exclude group: "org.apache.logging.log4j", module: "log4j-slf4j-impl" compile.all*.exclude group: "com.vaadin.external.google", module: "android-json" compile.all*.exclude group: "commons-logging", module: "commons-logging" all { exclude group: 'org.springframework.boot', module: 'spring-boot-starter-logging' } } tasks.named('test') { useJUnitPlatform() } jacocoTestReport { reports { xml.enabled true html.enabled true } } bootJar { dependsOn("downloadAgent") } task downloadAgent(type: Download) { src "https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/download/v1.17.0/opentelemetry-javaagent.jar" dest project.buildDir.toString() + "/otel/opentelemetry-javaagent.jar" overwrite true }` I need to fetch the trace_id in code so that I can use it in non-logging use cases.
{ "language": "en", "url": "https://stackoverflow.com/questions/75633901", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can I implement this feature? I have a scriptable object called Augment Data that contains a single float. Each instance of Augment Data will have a different float. I have a class Snail that contains roughly 20 different floats and a list of Augment Data. Each Augment Data is intended to only be added to 1 specific float in Snail. For example, an instance of Augment Data called HealthMultiplier with a float value of 0.5f is added to the list in Snail. How can HealthMultiplier be applied to only the healthCap property in Snail? - I've tried storing a dictionary in AugmentData, but Unity doesn't support editing dictionaries in the inspector, so that defeats the purpose of using scriptable objects. - I tried to have Augment Data store another property, a different type of scriptable object called Augment Name that holds a single string. But I still couldn't figure out a clean, expandable way of assigning it to the Snail (too much to manually change if I want to introduce a new augment data). A: You can add an abstract method "Apply" to the base class, and implement it in the "HealthMultiplier" class so that it applies the multiplier value to the corresponding property. public abstract class AugmentData : ScriptableObject { public float value; public abstract void Apply(Snail snail); } public class HealthMultiplier : AugmentData { public override void Apply(Snail snail) => snail.healthCap *= value; } In the "Snail" class you can loop through the list and call the "Apply" method (note that healthCap must be accessible to the augments, e.g. public). public class Snail { public List<AugmentData> list; public float healthCap; public void Start() { foreach(var a in list) a.Apply(this); } }
{ "language": "en", "url": "https://stackoverflow.com/questions/75633902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can I recover from "invalid sha1 pointer in resolve-undo" with git fsck? I run git gc in a repository and get a fatal error: Enumerating objects: 2382, done. Counting objects: 100% (2382/2382), done. Delta compression using up to 8 threads Compressing objects: 100% (747/747), done. fatal: unable to read <object-id> fatal: failed to run repack Running git fsck --full --no-dangling provides more detail about the problem with that object: Checking object directories: 100% (256/256), done. Checking objects: 100% (2381/2381), done. error: <object-id>: invalid sha1 pointer in resolve-undo Verifying commits in commit graph: 100% (287/287), done. I believe this is caused by a bug that has been fixed: The resolve-undo information in the index was not protected against GC, which has been corrected with Git 2.38 (Q3 2022). If my repository is already in this state, how can I fix it? A: Ensure you don't have any staged changes and recreate the index. rm .git/index git reset This will recreate the index from HEAD without including the resolve-undo extension.
{ "language": "en", "url": "https://stackoverflow.com/questions/75633903", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to Inject a wallet into the browser from Mobile (react-native)? I have created a mobile wallet and am attempting to inject the wallet provider into a webview instance. I want the wallet to mimic how metamask would connect. this is the code I am using currently. and the image below shows that the browser sees an injected wallet, but the site cannot seem to link to this window.ethereum instance. import React, {Component, useEffect, useContext, useState} from 'react'; import {WebView} from 'react-native-webview'; import {ActivityIndicator} from "react-native-paper"; import "@ethersproject/shims" import { ethers } from 'ethers' export const Web3View = ({url, address, privateKey}) => { const nodeProvider = 'https://bsc-dataseed1.binance.org/' const [injectedJavaScript, setInjectedJavaScript] = useState(null); function handleMessage(event) { const message = JSON.parse(event.nativeEvent.data); if (message.type === 'accounts') { setAccounts(message.payload); } } useEffect(()=>{ if (privateKey == ""){ console.log("NO PK"); return; } console.log("url=", url) const providerMainnet = new ethers.providers.JsonRpcProvider(nodeProvider) let walletProviderMainnet = new ethers.Wallet(privateKey, providerMainnet) const injectedJavaScript = ` window.ethereum = {}; window.ethereum.isMetaMask = true; window.ethereum.isConnected = function() { return true }; window.ethereum.provider = ${JSON.stringify(providerMainnet)}; window.ethereum.wallet = {}; window.ethereum.wallet.provider = ${JSON.stringify(walletProviderMainnet.provider)}; window.ethereum.wallet.address = '${address}'; window.ethereum.selectedAddress = '${address}'; window.ethereum.eth_requestAccounts = async function(tx) { return '${address}'; }; window.ethereum.wallet.signTransaction = async function(tx) { const signedTx = await walletProviderMainnet.signTransaction(tx); return signedTx; }; window.ethereum.request = function(args) { return window.ethereum.send(args[0], args[1]) }; window.ethereum.send = 
function(method, params) { return new Promise(function(resolve, reject) { window.ReactNativeWebView.postMessage(JSON.stringify({ type: 'bsc', payload: { method: method, params: params, } })); window.addEventListener("message", function(event) { if (event.data.type === 'ethereum' && event.data.payload.id === params[0]) { if (event.data.payload.error) { reject(event.data.payload.error); } else { resolve(event.data.payload.result); } } }, { once: true }); }); }; window.ethereum.enable = async function() { let accounts = await window.ethereum.wallet.provider.listAccounts(); accounts = window.ethereum.wallet.address return accounts; }; window.ethereum.send('eth_accounts').then(accounts => { window.ReactNativeWebView.postMessage(JSON.stringify({ type: 'accounts', data : accounts })); }).catch(error => { console.log('Error:', error); }); `; setInjectedJavaScript(injectedJavaScript); },[url, privateKey]); return ( injectedJavaScript == null ? <ActivityIndicator size='large' /> : <WebView //userAgent={"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3"} source={{ uri: url }} style={{ flex: 1, width : '100%' }} startInLoadingState={true} renderLoading={() => ( <ActivityIndicator size="large"/> )} javaScriptEnabledAndroid={true} onMessage={(event) => { console.log("Webview event", event.nativeEvent.data); }} injectedJavaScript={injectedJavaScript} /> ) } Upon typing this up stack over flow has suggested https://www.npmjs.com/package/@walletconnect/ethereum-provider which I am now looking into, but please let me know if anyone has some insight. Thank you. -AC Wallet image
{ "language": "en", "url": "https://stackoverflow.com/questions/75633905", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to write file lengths to another file using a loop? I am attempting to write a file which compares the number of permutations available for three numbers. The current script is tmp = open("tmp_out.txt","w+") out = open("output_trip.txt", "w") N = 4 for k in range(2,N+1): count = 1 for j in range(1,k): for i in range(0,j): tmp.write("({},{},{})\n".format(i,j,k)) count = len(tmp.readlines()) out.write('{}{:10d}\n'.format(k,count)) The desired output is something like 2 1 3 4 4 10 However, when I cat my outfile, I just get 2 0 3 0 4 0 When I try readline() outside of the loop, it counts one file appropriately, but I need it for more than one value. Is it something simple I'm missing? I tried many variations of counting I found online but none of them would work in the loop. I've included just the simplest for this exercise. I could write a bash script to count, which may be easier, but I would prefer to get it done in Python. A: jasonharper was correct. All it took was adding tmp.seek(0) before the count: seek(0) rewinds the file (flushing any buffered writes) so readlines() can see what has been written so far. The correct version is: N = 4 for k in range(2,N+1): count = 1 for j in range(1,k): for i in range(0,j): tmp.write("({},{},{})\n".format(i,j,k)) tmp.seek(0) count = len(tmp.readlines()) out.write('{}{:10d}\n'.format(k,count))
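For illustration, here is a self-contained version of the fixed loop using an anonymous temporary file, so it can be run anywhere without leaving tmp_out.txt behind; the counts match the desired output above:

```python
import tempfile

# Re-reading a "w+" file only works after rewinding: seek(0) flushes the
# write buffer and moves the read position back to the start, so
# readlines() sees everything written so far.
counts = {}
with tempfile.TemporaryFile(mode="w+") as tmp:
    N = 4
    for k in range(2, N + 1):
        for j in range(1, k):
            for i in range(0, j):
                tmp.write("({},{},{})\n".format(i, j, k))
        tmp.seek(0)                       # rewind before counting
        counts[k] = len(tmp.readlines())  # reading leaves the position at EOF, so later writes append

print(counts)  # {2: 1, 3: 4, 4: 10}
```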
{ "language": "en", "url": "https://stackoverflow.com/questions/75633911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Change variable group name dynamically in Azure DevOps I'm trying to change the variable group in my Azure DevOps pipeline using the code below. I don't run into any errors, but at the same time it does not work as I intend. I have tried a few other ways but none seem to work. Do you have any pointers on where I'm going wrong? build.yaml variables: - ${{ if eq('$(env)', 'Production') }}: - group: Production - ${{ if eq('$(env)', 'Staging') }}: - group: Staging - ${{ if eq('$(env)', 'Test') }}: - group: Test - ${{ if eq('$(env)', 'Development') }}: - group: Development A: I didn't try this, but wouldn't it be better to do this instead: variables: - group: ${{ parameters.env }} A: You need to reference it properly: parameters: - name: env type: string default: Production values: - Production - Staging - Test - Development variables: - ${{ if eq(parameters.env, 'Production') }}: - group: Production - ${{ if eq(parameters.env, 'Staging') }}: - group: Staging - ${{ if eq(parameters.env, 'Test') }}: - group: Test - ${{ if eq(parameters.env, 'Development') }}: - group: Development If env is a variable, you need to use variables.env instead. More info - https://learn.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops
{ "language": "en", "url": "https://stackoverflow.com/questions/75633914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to read a ppt template and edit it as needed in ReactJS In ReactJS we have a ppt template which we download on the frontend. How can we find a specific slide and replace/insert text? A: The following links might help you: https://www.npmjs.com/package/react-pptx https://codesandbox.io/examples/package/react-pptx
{ "language": "en", "url": "https://stackoverflow.com/questions/75633920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Group rows partially [Python] [Pandas] Good morning everyone. I have the following data: import pandas as pd info = { 'states': [-1, -1, -1, 1, 1, -1, 0, 1, 1, 1], 'values': [34, 29, 28, 30, 35, 33, 33, 36, 40, 41] } df = pd.DataFrame(data=info) print(df) >>> states values 0 -1 34 1 -1 29 2 -1 28 3 1 30 4 1 35 5 -1 33 6 0 33 7 1 36 8 1 40 9 1 41 I need to group the data using pandas (and/or higher-order functions; I already did the exercise using for loops), with the "states" column as a guide. But the grouping should not cover all the data; I only need to group rows that are neighboring, as follows: Initial DataFrame: states values 0 -1 34 ┐ 1 -1 29 │ Group this part (states = -1) 2 -1 28 ┘ 3 1 30 ┐ Group this part (states = 1) 4 1 35 ┘ 5 -1 33 'Group' this part (states = -1) 6 0 33 'Group' this part (states = 0) 7 1 36 ┐ 8 1 40 │ Group this part (states = 1) 9 1 41 ┘ The result would be a DataFrame with one row per segment (from the "states" column) and, in another column, the sum of the data (from the "values" column). Expected DataFrame: states values 0 -1 91 (values=34+29+28) 1 1 65 (values=30+35) 2 -1 33 3 0 33 4 1 117 (values=36+40+41) You who are more versed in these issues, perhaps you can help me perform this operation. Thank you so much! A: Identify the blocks/groups of rows using diff and cumsum, then group the dataframe by these blocks and aggregate states with first and values with sum b = df['states'].diff().ne(0).cumsum() df.groupby(b).agg({'states': 'first', 'values': 'sum'}) Result states values states 1 -1 91 2 1 65 3 -1 33 4 0 33 5 1 117
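The answer's technique in a runnable form, for anyone who wants to verify it: a new block starts wherever "states" differs from the previous row, and cumsum() turns those change points into consecutive block ids that groupby can aggregate.

```python
import pandas as pd

df = pd.DataFrame({
    "states": [-1, -1, -1, 1, 1, -1, 0, 1, 1, 1],
    "values": [34, 29, 28, 30, 35, 33, 33, 36, 40, 41],
})

# diff() is nonzero (or NaN, on the first row) exactly where a run of
# equal states ends; cumsum() over that boolean mask labels the runs 1..5.
blocks = df["states"].diff().ne(0).cumsum()
out = (df.groupby(blocks)
         .agg(states=("states", "first"), values=("values", "sum"))
         .reset_index(drop=True))
print(out)
# rows: (-1, 91), (1, 65), (-1, 33), (0, 33), (1, 117)
```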
{ "language": "en", "url": "https://stackoverflow.com/questions/75633924", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Issue with safeareaview when using react-native component in react app I've tried to render a react-native component in my CRA app. But the following error occurs: Uncaught Error: No safe area value available. Make sure you are rendering at the top of your app. at useSafeAreaInsets (SafeAreaContext.tsx:121:1) at SafeAreaView (SafeAreaView.web.tsx:28:1) at renderWithHooks (react-dom.development.js:16305:1) at mountIndeterminateComponent (react-dom.development.js:20074:1) at beginWork (react-dom.development.js:21587:1) at HTMLUnknownElement.callCallback (react-dom.development.js:4164:1) at Object.invokeGuardedCallbackDev (react-dom.development.js:4213:1) at invokeGuardedCallback (react-dom.development.js:4277:1) at beginWork$1 (react-dom.development.js:27451:1) at performUnitOfWork (react-dom.development.js:26557:1) I've tried to import react-native-safe-area-context and use the safeareacontect at my app root (app.tsx) return ( <SafeAreaProvider initialMetrics={{ insets: { top: 0, right: 0, bottom: 0, left: 0 }, frame: { x: 0, y: 0, width: 100, height: 100 }, }} > <View style={{ width, height }}> <GiftedChat {...{ messages, onSend, user, inverted }} /> </View> </SafeAreaProvider> ); I am using cra with craco to overwrite the config. following is my craco config file: const path = require('path'); module.exports = { webpack: { configure: { ignoreWarnings: [ function ignoreSourcemapsloaderWarnings(warning) { return ( warning.module && warning.module.resource.includes('node_modules') && warning.details && warning.details.includes('source-map-loader') ); }, ], }, alias: { '@': path.resolve(__dirname, 'src'), }, }, babel: { presets: [ '@babel/preset-react' ], plugins: [ ], loaderOptions: (babelLoaderOptions, { env, paths }) => { console.log('BABEL'); console.log(babelLoaderOptions); return babelLoaderOptions; }, }, rules: [ { test: /\.jsx?$/, exclude: /node_modules(\/|\\)(?!(@feathersjs|debug))/, loader: 'babel-loader', }, ], }; However this doesn't solve the issue. 
How can I solve the safe area value issue? Or is there a proper way to use RN components in my React app? I am trying to use the RN component react-native-gifted-chat
{ "language": "en", "url": "https://stackoverflow.com/questions/75633927", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Maintaining Idempotence with the lineinfile and copy module I am generating a file that contains a UUID using the copy or lineinfile modules. However, when running these playbooks a second time, a fresh UUID is generated and appended to the existing file again. - name: Create UUID set_fact: uuid: "{{ 99999 | random | to_uuid }}" - name: Append UUID to file ansible.builtin.lineinfile: dest: /dir/uuid.txt line: "{{ uuid }}" state: present Is there a way to make this idempotent without stat or possibly adding a block? I am curious. A: Use the copy module if this is the only line in the file - copy: dest: /tmp/uuid.txt content: | {{ 99999 | random | to_uuid }} force: false To make it idempotent, set the parameter force to false. Quoting: If false, the file will only be transferred if the destination does not exist.
{ "language": "en", "url": "https://stackoverflow.com/questions/75633938", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: FFMPEG Output File is Empty Nothing was Encoded (for a Picture)? I have a strange issue affecting one of my programs that does bulk media conversions using ffmpeg from the command line; it also affects me when running ffmpeg directly from the shell: ffmpeg -i INPUT.mkv -ss 0:30 -y -qscale:v 2 -frames:v 1 -f image2 -huffman optimal "OUTPUT.png" fails every run with the error message: Output file is empty, nothing was encoded (check -ss / -t / -frames parameters if used) This only happens with very specific videos. File type is usually .webm. These files have been downloaded properly (usually from yt-dlp), and I have tried re-downloading them just to verify their integrity. One such file from a colleague was: https://www.dropbox.com/s/xkucr2z5ra1p2oh/Triggerheart%20Execlica%20OST%20%28Arrange%29%20-%20Crueltear%20Ending.mkv?dl=0 Is there a subtle issue with the command string? Notes: removing -huffman optimal had no effect moving -ss to before -i had no effect removing -f image2 had no effect Full Log: sarah@MidnightStarSign:~/Music/Playlists/Indexing/Indexing Temp$ ffmpeg -i Triggerheart\ Execlica\ OST\ \(Arrange\)\ -\ Crueltear\ Ending.mkv -ss 0:30 -y -qscale:v 2 -frames:v 1 -f image2 -huffman optimal "TEST.png" ffmpeg version n5.1.2 Copyright (c) 2000-2022 the FFmpeg developers built with gcc 12.2.0 (GCC) configuration: --prefix=/usr --disable-debug --disable-static --disable-stripping --enable-amf --enable-avisynth --enable-cuda-llvm --enable-lto --enable-fontconfig --enable-gmp --enable-gnutls --enable-gpl --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libdav1d --enable-libdrm --enable-libfreetype --enable-libfribidi --enable-libgsm --enable-libiec61883 --enable-libjack --enable-libmfx --enable-libmodplug --enable-libmp3lame --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librav1e --enable-librsvg
--enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libtheora --enable-libv4l2 --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxcb --enable-libxml2 --enable-libxvid --enable-libzimg --enable-nvdec --enable-nvenc --enable-opencl --enable-opengl --enable-shared --enable-version3 --enable-vulkan libavutil 57. 28.100 / 57. 28.100 libavcodec 59. 37.100 / 59. 37.100 libavformat 59. 27.100 / 59. 27.100 libavdevice 59. 7.100 / 59. 7.100 libavfilter 8. 44.100 / 8. 44.100 libswscale 6. 7.100 / 6. 7.100 libswresample 4. 7.100 / 4. 7.100 libpostproc 56. 6.100 / 56. 6.100 [matroska,webm @ 0x55927f484740] Could not find codec parameters for stream 2 (Attachment: none): unknown codec Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options Input #0, matroska,webm, from 'Triggerheart Execlica OST (Arrange) - Crueltear Ending.mkv': Metadata: title : TriggerHeart Exelica PS2 & 360 Arrange ー 16 - Crueltear Ending PURL : https://www.youtube.com/watch?v=zJ0bEa_8xEg COMMENT : https://www.youtube.com/watch?v=zJ0bEa_8xEg ARTIST : VinnyVynce DATE : 20170905 ENCODER : Lavf59.27.100 Duration: 00:00:30.00, start: -0.007000, bitrate: 430 kb/s Stream #0:0(eng): Video: vp9 (Profile 0), yuv420p(tv, bt709), 720x720, SAR 1:1 DAR 1:1, 25 fps, 25 tbr, 1k tbn (default) Metadata: DURATION : 00:00:29.934000000 Stream #0:1(eng): Audio: opus, 48000 Hz, stereo, fltp (default) Metadata: DURATION : 00:00:30.001000000 Stream #0:2: Attachment: none Metadata: filename : cover.webp mimetype : image/webp Codec AVOption huffman (Huffman table strategy) specified for output file #0 (TEST.png) has not been used for any stream. The most likely reason is either wrong type (e.g. a video option with no video streams) or that it is a private option of some encoder which was not actually used for any stream. 
Stream mapping: Stream #0:0 -> #0:0 (vp9 (native) -> png (native)) Press [q] to stop, [?] for help Output #0, image2, to 'TEST.png': Metadata: title : TriggerHeart Exelica PS2 & 360 Arrange ー 16 - Crueltear Ending PURL : https://www.youtube.com/watch?v=zJ0bEa_8xEg COMMENT : https://www.youtube.com/watch?v=zJ0bEa_8xEg ARTIST : VinnyVynce DATE : 20170905 encoder : Lavf59.27.100 Stream #0:0(eng): Video: png, rgb24, 720x720 [SAR 1:1 DAR 1:1], q=2-31, 200 kb/s, 25 fps, 25 tbn (default) Metadata: DURATION : 00:00:29.934000000 encoder : Lavc59.37.100 png frame= 0 fps=0.0 q=0.0 Lsize=N/A time=00:00:00.00 bitrate=N/A speed= 0x video:0kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown Output file is empty, nothing was encoded (check -ss / -t / -frames parameters if used) Manjaro OS System Specs: System: Kernel: 6.1.12-1-MANJARO arch: x86_64 bits: 64 compiler: gcc v: 12.2.1 parameters: BOOT_IMAGE=/@/boot/vmlinuz-6.1-x86_64 root=UUID=f11386cf-342d-47ac-84e6-484b7b2f377d rw rootflags=subvol=@ radeon.modeset=1 nvdia-drm.modeset=1 quiet cryptdevice=UUID=059df4b4-5be4-44d6-a23a-de81135eb5b4:luks-disk root=/dev/mapper/luks-disk apparmor=1 security=apparmor resume=/dev/mapper/luks-swap udev.log_priority=3 Desktop: KDE Plasma v: 5.26.5 tk: Qt v: 5.15.8 wm: kwin_x11 vt: 1 dm: SDDM Distro: Manjaro Linux base: Arch Linux Machine: Type: Desktop Mobo: ASUSTeK model: PRIME X570-PRO v: Rev X.0x serial: <superuser required> UEFI: American Megatrends v: 4408 date: 10/27/2022 Battery: Message: No system battery data found. Is one present? Memory: RAM: total: 62.71 GiB used: 27.76 GiB (44.3%) RAM Report: permissions: Unable to run dmidecode. Root privileges required. 
CPU: Info: model: AMD Ryzen 9 5950X bits: 64 type: MT MCP arch: Zen 3+ gen: 4 level: v3 note: check built: 2022 process: TSMC n6 (7nm) family: 0x19 (25) model-id: 0x21 (33) stepping: 0 microcode: 0xA201016 Topology: cpus: 1x cores: 16 tpc: 2 threads: 32 smt: enabled cache: L1: 1024 KiB desc: d-16x32 KiB; i-16x32 KiB L2: 8 MiB desc: 16x512 KiB L3: 64 MiB desc: 2x32 MiB Speed (MHz): avg: 4099 high: 4111 min/max: 2200/6358 boost: disabled scaling: driver: acpi-cpufreq governor: schedutil cores: 1: 4099 2: 4095 3: 4102 4: 4100 5: 4097 6: 4100 7: 4110 8: 4111 9: 4083 10: 4099 11: 4100 12: 4094 13: 4097 14: 4101 15: 4100 16: 4099 17: 4100 18: 4097 19: 4098 20: 4095 21: 4100 22: 4099 23: 4099 24: 4105 25: 4098 26: 4100 27: 4100 28: 4092 29: 4103 30: 4101 31: 4100 32: 4099 bogomips: 262520 Flags: 3dnowprefetch abm adx aes aperfmperf apic arat avic avx avx2 bmi1 bmi2 bpext cat_l3 cdp_l3 clflush clflushopt clwb clzero cmov cmp_legacy constant_tsc cpb cpuid cqm cqm_llc cqm_mbm_local cqm_mbm_total cqm_occup_llc cr8_legacy cx16 cx8 de decodeassists erms extapic extd_apicid f16c flushbyasid fma fpu fsgsbase fsrm fxsr fxsr_opt ht hw_pstate ibpb ibrs ibs invpcid irperf lahf_lm lbrv lm mba mca mce misalignsse mmx mmxext monitor movbe msr mtrr mwaitx nonstop_tsc nopl npt nrip_save nx ospke osvw overflow_recov pae pat pausefilter pclmulqdq pdpe1gb perfctr_core perfctr_llc perfctr_nb pfthreshold pge pku pni popcnt pse pse36 rapl rdpid rdpru rdrand rdseed rdt_a rdtscp rep_good sep sha_ni skinit smap smca smep ssbd sse sse2 sse4_1 sse4_2 sse4a ssse3 stibp succor svm svm_lock syscall tce topoext tsc tsc_scale umip v_spec_ctrl v_vmsave_vmload vaes vgif vmcb_clean vme vmmcall vpclmulqdq wbnoinvd wdt x2apic xgetbv1 xsave xsavec xsaveerptr xsaveopt xsaves Vulnerabilities: Type: itlb_multihit status: Not affected Type: l1tf status: Not affected Type: mds status: Not affected Type: meltdown status: Not affected Type: mmio_stale_data status: Not affected Type: retbleed status: Not affected 
Type: spec_store_bypass mitigation: Speculative Store Bypass disabled via prctl Type: spectre_v1 mitigation: usercopy/swapgs barriers and __user pointer sanitization Type: spectre_v2 mitigation: Retpolines, IBPB: conditional, IBRS_FW, STIBP: always-on, RSB filling, PBRSB-eIBRS: Not affected Type: srbds status: Not affected Type: tsx_async_abort status: Not affected Graphics: Device-1: NVIDIA GA104 [GeForce RTX 3070] vendor: ASUSTeK driver: nvidia v: 525.89.02 alternate: nouveau,nvidia_drm non-free: 525.xx+ status: current (as of 2023-02) arch: Ampere code: GAxxx process: TSMC n7 (7nm) built: 2020-22 pcie: gen: 4 speed: 16 GT/s lanes: 8 link-max: lanes: 16 bus-ID: 0b:00.0 chip-ID: 10de:2484 class-ID: 0300 Device-2: AMD Cape Verde PRO [Radeon HD 7750/8740 / R7 250E] vendor: VISIONTEK driver: radeon v: kernel alternate: amdgpu arch: GCN-1 code: Southern Islands process: TSMC 28nm built: 2011-20 pcie: gen: 3 speed: 8 GT/s lanes: 8 link-max: lanes: 16 ports: active: DP-3,DP-4 empty: DP-1, DP-2, DP-5, DP-6 bus-ID: 0c:00.0 chip-ID: 1002:683f class-ID: 0300 temp: 54.0 C Device-3: Microdia USB 2.0 Camera type: USB driver: snd-usb-audio,uvcvideo bus-ID: 9-2:3 chip-ID: 0c45:6367 class-ID: 0102 serial: <filter> Display: x11 server: X.Org v: 21.1.7 with: Xwayland v: 22.1.8 compositor: kwin_x11 driver: X: loaded: modesetting,nvidia dri: radeonsi gpu: radeon display-ID: :0 screens: 1 Screen-1: 0 s-res: 5760x2160 s-dpi: 80 s-size: 1829x686mm (72.01x27.01") s-diag: 1953mm (76.91") Monitor-1: DP-1 pos: 1-2 res: 1920x1080 dpi: 93 size: 527x296mm (20.75x11.65") diag: 604mm (23.8") modes: N/A Monitor-2: DP-1-3 pos: 2-1 res: 1920x1080 dpi: 82 size: 598x336mm (23.54x13.23") diag: 686mm (27.01") modes: N/A Monitor-3: DP-1-4 pos: 1-1 res: 1920x1080 dpi: 93 size: 527x296mm (20.75x11.65") diag: 604mm (23.8") modes: N/A Monitor-4: DP-3 pos: primary,2-2 res: 1920x1080 dpi: 82 size: 598x336mm (23.54x13.23") diag: 686mm (27.01") modes: N/A Monitor-5: DP-4 pos: 2-4 res: 1920x1080 dpi: 82 size: 
598x336mm (23.54x13.23") diag: 686mm (27.01") modes: N/A Monitor-6: HDMI-0 pos: 1-3 res: 1920x1080 dpi: 93 size: 527x296mm (20.75x11.65") diag: 604mm (23.8") modes: N/A API: OpenGL v: 4.6.0 NVIDIA 525.89.02 renderer: NVIDIA GeForce RTX 3070/PCIe/SSE2 direct-render: Yes Audio: Device-1: NVIDIA GA104 High Definition Audio vendor: ASUSTeK driver: snd_hda_intel bus-ID: 5-1:2 v: kernel chip-ID: 30be:1019 pcie: class-ID: 0102 gen: 4 speed: 16 GT/s lanes: 8 link-max: lanes: 16 bus-ID: 0b:00.1 chip-ID: 10de:228b class-ID: 0403 Device-2: AMD Oland/Hainan/Cape Verde/Pitcairn HDMI Audio [Radeon HD 7000 Series] vendor: VISIONTEK driver: snd_hda_intel v: kernel pcie: gen: 3 speed: 8 GT/s lanes: 8 link-max: lanes: 16 bus-ID: 0c:00.1 chip-ID: 1002:aab0 class-ID: 0403 Device-3: AMD Starship/Matisse HD Audio vendor: ASUSTeK driver: snd_hda_intel v: kernel pcie: gen: 4 speed: 16 GT/s lanes: 16 bus-ID: 0e:00.4 chip-ID: 1022:1487 class-ID: 0403 Device-4: Schiit Audio Unison Universal Dac type: USB driver: snd-usb-audio Device-5: JMTek LLC. 
Plugable USB Audio Device type: USB driver: hid-generic,snd-usb-audio,usbhid bus-ID: 5-2:3 chip-ID: 0c76:120b class-ID: 0300 serial: <filter> Device-6: ASUSTek ASUS AI Noise-Cancelling Mic Adapter type: USB driver: hid-generic,snd-usb-audio,usbhid bus-ID: 5-4:4 chip-ID: 0b05:194e class-ID: 0300 serial: <filter> Device-7: Microdia USB 2.0 Camera type: USB driver: snd-usb-audio,uvcvideo bus-ID: 9-2:3 chip-ID: 0c45:6367 class-ID: 0102 serial: <filter> Sound API: ALSA v: k6.1.12-1-MANJARO running: yes Sound Interface: sndio v: N/A running: no Sound Server-1: PulseAudio v: 16.1 running: no Sound Server-2: PipeWire v: 0.3.65 running: yes Network: Device-1: Intel I211 Gigabit Network vendor: ASUSTeK driver: igb v: kernel pcie: gen: 1 speed: 2.5 GT/s lanes: 1 port: f000 bus-ID: 07:00.0 chip-ID: 8086:1539 class-ID: 0200 IF: enp7s0 state: up speed: 1000 Mbps duplex: full mac: <filter> IP v4: <filter> type: dynamic noprefixroute scope: global broadcast: <filter> IP v6: <filter> type: noprefixroute scope: link IF-ID-1: docker0 state: down mac: <filter> IP v4: <filter> scope: global broadcast: <filter> WAN IP: <filter> Bluetooth: Device-1: Cambridge Silicon Radio Bluetooth Dongle (HCI mode) type: USB driver: btusb v: 0.8 bus-ID: 5-5.3:7 chip-ID: 0a12:0001 class-ID: e001 Report: rfkill ID: hci0 rfk-id: 0 state: up address: see --recommends Logical: Message: No logical block device data found. Device-1: luks-c847cf9f-c6b5-4624-a25e-4531e318851a maj-min: 254:2 type: LUKS dm: dm-2 size: 3.64 TiB Components: p-1: sda1 maj-min: 8:1 size: 3.64 TiB Device-2: luks-swap maj-min: 254:1 type: LUKS dm: dm-1 size: 12 GiB Components: p-1: nvme0n1p2 maj-min: 259:2 size: 12 GiB Device-3: luks-disk maj-min: 254:0 type: LUKS dm: dm-0 size: 919.01 GiB Components: p-1: nvme0n1p3 maj-min: 259:3 size: 919.01 GiB RAID: Message: No RAID data found. Drives: Local Storage: total: 9.1 TiB used: 2.79 TiB (30.6%) SMART Message: Unable to run smartctl. Root privileges required. 
ID-1: /dev/nvme0n1 maj-min: 259:0 vendor: Western Digital model: WDS100T3X0C-00SJG0 size: 931.51 GiB block-size: physical: 512 B logical: 512 B speed: 31.6 Gb/s lanes: 4 type: SSD serial: <filter> rev: 111110WD temp: 53.9 C scheme: GPT ID-2: /dev/nvme1n1 maj-min: 259:4 vendor: Western Digital model: WDS100T2B0C-00PXH0 size: 931.51 GiB block-size: physical: 512 B logical: 512 B speed: 31.6 Gb/s lanes: 4 type: SSD serial: <filter> rev: 211070WD temp: 46.9 C scheme: GPT ID-3: /dev/sda maj-min: 8:0 vendor: Western Digital model: WD4005FZBX-00K5WB0 size: 3.64 TiB block-size: physical: 4096 B logical: 512 B speed: 6.0 Gb/s type: HDD rpm: 7200 serial: <filter> rev: 1A01 scheme: GPT ID-4: /dev/sdb maj-min: 8:16 vendor: Western Digital model: WD4005FZBX-00K5WB0 size: 3.64 TiB block-size: physical: 4096 B logical: 512 B speed: 6.0 Gb/s type: HDD rpm: 7200 serial: <filter> rev: 1A01 scheme: GPT ID-5: /dev/sdc maj-min: 8:32 type: USB vendor: SanDisk model: Gaming Xbox 360 size: 7.48 GiB block-size: physical: 512 B logical: 512 B type: N/A serial: <filter> rev: 8.02 scheme: MBR SMART Message: Unknown USB bridge. Flash drive/Unsupported enclosure? Message: No optical or floppy data found. 
Partition: ID-1: / raw-size: 919.01 GiB size: 919.01 GiB (100.00%) used: 611.14 GiB (66.5%) fs: btrfs dev: /dev/dm-0 maj-min: 254:0 mapped: luks-disk label: N/A uuid: N/A ID-2: /boot/efi raw-size: 512 MiB size: 511 MiB (99.80%) used: 40.2 MiB (7.9%) fs: vfat dev: /dev/nvme0n1p1 maj-min: 259:1 label: EFI uuid: 8922-E04D ID-3: /home raw-size: 919.01 GiB size: 919.01 GiB (100.00%) used: 611.14 GiB (66.5%) fs: btrfs dev: /dev/dm-0 maj-min: 254:0 mapped: luks-disk label: N/A uuid: N/A ID-4: /run/media/sarah/ConvergentRefuge raw-size: 3.64 TiB size: 3.64 TiB (100.00%) used: 2.19 TiB (60.1%) fs: btrfs dev: /dev/dm-2 maj-min: 254:2 mapped: luks-c847cf9f-c6b5-4624-a25e-4531e318851a label: ConvergentRefuge uuid: 7d295e73-4143-4eb1-9d22-75a06b1d2984 ID-5: /run/media/sarah/MSS_EXtended raw-size: 475.51 GiB size: 475.51 GiB (100.00%) used: 1.48 GiB (0.3%) fs: btrfs dev: /dev/nvme1n1p1 maj-min: 259:5 label: MSS EXtended uuid: f98b3a12-e0e4-48c7-91c2-6e3aa6dcd32c Swap: Kernel: swappiness: 60 (default) cache-pressure: 100 (default) ID-1: swap-1 type: partition size: 12 GiB used: 6.86 GiB (57.2%) priority: -2 dev: /dev/dm-1 maj-min: 254:1 mapped: luks-swap label: SWAP uuid: c8991364-85a7-4e6c-8380-49cd5bd7a873 Unmounted: ID-1: /dev/nvme1n1p2 maj-min: 259:6 size: 456 GiB fs: ntfs label: N/A uuid: 5ECA358FCA356485 ID-2: /dev/sdb1 maj-min: 8:17 size: 3.64 TiB fs: ntfs label: JerichoVariance uuid: 1AB22D5664889CBD ID-3: /dev/sdc1 maj-min: 8:33 size: 3.57 GiB fs: iso9660 ID-4: /dev/sdc2 maj-min: 8:34 size: 4 MiB fs: vfat label: MISO_EFI uuid: 5C67-4BF8 USB: Hub-1: 1-0:1 info: Hi-speed hub with single TT ports: 4 rev: 2.0 speed: 480 Mb/s chip-ID: 1d6b:0002 class-ID: 0900 Hub-2: 1-2:2 info: Hitachi ports: 4 rev: 2.1 speed: 480 Mb/s chip-ID: 045b:0209 class-ID: 0900 Device-1: 1-2.4:3 info: Microsoft Xbox One Controller (Firmware 2015) type: <vendor specific> driver: xpad interfaces: 3 rev: 2.0 speed: 12 Mb/s power: 500mA chip-ID: 045e:02dd class-ID: ff00 serial: <filter> Hub-3: 2-0:1 info: 
Super-speed hub ports: 4 rev: 3.0 speed: 5 Gb/s chip-ID: 1d6b:0003 class-ID: 0900 Hub-4: 2-2:2 info: Hitachi ports: 4 rev: 3.0 speed: 5 Gb/s chip-ID: 045b:0210 class-ID: 0900 Hub-5: 3-0:1 info: Hi-speed hub with single TT ports: 1 rev: 2.0 speed: 480 Mb/s chip-ID: 1d6b:0002 class-ID: 0900 Hub-6: 3-1:2 info: VIA Labs Hub ports: 4 rev: 2.1 speed: 480 Mb/s power: 100mA chip-ID: 2109:3431 class-ID: 0900 Hub-7: 3-1.2:3 info: VIA Labs VL813 Hub ports: 4 rev: 2.1 speed: 480 Mb/s chip-ID: 2109:2813 class-ID: 0900 Hub-8: 4-0:1 info: Super-speed hub ports: 4 rev: 3.0 speed: 5 Gb/s chip-ID: 1d6b:0003 class-ID: 0900 Hub-9: 4-2:2 info: VIA Labs VL813 Hub ports: 4 rev: 3.0 speed: 5 Gb/s chip-ID: 2109:0813 class-ID: 0900 Hub-10: 5-0:1 info: Hi-speed hub with single TT ports: 6 rev: 2.0 speed: 480 Mb/s chip-ID: 1d6b:0002 class-ID: 0900 Device-1: 5-1:2 info: Schiit Audio Unison Universal Dac type: Audio driver: snd-usb-audio interfaces: 2 rev: 2.0 speed: 480 Mb/s power: 500mA chip-ID: 30be:1019 class-ID: 0102 Device-2: 5-2:3 info: JMTek LLC. 
Plugable USB Audio Device type: Audio,HID driver: hid-generic,snd-usb-audio,usbhid interfaces: 4 rev: 1.1 speed: 12 Mb/s power: 100mA chip-ID: 0c76:120b class-ID: 0300 serial: <filter> Device-3: 5-4:4 info: ASUSTek ASUS AI Noise-Cancelling Mic Adapter type: Audio,HID driver: hid-generic,snd-usb-audio,usbhid interfaces: 4 rev: 1.1 speed: 12 Mb/s power: 100mA chip-ID: 0b05:194e class-ID: 0300 serial: <filter> Hub-11: 5-5:5 info: Genesys Logic Hub ports: 4 rev: 2.0 speed: 480 Mb/s power: 100mA chip-ID: 05e3:0608 class-ID: 0900 Device-1: 5-5.3:7 info: Cambridge Silicon Radio Bluetooth Dongle (HCI mode) type: Bluetooth driver: btusb interfaces: 2 rev: 2.0 speed: 12 Mb/s power: 100mA chip-ID: 0a12:0001 class-ID: e001 Hub-12: 5-6:6 info: Genesys Logic Hub ports: 4 rev: 2.0 speed: 480 Mb/s power: 100mA chip-ID: 05e3:0608 class-ID: 0900 Hub-13: 6-0:1 info: Super-speed hub ports: 4 rev: 3.1 speed: 10 Gb/s chip-ID: 1d6b:0003 class-ID: 0900 Hub-14: 7-0:1 info: Hi-speed hub with single TT ports: 6 rev: 2.0 speed: 480 Mb/s chip-ID: 1d6b:0002 class-ID: 0900 Device-1: 7-2:2 info: SanDisk Cruzer Micro Flash Drive type: Mass Storage driver: usb-storage interfaces: 1 rev: 2.0 speed: 480 Mb/s power: 200mA chip-ID: 0781:5151 class-ID: 0806 serial: <filter> Device-2: 7-4:3 info: ASUSTek AURA LED Controller type: HID driver: hid-generic,usbhid interfaces: 2 rev: 2.0 speed: 12 Mb/s power: 16mA chip-ID: 0b05:18f3 class-ID: 0300 serial: <filter> Hub-15: 8-0:1 info: Super-speed hub ports: 4 rev: 3.1 speed: 10 Gb/s chip-ID: 1d6b:0003 class-ID: 0900 Hub-16: 9-0:1 info: Hi-speed hub with single TT ports: 4 rev: 2.0 speed: 480 Mb/s chip-ID: 1d6b:0002 class-ID: 0900 Hub-17: 9-1:2 info: Terminus FE 2.1 7-port Hub ports: 7 rev: 2.0 speed: 480 Mb/s power: 100mA chip-ID: 1a40:0201 class-ID: 0900 Device-1: 9-1.1:4 info: Sunplus Innovation Gaming mouse [Philips SPK9304] type: Mouse driver: hid-generic,usbhid interfaces: 1 rev: 2.0 speed: 1.5 Mb/s power: 98mA chip-ID: 1bcf:08a0 class-ID: 0301 Device-2: 
9-1.5:6 info: Microdia Backlit Gaming Keyboard type: Keyboard,Mouse driver: hid-generic,usbhid interfaces: 2 rev: 2.0 speed: 12 Mb/s power: 400mA chip-ID: 0c45:652f class-ID: 0301 Device-3: 9-1.6:7 info: HUION H420 type: Mouse,HID driver: uclogic,usbhid interfaces: 3 rev: 1.1 speed: 12 Mb/s power: 100mA chip-ID: 256c:006e class-ID: 0300 Hub-18: 9-1.7:8 info: Terminus Hub ports: 4 rev: 2.0 speed: 480 Mb/s power: 100mA chip-ID: 1a40:0101 class-ID: 0900 Device-1: 9-2:3 info: Microdia USB 2.0 Camera type: Video,Audio driver: snd-usb-audio,uvcvideo interfaces: 4 rev: 2.0 speed: 480 Mb/s power: 500mA chip-ID: 0c45:6367 class-ID: 0102 serial: <filter> Device-2: 9-4:11 info: VKB-Sim © Alex Oz 2021 VKBsim Gladiator EVO L type: HID driver: hid-generic,usbhid interfaces: 1 rev: 2.0 speed: 12 Mb/s power: 500mA chip-ID: 231d:0201 class-ID: 0300 Hub-19: 10-0:1 info: Super-speed hub ports: 4 rev: 3.1 speed: 10 Gb/s chip-ID: 1d6b:0003 class-ID: 0900 Sensors: System Temperatures: cpu: 38.0 C mobo: 41.0 C Fan Speeds (RPM): fan-1: 702 fan-2: 747 fan-3: 938 fan-4: 889 fan-5: 3132 fan-6: 0 fan-7: 0 GPU: device: nvidia screen: :0.0 temp: 49 C fan: 0% device: radeon temp: 53.0 C Info: Processes: 842 Uptime: 3h 11m wakeups: 0 Init: systemd v: 252 default: graphical tool: systemctl Compilers: gcc: 12.2.1 alt: 10/11 clang: 15.0.7 Packages: 2158 pm: pacman pkgs: 2110 libs: 495 tools: pamac,yay pm: flatpak pkgs: 31 pm: snap pkgs: 17 Shell: Bash v: 5.1.16 running-in: yakuake inxi: 3.3.25
{ "language": "en", "url": "https://stackoverflow.com/questions/75633939", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: android.permission.RECORD_AUDIO permission dialog does not open and the record button does not change on click in RecordFragment I built a voice recording app. The problem is that even though I declared the permission, when I install my app and tap the record button, the permission dialog never opens to ask the user, and the button image does not change to the recording state either. I have checked all the permission settings on my phone and there is no issue there. private NavController navController; // listImage view on record fragment private ImageButton list_btn; private ImageButton record_btn; private boolean isRecording = false; private static final String recordPermissions = Manifest.permission.RECORD_AUDIO; private static final int PERMISSION_CODE = 210; public RecordFragment(){ // Required empty public constructor } @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState){ // Inflate the layout for this fragment return inflater.inflate(R.layout.fragment_record, container, false); } @Override public void onViewCreated(@NonNull View view, @Nullable Bundle savedInstanceState) { super.onViewCreated(view, savedInstanceState); navController = Navigation.findNavController(view); list_btn = view.findViewById(R.id.list_btn); record_btn = view.findViewById(R.id.record_btn); list_btn.setOnClickListener(this); record_btn.setOnClickListener(this); } @Override public void onClick(View view){ switch (view.getId()){ case R.id.list_btn: navController.navigate(R.id.action_recordFragment_to_audioListFragment); break; case R.id.record_btn: if (isRecording){ // stop Recording record_btn.setImageDrawable(getResources().getDrawable(R.drawable.mic_red, null)); isRecording = false; } else{ // start Recording if (checkPermissions()){ record_btn.setImageDrawable(getResources().getDrawable(R.drawable.mic_re, null)); isRecording = true; } } break; } } private boolean checkPermissions(){ // if we have permissions to record
Log.d("RecordFragment", "checkPermissions() called"); if (ActivityCompat.checkSelfPermission(getContext(), recordPermissions) == PackageManager.PERMISSION_GRANTED){ return true; } else{ ActivityCompat.requestPermissions(getActivity(), new String[]{recordPermissions}, PERMISSION_CODE); return false; } } @Override public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults){ super.onRequestPermissionsResult(requestCode, permissions, grantResults); switch (requestCode){ case PERMISSION_CODE: if(grantResults.length > 0 && grantResults[0] == PackageManager.PERMISSION_GRANTED) { // permission granted, start recording Log.d("RecordFragment", "mic_re drawable ID: " + R.drawable.mic_red); record_btn.setImageDrawable(getResources().getDrawable(R.drawable.mic_red, null)); isRecording = true; } else{ // permission denied, show message Toast.makeText(getContext(), "Permission Denied!", Toast.LENGTH_SHORT).show(); } break; } } } AndroidManifest.xml <uses-permission android:name="android.Manifest.permissions.RECORD_AUDIO"/> <application android:allowBackup="true" android:dataExtractionRules="@xml/data_extraction_rules" android:fullBackupContent="@xml/backup_rules" android:icon="@mipmap/ic_launcher" android:label="@string/app_name" android:roundIcon="@mipmap/ic_launcher_round" android:supportsRtl="true" android:theme="@style/Theme.VoiceRecording" tools:targetApi="31"> <activity android:name=".MainActivity" android:exported="true"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> <meta-data android:name="android.app.lib_name" android:value="" /> </activity> </application>
{ "language": "en", "url": "https://stackoverflow.com/questions/75633943", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: AWS Full stack app deployment tutorial failed build Following the tutorial on this link: https://aws.amazon.com/getting-started/hands-on/build-react-app-amplify-graphql/module-one/?e=gs2020&p=build-a-react-app-intro Basically, you create a React app with npx create-react-app, create a GitHub repo, connect and authorize GitHub with the AWS Amplify console, and then deploy the app. In the "Deploy your app to AWS Amplify" section, I keep getting a failed build with this log: ## Checking for associated backend environment... ## No backend environment association found, continuing... npm ERR! Missing: typescript@4.9.5 from lock file npm ERR! Missing: ajv@6.12.6 from lock file npm 2023-03-04T05:29:24.352Z [WARNING]: ERR! Missing: ajv-keywords@3.5.2 from lock file npm ERR! Missing: json-schema-traverse@0.4.1 from lock file npm ERR! Missing: ajv@6.12.6 from lock file npm ERR! Missing: ajv-keywords@3.5.2 from lock file npm ERR! Missing: json-schema-traverse@0.4.1 from lock file npm ERR! Missing: json-schema-traverse@0.4.1 from lock file npm ERR! Missing: ajv@6.12.6 from lock file npm ERR! Missing: ajv-keywords@3.5.2 from lock file npm ERR! Missing: json-schema-traverse@0.4.1 from lock file npm ERR! Missing: json-schema-traverse@0.4.1 from lock file npm ERR! npm ERR! Clean install a project npm 2023-03-04T05:29:24.352Z [WARNING]: ERR! npm ERR! Usage: npm ERR! npm ci npm ERR! npm ERR! Options: npm ERR! [-S|--save|--no-save|--save-prod|--save-dev|--save-optional|--save-peer|--save-bundle] npm ERR! [-E|--save-exact] [-g|--global] [--global-style] [--legacy-bundling] npm ERR! [--omit <dev|optional|peer> [--omit <dev|optional|peer> ...]] npm ERR! [--strict-peer-deps] [--no-package-lock] [--foreground-scripts] npm ERR! [--ignore-scripts] [--no-audit] [--no-bin-links] [--no-fund] [--dry-run] npm ERR! [-w|--workspace <workspace-name> [-w|--workspace <workspace-name> ...]] npm ERR!
[-ws|--workspaces] [--include-workspace-root] [--install-links] npm 2023-03-04T05:29:24.353Z [WARNING]: ERR! npm ERR! aliases: clean-install, ic, install-clean, isntall-clean npm ERR! npm ERR! Run "npm help ci" for more info 2023-03-04T05:29:24.354Z [WARNING]: npm ERR! A complete log of this run can be found in: npm ERR! /root/.npm/_logs/2023-03-04T05_29_21_673Z-debug-0.log 2023-03-04T05:29:24.359Z [ERROR]: !!! Build failed 2023-03-04T05:29:24.359Z [ERROR]: !!! Non-Zero Exit Code detected 2023-03-04T05:29:24.359Z [INFO]: # Starting environment caching... 2023-03-04T05:29:24.360Z [INFO]: # Environment caching completed Terminating logging... This is the GitHub repo I'm trying to deploy: https://github.com/AsafO7/-amplify-react-graphql After some googling I was told to try deleting my package-lock.json and running npm install locally, but the build still failed. I also tried npm audit fix --force, but the build still failed (note that I keep pushing the updated package.json and package-lock.json files to the repo). I've been following the instructions of the tutorial to a T, even deleting everything and trying again, but to no avail. A: package-lock.json seems to be the issue in the original repo. Use the package-lock.json from https://github.com/kasukur/react-amplify
{ "language": "en", "url": "https://stackoverflow.com/questions/75633945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Can I put animations in a position:fixed element without lagging on page scroll? I'm trying to animate an element to move down and to the right, looping back to the top or left once it goes off the screen, and leave a little fading afterimage in its wake [see attached code]. So far so good. This isn't hard to do and doesn't seem to cause any lag. I'd like to have this going on in the background while there's text in the foreground you can read. I accomplish this right now by having a 100% width and height position: fixed div in the background, and appending the element (and its afterimages) to that. When I scroll up and down the page, though, the animation lags pretty badly - but only in Chrome. Firefox can do this no problem. Intuitively, it seems like this shouldn't cause lag, because the div is fixed in position; it stays in the same place when the page is scrolled, so you shouldn't have to move it or repaint it or anything. But I think that's what Chrome is doing. Is there any way to get Chrome to understand it doesn't need to repaint this element on scroll? More generally, is there any sane way to get this kind of effect without lag and (ideally) without animation libraries like Velocity? 
const $ = document.querySelector.bind(document); const $$ = document.querySelectorAll.bind(document); $("#p").style.top = "0px"; $("#p").style.left = "0px"; let t = 0; let q = []; q.length = 60; let qIdx = 0; requestAnimationFrame(animate); function animate(){ let backdrop = $("#backdrop"); let oldNode = $("#p"); let newNode = oldNode.cloneNode(true); newNode.style.top = parseFloat(newNode.style.top) + 1 + Math.sin(t/20) + "px"; if(parseFloat(newNode.style.top) >= backdrop.getBoundingClientRect().height){ newNode.style.top = -oldNode.getBoundingClientRect().height + "px"; } newNode.style.left = parseFloat(newNode.style.left) + 2 + "px"; if(parseFloat(newNode.style.left) >= backdrop.getBoundingClientRect().width){ newNode.style.left = -oldNode.getBoundingClientRect().width + "px"; } t++; if(q[qIdx] !== undefined) q[qIdx].remove(); q[qIdx] = newNode; qIdx = (qIdx + 1) % q.length; oldNode.removeAttribute("id"); oldNode.classList.add("fade"); backdrop.append(newNode); requestAnimationFrame(animate); } #backdrop{ width: 100%; height: 100%; position: fixed; } #p, .fade{ position: absolute; white-space: nowrap; overflow: hidden; } #p{ will-change: width, height; } .fade{ animation: 1s ease-in 0s 1 normal both running fadeout; } @keyframes fadeout{ 0% {opacity: 0.4;} 100% {opacity: 0;} } body, #p, .fade{ margin: 0; padding: 0; border: 0; overflow-x: hidden; } <!DOCTYPE html> <html> <head> </head> <body> <div id="backdrop"><p id="p">OOHHH IM SCARY</p></div> y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br>y<br> </body> </html>
{ "language": "en", "url": "https://stackoverflow.com/questions/75633946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I expand my pandas dataframe through a loop? So I have a table like this in my pandas dataframe # A B 0 f 1 1 g 2 but I don't know what code to write to transform it to # A B 0 f 1a 1 f 1b 2 g 2a 3 g 2b A: Assuming you want a cross product: import numpy as np import pandas as pd df = pd.DataFrame({'#': [0, 1], 'A': list('fg'), 'B': [1, 2]}) l = ['a', 'b'] out = ( df.loc[df.index.repeat(len(l))] .assign(**{'B': lambda d: d['B'].astype(str)+np.tile(l, len(df)), '#': lambda d: range(len(d))}) ) Or with a cross merge: out = ( df.merge(pd.Series(l, name='tmp'), how='cross') .assign(**{'B': lambda d: d['B'].astype(str)+d.pop('tmp'), '#': lambda d: range(len(d))}) ) Output: # A B 0 0 f 1a 0 1 f 1b 1 2 g 2a 1 3 g 2b
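A third option, sketched under the assumption that pandas >= 0.25 is available (where Series.explode was introduced), builds a list column of suffixed values and then explodes it into one row per element:

```python
import pandas as pd

df = pd.DataFrame({'#': [0, 1], 'A': list('fg'), 'B': [1, 2]})
l = ['a', 'b']

# Turn each B value into a list of suffixed strings, then explode
# that column so each list element becomes its own row.
out = (
    df.assign(B=df['B'].apply(lambda b: [str(b) + s for s in l]))
      .explode('B')
      .reset_index(drop=True)
)
out['#'] = range(len(out))
```

explode keeps the other columns aligned with each emitted row, so A repeats as needed and the renumbered # column matches the desired output.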
{ "language": "en", "url": "https://stackoverflow.com/questions/75633947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: io.Pipe and deadlock when trying to write/read I tried for several hours to understand the underlying logic, but made no progress. The code below deadlocks after the first iteration. If I close the writer before io.Copy then the deadlock disappears, but nothing is printed (since the pipe's write end is closed before the read). func main() { reader, writer := io.Pipe() c := make(chan string) go func() { for i := 0; i < 5; i++ { text := fmt.Sprintf("hello %vth time", i+1) c <- text } close(c) }() for msg := range c { msg = fmt.Sprintf("\nreceived from channel -> %v\n", msg) go fmt.Fprint(writer, msg) io.Copy(os.Stdout, reader) writer.Close() } } and this is the error after running the code received from channel -> hello 1th time fatal error: all goroutines are asleep - deadlock! goroutine 1 [select]: io.(*pipe).read(0xc000130120, {0xc00013e000, 0x8000, 0xc00011e001?}) /usr/lib/go/src/io/pipe.go:57 +0xb1 io.(*PipeReader).Read(0x0?, {0xc00013e000?, 0xc00011e050?, 0x10?}) /usr/lib/go/src/io/pipe.go:136 +0x25 io.copyBuffer({0x4bde98, 0xc00011e050}, {0x4bddb8, 0xc00012e018}, {0x0, 0x0, 0x0}) /usr/lib/go/src/io/io.go:427 +0x1b2 io.Copy(...) /usr/lib/go/src/io/io.go:386 os.genericReadFrom(0x101c00002c500?, {0x4bddb8, 0xc00012e018}) /usr/lib/go/src/os/file.go:161 +0x67 os.(*File).ReadFrom(0xc00012e008, {0x4bddb8, 0xc00012e018}) /usr/lib/go/src/os/file.go:155 +0x1b0 io.copyBuffer({0x4bde38, 0xc00012e008}, {0x4bddb8, 0xc00012e018}, {0x0, 0x0, 0x0}) /usr/lib/go/src/io/io.go:413 +0x14b io.Copy(...) /usr/lib/go/src/io/io.go:386 main.pipetest() /home/stranger/source-code/golang/ipctest/pipes/main.go:39 +0x1ae main.main() /home/stranger/source-code/golang/ipctest/pipes/main.go:10 +0x17 goroutine 18 [chan send]: main.pipetest.func1() /home/stranger/source-code/golang/ipctest/pipes/main.go:29 +0x85 created by main.pipetest /home/stranger/source-code/golang/ipctest/pipes/main.go:26 +0x17a exit status 2 A: io.Copy keeps trying to copy until reader reaches EOF (in this case, when the pipe is closed).
Since you call writer.Close() after io.Copy ends, io.Copy will never see that EOF, and hangs forever. The other problem with your code is that you're trying to close the pipe multiple times (each time the loop code repeats). In general Closeable objects should only be closed once, and are assumed to be un-usable after being Closed. If you need to re-use them, you should create a new instance. Here's a working revision of your code: func main() { c := make(chan string) go func() { for i := 0; i < 5; i++ { text := fmt.Sprintf("hello %vth time", i+1) c <- text } close(c) }() for msg := range c { msg = fmt.Sprintf("\nreceived from channel -> %v\n", msg) // Create a new pipe for this message. reader, writer := io.Pipe() go func() { fmt.Fprint(writer, msg) // Close the pipe after writing the message. writer.Close() }() io.Copy(os.Stdout, reader) } }
{ "language": "en", "url": "https://stackoverflow.com/questions/75633951", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Can anyone tell me how to write the same code in vanilla JS? Can anyone change this jQuery code into vanilla JS? On mouseover of the wrapper (and the card) I show the card, and on mouseleave it goes back to display: none. <div class="wrapper" onmouseover="show(this);" onmouseleave="hide(this);"> <div class="box"></div> <div class="card" onmouseover="show(this);" onmouseleave="hide(this);"> <img src="img-2"> </div> </div> <div class="wrapper" onmouseover="show(this);" onmouseleave="hide(this);"> <div class="box"></div> <div class="card" onmouseover="show(this);" onmouseleave="hide(this);"> <img src="img-1"> </div> </div> function show(e) { $(e).find('.card').css('display','block'); } function hide(e) { $(e).find('.card').css('display','none'); } A: You must make these changes to your code: function show(element) { const cardElement = element.getElementsByClassName("card")[0]; if (cardElement) { cardElement.style.display = 'block'; } } function hide(element) { const cardElement = element.getElementsByClassName("card")[0]; if (cardElement) { cardElement.style.display = 'none'; } } .container { display: flex; flex-direction: row; gap: 10px; } .wrapper { width: 60px; height: 60px; background-color: #f00; display: block; } .card { display: none; } <div class="container"> <div class="wrapper" onmouseover="show(this);" onmouseleave="hide(this);"> <div class="box"></div> <div class="card"> <img src="https://api.dicebear.com/5.x/fun-emoji/svg?seed=img-2"> </div> </div> <div class="wrapper" onmouseover="show(this);" onmouseleave="hide(this);"> <div class="box"></div> <div class="card"> <img src="https://api.dicebear.com/5.x/fun-emoji/svg?seed=img-1"> </div> </div> </div> You can also achieve the same effect using only CSS: .container { display: flex; flex-direction: row; gap: 10px; } .wrapper { width: 60px; height: 60px; background-color: #f00; display: block; } /* This definition works to replace `onmouseleave` */ .card { display: none; } /* This definition works to replace
`onmouseover` */ .wrapper:hover > .card { display: block; } <div class="container"> <div class="wrapper"> <div class="box"></div> <div class="card"> <img src="https://api.dicebear.com/5.x/fun-emoji/svg?seed=img-2"> </div> </div> <div class="wrapper"> <div class="box"></div> <div class="card"> <img src="https://api.dicebear.com/5.x/fun-emoji/svg?seed=img-1"> </div> </div> </div> A: Following should do the trick function hide(el){ el.querySelector(".card").style.display = "none"; } function show(el){ el.querySelector(".card").style.display = "block"; } You can also do this with pure CSS: .card{ width: 100px; height: 100px; display: none; background-color: red; } .container{ background-color: lightgrey; border: 2px solid black; } .container:hover > .card{ display: block; } <div class="container"> <h2> Title </h2> <div class="card"> </div> </div> A: JQuery's find is a children selector (so you are looking for children of the actual element you are trying to select) So depending on the element that you want triggerring the function, // on clicked element function show(el) { el.style.display = "block"; } // on all siblings function show(el) { let elem=el.nextElementSibling; while(elem){ $(elem).css('display', 'block'); elem=elem.nextElementSibling; } let elem=el.previousElementSibling; while(elem){ $(elem).css('display', 'block'); elem=elem.previousElementSibling; } } // on siblings with class .card and itself function show(el) { $(el.parentElement).find('.card').css('display', 'block'); } // of children with class .card function show(el) { $(el.parentElement).find('.card').css('display', 'block'); } // You can also filter with standard operators. eg: el !== $(el.parentElement).find('.card')[0] But at the end of the day, just use CSS selectors: eg: .card:hover { display: none; } .wrapper:hover .card { display: block; } .wrapper:not(:hover) .card { display: none; } // and so on... 
Also, I'm not 100% sure that display: none plays well with mouse events; you should try using opacity: 0 or something like that.
{ "language": "en", "url": "https://stackoverflow.com/questions/75633952", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Tagmanager script to store GCLID and MSCLKID for Hubspot contacts I figured out how to store and capture the GCLID using the code below in Tagmanager for updating contacts in Hubspot. However, I still need code to do the same thing for the MSCLKID (Microsoft Click ID for Microsoft Ads). If you know how to fix this problem, please paste the code for tagmanager in the reply that stores both. I would like to combine them into one script if possible. If you need help with storing just the GCLID, paste the code below into tagmanager using all pages as the trigger. <script> function getParam(p) { var match = RegExp('[?&]' + p + '=([^&]*)').exec(window.location.search); return match && decodeURIComponent(match[1].replace(/\+/g, ' ')); } function getExpiryRecord(value) { var expiryPeriod = 90 * 24 * 60 * 60 * 1000; // 90 day expiry in milliseconds var expiryDate = new Date().getTime() + expiryPeriod; return { value: value, expiryDate: expiryDate }; } function addGclid() { var gclidParam = getParam('gclid'); var gclidFormFields = ['gclid_field', 'foobar']; // all possible gclid form field ids here var gclidRecord = null; var currGclidFormField; var gclsrcParam = getParam('gclsrc'); var isGclsrcValid = !gclsrcParam || gclsrcParam.indexOf('aw') !== -1; gclidFormFields.forEach(function (field) { if (document.getElementById(field)) { currGclidFormField = document.getElementById(field); } }); if (gclidParam && isGclsrcValid) { gclidRecord = getExpiryRecord(gclidParam); localStorage.setItem('gclid', JSON.stringify(gclidRecord)); } var gclid = gclidRecord || JSON.parse(localStorage.getItem('gclid')); var isGclidValid = gclid && new Date().getTime() < gclid.expiryDate; if (currGclidFormField && isGclidValid) { currGclidFormField.value = gclid.value; } } window.addEventListener('load', addGclid); </script> You will also have to create a property in Hubspot called gclid, add it to your forms, and change it to hidden. Thanks in advance for your help! Not a programmer yet. 
I pretty much just copy and paste code, and I'm not easily finding any solutions since Microsoft Ads seems to get less attention than Google Ads.
{ "language": "en", "url": "https://stackoverflow.com/questions/75633956", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Optimization of slow MySQL query I have 47 million records in my vehicles table and I need to optimize the following query, which searches vehicles by year, make, and model; mileage is optional. I have a separate query for when no mileage filter is applied, but with a mileage filter it takes more than 14 seconds. How can I optimize it? CREATE TABLE `vehicles` ( `vin` varchar(30) COLLATE utf8mb4_unicode_ci NOT NULL, `year` varchar(4) COLLATE utf8mb4_unicode_ci DEFAULT NULL, `make` varchar(150) COLLATE utf8mb4_unicode_ci NOT NULL, `model` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL, `listing_price` decimal(10,2) DEFAULT NULL, `listing_mileage` int(10) UNSIGNED DEFAULT NULL, `created_at` timestamp NULL DEFAULT NULL, `updated_at` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP ) ENGINE=MyISAM DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci; -- -- Indexes for dumped tables -- -- -- Indexes for table `vehicles` -- ALTER TABLE `vehicles` ADD PRIMARY KEY (`vin`), ADD KEY `prince_mileage` (`listing_price`,`listing_mileage`), ADD KEY `new_year_make_model_mile` (`listing_mileage`,`year`); ALTER TABLE `vehicles` ADD FULLTEXT KEY `year_make_model` (`year`,`make`,`model`); Attaching the GUI for reference, and here is the query: SELECT vehicles.vin, CAST(listing_price AS UNSIGNED) AS listing_price, CAST(listing_mileage AS UNSIGNED) AS listing_mileage, CONCAT(YEAR,' ',make,' ',model) AS vehicle, CONCAT('$ ', FORMAT(listing_price, 0)) AS price, CONCAT(listing_mileage, ' miles') AS mileage FROM `vehicles` WHERE `listing_mileage` > 100 AND `listing_mileage` <= 600 AND `listing_price` > 0 AND MATCH(YEAR,make,model) AGAINST ('2018 ford expedition' IN BOOLEAN MODE) AND CONCAT(YEAR,' ',make,' ',model) LIKE '%2018 ford expedition%' LIMIT 100
{ "language": "en", "url": "https://stackoverflow.com/questions/75633958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-3" }
Q: How can I add a switch case inside "MultiProvider" in Flutter? I am trying to use code like the below: class MultiPageProvider extends StatefulWidget { const MultiPageProvider({Key? key}) : super(key: key); @override _MultiPageProviderState createState() => _MultiPageProviderState(); } class _MultiPageProviderState extends State<MultiPageProvider> { @override Widget build(BuildContext context) { return ChangeNotifierProvider<UserModal>( create: (context) => UserModal(), child: Scaffold( appBar: AppBar( title: const Text( "Using Provider", ), ), body: Consumer<UserModal>( builder: (context, modal, child) { switch (modal.activeIndex) { case 0: return const BasicDetails(); case 1: return const EducationDetails(); default: return const BasicDetails(); } }, ), ), ); } } But since I register all the providers inside main.dart like below, I am wondering how I can add this provider to main.dart like the others. class MyApp extends StatelessWidget { const MyApp({super.key}); @override Widget build(BuildContext context) { return MultiProvider( providers: [ ChangeNotifierProvider.value( value: Auth(), ), ChangeNotifierProxyProvider<Auth, Books>( create: (ctx) => Books('', []), update: (ctx, auth, previousBooks) => Books( auth.userId, previousBooks == null ? [] : previousBooks.books, ), ), ], }; } A: I would give this another thought: register multiple providers above your base class and keep the switch logic somewhere else in the class. @override Widget build(BuildContext context) { return MultiProvider( providers: [ Provider<BasicDetails>( create: (_) => BasicDetails(), ), Provider<EducationDetails>( create: (_) => EducationDetails(), ),
{ "language": "en", "url": "https://stackoverflow.com/questions/75633960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to sync playback progress through devices? I have a mobile streaming app that needs to save and sync playback progress to the same apps on other platforms. Most audiobook applications have this feature. Do they periodically post progress to the server?
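Yes — periodic reporting is the usual approach: the player posts the current position every few seconds (and on pause/exit), and each device fetches the most recently saved position when it opens, typically resolving conflicts by "most recent timestamp wins". A sketch of a throttled reporter (the names, interval, and send callback are illustrative assumptions, not any particular app's API):

```javascript
// Report playback position at most once per interval; in a real app `send`
// would POST { bookId, positionSec } to the sync endpoint.
function makeProgressReporter(send, intervalMs) {
  let lastSentAt = -Infinity;
  return function report(positionSec, now = Date.now()) {
    if (now - lastSentAt >= intervalMs) {
      lastSentAt = now;
      send(positionSec);
      return true;  // actually sent
    }
    return false;   // throttled, will be sent on a later tick
  };
}
```

In practice you would also flush one final report when playback pauses or the app goes to background, so the server always has a near-current position even between intervals.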
{ "language": "en", "url": "https://stackoverflow.com/questions/75633961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: how to patch values in nested map I have 2 nested maps. I want to compare those 2 maps using the keys and replace the values. first map: { "atBaseType": "abc", "atType": "abc", "id": "ot10", "name": "ot10", "version": "1.0", "validFor": { "endDateTime": "2023-02-25T00:00:00Z", "startDateTime": "2023-02-01T00:00:00Z" } } second map: { "validFor": { "endDateTime": "2023-04-25T00:00:00Z" } } result map should be: { "atBaseType": "abc", "atType": "abc", "id": "ot10", "name": "ot10", "version": "1.0", "validFor": { "endDateTime": "2023-04-25T00:00:00Z", "startDateTime": "2023-02-01T00:00:00Z" } } Basically I want to patch the new endDateTime from the 2nd map into the 1st map without changing any other values in the 1st map. If I use map.putAll() or map.replaceAll(), the issue is that the whole validFor map is replaced, so it puts the new endDateTime but sets startDateTime to null: { "atBaseType": "abc", "atType": "abc", "id": "ot10", "name": "ot10", "version": "1.0", "validFor": { "endDateTime": "2023-04-25T00:00:00Z", "startDateTime": null } } Can anyone help me with this issue?
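One way to get this behavior is a recursive merge: descend into values that are themselves maps instead of replacing them wholesale, so sibling keys like startDateTime survive. A sketch (not from the original post; assumes Map<String, Object> maps as produced by a typical JSON parser):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: recursively copy entries from "overlay" into "base", descending
// into nested maps instead of replacing them, so untouched keys survive.
public class DeepMerge {
    @SuppressWarnings("unchecked")
    public static Map<String, Object> patch(Map<String, Object> base, Map<String, Object> overlay) {
        for (Map.Entry<String, Object> e : overlay.entrySet()) {
            Object baseVal = e.getKey() == null ? null : base.get(e.getKey());
            Object overlayVal = e.getValue();
            if (baseVal instanceof Map && overlayVal instanceof Map) {
                // both sides are maps: recurse so sibling keys keep their values
                patch((Map<String, Object>) baseVal, (Map<String, Object>) overlayVal);
            } else {
                base.put(e.getKey(), overlayVal); // scalar or new key: overwrite
            }
        }
        return base;
    }

    public static void main(String[] args) {
        Map<String, Object> validFor = new HashMap<>();
        validFor.put("startDateTime", "2023-02-01T00:00:00Z");
        validFor.put("endDateTime", "2023-02-25T00:00:00Z");
        Map<String, Object> first = new HashMap<>();
        first.put("id", "ot10");
        first.put("validFor", validFor);

        Map<String, Object> overlayInner = new HashMap<>();
        overlayInner.put("endDateTime", "2023-04-25T00:00:00Z");
        Map<String, Object> second = new HashMap<>();
        second.put("validFor", overlayInner);

        patch(first, second);
        Map<?, ?> merged = (Map<?, ?>) first.get("validFor");
        System.out.println(merged.get("startDateTime")); // untouched
        System.out.println(merged.get("endDateTime"));   // patched
    }
}
```

The key difference from putAll() is the instanceof check: putAll() replaces the validFor value as a single object, whereas this walks into it and only touches the keys the patch actually mentions.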
{ "language": "en", "url": "https://stackoverflow.com/questions/75633966", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: filtering characters out of a username (py) i wanted to create a terminal game using python. A username is needed for the database, but i dont want the user’s username to be random special characters. So i decided to filter characters, but didnt work as i planned. the code i used: # Python3 import string username = input(“Username = “) random_var = 0 filter = [string.digits, string.ascii_letters, “_”] while True: try: if username[random_var] in filter: random_var += 1 else: print(“Error, inappropriate username.”) break except IndexError: print(“Success with 0 errors.”) A: regex would lead to more readable code here, and you might want to ask for the username repeatedly until a valid one is input \w matches characters a-z, A-Z, 0-9 and _ import re while True: try: username = input("Username = ") assert re.fullmatch('\w+', username) break except AssertionError: print("Error, inappropriate username.") A: I haven't checked it, because your code uses fancy quotes (“”) instead of normal double quotes (") and I don't think it will run as a valid Python program. Probably you pasted the code into MS Word or something like that which autoformatted them. Don't do that. In any case, I don't want to spend time fixing all the quotes but this line is wrong: filter = [string.digits, string.ascii_letters, “_”] string.digits, string.ascii_letters and _ are all strings. In this line, you are creating a list of strings: >>> [string.digits, string.ascii_letters, "_"] ['0123456789', 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ', '_'] When you later check if some character like x is in this list, of course it isn't, because the list doesn't contain a string "x". Notably, "_" is a single character, so if the user enters a username like ____ I bet that would be accepted. 
What you need to do is concatenate these all into one string: allowed_chars = string.digits + string.ascii_letters + "_" When you do some_char in allowed_chars, Python will automatically treat the string allowed_chars as a sequence of characters and the logic will work. Now time for some things you didn't ask about, but I'll still lecture you on them :) Don't name the list filter; there is already a built-in function called that, and you are making things confusing by shadowing it: https://docs.python.org/3/library/functions.html?highlight=filter#filter Incidentally, filter() is something you could use here to remove all allowed characters from the username and see if anything is left, without using a for loop at all. I will leave that as an exercise to you. Usually, if there's been an error, you should raise an exception. You can use raise Exception("Unsupported character") or assert. Read the docs about these, they will help you understand error handling. You have a variable called random_var, but it's actually not random at all: it's an index that walks through the username in order. Don't give things names that lie about what they do, it makes code extra confusing and hard to debug. You don't need to use a while loop here. A while loop is useful when it's unpredictable how many times you're gonna go through the loop, because you are planning to evaluate the termination condition dynamically. In this case, you know exactly how many times you will loop: it's len(username). In fact, you don't even need to count them, you can just tell Python to go through each letter: for c in username: assert c in allowed_chars, "Character not supported in username: " + c There, isn't that much cleaner? And as an exercise, I recommend you check out the any() function, which can allow you to do this in a very elegant way without any loops. Lastly, don't use input, it sucks. It's very low level and IMO only useful as a building block for a UI framework.
Use something like https://github.com/tmbo/questionary (you're welcome). Oh and, there's no point actually checking each letter, because some will be repeats. If the first e in nebuchadnezzar is allowed, so will the second e be. You should do set(username) to get the set of unique characters, and validate only those. Performance-wise it's kind of insignificant, but it's good practice for writing logical algorithms.
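To make the set-based exercise above concrete, here is one way it might look (a sketch, not the only valid solution):

```python
import string

# One string of every allowed character, turned into a set for O(1) lookups.
ALLOWED = set(string.ascii_letters + string.digits + "_")

def invalid_chars(username):
    """Return the set of characters in username that are not allowed."""
    # set(username) deduplicates, so repeated letters are checked only once
    return set(username) - ALLOWED

def is_valid(username):
    # non-empty and no disallowed characters
    return bool(username) and not invalid_chars(username)
```

Because invalid_chars returns the offending characters themselves, the error message can tell the user exactly which characters to remove instead of just rejecting the name.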
{ "language": "en", "url": "https://stackoverflow.com/questions/75633967", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: I am pytesting code in flask. What is the best way to add, select and delete from the database in flask-sqlalchemy while showing when an assertion is False? How do I add as in db.session.add(new_user) and db.session.commit(), select as in user = User.query.filter_by(email=new_user.email).first(), and delete as in db.session.delete(new_user) and db.session.commit(), while also getting the error if something goes wrong in an assert? I could just go db.session.add(new_user) db.session.commit() assert user.email == 1 # this will be False db.session.delete(new_user) db.session.commit() But what if something goes wrong and I accidentally add the user twice or make another mistake? db.session.add(new_user) db.session.commit() db.session.add(new_user) db.session.commit() assert user.email == 1 # this will be False db.session.delete(new_user) db.session.commit() This is why I added the try below to catch errors. Here is the new_user function @pytest.fixture def new_user(): ''' Given a User model When a new user is being created Check the User database columns ''' plaintext_password = 'pojkp[kjpj[pj' # converting password to array of bytes bytes = plaintext_password.encode('utf-8') # generating the salt salt = bcrypt.gensalt() # Hashing the password hashed_password = bcrypt.hashpw(bytes, salt) current_user = User(username='fkpr[kfkuh', hashed_password=hashed_password, email=os.environ['TESTING_EMAIL_USERNAME']) return current_user Here is the code I have where I added db.session.add(new_user) db.session.commit() twice.
def test_register_page_get(client, new_user, False_output_check_if_user_already_registered): response = client.get('/register', follow_redirects=True) assert response.status_code == 200 assert b'register' in response.data with app.test_request_context(): number_of_users = User.filter_by(username=new_user.username).count() if number_of_users > 0: db.session.rollback() try: db.session.add(new_user) db.session.commit() user = User.query.filter_by(email=new_user.email).first() assert user.email == None #False_output_check_if_user_already_registered(new_user) except: db.session.rollback() else: db.session.delete(new_user) db.session.commit() Here is the error sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) UNIQUE constraint failed: user.username Here is the entire error https://pastes.io/iijimkme1k Here is the important part of the error statement = 'INSERT INTO user (username, hashed_password, email, registration_confirmation_email, profile_pic_name) VALUES (?, ?, ?, ?, ?)' parameters = ('fkpr[kfkuh', b'$2b$12$JCtTnWOhYo4NdrFAiVkAjuefCwGvRbuHGasTWSohuh6.Vka.Dp1qG', 'somemail@gmail.com', 0, None) context = <sqlalchemy.dialects.sqlite.base.SQLiteExecutionContext object at 0x0000021204681090> def do_execute(self, cursor, statement, parameters, context=None): > cursor.execute(statement, parameters) E sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) UNIQUE constraint failed: user.username E [SQL: INSERT INTO user (username, hashed_password, email, registration_confirmation_email, profile_pic_name) VALUES (?, ?, ?, ?, ?)] E [parameters: ('fkpr[kfkuh', b'$2b$12$JCtTnWOhYo4NdrFAiVkAjuefCwGvRbuHGasTWSohuh6.Vka.Dp1qG', 'somemail@gmail.com', 0, None)] E (Background on this error at: http://sqlalche.me/e/13/gkpj) I even tried changing to from sqlalchemy import exc if exc.IntegrityError: db.session.rollback() instead of number_of_users = User.filter_by(username=new_user.username).count() if number_of_users > 0: db.session.rollback() in the example above and am still
getting the same error. I could delete the database but there must be an easier way.
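One way to guarantee the cleanup always runs, even when the assert fails, is a try/finally (or, equivalently, a pytest fixture that yields the user and deletes it in its teardown). A minimal sketch of the pattern, using a toy in-memory session instead of flask-sqlalchemy so the shape is easy to see:

```python
class FakeSession:
    """Toy stand-in for db.session, only to illustrate the cleanup pattern."""
    def __init__(self):
        self.committed = []
        self._pending = []

    def add(self, obj):
        self._pending.append(obj)

    def delete(self, obj):
        self.committed.remove(obj)

    def commit(self):
        self.committed.extend(self._pending)
        self._pending = []

    def rollback(self):
        self._pending = []


def run_user_test(session, user, expected_email):
    session.add(user)
    session.commit()
    try:
        assert user["email"] == expected_email
    finally:
        # runs whether or not the assert raised, so the row never lingers
        # and the next run cannot hit a duplicate-insert IntegrityError
        session.delete(user)
        session.commit()
```

In real pytest code the same shape is usually written as a fixture: create the user, `yield` it to the test, then delete it after the yield. Unlike a bare except, the failing assertion still propagates, so pytest reports the failure while the database is still cleaned up.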
{ "language": "en", "url": "https://stackoverflow.com/questions/75633968", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Firebase function only deployment issue with node_modules I'm following a reddit clone walkthrough and have encountered a problem when trying to execute this line in the terminal: reddit-clone git:(main) ✗ firebase deploy --only functions The final error message reads as: Error: functions predeploy error: Command terminated with non-zero exit code 2 I'm trying to deploy a single function to my project's firebase account. There seems to be an issue with the node modules? Below is the terminal's output with the subsequent errors. ➜ reddit-clone git:(main) ✗ firebase deploy --only functions === Deploying to 'reddit-clone-1531d'... i deploying functions Running command: npm --prefix "$RESOURCE_DIR" run build > build > tsc ../node_modules/@types/react/index.d.ts:3135:14 - error TS2300: Duplicate identifier 'LibraryManagedAttributes'. 3135 type LibraryManagedAttributes<C, P> = C extends React.MemoExoticComponent<infer T> | React.LazyExoticComponent<infer T> ~~~~~~~~~~~~~~~~~~~~~~~~ ../../../../node_modules/@types/react/index.d.ts:3135:14 3135 type LibraryManagedAttributes<C, P> = C extends React.MemoExoticComponent<infer T> | React.LazyExoticComponent<infer T> ~~~~~~~~~~~~~~~~~~~~~~~~ 'LibraryManagedAttributes' was also declared here. ../../../../node_modules/@types/react/index.d.ts:3135:14 - error TS2300: Duplicate identifier 'LibraryManagedAttributes'. 3135 type LibraryManagedAttributes<C, P> = C extends React.MemoExoticComponent<infer T> | React.LazyExoticComponent<infer T> ~~~~~~~~~~~~~~~~~~~~~~~~ ../node_modules/@types/react/index.d.ts:3135:14 3135 type LibraryManagedAttributes<C, P> = C extends React.MemoExoticComponent<infer T> | React.LazyExoticComponent<infer T> ~~~~~~~~~~~~~~~~~~~~~~~~ 'LibraryManagedAttributes' was also declared here. ../../../../node_modules/@types/react/index.d.ts:3152:13 - error TS2717: Subsequent property declarations must have the same type.
Property 'audio' must be of type 'DetailedHTMLProps<AudioHTMLAttributes<HTMLAudioElement>, HTMLAudioElement>', but here has type 'DetailedHTMLProps<AudioHTMLAttributes<HTMLAudioElement>, HTMLAudioElement>'. 3152 audio: React.DetailedHTMLProps<React.AudioHTMLAttributes<HTMLAudioElement>, HTMLAudioElement>; ~~~~~ ../node_modules/@types/react/index.d.ts:3152:13 3152 audio: React.DetailedHTMLProps<React.AudioHTMLAttributes<HTMLAudioElement>, HTMLAudioElement>; ~~~~~ 'audio' was also declared here. ../../../../node_modules/@types/react/index.d.ts:3200:13 - error TS2717: Subsequent property declarations must have the same type. Property 'input' must be of type 'DetailedHTMLProps<InputHTMLAttributes<HTMLInputElement>, HTMLInputElement>', but here has type 'DetailedHTMLProps<InputHTMLAttributes<HTMLInputElement>, HTMLInputElement>'. 3200 input: React.DetailedHTMLProps<React.InputHTMLAttributes<HTMLInputElement>, HTMLInputElement>; ~~~~~ ../node_modules/@types/react/index.d.ts:3200:13 3200 input: React.DetailedHTMLProps<React.InputHTMLAttributes<HTMLInputElement>, HTMLInputElement>; ~~~~~ 'input' was also declared here. ../../../../node_modules/@types/react/index.d.ts:3207:13 - error TS2717: Subsequent property declarations must have the same type. Property 'link' must be of type 'DetailedHTMLProps<LinkHTMLAttributes<HTMLLinkElement>, HTMLLinkElement>', but here has type 'DetailedHTMLProps<LinkHTMLAttributes<HTMLLinkElement>, HTMLLinkElement>'. 3207 link: React.DetailedHTMLProps<React.LinkHTMLAttributes<HTMLLinkElement>, HTMLLinkElement>; ~~~~ ../node_modules/@types/react/index.d.ts:3207:13 3207 link: React.DetailedHTMLProps<React.LinkHTMLAttributes<HTMLLinkElement>, HTMLLinkElement>; ~~~~ 'link' was also declared here. ../../../../node_modules/@types/react/index.d.ts:3235:13 - error TS2717: Subsequent property declarations must have the same type. 
Property 'script' must be of type 'DetailedHTMLProps<ScriptHTMLAttributes<HTMLScriptElement>, HTMLScriptElement>', but here has type 'DetailedHTMLProps<ScriptHTMLAttributes<HTMLScriptElement>, HTMLScriptElement>'. 3235 script: React.DetailedHTMLProps<React.ScriptHTMLAttributes<HTMLScriptElement>, HTMLScriptElement>; ~~~~~~ ../node_modules/@types/react/index.d.ts:3235:13 3235 script: React.DetailedHTMLProps<React.ScriptHTMLAttributes<HTMLScriptElement>, HTMLScriptElement>; ~~~~~~ 'script' was also declared here. ../../../../node_modules/@types/react/index.d.ts:3261:13 - error TS2717: Subsequent property declarations must have the same type. Property 'video' must be of type 'DetailedHTMLProps<VideoHTMLAttributes<HTMLVideoElement>, HTMLVideoElement>', but here has type 'DetailedHTMLProps<VideoHTMLAttributes<HTMLVideoElement>, HTMLVideoElement>'. 3261 video: React.DetailedHTMLProps<React.VideoHTMLAttributes<HTMLVideoElement>, HTMLVideoElement>; ~~~~~ ../node_modules/@types/react/index.d.ts:3261:13 3261 video: React.DetailedHTMLProps<React.VideoHTMLAttributes<HTMLVideoElement>, HTMLVideoElement>; ~~~~~ 'video' was also declared here. Found 7 errors in 2 files. Errors Files 1 ../node_modules/@types/react/index.d.ts:3135 6 ../../../../node_modules/@types/react/index.d.ts:3135 Error: functions predeploy error: Command terminated with non-zero exit code 2 As far as I can tell, I've made some sort of mistake with setting up the project and packages? I tried a completed reinstall of npm but this didn't work. I can't work out why this simple task is catching errors in the node modules? I've also made sure that all packages are up-to-date. I've triple checked that firebase as been installed, I'm logged into my firebase account within the terminal and able to list available projects etc. 
firebase.json file: { "functions": [ { "source": "functions", "codebase": "default", "ignore": [ "node_modules", ".git", "firebase-debug.log", "firebase-debug.*.log" ], "predeploy": [ "npm --prefix \"$RESOURCE_DIR\" run build" ] } ] } Even a standard firebase deployment produces the same problem. Also, node_modules is excluded in tsconfig/gitignore files. A little point in the right direction would be appreciated! Namaste
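For what it's worth, the duplicate LibraryManagedAttributes errors mean two copies of @types/react are being type-checked at once: one in the project's own node_modules and one four directories up (../../../../node_modules), which suggests the functions folder sits inside a larger npm project. Two tsconfig settings commonly used to work around this — treat them as hedged suggestions to try in functions/tsconfig.json, not the walkthrough's official fix:

```json
{
  "compilerOptions": {
    "skipLibCheck": true,
    "typeRoots": ["./node_modules/@types"]
  }
}
```

skipLibCheck stops tsc from type-checking declaration files entirely, while typeRoots pins global type resolution to the local node_modules so the parent copy of @types/react is not pulled in. The cleaner long-term fix is to ensure only one copy of the React types exists on the lookup path.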
{ "language": "en", "url": "https://stackoverflow.com/questions/75633970", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: "TypeError: instantsearch is not a function" vanilla javascript This is the full error I am receiving: export const search = instantsearch({ ^ TypeError: instantsearch is not a function at file:///c:/Users/OneDrive/Documents/Programs/Javascript/New%20folder/SearchTool.mjs:45:23 at ModuleJob.run (node:internal/modules/esm/module_job:198:25) at async Promise.all (index 0) at async ESMLoader.import (node:internal/modules/esm/loader:385:24) at async loadESM (node:internal/process/esm_loader:88:5) at async handleMainPromise (node:internal/modules/run_main:61:12) This is how I am importing it and how I am using it: import instantsearch from "instantsearch.js"; export const search = instantsearch({ indexName: "Links", searchClient, }); search.addWidgets([ hits({ container: "#hits", }), ]); search.start(); I already checked that instantsearch was defined, and it is, so I am not really sure where the error stems from.
{ "language": "en", "url": "https://stackoverflow.com/questions/75633971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: How can we pass two paths in "when a file is created or modified in a folder" I'm using the "when a file is created or modified in a folder" trigger. I can pass one path in the trigger, so how can I pass another path in the trigger?
{ "language": "en", "url": "https://stackoverflow.com/questions/75633973", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: External login with Google/GitHub on the new Spring authorization server I have an Angular client app and I am implementing a Spring authorization server. I want to provide external login with Google/GitHub on the new Spring authorization server and redirect to the Angular client app with an authorization code. The spring-authorization-server GitHub repository provides a federated identity authorization server sample, but the problem is that after successfully logging in with Google it does not provide a success URL. I want it to redirect to my Angular client app with the authorization code via a success URL. Is there any sample code available with Angular and a federated identity authorization server for Google/GitHub social login?
{ "language": "en", "url": "https://stackoverflow.com/questions/75633974", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: how to solve the problem of "import gradio could not be resolved" I am importing the API from the gradio quickstart, but the terminal is showing a problem of "import gradio could not be resolved". How do I make the compiler accept gradio? A: First, make sure that you have installed the library using the following command: pip install gradio Then the solution is to select the python3 interpreter inside the bin/ folder of the installed Python: click "Enter interpreter path" and choose the python3 inside bin; that should resolve the import error. In my case I was using a venv, so I select that python3 so that pylint in VS Code will know about the packages.
{ "language": "en", "url": "https://stackoverflow.com/questions/75633975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: ThreeFiber Drei PivotControls, get position after onDragEnd I'm trying to get the updated position of a mesh after moving it with the Drei Pivot Controls, but digging into the object it says the position is [0,0,0]. How can I console.log the position after dragging a mesh? const meshRef = useRef(); const handleDragEnd = () => { const model = meshRef.current debugger }; return ( <PivotControls anchor={[0, 0, 0]} onDragEnd ={handleDragEnd}> <primitive ref={meshRef} scale={scale} position={position} rotation={rotation} object={loadedModel.scene} /> </PivotControls> )
{ "language": "en", "url": "https://stackoverflow.com/questions/75633979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to solve a Strapi server error when integrating the Stripe payment method I use React on the front end and Strapi on the back end, and I added the Stripe payment gateway. When a request goes from the front end to the back end, Strapi always gives a server error (screenshots were attached). When the front end makes the request, Strapi should redirect to the Stripe prebuilt payment page. strapi router code 'use strict' const stripe = require('stripe')(process.env.STAPE_SECRET_KEY); const { createCoreController } = require("@strapi/strapi").factories; module.exports = createCoreController("api::order.order", ({ strapi }) => ({ async create(ctx) { const { products } = ctx.request.body; try { const lineItems = await Promise.all( products.map(async (product) => { const item = await strapi .service("api::product.product") .findOne(product.id); return { price_data: { currency: "usd", product_data: { name: item.title, }, unit_amount: item.price * 100 }, quantity: item.attributes.quantity, }; }) ); const session = await stripe.checkout.session.create({ shipping_address_collection: { allowed_countries: ["US"] }, payment_method_types: ["card"], mode: "payment", success_url: `${process.env.CLIENT_URL}?success=true`, cancel_url: `${process.env.CLIENT_URL}?success=false`, line_items: lineItems, }) await strapi.service("api::order.order").create({ data: { products, stripeId: session.id } }) return { strapiSession: session } } catch (error) { ctx.response.status = 500; return error } } })) react front code const publishKey = import.meta.env.VITE_STRAPE_PUBLISH_TOKEN const stripePromise = loadStripe(publishKey) const handlePayment = async () => { try { const stripe = await stripePromise; console.log("call") const res = await makePaymentRequest.get("/api/order", { products: cartItems, }) console.log("call") await stripe.redirectToCheckout({ sessionId: res.data.stripeSession.id, }) } catch (error) { console.log(error) } }
{ "language": "en", "url": "https://stackoverflow.com/questions/75633980", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to display what I searched in the search box after getting the result How do I display what I searched in the search box after getting the result? E.g. if I searched for apple in a search box, after getting the result, how do I display "apple" in the search box? A: Get the value of the search box: let searchTerm = document.getElementById("search-box").value; Display the search term in the search box: document.getElementById("search-box").value = searchTerm; You can place this in the func that displays the search results. Then it updates the search box with the search term after displaying the results. A: You can use the localStorage object in your code to store the current textbox value. It will stay until you remove it. For example, <form id="form1" action="#" method="post" > Selected Employee <input type="text" name="EmployeeName" id="text1"> <input type ="submit" value="check"> </form> and the js part is, $(document).ready(function () { $("#form1").submit(function () { window.localStorage['text1_val'] = $("input[id = 'text1']").val(); }); $(window).load(function () { $("input[id = 'text1']").val(window.localStorage['text1_val']); }); }); You need jQuery in your code to run this code. Hope this will help you.
{ "language": "en", "url": "https://stackoverflow.com/questions/75633984", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: setTimeout Not Working With For Each Loop On Array I had a forEach() loop set up that was running through a number of Docs from Firebase exactly the same as the example code and it was working great... Until I found out Firestore stores docs lexicographically. Which ended up ruining my forEach() loops. So I put the data from Firebase into an array like so... const timeArray = [ vhfSnap.get('time1'), vhfSnap.get('time2'), vhfSnap.get('time3'), vhfSnap.get('time4'), vhfSnap.get('time5'), vhfSnap.get('time6'), ] I am now running the for each loop on the 'timeArray' array but now the forEach loops aren’t working properly for some reason. I've gotten some instances of the forEach loops to work as there are multiple instances of them... But the ones I’m having trouble with most are the ones that have setTimeouts() in them. The setTimeout() functions are no longer waiting to complete and just firing without waiting... They are also firing in an odd order. This is the code I’m running: var liCounter = 1; timeArray.forEach((time) => { if (time != undefined) { let timeData = time; let timeDataMs = (timeData * 1000); let selectedTopic = document.getElementById('topic' + liCounter); function test() { selectedTopic.style.color = 'green' } setTimeout(test, timeDataMs) liCounter++ }; }); Why did this code work perfectly with the Firebase data but now it doesn’t work with array data? What am I missing? I've tried for 2 hours and been through all the similar questions on here to try and figure this out, but have had no luck... Edit: I have just tried to replicate the results in a less complicated way: const fruits = ['', '', ''] fruits.forEach(fruit => { function print() { console.log(fruit)}; setTimeout(print, 1000) }) This also has the exact same issue. There is something going on with the setTimeout being used with data from an array... A: Have you checked to see the values of timeDataMs?
If that value is undefined (I think) or if whatever value you have for it cannot be converted into a number, the function reference will fire immediately. A: Try this var liCounter = 1; timeArray.forEach((time) => { if (time != undefined) { let timeData = time; let timeDataMs = (timeData * 1000); let selectedTopic = document.getElementById('topic' + liCounter); function test() { selectedTopic.style.color = 'green' } setTimeout(function() { test(); }, timeDataMs) liCounter++ }; }); EDIT function print(fruit) { console.log(fruit); } function timeout(fruit, time) { setTimeout(function() { print(fruit) }, time); } const fruits = ['', '', ''] var i = 1; fruits.forEach(fruit => { var time = i * 1000; timeout(fruit, time); i++; });
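To make the first answer's point concrete: setTimeout coerces its delay argument to a number, and anything that comes out as NaN or non-positive behaves like 0, i.e. the callback fires on the next tick. A tiny sketch of that coercion (illustrative, modeled on the HTML timer behavior, not library code):

```javascript
// Mirrors how setTimeout normalizes its delay argument.
function effectiveDelay(ms) {
  const n = Number(ms);            // undefined -> NaN, "1000" -> 1000
  return Number.isFinite(n) && n > 0 ? n : 0;
}
```

So if any entry of timeArray is not a plain number (for example a Firestore Timestamp object rather than seconds), timeData * 1000 is NaN and that timeout fires immediately — logging timeDataMs inside the loop will confirm it.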
{ "language": "en", "url": "https://stackoverflow.com/questions/75633987", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: TSNE plot having Type Error must be real number, not str I kept getting this error message when I tried doing a TSNE plot: TypeError: must be real number, not str. I seriously need help. Below is my code: y = df_new['binary'] X = df_new.drop('binary', axis = 1) def tsne_plot(x, y): # Setting the plotting background sns.set(style ="whitegrid") tsne = TSNE(n_components = 2, random_state = 0) # Reducing the dimensionality of the data X_transformed = tsne.fit_transform(x) plt.figure(figsize =(12, 8)) # Building the scatter plot plt.scatter(X_transformed[np.where(y == 0), 0], X_transformed[np.where(y == 0), 1], marker ='o', color ='y', linewidth ='1', alpha = 0.8, label ='Normal') plt.scatter(X_transformed[np.where(y == 1), 0], X_transformed[np.where(y == 1), 1], marker ='o', color ='k', linewidth ='1', alpha = 0.8, label ='Abnormal') # Specifying the location of the legend plt.legend(loc ='best') # Plotting the reduced data plt.show()
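One thing worth ruling out before blaming TSNE itself: in the scatter calls, linewidth ='1' passes a string where matplotlib expects a real number, which matches the wording of this TypeError exactly. A small hedged helper to coerce such values (assumption: the quoted number is the culprit; also confirm every column left in X is numeric before fit_transform):

```python
def as_number(value):
    """Coerce numeric strings like '1' to float; pass real numbers through."""
    return float(value) if isinstance(value, str) else value

# e.g. plt.scatter(..., linewidth=as_number('1'), alpha=0.8)
```

The simplest fix, of course, is just to write linewidth=1 directly; the helper only illustrates why the quoted value is a different type.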
{ "language": "en", "url": "https://stackoverflow.com/questions/75633989", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is it possible to collect all sites into one app? At university we were asked to create a conceptual design of a digital product as project work. My team wants to create an application for our university where we could collect all the university platforms, because right now we have different websites for Timetable, Rankings, e-Class and Mail, plus lots of useful mini websites created by students. The question: is it possible to do that? And if yes, why did the university not do it before? By the way, our university is a branch of the Korean INHA university, which is why e-Class is fully controlled by them, so please keep this in mind as well. So far I have asked several people who do not work at the university. They said it may be because the servers might crash.
{ "language": "en", "url": "https://stackoverflow.com/questions/75633990", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-8" }
Q: CORB while getting a click event through Jquery I am working on the frontend of a website and I am getting Cross-Origin Read Blocking (CORB) blocked cross-origin response. I have searched it, and what I have understood is that this error comes when your request to the server is suspicious or there's something wrong with the JSON and JSONP. I don't get it. I want to run other events based on the click or change events. This is the code I am using: $(document).on('click','.element_class',function(){ console.log('sssssssss') }) And I'm getting no response. I am debugging in the Chrome console, on Windows 10. I have tried opening Chrome with CORB disabled, but that did not help either. I have tried to get the events through the parent elements; nothing has worked for me so far.
{ "language": "en", "url": "https://stackoverflow.com/questions/75633991", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: @RequestMapping and @GetMapping are not mapping correctly, and I have run out of ideas for how to fix it I am working on an assignment for class and for some reason the @GetMapping and @RequestMapping are not being set up properly. When I run the project and then try to access the page it only displays a Whitelabel Error Page. I am using Spring Boot version 3.0.3 through Eclipse, and we are required to use Java 15 for the build path. I have spent hours trying to get this to load and I have not had any luck. I even reached out to the professor and he was not able to help. Thank you! //Topic21Application.java package com.gcu; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; import org.springframework.context.annotation.ComponentScan; @SpringBootApplication @ComponentScan("com.gcu") public class Topic21Application { public static void main(String[] args) { SpringApplication.run(Topic21Application.class, args); } } //HelloWorldController.java package com.gcu.controller; import org.springframework.stereotype.Controller; import org.springframework.ui.Model; import org.springframework.web.bind.annotation.GetMapping; import org.springframework.web.bind.annotation.RequestMapping; import org.springframework.web.bind.annotation.RequestParam; import org.springframework.web.bind.annotation.ResponseBody; import org.springframework.web.servlet.ModelAndView; @Controller @RequestMapping("/hello") public class HelloWorldController { @GetMapping("/test1") @ResponseBody public String printHello() { return "hello"; } } //topic2-1/pom.xml <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId>
<version>3.0.3</version> <relativePath/> <!-- lookup parent from repository --> </parent> <groupId>com.gcu</groupId> <artifactId>topic2-1</artifactId> <version>0.0.1-SNAPSHOT</version> <name>topic2-1</name> <description>topic2-1</description> <properties> <java.version>15</java.version> </properties> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-devtools</artifactId> <scope>runtime</scope> <optional>true</optional> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-thymeleaf</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> </dependencies> <build> <finalName>cst339activity</finalName> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> </plugins> </build> </project> //hello.html <!DOCTYPE html> <html xmlns:th="http://www.thymeleaf.org"> <body> //This is for the next step of the assignment. <h2 th:text = "${message}">This is my default text</h2><br/> </body> </html> I have tried to debug, reach out to my professor, and even ask my group for guidance but so far everyone has been at a loss. I have attempted to add the Thymeleaf add-on into eclipse and that did nothing to solve this issue. I tried accessing different pages, including localhost:8080/hello.html, localhost:8080/hello/test1, and localhost:8080/hello.html/test1, and all of these pages gave me the same error. Thank you again in advance! A: Based on the code you provided, it looks like you are missing a method in your HelloWorldController that maps to the "hello.html" page. 
Try adding the following method to your controller: @GetMapping("/html") public String printHello(Model model) { model.addAttribute("message", "Hello from Thymeleaf!"); return "hello"; } Also, make sure that you have a file called "hello.html" in your src/main/resources/templates directory that contains the HTML you want to display. Once you have made these changes, you should be able to access the "hello.html" page at http://localhost:8080/hello/html.
{ "language": "en", "url": "https://stackoverflow.com/questions/75633995", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to wait for a process to open I'm trying to create a function that waits for a process to open before doing other stuff with it. It seems like it should work and I'm not sure why it doesn't. If it matters, I'm on Visual Studio 2022 with the ISO C++20 Standard and a multibyte character set (it works if the process is already open). PROCESSENTRY32 pe32; HANDLE openProc; std::string choice("ac_client.exe"); DWORD getProc(std::string name) { HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0); pe32.dwSize = sizeof(pe32); Process32First(snap, &pe32); while (Process32Next(snap, &pe32)) { if (!name.compare(pe32.szExeFile)) { openProc = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pe32.th32ProcessID); CloseHandle(snap); return 0; } } } HANDLE get(DWORD pID) { bool endFunction{ false }; HANDLE procCheck{}; HANDLE snap(CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0)); Process32First(snap, &pe32); while (Process32Next(snap, &pe32)) { procCheck = OpenProcess(PROCESS_ALL_ACCESS, false, pID); std::cout << pe32.szExeFile << '\n'; std::cout << choice << '\n'; if (pe32.szExeFile == "ac_cube.exe") { if (!choice.compare(pe32.szExeFile) == 0) { std::cout << pe32.szExeFile << '\n'; std::cout << choice << '\n'; endFunction = true; } } } return procCheck; } int main() { getProc(choice); DWORD pID = pe32.th32ProcessID; HANDLE pHandle = get(pID); It does actually run the loop a few times before seemingly giving up and going on to read a process with a completely different name. Here is the last bit of the command prompt showing that it just quits for some reason (keep in mind, it prints szExeFile and then the correct process name below): VsDebugConsole.exe ac_client.exe conhost.exe ac_client.exe assaultCubeC.exe //this is the program I'm trying to fix, if it matters ac_client.exe SearchFilterHost.exe ac_client.exe msvsmon.exe ac_client.exe SUCCESSFULLY LOADED || DETAILS BELOW Health -> 236 AR Ammo -> 320 (284) Pistol Ammo -> 300 (264) For context, it's supposed to read and write to the ammo and 
health values in a game, but that doesn't matter; that all works perfectly and I have no issues with it. I just can't seem to figure this part out. Apologies if it's something very obvious that I am not noticing; that's happened a few times to me. I am about to go to sleep, and hopefully somebody will be able to tell me what I'm doing wrong by the time I'm awake. I tried running with the game open, which worked exactly as expected, but I just don't see what I'm doing wrong. I have no idea what I could be changing, but hopefully I will learn from this and be able to utilize it later on. A: Your getProc() function has several problems.
- It is not checking if CreateToolhelp32Snapshot() or Process32First() fail.
- It is skipping the first process reported by Process32First() (if it is successful).
- If Process32Next() returns false (either because CreateToolhelp32Snapshot() or Process32First() had failed, or because the end of the process list is reached without you finding a matching filename), you are leaking the HANDLE returned by CreateToolhelp32Snapshot() (if it was successful), and your function then exhibits undefined behavior by not returning any value at all, even though it is declared as returning a DWORD.
Aside from the missing error handling, it would be better to have getProc() return the opened HANDLE to the caller instead of assigning it to the global openProc variable. You are making similar mistakes in your get() function, as well. But, in addition to those problems, this statement in get() does not do what you think it does: if (pe32.szExeFile == "ac_cube.exe") You are comparing two char[] arrays, which is not allowed, so what really happens is that they both decay into char* pointers, and then you are comparing the pointers, which do not point at the same memory address, so the if statement always evaluates as false. You need to compare the content of the arrays, not their addresses, such as with strcmp() (or equivalent). 
Also, this statement looks suspiciously wrong, too: if (!choice.compare(pe32.szExeFile) == 0) Due to operator precedence, it is processed as if you had written it like this: if ((!choice.compare(pe32.szExeFile)) == 0) If the two strings compare equal, compare() returns 0, which ! turns into true, which does not equal 0, so the if evaluates as false. If the two strings compare unequal, compare() returns non-zero, which ! turns into false, which does equal 0, so the if evaluates as true. So, the result is effectively the same as if you had written this: if (choice != pe32.szExeFile) With all of that said, try something more like this instead: static const std::string choice = "ac_client.exe"; DWORD getProc(const std::string &name, HANDLE *hProcess = NULL) { if (hProcess) *hProcess = NULL; DWORD pID = 0; HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0); if (snap) { PROCESSENTRY32 pe32; pe32.dwSize = sizeof(pe32); if (Process32First(snap, &pe32)) { do { if (name == pe32.szExeFile) { if (hProcess) *hProcess = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pe32.th32ProcessID); pID = pe32.th32ProcessID; break; } } while (Process32Next(snap, &pe32)); } CloseHandle(snap); } return pID; } HANDLE get(DWORD pID) { HANDLE procCheck = NULL; HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0); if (snap) { pe32.dwSize = sizeof(pe32); if (Process32First(snap, &pe32)) { do { procCheck = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pID); std::cout << pe32.szExeFile << '\n'; std::cout << choice << '\n'; if (strcmp(pe32.szExeFile, "ac_cube.exe") == 0) { if (choice != pe32.szExeFile) { std::cout << pe32.szExeFile << '\n'; std::cout << choice << '\n'; break; } } CloseHandle(procCheck); procCheck = NULL; } while (Process32Next(snap, &pe32)); } CloseHandle(snap); } return procCheck; } int main() { HANDLE openProc; DWORD pID = getProc(choice, &openProc); if (pID != 0) { HANDLE pHandle = get(pID); if (pHandle) { ... 
CloseHandle(pHandle); } if (openProc) CloseHandle(openProc); } return 0; } That being said, I honestly do not understand what you are trying to accomplish with the get() function. If all you are trying to do is wait for a process to start running, your getProc() function will suffice, eg: int main() { HANDLE pHandle; DWORD pID; while ((pID = getProc(choice, &pHandle)) == 0) { Sleep(10); } // use pID and pHandle as needed... CloseHandle(pHandle); return 0; }
{ "language": "en", "url": "https://stackoverflow.com/questions/75633997", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: AAD B2C not skipping orchestration step Good day! I have a problem with skipping an orchestration step in AAD B2C. I'll start at the beginning. I have a custom user attribute named User Tagged; its type is boolean and it indicates whether the user is tagged or not. In my XML file, I have declared the ClaimType for User Tagged. Here is the ClaimType: <ClaimType Id="extension_userTagged"> <DataType>boolean</DataType> </ClaimType> In my technical profile, I've set the default value of User Tagged to true. Here is the TechnicalProfile: <!-- Technical profile to set extension_userTagged to true --> <TechnicalProfile Id="UserTagged"> <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.ClaimsTransformationProtocolProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" /> <OutputClaims> <OutputClaim ClaimTypeReferenceId="extension_userTagged" DefaultValue="true" AlwaysUseDefaultValue="true" /> </OutputClaims> </TechnicalProfile> In my orchestration steps, I have two steps: the API that tags the user, and the step that sets User Tagged to true. Here are the OrchestrationSteps: <!-- # The Tagging API Step --> <OrchestrationStep Order="1" Type="ClaimsExchange"> <Preconditions> <Precondition Type="ClaimEquals" ExecuteActionsIf="true"> <Value>extension_userTagged</Value> <Value>True</Value> <Action>SkipThisOrchestrationStep</Action> </Precondition> </Preconditions> <ClaimsExchanges> <ClaimsExchange Id="ApiTaggingStep" TechnicalProfileReferenceId="TPRestfulTagging" /> </ClaimsExchanges> </OrchestrationStep> <!-- Setting the User Tagged to true --> <OrchestrationStep Order="2" Type="ClaimsExchange"> <Preconditions> <Precondition Type="ClaimEquals" ExecuteActionsIf="true"> <Value>extension_userTagged</Value> <Value>True</Value> <Action>SkipThisOrchestrationStep</Action> </Precondition> </Preconditions> <ClaimsExchanges> <ClaimsExchange Id="CheckUserIfTagged" TechnicalProfileReferenceId="UserTagged"/> </ClaimsExchanges> 
</OrchestrationStep> Now comes the problem: if the user is NOT yet tagged, my orchestration steps work. But when the user IS tagged, orchestration step 1 still executes. I have checked the logs, and yes, the extension_userTagged claim is there. For some reason the preconditions aren't evaluated. I'm probably missing something, but I don't know what it is. Thank you for your help! A: Not sure, but I always use: <Precondition Type="ClaimsExist" ExecuteActionsIf="true">
{ "language": "en", "url": "https://stackoverflow.com/questions/75633999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: SQL query that does not exist Phpmyadmin I am writing a SQL query to show me the following:
- Results that have _go_product_url in the meta_key and where that is not null.
- From those results, only the ones that do not have the meta_key _go_product_info.
What would be the best method without spending a lot of resources on the server? I am working with Wordpress, but for now I am testing in phpMyAdmin. I ran this query but got no results: SELECT * FROM `wp_postmeta` WHERE `meta_key` LIKE '%_go_product_url%' AND `meta_value` IS NOT NULL AND NOT EXISTS (SELECT * FROM `wp_postmeta` WHERE `meta_key` LIKE '%_go_product_info%' IS NULL); A: You presumably want to identify posts meeting these criteria, so you should be selecting something like the post_id and/or the other columns you want. Here is one aggregation approach: SELECT post_id FROM wp_postmeta GROUP BY post_id HAVING SUM(meta_key LIKE '%_go_product_url%') > 0 AND SUM(meta_key LIKE '%_go_product_info%') = 0;
{ "language": "en", "url": "https://stackoverflow.com/questions/75634000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Android Studio was unable to start correctly (0xc0000005) Recently I downloaded Android Studio for some projects. After installing it, when I try to run the program it says "Android Studio was unable to start correctly (0xc0000005)". I tried reinstalling the program and restarted my PC, but it still didn't work. I then tried installing from the zip file rather than the .exe, but it still shows the same thing. How do I fix this problem? I tried reinstalling, hoping it would work, but it didn't go well.
{ "language": "en", "url": "https://stackoverflow.com/questions/75634001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: "Invalid content type for feed" on my RSS feed (PHP / Wordpress) We create RSS feeds on our Wordpress-based website which had been working for 3-4 years. Suddenly, a couple of months ago, they all became "invalid" in the eyes of Apple, Google, Spotify, and various RSS validators. The error I'm getting is "Invalid content type for feed" in this validator: https://www.castfeedvalidator.com/validate.php?url=https://www.ananda.org/video/series/life-lessons-in-unexpected-places/podcast The detailed view shows that, in the HTTP headers, the Content-Type is set to text/html, whereas I see nothing of the like when I navigate to it manually: https://www.ananda.org/video/series/life-lessons-in-unexpected-places/podcast This is the beginning of my file (after setting some variables). Nothing strange that I can see. The headers are clearly sent as text/xml: /** * Output the podcast feed */ header("Content-Type: text/xml;charset=utf-8"); //header("content-type: application/rss+xml; charset=utf-8"); echo '<?xml version="1.0" encoding="UTF-8"?' . '>'; // Have to split up the PHP-triggering bit ?><rss version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd"
{ "language": "en", "url": "https://stackoverflow.com/questions/75634008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Handling States.Runtime errors in AWS Step Functions I am using AWS Step Functions for an automation solution, and the execution is stopping because of a Runtime exception. I added a catch block to catch the States.Runtime exception, and when it occurs it should go to the next task defined, but it does not reach there and stops at the block where the exception happened. Here is my Step Function definition: { "Comment": "Terminate a Tagged EC2 Instance", "StartAt": "DescribeInstances", "States": { "DescribeInstances": { "ResultPath": "$.DescribeInstances", "Resource": "arn:aws:states:::aws-sdk:ec2:describeInstances", "Type": "Task", "Parameters": { "InstanceIds.$": "$.Params.InstanceIds" }, "ResultSelector": { "Value.$": "$.Reservations[0].Instances[0].Tags[0].Value" }, "Catch": [ { "ErrorEquals": [ "States.Runtime" ], "Next": "RuntimeErrorFallback" } ], "Next": "isInstanceTagged" }, "isInstanceTagged": { "Type": "Choice", "Choices": [ { "Variable": "$.DescribeInstances.Value", "StringEquals": "true", "Next": "TerminateInstances" }, { "Not": { "Variable": "$.DescribeInstances.Value", "StringEquals": "true" }, "Next": "send_email2" } ] }, "TerminateInstances": { "ResultPath": "$.TerminateInstances", "Resource": "arn:aws:states:::aws-sdk:ec2:terminateInstances", "Type": "Task", "Parameters": { "InstanceIds.$": "$.Params.InstanceIds" }, "Next": "send_email" }, "send_email": { "Resource": "arn:aws:lambda:us-east-1:1111111:function:send_email", "Type": "Task", "End": true }, "send_email2": { "Resource": "arn:aws:lambda:us-east-1:1111111:function:send_email", "Type": "Task", "End": true }, "RuntimeErrorFallback": { "Type": "Pass", "Result": "This is a fallback for Runtime Error", "End": true } } } Is there anything I am missing so that I can handle this correctly? A: Unfortunately, you cannot catch States.Runtime errors. "An execution failed due to some exception that it couldn't process. 
Often these are caused by errors at runtime, such as attempting to apply InputPath or OutputPath on a null JSON payload. A States.Runtime error isn't retriable, and will always cause the execution to fail. A retry or catch on States.ALL won't catch States.Runtime errors." -- From Step Functions documentation. The most common case for this type of scenario is when the JSON Path you specified in your definition conflicts with the input to your state or the results from a Task. In your particular case, I suspect that the input either doesn't have the required array of InstanceIds or the input contains a string rather than an array of strings (as required by the API). If that's the case, the following ASL demonstrates how to handle this. The first choice condition confirms that the referenced item exists, then sends to a Fail state if not (you could take some other compensating action). If it's present but is a string instead of an array, then it will transform to an array using the States.Array intrinsic function. { "StartAt": "Check for Instance Ids", "States": { "Check for Instance Ids": { "Type": "Choice", "Choices": [ { "Not": { "Variable": "$.Params.InstanceIds", "IsPresent": true }, "Next": "No Instance Ids" }, { "Variable": "$.Params.InstanceIds", "IsString": true, "Next": "Transform To Array" } ], "Default": "DescribeInstances" }, "No Instance Ids": { "Type": "Fail", "Error": "InvalidInput", "Cause": "The required value $.Params.InstanceIds was not provided" }, "Transform To Array": { "Type": "Pass", "Next": "DescribeInstances", "Parameters": { "Params": { "InstanceIds.$": "States.Array($.Params.InstanceIds)" } } }, "DescribeInstances": { "Type": "Task", "End": true, "Parameters": { "InstanceIds.$": "$.Params.InstanceIds" }, "Resource": "arn:aws:states:::aws-sdk:ec2:describeInstances", "ResultSelector": { "Value.$": "$.Reservations[0].Instances[0].Tags[0].Value" } } } }
{ "language": "en", "url": "https://stackoverflow.com/questions/75634009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: My Windows Task Scheduler no longer triggers a task after Re-installing Windows 10. Is there an error in my custom filter? I finally decided to clean install Windows 10, but I exported all my custom tasks (which were all working) beforehand. After the clean install I noticed my files weren't synchronizing. Part of the issue was Google's renaming of the sync folder from "Google Drive" to "My Drive", but that was the only change. I made especially sure all the drive letters, along with my user folder, were exactly the same. The same goes for my computer's host name. I obviously adjusted the directory path for the batch file in my Google Drive folder and quadruple-checked to make sure it's correct. I can confirm that the task does launch successfully when run on demand. However, it doesn't trigger when I launch Dolphin.exe (I'm trying to sync save data between two PCs). It's beyond me why the trigger would work with one install and not the other when the directory path to the program is exactly the same (same version of Windows too). Here's the path recorded via SHIFT + Right-click -> "copy path" "C:\Program Files\Dolphin\Dolphin.exe" Below are the contents of my custom event filter that is applied as the trigger. Again, it was working on my previous Windows install: <QueryList> <Query Id="0" Path="Security"> <Select Path="Security">*[System[Provider[@Name='Microsoft-Windows-Security-Auditing'] and Task = 13312 and (band(Keywords,9007199254740992)) and (EventID=4688)]] and *[EventData[Data[@Name='NewProcessName'] and (Data='C:\Program Files\Dolphin\Dolphin.exe')]]</Select> </Query> </QueryList> Any idea what the issue could be? I imported a task from my previous Windows installation. I also modified the path to my batch file, which reflected an update to the Google Drive directory path. The task works when launched manually, so the issue must be the trigger. 
I was expecting the task to run upon launching Dolphin.exe. A: The issue was resolved by enabling 'Application start' logging. I completely forgot it had to be enabled.
- Press Start and enter secpol.msc into the Run box
- Navigate to Local Policies/Audit Policy
- Double-click Audit process tracking and enable Success
{ "language": "en", "url": "https://stackoverflow.com/questions/75634011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: How to create an auto call recording app (service) without a user interface in Android Studio? I noticed that Google Dialer plays an announcement after call recording starts... To get around this, I want to create an app without any UI which records incoming and outgoing calls automatically when the user receives or makes a call, and afterwards saves the recording to storage. Should I use only a background service, a broadcast receiver, or both? I have tried to achieve this with the following code, but haven't managed it yet. Thanks! package com.example.central_device_manager; import android.app.Service; import android.content.Context; import android.content.Intent; import android.media.MediaRecorder; import android.net.Uri; import android.os.Environment; import android.os.IBinder; import android.telephony.PhoneStateListener; import android.telephony.TelephonyManager; import android.util.Log; import androidx.annotation.Nullable; import java.io.File; import java.io.IOException; public class CallRecorder extends Service{ public static final String TAG = "MyTag"; private MediaRecorder recorder; private String phoneNumber; private boolean isRecording; private Context context; public CallRecorder() { // Default constructor required by Android } @Nullable @Override public IBinder onBind(Intent intent) { return null; } private CallRecorder(Context context) { recorder = new MediaRecorder(); this.context = context; } public static CallRecorder instance; public static CallRecorder getInstance(Context context) { if (instance == null) { instance = new CallRecorder(context); } return instance; } @Override public int onStartCommand(Intent intent, int flags, int startId) { Log.d(TAG,"onStartCommand is started..."); startRecording(); return Service.START_REDELIVER_INTENT; } public void startRecording() { Log.d(TAG,"startRecording is started..."); TelephonyManager telephonyManager = (TelephonyManager) context.getSystemService(Context.TELEPHONY_SERVICE); PhoneStateListener 
callStateListener = new PhoneStateListener() { String fileName; File recording; public void onCallStateChanged(int state, String incomingNumber) { Log.d(TAG,"onCallStateChanged executed due to call state changed..."); switch (state) { case TelephonyManager.CALL_STATE_RINGING: Log.d(TAG,"Phone is ringing..."); phoneNumber = incomingNumber; break; case TelephonyManager.CALL_STATE_OFFHOOK: Log.d(TAG,"Phone is on offhook state..."); if (!isRecording) { Log.d(TAG,"starting record..."); // start recording Toast.makeText(getApplicationContext(), "Recording started", Toast.LENGTH_SHORT).show(); recorder.setAudioSource(MediaRecorder.AudioSource.VOICE_COMMUNICATION); recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4); recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC); File path = Environment.getExternalStorageDirectory(); try { File dir = new File(path, "CallRecorder"); if (!dir.exists()) { dir.mkdirs(); } fileName = "record_" + phoneNumber + ".mp4"; recording = new File(dir, fileName); recorder.setOutputFile(recording.getAbsolutePath()); recorder.prepare(); recorder.start(); isRecording = true; } catch (IOException e) { e.printStackTrace(); } } if (state == TelephonyManager.CALL_STATE_IDLE) { Log.d(TAG,"stopping record..."); // stop recording if (isRecording) { recorder.stop(); recorder.reset(); isRecording = false; Toast.makeText(getApplicationContext(), "Recording stopped", Toast.LENGTH_SHORT).show(); } else { // recording was not started, do nothing } } else if (state == TelephonyManager.CALL_STATE_RINGING) { // call is ringing, do nothing } else { Log.d(TAG,"Unable to identify call state..."); // call state is unknown, do nothing } } } }; telephonyManager.listen(callStateListener,PhoneStateListener.LISTEN_CALL_STATE); Log.d(TAG,"startRecording method executed..."); } }
{ "language": "en", "url": "https://stackoverflow.com/questions/75634013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Why is srand used only in main function? #include <stdio.h> #include <stdlib.h> #include <time.h> int intRandom(int min, int max){ int t; t=min+rand()%(max-min+1); return t; } int main(){ srand(time(NULL)); int total; do { printf("Enter the total: "); scanf("%d",&total); }while (total<2 || total>12); int count=1; int x,y; do{ x=intRandom(1,6); y=intRandom(1,6); printf("Result of throw %d : %d+%d\n ",count,x,y); count++; }while (x+y!=total); return 0; } I need a reasonable explanation as to why we don't use srand in a function other than main. And could you explain why we have to add 1 in this part of the formula generating the random number, (max-min+1), instead of just (max-min), in the intRandom function?
{ "language": "en", "url": "https://stackoverflow.com/questions/75634014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: const server = quic.createServer({ ^ TypeError: quic.createServer is not a function Here is the output: F:\js learning\js basic learning\bye\server.js:3 const server = quic.createServer({ ^ TypeError: quic.createServer is not a function at Object.<anonymous> (F:\js learning\js basic learning\bye\server.js:3:21) at Module._compile (node:internal/modules/cjs/loader:1218:14) at Module._extensions..js (node:internal/modules/cjs/loader:1272:10) at Module.load (node:internal/modules/cjs/loader:1081:32) at Module._load (node:internal/modules/cjs/loader:922:12) at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:82:12) at node:internal/main/run_main_module:23:47 Here is the code using the QUIC protocol, but I'm facing the above-mentioned problem. Please help me solve it. const quic = require('quic'); const server = quic.createServer({ key: 'server.key', cert: 'server.crt' }); server.on('session', (session) => { session.on('stream', (stream) => { stream.write('Hello, world!'); stream.end(); }); }); server.listen(1234, () => { console.log('QUIC server is listening on port 1234'); });
{ "language": "en", "url": "https://stackoverflow.com/questions/75634016", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }