I'm encountering a TypeError while working on a React project where I'm using a Chart component to display user analytics data fetched from an API. Additionally, I'm seeking guidance on how to implement filtering by year. Here's the error message I'm seeing:

```
TypeError: Cannot read properties of undefined (reading 'year')
```

**Code Snippets:**

Chart component:

```
import "./Chart.scss";
import { LineChart, Line, XAxis, CartesianGrid, Tooltip, ResponsiveContainer } from 'recharts';
import React, { useState } from 'react'; // Don't forget to import React if you're using JSX
import Calendar from 'react-calendar'; // Import the Calendar component

const Chart = ({ title, data, datakeys, grid }) => {
  // Initialize selectedYear state with the current year
  const [selectedYear, setSelectedYear] = useState(new Date().getFullYear());

  // Extracting month values from the data array
  const months = data.map(item => item._id.month);

  // Filter data based on the selected year
  const filteredData = data.filter(item => item._id.year === selectedYear);

  // Handle calendar change
  const handleCalendarChange = date => {
    setSelectedYear(date.getFullYear());
  };

  return (
    <div className="chart">
      <div className="topOfchart">
        <div className="left">
          <h3 className="chartTitle"> {title} </h3>
        </div>
        <div className="right">
          <Calendar
            onChange={handleCalendarChange}
            value={new Date(selectedYear, 0, 1)} // Set the initial value to January 1st of the selected year
            view="year" // Display the calendar in year view
          />
        </div>
      </div>
      <ResponsiveContainer width="100%" aspect={4/1}>
        <LineChart data={filteredData}>
          <XAxis dataKey="_id.month" stroke="#5550bd" tick={{fontSize: 12}} /> {/* Accessing the month value from the _id object */}
          <Line type="monotone" dataKey={datakeys} stroke="#5550bd"/>
          <Tooltip/>
          { grid && <CartesianGrid stroke="#e0dfdf" strokeDasharray="5 5"/> }
        </LineChart>
      </ResponsiveContainer>
    </div>
  );
};

export default Chart;
```

and the Home page component:

```
import Chart from
"../../components/chart/Chart"
import FeaturedInfo from "../../components/featuredinfo/FeaturedInfo"
import { WidgetLg } from "../../components/widgetLg/WidgetLg"
import { WidgetSm } from "../../components/widgetSm/WidgetSm"
import "./Home.scss"
import { Data } from "../../data";
import { useEffect, useState } from "react";
import api from "../../api";

const Home = () => {
  const MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"];
  const [userStats, setUserStats] = useState([]);

  useEffect(() => {
    const getStats = async () => {
      try {
        const res = await api.get("/users/stats", {
          headers: {
            token: "Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6IjY1YjkxMWYwMjM5MjAxMTBhNTA3NGJmNCIsImlzQWRtaW4iOnRydWUsImlhdCI6MTcxMTcwMzAzMiwiZXhwIjoxNzE0NzI3MDMyfQ.ClT9x0c6v_Bfm9zvhmDAwUPDHWkm-Ws-yZ1CfNhX-5Y"
          }
        });
        setUserStats(res.data);
      } catch (error) {
        console.log(error);
      }
    };
    getStats();
  }, []);

  console.log(userStats);

  return (
    <div className="home">
      <FeaturedInfo/>
      <Chart data={userStats} title="User Analytics" grid datakeys="newUsers"/>
      <div className="homeWidgets">
        <WidgetSm/>
        <WidgetLg/>
      </div>
    </div>
  );
};

export default Home
```

I got this output: [![current output][1]][1] and I expected this, with filtering by year: [![expected output][2]][2]

[1]: https://i.stack.imgur.com/HiNQh.png
[2]: https://i.stack.imgur.com/dCMPP.png

Here is the backend code:

```
router.get("/stats", async (req, res) => {
  const today = new Date();
  const lastMonth = new Date(today.getFullYear(), today.getMonth() - 1, today.getDate()); // Get the date of last month
  const monthsArray = [
    "January", "February", "March", "April", "May", "June",
    "July", "August", "September", "October", "November", "December"
  ];
  try {
    // Calculate total users of all time
    const totalUsersData = await User.aggregate([
      { $group: { _id: null, totalUsers: { $sum: 1 } } }
    ]);

    // Calculate total new users of last month
    const totalNewUsersData = await User.aggregate([
      { $match: { createdAt: { $gte: lastMonth } } }, // Filter users created within the last month
      { $group: { _id: null, totalNewUsers: { $sum: 1 } } }
    ]);

    let totalUsers;
    if (totalUsersData.length > 0) {
      totalUsers = totalUsersData[0].totalUsers;
    } else {
      totalUsers = 0;
    }

    let totalNewUsers;
    if (totalNewUsersData.length > 0) {
      totalNewUsers = totalNewUsersData[0].totalNewUsers;
    } else {
      totalNewUsers = 0;
    }

    // Retrieve monthly user data
    const monthlyUserData = await User.aggregate([
      {
        $group: {
          _id: {
            year: { $year: "$createdAt" },
            month: { $arrayElemAt: [monthsArray, { $subtract: [{ $month: "$createdAt" }, 1] }] }
          },
          total: { $sum: 1 }, // Total users
          newUsers: {
            $sum: {
              $cond: [
                { $gte: ["$createdAt", lastMonth] }, // Check if user was created within the last month
                1,
                0
              ]
            }
          }
        }
      },
      { $project: { _id: 1, total: 1, newUsers: 1 } }
    ]);

    const data = [
      { totalUsers: totalUsers, totalNewUsers: totalNewUsers },
      ...monthlyUserData
    ];
    res.status(200).json(data);
  } catch (error) {
    res.status(500).json(error);
  }
});
```
TypeError: Cannot read properties of undefined (reading 'year') in React Chart Component, and How to Implement Filtering by Year?
|reactjs|node.js|mongodb|mern|
If you use MySQL 8, you can use this package: [staudenmeir/laravel-adjacency-list][1]; it needs no change to your table structure. For other versions: [lazychaser/laravel-nestedset][2], which requires adding two columns, `_lft` and `_rgt`, to your table.

[1]: https://github.com/staudenmeir/laravel-adjacency-list
[2]: https://github.com/lazychaser/laravel-nestedset
Timezone Issue with Clickhouse - Asia/Tehran Timezone
I'm trying to get the DescriptorMatcher to work. I have a template image that I want to look for in every frame of a video feed. Here is the relevant code. The following is done once, since the template is static:

```
templateImage = Utils.loadResource(mContext, R.raw.watch);
detector.detect(rgb, templateKeypoints);
Toast.makeText(mContext, "templateImage:" + templateImage.rows() + "/" + templateImage.cols()
        + " templateKeypoints:" + templateKeypoints.rows() + "/" + templateKeypoints.cols(),
        Toast.LENGTH_LONG).show();
descriptor.compute(rgb, templateKeypoints, templateDescriptors);
```

The result of this toast is "templateImage:480/640 templateKeypoints:0/1", so the keypoints have 0 rows and 1 column... Next, on every frame this code is run:

```
public Mat featureDetection(Mat inputImage) {
    // first image
    Mat descriptors1 = new Mat();
    MatOfKeyPoint keypoints1 = new MatOfKeyPoint();
    detector.detect(inputImage, keypoints1);
    descriptor.compute(inputImage, keypoints1, descriptors1);

    // second image - COVERED BY templateKeypoints, templateDescriptors
    // detector.detect(img2, keypoints2);
    // descriptor.compute(img2, keypoints2, descriptors2);
    // detector.detect(templateImage, templateKeypoints);
    // descriptor.compute(templateImage, templateKeypoints, templateDescriptors);

    // matcher should include 2 different images' descriptors
    Log.d(TAG, "descriptors1.type: " + descriptors1.type() + " / cols: " + descriptors1.cols());
    Log.d(TAG, "templateDescriptors.type: " + templateDescriptors.type() + " / cols: " + templateDescriptors.cols());
    matcher.match(descriptors1, templateDescriptors, matches);

    // output image
    Mat outputImg = new Mat();
    MatOfByte drawnMatches = new MatOfByte();
    // this will draw all matches, works fine
    Features2d.drawMatches(inputImage, keypoints1, templateImage, templateKeypoints, matches,
            outputImg, GREEN, RED, drawnMatches, Features2d.NOT_DRAW_SINGLE_POINTS);
    return inputImage;
}
```

It crashes before I can even return anything (log results first): 04-27 23:04:53.011:
D/FrameProcessing(2225): descriptors1.type: 0 / cols: 32 04-27 23:04:53.011: D/FrameProcessing(2225): templateDescriptors.type: 0 / cols: 0 04-27 23:04:53.011: E/cv::error()(2225): OpenCV Error: Assertion failed (type == src2.type() && src1.cols == src2.cols && (type == CV_32F || type == CV_8U)) in void cv::batchDistance(cv::InputArray, cv::InputArray, cv::OutputArray, int, cv::OutputArray, int, int, cv::InputArray, int, bool), file /home/reports/ci/slave/50-SDK/opencv/modules/core/src/stat.cpp, line 1797 04-27 23:04:53.011: W/dalvikvm(2225): threadid=11: thread exiting with uncaught exception (group=0x413c9930) 04-27 23:04:53.018: E/AndroidRuntime(2225): FATAL EXCEPTION: Thread-207 04-27 23:04:53.018: E/AndroidRuntime(2225): CvException [org.opencv.core.CvException: /home/reports/ci/slave/50-SDK/opencv/modules/core/src/stat.cpp:1797: error: (-215) type == src2.type() && src1.cols == src2.cols && (type == CV_32F || type == CV_8U) in function void cv::batchDistance(cv::InputArray, cv::InputArray, cv::OutputArray, int, cv::OutputArray, int, int, cv::InputArray, int, bool) 04-27 23:04:53.018: E/AndroidRuntime(2225): ] 04-27 23:04:53.018: E/AndroidRuntime(2225): at org.opencv.features2d.DescriptorMatcher.match_1(Native Method) 04-27 23:04:53.018: E/AndroidRuntime(2225): at org.opencv.features2d.DescriptorMatcher.match(DescriptorMatcher.java:437) 04-27 23:04:53.018: E/AndroidRuntime(2225): at com.bigsolve.frameprocessing.FrameProcessing.featureDetection(FrameProcessing.java:114) So I think there must be some big problem with my `tempateKeypoints` and thus my `templateDescriptors`. Also, assuming I ever get past this error, what exactly are `outImage` and `drawnMatches`? If I want to overlay the potential matches to my template on my 640x480 video feed (the return value mat of `featureDetection`) would I return `outImage`?
Android OpenCV descriptorMatcher CvException error
The namespace at the beginning of the controller file should match the hierarchy of the directory structure; it should follow the [PSR-4 autoloading standard][1]. Check the [Laravel docs][2] for more details. So the namespace for `DokterController` should be:

```
namespace App\Http\Controllers\dokter;
```

and the use statement should be:

```
use App\Http\Controllers\dokter\DokterController;
```

Further, try to refactor the code in your route file. Instead of naming the controller separately when defining each route, you can use:

```
Route::controller(controllername::class)->group(function () {
    Route::get('routeurl', 'controllerMethodToCallForThisRoute');
});
```

[1]: https://www.php-fig.org/psr/psr-4/
[2]: https://laravel.com/docs/11.x/structure#the-app-directory
I have a nested list whose elements always have 5 values. The elements of the list can differ. I'd like to copy the list into two separate lists and change two values in each of the new lists. The code does this, but during the second loop the first list gets overwritten again, and I don't know why.

Example:

```
test = [['1', '1', '1', '2024-03-03', "100"],
        ['1', '1', '2', '2024-05-03', "200"],
        ['1', '2', '2', '2024-05-03', "200"],
        ['1', '3', '3', '2024-01-03', "200"]]
print(test)            # Okay

lst_01 = test.copy()   # Okay
lst_02 = test.copy()   # Okay

for item in lst_01:
    item[3] = "2024-01-01"
    item[4] = ""

print(lst_01)          # Okay
print(lst_02)          # Not okay: overwritten

for item in lst_02:
    item[3] = "2024-12-31"
    item[4] = ""

print(lst_01)          # Mistake: first list overwritten again
print(lst_02)          # Okay

sum = lst_01 + lst_02  # lst_01 and lst_02 are equal. This is not correct
print(sum)
```

Result of the code:

```
[['1', '1', '1', '2024-12-31', ''], ['1', '1', '2', '2024-12-31', ''], ['1', '2', '2', '2024-12-31', ''], ['1', '3', '3', '2024-12-31', ''], ['1', '1', '1', '2024-12-31', ''], ['1', '1', '2', '2024-12-31', ''], ['1', '2', '2', '2024-12-31', ''], ['1', '3', '3', '2024-12-31', '']]
```

The date is always "2024-12-31", even in the first list `lst_01`. Where is my mistake? Thanks for any feedback.

I expect a new list based on the content of the `test` list with changed values. The new list should have exactly double the number of elements, with the new values.
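For reference, `list.copy()` only makes a shallow copy, so `test`, `lst_01`, and `lst_02` all share the same inner lists, and mutating `item[3]` through one list is visible through the others. A minimal sketch of the difference using `copy.deepcopy` from the standard library (shortened to two rows for brevity):

```python
import copy

test = [['1', '1', '1', '2024-03-03', "100"],
        ['1', '1', '2', '2024-05-03', "200"]]

# deepcopy duplicates the inner lists too, so the copies are independent
lst_01 = copy.deepcopy(test)
lst_02 = copy.deepcopy(test)

for item in lst_01:
    item[3] = "2024-01-01"
    item[4] = ""

for item in lst_02:
    item[3] = "2024-12-31"
    item[4] = ""

combined = lst_01 + lst_02
print(combined[0][3])  # 2024-01-01
print(combined[2][3])  # 2024-12-31
print(test[0][3])      # 2024-03-03 (the original is untouched)
```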
How to restrict vpasolve() to only integer solutions (MATLAB)
|matlab|math|
I'm trying to create a pod for a project's API from a YAML file. The pod is created from an image that I already have in Docker Desktop and Docker Hub. I've even pulled this image into minikube, and even so, I can't create the pod: it stays in Back-off status and gives an error. See my logs:

```
Normal   Scheduled  66s                default-scheduler  Successfully assigned default/api-farmacia to minikube
Normal   Pulling    66s                kubelet            Pulling image "erivan41/app-farma:1.0"
Normal   Pulled     46s                kubelet            Successfully pulled image "erivan41/app-farma:1.0" in 20.344s (20.344s including waiting)
Normal   Created    23s (x3 over 45s)  kubelet            Created container api-farmacia-container
Normal   Pulled     23s (x2 over 40s)  kubelet            Container image "erivan41/app-farma:1.0" already present on machine
Normal   Started    22s (x3 over 45s)  kubelet            Started container api-farmacia-container
Warning  BackOff    3s (x3 over 34s)   kubelet            Back-off restarting failed container api-farmacia-container in pod api-farmacia_default(0c6c4ce3-d2b1-4a33-a400-75e1408e3b66)
```

I expected it to stay in Running, but it stays in CrashLoopBackOff status.
I can't create a pod in minikube on Windows
|windows|docker|kubernetes|minikube|
Geographical dimension tables are commonly used in many data models, and they have a hierarchy by nature, just like Date/DateTime dimensions. I'm wondering if there is any trick to get an OOTB geographical dimension table from Power BI, or alternatively any open source one to import, so that each Power BI developer doesn't have to reinvent the wheel and create one from scratch.
Does Power BI provide an OOTB Geographical Dimension Table
|powerbi|data-modeling|business-intelligence|dimensional-modeling|
In one iteration over the input array you can: * Collect the unique values in their order of first appearance * Count how many you have of each (Both can be captured by using a hash map that maintains insertion order, like for instance a dict in Python 3.6+, a Map in JavaScript, or a LinkedHashMap in Java) With this information you can produce the sorted output.
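As a sketch of the idea (in Python, since the exact output step depends on the original task): a plain `dict` in Python 3.7+ iterates its keys in insertion order, so one pass collects both the first-appearance order and the counts:

```python
def unique_with_counts(values):
    # Single pass: keys record first-appearance order, values record counts.
    counts = {}
    for v in values:
        counts[v] = counts.get(v, 0) + 1
    return counts

counts = unique_with_counts([3, 1, 3, 2, 1, 3])
print(list(counts.items()))  # [(3, 3), (1, 2), (2, 1)]
```

From `counts` you can then emit the sorted output in whatever form the task requires, e.g. each distinct value repeated by its count.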
Please, don't be harsh, I'm just a beginner! I have a MERN stack application which I deployed to Render (server side as web service and front end as static). In my application, I have a profile where users can upload avatar image. I store an image as a url string in my MongoDB atlas in User model. Images are saved on the server in uploads folder. The image upload works fine and image fetches successfully, from both deployment and local environment. But after a while I start to get Internal server error when fetching same image. What is the reason and how to solve this issue? Or is there a better way of storing images in MongoDB atlas without using cloud or GridFs? Here's how I set user avatar ``` const setUserAvatar = async (req, res) => { try { if (!req.user || !req.user.id) { return res.status(401).json({ message: 'Unauthorized' }); } const user = await User.findById(req.user.id); if (!user) { return res.status(404).json({ message: 'User not found' }); } if (!req.file) { return res.status(400).json({ message: 'No file uploaded' }); } const fileName = `avatar_${user.id}_${Date.now()}.png`; const filePath = path.join(__dirname, '..', 'uploads', fileName); fs.writeFileSync(filePath, req.file.buffer); user.avatar = { data: fileName }; await user.save(); res.status(200).json({ message: 'Avatar uploaded successfully' }); } catch (error) { console.error('Error uploading avatar', error); res.status(500).json({ message: 'Internal Server Error' }); } }; ``` And get: ``` const getUserAvatar = async (req, res) => { const userId = req.user.id; try { const user = await User.findById(userId); if (user.avatar.data === null) { return res.status(404).json({ message: 'No avatar found' }) } if (!user || !user.avatar.data) { res.send(user.avatar.data) } else { const filePath = path.join(__dirname, '..', 'uploads', user.avatar.data); const avatarData = fs.readFileSync(filePath); res.setHeader('Content-Type', 'image/*'); res.send(avatarData); } } catch (error) { console.error('Error 
fetching avatar', error); res.status(500).json({ message: 'Internal Server Error' }); } }; ``` How I handle avatar change on front end: ``` const handleAvatarChange = async (e) => { const token = localStorage.getItem('token'); if (token) { try { const formData = new FormData(); formData.append('avatar', e.target.files[0]) const response = await fetch('RENDERLINK/avatar', { 'method': 'POST', 'headers': { 'Authorization': `Bearer ${token}`, }, 'body': formData, 'credentials': 'include' }) if (response.status === 200) { getUser() toast.success('Avatar changed successfully') } else if (response.status === 400) { toast.error('No file selected') } else { toast.error('Error changing avatar') } } catch (error) { toast.error('Oops! Something went wrong. Please try again later.') } } else { console.log('No token found') navigate('/login') } } ``` Fetching user data including avatar: ``` const getUser = async () => { const token = localStorage.getItem('token'); if (token) { try { const response = await fetch('RENDERLINK/profile', { 'method': 'POST', 'headers': { 'Content-Type': 'application/json', 'Authorization': `Bearer ${token}`, }, 'credentials': 'include' }) if (response.ok) { const data = await response.json(); setUser({ username: data.username, email: data.email, avatar: data.avatar.data }) if (data.avatar.data !== null) { // Fetch and set the avatar image const avatarResponse = await fetch('RENDERLINK/get_avatar', { headers: { 'Authorization': `Bearer ${token}`, }, credentials: 'include', }); const avatarBlob = await avatarResponse.blob(); const avatarUrl = URL.createObjectURL(avatarBlob); setUser((prevUser) => ({ ...prevUser, avatar: avatarUrl })); setLoading(false) } } else if (response.status === 500) { toast.error('Oops! Something went wrong. Please try again later.') } } catch (error) { toast.error('Oops! Something went wrong. Please try again later.') } } else { console.log('No token found') navigate('/login') } } ```
MERN Stack App - User Avatar Upload - 500 Error After Deployment on Render
|reactjs|mongodb|file-upload|mern|render.com|
It seems my collection doesn't contain that case but we can achieve it by combining #37 with #21 https://css-generators.com/tooltip-speech-bubble/ First you copy the code of #37, then you add an extra span that will contain the code of #21 (not all its code but only the pseudo element part) <!-- begin snippet: js hide: false console: true babel: false --> <!-- language: lang-css --> .tooltip { /* triangle dimension */ --a: 90deg; /* angle */ --h: 1em; /* height */ --p: 50%; /* triangle position (0%:left 100%:right) */ --b: 7px; /* border width */ --r: 1.2em; /* the radius */ --c1: #0009; /* semi-transparent background*/ --c2: red; /* border color*/ padding: 1em; color: #fff; position: relative; z-index: 0; } .tooltip:before, .tooltip:after { content: ""; position: absolute; z-index: -1; inset: 0; background: conic-gradient(var(--c2) 0 0); --_p:clamp(var(--h)*tan(var(--a)/2) + var(--b),var(--p),100% - var(--h)*tan(var(--a)/2) - var(--b)); } .tooltip:before { padding: var(--b); border-radius: var(--r) var(--r) min(var(--r),100% - var(--p) - var(--h)*tan(var(--a)/2)) min(var(--r),var(--p) - var(--h)*tan(var(--a)/2))/var(--r); background-size: 100% calc(100% + var(--h)); clip-path: polygon(0 100%,0 0,100% 0,100% 100%, calc(var(--_p) + var(--h)*tan(var(--a)/2) - var(--b)*tan(45deg - var(--a)/4)) 100%, calc(var(--_p) + var(--h)*tan(var(--a)/2) - var(--b)*tan(45deg - var(--a)/4)) calc(100% - var(--b)), calc(var(--_p) - var(--h)*tan(var(--a)/2) + var(--b)*tan(45deg - var(--a)/4)) calc(100% - var(--b)), calc(var(--_p) - var(--h)*tan(var(--a)/2) + var(--b)*tan(45deg - var(--a)/4)) 100% ); -webkit-mask: linear-gradient(#000 0 0) content-box,linear-gradient(#000 0 0); -webkit-mask-composite: xor; mask-composite: exclude; } .tooltip:after { bottom: calc(-1*var(--h)); clip-path: polygon( calc(var(--_p) + var(--h)*tan(var(--a)/2)) calc(100% - var(--h)), var(--_p) 100%, calc(var(--_p) - var(--h)*tan(var(--a)/2)) calc(100% - var(--h)), calc(var(--_p) - var(--h)*tan(var(--a)/2) + 
var(--b)*tan(45deg - var(--a)/4)) calc(100% - var(--h) - var(--b)), var(--_p) calc(100% - var(--b)/sin(var(--a)/2)), calc(var(--_p) + var(--h)*tan(var(--a)/2) - var(--b)*tan(45deg - var(--a)/4)) calc(100% - var(--h) - var(--b))); } .tooltip span { position: absolute; z-index: -1; inset: 0; padding: var(--b); border-radius: var(--r); clip-path: polygon(0 100%, 0 0, 100% 0, 100% 100%, min(100% - var(--b), var(--p) + var(--h)* tan(var(--a) / 2) - var(--b)* tan(45deg - var(--a) / 4)) calc(100% - var(--b)), var(--p) calc(100% + var(--h) - var(--b) / sin(var(--a) / 2)), max(var(--b), var(--p) - var(--h)* tan(var(--a) / 2) + var(--b)* tan(45deg - var(--a) / 4)) calc(100% - var(--b))); background: var(--c1) content-box; border-image: conic-gradient(var(--c1) 0 0) fill 0 / calc(100% - var(--h) - var(--b)) max(var(--b), 100% - var(--p) - var(--h)* tan(var(--a) / 2)) 0 max(var(--b), var(--p) - var(--h)* tan(var(--a) / 2)) / 0 0 var(--h) 0; } body { background: url(https://picsum.photos/id/107/800/600) center/cover } .tooltip { font-size: 18px; max-width: 28ch; text-align: center; } <!-- language: lang-html --> <div class="tooltip"> <span></span> This is a Tooltip with a border and a border radius. Border can have a solid or gradient coloration and the background is transparent</div> <!-- end snippet -->
Polars with Rust: Out of Memory Error when Processing Large Dataset in Docker Using Streaming
|database|database-normalization|first-normal-form|
I'm making my first game in Twine and trying to figure out the JS. I apologize in advance for the code in my examples: Twine uses slightly modified HTML, and I don't know how to write it correctly in pure HTML, so I copied it directly from the project. In the project everything is correct and works, but as an excerpt in pure HTML form, as here, it most likely won't run. I've included it so you can follow the logic and help me write the correct JS. Sorry about that!

Now to the question. I'm making an in-game phone. There is an array of icons. Using a for loop, I display universal buttons in a grid; only the icons in them change, based on the loop iteration and the array. The names of the icons correspond to the IDs of the blocks, for example foto.svg corresponds to fotoApp. This is implemented so that when I add the next application icon to the array and create the corresponding block, the JS logic keeps working regardless of the number of applications (images in the array).

I was able to implement the opening itself (an application opens like on a real phone, overlapping the desktop) and to wire up each button, but all the buttons open the first test block, "messengerApp", with no binding to names so far. I open an app by adding the classes "active" and "activeNow", and by replacing and removing them I was able to implement the "return" button.

The task is to ensure that each button opens only its own application: foto opens fotoApp, phone opens phoneApp, etc. I thought I needed to somehow take the icon's name and substitute it into the button whose id is "App", so that foto.svg would give foto + App = fotoApp, and the block with that ID would receive the class "activeNow" and thereby open. Preferably without changing the existing structure too much. Please help!
``` <div class = "phone"> <div class = "appBtns"> for (let i = 0; i < setup.appButtonsList.length; i++) { <span id="gridItem"> <button id="App" class="appButton open__app">[img[setup.appButtonsList[_i]]] </button> </span> } </div> <div class = "messengerApp content__app"> </div> <div class = "phoneApp content__app"> </div> <div class = "fotoApp content__app"> </div> <div class = "browserApp content__app"> </div> <div class = "phoneBtns"> <span id='appLast' class="phone-button" ><<button [img[assets/img/phone/phoneBtns/lastApp.svg]]>><</button>></span> <span id='appHome' class="phone-button close__app" ><<button [img[assets/img/phone/phoneBtns/homeApp.svg]]>><</button>></span> <span id='appReturn' class="phone-button return__app" ><<button [img[assets/img/phone/phoneBtns/returnApp.svg]]>><</button>></span> </div> </div> ``` ``` .content__app { display: none; } .content__appChat { display: none; } .content__app.active { display: flex; } .content__app.activeNow { display: flex; } .content__appChat.active { display: flex; } .content__appChat.activeNow { display: flex; } .phone { display:flex; position: relative; flex-direction:column; justify-content: flex-start; width: 232px; z-index:1; border-radius: 30px; overflow: hidden; height: 514px; z-index: 1; background-color: blue; } #appButtons{ display: grid; grid-template-columns: 1fr 1fr 1fr 1fr; grid-template-rows: 1fr 1fr 1fr 1fr 0.3fr 1fr; grid-template-areas: "6 6 6 6" "5 5 5 5" "4 4 4 4" "3 3 3 image" "upArrow upArrow upArrow upArrow" "phone internet sms foto"; flex-grow: 1; flex-wrap: wrap; justify-content: center; justify-items: center; align-items: center; padding: 40px 10px 0px 10px; z-index: 2; } #appButtons img { width: 40px; } #gridItem { width: 40px; height:40px; } .messengerApp{ position: relative; flex-direction:column; justify-content: flex-start; width: 232px; z-index:1; border-radius: 30px; overflow: hidden; height: 514px; z-index: 1; background-color: green; } .fotoApp{ position: relative; 
flex-direction:column; justify-content: flex-start; width: 232px; z-index:1; border-radius: 30px; overflow: hidden; height: 514px; z-index: 1; background-color: red; } .browserApp{ position: relative; flex-direction:column; justify-content: flex-start; width: 232px; z-index:1; border-radius: 30px; overflow: hidden; height: 514px; z-index: 1; background-color: black; } .phoneApp{ position: relative; flex-direction:column; justify-content: flex-start; width: 232px; z-index:1; border-radius: 30px; overflow: hidden; height: 514px; z-index: 1; background-color: yellow; } ``` ``` let setup.appButtonsList = ['assets/img/phone/appBtns/foto.svg', 'assets/img/phone/appBtns/messenger.svg', 'assets/img/phone/appBtns/browser.svg', 'assets/img/phone/appBtns/phone.svg']; let openApp = document.querySelectorAll('.open__app'); let openAppChat = document.querySelectorAll('.open__appChat'); let contentApp = document.querySelector('.content__app'); let contentAppChat = document.querySelector('.content__appChat'); let closeApp = document.querySelector('.close__app'); let returnApp = document.querySelector('.return__app'); let returnAppChat = document.querySelector('.messengerReturn'); const active = document.querySelector('.active'); const activeNow = document.querySelector('.activeNow'); openApp.forEach(open__app => {open__app.addEventListener('click', () => contentApp.classList.toggle('activeNow')); }); openAppChat.forEach(open__appChat => {open__appChat.addEventListener('click', () => {contentApp.classList.replace('activeNow', 'active'); contentAppChat.classList.add('activeNow'); }) }); closeApp.addEventListener('click', () => {contentApp.classList.remove('active'); contentAppChat.classList.remove('activeNow'); }); returnApp.addEventListener('click', () => {contentAppChat.classList.remove('activeNow'); contentApp.classList.remove('activeNow'); contentApp.classList.replace('active', 'activeNow'); }); returnAppChat.addEventListener('click', () => 
{contentAppChat.classList.remove('activeNow'); }); ``` I read a lot of examples on the Internet where apps are opened and closed with one button, hidden via "hidden", via "template", or via a listener on the buttons. But nothing I tried fit my setup, or I don't know how to apply it all correctly.
An array of images and a for loop display the buttons. How to assign each button to open its own block by name?
|javascript|
Hello Stack Overflow community, I am encountering a peculiar issue with my PyTorch model where the presence of an initialized but unused FeedForward Network (FFN) affects the model's accuracy. Specifically, when the FFN is initialized in my CRS_A class but not used in the forward pass, my model's accuracy is higher compared to when I completely remove (or comment out) the FFN initialization. The FFN is defined as follows in my model's constructor:

```
class CRS_A(nn.Module):
    def __init__(self, modal_x, modal_y, hid_dim=128, d_ff=512, dropout_rate=0.1):
        super(CRS_A, self).__init__()
        self.cross_attention = CrossAttention(modal_y, modal_x, hid_dim)
        self.ffn = nn.Sequential(
            nn.Conv1d(modal_x, d_ff, kernel_size=1),
            nn.GELU(),
            nn.Dropout(dropout_rate),
            nn.Conv1d(d_ff, 128, kernel_size=1),
            nn.Dropout(dropout_rate),
        )
        self.norm = nn.LayerNorm(modal_x)
        self.linear1 = nn.Conv1d(1024, 512, kernel_size=1)
        self.linear2 = nn.Conv1d(512, 300, kernel_size=1)
        self.dropout1 = nn.Dropout(0.1)
        self.dropout2 = nn.Dropout(0.1)

    def forward(self, x, y, adj):
        x = x + self.cross_attention(y, x, adj)       # torch.Size([5, 67, 1024])
        x = self.norm(x).permute(0, 2, 1)
        x = self.dropout1(F.gelu(self.linear1(x)))    # torch.Size([5, 512, 67])
        x_e = self.dropout2(F.gelu(self.linear2(x)))  # torch.Size([5, 300, 67])
        return x_e, x
```

As you can see, `self.ffn` is not used in the forward pass. Despite this, removing or commenting out the FFN's initialization leads to a noticeable drop in accuracy. Could this be due to some form of implicit regularization, or is there another explanation for this behavior? Has anyone encountered a similar situation, and how did you address it? Any insights or explanations would be greatly appreciated.
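One plausible explanation (an assumption here, since it depends on how the training script seeds things) is not implicit regularization but RNG state: constructing `self.ffn` draws random numbers for its weight initialization, which shifts every random draw that follows, such as the initialization of later layers, dropout masks, and data shuffling. Removing it therefore trains a differently-initialized model even though the FFN is never called. The mechanism can be sketched with the stdlib `random` module standing in for torch's global generator:

```python
import random

def init_weights(n):
    # stands in for a layer's weight initialization
    return [random.random() for _ in range(n)]

# Run 1: no unused layer is constructed
random.seed(0)
layer_a = init_weights(3)

# Run 2: an "unused" layer is constructed first, consuming RNG draws
random.seed(0)
unused_ffn = init_weights(4)  # never used afterwards
layer_b = init_weights(3)

# The logically identical layer now starts from different weights:
print(layer_a == layer_b)  # False
```

If this is the cause, averaging accuracy over several seeds (or re-seeding immediately before the remaining submodules are constructed) should make the gap disappear.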
Influence of Unused FFN on Model Accuracy in PyTorch
|python|deep-learning|pytorch|neural-network|
If you want a single plot with two y-axes, it's possible with ggplot2 but requires a bit of fiddling.

1. You can add a second y-axis with a magnitude scaled relative to the first one using

```
+ scale_y_continuous(
    sec.axis = sec_axis(~./6000, name = "Belysning")
  )
```

Here, I'm taking the first axis to go up to 6000. I would recommend manually setting the first y-axis limit so the two are 100% congruent.

2. Now you add the points/lines for "Belysning" to the plot. Note that they're plotted relative to y-axis 1, so you need to rescale them.

```
+ geom_line(aes(y = Belysning * 6000), color = "gold2")
```

3. Combined, it should look something like this (I haven't been able to test it due to no data sample):

```
plot_data <- cbind(complete_july_data, complete_august_data)
plot_data <- full_join(result_df, plot_data, join_by(Dato == DATE))

plot <- ggplot(plot_data, aes(x = Dato)) +
  geom_bar(aes(y = Count, fill = MANUAL.ID.), stat = "identity", width = 0.7, position = "dodge") +
  geom_line(aes(y = Belysning * 6000), color = "gold2") +
  geom_point(aes(y = Belysning * 6000), color = "goldenrod", size = 2) +
  labs(title = "Kumulative kurver for juli og august ved Borupgård 2018",
       x = "Dato",
       fill = "Art") +
  scale_y_continuous(
    limits = c(0, 6000),
    name = "Kumulative observationer",
    sec.axis = sec_axis(~./6000, name = "Belysning")
  ) +
  scale_x_date(limits = as.Date(c("2018-07-26", "2018-08-29")),
               date_breaks = "5 day", date_labels = "%Y-%m-%d") +
  theme_bw() +
  theme(axis.text.x = element_text(angle = 45, hjust = 1),
        axis.title.y.right = element_text(color = "gold2"))
```

(Note that `theme_bw()` has to come before `theme(...)`, otherwise it resets your custom theme settings.) I tried simplifying by merging all three datasets into one. If that causes issues, just add them in singly like you were doing.
unable to retrieve record from MySQL using UUID with springboot jpa
|java|mysql|spring|jpa|jdbc|
Navigate to the IAM section, click on 'Access Reports', then select 'Credentials report' and click 'Download credentials report'. The report will give you the root user creation date.
This is because you misunderstand the usage of `scanf`. This function considers a string to be any contiguous collection of non-whitespace characters, ignores preceding whitespace, and *stops reading at whitespace*. So you can get three reads from one line with three words.

Instead of scanning for a string, I would use something like `fgets`, which allows you to accept a full line, and then I would process that. For example:

```
#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>
#include <string.h>

#define LINE_BUF 100

void handleMenu(char input[LINE_BUF]) {
    for (int i = 0; input[i]; i++) {
        input[i] = toupper((unsigned char)input[i]);
    }
    if (strcmp(input, "LOAD") == 0) {
        printf("\nLOAD introduced\n");
    } else {
        printf("\nError\n");
    }
}

int main() {
    char input[LINE_BUF];
    do {
        printf("\n%s\n", "------ Command Menu ------");
        printf("%-10s %s\n", " ", "LOAD");
        printf("%-10s %s\n", " ", "CLEAR");
        printf("%-10s %s\n", " ", "LIST");
        printf("%-10s %s\n", " ", "FOUNDP");
        printf("%-10s %s\n", " ", "QUIT");
        printf("--------------------------\n");
        printf("\nChoose an option: ");
        if (fgets(input, LINE_BUF, stdin) == NULL) break;
        input[strcspn(input, "\n")] = 0;
        // the line above does the same as this one:
        // for (int i = 0; input[i]; ++i) if (input[i] == '\n') { input[i] = '\0'; break; }
        handleMenu(input);
    } while (strcmp(input, "QUIT") != 0 && strcmp(input, "quit") != 0);
    return 0;
}
```

`fgets` is also safer, as you can indicate the buffer size, and you won't write over the end.

Update: thanks to @chux-ReinstateMonica for the comments below. Here's an updated, working code snippet.
```
import pymqi

queue_manager = 'QM1'
channel = 'DEV.APP.SVRCONN'
host = '127.0.0.1'
port = '1414'
queue_name = 'TEST.1'
message = 'Hello from Python 2!'
conn_info = '%s(%s)' % (host, port)
```

First I did the PUT operation like the code below, and it worked fine. I put more than 2000 messages.

```
qmgr = pymqi.connect(queue_manager, channel, conn_info)
queue = pymqi.Queue(qmgr, queue_name)
queue.put(message)
queue.close()
qmgr.disconnect()
```

I tried the code below to get the first message, and it also worked fine: it gives me the first message. I don't know if this is the correct approach to get the first message.

```
queue = pymqi.Queue(qmgr, queue_name, pymqi.CMQC.MQOO_BROWSE)
gmo = pymqi.GMO()
gmo.Options = pymqi.CMQC.MQGMO_WAIT | pymqi.CMQC.MQGMO_BROWSE_NEXT | pymqi.CMQC.MQGMO_NO_PROPERTIES
gmo.WaitInterval = 5000
md = pymqi.MD()
md.Format = pymqi.CMQC.MQFMT_STRING
message = queue.get(None, md, gmo)
print(message)         # b'Hello from Python 2!'
print(md.MsgId.hex())  # '414d51204e41544d333030202020202817d9ff650a742f22'
queue.close()
qmgr.disconnect()
```

But when I tried the code below, it gives me the correct message. However, I think this is not the best solution, because if I have 2000 messages it will iterate until the MsgId matches.

```
queue = pymqi.Queue(qmgr, queue_name, pymqi.CMQC.MQOO_BROWSE)
gmo = pymqi.GMO()
gmo.Options = pymqi.CMQC.MQGMO_WAIT | pymqi.CMQC.MQGMO_BROWSE_NEXT | pymqi.CMQC.MQGMO_NO_PROPERTIES
gmo.WaitInterval = 5000

user_MsgId = '414d51204e41544d333030202020202817d9ff650a742f22'
overall_message = []
keep_running = True

while keep_running:
    md = pymqi.MD()
    md.Format = pymqi.CMQC.MQFMT_STRING
    message = queue.get(None, md, gmo)
    print(md.MsgId.hex())
    if md.MsgId.hex() != user_MsgId:
        continue
    else:
        overall_message.append(message)
        break

queue.close()
qmgr.disconnect()
```

The above solution gives me the correct message, but performance degrades as the number of messages grows. Can anyone please suggest a better approach to browse a message by MsgId or CorrelId?
As suggested, I tried the code below (edited here), but it is still not working:

```
queue = pymqi.Queue(qmgr, queue_name, pymqi.CMQC.MQOO_BROWSE)
gmo = pymqi.GMO()
gmo.MatchOptions = pymqi.CMQC.MQMO_MATCH_MSG_ID
gmo.Version = pymqi.CMQC.MQMO_VERSION_2
gmo.Options = pymqi.CMQC.MQGMO_WAIT | pymqi.CMQC.MQGMO_BROWSE_NEXT | pymqi.CMQC.MQGMO_NO_PROPERTIES
gmo.WaitInterval = 5000

md = pymqi.MD()
md.Version = pymqi.CMQC.MQMO_VERSION_2
md.Format = pymqi.CMQC.MQFMT_STRING
md.MsgId = b'414d51204e41544d333030202020202817d9ff650a742f22'  # this is the same output which I get from above

message = queue.get(None, md, gmo)
print(message)
print(md.MsgId.hex())
queue.close()
qmgr.disconnect()
```
Android compose animations crashing in release builds
|android|android-jetpack-compose|android-r8|
You can create an extension on `DateTime` to be able to use it directly on objects:

```dart
extension DateTimeWeekday on DateTime {
  DateTime setWeekday(final int weekday) {
    final int current = this.weekday;
    // If the desired weekday has already passed this week,
    // target the same weekday of the next week instead.
    final int desired = current <= weekday ? weekday : weekday + 7;
    final int diff = desired - current;
    return this.add(Duration(days: diff));
  }
}
```
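For comparison, the same logic translated to Python (my translation, not part of the Dart answer). Python's `date.isoweekday()` uses the same Monday=1 … Sunday=7 numbering as Dart's `DateTime.weekday`:

```python
from datetime import date, timedelta

def set_weekday(d: date, weekday: int) -> date:
    # isoweekday(): Monday=1 .. Sunday=7, same numbering as Dart's DateTime.weekday
    current = d.isoweekday()
    # Stay in this week if the target weekday hasn't passed yet,
    # otherwise jump to the same weekday of next week.
    desired = weekday if current <= weekday else weekday + 7
    return d + timedelta(days=desired - current)

print(set_weekday(date(2024, 3, 29), 1))  # 2024-03-29 is a Friday -> next Monday, 2024-04-01
```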
I've been trying to solve this for a while now, but I keep getting this error. I am using VS Code and Python 3.11. I tried IDLE and Spyder as well, but I still get the error. I also uninstalled all the libraries and installed them again, but it didn't work. Does anyone know how to solve it?

Here's the code:

```
import os
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import gym
import matplotlib.pyplot as plt

# Hyperparameters
H = 200                    # number of hidden layer neurons
learning_rate = 1e-4
gamma = 0.99               # discount factor for reward
decay_rate = 0.99          # decay factor for RMSProp leaky sum of grad^2
batch_size = 10            # amount of episodes to do an update
num_train_episodes = 1000

# Define the policy network
class PolicyNetwork(nn.Module):
    def __init__(self):
        super(PolicyNetwork, self).__init__()
        self.fc1 = nn.Linear(80 * 80, H)
        self.fc2 = nn.Linear(H, 1)
        self.relu = nn.ReLU()
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        x = self.relu(self.fc1(x))
        return self.sigmoid(self.fc2(x))

def prepro(I):
    """ Preprocess 210x160x3 uint8 frame into 6400 (80x80) 1D float vector """
    I = I[35:195]        # Crop
    I = I[::2, ::2, 0]   # Downsample by factor of 2
    I[I == 144] = 0      # Erase background (background type 1)
    I[I == 109] = 0      # Erase background (background type 2)
    I[I != 0] = 1        # Everything else (paddles, ball) just set to 1
    return I.astype(np.float64).ravel()

def discount_rewards(r):
    """ Take 1D float array of rewards and compute discounted reward """
    discounted_r = np.zeros_like(r)
    running_add = 0
    for t in reversed(range(0, r.size)):
        if r[t] != 0:
            running_add = 0  # Reset the sum, since this was a game boundary (pong specific!)
        running_add = running_add * gamma + r[t]
        discounted_r[t] = running_add
    return discounted_r

"""
# Print model's state_dict
print("Model's state_dict:")
for param_tensor in policy_network.state_dict():
    print(param_tensor, "\t", policy_network.state_dict()[param_tensor].size())
"""

def train(policy_network, optimizer, env, batch_size=10, gamma=0.99, resume=False):
    # Initialize environment
    observation = env.reset()
    prev_x = None  # Used in computing the difference frame
    xs, hs, dlogps, drs = [], [], [], []
    running_reward = None
    reward_sum = 0
    episode_number = 0
    episode_rewards = []

    resume = False
    if resume:
        policy_network.load_state_dict(torch.load('policy_checkpoint.pth'))
        policy_network.eval()

    while True:
        # Preprocess the observation, set input to network to be difference image
        cur_x = prepro(observation)
        x = torch.tensor(cur_x - prev_x if prev_x is not None else np.zeros_like(cur_x),
                         dtype=torch.float32)
        prev_x = cur_x

        # Forward pass the policy network and sample an action from the returned probability
        aprob = policy_network(x)
        action = 2 if np.random.uniform() < aprob.item() else 3  # Roll the dice!

        # Record various intermediates (needed later for backprop)
        xs.append(x)  # Observation
        hs.append(x)  # Hidden state
        y = torch.tensor([1 if action == 2 else 0], dtype=torch.float32)  # A "fake label"
        dlogps.append(y - aprob)  # Grad that encourages the action that was taken to be taken

        # Step the environment and get new measurements
        observation, reward, done, info = env.step(action)
        reward_sum += reward
        drs.append(reward)  # Record reward

        if done:  # An episode finished
            episode_number += 1

            # Stack together all inputs, hidden states, action gradients, and rewards for this episode
            epx = torch.stack(xs)
            epdlogp = torch.stack(dlogps)
            epr = torch.tensor(discount_rewards(np.vstack(drs)), dtype=torch.float32)
            xs, hs, dlogps, drs = [], [], [], []  # Reset array memory

            # Discount and normalize rewards
            discounted_epr = (epr - epr.mean()) / (epr.std() + 1e-8)
            epdlogp *= discounted_epr  # Modulate the gradient with advantage

            optimizer.zero_grad()
            # Backward pass
            loss = torch.sum(-epdlogp)
            loss.backward()
            optimizer.step()

            # Save model
            if episode_number % batch_size == 0:
                torch.save(policy_network.state_dict(), 'policy_checkpoint.pth')
                print(policy_network.state_dict())

            episode_rewards.append(reward_sum)

            # Boring book-keeping
            running_reward = reward_sum if running_reward is None else running_reward * 0.99 + reward_sum * 0.01
            print('Resetting env. Episode reward total was %.2f. Running mean: %.2f' % (reward_sum, running_reward))
            reward_sum = 0
            observation = env.reset()  # Reset environment
            prev_x = None

        if reward != 0:  # Pong has either +1 or -1 reward exactly when game ends
            print('Ep {}: Game finished, reward: {}'.format(episode_number, reward)
                  + ('' if reward == -1 else ' !!!!!!!!'))

        if episode_number >= 1000:  # stopping criteria
            break

    return episode_rewards

def plot_results(episode_rewards):
    plt.plot(episode_rewards)
    plt.title('Episode rewards over time')
    plt.xlabel('Episode')
    plt.ylabel('Total reward')
    plt.show()

# Create environment
render = False
env = gym.make("Pong-v0") if not render else gym.make("Pong-v0", render_mode='human')

# Initialize policy network and optimizer
policy_network = PolicyNetwork()
optimizer = optim.RMSprop(policy_network.parameters(), lr=learning_rate, alpha=decay_rate)

# Train the policy network
train_rewards = train(policy_network, optimizer, env, batch_size, gamma, num_train_episodes)

# Plotting
plot_results(train_rewards)
```

Here's the error:

```
FileNotFoundError                         Traceback (most recent call last)
Cell In[2], line 157
    155 # Create environment
    156 render = False
--> 157 env = gym.make("Pong-v0") if not render else gym.make("Pong-v0", render_mode='human')
    159 # Initialize policy network and optimizer
    160 policy_network = PolicyNetwork()
```
I removed this extension from VS Code, and that solved my issue:

[![the extension I removed][1]][1]

[1]: https://i.stack.imgur.com/9mcig.png

If this helped you solve the issue, vote up. :)
|svg|icons|moodle|moodle-theme|
Hi guys. As usual, I'm taking my first steps in data science, and I'm trying to build an Extra Trees Regressor model from a data frame containing lists.

The data frame size is 47724 rows × 27 columns, and each cell contains a list with 4 numbers, for example `[7.69, 7.68, 7.66, 7.74]`. This is the X factor. Y (the target) is a single float number per row.

After passing the data to the regressor model, I get this error: *setting an array element with a sequence*. How can I fix it? Many thanks for every idea.

I have tried the advice from this question: https://stackoverflow.com/questions/56980042/sklearn-valueerror-setting-an-array-element-with-a-sequence and tried to use `np.array`, but that gives another error: *'DataFrame' object has no attribute 'tolist'*. The difference is that there the data starts out as a list, whereas I have a data frame, and I don't see the right way to use it.
How to pass an object DataFrame to sklearn.ensemble methods
|python|pandas|dataframe|numpy|scikit-learn|
> Is it even possible to bootstrap a multilevel sem model in R? Yes, but what kind of resampling procedure do you think is appropriate? There are some proposals you can search the literature for (e.g., resample clusters as a whole; or resample clusters followed by resampling observations within each cluster; or residual bootstrapping of both Level-1 and Level-2 residuals / random intercepts). None of these "nonparametric" bootstrapping methods are implemented in `lavaan` because it is not clear whether any of them is a good default option. The default is to provide delta-method *SE*s and CIs in `summary(..., ci=TRUE)` or `parameterEstimates()`. If your sample size is not big enough for the delta method to provide CIs with nominal coverage (and thus nominal Type I error rates), then your sample size certainly also would not be big enough to trust your point estimates from an ML-SEM. Rather than a "nonparametric" bootstrap, the most straightforward solution would be to use a parametric bootstrap, which is called a Monte Carlo CI in the SEM literature: Preacher, K. J., & Selig, J. P. (2012). Advantages of Monte Carlo confidence intervals for indirect effects. *Communication Methods and Measures, 6*(2), 77–98. https://doi.org/10.1080/19312458.2012.679848 Tofighi, D., & MacKinnon, D. P. (2016). Monte Carlo confidence intervals for complex functions of indirect effects. *Structural Equation Modeling, 23*(2), 194–205. https://doi.org/10.1080/10705511.2015.1057284 This is implemented in the `semTools` package's `monteCarloCI()` function, which provides CIs automatically for any user-defined parameters (such as indirect effects) in a `lavaan` or `lavaan.mi` object.
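The Monte Carlo (parametric bootstrap) idea itself is simple enough to sketch outside of R: draw parameter vectors from a normal distribution centered on the point estimates with their estimated sampling covariance, compute the derived quantity (e.g., an indirect effect a×b) in each draw, and take percentiles. A minimal numpy illustration — all estimates and covariances below are made-up placeholders, not output from any real model:

```python
import numpy as np

# Hypothetical point estimates and asymptotic covariance for paths a and b
# (in practice these come from your fitted ML-SEM output).
est = np.array([0.40, 0.25])           # a-hat, b-hat
acov = np.array([[0.010, 0.001],
                 [0.001, 0.008]])      # sampling covariance of (a, b)

rng = np.random.default_rng(42)
draws = rng.multivariate_normal(est, acov, size=20000)
indirect = draws[:, 0] * draws[:, 1]   # a*b in each simulated draw

ci = np.percentile(indirect, [2.5, 97.5])
print("Monte Carlo 95% CI for a*b:", ci)
```

This is the same computation `monteCarloCI()` automates for user-defined parameters, using the model's actual estimates and their covariance matrix.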
|python|pytorch|
I'm getting this primary key constraint error:

```
Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed: org.springframework.dao.DataIntegrityViolationException: could not execute statement [Unique index or primary key violation: "PUBLIC.CONSTRAINT_INDEX_3 ON PUBLIC.USER_TACO_ORDER_REF(TACO_ORDER_ID NULLS FIRST) VALUES ( /* 1 */ '52aca972-aeac-4305-9021-ad17ea1a0b07' )"; SQL statement: insert into user_taco_order_ref (user_id,taco_order_id) values (?,?) [23505-220]] [insert into user_taco_order_ref (user_id,taco_order_id) values (?,?)]; SQL [insert into user_taco_order_ref (user_id,taco_order_id) values (?,?)]; constraint ["PUBLIC.CONSTRAINT_INDEX_3 ON PUBLIC.USER_TACO_ORDER_REF(TACO_ORDER_ID NULLS FIRST) VALUES ( /* 1 */ '52aca972-aeac-4305-9021-ad17ea1a0b07' )"
```

I'm saving more than one TacoOrder object in one session. The first order works fine in the controller: it is saved in the database with no errors. After that, I add it to the User's tacoOrders list collection, and since the mapping is Cascade.ALL, it's saved automatically. The database is reset every time, so at that point there is just one object in the list. For the second order, I reset the current sessional TacoOrder object; it goes through the same controller and is added to the User's list. When Hibernate persists the User, the list is persisted too, but it now has two TacoOrders, one of which is already persisted, and saving that one gives me the error.
These are the Controller and entities ``` @Transactional @PostMapping("/current") public String processOrder(@ModelAttribute("order") @Valid TacoOrder order, Errors errors, @AuthenticationPrincipal User user) { if(errors.hasErrors()){ return "orderForm"; } order.setUser(user); order.setPlacedAt(new Date()); user.addTacoOrder(order); log.info("4 {} {}", order, user); //LOG TacoOrder tacoOrder = orderRepository.save(order); userService.updateUserFromOrder(user, tacoOrder); userRepository.save(user); log.info("Order submitted: {}", tacoOrder); //LOG return "redirect:/orders"; } ``` ``` @OneToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER) @JoinTable( name = "User_Taco_Order_Ref", joinColumns = @JoinColumn(name = "user_id"), inverseJoinColumns = @JoinColumn(name = "tacoOrder_id")) private List<TacoOrder> tacoOrders = new ArrayList<>(); ``` These are the logs:- First Order: ``` 4 TacoOrder(id=52aca972-aeac-4305-9021-ad17ea1a0b07, placedAt=Fri Mar 29 04:07:34 IST 2024, deliveryName=a, deliveryStreet=a, deliveryCity=a, deliveryState=a, deliveryZip=a, ccNumber=4417123456789113, ccExpiration=11/2024, ccCVV=111, tacos=[Taco(id=1, createdAt=Fri Mar 29 04:07:24 IST 2024, name=Classic Beef Taco, ingredients=[Ingredient(id=FLTO, name=Flour Tortilla, type=WRAP), Ingredient(id=GRBF, name=Ground Beef, type=PROTEIN), Ingredient(id=CHED, name=Cheddar, type=CHEESE), Ingredient(id=LETC, name=Lettuce, type=VEGGIES), Ingredient(id=SLSA, name=Salsa, type=SAUCE)])]) ``` ``` User(id=1, username=admin, password=$2a$10$zpjL0V253nLpa1cH0j5sNOWKrfyikWjMZcXzc1TQ5HtttSgQgGi7O, roles=[ROLE_USER, ROLE_ADMIN], fullName=null, street=null, city=null, state=null, zip=null, phoneNumber=null, tacoOrders=[TacoOrder(id=52aca972-aeac-4305-9021-ad17ea1a0b07, placedAt=Fri Mar 29 04:07:34 IST 2024, (rest is same) ``` Second Order: ``` 4 TacoOrder(id=37808e6b-ed1d-4f38-8652-8c672b88d5e5, placedAt=Fri Mar 29 04:07:52 IST 2024, deliveryName=1, deliveryStreet=1, deliveryCity=1, deliveryState=1, 
deliveryZip=1, ccNumber=4417123456789113, ccExpiration=11/2024, ccCVV=111, tacos=[Taco(id=2, createdAt=Fri Mar 29 04:07:39 IST 2024, name=Spicy Carnitas Delight, ingredients=[Ingredient(id=COTO, name=Corn Tortilla, type=WRAP), Ingredient(id=CARN, name=Carnitas, type=PROTEIN), Ingredient(id=JALA, name=Jalapeños, type=VEGGIES), Ingredient(id=LETC, name=Lettuce, type=VEGGIES), Ingredient(id=SLSA, name=Salsa, type=SAUCE)])])
```

```
User(id=1, username=admin, password=$2a$10$zpjL0V253nLpa1cH0j5sNOWKrfyikWjMZcXzc1TQ5HtttSgQgGi7O, roles=[ROLE_USER, ROLE_ADMIN], fullName=null, street=a, city=a, state=a, zip=a, phoneNumber=null, tacoOrders=[TacoOrder(id=52aca972-aeac-4305-9021-ad17ea1a0b07, placedAt=Fri Mar 29 04:07:34 IST 2024, , TacoOrder(id=37808e6b-ed1d-4f38-8652-8c672b88d5e5, placedAt=Fri Mar 29 04:07:52 IST 2024,
```

As per the logs, the first order's @Id is `52aca972-aeac-4305-9021-ad17ea1a0b07` and the second order's @Id is `37808e6b-ed1d-4f38-8652-8c672b88d5e5`.

Now for the second order, the User has both the first order (which was already persisted in the User_Taco_Order_Ref table) and the second, new order, which is yet to be added to the User_Taco_Order_Ref table. Hibernate is saving the whole collection again and giving me this error while re-saving the already-persisted instances of taco orders. How can I fix it?
null
Simply using `apt` in place of `apt-get` worked for me https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository
I have created an API in .NET 6 C# with Docker support, and it works perfectly when I run it from Visual Studio. When I deploy the Docker build image to the Docker Desktop app, it gives the following error. I also tried running the project with .NET 7, used the latest packages, and checked https://github.com/Azure/azure-sdk-for-net/issues/28120

```
protected static async Task<string> GetSpnSecretAsync(string secretKey)
{
    var keyVaultName = Environment.GetEnvironmentVariable("KEYVAULT");
    var keyVaultUrl = $"https://{keyVaultName}.vault.azure.net";

    var credential = new DefaultAzureCredential(includeInteractiveCredentials: true);
    var client = new SecretClient(vaultUri: new Uri(keyVaultUrl), credential: credential);
    var secret = await client.GetSecretAsync(secretKey);
    var secretValue = secret.Value.Value;
    return secretValue;
}
```

> <sup>2024-03-29 17:50:53 info: Microsoft.Hosting.Lifetime[14] > 2024-03-29 17:50:53 Now listening on: http://[::]:8080 > 2024-03-29 17:50:53 info: Microsoft.Hosting.Lifetime[0] > 2024-03-29 17:50:53 Application started. Press Ctrl+C to shut down. > 2024-03-29 17:50:53 info: Microsoft.Hosting.Lifetime[0] > 2024-03-29 17:50:53 Hosting environment: Development > 2024-03-29 17:50:53 info: Microsoft.Hosting.Lifetime[0] > 2024-03-29 17:50:53 Content root path: /app/ > 2024-03-29 17:50:59 warn: Microsoft.AspNetCore.HttpsPolicy.HttpsRedirectionMiddleware[3] > 2024-03-29 17:50:59 Failed to determine the https port for redirect.
> 2024-03-29 17:50:59 info: WebApplication Controllers.HealthController[0] > 2024-03-29 17:50:59 Getting token Async > 2024-03-29 17:50:59 info: WebApplication Controllers.HealthController[0] > 2024-03-29 17:50:59 Use Key Vault True > 2024-03-29 17:50:59 info: WebApplication Controllers.HealthController[0] > 2024-03-29 17:50:59 Keyvault : : XXXXX > 2024-03-29 17:50:59 info: WebApplication Controllers.HealthController[0] > 2024-03-29 17:50:59 Keyvault URL : : https://XXXXX.vault.azure.net > 2024-03-29 17:50:59 info: WebApplication Controllers.HealthController[0] > 2024-03-29 17:50:59 Creating Secret client > 2024-03-29 17:50:59 info: WebApplication Controllers.HealthController[0] > 2024-03-29 17:50:59 Getting Secret > 2024-03-29 17:51:02 fail: WebApplication Controllers.HealthController[0] > 2024-03-29 17:51:02 Health Controller : InteractiveBrowserCredential authentication failed: Persistence check failed. Inspect inner exception for details > 2024-03-29 17:51:02 Azure.Identity.AuthenticationFailedException: InteractiveBrowserCredential authentication failed: Persistence check failed. Inspect inner exception for details > 2024-03-29 17:51:02 ---> Microsoft.Identity.Client.Extensions.Msal.MsalCachePersistenceException: Persistence check failed. Inspect inner exception for details > 2024-03-29 17:51:02 ---> System.DllNotFoundException: Unable to load shared library 'libsecret-1.so.0' or one of its dependencies. 
In order to help diagnose loading problems, consider setting the LD_DEBUG environment variable: liblibsecret-1.so.0: cannot open shared object file: No such file or directory > 2024-03-29 17:51:02 at Microsoft.Identity.Client.Extensions.Msal.Libsecret.secret_schema_new(String name, Int32 flags, String attribute1, Int32 attribute1Type, String attribute2, Int32 attribute2Type, IntPtr end) > 2024-03-29 17:51:02 at Microsoft.Identity.Client.Extensions.Msal.LinuxKeyringAccessor.GetLibsecretSchema() > 2024-03-29 17:51:02 at Microsoft.Identity.Client.Extensions.Msal.LinuxKeyringAccessor.Write(Byte[] data) > 2024-03-29 17:51:02 at Microsoft.Identity.Client.Extensions.Msal.Storage.VerifyPersistence() > 2024-03-29 17:51:02 --- End of inner exception stack trace --- > 2024-03-29 17:51:02 at Microsoft.Identity.Client.Extensions.Msal.Storage.VerifyPersistence() > 2024-03-29 17:51:02 at Microsoft.Identity.Client.Extensions.Msal.MsalCacheHelper.VerifyPersistence() > 2024-03-29 17:51:02 at Azure.Identity.MsalCacheHelperWrapper.VerifyPersistence() > 2024-03-29 17:51:02 at Azure.Identity.TokenCache.GetCacheHelperAsync(Boolean async, CancellationToken cancellationToken) > 2024-03-29 17:51:02 at Azure.Identity.TokenCache.GetCacheHelperAsync(Boolean async, CancellationToken cancellationToken) > 2024-03-29 17:51:02 at Azure.Identity.TokenCache.RegisterCache(Boolean async, ITokenCache tokenCache, CancellationToken cancellationToken) > 2024-03-29 17:51:02 at Azure.Identity.MsalClientBase GetClientAsync(Boolean enableCae, Boolean async, CancellationToken cancellationToken) > 2024-03-29 17:51:02 at Azure.Identity.MsalPublicClient.AcquireTokenInteractiveCoreAsync(String[] scopes, String claims, Prompt prompt, String loginHint, String tenantId, Boolean enableCae, BrowserCustomizationOptions browserOptions, Boolean async, CancellationToken cancellationToken) > 2024-03-29 17:51:02 at Azure.Identity.MsalPublicClient.AcquireTokenInteractiveAsync(String[] scopes, String claims, Prompt prompt, 
String loginHint, String tenantId, Boolean enableCae, BrowserCustomizationOptions browserOptions, Boolean async, CancellationToken cancellationToken) > 2024-03-29 17:51:02 at Azure.Identity.InteractiveBrowserCredential.GetTokenViaBrowserLoginAsync(TokenRequestContext context, Boolean async, CancellationToken cancellationToken) > 2024-03-29 17:51:02 at Azure.Identity.InteractiveBrowserCredential.GetTokenImplAsync(Boolean async, TokenRequestContext requestContext, CancellationToken cancellationToken) > 2024-03-29 17:51:02 --- End of inner exception stack trace --- > 2024-03-29 17:51:02 at Azure.Identity.CredentialDiagnosticScope.FailWrapAndThrow(Exception ex, String additionalMessage, Boolean isCredentialUnavailable) > 2024-03-29 17:51:02 at Azure.Identity.InteractiveBrowserCredential.GetTokenImplAsync(Boolean async, TokenRequestContext requestContext, CancellationToken cancellationToken) > 2024-03-29 17:51:02 at Azure.Identity.InteractiveBrowserCredential.GetTokenAsync(TokenRequestContext requestContext, CancellationToken cancellationToken) > 2024-03-29 17:51:02 at Azure.Identity.DefaultAzureCredential.GetTokenFromSourcesAsync(TokenCredential[] sources, TokenRequestContext requestContext, Boolean async, CancellationToken cancellationToken) > 2024-03-29 17:51:02 at Azure.Identity.DefaultAzureCredential.GetTokenImplAsync(Boolean async, TokenRequestContext requestContext, CancellationToken cancellationToken) > 2024-03-29 17:51:02 at Azure.Identity.CredentialDiagnosticScope.FailWrapAndThrow(Exception ex, String additionalMessage, Boolean isCredentialUnavailable) > 2024-03-29 17:51:02 at Azure.Identity.DefaultAzureCredential.GetTokenImplAsync(Boolean async, TokenRequestContext requestContext, CancellationToken cancellationToken) > 2024-03-29 17:51:02 at Azure.Identity.DefaultAzureCredential.GetTokenAsync(TokenRequestContext requestContext, CancellationToken cancellationToken) > 2024-03-29 17:51:02 at 
Azure.Core.Pipeline.BearerTokenAuthenticationPolicy.AccessTokenCache.GetHeaderValueFromCredentialAsync(TokenRequestContext context, Boolean async, CancellationToken cancellationToken) > 2024-03-29 17:51:02 at Azure.Core.Pipeline.BearerTokenAuthenticationPolicy.AccessTokenCache.GetHeaderValueAsync(HttpMessage message, TokenRequestContext context, Boolean async) > 2024-03-29 17:51:02 at Azure.Core.Pipeline.BearerTokenAuthenticationPolicy.AccessTokenCache.GetHeaderValueAsync(HttpMessage message, TokenRequestContext context, Boolean async) > 2024-03-29 17:51:02 at Azure.Core.Pipeline.BearerTokenAuthenticationPolicy.AuthenticateAndAuthorizeRequestAsync(HttpMessage message, TokenRequestContext context) > 2024-03-29 17:51:02 at Azure.Security.KeyVault.ChallengeBasedAuthenticationPolicy.AuthorizeRequestOnChallengeAsyncInternal(HttpMessage message, Boolean async) > 2024-03-29 17:51:02 at Azure.Core.Pipeline.BearerTokenAuthenticationPolicy.ProcessAsync(HttpMessage message, ReadOnlyMemory 1 pipeline, Boolean async) > 2024-03-29 17:51:02 at Azure.Core.Pipeline.RedirectPolicy.ProcessAsync(HttpMessage message, ReadOnlyMemory 1 pipeline, Boolean async) > 2024-03-29 17:51:02 at Azure.Core.Pipeline.RetryPolicy.ProcessAsync(HttpMessage message, ReadOnlyMemory 1 pipeline, Boolean async) > 2024-03-29 17:51:02 at Azure.Core.Pipeline.RetryPolicy.ProcessAsync(HttpMessage message, ReadOnlyMemory 1 pipeline, Boolean async) > 2024-03-29 17:51:02 at Azure.Core.Pipeline.HttpPipeline.SendRequestAsync(Request request, CancellationToken cancellationToken) > 2024-03-29 17:51:02 at Azure.Security.KeyVault.KeyVaultPipeline.SendRequestAsync(Request request, CancellationToken cancellationToken) > 2024-03-29 17:51:02 at Azure.Security.KeyVault.KeyVaultPipeline.SendRequestAsync[TResult](RequestMethod method, Func 1 resultFactory, CancellationToken cancellationToken, String[] path) > 2024-03-29 17:51:02 at Azure.Security.KeyVault.Secrets.SecretClient.GetSecretAsync(String name, String version, 
CancellationToken cancellationToken) > 2024-03-29 17:51:02 at WebApplication Controllers.AdministratorControllerBase.GetSpnSecretAsync(String secretKey) in /src/WebApplication1/Controllers/AdministratorControllerBase.cs:line 43 > 2024-03-29 17:51:02 at WebApplication Controllers.AdministratorControllerBase.GetTokenAsync() in /src/WebApplication1/Controllers/AdministratorControllerBase.cs:line 59 > 2024-03-29 17:51:02 at WebApplication Controllers.HealthController.Get() in /src/WebApplication1/Controllers/HealthController.cs:line 30 > 2024-03-29 17:51:10 info: WebApplication1 Controllers.WeatherForecastController[0] > 2024-03-29 17:51:10 Use Key Vault True > 2024-03-29 17:51:10 info: WebApplication Controllers.WeatherForecastController[0] > 2024-03-29 17:51:10 Keyvault : : XXXXX > 2024-03-29 17:51:10 info: WebApplication Controllers.WeatherForecastController[0] > 2024-03-29 17:51:10 Keyvault URL : : https://XXXX.vault.azure.net > 2024-03-29 17:51:10 info: WebApplication Controllers.WeatherForecastController[0] > 2024-03-29 17:51:10 Creating Secret client > 2024-03-29 17:51:10 info: WebApplication Controllers.WeatherForecastController[0] > 2024-03-29 17:51:10 Getting Secret > 2024-03-29 19:13:04 info: Microsoft.Hosting.Lifetime[0] > 2024-03-29 19:13:04 Application is shutting down... > </sup> >
Unable to connect to Azure Key Vault when deploying a .NET 6 C# API Docker image to Docker Desktop
|asp.net-web-api|azure-keyvault|docker-desktop|azure-identity|
I am using Azure services in my application (App Service, App Insights), and my project is implemented in **.NET Core 7**. I am using the `Microsoft.ApplicationInsights.AspNetCore` library (version `2.22.0`).

My Application Insights configuration in appsettings.json looks like this:

```
"ApplicationInsights": {
  "ConnectionString": "InstrumentationKey=a91c943b-5268-42e1-80c3-dc03c5e9bf0c;IngestionEndpoint=https://easteurope.in.applicationinsights.azure.com/;LiveEndpoint=https://easteurope.livediagnostics.monitor.azure.com/"
}
```

I've registered the service like this:

```
builder.Services.AddApplicationInsightsTelemetry();
```

and my code is:

```
private readonly TelemetryClient telemetryClient;

public SaveMetricsUseCase(TelemetryClient telemetryClient)
{
    this.telemetryClient = telemetryClient;
}

...

private void SaveMetrics()
{
    telemetryClient.GetMetric("ProjectName.PromptTokens", "CategoryId", "Model", "UserId")
        .TrackValue(input.Prompt, input.CategoryId, input.ModelName, input.UserId);

    telemetryClient.GetMetric("ProjectName.CompletionTokens", "CategoryId", "Model", "UserId")
        .TrackValue(input.Completion, input.CategoryId, input.ModelName, input.UserId);
}
```

Whenever I deploy my changes to App Service, the custom metrics are saved properly, but after **one hour** the metrics stop being stored and I cannot see any new metrics in the App Insights portal (other logs and exceptions are still saved in App Insights without problems; only the metrics stop).

**Do you have any idea what the problem might be?**
Custom metrics stop being saved in App Insights after one hour
|c#|.net-core|azure-application-insights|azure-appservice|
I'm trying to encrypt a ZIP file using AES-256 GCM in C and decrypt it in Python. Here is the C code I'm using:

```c
NTSTATUS generateRandomBytes(BYTE *buffer, ULONG length)
{
    BCRYPT_ALG_HANDLE hProvider;
    NTSTATUS status = BCryptOpenAlgorithmProvider(&hProvider, BCRYPT_RNG_ALGORITHM, NULL, 0);
    if (!NT_SUCCESS(status)) {
        return status;
    }
    status = BCryptGenRandom(hProvider, buffer, length, 0);
    BCryptCloseAlgorithmProvider(hProvider, 0);
    return status;
}

NTSTATUS encrypt_AES_GCM(const BYTE *plainData, ULONG plainDataLength,
                         const BYTE *iv, ULONG ivLength,
                         const BYTE *key, ULONG keyLength,
                         BYTE *encryptedData, ULONG encryptedDataLength,
                         BYTE *authTag, ULONG authTagLength)
{
    NTSTATUS status = 0;
    DWORD bytesDone = 0;
    BCRYPT_ALG_HANDLE algHandle = 0;

    status = BCryptOpenAlgorithmProvider(&algHandle, BCRYPT_AES_ALGORITHM, NULL, 0);
    if (!NT_SUCCESS(status)) {
        return status;
    }

    status = BCryptSetProperty(algHandle, BCRYPT_CHAINING_MODE, (PBYTE)BCRYPT_CHAIN_MODE_GCM,
                               sizeof(BCRYPT_CHAIN_MODE_GCM), 0);
    if (!NT_SUCCESS(status)) {
        BCryptCloseAlgorithmProvider(algHandle, 0);
        return status;
    }

    BCRYPT_KEY_HANDLE keyHandle = 0;
    status = BCryptGenerateSymmetricKey(algHandle, &keyHandle, NULL, 0, (PUCHAR)key, keyLength, 0);
    if (!NT_SUCCESS(status)) {
        BCryptCloseAlgorithmProvider(algHandle, 0);
        return status;
    }

    BCRYPT_AUTHENTICATED_CIPHER_MODE_INFO authInfo;
    BCRYPT_INIT_AUTH_MODE_INFO(authInfo);
    authInfo.pbNonce = (PUCHAR)iv;
    authInfo.cbNonce = ivLength;
    authInfo.pbTag = authTag;
    authInfo.cbTag = authTagLength;

    status = BCryptEncrypt(keyHandle, (PUCHAR)plainData, plainDataLength, &authInfo,
                           NULL, 0, encryptedData, encryptedDataLength, &bytesDone, 0);
    if (!NT_SUCCESS(status)) {
        BCryptDestroyKey(keyHandle);
        BCryptCloseAlgorithmProvider(algHandle, 0);
        return status;
    }

    BCryptDestroyKey(keyHandle);
    BCryptCloseAlgorithmProvider(algHandle, 0);
    return status;
}

int GCM_Encrypt_File(char *FileToEncrypt, char *keys, char *OutputFile)
{
    int fd;
    long file_size;
    char *input;

    _sopen_s(&fd, FileToEncrypt, _O_RDONLY, _SH_DENYRW, _S_IREAD);
    file_size = _filelength(fd);
    input = (char *)malloc(file_size);
    size_t bytes_read = fread(input, 1, file_size, _fdopen(fd, "rb"));

    BYTE key[32];
    BYTE iv[12];
    BYTE KeyTagIV[60];
    BYTE *encrypted;
    ULONG encryptedSize = file_size;
    ULONG authTagLength = 16;
    BYTE authTag[16];

    // Generate key and iv
    generateRandomBytes(key, sizeof(key));
    generateRandomBytes(iv, sizeof(iv));

    // Concatenate key and IV
    memcpy(KeyTagIV, key, sizeof(key));
    memcpy(KeyTagIV + sizeof(key), iv, sizeof(iv));

    encrypted = (BYTE *)malloc(encryptedSize);

    // Encrypt
    encrypt_AES_GCM((BYTE *)input, file_size, iv, 12, key, 32,
                    encrypted, encryptedSize, authTag, authTagLength);

    // printf("Encrypted bytes:\n");
    // for (ULONG i = 0; i < encryptedSize; i++) {
    //     printf("%02X", encrypted[i]);
    // }
    // printf("\n");

    // Print the info
    printf("KEY:\n");
    for (ULONG i = 0; i < sizeof(key); i++) {
        printf("%02X", key[i]);
    }
    printf("\n");

    printf("IV:\n");
    for (ULONG i = 0; i < sizeof(iv); i++) {
        printf("%02X", iv[i]);
    }
    printf("\n");

    printf("Authentication tag:\n");
    for (ULONG i = 0; i < authTagLength; i++) {
        printf("%02X", authTag[i]);
    }
    printf("\n");

    // Add authentication tag to the ciphertext and write the data to a file
    memcpy(KeyTagIV + 44, authTag, authTagLength);
    memcpy(keys, KeyTagIV, 60);

    printf("FULL:\n");
    for (ULONG i = 0; i < 60; i++) {
        printf("%02X", KeyTagIV[i]);
    }
    printf("\n");

    FILE *encryptedFile = fopen(OutputFile, "wb");
    fwrite(encrypted, sizeof(char), file_size, encryptedFile);

    free(encrypted);
    return 1;
}

int main()
{
    // GCM Encrypt the File.
    char key_data[60];
    char path[] = "test.txt";
    char path1[] = "test.enc";
    GCM_Encrypt_File(path, key_data, path1);

    printf("Decryption Data:");
    for (int i = 0; i < 60; i++) {
        printf("%02X ", key_data[i]);
    }
    return 0;
}
```

and I'm trying to decrypt the data with this Python code:

```python
import rsa
from Crypto.Cipher import AES
import base64
import os

if __name__ == '__main__':
    # Decryption data in hex: KEY; IV; TAG
    KEYS = """540AA548ADBBF19820FEC1DDF9BC19B6230A746C0CF0EA87E083FDF314867DA525F299D2B9FEBC26A864A9F149D3A60B05E03CA9C3328E5AB10228DB"""
    encrypted_keys = bytearray.fromhex(KEYS)
    ENC_PATH = "test.enc"

    keys_data = encrypted_keys
    key = keys_data[:32]
    iv = keys_data[32:44]
    authTag = keys_data[44:60]

    ciphertext = open(ENC_PATH, mode="rb").read()

    cipher = AES.new(key, AES.MODE_GCM, nonce=iv)
    plaintext = cipher.decrypt_and_verify(ciphertext, authTag)

    with open('decrypted_test.txt', 'wb') as f:
        f.write(plaintext)

    print("Done.")
    # decrypt_rsa(input_filename, output_filename)
```

**This code seems to work when encrypting, for example, a .txt file with just "Hello, world!" — that decrypts correctly.**

The problem is that when I try to encrypt a .zip file like this:

[![enter image description here][1]][1]

the decrypted output data looks like this:

[![enter image description here][2]][2]

**The .txt file is gone, and test2.zip says "Corrupted" when trying to open it. test1.zip still opens correctly.**

*Could the algorithm be corrupting it? Or is there something wrong in my code?*

[1]: https://i.stack.imgur.com/bQKX3.png
[2]: https://i.stack.imgur.com/oIwI2.png
Could bcrypt's AES-256 GCM encryption corrupt ZIP files?
|python|c|corruption|aes-gcm|
We could add a temporary column `".rm"` (for "remove"), created by unlisting the `"id"` column and scanning it with `duplicated`. This gives a vector that can be `split` along consecutive integers, each `rep`eated `nrow` times for its sub-list, and added to the sub-lists using `` `[<-`() ``. Finally we `subset` for the non-`TRUE`s in `".rm"` and remove that temporary column.

```r
> Map(`[<-`, my_list, '.rm', value=lapply(my_list, `[`, 'id') |>
+     unlist() |>
+     duplicated() |>
+     split(sapply(my_list, nrow) |> {
+         \(.) mapply(rep.int, seq_along(.), .)
+     }())) |>
+     lapply(subset, !.rm, select=-.rm)
[[1]]
     id country
1 xxxyz     USA
3 zzuio  Canada

[[2]]
     id country
2 ppuip  Canada
```

This removes "zzuio" in the second sub-list instead of the first, but that actually makes more sense to me.
Download and correctly install an updated [CA bundle][1] for your PHP's curl. Better yet, update or switch to a webserver with a current version of PHP, as whatever you are using is likely to have security vulnerabilities. [1]: https://curl.se/docs/caextract.html
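Once downloaded (the file is usually named `cacert.pem`), PHP also has to be told where the bundle lives. A sketch of the relevant `php.ini` directives — the paths are only placeholders for wherever you saved the file:

```
; php.ini — point curl (and OpenSSL) at the downloaded CA bundle.
; Paths below are examples; substitute your own location.
curl.cainfo = "C:\php\extras\ssl\cacert.pem"
openssl.cafile = "C:\php\extras\ssl\cacert.pem"
```

Alternatively, a single request can be pointed at the bundle with `curl_setopt($ch, CURLOPT_CAINFO, '/path/to/cacert.pem')`. Restart the webserver after editing `php.ini`.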
A layout component that renders the sidebar next to a `<slot />` for the page content:

```
<template>
  <div class="flex">
    <SidebarComponent />
    <div>
      <slot />
    </div>
  </div>
</template>

<style></style>

<script setup lang="ts">
</script>
```
I am writing a program that reads a BMP file and outputs the same BMP file cropped in half. I do this by simply halving the height in the BMP header, which results in the top half of the image being cropped out. However, I can only crop out the top half. I am trying to find a way to crop out the bottom half instead, but it seems the pixel rows are written starting from the bottom of the file, and I would like to start halfway through the file (or from the top down).

```
// Update the width and height in the BMP header
header.width = newWidth;
header.height = newHeight;
printf("New header width: %d\n", header.width);
printf("New header height: %d\n", header.height);

// Write the modified BMP header to the output file
fwrite(&header, sizeof(BMPHeader), 1, outputFile);

// Calculate the padding
int padding = (4 - (header.width * (header.bpp / 8)) % 4) % 4;

// Copy the image data
unsigned char pixel[4];
for (int y = 0; y < newHeight; y++) {
    for (int x = 0; x < header.width; x++) {
        fread(pixel, sizeof(unsigned char), header.bpp / 8, inputFile);
        fwrite(pixel, sizeof(unsigned char), header.bpp / 8, outputFile);
    }
    for (int p = 0; p < padding; p++) {
        fputc(0, outputFile); // Write padding for the new image
    }
}

// Close the files
fclose(inputFile);
fclose(outputFile);

printf("BMP image cropped successfully\n");
```

This is essentially all the code that does the image cropping. I'm only using the stdio.h and stdlib.h libraries and would like to keep it that way. The output image is the bottom half of the original, but I would also like to find a way to keep the top half instead. The original BMP image is 3200x1200, and I am setting the new height to 600 instead of 1200 so the new image is cut in half vertically.

[Original BMP image (3200x1200)](https://i.stack.imgur.com/3Rml3.png)

[Cropped image (3200x600)](https://i.stack.imgur.com/zStih.png)
How to crop a BMP image in half using C
|c|crop|bmp|
The answer is [here](https://cabal.readthedocs.io/en/latest/cabal-package-description-file.html#library); I was just looking at an old version of the docs. Here's what the new one has to say:

> Before version 3.0 of the Cabal specification, all sublibraries were internal libraries. Before version 2.0, a package could not include sublibraries.
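For reference, a minimal sketch of a `.cabal` file with a non-internal sublibrary (all names here are made up; `visibility: public` requires `cabal-version: 3.0` or later):

```
cabal-version: 3.0
name:          example
version:       0.1.0.0

-- A named sublibrary; "visibility: public" makes it usable from
-- other packages (internal visibility is the default).
library internals
  visibility:      public
  exposed-modules: Example.Internals
  build-depends:   base
  hs-source-dirs:  src-internals
```

Another package can then depend on it as `example:internals` in its `build-depends`.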
It appears that it's important to state that you want the 7 specific items extracted by asking for `tokens=1-7`. Token 1 goes into `%%G`, and each subsequent token uses the next letter: `%%H`, `%%I`, and so on.

To get one line per token, echo each token's variable separately — either stacked one per line, or grouped sequentially with `&`:

```
(echo %%G) & (echo %%H) & (echo %%I) ... etc
```

In total, the following worked for me (note: use `%G` at the interactive prompt, but double the percent signs, `%%G`, inside a batch script):

```
for /f "tokens=1-7" %G in ("1 2 7 16 21 26 688") do (
    echo %G
    echo %H
    echo %I
    echo %J
    echo %K
    echo %L
    echo %M
)
```
If you don't have a value in the array to use as a key, the solution proposed in [this answer][1] works like a charm:

    ...
    routes.map(route =>
        <li className='mr-8' key={JSON.stringify(route)}>
    ...

  [1]: https://stackoverflow.com/a/52706585/1787312
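A quick illustration of why this works (the route objects here are made up): serializing each item yields a deterministic string, which is unique as long as the items themselves differ.

```javascript
// Hypothetical route objects without a natural id field.
const routes = [
  { path: "/home", label: "Home" },
  { path: "/about", label: "About" },
];

// Derive a key from the item's own content; the same object shape
// always serializes to the same string, so keys are stable across renders.
const keys = routes.map((route) => JSON.stringify(route));

console.log(keys[0]); // {"path":"/home","label":"Home"}
```

One caveat: if two array items are deep-equal, their keys collide, so this only suits lists of distinct objects.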
You can make new code adapt automatically to the HTML direction with the logical `*-inline-*` CSS rules. There is a [website][1] that helps you convert your CSS code to such styles, but your `writing-mode` should be horizontal.

  [1]: https://www.bicss.net/
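For example (the class name and values below are arbitrary), logical properties resolve against the element's writing direction, so one rule covers both LTR and RTL:

```
/* In LTR this is margin-left; in RTL it becomes margin-right,
   with no direction-specific overrides needed. */
.card {
  margin-inline-start: 1rem;
  padding-inline: 0.5rem 1rem;      /* start, end */
  border-inline-end: 1px solid #ccc;
}
```

The same pattern applies to `inset-inline-*` for positioned elements and `text-align: start`/`end` for text.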
I am trying to upgrade our version of JFrog Artifactory from 7.33 to 7.77 on CentOS 7, but I'm getting the following error:

```
Error: Package: jfrog-artifactory-pro-7.77.6-77706900.x86_64 (Artifactory-pro)
       Requires: libstdc++.so.6(CXXABI_1.3.9)(64bit)
```

Does anyone in the forum have experience with this issue? My initial research/testing leads me to believe there is no libstdc++.so.6(CXXABI_1.3.9)(64bit) for CentOS 7, but I'm going to dive deeper to confirm. Thank you!
CentOS 7 and JFrog Artifactory 7.77.X
|centos|centos7|
Try combining those data columns into a table and then expanding:

```
let
    [...]
    // Convert the JSON response into a table
    jsonResponse = Json.Document(response),
    #"Convertido para Tabela" = Record.ToTable(jsonResponse),
    Value = Table.FromRecords(List.Transform(#"Convertido para Tabela"{0}[Value], each [
        id = [id],
        data = Table.FromColumns({[power], [wind_speed], [times]}, {"power", "wind_speed", "times"})
    ])),
    #"Expanded data" = Table.ExpandTableColumn(Value, "data", {"power", "wind_speed", "times"}, {"power", "wind_speed", "times"})
in
    #"Expanded data"
```
I'm trying to figure out how to stop Maya from displaying some messages, such as infos and warnings. It sounds like a job for the logger — just set its level — but that doesn't seem to affect commands such as `cmds.warning()`.

Are those commands even using the logger? Is Maya playing some trick on us by force-setting the logger level inside the command itself?

Any info is appreciated! Cheers!
Does maya.cmds.warning() rely on Maya's logger?
|debugging|logging|maya|
In my application a user can assume one of 3 different roles. Users can be assigned to programs through 3 different fields, each of them exclusive to one role. In my API I'm trying to query all my users and annotate the programs they are in:

```
def get_queryset(self):
    queryset = (
        User.objects
        .select_related('profile')
        .prefetch_related('managed_programs', 'supervised_programs', 'case_managed_programs')
        .annotate(
            programs=Case(
                When(
                    role=RoleChoices.PROGRAM_MANAGER,
                    then=ArraySubquery(
                        Program.objects.filter(program_managers=OuterRef('pk'), is_active=True)
                        .values('id', 'name')
                        .annotate(
                            data=Func(
                                Value('{"id": '),
                                F('id'),
                                Value(', "name": "'),
                                F('name'),
                                Value('"}'),
                                function='concat',
                                output_field=CharField(),
                            )
                        )
                        .values('data')
                    ),
                ),
                When(
                    role=RoleChoices.SUPERVISOR,
                    then=ArraySubquery(
                        Program.objects.filter(supervisors=OuterRef('pk'), is_active=True)
                        .values('id', 'name')
                        .annotate(
                            data=Func(
                                Value('{"id": '),
                                F('id'),
                                Value(', "name": "'),
                                F('name'),
                                Value('"}'),
                                function='concat',
                                output_field=CharField(),
                            )
                        )
                        .values('data')
                    ),
                ),
                When(
                    role=RoleChoices.CASE_MANAGER,
                    then=ArraySubquery(
                        Program.objects.filter(case_managers=OuterRef('pk'), is_active=True)
                        .values('id', 'name')
                        .annotate(
                            data=Func(
                                Value('{"id": '),
                                F('id'),
                                Value(', "name": "'),
                                F('name'),
                                Value('"}'),
                                function='concat',
                                output_field=CharField(),
                            )
                        )
                        .values('data')
                    ),
                ),
                default=Value(list()),
                output_field=ArrayField(base_field=JSONField()),
            )
        )
        .order_by('id')
    )
    return queryset
```

This works (almost) flawlessly and gives me only 5 DB hits — perfect... or not. The problem is that I'm using Django HashID fields for the Program PK, and this query returns the plain integer value for each Program.
I've tried a more "normal" approach, getting the data with a `SerializerMethodField`:

```
@staticmethod
def get_programs(obj):
    role_attr = {
        RoleChoices.PROGRAM_MANAGER: 'managed_programs',
        RoleChoices.SUPERVISOR: 'supervised_programs',
        RoleChoices.CASE_MANAGER: 'case_managed_programs',
    }
    try:
        programs = getattr(obj, role_attr[obj.role], None).values_list('id', 'name')
        return [{'id': str(id), 'name': name} for id, name in programs]
    except (AttributeError, KeyError):
        return []
```

This gives me the result I need, but the query count skyrockets. It seems it's not taking advantage of the `prefetch_related`, but I don't understand how that's possible, considering I'm using the same queryset. So, I have two options here:

- Use the annotations, but have the HashID returned instead of the integer PK
- Have the `SerializerMethodField` reuse the prefetched data instead of re-querying

Is there a way to accomplish either of those?
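One likely cause, worth verifying against your setup: Django's prefetch cache is only consulted when the related manager is read via `.all()`; calling `.values_list()` on it builds a new queryset and hits the database again. A stand-alone mock of that behavior — `FakeManager` is not Django, just an illustration of the documented caching rule:

```python
# FakeManager mimics, in plain Python, how a Django related manager
# treats a prefetch cache: .all() serves cached objects, while
# .values_list() builds a fresh queryset and re-queries.
class Program:
    def __init__(self, id, name):
        self.id, self.name = id, name

class FakeManager:
    def __init__(self, prefetched):
        self._cache = prefetched   # filled by prefetch_related
        self.db_hits = 0           # simulated query counter

    def all(self):
        return self._cache         # cache reused: no extra query

    def values_list(self, *fields):
        self.db_hits += 1          # fresh queryset: one more query
        return [tuple(getattr(o, f) for f in fields) for o in self._cache]

mgr = FakeManager([Program(1, "Alpha"), Program(2, "Beta")])

mgr.values_list("id", "name")      # the serializer's current path
rows = [{"id": str(p.id), "name": p.name} for p in mgr.all()]  # cached path

print(mgr.db_hits, rows[0])        # prints: 1 {'id': '1', 'name': 'Alpha'}
```

If that is what's happening, iterating the prefetched objects (`.all()`) in `get_programs` instead of calling `.values_list()` would keep the query count flat.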
Annotations returning pure PK value instead of Django HashID
|python|django|postgresql|django-orm|hashids|
From your documentation, your LCD has an I2C interface, so you need a different library, such as LiquidCrystal_I2C. This code should work:

    #include <LiquidCrystal_I2C.h>

    LiquidCrystal_I2C lcd(0x27, 16, 2); // Set the LCD I2C address - usually 0x27 or 0x3F

    void setup()
    {
        lcd.begin();         // initialize the lcd
        lcd.home();          // cursor to home
        lcd.print("Hello, World ");
        lcd.setCursor(0, 1); // cursor to the next line
        lcd.print("Line Two");
    }

    void loop()
    {
    }

You will need to know the I2C address of the LCD, which is usually 0x27 or 0x3F. The code above uses 0x27 (which works with mine); if that fails, try 0x3F. If it still doesn't work, reply in the comments and I will add an address sniffer so you can find out what yours is.

The pin connections in the image seem to be OK: A4 is indeed the I2C SDA line, and A5 the I2C SCL.
Add this line:

```
storeFile keystoreProperties['storeFile'] ? file(keystoreProperties['storeFile']) : null
```

inside the `release` block of `android { ... }`:

```
android {
    release {
        storeFile keystoreProperties['storeFile'] ? file(keystoreProperties['storeFile']) : null
    }
}
```

If you already have a `storeFile` line there, replace it with:

```
storeFile file("key.jks") ? file("key.jks") : null
```
I had a similar issue, and I think the OP solved the problem the same way I did, but their answer isn't too clear. What I had to do is add a URI under "Authorized redirect URIs" for my Google app on GCP (select your project -> Credentials -> Authorized redirect URIs). The URI is:

```
https://{my-domain}.auth.{region}.amazoncognito.com/oauth2/idpresponse
```