I use Angular Material 16. I would like a mat-form-field with appearance="outline", but the label should sit inside my input field. Currently it looks like this: [![enter image description here][1]][1] But I want this: [![enter image description here][2]][2] [1]: https://i.stack.imgur.com/MVpsH.png [2]: https://i.stack.imgur.com/fNNSD.png If I use `<mat-form-field appearance="fill">` (2nd picture), the border is gone. I would like to keep the border too.
Angular Material mat-form-field appearance="outline"
|angular|material-ui|angular-material|material-design|mat-form-field|
From https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/limits.h.html : {CHAR_BIT} Number of bits in a type char. [CX] [Option Start] Value: 8 [Option End] The POSIX Issue 7 and Issue 6 specifications state that char has 8 bits. POSIX standards before Issue 6 did _not_ require that. From the same site: > CHANGE HISTORY > > Issue 6 > > The values for the limits {CHAR_BIT}, {SCHAR_MAX}, and {UCHAR_MAX} are now required to be 8, +127, and 255, respectively.
This is my test code:

```
public class Test {
    public static SoftReference<byte[]> cache = new SoftReference<>(new byte[0]);
    public static List<byte[]> list = new ArrayList<>();

    public static void main(String[] args) {
        try {
            func();
        } catch (OutOfMemoryError e) {
            sniff();
            e.printStackTrace();
        }
    }

    public static void func() {
        byte[] bytes = new byte[1024 * 1024];
        cache = new SoftReference<>(bytes);
        for (;;) {
            byte[] tmp = new byte[1024 * 1024];
            list.add(tmp);
        }
    }

    public static void sniff() {
        byte[] bytes = cache.get();
        if (bytes == null) {
            System.out.println("recycling data.");
        } else {
            System.out.println("object still live");
        }
    }
}
```

The program output is as follows:

> object still live

I don't understand why. This is a sentence I found in the official Oracle documentation:

> All soft references to softly-reachable objects are guaranteed to have been cleared before the virtual machine throws an OutOfMemoryError.

Even more strangely, if I move `byte[] bytes = new byte[1024 * 1024]; cache = new SoftReference<>(bytes);` into the for loop, like this:

```
public class Test {
    public static SoftReference<byte[]> cache = new SoftReference<>(new byte[0]);
    public static List<byte[]> list = new ArrayList<>();

    public static void main(String[] args) {
        try {
            func();
        } catch (OutOfMemoryError e) {
            sniff();
            e.printStackTrace();
        }
    }

    public static void func() {
        for (;;) {
            byte[] tmp = new byte[1024 * 1024];
            list.add(tmp);
            byte[] bytes = new byte[1024 * 1024];
            cache = new SoftReference<>(bytes);
        }
    }

    public static void sniff() {
        byte[] bytes = cache.get();
        if (bytes == null) {
            System.out.println("recycling data.");
        } else {
            System.out.println("object still live");
        }
    }
}
```

the program output is as follows:

> recycling data.

I have two questions:

1. With the first version, why did the garbage collector not clear the SoftReference?
2. Why do these two versions behave so differently?

I analyzed the heap dump, and with the first version the reference really isn't cleared.
I have a JSON file with data, and I want to check whether a participant has completed a lesson by checking whether an object called targetTitle contains "Exit" where the object pTitle has all of the following text elements for a given lesson: "hard", "easy", "medium", "CD", "CH", "WU". Currently, the code cannot differentiate between the two following cases:

#1: targetTitle has "Exit" and pTitle has all text elements for a given lesson at least once.

#2: targetTitle has "Exit" and pTitle has **one** text element for a given lesson at least once.

Here is example data:

```
{
    "_id": "018e3sdafdsg810c478",
    "Score": 0,
    "Participant": "mailto:ExampleEmail@gmail.com",
    "Time": "2024-03-14T21:01:53.511Z",
    "Answer Details": {
        "pID": "ac82f2b7-842b-4f34-96e7-177324359390",
        "pTitle": "Part 1 Lesson6-Main0-10easy",
        "iID": "dsfds1201-7a94-41as-bsad-a6fdsf9f428",
        "targetID": "Rad_id_145527",
        "targetTitle": "2 Exit",
        "targetNote": "Choice #1",
        "sID": 0,
        "sub": false,
        "tI": 7443,
        "sessionID": "ltrq5nzr2gs723h3g0ld",
        "sessionTime": 7439,
        "r": "FA"
    }
}
```

Here is my current Python script:

```
import csv
import json
import re

def check_participant(entry):
    participant = entry.get('Participant', 'Participant information missing')
    lessons_completed = {}
    # Check if 'Answer Details' key is present
    if 'Answer Details' in entry:
        answer_details = entry['Answer Details']
        # Get 'pTitle' and 'targetTitle' if present
        p_title = answer_details.get('pTitle', 'pTitle information missing')
        target_title = answer_details.get('targetTitle', 'targetTitle information missing')
        # Extract lesson number using regex
        lesson_number_match = re.search(r'(Lesson|L)(\d+)', p_title)
        if lesson_number_match:
            lesson_number = f"Lesson {lesson_number_match.group(2)}"
            # Check conditions
            completed = False
            if any(keyword in p_title for keyword in ['WU', 'CH', 'CD', 'easy', 'medium', 'hard']) and 'Exit' in target_title:
                completed = True
            lessons_completed[lesson_number] = 'Yes' if completed else 'No'
    return participant, lessons_completed

# Read data from JSON file
with open('example test.json', 'r') as file:
    data = json.load(file)

# Process data to create rows for each participant
participant_data = {}
for entry in data:
    participant, lessons_completed = check_participant(entry)
    if participant not in participant_data:
        participant_data[participant] = lessons_completed
    else:
        participant_data[participant].update(lessons_completed)

# Specify the CSV file path
csv_file_path = "Lesson Completed.csv"

# Write data to the CSV file
with open(csv_file_path, mode='w', newline='') as file:
    writer = csv.writer(file)
    # Write header
    header = ['Participant'] + list(next(iter(participant_data.values())).keys())
    writer.writerow(header)
    # Write each row of data for each participant
    for participant, lessons in participant_data.items():
        row = [participant]
        for lesson, completed in lessons.items():
            row.append(completed)
        writer.writerow(row)

print(f"CSV file '{csv_file_path}' has been created successfully.")
```
I am having difficulty with a side project I am working on. The goal of the program is to perform a few functions given a YouTube video or playlist link:

1. Download a YouTube video and audio and merge them.
2. Download a YouTube playlist's video and audio and merge them.
3. Download a YouTube video's audio only.
4. Download a YouTube playlist's audio only.

My main issues arise with function 2, downloading a playlist with video and audio and merging them. The program completes function 1 successfully (download a video and audio and merge into one file) and function 3 successfully (download a video and output just the audio file). Function 4 hasn't been implemented at all yet. The program is broken into 2 main files: youtube_downloader.py and main.py. The errors I receive from the output of running function 2 are listed after the code below. I am hoping for some clarification on what the errors are describing and any advice on implementing the second function, outlined in the comments below (the section fenced in `#` lines).

main.py

```
from youtube_downloader import download_playlist, input_links, download_video, convert_to_mp3, convert_to_mp4
from moviepy.editor import VideoFileClip, AudioFileClip, concatenate_videoclips
import os

print("Load Complete\n")
print('''
What would you like to do?
(1) Download a YouTube video and audio
(2) Download a YouTube Playlist (video and audio)
(3) Download a YouTube Video's audio only
(4) Download a Youtube Playlist's audio only
(q) Quit
''')

done = False
while done == False:
    # ask user for choice
    choice = input("Choice: ")
    if choice == "1" or choice == "2":
        # Sets Quality Option of video(s)
        quality = input("Please choose a quality (low or 0, medium or 1, high or 2, very high or 3):")
        # download videos manually
        if choice == "1":
            links = input_links()
            print('Download has been started')
            for link in links:
                filename = download_video(link, quality)
                convert_to_mp4(filename)
            print("Download finished!")
        # download a playlist
        if choice == "2":
            link = input("Enter the link to the playlist: ")
            print("Downloading playlist...")
            filenames = download_playlist(link, quality)
            print("Download finished! Beginning conversion...")
            #################################################################
            for file in os.listdir('./Downloaded/'):
                convert_to_mp4(filenames)
            #################################################################
            print("Conversion finished!")
    elif choice == "3":
        links = input_links()
        for link in links:
            print("Downloading...")
            filename = download_video(link, 'low')
            print("Converting...")
            convert_to_mp3(filename)
            os.remove(filename)
    elif choice == "4":
        pass  # TODO: add option 4 code
    elif choice == "q" or choice == "Q":
        done = True
        print("Goodbye")
    else:
        print("Invalid input!")
```

youtube_downloader.py

```
import pytube
from pytube import YouTube, Playlist
from pytube.cli import on_progress
from moviepy.editor import VideoFileClip, AudioFileClip
import os

"""
Downloads video to a 'Downloaded' folder in the same dir as the program.
"""
def download_video(url, resolution):
    itag = choose_resolution(resolution)
    video = YouTube(url, on_progress_callback=on_progress)
    stream = video.streams.get_by_itag(itag)
    try:
        os.mkdir('./Downloaded/')
    except:
        pass
    stream.download(output_path='./Downloaded/')
    return f'./Downloaded/{stream.default_filename}'

def download_videos(urls, resolution):
    for url in urls:
        download_video(url, resolution)

def download_playlist(url, resolution):
    playlist = Playlist(url)
    download_videos(playlist.video_urls, resolution)

def choose_resolution(resolution):
    if resolution in ["low", "360", "360p", "0"]:
        itag = 18
    elif resolution in ["medium", "720", "720p", "hd", "1"]:
        itag = 22
    elif resolution in ["high", "1080", "1080p", "fullhd", "full_hd", "full hd", "2"]:
        itag = 137
    elif resolution in ["very high", "2160", "2160p", "4K", "4k", "3"]:
        itag = 313
    else:
        itag = 18
    return itag

def input_links():
    print("Enter the links of the videos (end by entering 'stop' or 0):")
    links = []
    link = ""
    while link != "0" and link.lower() != "stop":
        link = input("video_url or \"stop\": ")
        links.append(link)
    if len(links) == 1:
        print("No links were inputed")
        exit()
    links.pop()
    return links

def convert_to_mp3(filename):
    clip = VideoFileClip(filename)
    clip.audio.write_audiofile(filename[:-3] + "mp3")
    clip.close()

def convert_to_mp4(filename):
    video_clip = VideoFileClip(filename)
    audio_clip = AudioFileClip(filename)
    final_clip = video_clip.set_audio(audio_clip)
    final_clip.write_videofile(filename[:-3] + "mp4")
    final_clip.close()
```

Error output

```
Traceback (most recent call last):
  File "C:\Users\ptcoo\Documents\youtube-downloader-converter\main.py", line 44, in <module>
    convert_to_mp4(filenames)
  File "C:\Users\ptcoo\Documents\youtube-downloader-converter\youtube_downloader.py", line 69, in convert_to_mp4
    video_clip = VideoFileClip(filename)
                 ^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ptcoo\AppData\Roaming\Python\Python312\site-packages\moviepy\video\io\VideoFileClip.py", line 88, in __init__
    self.reader = FFMPEG_VideoReader(filename, pix_fmt=pix_fmt,
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ptcoo\AppData\Roaming\Python\Python312\site-packages\moviepy\video\io\ffmpeg_reader.py", line 35, in __init__
    infos = ffmpeg_parse_infos(filename, print_infos, check_duration,
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ptcoo\AppData\Roaming\Python\Python312\site-packages\moviepy\video\io\ffmpeg_reader.py", line 244, in ffmpeg_parse_infos
    is_GIF = filename.endswith('.gif')
             ^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'endswith'
```
This is the SQL query that I'm trying to replicate in CakePHP:

    $sql = "select sl.name_en, l.*
        from service_line_entries sle
        inner join escalation_entries ee on ee.service_line_entry_id = sle.id
        inner join service_lines sl on sle.service_line_id = sl.id
        inner join audit_logs l on l.primary_key = ee.id and l.source = 'escalation_entries'
        where sle.request_id = 69060
        order by sl.name_en asc, created asc";

The `AuditLogs` table is populated by the `AuditStash` behavior, and thus contains audit data from many different tables in my app, identified by the column `source`. Here is my CakePHP code:

    $this->ServiceLineEntries = TableRegistry::getTableLocator()->get('ServiceLineEntries');
    $sle = $this->ServiceLineEntries->find()
        ->contain(['EscalationEntries'])
        ->join([
            'table' => 'audit_logs',
            'alias' => 'AuditLogs',
            'type' => 'INNER',
            'conditions' => [
                "AuditLogs.primary_key = EscalationEntries.id",
                "AuditLogs.source = 'escalation_entries'"
            ],
        ])
        ->where(['ServiceLineEntries.request_id' => 69060]);

Errors:

`Column not found: 1054 Unknown column 'EscalationEntries.id' in 'on clause'`

It doesn't work even if I specify the physical table name:

    'conditions' => [
        "AuditLogs.primary_key = escalation_entries.id",
        "AuditLogs.source = 'escalation_entries'"
    ],

`Column not found: 1054 Unknown column 'escalation_entries.id' in 'on clause'`

Edit #1: I need to see columns from AuditLogs.

Edit #2: Using the raw SQL, I can in theory massage the variable to make it easier to loop over in the template:

    $sql = "select sl.name_en, l.*
        from service_line_entries sle
        inner join escalation_entries ee on ee.service_line_entry_id = sle.id
        inner join service_lines sl on sle.service_line_id = sl.id
        inner join audit_logs l on l.primary_key = ee.id and l.source = 'escalation_entries'
        where sle.request_id = 69060
        order by sl.name_en asc, created asc";
    $connection = ConnectionManager::get('default');
    $results = $connection->execute($sql)->fetchAll('assoc');
    $out = [];
    foreach ($results as $r) {
        $out[$r['name_en']][] = $r;
    }
The main problem here is that you use `and_` as reduce operator, so that means that you specify as condition that the `code_postal` should start with `78` and `95` at the same time. No text/number can start with `78` and `95` (and all other values) at the same time. You can easily fix this by reducing this with `or_`: <pre><code>from functools import reduce from operator import <b>or_</b> query = reduce(<b>or_</b>, (Q(code_postal__startswith=item) for item in q)) result = Record14.objects.filter(query)</code></pre> That being said, it is probably better to use a [*regular expression* [wiki]](https://en.wikipedia.org/wiki/Regular_expression) here, like: <pre><code>from re import escape as <b>reescape</b> result = Record14.objects.filter( code_postal<b>__regex= '^({})'.format('|'.join(map(reescape, q)))</b> )</code></pre> For your given list `q`, this will result in a regex: ^(78|95|77|91|92|93|94|75|27|28|45|89|10|51|02|60|27) The `^` is the start anchor here, and the pipe acts as a "union", so this regex looks for columns that start with `78`, `95`, `77`, etc.
I am trying to create a new Strapi project in my api folder for an e-commerce website project. However, no matter what I do to change the Node version using nvm, I still get this error (that it is not compatible with the current version). As you can see below, I literally just used nvm to change the Node version to 20.0.0 and got confirmation I was currently using it. Also, you can see in my `nvm ls` output that 21.2.0 isn't even installed. I am confused and also a beginner. Am I using nvm wrong? Do I have to delete some previous folder that is somehow making my project think I am using 21.2.0? I am very confused and would appreciate any help.

```
current@0.1.6
PS C:\Users\liamt\Desktop\ecommerce-app\api> nvm current
v20.11.0
PS C:\Users\liamt\Desktop\ecommerce-app\api> nvm ls
    20.9.0
    20.5.0
    20.0.0
PS C:\Users\liamt\Desktop\ecommerce-app\api> nvm use 20.0.0
Now using node v20.0.0 (64-bit)
PS C:\Users\liamt\Desktop\ecommerce-app\api> npx create-strapi-app@latest .
(node:14736) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
(Use `node --trace-deprecation ...` to show where the warning was created)
? Choose your installation type Custom (manual settings)
You are running Node.js 21.2.0
Strapi requires Node.js >=18.0.0 <=20.x.x
Please make sure to use the right version of Node.
PS C:\Users\liamt\Desktop\ecommerce-app\api>
```

I tried to change the version of Node using nvm and it appeared to change; however, I still get the same error.
nvm not changing node version to comply with strapi
|node.js|terminal|strapi|nvm|
To create an [`array`][1] you can do the following: ```sql select array[Column1, Column2] from table; ``` [1]: https://prestodb.io/docs/current/language/types.html#array
I am trying to make a tooltip using Tailwind CSS, but I am facing a problem, as I am not very good with border manipulation. Example of how I want it to look: ![Example of how I want it to look](https://i.stack.imgur.com/RHDuI.png) I am able to round the tip of the arrow by making a rounded triangle and adding it on top, but this way I can't make the bottom parts that continue from it rounded as well.
I would add something more to the [answer][1]. To get all options for the resources in the defined providers, you can run the command:

    terraform providers schema -json

Part of the output:

    "google_storage_bucket_iam_member": {
      "version": 0,
      "block": {
        "attributes": {
          "bucket": { "type": "string", "description_kind": "plain", "required": true },
          "etag": { "type": "string", "description_kind": "plain", "computed": true },
          "id": { "type": "string", "description_kind": "plain", "optional": true, "computed": true },
          "member": { "type": "string", "description_kind": "plain", "required": true },
          "role": { "type": "string", "description_kind": "plain", "required": true }
        },
        "block_types": {
          "condition": {
            "nesting_mode": "list",
            "block": {
              "attributes": {
                "description": { "type": "string", "description_kind": "plain", "optional": true },
                "expression": { "type": "string", "description_kind": "plain", "required": true },
                "title": { "type": "string", "description_kind": "plain", "required": true }
              },
              "description_kind": "plain"
            },
            "max_items": 1
          }
        },
        "description_kind": "plain"
      }
    }

You can combine `terraform show -json` with `terraform providers schema -json` to achieve your goals.

[1]: https://stackoverflow.com/a/78242266/13347227
```
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import "@openzeppelin/contracts/token/ERC721/IERC721.sol";
import "@openzeppelin/contracts/security/ReentrancyGuard.sol";

contract Marketplace is ReentrancyGuard {
    // Variables
    address payable public immutable feeAccount; // the account that receives fees
    uint public immutable feePercent; // the fee percentage on sales
    uint public itemCount;

    struct Item {
        uint itemId;
        IERC721 nft;
        uint tokenId;
        uint price;
        address payable seller;
        bool sold;
    }

    // itemId -> Item
    mapping(uint => Item) public items;

    event Offered(
        uint itemId,
        address indexed nft,
        uint tokenId,
        uint price,
        address indexed seller
    );

    event Bought(
        uint itemId,
        address indexed nft,
        uint tokenId,
        uint price,
        address indexed seller,
        address indexed buyer
    );

    constructor(uint _feePercent) {
        feeAccount = payable(msg.sender);
        feePercent = _feePercent;
    }

    // Make item to offer on the marketplace
    function makeItem(IERC721 _nft, uint _tokenId, uint _price) external nonReentrant {
        require(_price > 0, "Price must be greater than zero");
        // Increment itemCount
        itemCount++;
        // Approve Marketplace to transfer the NFT on behalf of the user
        _nft.approve(address(this), _tokenId);
        // Transfer nft
        _nft.transferFrom(msg.sender, address(this), _tokenId);
        // Add new item to items mapping
        items[itemCount] = Item(
            itemCount,
            _nft,
            _tokenId,
            _price,
            payable(msg.sender),
            false
        );
        // Emit Offered event
        emit Offered(
            itemCount,
            address(_nft),
            _tokenId,
            _price,
            msg.sender
        );
    }

    function purchaseItem(uint _itemId) external payable nonReentrant {
        uint _totalPrice = getTotalPrice(_itemId);
        Item storage item = items[_itemId];
        require(_itemId > 0 && _itemId <= itemCount, "Item doesn't exist");
        require(msg.value >= _totalPrice, "Not enough ether to cover item price and market fee");
        require(!item.sold, "Item already sold");
        // Pay seller and feeAccount
        item.seller.transfer(item.price);
        feeAccount.transfer(_totalPrice - item.price);
        // Update item to sold
        item.sold = true;
        // Transfer NFT to buyer
        item.nft.transferFrom(address(this), msg.sender, item.tokenId);
        // Swap seller and buyer
        address payable temp = item.seller;
        item.seller = payable(msg.sender);
        // Emit Bought event
        emit Bought(
            _itemId,
            address(item.nft),
            item.tokenId,
            item.price,
            temp, // Original seller
            msg.sender // Buyer
        );
    }

    function getTotalPrice(uint _itemId) view public returns (uint) {
        return ((items[_itemId].price * (100 + feePercent)) / 100);
    }
}
```

This is the code of the smart contract. The error occurs while calling the `makeItem` function: it failed to estimate the gas value. How do I solve this error?

    (transactionHash="0xf6a128f64f2117b5454ef82b8694844a13d087f25cfec6d869e415bff78f6848",
    transaction={"hash":"0xf6a128f64f2117b5454ef82b8694844a13d087f25cfec6d869e415bff78f6848","type":2,"accessList":null,"blockHash":null,"blockNumber":null,"transactionIndex":null,"confirmations":0,"from":"0x23a8704727E7Df40Ed0032553D964d346Dae6c38","gasPrice":{"type":"BigNumber","hex":"0x61c517b2"},"maxPriorityFeePerGas":{"type":"BigNumber","hex":"0x59682f00"},"maxFeePerGas":{"type":"BigNumber","hex":"0x61c517b2"},"gasLimit":{"type":"BigNumber","hex":"0x0493e0"},"to":"0x57adc0d1289bC0D5E2980935E8946eF6AAAd9cFD","value":{"type":"BigNumber","hex":"0x00"},"nonce":170,"data":"0xfa00afc70000000000000000000000007c37c609bef3081770b1a2dba2a31d8b070970b60000000000000000000000000000000000000000000000000000000000000008000000000000000000000000000000000000000000000000016345785d8a0000","r":"0xbe6d5f0a133c30b16723e0000a0a11d3095f175f9412c7a28e2a7c8f52c221c1","s":"0x6a54aeeedab5c88a2ac8a2c19a6d89943155e32473ff512c46d1097fce40cf5a","v":1,"creates":null,"chainId":0},
    receipt={"to":"0x57adc0d1289bC0D5E2980935E8946eF6AAAd9cFD","from":"0x23a8704727E7Df40Ed0032553D964d346Dae6c38","contractAddress":null,"transactionIndex":17,"gasUsed":{"type":"BigNumber","hex":"0xf12f"},"logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","blockHash":"0x1c7a1f0d16c7454cf2866625ae0be2ec26232ae5f29dc836221293a7efd104a4","transactionHash":"0xf6a128f64f2117b5454ef82b8694844a13d087f25cfec6d869e415bff78f6848","logs":[],"blockNumber":5590766,"confirmations":1,"cumulativeGasUsed":{"type":"BigNumber","hex":"0x15b5f8"},"effectiveGasPrice":{"type":"BigNumber","hex":"0x5edf00ad"},"status":0,"type":2,"byzantium":true},
    code=CALL_EXCEPTION, version=providers/5.7.2)
    at Logger.makeError (index.ts:269:1)
    at Logger.throwError (index.ts:281:1)
    at Web3Provider.<anonymous> (base-provider.ts:1549:1)
    at Generator.next (<anonymous>)
    at fulfilled (base-provider.ts:1:1)
This is the Solidity code to list an NFT on the marketplace after minting, but while calling the makeItem function an error shows: failed to estimate gas.
I have a bottomNavView with ViewPager2 using FragmentStateAdapter, with 4 tabs. Before, I was using ViewPager; then I switched to ViewPager2 because of this error, but with ViewPager2 the same error occurs. When I switch between apps (TikTok, Facebook, Instagram) and come back to my app, it is recreated. All 4 fragments are called from onCreateView() and so on, but my back button was not working, because in onBackPressed I am accessing the fragments, but the fragments are null, although they are being recreated and displayed.

    if (binding.viewPager.currentItem == 3) {
        val frag = sectionsPagerAdapter.fragment3
    }

After spending an entire day, I was able to fix the problem by assigning `this` back to the adapter's fragment fields, but this doesn't seem like the best solution:

    override fun onResume() {
        super.onResume()
        val activity = activity as ActivityMain
        activity.sectionsPagerAdapter.fragment2 = this
    }

In the Manifest:

    <activity
        android:name=".activities.ActivityMain"
        android:configChanges="keyboardHidden|orientation|screenSize"
        android:exported="true"
        android:label="@string/title_activity_navigation"
        android:screenOrientation="portrait" />

This is my ViewPager2 adapter. Before, I was using lateinit vars for the fragments; I was still getting the error. At that time the fragment was not null, but it could not pass isAdded. Another thing is that **createFragment doesn't get called** when the activity is auto-recreated, but all fragments call onCreateView.

    class SectionsPagerAdapter(activity: FragmentActivity) : FragmentStateAdapter(activity) {
        val TAG = "SectionsPagerAdapter"
        var fragment0: Fragment0? = null
        var fragment3: Fragment1? = null
        var fragment1: Fragment2? = null
        var fragment2: Fragment3? = null

        override fun getItemCount(): Int {
            return 4
        }

        override fun createFragment(position: Int): Fragment {
            if (position == 0) {
                if (fragment0 == null) {
                    fragment0 = Fragment0()
                }
                return fragment0!!
            }
            if (position == 1) {
                if (fragment1 == null) {
                    fragment1 = Fragment1()
                }
                return fragment1!!
            }
            if (position == 2) {
                if (fragment2 == null) {
                    fragment2 = Fragment2()
                }
                return fragment2!!
            }
            if (position == 3) {
                if (fragment3 == null) {
                    fragment3 = Fragment3()
                }
                return fragment3!!
            }
            return Fragment0()
        }
    }
I recommend using a method annotated with `@Transactional` inside a `CommandLineRunner` or `ApplicationRunner`.
I installed Arch Linux (current version archlinux-2024.03.01-x86_64) on VM VirtualBox 7.0 (I didn't use the archinstall script, in case that's relevant). I installed Docker and Git with:

    pacman -S docker
    pacman -S git

and ran the `pacman -Sy` and `pacman -Su` commands. Then I followed the steps from this link: https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/docker/building-net-docker-images?view=aspnetcore-8.0

I cloned the Docker .NET sample app with:

    git clone https://github.com/dotnet/dotnet-docker

built it as per the instructions:

    docker build -t aspnetapp .

and got it to run (not sure why it didn't run on the first try, but it eventually ran):

    docker run -it --rm -p 5000:8080 --name aspnetcore_sample aspnetapp

This is the output after executing the docker run command: [![enter image description here][1]][1] I can see from tty2 (another terminal) that the container is running, via the `docker ps` command: [![enter image description here][2]][2] However, I can't wget localhost:5000 or use lynx: [![enter image description here][3]][3]

What am I missing in order to browse the URL/web app? Note: I have network/internet in the Arch virtual machine but haven't configured anything else.

**Update / Solved**

I executed the run command with a typo, `5000:8000` instead of `... 5000:8080`. Once I re-ran the command properly, I can lynx to the http://localhost:5000 url. [![enter image description here][4]][4]

[1]: https://i.stack.imgur.com/YqLUo.png [2]: https://i.stack.imgur.com/qOtIV.png [3]: https://i.stack.imgur.com/96Xwk.png [4]: https://i.stack.imgur.com/w4XJg.png
To get adjusted results, you will need to use `add_p()` with a custom function for the p-value calculation. See below! ``` r library(gtsummary) #> #BlackLivesMatter packageVersion("gtsummary") #> [1] '1.7.2' df <- data.frame( group = c(2, 1, 1, 2, 1, 2), var1 = (rnorm(6,mean=10, sd=3)), var2 = (rnorm(6,mean=6, sd=1)), var3 = c(0, 4, 1, 3, 1, 1), age = c(50, 32, 26, 46, 38, 62), sex = c(1, 0, 1, 1, 1, 0)) my_ancova <- function(data, variable, by, adj.vars = c("age", "sex"), ...) { lm( formula = reformulate(c(by, adj.vars), response = variable), data = data ) |> broom::tidy() |> dplyr::filter(!term %in% "(Intercept)") |> dplyr::slice_head(n = 1L) |> dplyr::mutate( method = glue::glue("ANCOVA adjusted for {paste(adj.vars, collapse = ', ')}") ) } my_ancova(df, variable = "var1", by = "group", adj.vars = c("age", "sex")) #> # A tibble: 1 × 6 #> term estimate std.error statistic p.value method #> <chr> <dbl> <dbl> <dbl> <dbl> <glue> #> 1 group -0.134 1.21 -0.111 0.922 ANCOVA adjusted for age, sex df |> tbl_summary( include = c("var1", "var2"), by = group, type= c("var1", "var2") ~ 'continuous', statistic = c("var1", "var2") ~ "{mean} ± {sd}" ) |> add_p(test = everything() ~ my_ancova) |> as_kable() ``` | **Characteristic** | **1**, N = 3 | **2**, N = 3 | **p-value** | |:-------------------|:------------:|:------------:|:-----------:| | var1 | 9.39 ± 1.94 | 11.87 ± 3.51 | 0.3 | | var2 | 5.90 ± 0.83 | 6.46 ± 2.32 | 0.7 | <sup>Created on 2024-03-12 with [reprex v2.1.0](https://reprex.tidyverse.org)</sup>
Cypress e2e tests crashing with Monaco Editor for React
|cypress|react-monaco-editor|
In my project, inside the scripts folder, I have multiple executable Python scripts. I want to create a separate executable for each Python script. How can I achieve this using PyInstaller?
How to convert multiple Python files to multiple executables using pyinstaller
I have a specific use case where I need to create a button (Custom Tools) within Perforce that enables the user to right-click a pending changelist with checked-out files; it will shelve the files, revert the checked-out files, then change the owner to a single user and assign the changelist to a specific workspace (the same as the Change Ownership button). So far I have set the custom tool to run p4 with the following arguments:

```
shelve -f -Af -c %P
revert -c %P //...
change -U **user.name** %P
```

But this only gets me as far as shelving the changes, reverting, and assigning to a user; I'm missing the workspace change and can't seem to figure this bit out from the docs. I ran Perforce with full logging, which suggested I could run:

```
p4 user -o **user.name**
p4 spec -o user
p4 client -o **workspace.name**
p4 change -i
```

But trying to run that locally in cmd/PowerShell just outputs the information of the user and workspace. I am trying to do this to streamline a process, as an alternative to manually shelving/unshelving.
P4 change ownership through command line
|version-control|perforce|p4v|
That's most likely due to the lack of the annotations dependency. Add to your `build.gradle` and sync: ``` dependencies { implementation("androidx.annotation:annotation:1.7.1") } ```
I am doing a TFS to Azure DevOps migration, and for that I am running a JSON configuration file. For testing, I thought I would migrate a single Epic (for example, Epic number 123). Is it possible to migrate a single item and the items below it? To make it run, what WIQLQueryBit should I write?

    "WIQLQueryBit": "AND [System.WorkItemType] NOT IN ('Epic = 123')",
How to migrate a single work item in Azure DevOps
|azure-devops|devops|
You can use the `rn` command after you've created the archive: 7z.exe a -r D:\TEST.zip ROOT_FOLDER\* 7z.exe rn D:\TEST.zip ROOT_FOLDER ROOT The `--help` flag gives you information about available commands: 7-Zip 19.00 (x64) : Copyright (c) 1999-2018 Igor Pavlov : 2019-02-21 Usage: 7z <command> [<switches>...] <archive_name> [<file_names>...] [@listfile] <Commands> a : Add files to archive b : Benchmark d : Delete files from archive e : Extract files from archive (without using directory names) h : Calculate hash values for files i : Show information about supported formats l : List contents of archive rn : Rename files in archive t : Test integrity of archive u : Update files to archive x : eXtract files with full paths
Fragments in FragmentStateAdapter become null when switching between apps
|android|android-viewpager2|
Using rmbolger's amazing Posh-ACME, I deployed an SSL cert from Let's Encrypt into the personal certificate store of a Server 2022 VM following [this guide](https://www.dvolve.net/blog/2019/12/using-lets-encrypt-for-active-directory-domain-controller-certificates/). I made some custom modifications to the script and I am rather happy with it, but I seem to be unable to make that certificate work with Server 2022 services other than LDAP. I would like to extend the current approach to also cover ADFS and certificates for terminal servers. The script I wrote uses Posh-ACME to deploy the certificates on the server in the personal store; for the record, I pasted the code below. Services like RDP or Federated Sign-In don't seem to pick it up, though.

```
# Cloud Flare requires a simple API token, but we need to secure the string to keep it safe
$token = ConvertTo-SecureString 'thatisactuallysecret' -AsPlainText -Force
$pArgs = @{CFToken=$token}

# The ActiveDirectory PowerShell module is installed by default on DCs
$dc = Get-ADDomainController $env:COMPUTERNAME
$certNames = @($dc.HostName, $dc.Domain)

# This is optional, but usually a good idea.
$notifyEmail = 'dev@gmservice.app'

$certParams = @{
    Domain = $certNames
    DnsPlugin = 'Cloudflare'
    PluginArgs = $pArgs
    AcceptTOS = $true
    Install = $true
    Contact = $notifyEmail # optional
    Verbose = $true # optional
}
New-PACertificate @certParams
```
|ssl|active-directory|adfs2.0|windows-server-2022|acme|
I have some trouble in PrestaShop 1.6 using Smarty. I have an array, but its offsets are not reset for each product. So the first product, with attributes, has offsets 1,2,3,4, then the next product has offsets 5,6,7,8, etc.

The array `$combinations` looks like this:

    Smarty_Variable Object (3)
      ->value = Array (4)
        5 => Array (14)
          attributes_values => Array (1)
            1 => "S"
          attributes => Array (1)
            0 => 1
          price => 0
          specific_price => Array (0)
          ecotax => 0
          weight => 0
          quantity => 20
          reference => ""
          unit_impact => 0
          minimal_quantity => "1"
          date_formatted => ""
          available_date => ""
          id_image => -1
          list => "'1'"
        6 => Array (14)

I try to go through this array, but it does not work when I use an empty offset (it is inside a foreach):

    {$combinations[]['quantity']}

How can I tell it to use the first offset on the first iteration, then the second one automatically? This returns the following error:

> Fatal error: Cannot use [] for reading in /htdocs/tools/smarty/sysplugins/smarty_internal_templatebase.php(157) : eval()'d code on line 584

I cannot tell it which offset to use, because for each product the offset keeps increasing and is not reset to 0.
Smarty get offset of array
I'm trying to create a test to check the correct behaviour of my service object:

```ruby
def api_call_post(connection, message, url)
  pp message
  response = connection.post do |conn|
    conn.url url
    conn.body = message
    conn.headers = @headers
  end
  check_response(response)
end
...
```

This is the test:

```ruby
test "create a title" do
  body = {
    "name" => 'some name',
    "external_id" => '004',
    "title_type" => 'feature',
    "tags" => 'some tag'
  }.to_json
  puts body

  stub_request(:post, "some web")
    .with(body: body)
    .to_return(status: 201, body: "Created", headers: {})

  response = MovidaApi.new(payload).create_title

  assert response.success?
  assert_equal "Created", response.body
end
...
```

The problem comes when I include the `.with` in the stub (without it, it works OK). I printed statements and the outputs are exactly the same. The error is:

    Error:
    MovidaApiTest#test_create_a_title:
    WebMock::NetConnectNotAllowedError: Real HTTP connections are disabled. Unregistered request: POST https://staging-movida.bebanjo.net/api/titles with headers {'Accept'=>'application/json', 'Accept-Encoding'=>'gzip;q=1.0,deflate;q=0.6,identity;q=0.3', 'Content-Length'=>'0', 'Content-Type'=>'application/json', 'User-Agent'=>'Ruby'}

What am I doing wrong? Thanks!

I tried to use the snippet suggested, but it still doesn't work. I expect the test to pass.
I'm building a docker image and I have the node_modules folder included in the .dockerignore file. This is what my file looks like: ``` node_modules ``` This is what my Dockerfile looks like: ``` FROM node:20.6.0-alpine WORKDIR /app COPY package.json . RUN npm install COPY . . CMD ["npm", "start"] ``` The node_modules is located directly in the directory I'm building the image from. I even tried adding Dockerfile to .dockerignore to make sure it wasn't my .dockerignore file that wasn't working but it worked fine and it did ignore Dockerfile. I'm building my image using `docker build -t <image-tag> .` Can somebody please point me to where I'm going wrong? No matter what I do, when I run the sh command using `docker run -it <image-id> sh` on my container, I see node_modules
This is syntactically correct, but according to TS, this option is not supported: @t.Unique("user-hash-device", ["hash", "userId", "deviceId"], {background: true}) @t.Entity('vibe_trace') /// ^^^^^^ export class Trace { // ... } (the above is not quite right, since it doesn't accept `{background: true}` there). 1. is there a way to set a unique compound index to the background with mongodb? 2. if so, how to declare that with TypeORM?
How to place unique compound index into background with MongoDB?
|mongodb|typeorm|database-indexes|mongodb-indexes|mongo-index|
According to the [documentation][1], You can use this annotation: nginx.ingress.kubernetes.io/use-regex: "true" [1]: https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/
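A minimal Ingress manifest using that annotation might look like the sketch below; the resource names, regex path, and backend service are hypothetical, and `pathType: ImplementationSpecific` is what lets the NGINX controller apply the regex:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress            # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /api/v[0-9]+/.*  # regex path, enabled by the annotation
            pathType: ImplementationSpecific
            backend:
              service:
                name: example-service   # hypothetical backend service
                port:
                  number: 80
```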
I'm trying to create an AWS Custom Labels CloudFormation stack via the template provided by AWS:

https://ml-specialist-sa-demo-us-east-2.s3.us-east-2.amazonaws.com/custom-brand-detection/1.0.0/amazon-rekognition-custom-brand-detection.template

At first, I ran into the following issue, which was displayed in the console:

> Resource handler returned message: "The runtime parameter of nodejs10.x is no longer supported for creating or updating AWS Lambda functions. We recommend you use the new runtime (nodejs18.x) while creating or updating functions.

I updated the Node version to ^20 as well as the aws-sdk's. I rebuilt, deployed and attempted to create the stack, but it failed because a previous resource wouldn't delete, and the console stated the following reason:

> Received response status [FAILED] from custom resource. Message returned: Cannot find module 'aws-sdk' Require stack: - /var/task/lib/sagemaker/privateWorkforce.js - /var/task/lib/sagemaker/index.js - /var/task/index.js - /var/runtime/index.mjs.

I've also tried deleting the previous stacks, but I'm still getting this error. I've been looking around but can't seem to find an answer for this, and attempted to reach out to AWS for an updated template, but they provided a template that was also out of date, so I'm not sure what to do at this point. If anyone has suggestions or can point me in the right direction, I'd appreciate it.

Thanks, Nick

index.js:

```js
const PrivateWorkforce = require('./privateWorkforce');

exports.PrivateWorkforceConfiguration = async (event, context) => {
  try {
    const workteam = new PrivateWorkforce(event, context);
    return workteam.isRequestType('Delete')
      ? workteam.deleteResource()
      : workteam.isRequestType('Update')
        ? workteam.updateResource()
        : workteam.createResource();
  } catch (e) {
    e.message = `PrivateWorkforceConfiguration: ${e.message}`;
    throw e;
  }
};
```

privateWorkforce.js:

```js
const FS = require('fs');
const PATH = require('path');
const AWS = require('aws-sdk');
const mxBaseResponse = require('../shared/mxBaseResponse');

class PrivateWorkforce extends mxBaseResponse(class {}) {
  constructor(event, context) {
    super(event, context);
    /* sanity check */
    const data = event.ResourceProperties.Data;
    this.sanityCheck(data);
    this.$data = data;
    this.$cognito = new AWS.CognitoIdentityServiceProvider({
      apiVersion: '2016-04-18',
    });
    this.$sagemaker = new AWS.SageMaker({
      apiVersion: '2017-07-24',
    });
  }

  sanityCheck(data) {
    const missing = [
      'SolutionId',
      'UserPool',
      'UserGroup',
      'AppClientId',
      'TopicArn',
      'UserPoolDomain',
    ].filter(x => data[x] === undefined);
    if (missing.length) {
      throw new Error(`missing ${missing.join(', ')}`);
    }
  }

  get data() {
    return this.$data;
  }

  get solutionId() {
    return this.data.SolutionId;
  }

  get userPool() {
    return this.data.UserPool;
  }

  get userGroup() {
    return this.data.UserGroup;
  }

  get clientId() {
    return this.data.AppClientId;
  }

  get topicArn() {
    return this.data.TopicArn;
  }

  get userPoolDomain() {
    return this.data.UserPoolDomain;
  }

  get workteamName() {
    return `${this.userPoolDomain}-team`;
  }

  get cognito() {
    return this.$cognito;
  }

  get sagemaker() {
    return this.$sagemaker;
  }

  normalize(name) {
    return name.replace(/[^a-zA-Z0-9-]/g, '-');
  }

  async preconfigure() {
    await this.cognito.createUserPoolDomain({
      Domain: this.userPoolDomain,
      UserPoolId: this.userPool,
    }).promise();
    await this.cognito.updateUserPoolClient({
      ClientId: this.clientId,
      UserPoolId: this.userPool,
      AllowedOAuthFlows: [
        'code',
        'implicit',
      ],
      AllowedOAuthFlowsUserPoolClient: true,
      AllowedOAuthScopes: [
        'email',
        'openid',
        'profile',
      ],
      ExplicitAuthFlows: [
        'USER_PASSWORD_AUTH',
      ],
      CallbackURLs: [
        'https://127.0.0.1',
      ],
      LogoutURLs: [
        'https://127.0.0.1',
      ],
      SupportedIdentityProviders: [
        'COGNITO',
      ],
    }).promise();
  }

  async queryCurrentTeam() {
    const {
      Workteams,
    } = await this.sagemaker.listWorkteams({
      MaxResults: 100,
    }).promise();
    if (!Workteams.length) {
      return undefined;
    }
    const team = Workteams.shift();
    if (!team.MemberDefinitions || !team.MemberDefinitions.length) {
      return undefined;
    }
    const {
      CognitoMemberDefinition,
    } = team.MemberDefinitions.shift();
    return {
      UserPool: CognitoMemberDefinition.UserPool,
      ClientId: CognitoMemberDefinition.ClientId,
    };
  }

  async cognitoCreateGroup(userPool) {
    if (!userPool) {
      throw new Error('cognitoCreateGroup - userPool is null');
    }
    return this.cognito.createGroup({
      GroupName: this.userGroup,
      Description: `${this.solutionId} labeling workteam user group`,
      UserPoolId: userPool,
    }).promise();
  }

  async createTeam(current = {}) {
    if (current.UserPool) {
      await this.cognitoCreateGroup(current.UserPool);
    }
    const params = {
      Description: `(${this.solutionId}) labeling workteam`,
      MemberDefinitions: [{
        CognitoMemberDefinition: {
          UserPool: current.UserPool || this.userPool,
          ClientId: current.ClientId || this.clientId,
          UserGroup: this.userGroup,
        },
      }],
      WorkteamName: this.workteamName,
      NotificationConfiguration: {
        NotificationTopicArn: this.topicArn,
      },
      Tags: [
        {
          Key: 'SolutionId',
          Value: this.solutionId,
        },
      ],
    };
    await this.sagemaker.createWorkteam(params).promise();
    const {
      Workteam,
    } = await this.sagemaker.describeWorkteam({
      WorkteamName: this.workteamName,
    }).promise();
    return Workteam;
  }

  async postconfigure(team = {}) {
    if (!team.SubDomain) {
      throw new Error('postconfigure - SubDomain is null');
    }
    let template = PATH.join(PATH.dirname(__filename), 'fixtures/email.template');
    template = FS.readFileSync(template);
    template = template.toString().replace(/%URI%/g, `https://${team.SubDomain}`);
    await this.cognito.updateUserPool({
      UserPoolId: this.userPool,
      AdminCreateUserConfig: {
        AllowAdminCreateUserOnly: true,
        InviteMessageTemplate: {
          EmailMessage: template,
          EmailSubject: `You are invited by ${this.workteamName} to work on a labeling project.`,
        },
      },
    }).promise();
  }

  async createResource() {
    await this.preconfigure();
    const current = await this.queryCurrentTeam();
    const team = await this.createTeam(current);
    await this.postconfigure(team);
    this.storeResponseData('UserPool', (current && current.UserPool) || this.userPool);
    this.storeResponseData('ClientId', (current && current.ClientId) || this.clientId);
    this.storeResponseData('UserGroup', this.userGroup);
    this.storeResponseData('TeamName', this.workteamName);
    this.storeResponseData('TeamArn', team.WorkteamArn);
    this.storeResponseData('Status', 'SUCCESS');
    return this.responseData;
  }

  async deleteResource() {
    try {
      const {
        Workteam,
      } = await this.sagemaker.describeWorkteam({
        WorkteamName: this.workteamName,
      }).promise();
      /* delete workteam only if it exists */
      if ((Workteam || {}).WorkteamArn) {
        const {
          Success,
        } = await this.sagemaker.deleteWorkteam({
          WorkteamName: this.workteamName,
        }).promise();
        if (!Success) {
          throw new Error(`failed to delete Workteam, ${this.workteamName}`);
        }
      }
      /* delete user group */
      if ((Workteam || {}).MemberDefinitions) {
        const {
          UserGroup,
          UserPool,
        } = (Workteam.MemberDefinitions.shift() || {}).CognitoMemberDefinition || {};
        if (UserGroup && UserPool && this.userPool !== UserPool) {
          await this.cognito.deleteGroup({
            GroupName: UserGroup,
            UserPoolId: UserPool,
          }).promise();
        }
      }
    } catch (e) {
      console.error(e);
    }
    try {
      const response = await this.cognito.describeUserPoolDomain({
        Domain: this.userPoolDomain,
      }).promise();
      /* delete domain only if it exists */
      if (((response || {}).DomainDescription || {}).Domain) {
        await this.cognito.deleteUserPoolDomain({
          Domain: this.userPoolDomain,
          UserPoolId: this.userPool,
        }).promise();
      }
    } catch (e) {
      console.error(e);
    }
    this.storeResponseData('Status', 'DELETED');
    return this.responseData;
  }

  async updateResource() {
    await this.deleteResource();
    return this.createResource();
  }

  async configure() {
    if (this.isRequestType('Delete')) {
      return this.deleteResource();
    }
    if (this.isRequestType('Update')) {
      return this.updateResource();
    }
    return this.createResource();
  }
}

module.exports = PrivateWorkforce;
```

package.json:

```json
{
  "$schema": "http://json.schemastore.org/package",
  "name": "custom-resources",
  "version": "1.0.0",
  "description": "(Custom Brand Detection) AWS CloudFormation Custom Resource Lambda function",
  "main": "index.js",
  "private": true,
  "scripts": {
    "pretest": "npm install",
    "test": "mocha *.spec.js",
    "build:clean": "rm -rf dist && mkdir -p dist",
    "build:copy": "cp -rv index.js package.json lib dist/",
    "build:install": "cd dist && npm install --production",
    "build": "npm-run-all -s build:clean build:copy build:install",
    "zip": "cd dist && zip -rq"
  },
  "author": "aws-specialist-sa-emea",
  "license": "MIT-0",
  "dependencies": {
    "adm-zip": "^0.4.14",
    "mime": "^2.4.5",
    "@aws-sdk/client-sagemaker": "^3.0.0"
  },
  "devDependencies": {
    "core-lib": "file:../layers/core-lib"
  }
}
```
I'm trying to log all cassandra queries of a springboot application into a console. I'm using 3.11 version of Cassandra. Tried using Query Logger as shown in the code below, but facing issues while autowiring Cluster i.e., **could not autowire. No beans of type 'Cluster' found.** Can anyone please do let me know if there are any alternative ways of logging the Cassandra queries from the application side ? **CassandraConfig class:** @Autowired Cluster cluster; //getters & setters of cluster @Bean public QueryLogger queryLogger(Cluster cluster) { QueryLogger queryLogger = QueryLogger.builder() .build(); cluster.register(queryLogger); return queryLogger; } **application.yml:** logging.level.com.datastax.driver.core.QueryLogger.NORMAL: DEBUG
How to do Cassandra query logging in a Spring Boot application?
|spring-boot|cassandra|
|c#|.net|apache-kafka|confluent-kafka-dotnet|
I got it working with slight modification of what @LãNgọcHải 's suggested: ``` const volumeResetHandler = async () => { inputRef.current.value = 0; (inputRef.current as any).addEventListener('change', volumeChangeHandler); (inputRef.current as any).dispatchEvent(new Event('change', { bubbles: true })); } ``` Thanks a lot @LãNgọcHải . Only a small issue I have is I had a logic based on `event.isTrusted==true` which is now `false` for any explicitly triggering event. I would have to find some work around for that. I guess that's what `bubbles` is for, isn't it @LãNgọcHải ?
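On the `isTrusted` point: events constructed and dispatched from script always report `isTrusted === false`; only events generated by the user agent itself are trusted. Node's built-in `EventTarget`/`Event` mirror the browser API closely enough to sketch this (no React or refs here, just a plain target):

```javascript
// Programmatically dispatched events are never "trusted".
const target = new EventTarget();
let seen = null;

target.addEventListener('change', (event) => {
  seen = event; // capture the event the handler received
});

// Same pattern as dispatchEvent(new Event('change', { bubbles: true }))
target.dispatchEvent(new Event('change', { bubbles: true }));

console.log(seen.type, seen.isTrusted); // 'change' false
```

So any `event.isTrusted === true` logic will need a different signal (e.g. a flag you set yourself) for synthetic dispatches; `bubbles` only controls propagation, not trust.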
Error WebMock::NetConnectNotAllowedError in testing with stub using minitest in rails
|ruby-on-rails|testing|minitest|webmock|
Here is an example for both cases: public class PECSExample { public static void main(String[] args) { List<C> producerCList = new ArrayList<>(); List<D> producerDList = new ArrayList<>(); List<E> producerEList = new ArrayList<>(); producerCList.add(new E()); producerDList.add(new E()); producerEList.add(new E()); producerExtends(producerCList); producerExtends(producerDList); producerExtends(producerEList); List<Object> consumerObjectList = new ArrayList<>(); List<A> consumerAList = new ArrayList<>(); List<B> consumerBList = new ArrayList<>(); consumerSuper(consumerObjectList); consumerSuper(consumerAList); consumerSuper(consumerBList); } public static void producerExtends(List<? extends C> producerList) { System.out.println("Producer printing"); producerList.forEach(System.out::println); } public static void consumerSuper(List<? super C> consumerList) { consumerList.add(new C()); consumerList.add(new D()); consumerList.add(new E()); System.out.println("Consumer printing"); consumerList.forEach(System.out::println); } } class A {} class B extends A {} class C extends B {} class D extends C {} class E extends D {} Output: Producer printing collections.E@6d311334 Producer printing collections.E@682a0b20 Producer printing collections.E@3d075dc0 Consumer printing collections.C@448139f0 collections.D@7cca494b collections.E@7ba4f24f Consumer printing collections.C@3b9a45b3 collections.D@7699a589 collections.E@58372a00 Consumer printing collections.C@4dd8dc3 collections.D@6d03e736 collections.E@568db2f2
Your function isn't updating `a` to point to the string; in your `main` function `a` is still `NULL` after the call to `fn`. Remember that C passes all function arguments by value; when you call `fn` the argument `a` is evaluated and the result of that evaluation (`NULL`) is *copied* to the formal argument `n`. `a` and `n` are completely different objects in memory and changes to one are not reflected in the other.

In order for `fn` to write a new value to `a` you must pass a *pointer* to `a`:
```
void fn( char **n )
{
  *n = "hello, world"; // no need for a separate variable
}

int main( void )
{
  ...
  fn( &a );
  ...
}
```
Alternately, you can have `fn` return the new pointer value and assign it to `a`:
```
char *fn( void )
{
  return "hello, world";
}

int main( void )
{
  char *a = fn();
  ...
}
```
although this only works if you return a pointer to a string literal, static variable, or the result of a function that returns a pointer like `malloc` or `fopen` or something like that; you can't return a pointer to a local variable like
```
char *fn( void )
{
  char buf[] = "hello, world";
  return buf;
}
```
because the local variable `buf` *ceases to exist* once the function returns and that pointer value is now *invalid*.

General rule:
```
void update( T *ptr )    // for any non-array type T
{
  *ptr = new_T_value();  // writes a new value to the thing ptr points to
}

int main( void )
{
  T var;
  update( &var );        // write a new value to var
  ...
}
```
If we replace `T` with a pointer type `P *`, we get
```
void update( P **ptr )
{
  *ptr = new_Pstar_value();
}

int main( void )
{
  P *var;
  update( &var );
  ...
}
```
The semantics are exactly the same; `ptr` still has an extra layer of indirection, we still use `&var` as the argument, etc.
|css|tailwind-css|border|rounded-corners|
I need to add IP restrictions onto my Cloud Function, and before you mention it, using a SA or other forms of auth is out of the question..! ;)

I am having trouble, however. My Cloud Function is deployed all fine, and I've set up a VPC. This VPC has one subnet, deployed in the same region (europe-west1), with an internal IP range 10.0.0.0/28 and no external IP ranges. There is a firewall, but I've set it to 0.0.0.0/0 to allow all requests through, on all ports as well. I've added a Serverless VPC Access connector as well, connected to the subnet mentioned above.

However, when I run my curl command to trigger my Cloud Function, I am getting this back:

```
<html><head>
<meta http-equiv="content-type" content="text/html;charset=utf-8">
<title>404 Page not found</title>
</head>
<body text=#000000 bgcolor=#ffffff>
<h1>Error: Page not found</h1>
<h2>The requested URL was not found on this server.</h2>
<h2></h2>
</body></html>
```

I'm confident it's something to do with IP or the VPC setup, because as soon as I switch my Cloud Function connections to 'allow all traffic' I am able to ping it just fine. (PS: it's currently on 'Allow internal traffic only'.)

Please let me know what I am doing wrong, or what steps I've missed. I would appreciate any help; devops is not my forte ;)
Connecting to a Cloud Function through a VPC returns 404
|google-cloud-platform|google-cloud-functions|vpc|
Since the `data` parameter of the `insertDataToEndList` function is allocated on the stack, its address is no longer referencing allocated memory once that function returns. Yet you've saved this address in `res->dataPtr = data;`. This will lead to undefined behaviour when you read the memory pointed to by that `dataPtr` (e.g. when later you print that list or do the merge).

So don't use `&data`. Instead pass `data` to `createNewListNode` with this call:

```
newTail = createNewListNode(data, NULL);
```

And then change `createNewListNode` to fix this bug:

```
ListNode* createNewListNode(int data, ListNode* next) // First param is int
{
    ListNode* res = malloc(sizeof(*res));
    res->dataPtr = malloc(sizeof(int)); // Allocate the memory for the int
    (*res->dataPtr) = data; // Copy the int
    res->next = next;
    return res;
}
```

Some other remarks:

* [Don't cast what `malloc` returns](https://stackoverflow.com/q/605845/5459839)
* Don't compare the result of `isEmptyList(lst)` with `true`. Just do:

```
if (isEmptyList(lst))
```

The code you received shows several bad practices, and makes one doubt the quality of the course material you are looking at. For instance:

* [`void main()` is wrong](https://stackoverflow.com/q/18928279/5459839), and like a comment says, it's *"...an indication that you're using a textbook written by someone who doesn't know the C language very well."*.
* The `freeList` function takes a `List*` argument, which suggests that the memory allocated for the list (not only its nodes) should be freed, yet this makes no sense, since the `main` function has declared them as `List` variables, so `free` should not be called on the list itself. It would be more consistent if the involved lists were all allocated on the heap.
* Storing `int` values in dynamically allocated memory for one int is useless and a bad decision.
When the trigger is written like this:

    ALTER TRIGGER dbo.TG_a
    ON dbo.a
    FOR DELETE
    AS
    BEGIN
        SELECT * INTO dbo.b FROM deleted
    END

and I delete multiple rows in the edit tab, I get the error: "there is already an object named 'b'".

When the trigger is written like this:

    ALTER TRIGGER dbo.TG_a
    ON dbo.a
    FOR DELETE
    AS
    BEGIN
        INSERT dbo.b SELECT * FROM deleted
    END

and I delete multiple rows in the edit tab, I get multiple duplicate values.

What I have noticed:

- The trigger fires once per row when I delete multiple rows in the edit tab.
- If I delete multiple rows with a query, the trigger fires once for all rows.
- This only happens on delete, because you cannot insert or update multiple rows in the edit tab.

Question: I usually edit values in the edit tab, so I need some way to make the trigger fire once for all rows deleted in the edit tab.
SQL Server: delete trigger run for each deleted row when delete multi row on edit tab
|sql-server|triggers|sql-delete|
The following code creates a cosecant-squared radiation pattern. There is a `phases` and an `amplitude_norm_V` vector, which are the coefficients of the AF expression. I am looking to create an iterative GA algorithm that, given known AF data, will recreate the phase and amplitude coefficients. I was given a link to a possible method, shown below. How can I implement it in MATLAB? Thanks.

https://en.wikipedia.org/wiki/Least_mean_squares_filter#:%7E:text=Least%20mean%20squares%20(LMS)%20algorithms,desired%20and%20the%20actual%20signal

[enter image description here](https://i.stack.imgur.com/SfuNl.png)

Here is the MATLAB code I wrote:

    amplitude_norm_V=[0.3213,0.4336,0.7539,1,0.7817,0.3201,0.32,0.3261];
    x=sqrt(1/(sum(amplitude_norm_V.^2)));
    real_voltage=amplitude_norm_V.*x;
    real_power=(real_voltage.^2);
    sum(real_power);
    phases=[129.9121,175.4215,-144.6394,-116.9071,-93.7603,-60.0165,55.2841,89.4477]
    norm_phases=phases-175.4215;
    theta=linspace(0,pi,1000);
    theta_0=pi*(135.315/180);
    f=5.6;
    N=10;
    lambda=300/f;
    k=(2*pi)/lambda;
    d=0.8*lambda;
    total=0;
    tot=0;
    for z=1:8
        total=total+AF;
        tot=tot+AF.^2;
    end
    plot((theta/pi)*180,20*log10(total));
iterative GA optimization algorithm
|arrays|matlab|genetic-algorithm|antenna-house|optim|
In ***Programming C# 10.0*** by **Ian Griffiths**, in the seventh chapter, it is stated that:

> A root is a storage location, such as a local variable, that could contain a reference and is known to have been initialized, and that your program could use at some point in the future without needing to go via some other object reference. Not all storage locations are considered to be roots. If an object contains an instance field of some reference type, that field is not a root, because before you can use it, you’d need to get hold of a reference to the containing object, and it’s possible that the object itself is not reachable. However, a reference type static field is a root reference, because the program can read the value in that field at any time.

Additionally, in **[this document][1]** by **Microsoft**, it is said that:

> An application's roots include static fields, local variables on a thread's stack, CPU registers, GC handles, and the finalize queue. Each root either refers to an object on the managed heap or is set to null. The garbage collector can ask the rest of the runtime for these roots. The garbage collector uses this list to create a graph that contains all the objects that are reachable from the roots.

According to ***C# 12 in a Nutshell*** by **Joseph Albahari**:

> A root is something that keeps an object alive. If an object is not directly or indirectly referenced by a root, it will be eligible for garbage collection.

The figure below from the same book can help in understanding:

[![enter image description here][2]][2]

  [1]: https://learn.microsoft.com/en-us/dotnet/standard/garbage-collection/fundamentals
  [2]: https://i.stack.imgur.com/aUiou.png
How to adjust subtitle font size in Swift macOS?
Some other people were having the same issue:

- [Pylance does not show auto import information from site-packages directory #3281](https://github.com/microsoft/pylance-release/issues/3281)
- [Pylance not indexing all files and symbols for sqlalchemy even with package depth of 4](https://github.com/microsoft/pylance-release/issues/4637)

Visual Studio Code's Pylance implementation seems to have some internal limits that may prevent indexing all files. However, this was not the case for me. Instead, Pylance was somehow corrupted. Running `Pylance: Clear all persistent indices` from the command palette fixed the issue for me. After this, Pylance behaved as expected.
To set a pipeline description in a Jenkinsfile, you can try, in theory, the `properties` step, assuming you have the [Project Description Setter](https://plugins.jenkins.io/project-description-setter/) Jenkins plugin installed. The `properties` step allows you to define job properties, which include the description of the pipeline itself through the plugin, using the [`org.jenkinsCi.plugins.projectDescriptionSetter.DescriptionSetterWrapper`](https://javadoc.jenkins-ci.org/plugin/project-description-setter/org/jenkinsCi/plugins/projectDescriptionSetter/DescriptionSetterWrapper.html). ```groovy pipeline { agent any options { buildDiscarder(logRotator(numToKeepStr: "30")) } properties([ [class: 'org.jenkinsCi.plugins.projectDescriptionSetter.DescriptionSetterWrapper', description: 'That is the pipeline description.'] ]) stages { stage('Example') { steps { echo 'Hello World' } } } post { // Post actions if any } } ``` But [`jenkinsci/project-description-setter-plugin` PR #1](https://github.com/jenkinsci/project-description-setter-plugin/pull/1) casts doubt on that approach: > I doubt you really want to use this plugin as is from Pipeline. Better to just do something like Groovy Post-Build does and permit a statement to set the description to some string. If you want to readFile that is your business. The `Groovy Post-Build` plugin allows users to execute a Groovy script in the Post-Build step of a job, which can then modify the build or the project, including setting the build or project description. ```groovy pipeline { agent any options { buildDiscarder(logRotator(numToKeepStr: "30")) } stages { // Your build stages } post { always { script { // Assuming "this" is a reference to the job def job = Jenkins.instance.getItemByFullName(env.JOB_NAME) // Set the description of the job (pipeline) job.description = 'This is the pipeline description.' } } } } ```
I saw this [thread](https://www.reddit.com/r/unrealengine/comments/gpc2hh/ghost_trail_motion_trail_removal/) where they describe an issue that I've been having where the machine struggles with calculating AA so it leaves a ghost trail. I want to record the viewport and use these as videos so no real time calculations are needed. Would this be solved by rendering/baking? I did a quick render test but the issue was still there but maybe I missed something. Before I go that route I wanted to know if it's possible.
I am trying to place shadows in one of the div and it's not showing up. Here is one div where I am trying to implement the shadow: #intro { padding: 0px; margin: 0px auto; width: 100%; float:inherit; overflow: hidden; height: 800px; position:inherit; background-color: #00b3e1;; box-shadow: 0 0 50px rgba(0,0,0,0.8); }
How about this? Not quite as seamless but a bit closer to what you want. diff = df["Input"] - df["Input"].shift(1) diff.columns = pd.MultiIndex.from_product([["Diff"], diff.columns]) df = pd.concat([df, diff], axis=1) Regarding the second part of your question (really a separate question), the problem is that `loc` can takes a scalar, an array or list, or a DataFrame with matching indices. Pandas sees the `X, Y, Z` of `Diff` as different from the `X, Y, Z` of `Input`, and therefore sees no match. You can make it work by converting the dataframe to a numpy array: for g in df.groupby(("Meta","ID")): df.loc[g[1].index, "Diff"] = (g[1]["Input"] - g[1]["Input"].shift(1)).to_numpy() (You can also use `values` instead of `to_numpy()`, but it is not recommended, see [here](https://stackoverflow.com/a/54324513/6220759) as to why.)
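For reference, here is the first snippet run end-to-end on a tiny hypothetical frame (the column names are invented here, mimicking the question's `("Input", col)` layout):

```python
import pandas as pd

# Hypothetical frame with a two-level column index: ("Input", "X"), ("Input", "Y")
df = pd.DataFrame({
    ("Input", "X"): [1, 3, 6],
    ("Input", "Y"): [2, 5, 9],
})

# Row-wise difference of the "Input" block against the previous row
diff = df["Input"] - df["Input"].shift(1)

# Re-home the result under a new top-level "Diff" label and attach it
diff.columns = pd.MultiIndex.from_product([["Diff"], diff.columns])
df = pd.concat([df, diff], axis=1)

print(df["Diff"])
#      X    Y
# 0  NaN  NaN
# 1  2.0  3.0
# 2  3.0  4.0
```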
I'm using [flutter_carousel_widget](https://pub.dev/packages/flutter_carousel_widget) package to make a carousel with this code: FlutterCarousel( options: CarouselOptions( physics: const NeverScrollableScrollPhysics(), controller: _carouselController, onPageChanged: (index, reason) { currentView = index + 1; //setState is called to update the current page with respect to the current view setState(() {}); }, height: 50.0, indicatorMargin: 10.0, showIndicator: true, slideIndicator: CircularWaveSlideIndicator(), viewportFraction: 0.9, ), items: swipeList.map((i) { return const Text(''); }).toList(), ), The above code outputs this kind of Carousel slider: ![Result](https://i.stack.imgur.com/CY9IY.jpg) But I would like to change the look of Corousel slider like below: ![Expected](https://i.stack.imgur.com/JcIOz.jpg)
I have a .NET 6 application that my client has asked me to deploy on AWS Fargate and expose via a Rest API Gateway. I need to change the application context root to be /<stage name> instead of / I have the following Dockerfile so far ``` FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base WORKDIR /WebAdmin EXPOSE 80 EXPOSE 443 ARG API_GATEWAY_STAGE_NAME RUN echo "API_GATEWAY_STAGE_NAME is $API_GATEWAY_STAGE_NAME" FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build ARG BUILD_CONFIGURATION=Release WORKDIR /src COPY ["MyApp.Web.Admin/MyApp.Web.Admin.csproj", "MyApp.Web.Admin/"] RUN dotnet restore "./MyApp.Web.Admin/./MyApp.Web.Admin.csproj" COPY . . WORKDIR "/src/MyApp.Web.Admin" RUN sed -i "s/~\//\/${API_GATEWAY_STAGE_NAME}\//g" ./Views/Shared/_Layout.cshtml RUN more ./Views/Shared/_Layout.cshtml RUN dotnet build "./MyApp.Web.Admin.csproj" -c $BUILD_CONFIGURATION -o /WebAdmin/build FROM build AS publish ARG BUILD_CONFIGURATION=Release RUN dotnet publish "./MyApp.Web.Admin.csproj" -c $BUILD_CONFIGURATION -o /WebAdmin/publish /p:UseAppHost=false FROM base AS final WORKDIR /WebAdmin COPY --from=publish /WebAdmin/publish . ENTRYPOINT ["dotnet", "MyApp.Web.Admin.dll"] ``` How can I go about changing the context root of the application via either the Dockerfile or in code? I am running into issues where static assets such as images / css files are not coming back with the correct path and causing errors. The client does not have a DNS I can use yet so a Custom Domain Name is out of the question for now. Any assistance on how I can change the context path to have the application running for a demo would be much appreciated
I think you need to set an absolute path to the **resource**:

    const path = require('path');

    ...

    resource = createAudioResource(path.join(__dirname, 'sounds', 'alert.mp3'));
You can use: ```ts switch (category) { case "fruit": food = food as Fruit return food case "grain": food = food as Grain return food case "meat": food = food as Meat return food } ```
You need to update or add these in your environment variables as follows.

In your `.env.production` or `.env.local` file:

    NEXTAUTH_URL=http://YOUR_LOCAL_OR_DEPLOYED_HOST_URL:HOST_PORT
    AUTH_TRUST_HOST=http://YOUR_LOCAL_OR_DEPLOYED_HOST_URL:HOST_PORT

The result will look like:

    NEXTAUTH_URL=http://192.152.1.1:3000
    AUTH_TRUST_HOST=http://192.152.1.1:3000
I am quite new to deep learning/machine learning, and I have been trying to use my Apple M2 to accelerate training for my CNN in a Jupyter notebook. However, I am finding that despite using `mps`, it performs much slower per epoch (1 minute 30-40 seconds) than the Google Colab CPU, which takes about 40 seconds per epoch. I'm not sure why this is the case and was wondering if someone could help me understand why this might be, or what I might be doing wrong.

I am checking the availability and PyTorch version and selecting the device accordingly:

```
# Check PyTorch has access to MPS (Metal Performance Shader, Apple's GPU architecture)
print(f"Is MPS (Metal Performance Shader) built? {torch.backends.mps.is_built()}")
print(f"Is MPS available? {torch.backends.mps.is_available()}")

# Set the device
device = "mps" if torch.backends.mps.is_available() else "cpu"
print(f"Using device: {device}")
print("version : ", torch.__version__)

======================================
Is MPS (Metal Performance Shader) built? True
Is MPS available? True
Using device: mps
version :  2.2.2
```
PyTorch training on M2 GPU slower than Colab CPU
|machine-learning|deep-learning|jupyter-notebook|conv-neural-network|
That way it works. I think you put `tearoff=0` on `menubar` instead of `fileMenu`. Putting `tearoff=0` on `menubar` won't affect `fileMenu`, so you need to pass `tearoff=0` to each specific `tk.Menu()`:

    import tkinter as tk

    window = tk.Tk()
    window.geometry("800x600")

    menubar = tk.Menu(window)
    window.config(menu=menubar)

    fileMenu = tk.Menu(menubar, tearoff=0)
    fileMenu.add_command(
        label="Exit",
        command=window.destroy,
    )

    menubar.add_cascade(label="File", menu=fileMenu, underline=0)

    window.mainloop()
I was trying to connect the BNO08x IMU breakout board to a Raspberry Pi Pico (RP2040). I tried connecting it to both I2C ports, but it didn't work. I am using the official Adafruit BNO08x library. Below is the part of the example code included in the library that I was trying to run:

    #include <Adafruit_BNO08x.h>

    // For SPI mode, we need a CS pin
    #define BNO08X_CS 10
    #define BNO08X_INT 9

    // For SPI mode, we also need a RESET
    //#define BNO08X_RESET 5
    // but not for I2C or UART
    #define BNO08X_RESET -1

    Adafruit_BNO08x bno08x(BNO08X_RESET);
    sh2_SensorValue_t sensorValue;

    void setup(void) {
      Serial.begin(115200);
      while (!Serial)
        delay(10); // will pause Zero, Leonardo, etc until serial console opens

      Serial.println("Adafruit BNO08x test!");

      // Try to initialize!
      if (!bno08x.begin_I2C()) {
        // if (!bno08x.begin_UART(&Serial1)) { // Requires a device with > 300 byte UART buffer!
        // if (!bno08x.begin_SPI(BNO08X_CS, BNO08X_INT)) {
        Serial.println("Failed to find BNO08x chip");
        while (1) {
          delay(10);
        }
      }
      Serial.println("BNO08x Found!");
    }

I changed the framework from Arduino to Earlephilhower too; it didn't work. I soon realised that I need to change the I2C pin definitions, but I'm not sure how to do that. This was the message I got in the serial monitor:

    1Adafruit BNO08x test!
    I2C address not found
    Failed to find BNO08x chip
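On the Earlephilhower arduino-pico core, the default `Wire` pins can be remapped before calling `Wire.begin()`. A hedged sketch of that approach (device-only code; GP0/GP1 and the 0x4A address are just assumed defaults here, use whichever GP pins your sensor is actually wired to, and 0x4B if the address jumper is set):

```cpp
#include <Wire.h>
#include <Adafruit_BNO08x.h>

#define BNO08X_RESET -1
Adafruit_BNO08x bno08x(BNO08X_RESET);

void setup(void) {
  Serial.begin(115200);
  while (!Serial)
    delay(10);

  // Remap the default Wire (I2C0) instance to the GP pins the sensor is
  // wired to. This must happen before Wire.begin(); GP0/GP1 are the
  // default I2C0 pins on the Pico, so substitute your own wiring here.
  Wire.setSDA(0);
  Wire.setSCL(1);
  Wire.begin();

  // Pass the remapped Wire instance explicitly; 0x4A is the usual
  // BNO08x I2C address on the Adafruit breakout.
  if (!bno08x.begin_I2C(0x4A, &Wire)) {
    Serial.println("Failed to find BNO08x chip");
    while (1)
      delay(10);
  }
  Serial.println("BNO08x Found!");
}

void loop() {}
```

If the sensor sits on the second bus (I2C1), the same pattern applies with `Wire1.setSDA(...)`, `Wire1.setSCL(...)` and `&Wire1`.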
Does rendering/baking get rid of any Antialiasing ghosting?