I have this site: http://dl.dg-site.com/functionmentes/ There is a div with the color `#D9D9D9`. The CSS code:

```css
#full_bar {
    background: #D9D9D9;
    width: 100%;
    height: 100px;
}
```

I want the div to span the full width of the site and be glued to the footer. How can I do this? I use a theme in WordPress. Thanks in advance!
I cannot create a comment, but I think this has something to do with the encoding of the data when sending the card to Teams. It looks like Microsoft is checking the actual byte length instead of how many characters the text consists of. When no encoding is provided, `StringContent` uses **UTF-8** to get the bytes. Providing, for example, four plain ASCII characters to this encoding yields a byte size of four, but providing the Japanese text `テスト` (translated from `test`) yields a size of nine. Basically, all non-ASCII characters consist of more than one byte (when processed by an encoding supporting them).

I'm also facing the task of sending dynamic content via an incoming webhook to a Teams channel and had the same question, because I already encountered a similar issue in another part of O365. Groups are allowed to have a description with a maximum length of 1024, but it looks like Microsoft also counts the bytes instead of the characters there, even if the description is part of a larger JSON body sent to Graph. A description of 1024 characters with at least one special character will cause an error. I assume the actual source of the problem is the backend service which Graph calls in order to set the description. But that is another story and not part of the question of this thread.
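The mismatch is easy to reproduce; here is a minimal sketch in Python (the behaviour is the same in any language that encodes to UTF-8 before counting):

```python
# ASCII characters encode to 1 byte each in UTF-8, but most Japanese
# characters encode to 3 bytes each, so character count and byte count diverge.
ascii_text = "test"
japanese_text = "テスト"  # "test" in Japanese

print(len(ascii_text), len(ascii_text.encode("utf-8")))        # 4 4
print(len(japanese_text), len(japanese_text.encode("utf-8")))  # 3 9
```

A limit enforced on the byte level will therefore reject strings that look short enough when counted in characters.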
I am a beginner just starting with Swift. I am trying to write code that generates a random number and displays it on two labels on my screen. After that, I'd like to calculate the sum of those two randomly generated numbers. It gives an error message when I calculate the sum though.

```
@IBOutlet weak var number1: UILabel!
@IBOutlet weak var number2: UILabel!
@IBOutlet weak var number3: UILabel!

let randomInt1 = Int.random(in: 1..<10)
let randomInt2 = Int.random(in: 1..<10)

@IBAction func StartButton(_ sender: Any) {
    number1.text = String(randomInt1)
    number2.text = String(randomInt2)
    var answer1 = [Int]()
    answer1[1] = randomInt1 + randomInt2
```

Note: I tried to use this code but it gives an error message when I try to run it.
How do I generate random integers, then add those random integers together?
It is prompting you to change the function name **isMultiple**, which is camelCase, to **is_multiple**, which is snake_case. That's the default lint in Rust. To avoid the warning you can either:

1. Rename the method **isMultiple** to **is_multiple**, or
2. Add the **#[allow(non_snake_case)]** attribute just before your method, like this:

```
#[allow(non_snake_case)]
fn isMultiple() {
    println!("print");
}
```
We have built some function apps as part of a product and deployed them to a number of customers. Each customer has their own copy on their own Azure subscription. The function apps are Http Trigger which, for the most part, receive webhook data, do some lookup and manipulation of the data and add the result to the database. There are also some that lookup the data and return to the app. They could be taking a while to return data. We have seen very infrequently that the function apps are not reachable. i.e. if we call http://<my-app>.azurewebsites.net we get a 503 error. This is usually remedied by restarting them from within the Azure console. However there is one customer whose apps are regularly failing. Would this error be because of the code within the function app or is there a problem with the infrastructure? Is there a way of determining this?
I am currently making a WYSIWYG editor and need to support rotation by pressing the "r" key while dragging an element with the mouse. I noticed that the keydown event is not fired when I press a key while dragging something with the mouse.

```
document.addEventListener('keydown', function(event) {
    console.log(`Key pressed: ${event.key}`);
});

document.addEventListener('mousedown', function(event) {
    console.log(`Mouse button pressed: ${event.button}`);
});
```

This is example code you can try on your own (make something draggable as well). So my question: is it even possible to do this in the DOM, or is it an unavoidable limitation?
Javascript, DOM, detect keypresses while dragging an element
|javascript|dom|
Maybe you can try adding or changing the `curve` argument value in `TweenAnimationBuilder`. Here is an example:

```
tween: Tween(begin: 24.sp, end: 16.sp),
curve: Curves.easeInOut,
```

By default, the curve is `Curves.linear`, so you can try changing the curve element. Try referring to the `Curves` library.
It was actually an issue with the programming interface which was solved by opening an issue.
Use `ADD_MONTHS`: ```lang-sql SELECT ADD_MONTHS(date_column, -number_of_months_column) FROM table_name; ``` *Note: for adding/subtracting months, use the `ADD_MONTHS` function rather than an `INTERVAL` literal as `ADD_MONTHS` will handle cases such as `ADD_MONTHS(DATE '2024-01-31', 1)` where there is no 31st February while `DATE '2024-01-31' + INTERVAL '1' MONTH` would fail.* If your date always occurs in every month then you can multiply by a fixed interval: ```lang-sql SELECT date_column - number_of_months_column * INTERVAL '1' MONTH FROM table_name; ``` If you have the sample data: ```lang-sql CREATE TABLE table_name (id, date_column, number_of_months_column) AS SELECT 1, DATE '2024-01-01', 3 FROM DUAL UNION ALL SELECT 2, DATE '2024-03-31', 1 FROM DUAL; ``` Then using `ADD_MONTHS` outputs: | ADD\_MONTHS(DATE\_COLUMN,-NUMBER\_OF\_MONTHS\_COLUMN) | | :------------------------------------------------| | 2023-10-01 00:00:00 | | 2024-02-29 00:00:00 | Using an `INTERVAL` outputs the same for the first row but for the second row would give the exception: ``` error ORA-01839: date not valid for month specified ``` [fiddle](https://dbfiddle.uk/rojKi60n)
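To make the day-clamping behaviour of `ADD_MONTHS` concrete, here is a small Python sketch of the same rule (an illustration only, not Oracle code — the `add_months` helper is hypothetical):

```python
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    """ADD_MONTHS-style arithmetic: clamp the day to the target month's length."""
    total = d.month - 1 + months
    year = d.year + total // 12
    month = total % 12 + 1
    # e.g. Mar 31 minus one month -> Feb only has 29 days in 2024, so clamp to 29
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

print(add_months(date(2024, 1, 1), -3))   # 2023-10-01
print(add_months(date(2024, 3, 31), -1))  # 2024-02-29
```

These two calls reproduce the two rows of the `ADD_MONTHS` output above, including the clamped end-of-month case that the plain `INTERVAL` arithmetic rejects.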
How to print all columns from a csv file
|python|pandas|csv|
Invoke-Command works only when a user is logged on
How to display a notification message while installing with Inno Setup if the application is already installed on the machine?
Assuming *name* is an attribute on your instance *test* `getattr(test, 'name')` should return the corresponding value. Or `test.__dict__['name']`. You can read more about `getattr()` here: [The Python getattr Function][1] [1]: https://web.archive.org/web/20200703234823/http://effbot.org/zone/python-getattr.htm
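A minimal sketch (the `Test` class and its `name` attribute are made up for illustration):

```python
class Test:
    def __init__(self):
        self.name = "example"

test = Test()

# Both look up the attribute dynamically by its string name.
print(getattr(test, "name"))           # example
print(test.__dict__["name"])           # example

# getattr also accepts a default, returned when the attribute is missing.
print(getattr(test, "missing", None))  # None
```

Prefer `getattr()` over `__dict__` access in general: it also finds class attributes, properties, and slots, while `__dict__` only sees what is stored directly on the instance.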
POJOs are often used as Models/DTOs for inserting/updating data in a database. They need to be validated (with a Validator) to make sure they only contain the expected fields for the use case:

- insert (all fields must be validated and inserted)
- update (only changed/modified fields must be validated and updated in the database)

To make this possible, it would be helpful if the Java language already contained a mechanism to track 'modified' fields in POJOs. A field counts as 'modified' when:

- a setter for the field has been called.

Can someone help me solve such setter-call detection, with as little boilerplate as possible?

---

Here is an example of how I solved it with boilerplate, by letting the POJOs extend a common class:

```java
public abstract class AbstractDTO {

    private Map<String, Object> modifiedFields = new HashMap<>();

    public AbstractDTO() {
    }

    @JsonIgnore
    @XmlTransient
    protected void setAt(String key, Object value) {
        this.modifiedFields.put(key, value);
    }

    @JsonIgnore
    @XmlTransient
    public void resetModifiedFields() {
        this.modifiedFields.clear();
    }

    @JsonIgnore
    @XmlTransient
    public Map<String, Object> getModifiedFields() {
        return this.modifiedFields;
    }
}
```

Impl:

```java
public class Product extends AbstractDTO {

    private Long productId;

    public Long getProductId() {
        return this.productId;
    }

    public void setProductId(Long productId) {
        this.productId = productId;
        this.setAt("productId", productId);
    }
}
```

That way I can now check for modified fields:

```java
boolean isModified = pojo.getModifiedFields().containsKey(field.getName());
```
Java Pojos - Setter-Call (Field Touched) Detection
Looks like the old SVG was **post-processed** to make everything a ``path``. The telltale sign is the ``rect``

![](https://i.imgur.com/xR9gNh7.png)

where all settings (clear to read) ended up in the ``d`` path

![](https://i.imgur.com/cE0X7MS.png)

https://jakearchibald.github.io/svgomg/ can do that conversion.
If you need to update the mac address of the `firetvs` elements only under certain conditions, for example that `name` equals `mbr`, I'm providing an alternative solution with streams. Basically, the `firetvs` elements are streamed via an `Iterable`, then filtered by keeping only the ones with `mbr` as their `name`, and finally updated with the correct mac address. In the following sample, I've added only a couple of elements within `firetvs` to not clutter the code. However, here at [oneCompiler](https://onecompiler.com/java/428j9qtkj) I've included more elements showing the desired output.

```
import java.util.stream.StreamSupport;

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ArrayNode;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class Main {

    public static void main(String[] args) throws JsonProcessingException {
        String json = "{\n" +
                "    \"lastUpdated\": \"\",\n" +
                "    \"firetvs\": [\n" +
                "        {\n" +
                "            \"name\": \"lr\",\n" +
                "            \"ip\": \"\",\n" +
                "            \"mac\": \"\",\n" +
                "            \"token\": \"\"\n" +
                "        },\n" +
                "        {\n" +
                "            \"name\": \"mbr\",\n" +
                "            \"ip\": \"\",\n" +
                "            \"mac\": \"\",\n" +
                "            \"token\": \"\"\n" +
                "        }\n" +
                "    ]\n" +
                "}";

        String mac = "C1:41:Q4:E8:S1:98:V1";

        ObjectMapper objectMapper = new ObjectMapper();
        ObjectNode rootNode = objectMapper.readValue(json, ObjectNode.class);
        ArrayNode firetvsNode = (ArrayNode) rootNode.get("firetvs");

        // Creating an Iterable from the Iterator of the firetvs' nodes
        Iterable<JsonNode> iterable = () -> firetvsNode.iterator();

        StreamSupport.stream(iterable.spliterator(), false)              // Streaming the firetvs' nodes
                .filter(node -> node.get("name").asText().equals("mbr")) // Keeping only the elements with mbr as their name
                .forEach(node -> ((ObjectNode) node).put("mac", mac));   // Updating the mac address of each element

        String jsonMacReplaced = objectMapper.writerWithDefaultPrettyPrinter().writeValueAsString(rootNode);
        System.out.println(jsonMacReplaced);
    }
}
```
The dynamic page is working but does not fetch the data from any API. I'm encountering an issue with my Next.js dynamic page that's supposed to fetch data from an API but doesn't seem to be doing so. Despite implementing the code to fetch data, the page remains empty. I've checked the API endpoint, and it's functional as it returns data when accessed directly. Could someone please help me debug this problem? Below is the code snippet of my dynamic page.

Path: src\app\Stocks\[ticker]\page.js

```
"use client";

import { useRouter } from "next/navigation";
import { useEffect, useState } from "react";

const StockDetailsPage = () => {
  const router = useRouter();
  const [stockData, setStockData] = useState([]);

  useEffect(() => {
    fetchData();
  }, []);

  const fetchData = async () => {
    try {
      const apiUrl = "http://localhost:4000/stocks";
      const response = await fetch(apiUrl);
      if (!response.ok) {
        throw new Error("Failed to fetch data");
      }
      const data = await response.json();
      console.log("API Response:", data); // Log the API response
      if (!data || !data.stocks || !Array.isArray(data.stocks)) {
        throw new Error("Invalid data format");
      }
      setStockData(data.stocks); // Assuming the data is structured with a "stocks" array
    } catch (error) {
      console.error("Error fetching data:", error);
      setStockData([]);
    }
  };

  return (
    <>
      <div className="bg-background dark:bg-Background flex flex-col pl-[15px] pr-[15px] sm:pl-[100px] sm:pr-[40px] pt-2 sm:pt-4">
        <h1 className="mb-2">Stock Data</h1>
        <div className="rounded-2xl mb-4 sm:mb-8 dark:bg-gradient-to-tl from-[#101010] dark:to-[#161616] border dark:border-[#252525] bg-white flex-grow text-center">
          {stockData.map((stock) => (
            <div key={stock.ticker_id} className="p-4">
              <h2>{stock.ticker_id}</h2>
              <p>{stock.company_name}</p>
              <p>Price: {stock.price}</p>
              <p>Change: {stock.change}</p>
              <p>Change Percent: {stock.change_percent}</p>
              <p>Previous Close: {stock.prev_close}</p>
              <p>Open: {stock.open}</p>
              <p>Low: {stock.low}</p>
              <p>High: {stock.high}</p>
              <p>52 Weeks High: {stock["52_weeks_high"]}</p>
              <p>52 Weeks Low: {stock["52_weeks_low"]}</p>
              <p>Market Cap: {stock.market_cap}</p>
              <p>Volume: {stock.volume}</p>
              <p>Average Volume (3 Months): {stock.avg_vol_3_months}</p>
              <p>Turn Over: {stock.turn_over}</p>
              <p>Range: {stock.range}</p>
              <p>Shares Outstanding: {stock.shares_outstanding}</p>
              <p>Shares Float: {stock.shares_float}</p>
            </div>
          ))}
        </div>
      </div>
    </>
  );
};

export default StockDetailsPage;
```

I tried with multiple APIs; the data shows up in the console only.
API not fetch data with dynamic page NEXT js
|reactjs|next.js|routes|
Aldimeola1122,

Both the NODETAIL and REMOVECC parms are used for reports. REMOVECC can be used to remove the ANSI control characters (page eject, blank line, ...) from a report.

NODETAIL specifies that data records are not to be output for the reports produced for this OUTFIL group. With NODETAIL, the data records are completely processed with respect to input fields, statistics, counts, section breaks, and so on, but are not written to the OUTFIL data set and are not included in line counts for determining the end of a page. You can use NODETAIL to summarize the data records without actually showing them.

You specified the parms but did not specify anything for generating the stats. So try adding the following and it will count the key fields:

```
//SYSIN DD *
  SORT FIELDS=(5,24,CH,A,45,10,CH,A)
  OUTFIL REMOVECC,NODETAIL,
    SECTIONS=(5,24,45,10,
    TRAILER3=('KEY |',5,24,'|',45,10,' HAS COUNT OF :',COUNT))
/*
```
|swift|uikit|
There is a function inside which determines whether metadata is saved (by metadata I mean saving dtypes). When I save a Dataframe to a parquet file and then read data from that file I expect to see metadata persistence. I will perform this check in this way: ``` In [6]:(pd.read_parquet(os.path.join(folder, 's_parquet.parquet'), engine='fastparquet').dtypes == df_small.dtypes).all() Out [6]: False ``` ``` In [7]: pd.read_parquet(os.path.join(folder, 's_parquet.parquet'), engine='fastparquet').dtypes == df_small.dtypes Out [7]: date True tournament_id True team_id True members True location False importance False avg_age True prize True prob True win True dtype: bool ``` If you look at the data types directly you can see the following result: ``` In [8]: df_small.dtypes Out [8]: date datetime64[ns] tournament_id int32 team_id int16 members int8 location category importance category avg_age float32 prize float32 prob float32 win bool dtype: object ``` ``` In [9]: pd.read_parquet(os.path.join(folder, 's_parquet.parquet'), engine='fastparquet').dtypes Out [9]: date datetime64[ns] tournament_id int32 team_id int16 members int8 location category importance category avg_age float32 prize float32 prob float32 win bool dtype: object ``` And even if you compare the categorical data directly, you get the expected result. ``` In [10]: df_small['location'].dtypes == pd.read_parquet(os.path.join(folder, 's_parquet.parquet'), engine='fastparquet')['location'].dtypes Out [10]: True ``` What could be the reason for this behavior?
How can I find all files containing a specific text (string) on Linux?
There is no feature that can make certain number of Microsoft-hosted agents in the pool `Azure Pipelines` only work for a selection of projects. Currently, the permission for the pipelines in a project to use an agent is not managed on individual agent level but on its pool level. For this, you could create a feature request via: https://developercommunity.visualstudio.com/report?space=21&entry=suggestion . That will allow you to directly interact with the appropriate Product Group, and make it more convenient for the product group to collect and categorize your suggestions. As of now, I am afraid the only workaround is to isolate the big greedy project in one organization separated from other projects. If possible, you may consider setting up and maintaining your own [self-hosted](https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/pools-queues?view=azure-devops&tabs=yaml%2Cbrowser)/[VMSS](https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/scale-set-agents?view=azure-devops) agent pool.
I am doing object detection using YOLOv8 and the SAHI algorithm. I am writing a function to test whether some of the objects I detect are round or not, but I can't send the bounding boxes of the objects I detect to the function.

```
def is_circle(image, bbox):
    img = cv.cvtColor(image, cv.COLOR_BGR2GRAY)
    x, y, w, h = bbox
    roi = img[y:y + h, x:x + w]
    edges = cv.Canny(roi, 50, 150, apertureSize=3)
    circles = cv.HoughCircles(edges, cv.HOUGH_GRADIENT, 1, 20, param1=50, param2=30, minRadius=0, maxRadius=0)
    return circles is not None


detection_model = AutoDetectionModel.from_pretrained(
    model_type='yolov8',
    model_path="C:/Users/UGURHANDASDEMIR/PycharmProjects/pythonProject1/best.pt",
    confidence_threshold=0.5,
    device="cuda:0",  # or 'cuda:0'
)

image_dir = "img"
image_paths = [os.path.join(image_dir, image_name) for image_name in os.listdir(image_dir) if image_name.endswith(('.png', '.jpg', '.jpeg'))]

for image_path in image_paths:
    image = read_image(image_path)
    result = get_sliced_prediction(
        image,
        detection_model,
        slice_height=1920,
        slice_width=1080,
        overlap_height_ratio=0.5,
        overlap_width_ratio=0.5
    )
    for prediction in result.object_prediction_list:
        print(prediction.category.name)
        if prediction.category.name == 'uap' or prediction.category.name == 'uai':
            bbox = prediction.bbox
            is_circle(image, bbox)
        else:
            inis_durumu = -1
```

I am getting the following error from this code:

```
Traceback (most recent call last):
  File "C:\Users\UGURHANDASDEMIR\PycharmProjects\pythonProject1\dene.py", line 52, in <module>
    is_circle(image,bbox)
  File "C:\Users\UGURHANDASDEMIR\PycharmProjects\pythonProject1\dene.py", line 14, in is_circle
    x, y, w, h = bbox
    ^^^^^^^^^^
TypeError: cannot unpack non-iterable BoundingBox object
```

Please help
can’t send the bounding boxes of the objects I detect to the function
|python|opencv|object-detection|yolov8|sahi|
I see that the instructions in the [Android Gradle Plugin > Publish your library][1] documentation have changed significantly since I upgraded my library to versions 7 of "Gradle" and the "Android Gradle Plugin". According to the latest instructions (for versions 8 of "Gradle" and the "Android Gradle Plugin"), the parts of my library's `build.gradle` file which relate to publishing change to the following: ```lang-groovy plugins { id("com.android.library") id("maven-publish") } android { publishing { singleVariant("release") { withSourcesJar() } } } afterEvaluate { publishing { publications { release(MavenPublication) { from components.release groupId "com.tazkiyatech" artifactId "android-utils" version "1.0.0" } } repositories { maven { name = "BuildFolder" url = "${project.buildDir}/repository" } } } } ``` The most important change is that there's no longer a need for me to define a custom Gradle task which generates the Jar for my library's sources. I've put together a minimal project [here][2] which fully demonstrates how to publish an Android library with versions 8 of "Gradle" and the "Android Gradle Plugin". [1]: https://developer.android.com/build/publish-library [2]: https://github.com/adil-hussain-84/android-library-publishing-experiments/tree/main/library1
Hello devs. I'm trying to get IG stories from any user so I can embed them in my app (an IG downloader), but I can't, since it requires an access token for every user whose data I want to obtain. Here is the API response:

```
{
   "error": {
      "message": "An access token is required to request this resource.",
      "type": "OAuthException",
      "code": 104,
      "fbtrace_id": "A9f9Q_CklsmRXd-1suSk1_w"
   }
}
```

Is there a library like youtube-dl that can handle the same?
Getting IG stories using Graph API
|android|api|facebook-graph-api|youtube-dl|instagram-story|
Your build command is responsible for this. With your parameters, you tell your builder to include the whole framework locally. If you do not need it locally and would rather bootstrap it from the CDN, you can remove the `--all` parameter at the end and see if that works out for you. The SAPUI5 builder is not able to go through your code and work out which individual controls you are using and which not.

In order to [bootstrap the UI5 framework from the CDN][1], use the following line in your `index.html`:

    <script id="sap-ui-bootstrap"
        src="https://sdk.openui5.org/resources/sap-ui-core.js"
        ...

[1]: https://sdk.openui5.org/topic/2d3eb2f322ea4a82983c1c62a33ec4ae
> I'm not seeing anywhere where it's possible to verify that the actual app itself has been purchased. Probably because the verification of app purchases, particularly for apps sold as a one-time purchase without in-app items or subscriptions, would involve a different approach. [Google's Licensing API](https://googleapis.dev/ruby/google-api-client/latest/Google/Apis/LicensingV1.html) is designed to help you enforce licensing policies for apps that are published on Google Play. While the Licensing API is primarily aimed at preventing unauthorized use, it can also be utilized to check if the app has been purchased by the user. You would need to integrate the Licensing API into your app, and set up your server to validate the license status. Since you want to introduce a subscription model, consider converting your app to a free download with an [IAP option for the full features](https://support.google.com/googleplay/android-developer/answer/1153481?hl=en). For existing users who have already purchased the app, you can grant them the full features via an in-app purchase entitlement. This migration would allow you to use the existing in-app purchase verification APIs for both old and new users. To identify previous purchasers: - implement a one-time initialization process in your app update to check if the full version was already purchased. This can be done using [DataStore](https://developer.android.com/topic/libraries/architecture/datastore), which replaces [`SharedPreferences`](https://developer.android.com/training/data-storage/shared-preferences). - for these users, automatically grant the equivalent in-app product entitlement. - going forward, use the standard in-app purchase verification process for both old and new users. On app launch, [check the license status](https://developer.android.com/google/play/licensing/server-side-verification). 
```bash App (Client) Your Server Google Play Server | | | |---- Request Token ----->| | | |---- Verify Token --------->| | | | | |<--- Verification Result ---| |<--- Entitlement Info ---| | ``` ---- > You say that I can use DataStore to check whether the app was already purchased, but DataStore seems to simply be a general-purpose key-value store. Does it come pre-populated with purchase information, or were you simply suggesting I store state here? The latter: neither SharedPreferences nor DataStore come pre-populated with purchase information or have a built-in way to directly check for app purchases. They are general-purpose storage solutions meant to save app data across sessions. The idea remains to use them as a mechanism to track the app's state or specific conditions set by your logic, especially after manually handling the migration process for existing users to the new subscription model. Meaning: - When you update your app to include subscriptions, you also introduce a one-time initialization check for users who update the app. - For users who have previously purchased your app (before it was updated to include subscriptions or in-app purchases), you would manually set a flag or entitlement in SharedPreferences or DataStore during this one-time initialization. That flag indicates that they are entitled to the full features of the app without needing a subscription or additional purchase. Upon app launch (or at the relevant point in your app), check the SharedPreferences or DataStore for the presence of this entitlement flag. If the flag is set, treat the user as having full access to the app's features. That is a manual process you implement as part of transitioning your app's monetization model. That does not involve verifying the app purchase directly at the time of the check. Instead, it relies on you having correctly identified and flagged users as having previously purchased the app at the time of the app update. 
You might have to securely verify the user's purchase status before the app update, and then securely set the flag during the app's first launch post-update. ---- > Up until now, I have used LVL merely to verify that the app was properly purchased. Can I verify that it was purchased for a price greater than 0, or before a given date? The LVL can indeed be used to check if the app has been legitimately purchased from Google Play. That verification includes sending the license check response from Google Play to your server for further verification if needed. However, LVL does not directly provide information about the purchase price or the purchase date. Directly verifying that the app was purchased for a price greater than $0, or before a specific date, using LVL or Google Play's API, is not straightforward because these APIs are not designed to return historical purchase data or pricing information. If you wish to verify purchase details such as the price paid or the purchase date, you might need to maintain records of the original purchase transactions on your server. That could involve storing transaction details at the time of purchase, which can then be queried during the migration process. ---- > One case this doesn't address is that of a user who has deleted the app after purchasing, but then reinstalls the app after our conversion to a free app that offers subscriptions. How can I know such a user previously purchased? True: the local storage solutions like DataStore or SharedPreferences will not retain information once the app is uninstalled. That necessitates a solution that can persist outside of the local device storage. You would need a server-side verification with user accounts: implement user accounts in your app where users can register and login. When a user purchases the app (while it is still a one-time purchase), record this transaction against their user account on your server. 
Upon reinstalling the app, regardless of the device, the user can log in to their account, allowing your server to verify their purchase history and restore the appropriate access rights. That would require a reliable backend system to manage user accounts and transaction records but offers a seamless experience for users across devices and reinstallation scenarios. Another option would be to use Google Play's capabilities to query a user's purchase history for your app. That is more complex than working with the LVL, but can be achieved by integrating [Google Play's billing library](https://developer.android.com/google/play/billing/integrate) and [querying for past purchases](https://developer.android.com/google/play/billing/integrate#history) when the app is reinstalled. You would need to implement logic in your app to query for these transactions and, based on the response, restore access to the full features of the app. You might consider a combination of the above strategies for a robust solution: use server-side verification with user accounts as the primary method for maintaining purchase records and restoring access. Supplement this with Google Play receipts and order history checks for users who might not have created an account but can prove purchase via Google Play. ---- But, as [Brian Rak](https://stackoverflow.com/users/912961/brian-rak) mentions in [the comments](https://stackoverflow.com/questions/78102648/google-play-how-to-verify-one-time-purchase-of-app-itself-not-in-app-product-o/78120794#comment137762429_78120794): > I have to be able to verify the prior purchase of a user who is reinstalling the app. > > > I considered an update to the paid app that encouraged users to register their purchases, and that could be a partial solution. But it still means that a customer who misses that window is unsupported. 
I have customers stretching back 15 years, and I'm trying to avoid a massive user support issue, because I would have no good way to verify their claims And it seems that there is indeed no supported way to verify that a given user installing the (now free) app previously paid for it.
How can I show 'Back'/'Forward' buttons in the Android Studio 3.0.1 IDE | Iguana 2023.2.1?
|android|ide|android-studio-3.0|android-studio-giraffe|android-studio-iguana|
|azure|terraform|locking|tfstate|
I set up Laravel with Breeze and then decided to add 2FA, so I installed Fortify. It seems that some routes are not working as expected and I just can't figure out what it is. After the user activates 2FA I can see the QR code and the verification codes. I can also see that the data is populated in the database, like "two_factor_secret" and "two_factor_recovery_codes". But after I log out and try to log in again, it just redirects me to the homepage and does not show the 2FA challenge view.

As suggested in this thread: https://github.com/laravel/fortify/issues/305 I copied "app/Http/Controllers/Auth/AuthenticatedSessionController.php" into my controllers, and now the QR code is working, but the user is not redirected to the 2FA challenge screen after logging in. This is the AuthenticatedSessionController.php file:

```
<?php

namespace App\Http\Controllers\Auth;

use Illuminate\Contracts\Auth\StatefulGuard;
use Illuminate\Http\Request;
use Illuminate\Routing\Controller;
use Illuminate\Routing\Pipeline;
use Laravel\Fortify\Actions\AttemptToAuthenticate;
use Laravel\Fortify\Actions\CanonicalizeUsername;
use Laravel\Fortify\Actions\EnsureLoginIsNotThrottled;
use Laravel\Fortify\Actions\PrepareAuthenticatedSession;
use Laravel\Fortify\Actions\RedirectIfTwoFactorAuthenticatable;
use Laravel\Fortify\Contracts\LoginResponse;
use Laravel\Fortify\Contracts\LoginViewResponse;
use Laravel\Fortify\Contracts\LogoutResponse;
use Laravel\Fortify\Features;
use Laravel\Fortify\Fortify;
use Laravel\Fortify\Http\Requests\LoginRequest;

class AuthenticatedSessionController extends Controller
{
    /**
     * The guard implementation.
     *
     * @var \Illuminate\Contracts\Auth\StatefulGuard
     */
    protected $guard;

    /**
     * Create a new controller instance.
     *
     * @param  \Illuminate\Contracts\Auth\StatefulGuard  $guard
     * @return void
     */
    public function __construct(StatefulGuard $guard)
    {
        $this->guard = $guard;
    }

    /**
     * Show the login view.
     *
     * @param  \Illuminate\Http\Request  $request
     * @return \Laravel\Fortify\Contracts\LoginViewResponse
     */
    public function create(Request $request): LoginViewResponse
    {
        return app(LoginViewResponse::class);
    }

    /**
     * Attempt to authenticate a new session.
     *
     * @param  \Laravel\Fortify\Http\Requests\LoginRequest  $request
     * @return mixed
     */
    public function store(LoginRequest $request)
    {
        return $this->loginPipeline($request)->then(function ($request) {
            return app(LoginResponse::class);
        });
    }

    /**
     * Get the authentication pipeline instance.
     *
     * @param  \Laravel\Fortify\Http\Requests\LoginRequest  $request
     * @return \Illuminate\Pipeline\Pipeline
     */
    protected function loginPipeline(LoginRequest $request)
    {
        if (Fortify::$authenticateThroughCallback) {
            return (new Pipeline(app()))->send($request)->through(array_filter(
                call_user_func(Fortify::$authenticateThroughCallback, $request)
            ));
        }

        if (is_array(config('fortify.pipelines.login'))) {
            return (new Pipeline(app()))->send($request)->through(array_filter(
                config('fortify.pipelines.login')
            ));
        }

        return (new Pipeline(app()))->send($request)->through(array_filter([
            config('fortify.limiters.login') ? null : EnsureLoginIsNotThrottled::class,
            config('fortify.lowercase_usernames') ? CanonicalizeUsername::class : null,
            Features::enabled(Features::twoFactorAuthentication()) ? RedirectIfTwoFactorAuthenticatable::class : null,
            AttemptToAuthenticate::class,
            PrepareAuthenticatedSession::class,
        ]));
    }

    /**
     * Destroy an authenticated session.
     *
     * @param  \Illuminate\Http\Request  $request
     * @return \Laravel\Fortify\Contracts\LogoutResponse
     */
    public function destroy(Request $request): LogoutResponse
    {
        $this->guard->logout();

        if ($request->hasSession()) {
            $request->session()->invalidate();
            $request->session()->regenerateToken();
        }

        return app(LogoutResponse::class);
    }
}
```

I have also set up a challenge view, and it seems that the routes in my FortifyServiceProvider file are correct:

```
<?php

namespace App\Providers;

use App\Actions\Fortify\CreateNewUser;
use App\Actions\Fortify\ResetUserPassword;
use App\Actions\Fortify\UpdateUserPassword;
use App\Actions\Fortify\UpdateUserProfileInformation;
use Illuminate\Cache\RateLimiting\Limit;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\RateLimiter;
use Illuminate\Support\ServiceProvider;
use Illuminate\Support\Str;
use Laravel\Fortify\Fortify;

class FortifyServiceProvider extends ServiceProvider
{
    /**
     * Register any application services.
     */
    public function register(): void
    {
        //
    }

    /**
     * Bootstrap any application services.
     */
    public function boot(): void
    {
        Fortify::createUsersUsing(CreateNewUser::class);
        Fortify::updateUserProfileInformationUsing(UpdateUserProfileInformation::class);
        Fortify::updateUserPasswordsUsing(UpdateUserPassword::class);
        Fortify::resetUserPasswordsUsing(ResetUserPassword::class);

        RateLimiter::for('login', function (Request $request) {
            $throttleKey = Str::transliterate(Str::lower($request->input(Fortify::username())).'|'.$request->ip());
            return Limit::perMinute(5)->by($throttleKey);
        });

        RateLimiter::for('two-factor', function (Request $request) {
            return Limit::perMinute(5)->by($request->session()->get('login.id'));
        });

        Fortify::loginView(function () {
            return view('auth.login');
        });

        Fortify::registerView(function () {
            return view('auth.register');
        });

        // Two factor Fortify
        Fortify::twoFactorChallengeView(function () {
            return view('auth.two-factor-challenge');
        });

        Fortify::resetPasswordView(function (Request $request) {
            return view('auth.reset-password', ['request' => $request]);
        });

        Fortify::verifyEmailView(function () {
            return view('auth.verify-email');
        });

        Fortify::confirmPasswordView(function () {
            return view('auth.confirm-password');
        });
    }
}
```

Any idea why it just doesn't redirect me to the 2FA challenge view? I have the feeling I have to set up the logic myself but don't know where to start. As far as I know, Breeze doesn't come with 2FA functionality, but Fortify should handle these redirects by itself, no? Well, it doesn't, it seems, and I tried to solve this by following this thread, but with no success: https://github.com/laravel/fortify/issues/305
Laravel Breeze 2FA challenge not showing up
|laravel|fortify|
null
|azure|azure-bicep|
null
Try this structure, using **one** `class MovieModel: ObservableObject` and passing this model to other views using `.environmentObject(movieModel)`. See this link [monitoring data](https://developer.apple.com/documentation/swiftui/monitoring-model-data-changes-in-your-app), it gives you some good examples of how to manage data in your app.

    struct ContentView: View {
        var body: some View {
            ListView()
        }
    }

    class MovieModel: ObservableObject {
        @Published var movies: [Movie] = [Movie(id: 1, name: "Titanic")]

        func fetchMovie(id: Int) {
            //...request
            // for testing
            if let index = movies.firstIndex(where: {$0.id == id}) {
                movies[index].name = movies[index].name + " II"
            }
        }

        func fetchAllMovies() {
            // for testing
            movies = [Movie(id: 1, name: "Titanic"), Movie(id: 2, name: "Mickey Mouse")]
        }
    }

    struct Movie: Identifiable {
        let id: Int
        var name: String
        //....
    }

    struct ListView: View {
        @StateObject var movieModel = MovieModel()

        var body: some View {
            List {
                ForEach(movieModel.movies) { movie in
                    MovieItem(movie: movie)
                }
            }
            .onAppear {
                movieModel.fetchAllMovies()
            }
            .environmentObject(movieModel)
        }
    }

    struct MovieItem: View {
        @EnvironmentObject var movieModel: MovieModel
        var movie: Movie

        var body: some View {
            Text(movie.name)
                .onAppear {
                    movieModel.fetchMovie(id: movie.id)
                }
        }
    }

If you plan to use iOS 17, then have a look at this link [Managing model data in your app](https://developer.apple.com/documentation/swiftui/managing-model-data-in-your-app) for how to manage data in your app.
We would like to allow Maintainers to merge some MRs even if checks in the MR's pipeline fail. Developers should not be able to do the same thing. I see there is a way to skip a pipeline run, and to allow merging of MRs with skipped pipelines, but there does not seem to be a way to control who can skip the pipeline run. There are custom approval rules, but you cannot make a custom approval rule conditional, so if we added a rule that covered all Maintainers they would ALWAYS have to approve all MRs, and I don't think it would override the "require pipeline success" configuration. I do not see a way to achieve what we want without allowing all developers to skip pipelines and merge. Am I missing something?
On GitLab, is there a way to allow Maintainers to merge MRs even if some checks fail?
|gitlab|gitlab-ci|
I have successfully saved the oscilloscope graph data into the variable `image_data`, but I can't work out how to convert the contents of this variable into a PNG file, BMP file, and so on. How should I write it out?

```python
import pyvisa
from PIL import Image

rm = pyvisa.ResourceManager()
scope = rm.open_resource('USB0::0x0957::0x1796::MY63080487::0::INSTR')
scope.write(':DISPlay:DATA? PNG, COLOR')
image_data = scope.read_raw()
print(image_data)
```

The `print(image_data)` call shows:

```
b'#800017910\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x03 \x00\x00\x01\xf7\x08\x02\x00\x00\x00\xcc\xb6\xac\x81\x00\x00 \x00IDATx\x01\xed\xdd\xcd\xebnI~\x10\xf0\xfb\x0b\xb3\x18\x04w\x823\x12\x98\xc0\x0c\x98\xbf`z`L\x14\x17Y\x04\x1c!\x8b\x08\x01\xb3H\xb8\x1d\x08\x12!h@\x92\xce"m"\xc4\x10p\x90\x80\xf7G\xb2\x88\x10t\x16\x01G\xc8"\x0bQ\x87\x91t\xff\x05\x11f\xc0\x81\xe0\xb4\xe0N\x90Y\x04\xaf\xd5]\xdd\xae[\xe7\xe59\xef\xaf\x9f\xfe\xfa\xf5\xeb\x17~\x13 @\x80\x00\x01\x02\x04\x08\x10XN\xe0\x87\x96\x0b%\x12\x01\x02\x04\x08\x10 @\x80\xc0\x87\x02\x12,\xd7\x01\x01\x02\x04\x08\x10 @`a\x01\t\xd6\xc2\xa0\xc2\x11 @\x80\x00\x01\x02\x04$X\xae\x01\x02\x04\x08\x10 @\x80\xc0\xc2\x02\x9f\x89xOOO\xa9\xdc\xfc\xcd{Y\x9f\xcb\xb9K\xd9\xb2l\x13\x01S!\xd5\x97\xcd\xbaj\xa2K4n\x1d\xa8\xb52\xf7m\x9d@s\xf4\x18\xa8\xa7W:\x15\x03\xc5|\xa2}\xae\x89\x06\xa9>\xda\xacQ\x19\xe3\xf6\x0fT\x0e\x9d\xbb\xc4\xac\xf2a\xfa;\xb5\xa9*\x87\xd4D\xf7\xb2Pu\xcc\xa3W\xc1\xf3\x88\xe9\xeff}\x19\xaaY\xce\xd1r}\xee[\xd6\x94\xf5U\xdf\xd6Y\xf5\xb4O\xa7Fu)\xa7\x11\x8bj\xad\x8c\x89\xb5\x9e\x1d^\x99\xe3\xe4\xf61b\x04W @\x80\x00\x81#\x0b|\x9a`=\x9cez\xa3/\xdf\xe5\xe3\xb0Y\x88P\xa9}\x9cM\x95e9\xb7\xa9j\xf2\xe1\xf0\xca\x18\xa8Yh\r\xd5l\xd6S\xd35\xf9\xe1\xd3\x9b\xd9\xb2un\xad1\xcb}I\xbdR\x9b\xd6\xbe\xc7\xaf\x9c
```
My Firebase database saves each user with 4 fields: name, email, password and confirm password.

The class of users: [enter image description here](https://i.stack.imgur.com/pF3M1.png)

The Firebase data that I saved: [enter image description here](https://i.stack.imgur.com/TPCFf.png)

In my app I have an option to change the current password to a new password: it requires you to enter your previous password (the current one that Firebase saved) and then enter a new password (the user enters the passwords in `EditText` fields in the UI). See here how it looks: [enter image description here](https://i.stack.imgur.com/xvLcQ.png)

Then the user has to click the 'confirm' button to change the old password to the new one. But before it changes, ***the app needs to check whether the old password matches the current password saved in Firebase***. I don't know how to do that, please help me!

I tried the code below, but I'm not sure it gets access to the variable I want. [enter image description here](https://i.stack.imgur.com/gMWla.png)

It always changes, even when the previous password doesn't match the current one.

Code of the database (for one user): https://file:///C:/Users/User/Downloads/geography-trivi-new-default-rtdb-export.json
I'm trying to use Realm with a few lines of code found on Stack Overflow. I get an enigmatic compilation error: "Generic parameter 'Element' could not be inferred" on the line: `let dogs = List<Dog>()`. Every time I use `List<>` this error appears. I've tried everything: cleaning the build, quitting Xcode, starting again from scratch with a new project, but the error is still there...

```
import SwiftUI
import RealmSwift

class Dog: Object {
    @objc dynamic var name = ""
    @objc dynamic var age = 0
}

class Person: Object {
    @objc dynamic var name = ""
    let dogs = List<Dog>()
}

struct ContentView: View {
    var body: some View {
        VStack {
            Image(systemName: "globe")
                .imageScale(.large)
                .foregroundStyle(.tint)
            Text("toto")
                .onAppear {
                    let myDog = Dog()
                    myDog.name = "Rex"
                    myDog.age = 5
                    hop(myDog: myDog)
                }
        }
        .padding()
    }

    func hop(myDog: Dog) {
        let realm = try! Realm()
        let person = Person()
        person.name = "Tim"
        person.dogs.append(myDog)
        try! realm.write {
            realm.add(person)
        }
    }
}
```
Realm: Generic parameter 'Element' could not be inferred
|realm|
null
I am referring to your question in the comment.

TL;DR: There is no difference when you replace a single property of a post or the complete post.

SwiftUI provides a DEBUG method which can show you which data causes your view to update: `Self._printChanges()`

So let's write a quick demo:

    import SwiftUI

    struct Post: Codable, Identifiable {
        var id: Int
        var title: String
        var message: String
    }

    @Observable
    class PostManager {
        var posts: [Post] = [Post(id: 1, title: "1", message: "1")]

        func updatePostMessage(postId: Int, message: String) {
            // Switch these methods to check the output of which view is re-rendered
            posts.updatePostMessage(with: Post(id: postId, title: "1", message: message))
            posts.updatePostTitle(with: Post(id: postId, title: "1", message: message))
            // posts.updateCompletePost(with: Post(id: postId, title: "1", message: message))
        }
    }

    // This code could go to a different file to answer your initial question
    extension [Post] {
        mutating func updatePostMessage(with newPost: Post) {
            if let index = self.firstIndex(where: { $0.id == newPost.id }) {
                self[index].message = newPost.message
            }
        }

        mutating func updateCompletePost(with newPost: Post) {
            if let index = self.firstIndex(where: { $0.id == newPost.id }) {
                self[index] = newPost
            }
        }

        mutating func updatePostTitle(with newPost: Post) {
            if let index = self.firstIndex(where: { $0.id == newPost.id }) {
                self[index].title = newPost.title
            }
        }
    }

    struct TitleView: View {
        let title: String

        var body: some View {
            let _ = Self._printChanges()
            Text(title)
        }
    }

    struct MessageView: View {
        let message: String

        var body: some View {
            let _ = Self._printChanges()
            Text(message)
        }
    }

    struct ContentView: View {
        @State var postManager = PostManager()

        var body: some View {
            let _ = Self._printChanges()
            VStack {
                ForEach(postManager.posts) { post in
                    TitleView(title: post.title)
                    MessageView(message: post.message)
                }
                Button {
                    postManager.updatePostMessage(postId: 1, message: "Message")
                } label: {
                    Text("Update")
                }
            }
        }
    }

We are going to ignore the initial
log when the view gets rendered for the first time.

When we use the method which only changes the message of the post the console is:

    ContentView: @dependencies changed.
    MessageView: @self changed.

If you change the update method in the extension above and update the complete post the console is:

    ContentView: @dependencies changed.
    MessageView: @self changed.

And only if you explicitly set a different title would it update the TitleView:

    ContentView: @dependencies changed.
    TitleView: @self changed.
    MessageView: @self changed.
I've seen some posts about how to dynamically add text fields to a form when a button is clicked, like in this post: https://stackoverflow.com/questions/60939821/how-to-add-two-v-text-fields-when-i-click-a-button-dynamically-using-vuetify

But how can I achieve this using Nuxt 2 and Vue 2? I use a Vuex store for the input data, but how can I identify the new text fields being added so I can update the store accordingly? Ideally I'd have a v-row with 2 v-cols, and when I click the button, another row appears with the 2 v-cols, but I'm not understanding where I would put the v-for in this scenario.

Small snippet based on the above linked question:

```
<template>
  <v-card>
    <v-card-text>
      <v-row>
        <v-col>
          <v-text-field
            :label="textField.label1"
            v-model="textField.value1"
          ></v-text-field>
          <v-text-field
            :label="textField.label2"
            v-model="textField.value2"
          ></v-text-field>
        </v-col>
        <v-col>
          <v-btn @click="addRow">Add Row</v-btn>
        </v-col>
      </v-row>
    </v-card-text>
  </v-card>
</template>
```
I know this is an old question, but I thought I would answer given that there is no module-specific answer. I also experienced this error with the UMLS::Interface module and was able to fix it.

The reason you are experiencing this error is that the DBI module is not recognizing that you have set a default UMLS database name in your my.cnf file. If this happens, the authors of the UMLS::Interface module expect there to be a database named 'umls' (case sensitive). In my case, the database was called 'UMLS'.

Two solutions to this problem:

**Solution 1) Edit the UMLS::Interface CuiFinder.pm script**

Go to your UMLS::Interface installation directory:

    # Location on Ubuntu 22.04
    cd /usr/local/share/perl/5.34.0/UMLS/Interface
    sudo nano CuiFinder.pm
    # Ctrl + W
    # Ctrl + T
    # Enter 2464, which will send you to line 2464
    # Enter user password

Here you will see a line of code that reads:

    my $dsn = "DBI:mysql:umls;mysql_read_default_group=client;"

This is the code executed when the my.cnf in your MySQL setup is not recognized or there are no client settings (see Solution #2). Here, you need to change 'umls' to your DB name. In my case, I had created my DB with the name 'UMLS', so the line now reads:

    my $dsn = "DBI:mysql:UMLS;mysql_read_default_group=client;"

You should now be able to complete the script without errors.

**Solution 2) Edit your MySQL my.cnf client settings**

After reading your post further, I noticed you had already edited your .ini file, so it appears Solution 1 may be the best bet for anyone trying to get this to work. Instead of altering the Perl code in the UMLS::Interface module, you can also try to get the DBI module to recognize that your default DB is the UMLS database. I was unable to get this to work on Ubuntu 22.04 running mysql Ver 8.0.36-0ubuntu0.22.04. DBI insisted that I did not have a default database, which is weird because it recognized the default user and password in order to access the DB.

Go to your my.cnf file location.
    cd /etc/mysql

Edit the my.cnf file (note this is read-only by default):

    sudo nano my.cnf

Copy and paste the following into the file:

    [client]
    user=<your username here>
    password=<your user's password>
    port=3306 # this is the default, but you can also check this in mysql with SHOW VARIABLES LIKE '%port%';
    socket=/var/run/mysqld/mysqld.sock # this is the default, but you can also check this in mysql with SHOW VARIABLES LIKE '%socket%';
    database=<your database name>

In theory, this should allow the CuiFinder.pm script to execute the first clause of the if-else statement and pull in your default database without editing the Perl code. Like I said, DBI did not recognize this.
You are not actually injecting the text in the iframe, so it is simpler to define a component that will embed both the additional text and the Facebook post. Simplified version of your snippet below:

<!-- begin snippet: js hide: false console: true babel: false -->

<!-- language: lang-js -->

    facebookArr = [
      { post_id: "pfbid0cm7x6wS3jCgFK5hdFadprTDMqx1oYr6m1o8CC93AxoE1Z3Fjodpmri7y2Qf1VgURl" },
      { post_id: "pfbid0azgTbbrM5bTYFEzVAjkVoa4vwc5Fr3Ewt8ej8LVS1hMzPquktzQFFXfUrFedLyTql" }
    ];

    let facebookContainer = document.getElementById("facebook-feed-container");

    facebookArr.forEach((post) => {
      const postId = post.post_id;

      const postContainer = document.createElement("div");
      facebookContainer.append(postContainer);
      postContainer.classList.add('post');

      const postTitle = document.createElement("div");
      postContainer.append(postTitle)
      postTitle.classList.add('post--title');
      postTitle.innerHTML = "additional text to append"

      const postFrame = document.createElement("iframe");
      postContainer.append(postFrame)
      postFrame.classList.add('post--frame');
      postFrame.id = `fb-post__${postId}`
      postFrame.src = `https://www.facebook.com/plugins/post.php?href=https%3A%2F%2Fwww.facebook.com%2FIconicCool%2Fposts%2F${postId}&show_text=true&width=500`
      postFrame.width = "200"
      postFrame.height = "389"
      postFrame.scrolling = "no"
      postFrame.frameborder = "0"
      postFrame.allowfullscreen = "true"
      postFrame.allow = "autoplay; clipboard-write; encrypted-media; picture-in-picture; web-share"
    });

<!-- language: lang-css -->

    #facebook-feed-container {
      display: flex;
      flex-direction: row;
      row-gap: 1rem;
      column-gap: 3rem;
      padding: 1rem;
    }

    .post--title {
      padding: 0.5rem;
      text-align: center;
      color: red;
    }

    .post--frame {
      border: none;
      overflow: hidden;
    }

<!-- language: lang-html -->

    <div id="facebook-feed-container"></div>

<!-- end snippet -->
{"Voters":[{"Id":4415625,"DisplayName":"dani-vta"}]}
{"Voters":[{"Id":-1,"DisplayName":"Community"}]}
Invoke-Command works only when a user is logged in
As stated [here][1], TensorBoard is part of TensorFlow but does not depend on it. One can use it in PyTorch like so:

```python
from torch.utils.tensorboard import SummaryWriter
```

It is, however, annoying that this import causes a long trace of TensorFlow-related warnings:

```bash
2024-03-28 12:11:43.296359: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-03-28 12:11:43.331928: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-03-28 12:11:43.970865: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
```

I wonder if this is necessary and/or how these warnings can be (safely?) muted.

[1]: https://stackoverflow.com/questions/57547471/does-torch-utils-tensorboard-need-installation-of-tensorflow
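One workaround I am aware of (my own note, not from the linked thread) is to raise TensorFlow's C++ log threshold via the `TF_CPP_MIN_LOG_LEVEL` environment variable before anything pulls TensorFlow in; whether this silences every message above is not guaranteed, but it suppresses the `I`/`W` log lines emitted through TF's own logger:

```python
import os

# Must be set before TensorFlow is (indirectly) imported:
# "0" = all logs, "1" = hide INFO, "2" = hide WARNING, "3" = hide ERROR too.
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"

try:
    from torch.utils.tensorboard import SummaryWriter  # now quieter
except ImportError:
    SummaryWriter = None  # torch not installed in this environment
```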
null
{"Voters":[{"Id":147356,"DisplayName":"larsks"},{"Id":13317,"DisplayName":"Kenster"},{"Id":1431720,"DisplayName":"Robert"}],"SiteSpecificCloseReasonIds":[18]}
{"Voters":[{"Id":23869325,"DisplayName":"alon"}],"DeleteType":1}
First of all, congrats on implementing PACE; that's not an easy task for most devs.

DG4 (iris) is usually not present. At the time, the company that held the IP rights to the methods required for iris authentication didn't open them up sufficiently (in time). **If** it is present, it is protected using EAC (authentication of the reader using card verifiable certificates).

PACE only establishes reader authentication in the sense that it makes sure that the reader has access to the document, or at least to the CAN or MRZ. Access to protected biometric data requires the reader to have a DS certificate trusted / signed by the CVCA certificate / private key that is controlled by the country of issuance. That DS certificate & private key can then be used to create IS certificate & private key combinations to get access to DG3 and DG4.
{"Voters":[{"Id":14463414,"DisplayName":"dan c"}]}
I am making a simple golf score tracking app and am using the [react-native-picker](https://github.com/react-native-picker/picker) library to help me select the course I want to view. The functionality is all working and it picks the correct value, but for some reason the picker bounces back up to the first item in the list.

This screenshot was taken directly after I used the picker to scroll down to "Whispering Pines Golf Resort", and you can see that the subtitle underneath has updated correctly. Only the picker itself is bouncing back up to the top.

![IOS Picker Issue](https://i.stack.imgur.com/0eSUH.jpg)

The weird part is that it looks completely fine on Android, which uses a drop-down style picker.

```lang-js
<Picker
  style={styles.picker}
  selectedValue={state.currentCourse}
  onValueChange={(itemValue) => getUserScores(itemValue)}
>
  {state.courses.map((course) => (
    <Picker.Item key={course.name} label={course.name} value={course} />
  ))}
</Picker>
```

The state.courses array is populated earlier just by importing a JSON file containing the data. Is there something wrong with my code, or is this a bug in the library?
React Native - Picker Bouncing Back to the Top on IOS
|javascript|reactjs|react-native|picker|
I used a **ready-made template** for my **ASP.NET** project... I don't know much about HTML and CSS, I just **deleted and customized** some of the code by trial and error. I spent 20 days on this back end and front end...

To **test the mobile response**, I **minimized my web browser** on PC... everything was **fine**... Now I encountered this when I tested it on my **phone**... I'm desperate, I will lose my job, help me please.

This is the **PC** response: it's **okay**

[![enter image description here][1]][1]

This is the **PC minimized** response: it's **okay** too

[![enter image description here][2]][2]

This is in a **mobile browser**... **very bad**. How is it possible?!

[![enter image description here][3]][3]

How is it possible? Everything is bad, look at the footer! It shows the footer like on PC, not like mobile! Why? What should I do? What is the best and fastest way to solve this problem in the whole project? I don't know what I should share... which HTML tags are important now?

[1]: https://i.stack.imgur.com/mTvZo.jpg
[2]: https://i.stack.imgur.com/hCwEz.jpg
[3]: https://i.stack.imgur.com/VrpZV.jpg
HTML + CSS layout responds well on computer (even in minimized mode) but badly on mobile
|html|css|asp.net-core|bootstrap-4|response|
{"Voters":[{"Id":23869325,"DisplayName":"alon"}]}
Looking for a way to retrieve control-flow information, such as local variables and their values and which instructions/branches were executed for the tested methods, using JaCoCo (0.8.11). I am running it in IntelliJ with Maven. The only clue I found was specifying the JaCoCo agent .jar file when executing tests, but even then it is unclear how exactly to use it. So I am looking for clarifications or any alternative solutions. Any advice is greatly appreciated.
Get control flow information with JaCoCo
|java|maven|jacoco|control-flow|jacoco-maven-plugin|
null
The comment by @Christian Stieber identifies the immediate problem:

> When you do the recursive call to `count_end`, you wanted to assign the return value to `ctr`.

```lang-cpp
//start = count_end(a, start + 1, ctr);  // Wrong!
ctr = count_end(a, start + 1, ctr);      // Right.
```

With this fix, the two `cout` expressions display the same value. For the trial run below, I entered a famous quotation by Mahatma Gandhi.

```lang-none
An ounce of patience is worth more than a tonne of preaching.
12 is count12
```

You can clean up the output by removing `cout` from function `count_end`, and rewording the message.

```lang-none
WORD COUNTER

Enter some text: An ounce of patience is worth more than a tonne of preaching.

Word count: 12
```

This works fine, so long as the user does not forget to enter a period. Without one, however, function `count_end` will overrun the buffer while it searches for a period. Rather than stopping when a period is encountered, you should stop when the terminating null character (`'\0'`) of the C-string is found.

Here is the complete program:

```lang-cpp
// main.cpp
#include <iostream>

using std::cin;
using std::cout;

int count_end(char* a, int start, int ctr)
{
    if (a[start] == '\0') {  // Stop when '\0' is found
        //cout << ctr << "\n";  // Delete this
        return ctr;
    }
    else if (a[start] == ' ' && a[start - 1] != ' ') {
        ctr++;
    }
    //start = count_end(a, start + 1, ctr);  // Wrong!
    ctr = count_end(a, start + 1, ctr);      // Right.
    return ctr;
}

int main()
{
    char sentence[1000];
    cout << "WORD COUNTER\n\n"
        << "Enter some text: ";
    cin.getline(sentence, 1000);
    cout << '\n';
    int x;
    x = count_end(sentence, 1, 1);
    cout << "Word count: " << x << "\n\n";
    return 0;
}
// end file: main.cpp
```

#### A C++ Solution

When a student posts a question about a "C++" program that uses more `C` than `C++`, the explanation is usually that he is in a course that teaches both languages at the same time.
If that is the case here, my advice is to write two solutions to each homework assignment, one using `C`, and the other, `C++`.

The program in the previous section patches up the `C` program from the OP. This section shows an approach that could be used in a `C++` program. It uses a simple `Lexer` class that stores a string, and doles out words one at a time.

```lang-cpp
std::string s{ "An ounce of patience is worth more than a tonne of preaching." };
Lexer lexer{ s };  // constructor takes a string, and saves a copy.
```

Member function `get_word` is the only public member function in a `Lexer`. It returns the next word from string `s`. When there are no more words remaining, function `get_word` returns an empty string.

```lang-cpp
std::string word = lexer.get_word();
```

To count words, you can test whether the value returned by `get_word` is empty. The while-loop below says, in effect, "While the string returned by `lexer.get_word()` is not empty, keep looping."

```lang-cpp
int main()
{
    std::string s{ "An ounce of patience is worth more than a tonne of preaching." };
    Lexer lexer{ s };
    int count{};
    while (!lexer.get_word().empty())
        ++count;
    std::cout << "There are " << count << " words in the string: \n\n"
        << std::quoted(s) << "\n\n";
    return 0;
}
```

#### Output

```lang-none
There are 12 words in the string:

"An ounce of patience is worth more than a tonne of preaching."
```

#### Class `Lexer`

The implementation below defines a word to be any sequence of non-whitespace characters. So letters, digits, and punctuation marks are all considered to be characters belonging to a word.

Member variable `next_char` is a subscript into string `s`. It is the index of the next character to be extracted from string `s`. It starts out at zero, and is incremented as words are extracted from string `s`. When `next_char == s.size()`, there are no words remaining in `s`.
After skipping over whitespace characters, function `get_word` marks the start of a word by saving the value of `next_char`:

```lang-cpp
auto const start{ next_char };
```

Function `get_word` then runs a loop, searching for either the end of string `s`, or the whitespace character that marks the end of the word, whichever comes first. After the loop, `next_char` is an index to the (possibly nonexistent) character that follows the final character of the word.

The value returned from function `get_word` is a substring formed from variables `s`, `start`, and `next_char`. The size of the substring is `next_char - start`.

Here is the complete function. The loop says, in effect, "While characters remain, and the next character is not whitespace, keep looping."

```lang-cpp
std::string get_word() noexcept
{
    skip_whitespace();
    auto const start{ next_char };
    while (next_char != s.size() && !is_space(s[next_char]))
        ++next_char;
    return s.substr(start, next_char - start);
}
```

This is the complete program:

```lang-cpp
// main.cpp
#include <cctype>
#include <iomanip>
#include <iostream>
#include <string>
#include <utility>

class Lexer
{
    std::string s;
    std::string::size_type next_char{};

public:
    Lexer(std::string s) noexcept
        : s{ std::move(s) }
    {}

    std::string get_word() noexcept
    {
        skip_whitespace();
        auto const start{ next_char };
        while (next_char != s.size() && !is_space(s[next_char]))
            ++next_char;
        return s.substr(start, next_char - start);
    }

private:
    static bool is_space(unsigned char const c) noexcept
    {
        return std::isspace(c);
    }

    void skip_whitespace() noexcept
    {
        while (next_char != s.size() && is_space(s[next_char]))
            ++next_char;
    }
};

int main()
{
    std::string s{ "An ounce of patience is worth more than a tonne of preaching." };
    Lexer lexer{ s };
    int count{};
    while (!lexer.get_word().empty())
        ++count;
    std::cout << "There are " << count << " words in the string: \n\n"
        << std::quoted(s) << "\n\n";
    return 0;
}
// end file: main.cpp
```
Unlike the 1st answer, I disagree. Try both code segments with an `.explain()` and you will see the generated physical plan for execution is *exactly the same*.

Spark is based on `lazy evaluation`. That is to say:

> All transformations in Spark are lazy, in that they do not compute their results right away. Instead, they just remember the transformations applied to some base dataset (e.g. a file). The transformations are only computed when an action requires a result to be returned to the driver program. This design enables Spark to run more efficiently. For example, we can realize that a dataset created through map will be used in a reduce and return only the result of the reduce to the driver, rather than the larger mapped dataset.

The upshot of all this is that I ran similar code to yours with 2 filters applied, and note that as the **Action** `.count` causes just-in-time evaluation, Catalyst filtered out based on both the first and the 2nd filter. This is known as `"Code Fusing"`, which can be done thanks to late execution, aka lazy evaluation.
**Snippet 1 and Physical Plan**

    from pyspark.sql.types import StructType,StructField, StringType, IntegerType
    from pyspark.sql.functions import col

    data = [("James","","Smith","36636","M",3000),
        ("Michael","Rose","","40288","M",4000),
        ("Robert","","Williams","42114","M",4000),
        ("Maria","Anne","Jones","39192","F",4000),
        ("Jen","Mary","Brown","","F",-1)
      ]

    schema = StructType([ \
        StructField("firstname",StringType(),True), \
        StructField("middlename",StringType(),True), \
        StructField("lastname",StringType(),True), \
        StructField("id", StringType(), True), \
        StructField("gender", StringType(), True), \
        StructField("salary", IntegerType(), True) \
      ])

    df = spark.createDataFrame(data=data,schema=schema)

    df = df.filter(col('lastname') == 'Jones')
    df = df.select('firstname', 'lastname', 'salary')
    df = df.filter(col('lastname') == 'Jones2')
    df = df.groupBy('lastname').count().explain()

    == Physical Plan ==
    AdaptiveSparkPlan isFinalPlan=false
    +- HashAggregate(keys=[lastname#212], functions=[finalmerge_count(merge count#233L) AS count(1)#228L])
       +- Exchange hashpartitioning(lastname#212, 200), ENSURE_REQUIREMENTS, [plan_id=391]
          +- HashAggregate(keys=[lastname#212], functions=[partial_count(1) AS count#233L])
             +- Project [lastname#212]
                +- Filter (isnotnull(lastname#212) AND ((lastname#212 = Jones) AND (lastname#212 = Jones2)))
                   +- Scan ExistingRDD[firstname#210,middlename#211,lastname#212,id#213,gender#214,salary#215]

**Snippet 2 and Same Physical Plan**

    from pyspark.sql.types import StructType,StructField, StringType, IntegerType
    from pyspark.sql.functions import col

    data2 = [("James","","Smith","36636","M",3000),
        ("Michael","Rose","","40288","M",4000),
        ("Robert","","Williams","42114","M",4000),
        ("Maria","Anne","Jones","39192","F",4000),
        ("Jen","Mary","Brown","","F",-1)
      ]

    schema2 = StructType([ \
        StructField("firstname",StringType(),True), \
        StructField("middlename",StringType(),True), \
        StructField("lastname",StringType(),True), \
        StructField("id", StringType(), True), \
        StructField("gender", StringType(), True), \
        StructField("salary", IntegerType(), True) \
      ])

    df2 = spark.createDataFrame(data=data2,schema=schema2)

    df2 = df2.filter(col('lastname') == 'Jones')\
             .select('firstname', 'lastname', 'salary')\
             .filter(col('lastname') == 'Jones2')\
             .groupBy('lastname').count().explain()

    == Physical Plan ==
    AdaptiveSparkPlan isFinalPlan=false
    +- HashAggregate(keys=[lastname#299], functions=[finalmerge_count(merge count#320L) AS count(1)#315L])
       +- Exchange hashpartitioning(lastname#299, 200), ENSURE_REQUIREMENTS, [plan_id=577]
          +- HashAggregate(keys=[lastname#299], functions=[partial_count(1) AS count#320L])
             +- Project [lastname#299]
                +- Filter (isnotnull(lastname#299) AND ((lastname#299 = Jones) AND (lastname#299 = Jones2)))
                   +- Scan ExistingRDD[firstname#297,middlename#298,lastname#299,id#300,gender#301,salary#302]

**withColumn**

Doing this:

    df = df.filter(col('lastname') == 'Jones')
    df = df.select('firstname', 'lastname', 'salary')
    df = df.withColumn("salary100",col("salary")*100)
    df = df.withColumn("salary200",col("salary")*200).explain()

or via chaining gives the same result as well. I.e. it does not matter how you write the transformations. The final physical plan is what counts, but that optimization has some overhead though. Depends how you think on this; `select` is the alternative.

    == Physical Plan ==
    *(1) Project [firstname#399, lastname#401, salary#404, (salary#404 * 100) AS salary100#414, (salary#404 * 200) AS salary200#419]
    +- *(1) Filter (isnotnull(lastname#401) AND (lastname#401 = Jones))
       +- *(1) Scan ExistingRDD[firstname#399,middlename#400,lastname#401,id#402,gender#403,salary#404]
Old answer
---

This XPath,

```
//u/text()
```

will select all text node children of all `u` elements in the document:

```
I want this
but I want this
```

If you only want the first text node child, use

```
//u/text()[1]
```

Note that this will select the first text node of *all* `u` elements in the document. If you only want the first of these text nodes, use

```
(//u/text())[1]
```

Updated answer
---

Oops, a comment by @y.arazim made me realize that the tags here,

```
<anchor/>I don't want this also<anchor/>
```

despite their positioning, are self-closing, not opening and closing tags around the text. I wrote the old answer based on that mistake.

See [@y.arazim's answer][1] (+1) for an XPath that meets his interpretation of OP's requirements (and properly accounts for the self-closing `anchor` tags).

If OP more simply wants the `u` text node children before or after any `anchor` sibling elements, then this XPath would suffice:

```
//u/text()[not(preceding-sibling::anchor and following-sibling::anchor)]
```

[1]: https://stackoverflow.com/a/78244217/290085
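As a quick sanity check, that last XPath can be exercised with `lxml` (which supports full XPath 1.0, including predicates on text nodes). The sample document below is my own reconstruction of OP's markup — the exact text content is assumed — with self-closing `anchor` tags splitting the `u` element's content into three text nodes:

```python
from lxml import etree

# Assumed shape of OP's markup: two self-closing <anchor/> tags splitting
# the content of <u> into three text nodes.
doc = etree.fromstring(
    "<root><u>I want this<anchor/>I don't want this also<anchor/>but I want this</u></root>"
)

# Keep only the text nodes that are NOT sandwiched between two anchor siblings.
kept = doc.xpath(
    "//u/text()[not(preceding-sibling::anchor and following-sibling::anchor)]"
)
print(kept)  # ['I want this', 'but I want this']
```

The middle text node has both a preceding and a following `anchor` sibling, so the `not(... and ...)` predicate drops it while keeping the outer two.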
I've seen some posts about how to dynamically add text fields to a form when a button is clicked, like in this post: https://stackoverflow.com/questions/60939821/how-to-add-two-v-text-fields-when-i-click-a-button-dynamically-using-vuetify

But how can I achieve this using Nuxt 2 and Vue 2? I use a Vuex store for the input data, but how can I identify the new text fields being added so I can update the store accordingly?

Ideally I'd have a `v-row` with 2 `v-col`s, and when I click the button, another row appears with the 2 `v-col`s, but I'm not understanding where I would put the `v-for` in this scenario.

Small snippet based on the above linked question:

```html
<template>
  <v-card>
    <v-card-text>
      <v-row>
        <v-col>
          <v-text-field
            :label="textField.label1"
            v-model="textField.value1"
          ></v-text-field>
          <v-text-field
            :label="textField.label2"
            v-model="textField.value2"
          ></v-text-field>
        </v-col>
        <v-col>
          <v-btn @click="addRow">Add Row</v-btn>
        </v-col>
      </v-row>
    </v-card-text>
  </v-card>
</template>
```
import torch.utils.tensorboard causes tensorflow warnings
|python|pytorch|tensorboard|
Both `Triangle_function()` and `Trapz()` call `plot()`. However, `plot()` creates a new figure and a new subplot each time it is called:

```python
def plot(MF):
    fig = Figure()
    plot1 = fig.add_subplot(111)
```

Therefore, the two functions cannot draw to the same figure. `plot()` should be refactored to get the behaviour you are looking for.
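One possible refactor — a sketch, assuming the two functions only need to share a single axes object (the membership-function data below is illustrative, not OP's): create the figure and subplot once, and have `plot()` draw onto the axes it is given instead of building its own `Figure`:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just for this sketch

from matplotlib.figure import Figure

# Create the figure and subplot ONCE, outside of plot().
fig = Figure()
ax = fig.add_subplot(111)

def plot(MF, ax):
    # Draw onto the shared axes instead of creating a new Figure each call.
    ax.plot(MF)

# Hypothetical membership functions standing in for Triangle_function()/Trapz():
plot([0, 1, 0], ax)     # triangle
plot([0, 1, 1, 0], ax)  # trapezoid

print(len(ax.lines))  # 2 -> both curves ended up on the same subplot
```

The same idea works if `plot()` is embedded in a tkinter canvas: keep one `Figure`/axes pair alive and pass it to whichever function needs to draw.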
I have a `Long` value named `delta` which contains a time value in milliseconds. I am trying to format it as a readable `String` like `00:00:00`, or `00:00` if the hour value is zero.

```kotlin
private fun remainingTime(delta: Long): String {
    val hours = TimeUnit.MILLISECONDS.toHours(delta)
    return if (hours == 0L) {
        String.format(
            "%02d:%02d",
            TimeUnit.MILLISECONDS.toMinutes(delta) -
                TimeUnit.MINUTES.toMinutes(TimeUnit.MILLISECONDS.toHours(delta)),
            TimeUnit.MILLISECONDS.toSeconds(delta) -
                TimeUnit.MINUTES.toSeconds(TimeUnit.MILLISECONDS.toMinutes(delta))
        )
    } else {
        String.format(
            "%02d:%02d:%02d",
            hours,
            TimeUnit.MILLISECONDS.toMinutes(delta) -
                TimeUnit.MINUTES.toMinutes(TimeUnit.MILLISECONDS.toHours(delta)),
            TimeUnit.MILLISECONDS.toSeconds(delta) -
                TimeUnit.MINUTES.toSeconds(TimeUnit.MILLISECONDS.toMinutes(delta))
        )
    }
}
```

The above code works, but it looks very inefficient. I bet there is a smarter Kotlin way to solve this?
Have you tried checking your cPanel firewall? Navigate to **Plugins > ConfigServer Security & Firewall > Firewall Configuration** and look for **SMTP_Block**. Check whether it is enabled, and if it is, make sure to disable it.
I am using the Quasar Framework in a Vue 3 application. I have "successfully" integrated the `<q-slider />` component. However, upon panning the knob on the component, the following message gets logged to the console:

`Unable to preventDefault inside passive event listener invocation`

Efforts to resolve the issue have pointed me towards the browser, with a common CSS solution being `style="touch-action: pan-x;"`, but this does not work for me.

I have tested this on:

- Chrome: issue exists
- Edge: issue exists
- Firefox: issue exists
- Safari: issue DOES NOT exist

My Quasar versions are:

- "@quasar/extras": "^1.16.9"
- "quasar": "^2.15.1"
- "@quasar/app-vite": "^1.8.0"

Edit: code snippet

```html
<script lang="ts" setup>
import { computed, ref } from 'vue'
import { sumBy } from 'lodash'

const strategy = ref([
  { label: 'Needs', color: 'primary', value: 50 },
  { label: 'Wants', color: 'negative', value: 29 },
  { label: 'Savings', color: 'positive', value: 20 },
])

const balance = computed(() => 100 - sumBy(strategy.value, 'value'))
</script>

<template>
  <div>
    <div
      v-for="(item, index) in strategy"
      :key="`strategy-item-${index}`"
    >
      <div class="row text-caption">
        <span class="text-body2"> {{ item.label }} </span>
        <q-space></q-space>
        {{ item.value }}%
      </div>
      <q-slider
        snap
        :step="5"
        :max="100"
        :color="item.color"
        v-model="item.value"
        :inner-max="item.value + balance"
      />
    </div>
  </div>
</template>
```
I have an Android library project which uses versions 7 of "Gradle" and the "Android Gradle Plugin" (versions 7.6.1 and 7.4.2 respectively, to be precise). Here are the parts of my library's `build.gradle` file which relate to publishing:

```lang-groovy
plugins {
    id("com.android.library")
    id("maven-publish")
}

task generateSourcesJar(type: Jar) {
    from android.sourceSets.main.java.srcDirs
    archiveClassifier.set("sources")
}

afterEvaluate {
    publishing {
        publications {
            release(MavenPublication) {
                from components.release
                artifact generateSourcesJar
                groupId "com.tazkiyatech"
                artifactId "android-utils"
                version "1.0.0"
            }
        }
        repositories {
            maven {
                name = "BuildFolder"
                url = "${project.buildDir}/repository"
            }
        }
    }
}
```

The various `publish...` Gradle tasks that are available to my project work fine until I bump up the versions of "Gradle" and the "Android Gradle Plugin" in the project to version 8. Once I upgrade to version 8, the various `publish...` Gradle tasks fail and return the following error:

```lang-none
* What went wrong:
A problem was found with the configuration of task ':library:generateSourcesJar' (type 'Jar').
  - Gradle detected a problem with the following location: '/Users/adil/Work/TazkiyaTech/android-utils/library/build/libs/library-sources.jar'.

    Reason: Task ':library:generateMetadataFileForReleasePublication' uses this output of task ':library:generateSourcesJar' without declaring an explicit or implicit dependency. This can lead to incorrect results being produced, depending on what order the tasks are executed.

    Possible solutions:
      1. Declare task ':library:generateSourcesJar' as an input of ':library:generateMetadataFileForReleasePublication'.
      2. Declare an explicit dependency on ':library:generateSourcesJar' from ':library:generateMetadataFileForReleasePublication' using Task#dependsOn.
      3. Declare an explicit dependency on ':library:generateSourcesJar' from ':library:generateMetadataFileForReleasePublication' using Task#mustRunAfter.

    For more information, please refer to https://docs.gradle.org/8.2.1/userguide/validation_problems.html#implicit_dependency in the Gradle documentation.
```

I've been unable to action the possible solutions listed in the error output given that I can't work out how to create a dependency between the `generateSourcesJar` task which I own and the `generateMetadataFileForReleasePublication` task which I don't own.

How can I get around this error and publish my library using versions 8 of "Gradle" and the "Android Gradle Plugin"?
|security|docker-compose|