I'm running into an error with an end-to-end Capybara spec in my Rails application. Here is the relevant part of my view: ```ruby #show.html.erb ... <%= button_to 'Cancel', cancel_subscription_path, class: 'button button--danger', data: { turbo_confirm: 'Are you sure you want to cancel?', turbo_method: 'post' } %> <div id="cancel_message"></div> ... ``` ```ruby # cancel_subscription_spec.rb require 'rails_helper' RSpec.describe 'CancelSubscriptions', js: true do it 'cancels an active subscription' do email = sign_up_and_subscribe visit subscription_path accept_confirm do click_button 'Cancel' end subscription = User.find_by(email:).current_subscription expect(subscription.cancelled?).to be(true) end end ``` `sign_up_and_subscribe` is just a simple helper method that signs up to the application and fills in the Stripe form to create a subscription - it works in other tests and I also see the subscription created in the Stripe dashboard, this isn't the issue. When the spec reaches `visit subscription_path` and attempts to click the cancel button, I get the following error: ``` Selenium::WebDriver::Error::UnexpectedAlertOpenError: unexpected alert open: {Alert text : } (Session info: chrome-headless-shell=123.0.6312.87) ``` I figured I was misusing the `accept_confirm` method, but from what I found in the documentation, this is the correct usage. What I found strange was that commenting out the `accept_confirm` in the spec and the `turbo_confirm` in the view didn't actually change the error message. Is Capybara caching the page in between test runs? I would consider changing the behaviour to make the spec pass at this point, I prefer to have a well-tested user flow even if that means a small change in the user experience, but I can't even do that, the issue remains the same. 
In case it's helpful, the relevant part of the `rails_helper.rb` is: ```ruby Capybara.register_driver :chrome do |app| Capybara::Selenium::Driver.new app, browser: :chrome, options: Selenium::WebDriver::Chrome::Options.new(args: %w[headless disable-gpu]) end Capybara.javascript_driver = :chrome ``` I can post the rest if it might be useful, but it's mostly unrelated as far as I can see. My `cancel` action in the controller looks like this: ```ruby def cancel @subscription = current_user.current_subscription SubscriptionCanceller.new(subscription_id: @subscription.provider_id).call rescue SubscriptionCancellationError => _e render :cancellation_error end ``` where I have `turbo_stream` views for `cancellation_error` and `cancel` itself, both of which work in the browser when I test them manually. Just in case that affects it, here is the `cancel.turbo_stream.erb` ```ruby <%= turbo_stream.replace 'cancel_message' do %> <div>We are cancelling your subscription.</div> <% end %> ``` If there is any further detail/code required, let me know and I'll post it.
I'm not at a proper machine to test this on, but I think it's simpler if you use a two-stage approach. First, generate each of the images with its label, then pipe all those into a second `montage` command to lay them all out together. In simple terms, that will look like this:

```
for f in *.jpg; do
   # Generate a single, labelled image on stdout in MIFF format
   magick "$f" ... MIFF:-
done | magick montage MIFF:- ... result.jpg
```

I use `MIFF` as the intermediate format because it can preserve and propagate any bit-depth, any transparency and all meta-data. So, your actual command will look more like this:

```
for f in $(ls -rt *.jpg) ; do
   # Take first 8 characters, no need to invoke external process like "cut"
   label="${f:0:8}"
   magick "$f" -label "$label" -font Arial -pointsize 16 \
      -auto-orient -resize 200x200 -background black \
      -fill grey MIFF:-
done | magick montage -background black MIFF:- -geometry +2+2 -tile 8x result.jpg
```

---

Remember not to put any debug code inside the `for` loop that writes on `stdout`, because it will get sent to the final `montage` command and confuse it. So, if you want debug statements inside the `for` loop, use `stderr` like this:

```
>&2 echo "DEBUG: Processing file $f"
```

Note that you may want to add `-extent 200x200` to the command inside the `for` loop to ensure smaller, or different aspect-ratio, images are padded up to a full 200x200.
I am trying to fetch lots of images from firebase storage and display it in my flutter app but I am facing speed issues. It is taking a lot of time for the images to load. Also once the images are loaded they are loading again after I scroll down and scroll up. Is there any solution to this ?. This is my code. void main() async { WidgetsFlutterBinding.ensureInitialized(); await Firebase.initializeApp(); runApp(MyApp()); } class MyApp extends StatelessWidget { @override Widget build(BuildContext context) { return MaterialApp( debugShowCheckedModeBanner: false, home: WallpaperScreen(), ); } } class WallpaperScreen extends StatefulWidget { @override _WallpaperScreenState createState() => _WallpaperScreenState(); } class _WallpaperScreenState extends State<WallpaperScreen> { List<String> imageUrls = []; @override void initState() { super.initState(); fetchImages(); } Future<void> fetchImages() async { try { final storageRef = FirebaseStorage.instance .refFromURL("firebase url"); final result = await storageRef.listAll(); for (var item in result.items) { try { final url = await item.getDownloadURL(); setState(() { imageUrls.add(url); }); } catch (error) { print("Error fetching URL: $error"); } } } catch (e) { print("Error fetching images: $e"); } } @override Widget build(BuildContext context) { return Scaffold( body: Container( decoration: const BoxDecoration( gradient: LinearGradient( begin: Alignment.topCenter, end: Alignment.bottomCenter, colors: [ Color(0xff0e0023), Color(0xff3a1e54), ], ), ), child: Column( mainAxisAlignment: MainAxisAlignment.center, children: [ const SizedBox(height: 40), const Align( alignment: Alignment.topCenter, child: Text( 'Wallpapers', style: TextStyle( fontSize: 24, fontWeight: FontWeight.bold, color: Colors.white), ), ), Expanded( child: Padding( padding: const EdgeInsets.all(20.0), child: GridView.builder( itemCount: imageUrls.length, gridDelegate: const SliverGridDelegateWithFixedCrossAxisCount( crossAxisCount: 2, childAspectRatio: 9 / 
16, crossAxisSpacing: 30, mainAxisSpacing: 30, ), itemBuilder: (BuildContext context, int index) { return ClipRRect( borderRadius: BorderRadius.circular(10), child: InkWell( onTap: () { Navigator.push( context, MaterialPageRoute( builder: (context) => FullScreenImageScreen( imageUrl: imageUrls[index], ), ), ); }, child: Image.network( imageUrls[index], fit: BoxFit.cover, loadingBuilder: (BuildContext context, Widget child, ImageChunkEvent? loadingProgress) { if (loadingProgress == null) { return child; } else { return Center( child: CircularProgressIndicator( value: loadingProgress.expectedTotalBytes != null ? loadingProgress.cumulativeBytesLoaded / loadingProgress.expectedTotalBytes! : null, ), ); } }, ), ), ); }, ), ), ), ], ), ), ); } }
Firebase storage : How to load images faster in flutter
|flutter|firebase|
The code below helped me solve the above problem:

```java
Object serviceBean = applicationContext.getBean(beanClassName);
if (AopUtils.isAopProxy(serviceBean) && serviceBean instanceof Advised) {
    // Unwrap the Spring AOP proxy to get the underlying target bean
    serviceBean = ((Advised) serviceBean).getTargetSource().getTarget();
}
Object savedObject = serviceBean.getClass().getMethod("save", moduleClass).invoke(serviceBean, object);
```
This pointcut works:

```
@Before("staticinitialization(org.mazouz.aop.Main1)")
```

While making it generic like below:

```
@Before("staticinitialization(*.*.*(..))")
```

I'm getting the below error:

```
Syntax error on token "staticinitialization(*.*.*(..))", ")" expected
```
My GUI application is not responsive, **I want it to resize according to the window size.** Currently I am using `.place(x, y)` for placing components. Example:

```python
label.place(x=100, y=50)
```

[Current components when window is maximized](https://i.stack.imgur.com/c0mvp.png)

Tried looking for a solution, and was informed of the `.pack()` method for placing components; it has 2 params, relative x (`relx`) and relative y (`rely`), in respect to its frame/master. Tested it out, but still my GUI app isn't responsive.
You may have misunderstood the doc here. Your actual concern seems to be about how to implement [SRP][1]. This isn't a valid concern because [the document mentions that][2]: > The app generates SRP details with the Amazon Cognito SRP features > that are built in to AWS SDKs. You don't need to worry about the details of SRP implementation. The SDK will handle that for you. Here's the second part of the question: How can you securely use Amazon Cognito in an unsecured environment like a web frontend or mobile app? Since Cognito offers [two types of operations][3]: 1. Authenticated operations: These require IAM credentials, an access token, a session token, a client secret, or a combination of these. 2. Unauthenticated operations: These don't require including any secrets in your code. Please refer to [this link][3] and open the "Unauthenticated user operations" section for more details. [![enter image description here][4]][4] If you're interested in the actual code implementation, there are [examples of usage][5] in JS that you can easily copy and paste into your project (be sure to follow any required prerequisites). [1]: https://www.linkedin.com/pulse/what-secure-remote-password-synologyc2/ [2]: https://docs.aws.amazon.com/en_en/cognito/latest/developerguide/amazon-cognito-user-pools-authentication-flow.html#amazon-cognito-user-pools-client-side-authentication-flow [3]: https://docs.aws.amazon.com/cognito/latest/developerguide/user-pools-API-operations.html#user-pool-apis-auth-unauth [4]: https://i.stack.imgur.com/sqXY0.png [5]: https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/javascriptv3/example_code/cognito-identity-provider
After migrating to Redux Toolkit 2.0 and Redux 5.0, I am struggling a bit with `extraReducers` and `createSlice` (I am still a beginner).
|reactjs|react-redux|redux-toolkit|
Use a named pipe. On the host OS, create a script to loop and read commands, and then call `eval` on them. Have the Docker container write to that named pipe. To be able to access the pipe, you need to mount it via a volume. This is similar to the SSH mechanism (or a similar socket-based method), but restricts you properly to the host device, which is probably better. Plus you don't have to be passing around authentication information.

My only warning is to be cautious about why you are doing this. It's totally something to do if you want to create a method to self-upgrade with user input or whatever, but you probably don't want to call a command to get some config data, as the proper way would be to pass that in as args/volume into Docker. Also, be cautious about the fact that you are evaling, so give the permission model a thought.

Some of the other answers, such as running a script under a volume, won't work generically since they won't have access to the full system resources, but they might be more appropriate depending on your usage.
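As a rough illustration of the host-side loop, here is a Python sketch (the FIFO path is hypothetical, and in real usage you would bind-mount it into the container; the "container side" is simulated in-process here):

```python
import os
import subprocess
import tempfile
import threading

# Hypothetical FIFO location; in practice you would mount this path
# into the container as a volume.
fifo = os.path.join(tempfile.mkdtemp(), "host_cmds")
os.mkfifo(fifo)

def host_loop(results):
    # Host side: block until the container opens the pipe, then run
    # each line it sends as a command (the "eval" step).
    with open(fifo) as f:
        for line in f:
            cmd = line.strip()
            if cmd == "quit":
                break
            out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
            results.append(out.stdout.strip())

results = []
reader = threading.Thread(target=host_loop, args=(results,))
reader.start()

# Container side (in the container this would be
# `echo 'some command' > /mounted/host_cmds`):
with open(fifo, "w") as f:
    f.write("echo hello-from-container\nquit\n")
reader.join()
print(results)  # ['hello-from-container']
```

Because the host evals whatever arrives, in practice you would whitelist the allowed commands rather than running arbitrary input.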
I want to make screen transition framework. Please tell me the best way. Conditions - Use ```tkraise()``` method. - Fast process during screen transitions is performed in main thread. (ex. tkinter(gui), change variable) - Processing that takes time during screen transitions is performed on the sub thread. (ex. control for databases, files) - Don't tkinter process in sub thread. Polling between threads is done using a queue. - File hierarchy is below. Create a file for each screen. ``` Multi threads ex Frame1 ↓ ↓ Raise waiting frame: main thread ↓ Data get from database: sub thread ↓ Set data Label and Treeview: main thread ↓ Raise control frame: main thread ↓ Frame2 ``` ``` File hierarchy app\ β”œ gui\ β”œ frames.py ο½₯ο½₯ο½₯ This file is define frame1, frame2, ο½₯ο½₯ο½₯, frameN. β”œ frame1.py β”œ frame2.py ο½₯ο½₯ο½₯ β”” frameN.py β”œ db\ β”œ (Omit) β”” main.py ``` I made this sample code. Screen transition and multi threads. ```Python import tkinter as tk from tkinter import ttk def on_button1(): frame2.tkraise() def on_button2(): frame1.tkraise() if __name__ == '__main__': root = tk.Tk() root.grid_rowconfigure(0, weight=1) root.grid_columnconfigure(0, weight=1) frame1 = ttk.Frame(root) frame1.grid(row=0, column=0, sticky='nsew') frame2 = ttk.Frame(root) frame2.grid(row=0, column=0, sticky='nsew') button1 = ttk.Button(frame1, text='To frame2', command=on_button1) button1.pack() button2 = ttk.Button(frame2, text='To frame1', command=on_button2) button2.pack() root.mainloop() ``` ```Python import queue import threading import time import tkinter as tk from datetime import datetime from tkinter import ttk def on_button(): button['state'] = tk.DISABLED thread = threading.Thread(target=ctrl_db) thread.start() def ctrl_db(): # Long process. (omit) time.sleep(3) # Don't tkinter process in sub thread. def func(): svar.set(datetime.now().strftime('%Y/%m/%d (%a) %H:%M:%S')) button['state'] = tk.NORMAL # Request tkinter process to main thread. 
queue.put([func,]) def polling(): # Process the received tkinter process. while not queue.empty(): func, *args = queue.get() func(*args) root.after(1000, polling) if __name__ == '__main__': queue = queue.Queue() root = tk.Tk() button = ttk.Button(root, text='Button', command=on_button) button.pack() svar = tk.StringVar() label = ttk.Label(root, textvariable=svar) label.pack() root.after(1000, polling) root.mainloop() ```
I am using Streamlit to build a chat application and the query text is not getting reset on submission. Here is my code and a screenshot for the same:

```
def submit():
    record_timing()  # Record time before submitting message
    st.session_state.something = st.session_state.widget
    st.session_state.widget = ''

if "messages" not in st.session_state:
    st.session_state.messages = [{"role": "assistant", "content": "How may I help you today?"}]

if user_prompt := st.text_input("Your message here", on_change=submit, key="text_input"):  # Assign unique key
    st.session_state.messages.append({"role": "user", "content": user_prompt})
    with st.chat_message("user"):
        st.write(user_prompt)

if st.session_state.messages[-1]["role"] != "assistant":
    with st.chat_message("assistant"):
        with st.spinner("Thinking..."):
            response = model(user_prompt, max_length, temp)
            placeholder = st.empty()
            full_response = ''
            for item in response:
                full_response += item
                placeholder.markdown(full_response)
            placeholder.markdown(full_response)
    message = {"role": "assistant", "content": full_response}
    st.session_state.messages.append(message)
```

[output](https://i.stack.imgur.com/t7jco.png)

I want the user question area to be reset after submission.
Text_input is not being cleared out/reset using streamlit
|python|chatbot|huggingface-transformers|streamlit|large-language-model|
I have written a Fortran subroutine to compute the size of an array and I want to get the result directly in R. However, I do not get the expected result. First I build the `size95.f95` file:

```fortran
subroutine fsize(x, n)
  double precision, intent(in):: x(:)
  integer, intent(out) :: n
  n = size(x)
end subroutine fsize
```

Then I compile it in the Windows CMD with

```
R CMD SHLIB size95.f95
```

However, after I load and test it in R for a 10-element vector, I get a length of 1 instead of 10.

```
x <- 1:10
dyn.load("size95.dll")
dotCall64::.C64("fsize",
                SIGNATURE = c("double", "integer"),
                INTENT = c("r", "w"),
                x=x, n=dotCall64::integer_dc(1))
# $x
# NULL
#
# $n
# [1] 1
```

The desired output should be

```
# $x
# NULL
#
# $n
# [1] 10
```
I'm using the below extension function to upsert data with EF Core via EFCore.BulkExtensions, but the issue is that executing this function to insert 2 million records takes around 17 minutes, and eventually it throws this exception:

**Could not allocate space for object 'dbo.SORT temporary run storage: 140737501921280' in database 'tempdb' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.\r\nThe transaction log for database 'tempdb' is full due to 'ACTIVE_TRANSACTION' and the holdup lsn is (41:136:347)**

I can also see the free storage on the "C" partition decreasing while this function executes.

[![enter image description here][1]][1]

When I restart SQL Server, the free space becomes around 30 GB. I tried to use multi-threading (Parallel) for the insertion with no noticeable time change. So what do you recommend, or is there any issue in the code below?

**Note:** the `for` loop is not taking much time, even with 2 million records.
``` public static async Task<OperationResultDto> AddOrUpdateBulkByTransactionAsync<TEntity>(this DbContext _myDatabaseContext, List<TEntity> data) where TEntity : class { using (var transaction = await _myDatabaseContext.Database.BeginTransactionAsync()) { try { _myDatabaseContext.Database.SetCommandTimeout(0); var currentTime = DateTime.Now; // Disable change tracking _myDatabaseContext.ChangeTracker.AutoDetectChangesEnabled = false; // Set CreatedDate and UpdatedDate for each entity foreach (var entity in data) { var createdDateProperty = entity.GetType().GetProperty("CreatedDate"); if (createdDateProperty != null && (createdDateProperty.GetValue(entity) == null || createdDateProperty.GetValue(entity).Equals(DateTime.MinValue))) { // Set CreatedDate only if it's not already set createdDateProperty.SetValue(entity, currentTime); } var updatedDateProperty = entity.GetType().GetProperty("UpdatedDate"); if (updatedDateProperty != null) { updatedDateProperty.SetValue(entity, currentTime); } } // Bulk insert or update var updateByProperties = GetUpdateByProperties<TEntity>(); var bulkConfig = new BulkConfig() { UpdateByProperties = updateByProperties, CalculateStats = true, SetOutputIdentity = false }; // Batch size for processing int batchSize = 50000; for (int i = 0; i < data.Count; i += batchSize) { var batch = data.Skip(i).Take(batchSize).ToList(); await _myDatabaseContext.BulkInsertOrUpdateAsync(batch, bulkConfig); } // Commit the transaction if everything succeeds await transaction.CommitAsync(); return new OperationResultDto { OperationResult = bulkConfig.StatsInfo }; } catch (Exception ex) { // Handle exceptions and roll back the transaction if something goes wrong transaction.Rollback(); return new OperationResultDto { Error = new ErrorDto { Details = ex.Message + ex.InnerException?.Message } }; } finally { // Re-enable change tracking _myDatabaseContext.ChangeTracker.AutoDetectChangesEnabled = true; } } } ``` [1]: https://i.stack.imgur.com/ts9IF.png
Upsert huge amount of data by EFCore.BulkExtensions
|c#|performance|.net-core|entity-framework-core|efcore.bulkextensions|
I have two applications: a Unity app and a Qt app. I need to include the Unity app content as an item in my Qt application. What is the best way of doing so? Could we have some form of communication between the two via RTSP or UDP? Is there even a way to send the texture through a UDP socket in Unity? Both applications will operate on the same machine.

I tried to embed the Unity app inside the Qt app using the following code:

```cpp
WId id = (WId)FindWindow(NULL, L"unityApp");
if (!id)
    return -1;
QApplication a(argc, argv);
QWindow* window = QWindow::fromWinId(id);
window->requestActivate();
QWidget* widget = QWidget::createWindowContainer(window);
widget->setGeometry(0, 0, 9300, 720);
widget->show();
```

but this solution uses Qt Widgets and I need it to be in QML. I found another solution where I can get the textureId from Unity and show it in my Qt app, but I have no idea if this is possible. I'm using Qt 6.5.3 on Windows.
The issue there is the `BackgroundView` itself. Just remove its frame. If you want it centered, remove the `VStack` and the `Spacer`:

```swift
struct BackgroundView: View {
    var body: some View {
        ZStack {
            Circle()
                .fill(.green)
        }
        .ignoresSafeArea()
    }
}
```
|php|mysql|
A common task I have is plotting time series data and creating gray bars that denote NBER recessions. For instance, `recessionplot()` from Matlab will do exactly that. I am not aware of similar funcionality in Python. Hence, I wrote the following function to automate this process: def add_nber_shade(ax: plt.Axes, nber_df: pd.DataFrame, alpha: float=0.2): """ Adds NBER recession shades to a singe plt.axes (tipically an "ax"). Args: ax (plt.Axes): The ax you want to change with data already plotted nber_df (pd.DataFrame): the Pandas dataframe with a "start" and an "end" column alpha (float): transparency Returns: plt.Axes: returns the same axes but with shades """ min_year = pd.to_datetime(min(ax.lines[0].get_xdata())).year nber_to_keep = nber_df[pd.to_datetime(nber_df["start"]).dt.year >= min_year] for start, end in zip(nber_to_keep["start"], nber_to_keep["end"]): ax.axvspan(start, end, color = "gray", alpha = alpha) return ax Here, `nber_df` that looks like the following (copying the dictionary version): {'start': {0: '1857-07-01', 1: '1860-11-01', 2: '1865-05-01', 3: '1869-07-01', 4: '1873-11-01', 5: '1882-04-01', 6: '1887-04-01', 7: '1890-08-01', 8: '1893-02-01', 9: '1896-01-01', 10: '1899-07-01', 11: '1902-10-01', 12: '1907-06-01', 13: '1910-02-01', 14: '1913-02-01', 15: '1918-09-01', 16: '1920-02-01', 17: '1923-06-01', 18: '1926-11-01', 19: '1929-09-01', 20: '1937-06-01', 21: '1945-03-01', 22: '1948-12-01', 23: '1953-08-01', 24: '1957-09-01', 25: '1960-05-01', 26: '1970-01-01', 27: '1973-12-01', 28: '1980-02-01', 29: '1981-08-01', 30: '1990-08-01', 31: '2001-04-01', 32: '2008-01-01', 33: '2020-03-01'}, 'end': {0: '1859-01-01', 1: '1861-07-01', 2: '1868-01-01', 3: '1871-01-01', 4: '1879-04-01', 5: '1885-06-01', 6: '1888-05-01', 7: '1891-06-01', 8: '1894-07-01', 9: '1897-07-01', 10: '1901-01-01', 11: '1904-09-01', 12: '1908-07-01', 13: '1912-02-01', 14: '1915-01-01', 15: '1919-04-01', 16: '1921-08-01', 17: '1924-08-01', 18: '1927-12-01', 19: '1933-04-01', 20: 
'1938-07-01', 21: '1945-11-01', 22: '1949-11-01', 23: '1954-06-01', 24: '1958-05-01', 25: '1961-03-01', 26: '1970-12-01', 27: '1975-04-01', 28: '1980-08-01', 29: '1982-12-01', 30: '1991-04-01', 31: '2001-12-01', 32: '2009-07-01', 33: '2020-05-01'}} The function is very simple. It retrieves the minimum and maximum dates that were plotted, slices the given dataframe with start and end dates and then it plots the bars. There are two major ways. In one way it will work as intended, but not in the other way. *The way it works*: df = pd.DataFrame(np.random.randn(3000, 2), columns=list('AB'), index=pd.date_range(start='1970-01-01', periods=3000, freq='W')) plt.figure() plt.plot(df.index, df['A'], lw = 0.2) add_nber_shade(plt.gca(), nber) plt.show() *The way it does not work* (using Pandas to plot directly) plt.figure() df.plot(y=["A"], lw = 0.2, ax = plt.gca(), legend=None) ut.add_nber_shade(plt.gca(), nber) plt.show() It throws out the following error: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[106], line 3 1 plt.figure() 2 df.plot(y=["A"], lw = 0.2, ax = plt.gca(), legend=None) ----> 3 ut.add_nber_shade(plt.gca(), nber) 4 plt.show() File ~/Dropbox/Projects/SpanVol/src/spanvol/utilities.py:20, in add_nber_shade(ax, nber_df, alpha) 8 def add_nber_shade(ax: plt.Axes, nber_df: pd.DataFrame, alpha: float=0.2): 9 """ 10 Adds NBER recession shades to a singe plt.axes (tipically an "ax"). 11 (...) 
18 plt.Axes: returns the same axes but with shades 19 """ ---> 20 min_year = pd.to_datetime(min(ax.lines[0].get_xdata())).year 21 nber_to_keep = nber_df[pd.to_datetime(nber_df["start"]).dt.year >= min_year] 23 for start, end in zip(nber_to_keep["start"], nber_to_keep["end"]): File ~/miniconda3/envs/volatility/lib/python3.11/site-packages/pandas/core/tools/datetimes.py:1146, in to_datetime(arg, errors, dayfirst, yearfirst, utc, format, exact, unit, infer_datetime_format, origin, cache) 1144 result = convert_listlike(argc, format) 1145 else: -> 1146 result = convert_listlike(np.array([arg]), format)[0] 1147 if isinstance(arg, bool) and isinstance(result, np.bool_): ... File tslib.pyx:552, in pandas._libs.tslib.array_to_datetime() File tslib.pyx:541, in pandas._libs.tslib.array_to_datetime() TypeError: <class 'pandas._libs.tslibs.period.Period'> is not convertible to datetime, at position 0 This is because Pandas is doing some transformation under the hood to deal with the index and is transforming it into some other class. Is there a simple way to either fix the function or some way to prevent pandas from doing it? Thanks a lot!
How to prevent Pandas from plotting index as Period?
|python|pandas|matplotlib|plot|time-series|
I am using CatBoostRegressor for my research, as you can see below:

```python
model = CatBoostRegressor(loss_function='RMSE', random_seed=seed, verbose=False)
model.fit(train_XN, train_Y)
```

My question is whether CatBoostRegressor automatically uses a portion of the training data for validation and early stopping in case of overfitting, even if I don't explicitly specify an evaluation set (`eval_set=(X_val, y_val)`) during the fitting process.
eval_set in CatBoostRegressor
|validation|catboostregressor|
In WooCommerce, I aim to enforce a minimum quantity restriction on purchased items by category. Here's an example:

- For products 1, 2, and 3 (all in the Shoes category), the combined minimum quantity is 10.
- For products 4, 5, and 6 (all in the Pants category), the combined minimum quantity is 10.
- For products 7, 8, and 9 (all in the Shirt category), the combined minimum quantity is 10.

Each category should have at least 10 products ordered. However, these can be spread across multiple products within the same category.

I found a piece of code that partially meets my needs. It displays the minimum notice, but it allows mixing of multiple categories, which is not what I want. Here's the code (with the counters initialized before the loops to avoid undefined-variable notices):

```php
add_action('woocommerce_check_cart_items', 'custom_set_min_total');
function custom_set_min_total() {
    if (is_cart() || is_checkout()) {
        global $woocommerce, $product;
        $i = 0;
        $total_quantity = 0;
        $minimum_cart_product_total = 10;
        foreach ($woocommerce->cart->cart_contents as $product) :
            if (has_term(array('oneya', 'twoya'), 'product_cat', $product['product_id'])) :
                $total_quantity += $product['quantity'];
            endif;
        endforeach;

        foreach ($woocommerce->cart->cart_contents as $product) :
            if (has_term(array('oneya', 'twoya'), 'product_cat', $product['product_id'])) :
                if ($total_quantity < $minimum_cart_product_total && $i == 0) {
                    wc_add_notice(sprintf('A minimum of 10 products is required per unique design. You have not met these requirements. Please add at least 10 items to proceed with your order.', $minimum_cart_product_total, $total_quantity), 'error');
                }
                $i++;
            endif;
        endforeach;
    }
}
```
Get ID of user being updated inside Form Request to ignore validation when updating same user
When using **kubectl get nodes**, the connection refused error you are experiencing is probably caused by the way your Ansible playbook handles the kubeconfig file. **Here is how to resolve the problem and make the Calico deployment possible.**

- Make sure the master node's Kubernetes API server is up and working by checking its port 6443. This can be accomplished by confirming that no firewall rules are preventing the connection and monitoring the kube-apiserver service's status.
- Due to security considerations, it is generally discouraged to run Ansible jobs with root capabilities. Instead, create a dedicated user for Kubernetes management and configure Ansible to use that user.
- Inside the playbook, once the kubeconfig has been copied to the non-root user's home directory, ensure that the user has the necessary permissions, and use the `file` module's `owner` and `group` options to properly specify ownership and group.
- Change the playbook's `kubectl get nodes` task to point to the non-root user's kubeconfig directory.
- Using the official Calico CNI provider for Kubernetes on [Ansible Galaxy][1], you can deploy Calico once you have a functional kubeconfig established for your non-root user.

Instead of pasting the kubeconfig directly into the playbook, think about storing it as a Kubernetes secret for an enhanced security advantage. Use [Ansible Vault][2] to safely handle private data, such as kubeadm tokens.

Refer to this [blog][3] by Austin for more information.

[1]: https://galaxy.ansible.com/ui/repo/published/community/kubernetes/
[2]: https://docs.ansible.com/ansible/latest/cli/ansible-vault.html
[3]: https://austinsnerdythings.com/2022/04/25/deploying-a-kubernetes-cluster-within-proxmox-using-ansible/
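A minimal sketch of the kubeconfig steps described above (the user name `kubeadmin` and the paths are hypothetical; adapt them to your playbook):

```yaml
- name: Copy admin kubeconfig to the non-root user's home
  ansible.builtin.copy:
    src: /etc/kubernetes/admin.conf
    dest: /home/kubeadmin/.kube/config
    remote_src: true
    owner: kubeadmin        # ownership/group set as described above
    group: kubeadmin
    mode: "0600"

- name: Run kubectl get nodes as the non-root user
  ansible.builtin.command: kubectl get nodes --kubeconfig /home/kubeadmin/.kube/config
  become: true
  become_user: kubeadmin
  changed_when: false
```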
I am trying to sign in to a project I'm working on to test the authentication. Currently, when I try to sign in, I'm met with the error "_supabaseClient__WEBPACK_IMPORTED_MODULE_1__.supabase.auth.signIn is not a function" This is a React project. I'm using Supabase for my authentication. I will include my LogIn.js file (the file housing the log in function) as well as my package.json file so you can see all my dependencies. I cannot for the life of me figure this out haha. Thanks in advance. LogIn.js: ``` import React, { useState } from "react"; import { supabase } from "../supabaseClient"; import "./LogIn.css"; const LogIn = () => { const [email, setEmail] = useState(""); const [password, setPassword] = useState(""); const handleLogIn = async (e) => { e.preventDefault(); const { error } = await supabase.auth.signIn({ email, password }); if (error) { alert(`Error logging in: ${error.message}`); } else { alert("Login successful!"); } }; return ( <div className="auth-form-container"> <form className="login-form" onSubmit={handleLogIn}> <h1>Welcome back to BookSwap</h1> <input type="email" placeholder="Email" value={email} onChange={(e) => setEmail(e.target.value)} required /> <input type="password" placeholder="Password" value={password} onChange={(e) => setPassword(e.target.value)} required /> <button type="submit">Log In</button> </form> </div> ); }; export default LogIn; ``` package.json: ``` { "name": "argon-design-system-react", "version": "1.1.2", "description": "React version of Argon Design System by Creative Tim", "main": "index.js", "repository": { "type": "git", "url": "git+https://github.com/creativetimofficial/argon-design-system-react.git" }, "keywords": [ "react", "reactjs", "argon", "argon-react", "design", "design-react", "argon-design", "argon-design-react", "kit", "react-kit", "argon-design-system", "argon-design-system-react", "design-system-react" ], "author": "Creative Tim", "license": "MIT", "bugs": { "url": 
"https://github.com/creativetimofficial/argon-design-system-react/issues" }, "homepage": "https://demos.creative-tim.com/argon-design-system-react/", "scripts": { "start": "react-scripts start", "build": "react-scripts build", "test": "react-scripts test", "eject": "react-scripts eject", "install:clean": "rm -rf node_modules/ && rm -rf package-lock.json && npm install && npm start", "compile-sass": "sass src/assets/scss/argon-design-system-react.scss src/assets/css/argon-design-system-react.css", "minify-sass": "sass src/assets/scss/argon-design-system-react.scss src/assets/css/argon-design-system-react.min.css --style compressed" }, "eslintConfig": { "extends": "react-app" }, "browserslist": [ ">0.2%", "not dead", "not ie <= 11", "not op_mini all" ], "dependencies": { "@fortawesome/fontawesome-svg-core": "^6.5.1", "@fortawesome/free-solid-svg-icons": "^6.5.1", "@fortawesome/react-fontawesome": "^0.2.0", "@supabase/supabase-js": "^2.41.1", "bootstrap": "4.6.2", "classnames": "2.3.2", "headroom.js": "^0.12.0", "moment": "2.29.4", "nouislider": "15.4.0", "react": ">=16.0.0", "react-datetime": "3.2.0", "react-dom": ">=16.0.0", "react-router-dom": "6.11.1", "react-scripts": "5.0.1", "reactstrap": "8.10.0", "sass": "1.62.1" }, "peerDependencies": { "react": ">=16.0.0", "react-dom": ">=16.0.0" }, "devDependencies": { "@types/markerclustererplus": "2.1.33", "@types/react": "18.2.6", "eslint-plugin-flowtype": "8.0.3", "jquery": "3.7.0", "typescript": "5.0.4" }, "overrides": { "svgo": "3.0.2" } } ``` I have tried changing the .signIn to .signInWithPassword, in case I wasn't utilizing the right function based on the library I was using, but that didn't work. I still got the same error, apart from the function it was referencing (sign in vs. sign in with password). I changed the entire LogIn.js function at one point to utilize the supabase OTP method, which worked, but I don't want to utilize a OTP. I want to be able to log in with a username/password. 
***EDIT*** I tried using `signInWithPassword` again, and it worked. I'm guessing the first time I typed `SignInWithPassword` (capital S on Sign) and didn't notice.
|php|arrays|wordpress|random|
null
- After setting the format with `.Font.Bold = True`, `InsertAfter` inserts the text with the same format.

```vb
Set wdDoc = wdApp.Documents.Add
Set WordRange = wdDoc.Content
Const TXT = "Severity: "
With WordRange
    .Text = TXT & Sheets("CleanSheet").Cells(2, 2).Value & vbCr
    Dim iEnd As Long: iEnd = Len(TXT)
    wdDoc.Range(0, iEnd).Font.Bold = True
End With
```

---

- If you prefer to use `Selection`:

```vb
Set wdDoc = wdApp.Documents.Add
Const wdCollapseEnd = 0
With wdApp.Selection
    .Font.Bold = True
    .Typetext "Severity: "
    .Collapse wdCollapseEnd
    .Font.Bold = False
    .Typetext Sheets("CleanSheet").Cells(2, 2).Value & vbCr
End With
```

_Microsoft documentation:_

> [Range.Collapse method (Word)](https://learn.microsoft.com/en-us/office/vba/api/word.range.collapse?WT.mc_id=M365-MVP-33461&f1url=%3FappId%3DDev11IDEF1%26l%3Den-US%26k%3Dk(vbawd10.chm157155429)%3Bk(TargetFrameworkMoniker-Office.Version%3Dv16)%26rd%3Dtrue)
> [Selection.TypeText method (Word)](https://learn.microsoft.com/en-us/office/vba/api/word.selection.typetext?WT.mc_id=M365-MVP-33461&f1url=%3FappId%3DDev11IDEF1%26l%3Den-US%26k%3Dk(vbawd10.chm158663163)%3Bk(TargetFrameworkMoniker-Office.Version%3Dv16)%26rd%3Dtrue)
> [Selection object (Word)](https://learn.microsoft.com/en-us/office/vba/api/word.selection?WT.mc_id=M365-MVP-33461&f1url=%3FappId%3DDev11IDEF1%26l%3Den-US%26k%3Dk(vbawd10.chm2421)%3Bk(TargetFrameworkMoniker-Office.Version%3Dv16)%26rd%3Dtrue)
Python 3 Tkinter: I want to make a screen-transition framework
|python-3.x|
|security|mobile|server|proxy|
I’m trying to figure out an algorithm to distribute a set of 20 to 5000 items into a variable number of boxes (2 to 50). Items are identified by a number of properties, and we need to distribute them into the boxes as evenly as possible by their properties (evenly, not randomly). Each item has (in order of priority): 1. Color: red, blue, green, yellow 2. Shape: round, square, or triangle 3. Size: small, medium, large 4. New or Used The goal is for each box to end up with the same total number of items, the same number of each color, same number of each shape, etc. The property values of the incoming items are effectively random (or at least unpredictable). There can also be a variable number of properties, so maybe weight, material, or other criteria may be added later on. If the number of items is not evenly divisible by the number of boxes, then some boxes will have 1 more item than the others (whole units only). We want the contents of the boxes to end up as close to identical as possible – no box should be different or better than any other box (aside from issues where whole units cannot be divided). If it were just 1 property, that is easy enough – just loop through each color and split them between the boxes – similar to dealing cards. But then add the subsequent criteria, and then… well then short of brute force, I’m stuck. So I’m hoping somebody here has some advice on how to accomplish this in a reasonably efficient way.
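One workable approach short of brute force is a greedy pass: keep per-box tallies of each (property, value) pair, and place every item into one of the currently smallest boxes, picking the one where that item's property values are rarest so far, with higher-priority properties weighted more heavily. The following is only a sketch under assumed data shapes (items as dicts; the property names mirror the example and are otherwise made up), not a known-optimal algorithm:

```python
import random
from collections import Counter

def distribute(items, n_boxes, props):
    """Greedy sketch: box sizes stay within one item of each other, and
    each item goes to the candidate box where its property values are
    currently rarest (earlier props in the list weigh more)."""
    boxes = [[] for _ in range(n_boxes)]
    counts = [Counter() for _ in range(n_boxes)]  # per-box (prop, value) tallies
    weights = {p: 2 ** (len(props) - i) for i, p in enumerate(props)}

    for item in items:
        # only boxes with the minimum size are candidates -> sizes differ by <= 1
        smallest = min(len(b) for b in boxes)
        candidates = [i for i, b in enumerate(boxes) if len(b) == smallest]
        best = min(candidates,
                   key=lambda b: sum(counts[b][(p, item[p])] * weights[p]
                                     for p in props))
        boxes[best].append(item)
        for p in props:
            counts[best][(p, item[p])] += 1
    return boxes

# Hypothetical items matching the example's properties
random.seed(1)
props = ["color", "shape", "size", "condition"]
items = [{"color": random.choice(["red", "blue", "green", "yellow"]),
          "shape": random.choice(["round", "square", "triangle"]),
          "size": random.choice(["small", "medium", "large"]),
          "condition": random.choice(["new", "used"])}
         for _ in range(103)]
boxes = distribute(items, 5, props)
print([len(b) for b in boxes])
```

Sorting the incoming items by their property tuple before the pass tends to tighten the per-property balance further, and a follow-up round of item swaps between boxes can repair any remaining imbalance without brute force.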
Distribute items evenly into buckets
|algorithm|
My component:

```jsx
import { h } from 'preact';
import { quoteSignal } from '/quoteStore.ts';

export default function Button() {
  return (
    <button>hello 1 {quoteSignal.value}</button>
  );
}
```

My signal:

```ts
import { signal } from '@preact/signals';

// Create a signal for the shared label
export const quoteSignal = signal('first');
```

My island:

```jsx
import { useEffect } from 'preact/hooks';
import { quoteSignal } from '/quoteStore.ts';

export default function QuoteIsland() {
  useEffect(() => {
    quoteSignal.value = "Test Label";
    console.log("useeffect");

    const fetchQuote = async () => {
      const response = await fetch('/api/quote/mystock');
      const data = await response.json();
      console.log(data.price);
      quoteSignal.value = data.price;
    };

    fetchQuote();
    const intervalId = setInterval(fetchQuote, 10000); // 10 seconds
    return () => clearInterval(intervalId);
  }, []);

  return null; // This component does not need to render anything
}
```

"Test Label" is never displayed, and neither are any of the updates from the API, even though the console log shows them. The button only ever displays its initial label, "first".
deno signal not updating in components
|typescript|deno|fresh|
I have already installed Node and React but I am unable to install TypeScript in that same project. Do I need to uninstall all of them and install again with TypeScript? Or can it be done manually? If yes, then please tell me how to do this.
null
I am trying to create a Gutenburg ACF block, following [this tutorial](https://www.advancedcustomfields.com/resources/blocks/). And I get the error below: Notice: WP_Block_Type_Registry::register was called incorrectly. Block type names must contain a namespace prefix. I found [this](https://support.advancedcustomfields.com/forums/topic/new-block-json-method-yields-no-block-types-exist/#post-157203) on the ACF forum and tried all the solutions with no luck. My current code in `functions.php`: add_action( 'init', 'register_acf_blocks' ); function register_acf_blocks() { register_block_type( get_template_directory_uri(). '/blocks/testimonial' ); } I also tried this: add_action( 'init', 'register_acf_blocks' ); function register_acf_blocks() { register_block_type( get_template_directory_uri(). '/blocks/testimonial/block.json' ); And this: add_action( 'init', 'register_acf_blocks' ); function register_acf_blocks() { register_block_type( __DIR__ . '/blocks/testimonial' ); } And also this: add_action( 'init', 'register_acf_blocks' ); function register_acf_blocks() { register_block_type( __DIR__ . '/blocks/testimonial/block.json' ); } I verified all my file paths.
```
import mysql.connector
import pandas as pd

# Connect to MySQL
mydb = mysql.connector.connect(
    host='localhost',
    user='root',
    password='**********',
    database='Customer'
)
cursor = mydb.cursor()

# Read CSV file
data = pd.read_csv("Customers.csv")

# Insert data into MySQL
for row in data.itertuples():
    cursor.execute("""
        INSERT INTO Customer (Gender, Age, AnnualIncome, SpendingScore, Profession, WorkExperience, FamilySize)
        VALUES (%s, %s, %s, %s, %s, %s, %s)
    """, (row.Gender, row.Age, row.AnnualIncome, row.SpendingScore, row.Profession, row.WorkExperience, row.FamilySize))

# Commit changes and close connection
mydb.commit()
mydb.close()
```

I'm not sure what I'm doing wrong; I continuously get this error message: `AttributeError: 'Pandas' object has no attribute 'AnnualIncome'`.

These are the column names in the CSV file: CustomerID, Gender, Age, Annual Income ($), Spending Score (1-100), Profession, Work Experience, Family Size. And this is the SQL table that I've created:

```sql
CREATE TABLE Customer.Customers(
    CustomerID int AUTO_INCREMENT PRIMARY KEY,
    Gender ENUM('Male', 'Female'),
    Age int,
    AnnualIncome int,
    SpendingScore int,
    Profession VARCHAR(255),
    WorkExperience int,
    FamilySize int
);
```

Can anyone show me where I went wrong and why I'm getting this error message? Your response is greatly appreciated. I'm expecting to see the data within the CSV file imported into the SQL table that I've created in VS Code.
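For what it's worth, the error usually comes from the header names rather than the database code: headers like `Annual Income ($)` are not valid Python identifiers, so `itertuples()` replaces them with positional names such as `_4`, and `row.AnnualIncome` then doesn't exist. A hedged sketch (a couple of made-up rows stand in for Customers.csv) that renames the columns first:

```python
import io
import pandas as pd

# Stand-in for Customers.csv, using the question's raw headers
csv_text = """CustomerID,Gender,Age,Annual Income ($),Spending Score (1-100),Profession,Work Experience,Family Size
1,Male,19,15000,39,Healthcare,1,4
2,Female,21,35000,81,Engineer,3,3
"""
data = pd.read_csv(io.StringIO(csv_text))

# Rename the headers to valid identifiers so itertuples() keeps them
# as attribute names instead of falling back to _4, _5, ...
data = data.rename(columns={
    "Annual Income ($)": "AnnualIncome",
    "Spending Score (1-100)": "SpendingScore",
    "Work Experience": "WorkExperience",
    "Family Size": "FamilySize",
})

rows = [(r.Gender, r.Age, r.AnnualIncome, r.SpendingScore,
         r.Profession, r.WorkExperience, r.FamilySize)
        for r in data.itertuples(index=False)]
print(rows[0])
```

After the rename, the existing `INSERT` loop can stay exactly as it is.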
>can not on localhost

For the server's docker-compose `ports`, add this: `- '127.0.0.1:5000:5000'`

You are exposing the Docker image's port, but you also need to publish the port on the docker-compose network.
I am wondering what the difference in usage is between `op_kwargs` and `templates_dict` in Airflow, since both are templated fields in `PythonOperator`: `template_fields = ['templates_dict', 'op_args', 'op_kwargs']`. I checked the [documentation][1] but it's still not clear to me:

```
op_kwargs (Optional[Mapping[str, Any]]) – a dictionary of keyword arguments that will get unpacked in your function

templates_dict (Optional[Dict[str, Any]]) – a dictionary where the values are templates that will get templated by the Airflow engine sometime between __init__ and execute takes place and are made available in your callable’s context after the template has been applied. (templated)
```

Any thoughts?

  [1]: https://airflow.apache.org/docs/apache-airflow/stable/_api/airflow/operators/python/index.html#airflow.operators.python.PythonOperator
RSpec Capybara throwing Selenium error when trying to click a button with browser confirm
|javascript|ruby-on-rails|ruby|selenium-webdriver|capybara|
How can I code in openpyxl to fill a formula down a column, similar to copying and pasting in Excel with Ctrl+D?
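If the goal is to replicate Excel's fill-down, openpyxl ships a formula `Translator` that shifts relative references the same way a paste does. A minimal sketch (the cell addresses and the `=A1+B1` formula are just examples):

```python
from openpyxl import Workbook
from openpyxl.formula.translate import Translator

wb = Workbook()
ws = wb.active
ws["C1"] = "=A1+B1"

# "Fill down" like Ctrl+D: translate the formula's relative references
# for each subsequent row, the way Excel adjusts them on paste.
for row in range(2, 6):
    ws.cell(row=row, column=3).value = Translator(
        "=A1+B1", origin="C1"
    ).translate_formula(f"C{row}")

print(ws["C3"].value)  # -> =A3+B3
```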
When I create a `LazyVerticalGrid` I can set the minSize of the columns with this:

    LazyVerticalGrid(
        columns = GridCells.Adaptive(minSize = 192.dp),

The problem is that I want to specify that the maximum number of columns must be 2, but I can't find any parameter to do it. How can I achieve this?
I am creating a basic C program for the Raspberry Pi Pico W. I want it to basically log onto the internet and get the time, then use the time to switch a relay either on or off. I am using the curl library, as that is what the internet suggests. My issue is that I cannot even build the project. I get the following error:

    sys/socket.h: No such file or directory
      428 | #  include <sys/socket.h>

Please can you help me out with this? Here is a link; if you could tell me what I am doing wrong, that would be awesome: https://github.com/FlawlessCalamity/pico-realay-on-off/tree/main
Nothing worked for me except plain old "restart IDE".
I'm integrating output from OpenAI's ChatGPT streaming API into a web application and facing challenges with rendering newline characters `\n` as well as dealing with `\\n` (which represents an escaped newline character). The output can contain newlines that need to be displayed as HTML line breaks (`<br />`) in most text. However, there are specific scenarios, such as within code blocks or when \n or \\n are part of the instructional content, where these newline indicators should be preserved as-is. The simple naive approach of globally replacing `\n` with `<br />` does not suffice, as it fails to distinguish between contexts where a newline is meant for formatting versus when it is part of the actual content (e.g., `\n` appearing in a string within a programming tutorial or `\\n` used to demonstrate escape sequences in text). I find it a bit funny I've programmed for so many years and I've likely run into this problem in various forms, and it seems to have always been handled very automatically by whatever escaping mechanisms, yet when I think of the problem it seems a lot more complicated. Is there some kind of text formatting standard or convention for how this should be handled? Or perhaps this something like this little search replace is all I really need and it's capable of handling all cases? ``` var myEscapedJSONString = myJSONString.replace(/\\n/g, "\\n") .replace(/\\'/g, "\\'") .replace(/\\"/g, '\\"') .replace(/\\&/g, "\\&") .replace(/\\r/g, "\\r") .replace(/\\t/g, "\\t") .replace(/\\b/g, "\\b") .replace(/\\f/g, "\\f"); ``` From: https://stackoverflow.com/a/4253415/1663462
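There is no single universal standard for this; the pragmatic approach is to make the transformation context-aware rather than one global replace. A sketch of that idea, which only converts real newline characters to `<br />` outside fenced code blocks (the triple-backtick fence syntax is an assumption about the streamed content's format):

```javascript
// Sketch: split the text on fenced code blocks (kept via the capture
// group) and only convert literal newlines to <br /> in the prose
// segments, leaving code blocks untouched.
function newlinesToBreaks(text) {
  return text
    .split(/(```[\s\S]*?```)/) // odd indices are the fenced blocks
    .map((seg, i) => (i % 2 === 1 ? seg : seg.replace(/\n/g, "<br />")))
    .join("");
}

const sample = "line one\nline two\n```\ncode keeps \\n and newlines\n```\nafter";
console.log(newlinesToBreaks(sample));
```

Escaped sequences such as `\n` written as two characters survive untouched in both prose and code, because only the literal newline character is matched.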
I'm using the LAG() function in SQL to calculate the difference between the current month's tips sum and the previous month's tips sum for each taxi driver. However, the LAG() function is returning 0 for every row, even though there should be previous rows to reference. This is the example of the query I am using now: SELECT taxi_id, EXTRACT(YEAR FROM DATE(trip_start_timestamp)) AS year, EXTRACT(MONTH FROM DATE(trip_start_timestamp)) AS month, ROUND(SUM(tips), 3) AS tips_sum, LAG(ROUND(SUM(tips), 3), 1, 0) OVER (PARTITION BY taxi_id ORDER BY year, month) AS previous_tips_sum FROM taxi_trips GROUP BY taxi_id, EXTRACT(YEAR FROM DATE(trip_start_timestamp)), EXTRACT(MONTH FROM DATE(trip_start_timestamp)) ORDER BY tips_sum DESC;
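As a sanity check, the window logic itself behaves as expected when isolated. Here is a stand-alone sketch using SQLite (3.25+) with made-up monthly rows in place of `taxi_trips`: the default `0` should only appear on the first month of each `taxi_id` partition.

```python
import sqlite3  # bundled SQLite >= 3.25 supports window functions

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE monthly_tips (taxi_id INT, year INT, month INT, tips_sum REAL);
INSERT INTO monthly_tips VALUES
  (1, 2023, 1, 10.0), (1, 2023, 2, 12.5), (1, 2023, 3, 9.0),
  (2, 2023, 1, 5.0),  (2, 2023, 2, 7.0);
""")
rows = con.execute("""
SELECT taxi_id, year, month, tips_sum,
       LAG(tips_sum, 1, 0) OVER (
         PARTITION BY taxi_id ORDER BY year, month
       ) AS previous_tips_sum
FROM monthly_tips
ORDER BY taxi_id, year, month
""").fetchall()
for r in rows:
    print(r)
```

If the real output is 0 everywhere, it is worth verifying that each partition really contains more than one (year, month) row after the grouping, and that the ordering columns inside `OVER` are the ones you expect.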
SQL LAG() function returning 0 for every row despite available previous rows
|sql|google-bigquery|window-functions|
null
@Before("staticinitialization(org.mazouz.aop.Main1)") While making the below generic such that it can work for any package @Before("staticinitialization(*.*.*(..))") Im getting the below error Syntax error on token "staticinitialization(*.*.*(..))", ")" expected
I'm using `make` (Automake) to compile and execute unit tests. However, these tests need to read and write test data. If I just specify a path, the tests only work from a specific directory. Even using `builddir` isn't particularly useful, because it is of course evaluated at compile time rather than at runtime. It would be nice if the builddir were passed at runtime instead. But would it be better to specify it via the command arguments or via the environment? And is it "better" to specify individual files or a generic directory? I would consider a `PATH`-like search behaviour overkill for just a test, or would that be recommended? So the question is: how would I best specify the path to a test file, in terms of portability, interoperability, maintainability and common sense?
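One portable pattern is to resolve the data location at run time from the environment, with a sane fallback: as far as I know, Automake's test harness exports `srcdir` to each test it runs, so tests keep working from any directory, while a plain `./test` invocation still finds files next to it. (`resolve_data_file` and the `testdata/` layout are made-up names for illustration.)

```shell
# Resolve test data relative to a run-time directory instead of a
# compile-time path.  ${srcdir:-.} uses the srcdir exported by the
# Automake test harness when present, and falls back to the current
# directory otherwise.
TEST_DATA_DIR="${srcdir:-.}/testdata"

resolve_data_file() {
    printf '%s/%s\n' "$TEST_DATA_DIR" "$1"
}

resolve_data_file input.txt
```

Passing a directory rather than individual files keeps the interface stable as the test suite grows; a `PATH`-like search is indeed overkill here.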
How to refer to test data files?
|c|unit-testing|testing|makefile|automake|
Is there a way to distribute more evenly than with the method below? I'm also curious about how to distribute across more than two lists. Each list can contain only five numbers.

```
intList = []
for i in range(10):
    intList.append(random.randint(10,200))

listX = []
listY = []
intList.sort(reverse=True)

for i in range(len(intList)):
    if sum(listX) >= sum(listY):
        if len(listY) != 5:
            listY.append(intList[i])
        else:
            listX.append(intList[i])
    else:
        if len(listX) != 5:
            listX.append(intList[i])
        else:
            listY.append(intList[i])

print(f"listx = {listX} \nlisty = {listY}\n sum x = {sum(listX)}, y = {sum(listY)}")
```
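With only ten numbers and two lists of five, exhaustive search is cheap — there are just C(10, 5) = 252 possible splits — so instead of a greedy pass you can simply pick the split whose sums are closest. A sketch:

```python
import random
from itertools import combinations

random.seed(7)
nums = [random.randint(10, 200) for _ in range(10)]

# Try every way to choose 5 of the 10 numbers for listX and keep the
# choice that makes the two sums as close as possible.
total = sum(nums)
best = min(combinations(nums, 5), key=lambda c: abs(total - 2 * sum(c)))

listX = list(best)
listY = nums.copy()
for v in listX:       # remove one occurrence per chosen value
    listY.remove(v)

print(sum(listX), sum(listY))
```

For more than two lists the number of partitions grows quickly, so beyond small inputs the usual fallback is a greedy "deal each number to the list with the currently smallest sum" pass (like yours) followed by local swap improvements.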
This is more a hint than a question: the goal is to parse a command line **and** create a useful *usage* message.

Code:

    for arg ; do
      case "$arg" in
        --edit) # edit file
          cd "$(dirname $0)" && vim "$0"
          ;;
        --noN) # do NOT create 'NHI1/../tags'
          let noN=1
          ;;
        --noS) # do NOT create 'HOME/src/*-latest/tags'
          let noS=1
          ;;
        --help) # write this help message
          ;&
        *)
          echo "usage: $(basename $0) options..." 1>&2
          awk '/--?\w+\)/' "$0" 1>&2
          exit
          ;;
      esac
    done

This creates the *usage* message:

    > build_tags.bash -x
    usage: build_tags.bash options...
        --edit) # edit file
        --noN) # do NOT create 'NHI1/../tags'
        --noS) # do NOT create 'HOME/src/*-latest/tags'
        --help) # write this help message

The clue is that the *definition* of each *case* target is also the *documentation* of that target.
Parse command line arguments and write useful usage message without additional code
I have to execute a shell script `log-agent.sh` inside my docker container. **Dockerfile** FROM openjdk:11-jre LABEL org.opencontainers.image.authors="gurucharan.sharma@paytm.com" # Install AWS CLI RUN apt-get update && \ apt-get install -y awscli && \ apt-get clean VOLUME /tmp ARG JAR_FILE ARG PROFILE ADD ${JAR_FILE} app.jar ENV PROFILE_ENV=${PROFILE} EXPOSE 8080 COPY entrypoint.sh / COPY log-agent.sh / # Set permissions for log-agent.sh RUN chmod +x /log-agent.sh # Use entrypoint.sh as the entry point ENTRYPOINT ["/entrypoint.sh"] # Execute log-agent.sh # RUN /bin/bash -c '/logs/log-agent.sh' CMD ["/bin/bash", "-c", "/log-agent.sh"] **entrypoint.sh** #!/bin/bash # Run your script /bin/bash /log-agent.sh # Checking for lead detection level if [ -z "$LEAK_DETECTION_LEVEL" ] then LEAK_DETECTION_LEVEL=advanced fi # Start the Spring Boot application java -Dspring.profiles.active=${PROFILE_ENV} -XX:+UseG1GC -Dio.netty.leakDetection.level=${LEAK_DETECTION_LEVEL} -Djava.security.egd=file:/dev/./urandom -Dloader.main=com.adtech.DemoApplication -jar app.jar The application startup is successful, but the container is not executing the script. No errors in the logs as well. Here is what I have already verified: 1. File location. 2. File permissions. 3. Validated the shell script for correctness. (proper shebang operator) 4. Script is executing correctly if executed manually from the container using the docker exec command Any suggestions?
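One thing worth knowing here: when an image sets both `ENTRYPOINT` and `CMD`, Docker does not run the `CMD` separately — it is passed as arguments to the entrypoint, so a `CMD` line like the one above never executes on its own unless the entrypoint uses its arguments. The common pattern is to do setup in the entrypoint and finish with `exec "$@"`. A plain-shell sketch of that interaction (the echoes stand in for the real `/log-agent.sh` and `java` invocations):

```shell
# Sketch of the ENTRYPOINT + CMD interaction: setup runs first, then
# the arguments Docker appended (the CMD) become the main process.
entrypoint() {
    echo "starting log agent (stand-in for: /log-agent.sh &)"
    echo "exec: $*"   # a real entrypoint would end with: exec "$@"
}

entrypoint java -jar app.jar
```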
```
ValueError: The shape of the target variable and the shape of the target
value in `variable.assign(value)` must match. variable.shape=(1, 1, 3, 3),
Received: value.shape=(1, 1, 32, 3).
Target variable: <KerasVariable shape=(1, 1, 3, 3), dtype=float32, path=conv2d_13/kernel>
```

Model: "functional_52"

```
Layer (type)                               Output Shape         Param #      Connected to
input_layer (InputLayer)                   (None, 32, 32, 3)    0            -
conv2d (Conv2D)                            (None, 30, 30, 32)   896          input_layer[0][0]
flatten (Flatten)                          (None, 28800)        0            conv2d[0][0]
dense (Dense)                              (None, 32)           921,632      flatten[0][0]
dense_1 (Dense)                            (None, 6)            198          dense[0][0]
spatial_transformer (SpatialTransformer)   (None, 32, 32, 3)    0            conv2d[0][0], dense_1[0][0]
conv2d_1 (Conv2D)                          (None, 32, 32, 3)    12           spatial_transformer[0][0]
sequential (Sequential)                    (None, 3)            8,963,395    conv2d_1[0][0]
dense_5 (Dense)                            (None, 3)            12           sequential[0][0]

Total params: 9,886,145 (37.71 MB)
Trainable params: 9,878,721 (37.68 MB)
Non-trainable params: 7,424 (29.00 KB)
```

I'm not able to load the model, and I can't tell which layer is causing the problem. I tried debugging but couldn't figure it out.
How to appropriately handle newlines and the escaping of them?
|text|escaping|newline|
The data structure for this is a [Trie][1] (Prefix Tree): * Time Efficiency: Search, Insertion & Deletion: With a time-complexity of **O(m)** where m is the length of the string. * Space Efficiency: Store the unique characters present in the strings. This can be a space advantage compared to storing the entire strings. ```php <?php class TrieNode { public $childNode = []; // Associative array to store child nodes public $endOfString = false; // Flag to indicate end of a string } class Trie { private $root; public function __construct() { $this->root = new TrieNode(); } public function insert($string) { if (!empty($string)) { $this->insertRecursive($this->root, $string); } } private function insertRecursive(&$node, $string) { if (empty($string)) { $node->endOfString = true; return; } $firstChar = $string[0]; $remainingString = substr($string, 1); if (!isset($node->childNode[$firstChar])) { $node->childNode[$firstChar] = new TrieNode(); } $this->insertRecursive($node->childNode[$firstChar], $remainingString); } public function commonPrefix() { $commonPrefix = ''; $this->commonPrefixRecursive($this->root, $commonPrefix); return $commonPrefix; } private function commonPrefixRecursive($node, &$commonPrefix) { if (count($node->childNode) !== 1 || $node->endOfString) { return; } $firstChar = array_key_first($node->childNode); $commonPrefix .= $firstChar; $this->commonPrefixRecursive($node->childNode[$firstChar], $commonPrefix); } } // Example usage $trie = new Trie(); $trie->insert("Softball - Counties"); $trie->insert("Softball - Eastern"); $trie->insert("Softball - North Harbour"); $trie->insert("Softball - South"); $trie->insert("Softball - Western"); echo "Common prefix: " . $trie->commonPrefix() . 
PHP_EOL; ?> ``` Output: Common prefix: Softball - [Demo][2] Trie Visualization (Green nodes are marked: endOfString): [![enter image description here][3]][3] [1]: https://en.wikipedia.org/wiki/Trie [2]: https://onecompiler.com/php/428w3k8pe [3]: https://i.stack.imgur.com/4gw0r.png
I have a MapView with ItemizedOverlays, exactly like in the example of the Android Developer's guide: http://developer.android.com/resources/tutorials/views/hello-mapview.html

In that example, when you press an item, a dialog is shown with a title and a body:

    protected boolean onTap(int index) {
      OverlayItem item = mOverlays.get(index);
      AlertDialog.Builder dialog = new AlertDialog.Builder(mContext);
      dialog.setTitle(item.getTitle());
      dialog.setMessage(item.getSnippet());
      dialog.show();
      return true;
    }

It works fine, and I still need to show that dialog, but I need to add A BUTTON that loads a new activity when pressed, and maybe some more text lines. How can I do it?
Can I personalize the onTap() dialog of the items on my Google MapView? (I want to add a button to it)
Unable to install TypeScript in my project?
|typescript|
null
{"Voters":[{"Id":4939819,"DisplayName":"Hassaan"},{"Id":3603681,"DisplayName":"Professor Abronsius"},{"Id":8034901,"DisplayName":"brombeer"}],"SiteSpecificCloseReasonIds":[18]}
Configure CMakeLists.txt to avoid manually copying DLLs
I am using the Voila JupyterLab extension. Voila renders all plots, ipywidgets and markdown on one system but not on another. All my package versions and browsers are up to date.

[Plots not rendering on one system](https://i.stack.imgur.com/DgvHd.png)
[Plots rendering on another system](https://i.stack.imgur.com/xXb28.png)

I updated all the package versions, the ipywidgets and Voila versions, and the browser versions for both systems. The only difference between the two systems is the plotly build and channel.

[Works for this system][1]
[Does not work for this system][2]

  [1]: https://i.stack.imgur.com/9d7QD.png
  [2]: https://i.stack.imgur.com/tb9Q7.png

It also works with matplotlib on both systems.
The implementation of the dropdown is dependent on the browser and the os. Unfortunately there are not enough events to describe the actions you wish for, like when the options are shown or hidden. The closest I got was with `focus` and `blur`. I tried hacking it with `mousedown` and `change` and `click` but the most reliable is to wait for the user to `blur` before changing values. <!-- begin snippet: js hide: false console: true babel: false --> <!-- language: lang-js --> var oldValue = null function CreateStageDropdown(pRootStage = g_pLanguageTree) { let eDropDown = document.createElement("select"); document.getElementById("MainDiv").appendChild(eDropDown); let tOptions = CreateStageDropdown_Options(pRootStage, 0); for (let i = 0; i < tOptions.length; i++) { let eOption = document.createElement("option"); eOption.label = tOptions[i]; eOption.value = i; eDropDown.appendChild(eOption); } eDropDown.addEventListener("blur", function(Event) { let sIn = eDropDown.options[eDropDown.selectedIndex].label oldValue = sIn let sOut = sIn.replaceAll(/\s*/g, ""); eDropDown.options[eDropDown.selectedIndex].label = sOut }); eDropDown.addEventListener("change", function(Event) { // eDropDown.blur() }); eDropDown.addEventListener("focus", function(Event) { if (oldValue !== null) { eDropDown.options[eDropDown.selectedIndex].label = oldValue oldValue = null } //eDropDown.value = sOut; }); return [eDropDown, tOptions]; } function CreateStageDropdown_Options(pStage, iNestingLayers) { let tOptions = []; tOptions.push("\u00A0\u00A0\u00A0\u00A0".repeat(iNestingLayers) + pStage.Name); let tChildren = pStage.Children; if (tChildren.length > 0) { for (let i = 0; i < tChildren.length; i++) { let pChild = tChildren[i]; tOptions.push(CreateStageDropdown_Options(pChild, iNestingLayers + 1)); } } tOptions = tOptions.flat(); return tOptions; } let g_pLanguageTree = { "Name": "Indo-European", "Children": [{ "Name": "Germanic", "Children": [{ "Name": "North Germanic", "Children": [{ "Name": 
"Norwegian", "Children": [] }, { "Name": "Swedish", "Children": [] }, { "Name": "Danish", "Children": [] } ] }, { "Name": "West Germanic", "Children": [{ "Name": "German", "Children": [] }, { "Name": "Dutch", "Children": [] }, { "Name": "English", "Children": [] } ] }, { "Name": "East Germanic", "Children": [{ "Name": "Gothic", "Children": [] }] } ] }, { "Name": "Italic", "Children": [{ "Name": "Umbrian", "Children": [] }, { "Name": "Oscan", "Children": [] }, { "Name": "Latin", "Children": [{ "Name": "Spanish", "Children": [] }, { "Name": "French", "Children": [] }, { "Name": "Italian", "Children": [] }, { "Name": "Portuguese", "Children": [] }, { "Name": "Romanian", "Children": [] } ] }, ] }, { "Name": "Anatolian", "Children": [{ "Name": "Hittite", "Children": [] }, { "Name": "Luwian", "Children": [] } ] }, { "Name": "Armenian", "Children": [] } ] } CreateStageDropdown() <!-- language: lang-html --> <p>Select a sub item </p> <div id="MainDiv"> </div> <p>and then</p> <button>lose focus</button> <!-- end snippet -->
|machine-learning|databricks|artifact|automl|
|machine-learning|model|nlp-question-answering|fine-tuning|chat-gpt-4|
I've extensively searched the depths of the internet, eager to seamlessly apply TTF font styling to user input. With a vast repository of over 12,000 TTF font files hosted on my website, the current functionality renders input text as PNG files. However, I now want to enable users to directly copy the styled text. I'm open to exploring solutions in any programming language. Feel free to review my current implementation here: https://update.cutestfonts.com/. I'm flexible and capable of working with any programming language or solution.

I tried converting the TTF fonts to WOFF, but when someone copies the text it doesn't keep the TTF font style. I tried manipulating the current code, like imagettftext(), but had no success.
Copy Text converted using TTf fonts to clipboard
|javascript|python|php|reactjs|laravel|
null
{"OriginalQuestionIds":[5533050],"Voters":[{"Id":6868543,"DisplayName":"j6t"},{"Id":86072,"DisplayName":"LeGEC","BindingReason":{"GoldTagBadge":"git"}}]}
{"OriginalQuestionIds":[75329101],"Voters":[{"Id":11107541,"DisplayName":"starball","BindingReason":{"GoldTagBadge":"visual-studio-code"}}]}
I want to create a game for two people, where each person controls a plane by dragging a finger. But I don't see any methods in Compose to detect the second drag. The `onDragStart` callback is not called for the second drag when I press the screen with two fingers.

My code:

```
@Composable
fun AirRacesContent() {
    var boxWidth by remember { mutableStateOf(0.dp) }
    var boxHeight by remember { mutableStateOf(0.dp) }
    val density = LocalDensity.current
    val TAG = "AirRacesContent"

    Box(modifier = Modifier
        .fillMaxSize()
        .paint(painterResource(id = R.drawable.background), contentScale = ContentScale.FillBounds)
        .onSizeChanged { size ->
            with(density) {
                boxWidth = size.width.toDp()
                boxHeight = size.height.toDp()
            }
        }
        .pointerInput(Unit) {
            detectDragGestures(
                onDragStart = { offset ->
                    Log.d(TAG, "drag start: $offset")
                },
                onDrag = { change, dragAmount ->
                }
            )
        }) {
        val planeSize = 70.dp
        Image(
            painter = painterResource(id = R.drawable.plane1),
            contentDescription = null,
            modifier = Modifier.size(planeSize)
                .offset(boxWidth / 4 - planeSize/2, boxHeight * 8 / 10)
        )
        Image(
            painter = painterResource(id = R.drawable.plane2),
            contentDescription = null,
            modifier = Modifier.size(planeSize)
                .offset(boxWidth / 4 * 3 - planeSize/2, boxHeight * 8 / 10)
        )
    }
}
```