Q: How does copy-pasting inadvertently add UTF-8 encoding? I have a string in a non-UTF-8 encoding: $str = Processing’s; The following json_encode works fine because I'm copying and pasting the code into my editor: $str = Processing’s; echo json_encode($str); Output: "Processing\u2019s" But if I retrieve the same string Processing’s from the DB and call json_encode on it, it returns NULL because it is not a UTF-8 string. Why does the copy-pasted code behave like a UTF-8 encoded string? What happens when I copy and paste the code?
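The underlying issue is an encoding mismatch rather than anything specific to json_encode: the editor saves the pasted character in the source file's encoding (UTF-8), whereas the database returns the original bytes in a legacy single-byte encoding such as Windows-1252 (an assumption about this particular DB, but a common case). A sketch of the mechanism, shown in Python for brevity; in PHP the equivalent fix is transcoding with mb_convert_encoding or iconv before calling json_encode:

```python
import json

# The pasted literal lives in a UTF-8 source file, so the apostrophe is the
# three-byte UTF-8 sequence E2 80 99:
utf8_bytes = "Processing’s".encode("utf-8")

# A legacy database column may instead hold the single-byte Windows-1252
# encoding of the same character (byte 92), which is not valid UTF-8:
cp1252_bytes = "Processing’s".encode("cp1252")

# Decoding the DB bytes as UTF-8 fails -- the PHP analogue is json_encode
# returning NULL for a non-UTF-8 string:
try:
    cp1252_bytes.decode("utf-8")
    decoded_ok = True
except UnicodeDecodeError:
    decoded_ok = False

# The fix is to transcode to UTF-8 before JSON-encoding
# (mb_convert_encoding / iconv in PHP):
fixed = cp1252_bytes.decode("cp1252")
encoded = json.dumps(fixed)
```

So copy-pasting "works" only because the clipboard hands the character to the editor as text and the editor re-encodes it as UTF-8 on save; the bytes in the database never changed.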
{ "language": "en", "url": "https://stackoverflow.com/questions/32883088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Check if users are members of an AD group - PowerShell I have a txt file with a list of usernames, and I want to check whether these users are members of a specific AD group. If they aren't a member of this group, their username has to be written to a csv file. So the output should be a csv file with all usernames that are not in the specified AD group. I always get the error "A connection to the directory on which to process the request was unavailable. This is likely a transient condition." The error is reproduced when implementing a workflow that triggers up to 20 PowerShell sessions to run, each importing the Active Directory module to create a connection with AD. This is my short script: $userlist = get-content -Path "C:\Temp\users.txt" $group = "group_xy" $result = foreach ($user in $userlist) { $groupmembers = Get-ADgroup -Filter {Name -eq $group}|Get-ADGroupMember if ($groupmembers.samaccountname -notmatch $users){ [PSCustomObject]@{ Name = $user Group = $group Member = 'False' } } } $result |Export-csv "C:\Temp\Result.csv" -NoTypeInformation How can I solve this? BR A: if ($groupmembers.samaccountname -notmatch $users) { should be if ($groupmembers.samaccountname -notmatch $user) { Since you never define $users, I think this is the error.
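As a side note on the fix above, it also helps to fetch the group membership once, outside the loop, instead of querying AD on every iteration. A minimal sketch of that set-difference logic, in Python with made-up usernames (not the author's script, just the shape of the computation):

```python
import csv
import io

# Stand-ins for users.txt and the group's sAMAccountNames (made up):
userlist = ["alice", "bob", "carol"]
group_members = {"alice", "carol"}  # fetched once, not per loop iteration

# Write one CSV row per user who is NOT in the group:
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["Name", "Group", "Member"])
writer.writeheader()
for user in userlist:
    if user not in group_members:
        writer.writerow({"Name": user, "Group": "group_xy", "Member": "False"})
```

Fetching the membership a single time both avoids the typo'd per-row comparison and reduces the number of AD round-trips, which matters when 20 sessions run the script concurrently.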
{ "language": "en", "url": "https://stackoverflow.com/questions/72011628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Null when sharing data between Activity and Fragment I have a problem: I can't share a RxBleClient between MainActivity and Fragments. I pass an instance of RxBleClient to MyViewModel, but when I try to receive it from the ViewModel in the Fragment, it is just null. The Fragment is part of a ViewPager2. Here is how I did this: View Model code: public class MyViewModel extends ViewModel { private MutableLiveData<RxBleClient> rxBleClient = new MutableLiveData<RxBleClient>(); public MyViewModel() { } public void setRxBleClient(RxBleClient item) { rxBleClient.setValue(item); } public LiveData<RxBleClient> getRxBleClient() { return rxBleClient; } } MainActivity: @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); MyViewModel model = new ViewModelProvider(this).get(MyViewModel.class); // Turn on Bluetooth Intent enableBtIntent = new Intent(BluetoothAdapter.ACTION_REQUEST_ENABLE); int REQUEST_ENABLE_BT = 1; this.startActivityForResult(enableBtIntent, REQUEST_ENABLE_BT); // pass instance of RxBleClient to ViewModel rxBleClient = RxBleClient.create(getApplicationContext()); model.setRxBleClient(rxBleClient); // bind ButterKnife.bind(this); // rest of code Fragment code: @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { View view = inflater.inflate(R.layout.fragment_connection_list, container, false); scanResultList = new ArrayList<ScanResult>(0); // Set the adapter if (view instanceof RecyclerView) { Context context = view.getContext(); RecyclerView recyclerView = (RecyclerView) view; if (mColumnCount <= 1) { recyclerView.setLayoutManager(new LinearLayoutManager(context)); } else { recyclerView.setLayoutManager(new GridLayoutManager(context, mColumnCount)); } recyclerView.setAdapter(new MyConnectionRecyclerViewAdapter(scanResultList)); } // get View Model myViewModel = new ViewModelProvider(getActivity()).get(MyViewModel.class); 
myViewModel.getRxBleClient().observe(getViewLifecycleOwner(), client -> { this.rxBleClient = client; // HERE IS NULL }); return view; } Where did I make a mistake? First edit: This is my only Activity so far. This Activity is hosting the ViewPager2: // FRAGMENT ADAPTER fragmentStateAdapter = new ViewPager2FragmentStateAdapter(getSupportFragmentManager(), getLifecycle()); viewPager2 = findViewById(R.id.pager); viewPager2.setAdapter(fragmentStateAdapter); // TOP BAR final String tabTitles[] = {"Debugging", "Bluetooth", "Video"}; final @DrawableRes int tabDrawable[] = {R.drawable.ic_bug_report_24px, R.drawable.ic_bluetooth_24px, R.drawable.ic_videocam_24px}; tabLayout = findViewById(R.id.tab_layout); new TabLayoutMediator(tabLayout, viewPager2, (tab, position) -> tab.setText(tabTitles[position]).setIcon(tabDrawable[position]) ).attach();
{ "language": "en", "url": "https://stackoverflow.com/questions/66808886", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Use a config.py file in other directories of my project My project's structure is: My_Project: ├── scripts │ ├── script.py ├── main_config │ ├── config.py I'm trying to use the config.py file in main_config from the script.py file. I'm working with SageMaker Studio, so currently my project is not a module. First I added __init__.py files to both the main project dir and the scripts dir. I saw that I can use the following code: import sys sys.path.append('/root/My_Project/main_config') from main_config.config import BehConfig BehConfig is a simple class I created in the config.py file: class BehConfig: x = 'test' y = ['test1', 'test2'] When I tried to print both x and y, I got the correct values. But when I modified the config.py file and added, for instance, a value to the list y, running the print command still gave the previous output without the value that I added. Does anyone know why it's not working? My guess is that it has to do with the sys.path.append above the import, but I was unable to fix the issue. Thank you for your help!
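The described symptom (edits to config.py not being picked up) points at Python's module cache rather than sys.path: a module's source is executed once per interpreter session, and later import statements reuse the cached module object, so in a long-running SageMaker Studio kernel the old class definition survives until the kernel restarts or the module is reloaded. A minimal, self-contained sketch of the effect and the importlib.reload fix (the single-module layout and temp paths here are simplified stand-ins for the real project):

```python
import importlib
import pathlib
import sys
import tempfile

# Create a throwaway config.py and make it importable:
pkg = pathlib.Path(tempfile.mkdtemp())
(pkg / "config.py").write_text("y = ['test1', 'test2']\n")
sys.path.insert(0, str(pkg))

import config
assert config.y == ["test1", "test2"]

# Edit the file on disk -- the cached module does NOT see the change:
(pkg / "config.py").write_text("y = ['test1', 'test2', 'test3']\n")
import config  # no-op: served from the sys.modules cache
assert config.y == ["test1", "test2"]

# importlib.reload re-executes the module source (in a notebook/SageMaker
# Studio, restarting the kernel has the same effect):
importlib.reload(config)
assert config.y == ["test1", "test2", "test3"]
```

In a notebook you can also enable `%load_ext autoreload` / `%autoreload 2` so modules are reloaded automatically before each cell runs.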
{ "language": "en", "url": "https://stackoverflow.com/questions/74420794", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: MySQL query staying in 'SENDING DATA' state for a long time when using LEFT JOIN I have a query that stays in the SENDING DATA state for a very long period of time. Can someone please help me with this? Below are the details. MySQL query: select a.msgId,a.senderId,a.destination,a.inTime,a.status as InStatus,b.status as SubStatus,c.deliverTime,substr(c.receipt,82,7) as DlvStatus from inserted_history a left join submitted_history b on b.msgId = a.msgId left join delivered_history c on a.msgId = c.msgId where a.inTime between '2010-08-10 00:00:00' and '2010-08-010 23:59:59' and a.systemId='ND_arber' Total records in delivered_history : 223870168 Total records in inserted_history : 264817239 Total records in submitted_history : 226637058 Explain query returns: id , select_type , table , type , possible_keys , key , key_len , ref , rows , Extra 1 , SIMPLE , a , ref , systemId,idx_time , systemId , 14 , const , 735310 , Using where 1 , SIMPLE , b , ref , PRIMARY , PRIMARY , 66 , gwreports2.a.msgId , 2270405 , 1 , SIMPLE , c , ref , PRIMARY , PRIMARY , 66 , gwreports2.a.msgId , 2238701 , CREATE TABLE for delivered_history CREATE TABLE `delivered_history` ( `msgId` VARCHAR(64) NOT NULL, `systemId` VARCHAR(12) NOT NULL, `deliverTime` TIMESTAMP NOT NULL DEFAULT '0000-00-00 00:00:00', `smscId` VARCHAR(64) NOT NULL, `smsc` VARCHAR(20) NOT NULL, `receipt` BLOB NULL, `errcode` INT(11) NULL DEFAULT NULL, PRIMARY KEY (`msgId`, `deliverTime`), INDEX `systemId` (`systemId`), INDEX `smsc` (`smsc`), INDEX `idx_time` (`deliverTime`) ) ROW_FORMAT=DEFAULT CREATE TABLE for inserted_history CREATE TABLE `inserted_history` ( `msgId` VARCHAR(64) NOT NULL, `systemId` VARCHAR(12) NOT NULL, `senderId` VARCHAR(15) NOT NULL, `destination` VARCHAR(15) NOT NULL, `inTime` TIMESTAMP NOT NULL DEFAULT '0000-00-00 00:00:00', `status` VARCHAR(20) NOT NULL, `msgText` BLOB NULL, `msgType` VARCHAR(15) NULL DEFAULT NULL, PRIMARY KEY (`msgId`, `inTime`), INDEX `systemId` (`systemId`), INDEX `senderId` 
(`senderId`), INDEX `destination` (`destination`), INDEX `status` (`status`), INDEX `idx_time` (`inTime`) ) ROW_FORMAT=DEFAULT CREATE TABLE for submitted_history CREATE TABLE `submitted_history` ( `msgId` VARCHAR(64) NOT NULL, `systemId` VARCHAR(12) NOT NULL, `submitTime` TIMESTAMP NOT NULL DEFAULT '0000-00-00 00:00:00', `status` VARCHAR(20) NOT NULL, `smscId` VARCHAR(64) NOT NULL, `smsc` VARCHAR(16) NOT NULL, `errcode` INT(6) NULL DEFAULT '0', PRIMARY KEY (`msgId`, `submitTime`), INDEX `systemId` (`systemId`), INDEX `smsc` (`smsc`), INDEX `status` (`status`), INDEX `idx_time` (`submitTime`) ) ROW_FORMAT=DEFAULT ALL TABLES ARE DATE PARTIONED on the timestamp fields List of the global variables in Mysql Server Variable_name , Value auto_increment_increment , 1 auto_increment_offset , 1 autocommit , ON automatic_sp_privileges , ON back_log , 50 basedir , /usr/ big_tables , OFF binlog_cache_size , 32768 binlog_format , STATEMENT bulk_insert_buffer_size , 8388608 character_set_client , latin1 character_set_connection , latin1 character_set_database , latin1 character_set_filesystem , binary character_set_results , latin1 character_set_server , latin1 character_set_system , utf8 character_sets_dir , /usr/share/mysql/charsets/ collation_connection , latin1_swedish_ci collation_database , latin1_swedish_ci collation_server , latin1_swedish_ci completion_type , 0 concurrent_insert , 1 connect_timeout , 10 datadir , /var/lib/mysql/ date_format , %Y-%m-%d datetime_format , %Y-%m-%d %H:%i:%s default_week_format , 0 delay_key_write , ON delayed_insert_limit , 100 delayed_insert_timeout , 300 delayed_queue_size , 1000 div_precision_increment , 4 engine_condition_pushdown , ON error_count , 0 event_scheduler , OFF expire_logs_days , 10 flush , OFF flush_time , 0 foreign_key_checks , ON ft_boolean_syntax , + -><()~*: &| ft_max_word_len , 84 ft_min_word_len , 4 ft_query_expansion_limit , 20 ft_stopword_file , (built-in) general_log , OFF general_log_file , 
/var/run/mysqld/mysqld.log group_concat_max_len , 1024 have_community_features , YES have_compress , YES have_crypt , YES have_csv , YES have_dynamic_loading , YES have_geometry , YES have_innodb , YES have_ndbcluster , NO have_openssl , DISABLED have_partitioning , YES have_query_cache , YES have_rtree_keys , YES have_ssl , DISABLED have_symlink , YES hostname , smscdb identity , 0 ignore_builtin_innodb , OFF init_connect , init_file , init_slave , innodb_adaptive_hash_index , ON innodb_additional_mem_pool_size , 1048576 innodb_autoextend_increment , 8 innodb_autoinc_lock_mode , 1 innodb_buffer_pool_size , 8388608 innodb_checksums , ON innodb_commit_concurrency , 0 innodb_concurrency_tickets , 500 innodb_data_file_path , ibdata1:10M:autoextend innodb_data_home_dir , innodb_doublewrite , ON innodb_fast_shutdown , 1 innodb_file_io_threads , 4 innodb_file_per_table , OFF innodb_flush_log_at_trx_commit , 1 innodb_flush_method , innodb_force_recovery , 0 innodb_lock_wait_timeout , 50 innodb_locks_unsafe_for_binlog , OFF innodb_log_buffer_size , 1048576 innodb_log_file_size , 5242880 innodb_log_files_in_group , 2 innodb_log_group_home_dir , ./ innodb_max_dirty_pages_pct , 90 innodb_max_purge_lag , 0 innodb_mirrored_log_groups , 1 innodb_open_files , 300 innodb_rollback_on_timeout , OFF innodb_stats_on_metadata , ON innodb_support_xa , ON innodb_sync_spin_loops , 20 innodb_table_locks , ON innodb_thread_concurrency , 8 innodb_thread_sleep_delay , 10000 innodb_use_legacy_cardinality_algorithm , ON insert_id , 0 interactive_timeout , 28800 join_buffer_size , 131072 keep_files_on_create , OFF key_buffer_size , 1073741824 key_cache_age_threshold , 300 key_cache_block_size , 1024 key_cache_division_limit , 100 language , /usr/share/mysql/english/ large_files_support , ON large_page_size , 0 large_pages , OFF last_insert_id , 0 lc_time_names , en_US license , GPL local_infile , ON locked_in_memory , OFF log , OFF log_bin , ON log_bin_trust_function_creators , OFF 
log_bin_trust_routine_creators , OFF log_error , log_output , FILE log_queries_not_using_indexes , OFF log_slave_updates , OFF log_slow_queries , OFF log_warnings , 1 long_query_time , 10.000000 low_priority_updates , OFF lower_case_file_system , OFF lower_case_table_names , 0 max_allowed_packet , 536870912 max_binlog_cache_size , 4294963200 max_binlog_size , 104857600 max_connect_errors , 10 max_connections , 151 max_delayed_threads , 20 max_error_count , 64 max_heap_table_size , 16777216 max_insert_delayed_threads , 20 max_join_size , 18446744073709551615 max_length_for_sort_data , 1024 max_prepared_stmt_count , 16382 max_relay_log_size , 0 max_seeks_for_key , 4294967295 max_sort_length , 1024 max_sp_recursion_depth , 0 max_tmp_tables , 32 max_user_connections , 0 max_write_lock_count , 4294967295 min_examined_row_limit , 0 multi_range_count , 256 myisam_data_pointer_size , 6 myisam_max_sort_file_size , 2146435072 myisam_recover_options , BACKUP myisam_repair_threads , 1 myisam_sort_buffer_size , 8388608 myisam_stats_method , nulls_unequal myisam_use_mmap , OFF net_buffer_length , 16384 net_read_timeout , 30 net_retry_count , 10 net_write_timeout , 60 new , OFF old , OFF old_alter_table , OFF old_passwords , OFF open_files_limit , 20000 optimizer_prune_level , 1 optimizer_search_depth , 62 optimizer_switch , index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on pid_file , /var/run/mysqld/mysqld.pid plugin_dir , /usr/lib/mysql/plugin port , 3306 preload_buffer_size , 32768 profiling , OFF profiling_history_size , 15 protocol_version , 10 pseudo_thread_id , 0 query_alloc_block_size , 8192 query_cache_limit , 1073741824 query_cache_min_res_unit , 4096 query_cache_size , 536870912 query_cache_type , ON query_cache_wlock_invalidate , OFF query_prealloc_size , 8192 rand_seed1 , rand_seed2 , range_alloc_block_size , 4096 read_buffer_size , 131072 read_only , OFF read_rnd_buffer_size , 33554432 relay_log , relay_log_index , 
relay_log_info_file , relay-log.info relay_log_purge , ON relay_log_space_limit , 0 report_host , report_password , report_port , 3306 report_user , rpl_recovery_rank , 0 secure_auth , OFF secure_file_priv , server_id , 3 skip_external_locking , ON skip_networking , OFF skip_show_database , OFF slave_compressed_protocol , OFF slave_exec_mode , STRICT slave_load_tmpdir , /tmp slave_net_timeout , 3600 slave_skip_errors , OFF slave_transaction_retries , 10 slow_launch_time , 2 slow_query_log , OFF slow_query_log_file , /var/run/mysqld/mysqld-slow.log socket , /var/run/mysqld/mysqld.sock sort_buffer_size , 67108864 sql_auto_is_null , ON sql_big_selects , ON sql_big_tables , OFF sql_buffer_result , OFF sql_log_bin , ON sql_log_off , OFF sql_log_update , ON sql_low_priority_updates , OFF sql_max_join_size , 18446744073709551615 sql_mode , sql_notes , ON sql_quote_show_create , ON sql_safe_updates , OFF sql_select_limit , 18446744073709551615 sql_slave_skip_counter , sql_warnings , OFF ssl_ca , ssl_capath , ssl_cert , ssl_cipher , ssl_key , storage_engine , MyISAM sync_binlog , 0 sync_frm , ON system_time_zone , IST table_definition_cache , 256 table_lock_wait_timeout , 50 table_open_cache , 500 table_type , MyISAM thread_cache_size , 8 thread_handling , one-thread-per-connection thread_stack , 196608 time_format , %H:%i:%s time_zone , SYSTEM timed_mutexes , OFF timestamp , 1282125419 tmp_table_size , 16777216 tmpdir , /tmp transaction_alloc_block_size , 8192 transaction_prealloc_size , 4096 tx_isolation , REPEATABLE-READ unique_checks , ON updatable_views_with_limit , YES version , 5.1.37-1ubuntu5-log version_comment , (Ubuntu) version_compile_machine , i486 version_compile_os , debian-linux-gnu wait_timeout , 28800 warning_count , 0 A: Your explain plan that you gave: id , select_type , table , type , possible_keys , key , key_len , ref , rows , Extra 1 , SIMPLE , a , ref , systemId idx_time) , systemId , 14 , const , 735310 , Using where 1 , SIMPLE , b , ref , PRIMARY 
, PRIMARY , 66 , gwreports2.a.msgId , 2270405 , 1 , SIMPLE , c , ref , PRIMARY , PRIMARY , 66 , gwreports2.a.msgId , 2238701 , shows that you are hitting: 735310 * 2270405 * 2238701 ≈ 3.7×10^18 row combinations!!!!!! Effectively you're not using your indexes to their fullest potential. How to interpret your 'explain plan': For every row in table 'a' (735310), you hit table 'b' 2270405 times. For every row you hit in table 'b', you hit table 'c' 2238701 times. As you can see, this is a multiplicative problem. Yes, the 8MB of InnoDb Buffer space is small, but getting your explain plan down to xxxx * 1 * 1 will result in incredible speeds, even for 8MB of Buffer Space. Given your Query: SELECT a.msgId,a.senderId,a.destination,a.inTime,a.status as InStatus,b.status as SubStatus,c.deliverTime,substr(c.receipt,82,7) as DlvStatus FROM inserted_history a LEFT JOIN submitted_history b ON b.msgId = a.msgId -- USES 1 column of PK LEFT JOIN delivered_history c ON a.msgId = c.msgId -- USES 1 column of PK WHERE a.inTime BETWEEN '2010-08-10 00:00:00' AND '2010-08-010 23:59:59' -- NO key AND a.systemId='ND_arber' -- Uses non-unique PK Here are the problems I see: A) Your _history tables are partitioned on the columns with 'Timestamp' datatype, YET you are NOT using those columns in your JOIN/WHERE criteria. The engine must hit EVERY partition without that information. B) Access to submitted_history and delivered_history is using only 1 column of a 2-column PK. You are only getting partial benefit of the PK. Can you get more columns to be part of the JOIN? You must get the # of rows found for this table as close to '1' as possible. C) msgID = varchar(64) and this is the 1st column of the PK for each table. Your Keys on each table are ** HUGE **!! - Try to reduce the size of the columns for the PK, or use different columns. Your data patterns of the other keys show that you have LOTS of disk/ram space tied up in non-PK keys. Question 1) What does "Show Indexes FROM " (Link) for each of the tables report?? 
The column 'Cardinality' will show you how effective each of your keys really is. The smaller the cardinality is, the WORSE/less effective that index is. You want cardinality as close to "total rows" as possible for ideal performance. Question 2) Can you re-factor the SQL such that the JOIN'd columns of each table are those with the highest cardinality for that table? Question 3) Are the columns of 'timestamp' datatype really the best columns for the partitioning? If your access patterns always use 'msgId', and msgId is the 1st column of the PK, then msgId may be the better partitioning key. Question 4) Is msgId unique? My guess is yes, and the 2nd column of the PK is not really necessary. Read up on Optimizing SQL (Link) and have the index cardinality reports of your tables at hand. This is the path to figuring out how to optimize a query. You want the 'rows' of the explain plan to be N * 1 * 1. SIDE NOTE: The InnoDb & MyISAM engines do NOT automatically update table cardinality for non-unique columns; the DBA needs to manually run 'Analyze Table' periodically to ensure its accuracy. Good Luck. A: Would it be possible to alter the index of inserted_history, systemId (systemId), to be systemId (systemId, inTime)? Or add an additional index? My logic being that this should help to speed up the selection of the inserted_history (a) rows, which forms the basis of the join. The where clause "where a.inTime between '2010-08-10 00:00:00' and '2010-08-010 23:59:59' and a.systemId='ND_arber'" would then be fully selectable by index. At present, rows are selectable by systemId, but then all those rows need to be scanned for the time. Just as a matter of interest, how many records would there be (on average) for each system id? Also, as msgid is not unique on its own, how many records (on average) in the other tables will have the same msgid? A: Main Idea Are you using InnoDB? It looks like your buffer pool is only 8MB. That could easily be the problem: you're dealing with a lot of data and InnoDB doesn't have much memory. 
Can you bump the innodb_buffer_pool_size up? You'll have to restart MySQL, but I'm betting that would make a HUGE difference, even if you only give it 256 or 512MB. Update: I see your storage engine and table format seem to default to MyISAM, so unless you specified otherwise this wouldn't apply. I wonder if the myisam_sort_buffer_size would help? We don't use MyISAM so I'm not familiar with tuning it. Random Thought I wonder if having the primary key be alphanumeric (especially VARCHAR) has anything to do with it. I remember we had problems with performance on non-numeric primary keys, but that database dated from 4.0 or 4.1, so that may not apply (or ever have been true). Secondary Idea After the memory thing above, my best guess would be to give MySQL more hints. When I have a query that's running slow, I often find giving it more information helps it out. You have messageId/time indexes on each table. Maybe something more like this would work better: select a.msgId,a.senderId,a.destination,a.inTime,a.status as InStatus, b.status as SubStatus,c.deliverTime,substr(c.receipt,82,7) as DlvStatus from inserted_history a left join submitted_history b on b.msgId = a.msgId left join delivered_history c on a.msgId = c.msgId where a.inTime between '2010-08-10 00:00:00' and '2010-08-010 23:59:59' and a.systemId='ND_arber' AND b.submitTime >= a.inTime AND c.deliverTime >= b.submitTime I'm guessing things get inserted into A, then B, then C. If you have better limits (say, when something goes into A, it's always sent out and submitted within one day), adding that information could help. I wonder about this both because I've seen it help my query performance in some situations, but also because you have the data partitioned on the datetime. That may help the optimizer. My other suggestion would be to run your query for a short amount of time, say 10 minutes instead of a full day, and make sure the results are right. Then try 30. 
Increase it and see when it falls off into "come back tomorrow" territory. That may tell you something.
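As a quick sanity check on the first answer's arithmetic, the three 'rows' estimates from the explain plan multiply out to the worst-case number of row combinations the join may examine:

```python
# 'rows' estimates from the explain plan, one per joined table:
rows_a, rows_b, rows_c = 735_310, 2_270_405, 2_238_701

# Worst case: every qualifying row of a is combined with every matching
# row of b, and each of those with every matching row of c.
worst_case = rows_a * rows_b * rows_c
print(f"worst case: {worst_case:.2e} row combinations")

# With fully usable indexes the plan should approach rows_a * 1 * 1:
ideal = rows_a * 1 * 1
print(f"ideal:      {ideal} rows")
```

The gap between those two numbers is the whole story of this question: every index improvement that drives the second and third 'rows' estimates toward 1 removes a multiplicative factor of millions.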
{ "language": "en", "url": "https://stackoverflow.com/questions/3510944", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: What is the preferred way to undo a `svn switch` with svn 1.7? Switching below the root of a project, svn switches that subdirectory only. Prior to svn 1.7, one could simply delete that directory and run an update. But now, with svn 1.7, svn doesn't store its information per directory. How can I undo the effect of the switch? Extra: apart from the command-line client, can it be done via TortoiseSVN? :) (on win32) Note: this question was already asked, but for svn < 1.7, which is quite a different situation.
{ "language": "en", "url": "https://stackoverflow.com/questions/11900427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: A simple model in Winbugs but it says "This chain contains uninitialized variables" I have some simple time-to-event data, no covariates. I was trying to fit a Weibull distribution to it. So I have the following code. Everything looks good until I load my initial values. It says "this chain contains uninitialized variables". But I don't understand. I think the Weibull distribution only has 2 parameters, and I already specified them all. Could you please advise? Thanks! model { for(i in 1 : N) { t[i] ~ dweib(r, mu)I(t.cen[i],) } mu ~ dexp(0.001) r ~ dexp(0.001) } # Data list( t.cen=c(0,3.91,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,21.95,23.98,33.08), t=c(2.34,NA,5.16,5.63,6.17,6.8,7.03,8.05,8.13,8.36,8.83,10.16, 10.55,10.94,11.48,11.95,13.05,13.59,16.02,20.08,NA,NA, NA), N=23 ) # Initial values list( r=3,mu=3 ) A: The other uninitialised variables are the missing (NA) values in the vector of t. Remember that the BUGS language makes no distinction between data and parameters, and that supplying something as data with the value NA is equivalent to not supplying it as data.
{ "language": "en", "url": "https://stackoverflow.com/questions/38665210", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Solving matrix elements by using while iteration in Matlab I have a known 5x6 matrix M, and a 6x1 matrix K, of which 4 elements are unknown and two are known. The P matrix is obtained by multiplying these two matrices, and it has 5 elements, all of which are functions of a variable x. Altogether, there are 5 equations and 5 unknowns (four from K, and one from P, which is x). M[5X6]*K[6X1]=P[5X1] P=[constant*x, constant2*x, constant3*x, constant4*x, constant5*x] How can I set up a while iteration in Matlab to solve for these 5 unknowns? Is this possible in Matlab with a while or for loop?
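Assuming the relationship really is linear in the unknowns (the four unknown entries of K plus the scalar x), no while loop is needed: moving the x terms to the left-hand side turns this into an ordinary 5x5 linear system, which MATLAB would solve with a single backslash (A\b). A sketch of the same rearrangement in plain Python, with made-up numbers for M, the known entries of K, and the constants:

```python
# M @ K = P with P[i] = c[i] * x rearranges to the 5x5 system
#   M[:, :4] @ K_unknown - c * x = -M[:, 4:] @ K_known
# so one linear solve recovers (K0, K1, K2, K3, x).

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small square system."""
    n = len(A)
    A = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c2 in range(col, n + 1):
                A[r][c2] -= f * A[col][c2]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c2] * x[c2] for c2 in range(r + 1, n))) / A[r][r]
    return x

M = [[2, 1, 0, 3, 1, 2],      # made-up known 5x6 matrix
     [1, 4, 2, 0, 3, 1],
     [0, 2, 5, 1, 2, 0],
     [3, 0, 1, 2, 0, 4],
     [1, 1, 1, 1, 1, 1]]
k_known = [2.0, 3.0]           # the two known entries K[4], K[5] (made up)
c = [1.0, 2.0, 3.0, 4.0, 5.0]  # P[i] = c[i] * x (made up constants)

# Build the 5x5 system; unknowns are (K0, K1, K2, K3, x):
A = [[M[i][0], M[i][1], M[i][2], M[i][3], -c[i]] for i in range(5)]
b = [-(M[i][4] * k_known[0] + M[i][5] * k_known[1]) for i in range(5)]

sol = solve(A, b)
K = sol[:4] + k_known
x = sol[4]

# Verify M @ K == c * x componentwise:
residual = max(abs(sum(M[i][j] * K[j] for j in range(6)) - c[i] * x)
               for i in range(5))
print("K =", K, " x =", x)
```

In MATLAB the same construction is `A = [M(:,1:4), -c(:)]; b = -M(:,5:6)*k_known(:); sol = A\b;`. An iterative while loop would only be needed if the system were nonlinear in x.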
{ "language": "en", "url": "https://stackoverflow.com/questions/44032023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Truncate git repository while keeping regular snapshots I want to keep track of a 500kb JSON text file whose content changes every minute. I would like to use git, so I can use git pull on another server to download the latest version of that file without the problem that the file could change during the download, and at the same time I get versioning of that file for the last months/years. I thought of creating a git repository where I commit every file change, but I noticed that after some days this repository grows to many GB (even with git gc, because so much changes in the file). I could truncate the repository regularly to a particular depth, but that is not what I need. I need the information of how the file looked a week ago, a month ago, a year ago, although I don't need as many commits the further in the past they are. Is this even possible with git and some bash magic? I am fine with deleting and recreating the repository and using --amend in that git. Or would you suggest another solution? A: There is at least one way to do this; I'll outline an approach below. First a few things to think about: Depending on the nature of the changes that occur, you might want to see if frequent packing of the database might help; git is pretty good at avoiding wasted space (for text files, at least). Of course with the commit load you describe - 1440 commits per day, give or take? - the history will tend to grow. Still, unless the changes are dramatic on every commit, it seems like it could be made better than "many GB in a few days"; and maybe you'd reach a level where a compromise archiving strategy would become practical. It's always worth thinking, too, about whether "all the data I need to keep" is bigger than "all the data I need regular access to"; because then you can consider whether some of the data should be preserved in archive repos, possibly on backup media of some form, rather than as part of the live repo. 
And, as you allude to in your question, you might want to consider whether git is the best tool for the job. Your described usage doesn't use most of git's capabilities; nor does it exercise the features that really make git excel. And conversely, other tools might make it easier to progressively thin out the history. But with all of that said, you still might reach the decision to start with "per minute" data, then eventually drop it to "per hour", and maybe still later reduce to "per week". (I'd discourage defining too many levels of granularity; the most "bang for your buck" will come with discarding sub-hourly snapshots. Hour->day would be borderline, day->week would probably be wasteful. If you get down to weekly, that's surely sparse enough...) So when some data "ages out", what to do? I suggest that you could use some combination of rebasing (and/or related operations), depth limits, and replacements (depending on your needs). Depending on how you combine these, you could keep the illusion of a seamless history without changing the SHA ID of any "current" commit. (With more complex techniques, you could even arrange to never change a SHA ID; but this is noticeably harder and will reduce the space savings somewhat.) So in the following diagrams, there is a root commit identified as 'O'. Subsequent commits (the minutely changes) are identified by a letter and a number. The letter indicates the day the commit was created, the numbers sequentially mark off minutes. You create your initial commit and place branches on it for each granularity of history you'll eventually use. (As changes accumulate each minute, they'll just go on master.) O <--(master)(hourly)(weekly) After a couple days you have O <-(hourly)(weekly) \ A1 - A2 - A3 - ... - A1439 - A1440 - B1 - B2 - ... - B1439 - B1440 - C1 <--(master) And maybe you've decided that at midnight, any sub-hour snapshot that's 24 hours old can be discarded. 
So as day C starts, the A snapshots are older than 24 hours and should be reduced to hourly snapshots. First we must create the hourly snapshots git checkout hourly git merge --squash A60 git commit -m 'Day A 1-60' git merge --squash A120 git commit -m 'Day A 61-120' ... And this gives you O <-(weekly) |\ | A60' - A120' - ... - A1380' - A1440' <-(hourly) \ A1 - A2 - A3 - ... - A1439 - A1440 - B1 - B2 - ... - B1439 - B1440 - C1 <--(master) Here A1440' is a rewrite of A1440, but with a different parentage (such that its direct parent is "an hour ago" instead of "a minute ago"). Next, to make the history seamless you would have B1 identify A1440' as its parent. If you don't care about changing the SHA ID of every commit (including current ones), a rebase will work git rebase --onto A1440' A1440 master Or in this case (since the TREEs at A1440 and A1440' are the same) it would be equivalent to re-parent B1 - see the git filter-branch docs for details of that approach. Either way you would end up with O <-(weekly) |\ | A60' - A120' - ... - A1380' - A1440' <-(hourly) | \ | B1' - B2' - ... - B1439' - B1440' - C1' <-(master) \ A1 - A2 - A3 - ... - A1439 - A1440 - B1 - B2 - ... - B1439 - B1440 - C1 Note that even though the granularity of changes in the B and C commits is unchanged, these are still "rewritten" commits (hence the ' notation); and in fact the original commits have not yet been physically deleted. They are unreachable, though, so they'll eventually be cleaned up by gc; if it's an issue, you can expedite this by discarding reflogs that are more than 24 hours old and then manually running gc. Alternatively, if you want to preserve SHA ID's for the B and C commits, you could use git replace. git replace A1440 A1440' This has a number of drawbacks, though. There are a few known quirks with replacements. 
Also in this scenario the original commits are not unreachable (even though they aren't shown by default); you would have to shallow the master branch to get rid of them. The simplest way to shallow a branch is to clone the repo, but then you have to jump through extra hoops to propagate the replacement refs. So this is an option if you never want the master ref to "realize" it's moving in an abnormal way, but not as simple.
{ "language": "en", "url": "https://stackoverflow.com/questions/46157791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Calculate How Many Overlapping Dates There Are Within Multiple Date Ranges I need to be able to calculate the number of days when someone was taking at least 1 drug from Class A and 1 drug from Class B. One of the biggest issues I am encountering is that prescriptions for the same Drug Class may overlap each other and when they are joined to Class B prescriptions I end up double counting days. For example, between 01/01/21 and 06/30/21, what's the total number of days for each individual where they took a drug from Class A and from Class B I have fake data below as an example. TABLE 1 | ID | START | END | CLASS | |:---- |:------| :-----| :-----| |1234 |12-Feb-21 |19-Feb-21| A |1234 |20-Feb-21 |22-Feb-21| A |1243 |13-Mar-21 |23-Mar-21| A |1234 |21-Apr-21 |1-May-21 |A |1234 |20-Jun-21 |25-Jun-21 |A |1234 |11-Jul-21 |16-Jul-21 |A |4321 |25-Jan-21 |24-Feb-21 |A |4321 |31-Jan-21 |2-Mar-21 |A |4321 |28-Feb-21 |30-Mar-21 |A |4321 |25-Mar-21 |24-Apr-21 |A |4321 |25-Mar-21 |24-Apr-21 |A |4321 |25-Apr-21 |25-May-21 |A |4321 |29-Apr-21 |29-May-21 |A |4321 |23-May-21 |22-Jun-21 |A |4321 |26-May-21 |25-Jun-21 |A |4321 |23-Jun-21 |23-Jul-21 |A |4321 |23-Jun-21 |23-Jul-21 |A TABLE 2 | ID | START | END | CLASS | |:---- |:------| :-----| :-----| |1234 |18-Jan-21 |17-Feb-21 |B |1234 |17-Mar-21 |16-Apr-21 |B |1234 |14-Apr-21 |14-May-21 |B |1234 |12-May-21 |11-Jun-21 |B |1234 |9-Jun-21 |9-Jul-21 |B |1234 |11-Jul-21 |10-Aug-21 |B |4321 |25-Jan-21 |24-Feb-21 |B |4321 |11-Feb-21 |13-Mar-21 |B |4321 |7-Mar-21 |6-Apr-21 |B |4321 |4-Apr-21 |4-May-21 |B |4321 |30-Apr-21 |30-May-21 |B |4321 |24-May-21 |23-Jun-21 |B |4321 |20-Jun-21 |20-Jul-21 |B PS - I am working in Oracle SQL Developer A: A relatively simple approach is a brute-force approach. This splits the periods into days for each class. 
Then joins and aggregates to get the total:

with cte1(id, d, endd, class) as (
    select id, startd, endd, class from table1
    union all
    select id, d + interval '1' day, endd, class
    from cte1
    where d < endd
), cte2(id, d, endd, class) as (
    select id, startd, endd, class from table2
    union all
    select id, d + interval '1' day, endd, class
    from cte2
    where d < endd
)
select cte1.id, count(*)
from cte1 join cte2
    on cte1.id = cte2.id and cte1.d = cte2.d
group by cte1.id;

Here is a db<>fiddle.
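For illustration outside SQL, the same brute-force idea (expand each prescription into its individual days, then intersect the two classes per person) can be sketched in plain Python; the function names here are hypothetical:

```python
from datetime import date, timedelta

def expand(periods):
    """Map each person id to the set of individual days covered (both ends inclusive)."""
    days = {}
    for pid, start, end in periods:
        d = start
        while d <= end:
            days.setdefault(pid, set()).add(d)
            d += timedelta(days=1)
    return days

def overlap_days(class_a, class_b):
    """Count days per person covered by at least one Class A and one Class B prescription."""
    a, b = expand(class_a), expand(class_b)
    return {pid: len(a[pid] & b[pid]) for pid in a.keys() & b.keys()}

class_a = [(1234, date(2021, 2, 12), date(2021, 2, 19))]
class_b = [(1234, date(2021, 1, 18), date(2021, 2, 17))]
print(overlap_days(class_a, class_b))  # {1234: 6}, i.e. 12-17 Feb
```

Because each day goes into a set, overlapping prescriptions within the same class are not double-counted, which is the pitfall described in the question.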
{ "language": "en", "url": "https://stackoverflow.com/questions/68370598", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Creating a completely custom pagination component for tabulator Is there any way we can create a custom pagination component for Tabulator? I don't want to:

* Add custom controls to the footer. (Already checked the Footer docs)
* Put the pagination element in a custom container. (Already checked the Pagination docs)

Use case: Suppose I want to use a third-party paginator for its theming and interactivity options. Is there a way by which I can just apply the pagination classes to it and it will work as expected?

A: Tabulator is a modular system; you could easily either modify the existing pagination module or create your own from scratch. There is a guide to Building Your Own Module along with several Example Modules to show how things work in practice. If you want a good starting point for the pagination module then why not start with the source code for the existing pagination module, which you can find Here
{ "language": "en", "url": "https://stackoverflow.com/questions/68177939", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Reshape a large matrix with missing values and multiple vars of interest I need to reorganize a large dataset into a specific format for further analysis. Right now the data are in long format, with multiple records through time for each point. I need to reshape the data so that each point has a single record, but it will add many new columns of the time-specific data. I’ve looked at previous similar posts but I need to ultimately convert several of the current variables into columns, and I can’t find an example of such. Is there a way to accomplish this in a single reshape, or will I have to do several and then concatenate the new columns back together? Another wrinkle before I post the example is that not all points were sampled at each time-step, so I need those values to show up as NA. For example, (see data below) SitePoint A1 was not sampled at all in 2012, SitePoint A10 was not sampled during the first round in 2012, but K83 was sampled all nine times. mydatain <- structure(list(SitePoint = structure(c(1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 6L, 6L), .Label = c("A1", "A10", "K145", "K83", "T15", "T213"), class = "factor"), Year_Rotation = structure(c(1L, 2L, 3L, 4L, 5L, 6L, 1L, 2L, 3L, 4L, 5L, 6L, 8L, 9L, 1L, 2L, 4L, 5L, 6L, 7L, 8L, 9L, 1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 1L, 7L), .Label = c("2010_1", "2010_2", "2010_3", "2011_1", "2011_2", "2011_3", "2012_1", "2012_2", "2012_3" ), class = "factor"), MR_Fire = structure(c(5L, 6L, 6L, 2L, 9L, 9L, 5L, 6L, 6L, 2L, 9L, 9L, 7L, 8L, 16L, 17L, 21L, 22L, 23L, 25L, 3L, 4L, 10L, 11L, 12L, 13L, 14L, 15L, 18L, 19L, 20L, 1L, 2L, 2L, 5L, 6L, 6L, 11L, 11L, 12L, 7L, 24L), .Label = c("0", "1", "10", "11", "12", "13", "14", "15", "2", "23", "24", "25", "35", "36", "37", "39", "40", "47", "48", "49", "51", "52", "53", "8", "9"), class = "factor"), fire_seas = 
structure(c(2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 3L, 3L, 3L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 3L), .Label = c("dry", "fire", "wet" ), class = "factor"), OptTSF = c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 0L, 0L, 0L, 0L, 0L, 1L, 1L, 1L, 1L, 1L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 1L, 1L, 1L, 1L, 1L, 0L, 0L, 0L, 1L, 1L)), .Names = c("SitePoint", "Year_Rotation", "MR_Fire", "fire_seas", "OptTSF"), row.names = c(31L, 32L, 33L, 34L, 35L, 36L, 67L, 68L, 69L, 70L, 71L, 72L, 73L, 74L, 10543L, 10544L, 10545L, 10546L, 10547L, 10548L, 10549L, 10550L, 14988L, 14989L, 14990L, 14991L, 14992L, 14993L, 14994L, 14995L, 14996L, 17370L, 17371L, 17372L, 17373L, 17374L, 17375L, 17376L, 17377L, 17378L, 19353L, 19354L), class = "data.frame") Ultimately I need something like this: myfinal <- structure(list(SitePoint = structure(1:6, .Label = c("A1", "A10", "K145", "K83", "T15", "T213"), class = "factor"), MR_Fire_2010_1 = c(12L, 12L, 39L, 23L, 0L, 14L), MR_Fire_2010_2 = c(13L, 13L, 40L, 24L, 1L, NA), MR_Fire_2010_3 = c(13L, 13L, NA, 25L, 1L, NA), MR_Fire_2011_1 = c(1L, 1L, 51L, 35L, 12L, NA), MR_Fire_2011_2 = c(2L, 2L, 52L, 36L, 13L, NA), MR_Fire_2011_3 = c(2L, 2L, 53L, 37L, 13L, NA), MR_Fire_2012_1 = c(NA, NA, 9L, 47L, 24L, 8L), MR_Fire_2012_2 = c(NA, 14L, 10L, 48L, 24L, NA), MR_Fire_2012_3 = c(NA, 15L, 11L, 49L, 25L, NA), season_2010_1 = structure(c(2L, 2L, 1L, 2L, 2L, 1L), .Label = c("dry", "fire"), class = "factor"), season_2010_2 = structure(c(2L, 2L, 1L, 2L, 2L, NA), .Label = c("dry", "fire"), class = "factor"), season_2010_3 = structure(c(1L, 1L, NA, 1L, 1L, NA), .Label = "fire", class = "factor"), season_2011_1 = structure(c(2L, 2L, 1L, 2L, 2L, NA), .Label = c("dry", "fire"), class = "factor"), season_2011_2 = structure(c(2L, 2L, 1L, 2L, 2L, NA), .Label = c("dry", "fire"), class = "factor"), season_2011_3 = structure(c(2L, 2L, 1L, 2L, 2L, NA), .Label = c("dry", "fire"), class = "factor"), 
season_2012_1 = structure(c(NA, NA, 2L, 1L, 1L, 2L), .Label = c("fire", "wet"), class = "factor"), season_2012_2 = structure(c(NA, 1L, 2L, 1L, 1L, NA), .Label = c("fire", "wet"), class = "factor"), season_2012_3 = structure(c(NA, 1L, 2L, 1L, 1L, NA), .Label = c("fire", "wet"), class = "factor"), OptTSF_2010_1 = c(1L, 1L, 0L, 1L, 1L, 1L), OptTSF_2010_2 = c(1L, 1L, 0L, 1L, 1L, NA), OptTSF_2010_3 = c(1L, 1L, NA, 1L, 1L, NA), OptTSF_2011_1 = c(1L, 1L, 0L, 0L, 1L, NA), OptTSF_2011_2 = c(1L, 1L, 0L, 0L, 1L, NA), OptTSF_2011_3 = c(1L, 1L, 0L, 0L, 1L, NA), OptTSF_2012_1 = c(NA, NA, 1L, 0L, 0L, 1L), OptTSF_2012_2 = c(NA, 1L, 1L, 0L, 0L, NA), OptTSF_2012_3 = c(NA, 1L, 1L, 0L, 0L, NA)), .Names = c("SitePoint", "MR_Fire_2010_1", "MR_Fire_2010_2", "MR_Fire_2010_3", "MR_Fire_2011_1", "MR_Fire_2011_2", "MR_Fire_2011_3", "MR_Fire_2012_1", "MR_Fire_2012_2", "MR_Fire_2012_3", "season_2010_1", "season_2010_2", "season_2010_3", "season_2011_1", "season_2011_2", "season_2011_3", "season_2012_1", "season_2012_2", "season_2012_3", "OptTSF_2010_1", "OptTSF_2010_2", "OptTSF_2010_3", "OptTSF_2011_1", "OptTSF_2011_2", "OptTSF_2011_3", "OptTSF_2012_1", "OptTSF_2012_2", "OptTSF_2012_3"), class = "data.frame", row.names = c(NA, -6L )) The actual dataset is about 23656 records X 15 variables, so doing it by hand is likely to cause major headaches and potential for mistakes. Any help or suggestions are appreciated. If this has been answered elsewhere, apologies. I couldn’t find anything directly applicable; everything seemed to related to three columns and only one of those being extracted as new variables. Thanks. SP A: dcast from the devel version of data.table i.e., v1.9.5 can cast multiple columns simultaneously. It can be installed from here. 
library(data.table) ## v1.9.5+
dcast(setDT(mydatain), SitePoint ~ Year_Rotation, value.var = c('MR_Fire', 'fire_seas', 'OptTSF'))

A: You can use reshape to change the structure of your dataframe from long to wide using the following code:

reshape(mydatain, timevar = "Year_Rotation", idvar = "SitePoint", direction = "wide")
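For readers outside R, the underlying long-to-wide reshape, with NA for site/time combinations that were never sampled, can be sketched in plain Python; the function name is hypothetical and None stands in for NA:

```python
def long_to_wide(rows, id_col, time_col, value_cols):
    """Pivot long records into one wide dict per id; unsampled cells stay None."""
    times = sorted({r[time_col] for r in rows})
    wide = {}
    for r in rows:
        # initialise every value_time column with None so gaps become NA
        row = wide.setdefault(
            r[id_col],
            {f"{v}_{t}": None for v in value_cols for t in times},
        )
        for v in value_cols:
            row[f"{v}_{r[time_col]}"] = r[v]
    return wide

long_rows = [
    {"SitePoint": "A1", "Year_Rotation": "2010_1", "MR_Fire": 12, "OptTSF": 1},
    {"SitePoint": "A1", "Year_Rotation": "2010_2", "MR_Fire": 13, "OptTSF": 1},
    {"SitePoint": "T213", "Year_Rotation": "2010_1", "MR_Fire": 14, "OptTSF": 1},
]
wide = long_to_wide(long_rows, "SitePoint", "Year_Rotation", ["MR_Fire", "OptTSF"])
print(wide["T213"]["MR_Fire_2010_2"])  # None: T213 was not sampled in 2010_2
```

This mirrors what dcast/reshape do: one row per SitePoint, one column per value/time pair, with missing combinations filled as NA.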
{ "language": "en", "url": "https://stackoverflow.com/questions/29376509", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Simple way to get Twitter users home feed Does anyone know a good way, using OAuth, to retrieve a user's home feed (using PHP or JavaScript)? I've been searching the web (Google, YouTube and the Twitter website), but have not yet found a simple tutorial like developers.facebook.com has. If you know of a good tutorial, or you have written some code that works, I'd be glad to see it. I would prefer the entire process, from authorizing a user to displaying their feed, but anything is better than what I have now. Hoping for answers!

A: OAuth is a method for authentication; you should use the REST API provided by Twitter. Please check this: https://dev.twitter.com/docs/api/1.1 (statuses/user_timeline)

Edit: https://github.com/abraham/twitteroauth

Please check the "Extended flow using example code" section; there's everything you want to know. Just one note: if you have a long-lived access token (from your app dashboard, see the oauth tab), you just pass the token and token secret as the third and fourth parameters when you create a new instance of the TwitterOAuth class, like this:

$connection = new TwitterOAuth(CONSUMER_KEY, CONSUMER_SECRET, OAUTH_TOKEN, OAUTH_SECRET);
{ "language": "en", "url": "https://stackoverflow.com/questions/16132609", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Does MrBean module still automatically support dynamically implementing simple interfaces when deserializing? When converting from a Jersey 1 client to a Jersey 2 client (with Jackson 2.8.6), I now get a mapping exception when trying to read my interface:

WebTarget resource = helper.resource(path);
if (schedule != null)
    resource = resource.queryParam("schedule", schedule);
return resource.request().get(new GenericType<MyInterface>(){});

Caused by: javax.ws.rs.client.ResponseProcessingException: com.fasterxml.jackson.databind.JsonMappingException: Can not construct instance of com.mycompany.MyInterface: abstract types either need to be mapped to concrete types, have custom deserializer, or contain additional type information

The old client code looked like this:

WebResource resource = helper.resource(path);
if (schedule != null)
    resource = resource.queryParam("schedule", schedule);
try {
    return resource.get(new GenericType<MyInterface>(){});
} catch (UniformInterfaceException e) {
    throw new RuntimeException(e.getResponse().getEntity(String.class));
}

In both cases, all I did with the client object mapper was:

objectMapper.registerModule(new MrBeanModule());

Is dynamic instantiation of interfaces gone now? If not, what additional configuration steps need to be performed to get it working? (Our dependencies are a bit of a mess, so I think I was using Jersey 1.6 with Jackson 1.9.8.)

EDIT: as another example of the previous behavior we relied upon, see http://www.cowtowncoder.com/blog/archives/2011/08/entry_459.html where there is a simple interface with no annotations

A: I think I found it after a lot of blood, sweat, and tears. I found that the ObjectMapper I had configured was actually not the one being used.
Jersey1:

clientConfig.getSingletons().add(new JacksonJsonProvider(objectMapper));
Client client = new Client(urlConnectionClientHandler, clientConfig);

JAXRS2 (what did not work):

clientConfig.register(new JacksonJsonProvider(objectMapper));
Client client = ClientBuilder.newClient(cc);

I found that the component creating the ObjectMapper that was ACTUALLY being used was a JacksonJaxbJsonProvider, and that registering it with the ClientConfig did not work, but registering it on the client did.

JAXRS2 (what did work):

Client client = ClientBuilder.newClient(cc);
client.register(new JacksonJaxbJsonProvider(objectMapper, JacksonJaxbJsonProvider.DEFAULT_ANNOTATIONS));
{ "language": "en", "url": "https://stackoverflow.com/questions/48575742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: scilab sending java main-class error and doesn't wanna work Scilab was working just fine 2 weeks ago, but after I updated Ubuntu, nothing happens if I try to open it, and if I try to open it from a terminal I get this error:

WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.scilab.modules.jvm.LibraryPath (file:/usr/share/scilab/modules/jvm/jar/org.scilab.modules.jvm.jar) to field java.lang.ClassLoader.sys_paths
WARNING: Please consider reporting this to the maintainers of org.scilab.modules.jvm.LibraryPath
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Could not access to the Main Scilab Class:
Exception in thread "main" java.lang.ExceptionInInitializerError
at org.scilab.modules.localization.Messages.gettext(Unknown Source)
at org.scilab.modules.commons.xml.XConfiguration.<clinit>(Unknown Source)
at org.scilab.modules.core.Scilab.<clinit>(Unknown Source)
Caused by: java.lang.NullPointerException
at java.base/java.lang.ClassLoader.loadLibrary(ClassLoader.java:2646)
at java.base/java.lang.Runtime.loadLibrary0(Runtime.java:830)
at java.base/java.lang.System.loadLibrary(System.java:1870)
at org.scilab.modules.localization.MessagesJNI.<clinit>(Unknown Source)
... 3 more
Scilab cannot create Scilab Java Main-Class (we have not been able to find the main Scilab class. Check if the Scilab and thirdparty packages are available).

I tried reinstalling it and reinstalling Java, but it still doesn't work.

A: I have encountered the same problem. After googling I found out that this bug was fixed in Scilab 6.0.2. You can download it here: https://www.scilab.org/download/6.0.2 The current version of Scilab that you get from sudo apt-get install scilab is 6.0.1 (for me scilab-cli was working, but the GUI was not). Currently Scilab 6.0.2 works in GUI mode and CLI under Ubuntu 18.04.4 LTS. XCOS works too.
It is a little bit laggy after start, but it might be my setup.
{ "language": "en", "url": "https://stackoverflow.com/questions/61426014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to delete old versions of tables in Delta Lake As I understand from the documentation, Delta Lake allows you to roll back, or "time travel", as they say, to a specific version of a table. But how can I make sure that deleting data will actually delete it without creating a new version?

A: This can be implemented using Delta Lake's vacuum command, provided the retention period is set. Please refer: https://docs.databricks.com/delta/delta-utility.html#delta-vacuum
{ "language": "en", "url": "https://stackoverflow.com/questions/58184611", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to get PowerShell to keep a command window open? When I run a program on PowerShell it opens a new window, and before I can see the output, the window closes. How do I make it so PowerShell keeps this window open?

A: The OP seemed satisfied with the answer, but it doesn't keep the new window open after executing the program, which is what he seemed to be asking (and the answer I was looking for). So, after some more research, I came up with:

Start-Process cmd "/c `"your.exe & pause `""

A: I was solving a similar problem a few weeks ago. If you don't want to use & (& '.\program.exe') then you can use Start-Process and read the output explicitly. Just put this in a separate PS1 file, for example (or into a macro):

param (
    $name,
    $params
)

$process = New-Object System.Diagnostics.Process
$proInfo = New-Object System.Diagnostics.ProcessStartInfo
$proInfo.CreateNoWindow = $true
$proInfo.RedirectStandardOutput = $true
$proInfo.RedirectStandardError = $true
$proInfo.UseShellExecute = $false
$proInfo.FileName = $name
$proInfo.Arguments = $params
$process.StartInfo = $proInfo

#Register an Action for Error Output Data Received Event
Register-ObjectEvent -InputObject $process -EventName ErrorDataReceived -Action {
    foreach ($s in $EventArgs.data) { Write-Host $s -ForegroundColor Red }
} | Out-Null

#Register an Action for Standard Output Data Received Event
Register-ObjectEvent -InputObject $process -EventName OutputDataReceived -Action {
    foreach ($s in $EventArgs.data) { Write-Host $s -ForegroundColor Blue }
} | Out-Null

$process.Start() | Out-Null
$process.BeginOutputReadLine()
$process.BeginErrorReadLine()
$process.WaitForExit()

And then call it like:

.\startprocess.ps1 "c:\program.exe" "params"

You can also easily redirect output or implement some kind of timeout in case your application can freeze...
A: If the program is a batch file (.cmd or .bat extension) being launched with cmd /c foo.cmd command, simply change it to cmd /k foo.cmd and the program executes, but the prompt stays open. If the program is not a batch file, wrap it in a batch file and add the pause command at the end of it. To wrap the program in a batch file, simply place the command in a text file and give it the .cmd extension. Then execute that instead of the exe. A: Try doing: start-process your.exe -NoNewWindow Add a -Wait too if needed. A: With Startprocess and in the $arguments scriptblock, you can put a Read-Host $arguments = { "Get-process" "Hello" Read-Host "Wait for a key to be pressed" } Start-Process powershell -Verb runAs -ArgumentList $arguments A: pwsh -noe -c "echo 1"
{ "language": "en", "url": "https://stackoverflow.com/questions/9244280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: how to play you tube embeded code in android application? Hi all, this is my YouTube embed code, but I have no idea how to play this in an Android application. Can we play this in a WebView or the device player?

"object width="441" height="353" param name="movie" value="http://www.youtube.com/v/u1zgFlCw8Aw?fs=1" param param name="allowFullScreen" value="true" param param name="allowScriptAccess" value="always" param embed src="http://www.youtube.com/v/u1zgFlCw8Aw?fs=1" type="application/x-shockwave-flash" allowfullscreen="true" allowscriptaccess="always" width="441" height="353" embed> object"

A: I use this to launch the default device player:

Intent intent = new Intent(Intent.ACTION_VIEW, Uri.parse(url));
// use this if you want to launch from a non-activity class
intent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
context.startActivity(intent);

Hope this helps.
{ "language": "en", "url": "https://stackoverflow.com/questions/5813833", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to run jQuery before loading page? I have many images and some scripts on my site, but the scripts can only run after the whole page has loaded. How can I speed this up?

A: $(document).ready(function() { //code here }); will run a script when the document structure is ready, but before all of the images have loaded. If you want to run a script before the document structure is ready, just put your code anywhere.

A: Sometimes if you only use $(document).ready(), there will be a flash of content. To avoid the flash, you can hide the body with CSS, then show it after the page is loaded.

* Add the line below to your CSS: html { visibility:hidden; }
* And these to your JS:

$(document).ready(function() {
    //your own JS code here
    document.getElementsByTagName("html")[0].style.visibility = "visible";
});

Then the page will go from blank to showing all content when the page is loaded: no flash of content, no watching images load, etc. Inspired by this, thanks to the author.

A: Load the images after the page loads? It may depend on what kind of jQuery code you're trying to run. What specifically are you trying to do?

A: Use $(document).ready(function() { //code here }); and put the jQuery script tags at the end of the document. .ready fires when the DOM is ready (which does not mean that the images are already loaded).

A: If your JavaScript does any work on your DOM elements, then you have to wait until the page loads. If you need to run the scripts before the images are loaded, you can always lazy load the images. That way you don't have to wait for the images to load. Lazy loading is basically loading the images through JavaScript, so you can control when they load.
{ "language": "en", "url": "https://stackoverflow.com/questions/7821813", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Kubernetes (AKS) : nginx ingress error 308 Permanent Redirect error. Private nginx Ingress controller Over all description of what I am doing: I am using a private nginx ingress controller in AKS (Azure Kubernetes Service) and setting up Secrets Store CSI Driver to enable NGINX Ingress Controller with TLS using this doc: Doc1 Following are the steps I am doing as per the doc: * *Deploying Secrets provider: apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: azure-tls spec: provider: azure secretObjects: # secretObjects defines the desired state of synced K8s secret objects - secretName: ingress-tls-csi type: kubernetes.io/tls data: - objectName: <CertName> key: tls.key - objectName: <CertName> key: tls.crt parameters: usePodIdentity: "false" useVMManagedIdentity: "true" userAssignedIdentityID: <GUIDForManagedIdentityProviderHavingAccessToKeyvault> keyvaultName: <KeyvaultName> # the name of the AKV instance objects: | array: - | objectName: <CertName> objectType: secret tenantId: <GUIDForKeyVaultTenant> # the tenant ID of the AKV instance *Deploying a private nginx ingress controller using this documentation: Doc2 helm upgrade nginx-ingress ingress-nginx/ingress-nginx ` --install ` --version 4.1.3 ` --namespace ingress-nginx ` --set controller.replicaCount=2 ` --set controller.nodeSelector."kubernetes\.io/os"=linux ` --set controller.image.registry="ashwin.azurecr.io" ` --set controller.image.image="ingress-nginx/controller" ` --set controller.image.tag="v1.2.1" ` --set controller.image.digest="" ` --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux ` --set controller.admissionWebhooks.patch.image.registry="ashwin.azurecr.io" ` --set controller.admissionWebhooks.patch.image.image="ingress-nginx/kube-webhook-certgen" ` --set controller.admissionWebhooks.patch.image.tag="v1.1.1" ` --set controller.admissionWebhooks.patch.image.digest="" ` --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux ` --set 
defaultBackend.image.registry="ashwin.azurecr.io" ` --set defaultBackend.image.image="defaultbackend-amd64" ` --set defaultBackend.image.tag="1.5" ` --set defaultBackend.image.digest="" ` -f "..\..\manifests\internal-controller-tls.yaml" --debug The ..\..\manifests\internal-controller-tls.yaml file has this content: controller: service: loadBalancerIP: 10.0.0.11 annotations: service.beta.kubernetes.io/azure-load-balancer-internal: "true" service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: "/healthz" extraVolumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: "azure-tls" extraVolumeMounts: - name: secrets-store-inline mountPath: "/mnt/secrets-store" readOnly: true *Deployed the ingress having this configuration( Picked from here Doc1): apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: healthcheck-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 spec: ingressClassName: nginx tls: - hosts: - ingress.cluster.apaca.demo.com secretName: ingress-tls-csi rules: - host: ingress.cluster.apaca.demo.com http: paths: - path: /cluster-ingress-healthz(/|$)(.*) pathType: Prefix backend: service: name: service1 port: number: 80 - path: /(.*) pathType: Prefix backend: service: name: service2 port: number: 80 After following the 3 steps I am seeing 308 Permanent Redirect when i do a curl command to the http endpoint of the ingress: azadmin@acs-apaca-aksVm:~$ curl -v http://ingress.cluster.apaca.demo.com * Rebuilt URL to: http://ingress.cluster.apaca.demo.com/ * Trying 10.0.0.11... 
* TCP_NODELAY set
* Connected to ingress.cluster.apaca.demo.com (10.0.0.11) port 80 (#0)
> GET / HTTP/1.1
> Host: ingress.cluster.apaca.demo.com
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 308 Permanent Redirect
< Date: Thu, 14 Jul 2022 04:28:53 GMT
< Content-Type: text/html
< Content-Length: 164
< Connection: keep-alive
< Location: https://ingress.cluster.apaca.demo.com
<
<html>
<head><title>308 Permanent Redirect</title></head>
<body>
<center><h1>308 Permanent Redirect</h1></center>
<hr><center>nginx</center>
</body>
</html>
* Connection #0 to host ingress.cluster.apaca.demo.com left intact
azadmin@acs-apaca-aksVm:~$

But when I put this additional annotation on the Kubernetes ingress, nginx.ingress.kubernetes.io/ssl-redirect: "false", the HTTP endpoint shows the correct content. This is what I see when I do a curl to the HTTP ingress endpoint:

azadmin@acs-apaca-aksVm:~$ curl -v http://ingress.cluster.apaca.demo.com
* Rebuilt URL to: http://ingress.cluster.apaca.demo.com/
*   Trying 10.0.0.11...
* TCP_NODELAY set
* Connected to ingress.cluster.apaca.demo.com (10.0.0.11) port 80 (#0)
> GET / HTTP/1.1
> Host: ingress.cluster.apaca.demo.com
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Thu, 14 Jul 2022 04:33:34 GMT
< Content-Type: text/html; charset=utf-8
< Content-Length: 617
< Connection: keep-alive
<
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<link rel="stylesheet" type="text/css" href="/static/default.css">
<title>WELCOME TO AZURE KUBERNETES SERVICE</title>
<script language="JavaScript"> function send(form){ } </script>
</head>
<body>
<div id="container">
<form id="form" name="form" action="/"" method="post"><center>
<div id="logo">WELCOME TO AZURE KUBERNETES SERVICE</div>
<div id="space"></div>
<img src="/static/acs.png" als="acs logo">
<div id="form">
</div>
</div>
</body>
* Connection #0 to host ingress.cluster.apaca.demo.com left intact
</html>azadmin@acs-apaca-aksVm:~$

But with the additional annotation of nginx.ingress.kubernetes.io/ssl-redirect: "false", the requests will only be HTTP. When I do a curl to the HTTPS endpoint for the ingress, I see this in both cases (case 1: annotation not added to the ingress; case 2: annotation added to the ingress):

azadmin@acs-apaca-aksVm:~$ curl -v https://ingress.cluster.apaca.demo.com
* Rebuilt URL to: https://ingress.cluster.apaca.demo.com/
*   Trying 10.0.0.11...
* TCP_NODELAY set
* Connected to ingress.cluster.apaca.demo.com (10.0.0.11) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS Unknown, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Unknown (8):
* TLSv1.3 (IN), TLS Unknown, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS alert, Server hello (2):
* SSL certificate problem: unable to get local issuer certificate
* stopped the pause stream!
* Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above.
azadmin@acs-apaca-aksVm:~$

Please help me understand what I should change here so that the 308 redirect error goes away and I can get a successful HTTPS connection to the ingress endpoint.
{ "language": "en", "url": "https://stackoverflow.com/questions/72975387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Adding feature with duplicate ID to Vector source Right now, if I add two features with the same id to a Vector source, it seems like the second one is discarded. Is there any way to force OpenLayers to replace features with the existing ids?

A:
var features = layer.getSource().getFeatures();
for (var i = 0; i < features.length; i++) {
    if (features[i].get('id') == id) {
        layer.getSource().removeFeature(features[i]);
        break;
    }
}

or from @sox:

layer.getSource().removeFeature(layer.getSource().getFeatureById(id));
{ "language": "en", "url": "https://stackoverflow.com/questions/35036808", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What is the best strategy to update web project from JSF 1.1 to JSF 2.x? We have a large - and quite old - project using JSF 1.1 (specifically the MyFaces implementation) with JSP as view technology. Right now we want to upgrade the used JSF version, view technology and taglibs in use. Migrating from JSF 1.2 to JSF 2.0 gave some idea of the woes we are likely to encounter. I am new to JSF but I do understand that Facelets are only supported since JSF 2.x, so we need to update the JSF version before we can begin replacing JSPs with Facelets. Currently, we also use some old taglibs like Ajax4JSF and an old version of RichFaces, which - IIRC - is based on Ajax4JSF and of which one main feature is also to provide Ajax functionality and Ajax-enabled components to JSF. If I got this right, JSF 2.x supports Ajax functionality natively and therefore taglibs like Ajax4JSF aren't really needed anymore. Since it seems that RichFaces' "life cycle" finished last year (according to Wikipedia), this also seems to be a good opportunity to replace RichFaces with PrimeFaces, which seems to be still in active development and gets recommended quite often. I am not sure, though, if this replacement makes sense, or if these two component libraries even offer similar components, or if they aim to provide two totally different sets of components and this replacement is a bad idea to begin with. One way or the other, I'd like to know in which order these 3 steps would make the most sense. Since Facelets require JSF 2.x and current versions of modern taglibs also only seem to work with new JSF versions, I assume updating JSF has to be done first, but concerning the replacement of JSPs with Facelets and of RichFaces/Ajax4JSF with PrimeFaces, in which order should these two be done? My guess is that current taglibs might also depend on Facelets, so perhaps this order makes the most sense?
* JSF 1.1 => JSF 2.x
* JSP => Facelets
* RichFaces => PrimeFaces

A: To answer my own question: after working on this upgrade for some time and running into a few dead ends, the following order of steps has turned out best for me:

* Upgrade the JSF implementation (in my case: from MyFaces 1.1 to MyFaces 2.2.12)
* Replace JSP files with Facelets and comment out all occurrences of tags from unsupported tag libraries (some functionality will be lost until the migration is completed, but this way I have to migrate those taglibs just once - to use them in Facelets - and not twice (once for use in JSPs with JSF 2, and then for use in Facelets))
* Replace and update unsupported and outdated tag libs
{ "language": "en", "url": "https://stackoverflow.com/questions/43091931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Unable to set custom colors inside ExpandableListView I'm an Android newbie. I'm trying to set custom colors inside an ExpandableListView adapter. I have defined my colors in colors.xml, but I'm unable to use them in my adapter. I get an error "The method getResources() is undefined for the type ExpandableListAdapter". The function expects an int. I've tried to pass my result from getResources in, but it doesn't work. I've also tried to pass in a hex value, but it doesn't change anything. How can I use my custom colors in my code?

public View getGroupView(int groupPosition, boolean arg1, View convertView, ViewGroup arg3) {
    int n = 0;
    String laptopName = (String) getGroup(groupPosition);
    if (convertView == null) {
        LayoutInflater infalInflater = (LayoutInflater) context
                .getSystemService(Context.LAYOUT_INFLATER_SERVICE);
        convertView = infalInflater.inflate(R.layout.group_item, null);
    }
    TextView item = (TextView) convertView.findViewById(R.id.demo);
    item.setTypeface(null, Typeface.BOLD);
    item.setText(laptopName);
    convertView.setBackgroundColor(getResources().getColor(R.color.purple));
    return convertView;
}

Thanks guys, the following snippet works:

this.context = (Activity) context;
convertView.setBackgroundColor(this.context.getResources().getColor(R.color.purple));

A: As loulou8284 mentioned, you can put it in your XML, or, if it is fixed, define it with Color.rgb(). But to make your code run you need a reference to your Context, as your class is not declared inside a context class:

convertView.setBackgroundColor(getContext().getResources().getColor(R.color.purple));

A: Assuming you have a context instance somewhere in the adapter, instead of this

convertView.setBackgroundColor(getResources().getColor(R.color.purple));

it should be this

convertView.setBackgroundColor((your context).getResources().getColor(R.color.purple));

and if you don't have a reference to the context, just pass it in to the adapter constructor.

A: You can declare the color in your .xml file (in your item xml file).

A: Use setBackgroundResource() rather than setBackgroundColor(). setBackgroundResource() takes an integer resource index as parameter, and loads whatever resource that index points to (for example, a drawable, a string or, in your case, a color). setBackgroundColor(), however, takes an integer representing a color. That is, not a color resource, but a direct, hexadecimal, RGBA value (0xAARRGGBB).
{ "language": "en", "url": "https://stackoverflow.com/questions/17846084", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: calculating marginal tax rates I found part of the answer to my problem of calculating marginal taxes using this: income_tax <- function(income, brackets = c(18200, 37000, 80000, 180000, Inf), rates = c(0, .19, .325, .37, .45)) { sum(diff(c(0, pmin(income, brackets))) * rates) } I want to also be able to calculate a fixed component, so that I can add, say, $100 to the tax calculated for every income above the first bracket of 18200. I tried this, but it adds $100 to all incomes below 18200 as well. income_tax <- function(income, brackets = c(18200, 37000, 80000, 180000, Inf), rates = c(0, .19, .325, .37, .45), fixed = c(0,100,0,0,0)) { sum(diff(c(0, pmin(income, brackets))) * rates + pmin(income, fixed)) } Any help on whatever obvious error I've made would be much appreciated! A: How about this? income_tax <- function(income, brackets = c(18200, 37000, 80000, 180000, Inf), rates = c(0, .19, .325, .37, .45), fixed = c(0,100,0,0,0)) { check <- diff(c(0,pmin(income, brackets))) sum(check * rates + fixed * (check>0)) } which gives: income_tax(18200) # [1] 0 income_tax(18201) # [1] 100.19
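For readers coming from outside R, the accepted answer's trick (charge a bracket's fixed fee only when that bracket's marginal slice is positive) can be sketched in Python. This is only an illustrative translation, using the bracket, rate, and fee values from the question:

```python
def income_tax(income,
               brackets=(18200, 37000, 80000, 180000, float("inf")),
               rates=(0, 0.19, 0.325, 0.37, 0.45),
               fixed=(0, 100, 0, 0, 0)):
    """Marginal tax: each bracket taxes only the income falling inside it,
    and a bracket's fixed fee applies only if that bracket is reached."""
    total, lower = 0.0, 0
    for upper, rate, fee in zip(brackets, rates, fixed):
        slice_width = min(income, upper) - lower  # income inside this bracket
        if slice_width > 0:
            total += slice_width * rate + fee
        lower = upper
        if income <= upper:
            break
    return total

print(income_tax(18200))  # 0.0
print(income_tax(18201))  # 100.19, matching the R answer
```

The same boundary behaviour falls out: the $100 fee is only charged once income actually crosses into the second bracket.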
{ "language": "en", "url": "https://stackoverflow.com/questions/74311623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Docker Couchbase: Cannot connect to port 8091 using curl from within entrypoint script Running docker-machine version 0.5.0, Docker version 1.9.0 on OS X 10.11.1. I've a Couchbase image of my own (not the official one). From inside the entrypoint script, I'm running some curl commands to configure the Couchbase server and to load sample data. Problem is, curl fails with error message Failed to connect to localhost port 8091: Connection refused. I've tried 127.0.0.1, 0.0.0.0, localhost, all without any success. netstat shows that port 8091 on localhost is listening. If I later log on to the server using docker exec and run the same curl commands, those work! What am I missing? Error: couchbase4 | % Total % Received % Xferd Average Speed Time Time Time Current couchbase4 | Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to localhost port 8091: Connection refused netstat output: root@cd4d3eb00666:/opt/couchbase/var/lib# netstat -lntu Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 0.0.0.0:21100 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:21101 0.0.0.0:* LISTEN tcp 0 0 127.0.0.1:9998 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:4369 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:8091 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:8092 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:41125 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:11209 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:11210 0.0.0.0:* LISTEN tcp6 0 0 :::11209 :::* LISTEN tcp6 0 0 :::11210 :::* LISTEN A: Here is my Dockerfile: FROM couchbase COPY configure-cluster.sh /opt/couchbase CMD ["/opt/couchbase/configure-cluster.sh"] and configure-cluster.sh /entrypoint.sh couchbase-server & sleep 10 curl -v -X POST http://127.0.0.1:8091/pools/default -d memoryQuota=300 -d indexMemoryQuota=300 curl -v http://127.0.0.1:8091/node/controller/setupServices -d services=kv%2Cn1ql%2Cindex curl -v http://127.0.0.1:8091/settings/web -d port=8091 -d username=Administrator -d password=password 
curl -v -u Administrator:password -X POST http://127.0.0.1:8091/sampleBuckets/install -d '["travel-sample"]' This configures the Couchbase server, but I'm still debugging how to bring Couchbase back to the foreground. Complete details at: https://github.com/arun-gupta/docker-images/tree/master/couchbase A: It turns out that if I do the curls after restarting the server, those work. Go figure! That said, note that the REST API for installing sample buckets is undocumented as far as I know. arun-gupta's blog and his answer here are the only places where I saw any mention of a REST call for installing sample buckets. There's a Python script available, but that requires installing python-httplib2. That said, arun-gupta's last curl statement may be improved upon as follows: if [ -n "$SAMPLE_BUCKETS" ]; then IFS=',' read -ra BUCKETS <<< "$SAMPLE_BUCKETS" for bucket in "${BUCKETS[@]}"; do printf "\n[INFO] Installing %s.\n" "$bucket" curl -sSL -w "%{http_code} %{url_effective}\\n" -u $CB_USERNAME:$CB_PASSWORD --data-ascii '["'"$bucket"'"]' $ENDPOINT/sampleBuckets/install done fi where SAMPLE_BUCKETS can be a comma-separated environment variable, possible values being combinations of gamesim-sample, beer-sample and travel-sample. The --data-ascii option keeps curl from choking on the dynamically created JSON. Now if only there were an easy way to start the server in the foreground. :)
{ "language": "en", "url": "https://stackoverflow.com/questions/34131670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to query associations in Linq to Entity framework in .NET Ria Services I have just started with Linq and Linq to Entity Framework. On top of that, with the .NET Ria services. My problem is that I have 2 tables, Folder and Item, with a many-to-many relationship using a third "connection" table FolderItem, like this: (source: InsomniacGeek.com) In the .NET RIA Service domain service, I want to create a method that returns all Items for a given FolderID. In T-SQL, that would be something like this: SELECT * FROM Item i INNER JOIN FolderItem fi ON fi.ItemID = i.ID WHERE fi.FolderID = 123 My Linq knowledge is limited, but I want to do something like this: public IQueryable<Item> GetItems(int folderID) { return this.Context.Items.Where(it => it.FolderItem.ID == folderID); } This is not the correct syntax; it gives this error: Cannot convert lambda expression to type 'string' because it is not a delegate type What is the correct way of doing this (with associations)? Can I use the .Include("FolderItem") somehow? Please, method syntax only. PS. Here's how it would look using a Query Expression: public IQueryable<Item> GetItemsByFolderID(int folderID) { return from it in this.Context.Items from fi in it.FolderItem where fi.Folder.ID == folderID select it; } The question is, what would it look like using the Method Based Query Syntax? A: Your GetItems looks fine to me. You could also do: public IQueryable<Item> GetItems(int folderID) { return this.Context.FolderItems .Where(fi => fi.ID == folderID) .Select(fi => fi.Items); } Both should return the same thing. A: You can have the parent entity contain the child entities.
There are 2 things you have to do to do this: 1) Update your domain query to include the folder items and items: return from x in Context.FolderItems .Include("FolderItem") .Include("FolderItem.Item") where x.ID == folderID select x 2) Update the metadata file so that the RIA service knows to return the associations to the client: [MetadataTypeAttribute(typeof(FolderMetadata))] public partial class Folder { internal sealed class FolderMetadata { ... [Include] public FolderItem FolderItem; } }
{ "language": "en", "url": "https://stackoverflow.com/questions/1370239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: HTML::Entities::encode_entities_numeric: Convert hex output to decimal I am using the HTML::Entities module to encode some special chars. Here is my sample code: use HTML::Entities qw(encode_entities_numeric); my $str = "some special chars like € ™ © ®"; encode_entities_numeric($str); print $str; Output: &#x20AC; &#x2122; &#xA9; &#xAE; The output is the HTML numeric hex code of each char. I want the output in the form of the HTML numeric decimal value of the chars, like &#8364; &#8482; &#169; &#174; Is there a way to do this in encode_entities_numeric()? A: HTML::Entities does the conversion in a little sub named num_entity. Redefine that to be whatever you want: use utf8; use HTML::Entities qw(encode_entities_numeric); { no warnings 'redefine'; sub HTML::Entities::num_entity { sprintf "&#%d;", ord($_[0]); } } my $str = "some special chars like € ™ © ®"; encode_entities_numeric($str); print $str; For what it's worth, this is the sort of thing Perl wants you to do. It's designed so that you can change things in this manner. It would be nicer if HTML::Entities were set up to allow derived classes, but it's not. Perl, recognizing that the world is messy like this, has ways for you to adjust that for what you need. A: No, it's not configurable (because &#x20AC; and &#8364; are 100% equivalent in HTML).
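This is a Perl question, but the transformation being asked for (decimal rather than hex numeric character references) is simple enough to sketch in Python as a cross-check of the expected output. The 0x80 cutoff below is an assumption chosen just for this sketch; it is not how HTML::Entities decides what to encode:

```python
def encode_entities_decimal(text, threshold=0x80):
    """Replace every character at or above `threshold` with a decimal
    numeric character reference (&#NNN;); leave ASCII untouched."""
    return "".join(
        ch if ord(ch) < threshold else "&#%d;" % ord(ch)
        for ch in text
    )

print(encode_entities_decimal("€ ™ © ®"))  # &#8364; &#8482; &#169; &#174;
```

The printed values match the decimal references the poster listed, confirming that `sprintf "&#%d;"` in the redefined num_entity produces exactly that output.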
{ "language": "en", "url": "https://stackoverflow.com/questions/34042846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to update a user's friends from facebook on my site? I have an app in Rails and I have it configured so that when a user logs into my app, a list of their Facebook friends who have authorized my app is stored in my database, so that I can display this info to the user. But does anyone have any best practices for keeping this information up to date? For example, if user A and user B both use my app, and user A adds user B as a friend, how can I ensure that user B is displayed on user A's friend list? I know that I could query Facebook every time that I display a user's friends list, but the main reason for storing a user's Facebook friends in the database is to increase performance and prevent having to make this kind of call each time a user clicks through. Any ideas or best practices? A: Maybe have a scheduled task to update that information every day or week? Here is one gem that can help with that: https://github.com/bvandenbos/resque-scheduler/
{ "language": "en", "url": "https://stackoverflow.com/questions/11712824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Upload image using Django I am trying to upload an image using Django and there are no errors. But there are no files or directories inside the media folder, and except for the image field, all other fields are updated in the table. Models.py from django.db import models # Create your models here. class Player_Profile(models.Model): name=models.CharField(max_length=50, null=True) email=models.EmailField(max_length=50) profile_picture=models.ImageField(upload_to='profile_picture/%y%m%d', blank=True, null=True) age = models.BooleanField() views.py from django.shortcuts import render_to_response from django.template import RequestContext from upload.models import Player_Profile def home(request): return render_to_response('upload/index.html',context_instance=RequestContext(request)) def submit(request): if request.method == 'POST': username=request.POST.get('username') email=request.POST.get('email') age=request.POST.get('age') pic=request.FILES.get('myfile') profile_obj=Player_Profile(profile_picture=pic,name=username, email=email,age=age).save() return render_to_response('upload/welcome.html',context_instance=RequestContext(request)) index.html <form action="/upload/submit/" method="POST" encrypt="multipart/form-data"> {% csrf_token %} User Name :<input type="text" name="username" id="usrname"/><br/> Age :<input type="text" name="age" id="age"/><br/> Email :<input type="email" name="email"> <input type="file" name="myfile" /><br/> <input type="submit" name="submit" value="Upload" /> and inside settings MEDIA_ROOT = '/home/mridul/Desktop/Django/interim/pic/uploadpic/media' MEDIA_URL = '/media/' and I manually created the media directory inside the uploadpic directory. A: It's not: <form action="/upload/submit/" method="POST" encrypt="multipart/form-data"> it's <form action="/upload/submit/" method="POST" enctype="multipart/form-data"> i.e. enctype, not encrypt. As an aside, you should use a Form or ModelForm to do this; it will make your life much easier.
{ "language": "en", "url": "https://stackoverflow.com/questions/18250577", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: SQL Server stored procedure or function has too many arguments specified, when trying to pass 2 arguments in C# I am trying to execute a stored procedure called getLastFeatureUpdate. I will explain the problem below, step by step: I have created a table in SQL like this: CREATE TABLE testTable ( DayTime INT NOT NULL, /*yyddhhmm, 1010102345*/ FeatureNbr SMALLINT NOT NULL, Val FLOAT(53) NOT NULL ); I have now created a stored procedure called getLastFeatureUpdate. Important to notice here is that I use 2 parameters, @maxDateTime and @tableName, as those are different each time. So I will then pass in those 2 parameters in the C# code that follows at the end. The stored procedure (note that if I remove the @tableName text parameter from the procedure and the C# code, then the code does work): CREATE PROCEDURE getLastFeatureUpdate @maxDateTime float(53) = 0, @tableName text AS SELECT test.FeatureNbr, test.DayTime, test.Val FROM @tableName test WHERE DayTime = (SELECT MAX(DayTime) FROM @tableName WHERE FeatureNbr = test.FeatureNbr --This is what you are missing AND DayTime <= @maxDateTime) --10102248 The C# code is where I want to return the data from testTable, which is shown in: MessageBox.Show(v1 + "," + v2 + "," + v3); But here is where I get the error: Procedure or function getLastFeatureUpdate has too many arguments specified Notice that if I don't pass the 2nd parameter here, @tableName, the code will work (I then also have to remove @tableName as a parameter in the stored procedure getLastFeatureUpdate) cmd.Parameters.Add(new SqlParameter("@maxDateTime", 10102248)); cmd.Parameters.Add(new SqlParameter("@tableName", "testTable")); //If not using this parameter, the code will work C# code: void getLastFeatureUpdate() { using (SqlConnection conn = new SqlConnection(GetConnectionString())) { conn.Open(); // 1. create a command object identifying the stored procedure SqlCommand cmd = new SqlCommand("getLastFeatureUpdate", conn); // 2.
set the command object so it knows to execute a stored procedure cmd.CommandType = CommandType.StoredProcedure; // 3. add parameter to command, which will be passed to the stored procedure cmd.Parameters.Add(new SqlParameter("@maxDateTime", 10102248)); cmd.Parameters.Add(new SqlParameter("@tableName", "testTable")); //If not using this parameter, the code will work // execute the command using (SqlDataReader rdr = cmd.ExecuteReader()) { // iterate through results, printing each to console while (rdr.Read()) { int v1 = (int)rdr["DayTime"]; int v2 = (Int16)rdr["FeatureNbr"]; double v3 = (double)rdr["Val"]; MessageBox.Show(v1 + "," + v2 + "," + v3); } } } } static private string GetConnectionString() { return "Data Source=(LocalDB)\\MSSQLLocalDB;AttachDbFilename=C:\\Users\\andre\\source\\repos\\TestDatabaseCreation\\DatabaseTest.mdf;Integrated Security=True;Connect Timeout=30"; } A: You cannot parameterize the table name in SQL Server, so: that SQL is invalid, and the CREATE PROC did not in fact run. What the contents of the old proc are: only you can know, but: it isn't the code shown. It was probably a dummy version you had at some point in development. Try typing: exec sp_helptext getLastFeatureUpdate; Specifically, the server should have told you: Msg 1087, Level 16, State 1, Procedure getLastFeatureUpdate, Line 12 [Batch Start Line 0] Must declare the table variable "@tableName". Msg 1087, Level 16, State 1, Procedure getLastFeatureUpdate, Line 19 [Batch Start Line 0] Must declare the table variable "@tableName".
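The same restriction holds in essentially every engine, not only SQL Server: placeholders can bind values but never identifiers such as table names. A quick sketch using Python's sqlite3 (the table name and whitelist here are invented for the demo) shows both the parse-time failure and the usual whitelist workaround:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE testTable (DayTime INTEGER, FeatureNbr INTEGER, Val REAL)")
conn.execute("INSERT INTO testTable VALUES (1010102345, 1, 3.14)")

# Binding a value through a placeholder is fine:
rows = conn.execute("SELECT * FROM testTable WHERE FeatureNbr = ?", (1,)).fetchall()
print(rows)  # [(1010102345, 1, 3.14)]

# Binding a table name is rejected at parse time, before values are even bound:
try:
    conn.execute("SELECT * FROM ? WHERE FeatureNbr = ?", ("testTable", 1))
except sqlite3.OperationalError as exc:
    print("rejected:", exc)

# Typical workaround: validate the name against a whitelist, then splice it in.
ALLOWED_TABLES = {"testTable"}

def query_table(table_name, feature):
    if table_name not in ALLOWED_TABLES:
        raise ValueError("unknown table: %s" % table_name)
    sql = "SELECT * FROM %s WHERE FeatureNbr = ?" % table_name
    return conn.execute(sql, (feature,)).fetchall()
```

In the T-SQL case, one common approach is to drop the @tableName parameter and build the statement inside the procedure with sp_executesql after quoting the validated name with QUOTENAME, or to validate it client-side against a known list, exactly as the whitelist above does.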
{ "language": "en", "url": "https://stackoverflow.com/questions/64228888", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Javafx TableView Scroll not working Guys, I am trying to make an application involving a TableView. I want my program to run a certain method when the user is scrolling, has stopped scrolling, or has scrolled to the bottom of the page in the TableView. I was trying to use the On Scroll and On Scroll Finished handlers in Scene Builder, but nothing happens.
{ "language": "en", "url": "https://stackoverflow.com/questions/49215594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: error with gtkmm 3 in ubuntu 12.04 I installed libgtkmm-3.0-dev in Ubuntu 12.04 and I am trying to learn and write programs with C++ and gtkmm 3. I went to this link "http://developer.gnome.org/gtkmm-tutorial/unstable/sec-basics-simple-example.html.en" and tried to compile the simple example program: #include <gtkmm.h> int main(int argc, char *argv[]) { Glib::RefPtr<Gtk::Application> app = Gtk::Application::create(argc, argv, "org.gtkmm.examples.base"); Gtk::ApplicationWindow window; return app->run(window); } My file name is "basic.cc" and I open a terminal and type the following command to compile: g++ basic.cc -o basic `pkg-config gtkmm-3.0 --cflags --libs` The compile completed without any errors, but when I try to run the program by typing ./basic in the terminal I get the following error: ~$ ./simple ./simple: symbol lookup error: ./simple: undefined symbol:_ZN3Gtk11Application6createERiRPPcRKN4Glib7ustringEN3Gio16ApplicationFlagsE ~$ How can I solve this problem? I can compile any gtkmm 2.4 code with this command: " g++ basic.cc -o basic pkg-config gtkmm-3.0 --cflags --libs " and this command: " g++ basic.cc -o basic pkg-config gtkmm-2.4 --cflags --libs " Thanks. A: I think you hit this gtkmm bug, apparently triggered by more recent versions of GTK+, and now fixed: https://bugzilla.gnome.org/show_bug.cgi?id=681323 I have asked Ubuntu to update their package, but they are usually slow about that if they do it at all: https://bugs.launchpad.net/ubuntu/+source/gtkmm3.0/+bug/1046469 A: You might want to try reinstalling libgtkmm-3.0-dev. The code compiles fine for me but I get a seg fault. It does work when I change Gtk::ApplicationWindow to Gtk::Window. A: There is nothing wrong with your install. That code is bad. Try it again, using Gtk::Window window; instead of the ApplicationWindow. When the GNOME documentation for a given class has a description of "TODO", that's a bad thing.
{ "language": "en", "url": "https://stackoverflow.com/questions/11076059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: is there any conditional annotation in JUnit to mark a few test cases to be skipped? As far as I know, to skip a test case the simplest thing to do is to remove the @Test annotation, but doing that over a large number of test cases is cumbersome. I was wondering if there is any annotation available in JUnit to turn off a few test cases conditionally. A: As other people mentioned here, @Ignore ignores a test. If you want something conditional, then look at the JUnit assumptions. http://junit.sourceforge.net/javadoc/org/junit/Assume.html This works by looking at a condition and only proceeding to run the test if that condition is satisfied. If the condition is false, the test is effectively "ignored". If you put this in a helper class and have it called from a number of your tests, you can effectively use it in the way you want. Hope that helps. A: You can use the @Ignore annotation, which you can add to a single test or test class to deactivate it. If you need something conditional, you will have to create a custom test runner that you can register using @RunWith(YourCustomTestRunner.class) You could use that to define a custom annotation which uses expression language or references a system property to check whether a test should be run. But such a beast doesn't exist out of the box. A: Hard to know if it is the @Ignore annotation that you are looking for, or if you actually want to turn off certain JUnit tests conditionally. Turning off test cases conditionally is done using Assume. You can read about assumptions in the release notes for JUnit 4.5 There's also a rather good thread here on Stack Overflow: Conditionally ignoring tests in JUnit 4 A: If you use JUnit 4.x, just use @Ignore. See here
{ "language": "en", "url": "https://stackoverflow.com/questions/6096061", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Removing an item from a stack, whilst iterating over it, in android Usually, in Java, to delete an item from a stack (or set) I would do something along the lines of: Stack<Particle> particles = new Stack<Particle>(); int i = 0; while(i < particles.size()) { if(particles.elementAt(i).isAlive()) { i++; } else { particles.remove(i); } } I've searched the Android docs and googled quite a few times in an attempt to achieve the same results, but nothing seems to work. Can anyone help me here? A: Try looping using an Iterator, since per Oracle Iterator.remove() is the only safe way to remove an item from a Collection (including a Stack) during iteration. From http://docs.oracle.com/javase/tutorial/collections/interfaces/collection.html Note that Iterator.remove is the only safe way to modify a collection during iteration; the behavior is unspecified if the underlying collection is modified in any other way while the iteration is in progress. So something like the following should work: Stack<Particle> particles = new Stack<Particle>(); ... // Add a bunch of particles Iterator<Particle> iter = particles.iterator(); while (iter.hasNext()) { Particle p = iter.next(); if (!p.isAlive()) { iter.remove(); } } I've used this approach in a real Android app (OneBusAway Android - see code here), and it worked for me. Note that in the code for this app I also included a try/catch block in case the platform throws an exception, and in this case just iterate through a copy of the collection and then remove the item from the original collection. For you, this would look like: try { ...
// above code using iterator.remove } catch(UnsupportedOperationException e) { Log.w(TAG, "Problem removing from stack using iterator: " + e); // The platform apparently didn't like the efficient way to do this, so we'll just // loop through a copy and remove what we don't want from the original ArrayList<Particle> copy = new ArrayList<Particle>(particles); for (Particle p : copy) { if (!p.isAlive()) { particles.remove(p); } } } This way you get the more efficient approach if the platform supports it, and if not you still have a backup. A: Have you ever tried this: Stack<String> stack = new Stack<String>(); stack.push("S"); stack.push("d"); for (String s : stack){ stack.pop(); }
{ "language": "en", "url": "https://stackoverflow.com/questions/19041763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: "Event driven" static webpage updates I get that by the very nature of static web pages, their content is, well, "static". In a world, though, where it's becoming ever more popular to host static pages on a service like AWS S3 and run some cloud computing functionality through something like AWS Lambda, I was wondering if it was possible to update your static content when something changes, making your sites less static. Sure, you can send out Ajax calls every second or so to see if anything changed, but in my humble opinion, that seems stupid and not really a viable option. Say, for instance, I kick off a process from my static page that will write an entry into a key-value store database when the process completes. Is there a way to update the web page to let the user know the process has finished, without reloading the page?
{ "language": "en", "url": "https://stackoverflow.com/questions/42589664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I change the keypair path in the CLI to create a Solana token? I am making a token on Solana. For practice, I first created a token on Devnet. But now I am creating a token on Mainnet Beta. When I worked on Devnet I created a file system wallet, and now I have created another file system wallet, but when I try to change the path from the old keypair path, I can't do it and I get an error. Please help me.
{ "language": "en", "url": "https://stackoverflow.com/questions/72508955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: NSTextView / NSScrollView - A few questions to help me understand its proper usage I have created a "notes" field designed to hold multiple paragraphs of text which I would like to store in a custom object. Originally, I just used an NSTextField as a temporary solution, but this does not allow me to scroll or have multiple paragraphs of text... In IB I have placed an NSTextView (which seems to be wrapped inside an NSScrollView.) Upon execution of my program, it seems to allow me to enter text in multiple paragraphs, scroll, etc. In short, it LOOKS to be exactly what I would like it to be. So far so good. Now, I need to retrieve the data from this field and store it in my custom object. This is where I'm getting a bit lost within the developer documentation... My goals are fairly straightforward: * *Allow users to type away in the box. *Store the contents of the box into a variable (array, etc.) in my custom object when the user moves to another field, leaving the notes field. *Display the user's stored text in the text box next time the record is viewed. Second, is there a simple way to retrieve and store the data into a "notes" variable in my custom object (such as an NSString object? I would think having multiple paragraphs would exclude an NSString object as an option here, but maybe I'm wrong) or am I getting into a more complex area here (such as having to store it in an array of NSString objects, etc.)? Any help would be appreciated! A: You can get the data using -string, defined by NSText (e.g. NSString *savedString = [aTextView string]) Your save code can be put in your NSTextDelegate (read, delegate of the NSTextView, because it's the immediate superclass), in – textDidEndEditing: which will be called, well, when editing is finished (e.g. when the user clicks outside the view) or one of the other methods. Then to reload the saved string if you emptied the text view or something, use [textView setString:savedString] before editing begins.
NSTextDelegate documentation: here. I'm not sure what you mean when you say "store the contents of the box into a variable (array, etc.)". Are you hoping for an array of custom notes? Text views store a string of data, so the easiest way of storing its value is using one string; if you need an array of notes you'd have to split the string value into different paragraphs, which shouldn't be too hard.
{ "language": "en", "url": "https://stackoverflow.com/questions/10704629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Error while importing tensorflow in jupyter environment It's repeatedly showing this error while importing TensorFlow. I am using a separate environment in Anaconda with Jupyter installed. Can anyone help me solve this error? ImportError Traceback (most recent call last) E:\Anaconda2\Library\envs\tf-gpu\lib\site-packages\tensorflow\python\pywrap_tensorflow.py in 63 try: ---> 64 from tensorflow.python._pywrap_tensorflow_internal import * 65 # This try catch logic is because there is no bazel equivalent for py_extension. ImportError: DLL load failed: The specified module could not be found. During handling of the above exception, another exception occurred: ImportError Traceback (most recent call last) in 1 import os ----> 2 import tensorflow as tf 3 import matplotlib.pyplot as plt 4 import numpy as np 5 import pandas as pd E:\Anaconda2\Library\envs\tf-gpu\lib\site-packages\tensorflow\__init__.py in 39 import sys as _sys 40 ---> 41 from tensorflow.python.tools import module_util as _module_util 42 from tensorflow.python.util.lazy_loader import LazyLoader as _LazyLoader 43 E:\Anaconda2\Library\envs\tf-gpu\lib\site-packages\tensorflow\python\__init__.py in 37 # go/tf-wildcard-import 38 # pylint: disable=wildcard-import,g-bad-import-order,g-import-not-at-top ---> 39 from tensorflow.python import pywrap_tensorflow as _pywrap_tensorflow 40 41 from tensorflow.python.eager import context E:\Anaconda2\Library\envs\tf-gpu\lib\site-packages\tensorflow\python\pywrap_tensorflow.py in 81 for some common reasons and solutions.
Include the entire stack trace 82 above this error message when asking for help.""" % traceback.format_exc() ---> 83 raise ImportError(msg) 84 85 # pylint: enable=wildcard-import,g-import-not-at-top,unused-import,line-too-long ImportError: Traceback (most recent call last): File "E:\Anaconda2\Library\envs\tf-gpu\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 64, in from tensorflow.python._pywrap_tensorflow_internal import * ImportError: DLL load failed: The specified module could not be found. A: The DLL load failure is because either you have not installed the Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019, or your CPU does not support AVX2 instructions. There is a workaround: either compile TensorFlow from source or use Google Colaboratory. Follow the instructions mentioned here to build TensorFlow from source.
{ "language": "en", "url": "https://stackoverflow.com/questions/64513180", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Android SQLite database schema from xml file Is it possible to create an SQLite database schema using an XML DB definition? Does OrmLite provide such functionality?
{ "language": "en", "url": "https://stackoverflow.com/questions/16541579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Cancel backButton click in my navigationController I have a custom navigationController : #import "customNavigationController.h" #import "StartViewController.h" #import "EtapeViewController.h" @implementation customNavigationController - (UIViewController *)popViewControllerAnimated:(BOOL)animated { // Guide home screen; hide the navigationBar if([self.viewControllers count] == 2){ self.navigationBarHidden = TRUE; return [super popViewControllerAnimated:animated]; } // If we are not at the home screen, perform the normal backBarButton action else { // If we are in a step, the backButton is used to go back through the steps, not back to the workflow NSString *className = NSStringFromClass([[self.viewControllers objectAtIndex:[self.viewControllers count] - 1] class]); if ([className isEqualToString:@"EtapeViewController"]) { EtapeViewController *etape = [self.viewControllers objectAtIndex:[self.viewControllers count] - 1]; if (etape.show_previous_button) { [etape previousEtape:nil]; return FALSE; } return [super popViewControllerAnimated:animated]; } else { return [super popViewControllerAnimated:animated]; } } } @end In some cases, I want to cancel the click event of the backButton (on the line that reads "return FALSE"), but it doesn't work. Is there a way to do it? A: in place of return FALSE, you can do: return nil; or return [self topViewController]; Either should have the right side effect. That being said, be careful with your UI design here. Make sure the user knows why the back button doesn't work somehow. A: I don't understand why you would make the Back button ignore taps? It seems like this would confuse users and the App Store team would consider this a bug. Perhaps you could post a screenshot? It would probably be better to redesign your interface and consider 1) using toolbar buttons for navigation (like Mobile Safari) or 2) fully supporting UINavigation-based views rather than working around it.
Update: It sounds like you're going to perform a different action, like displaying a confirmation? I don't know of any official ways to do what you want, since the UINavigationControllerDelegate methods just notify you about transitions; they don't let you cancel/modify them. (And if the transition is animated then playing with the navigation controller's view stack probably won't help.) So you could always float a transparent (or almost transparent) window over the back button and intercept taps that way. Here's some sample code that does something similar with the status bar: https://github.com/myell0w/MTStatusBarOverlay A: Why don't you disable the back button in the situations where you don't want the user to tap it?
{ "language": "en", "url": "https://stackoverflow.com/questions/5236640", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: using .wrapInner instead of .show() Newbie here. I am trying to use jQuery's wrapInner to show the next choice to the user while hiding the original element. Here is my jsfiddle. Once I click the endorse radio button it hides the elements in between. The cancel button shows the elements again. But clicking the endorse radio button again, nothing happens. Any help on this would be more than appreciated! html: <div id="engr-action" > <div id="engr-choice"> <input id="endorse" class="engr-choice" type="radio" name="encoder-pick"/> <label for="endorse">ENDORSEMENT</label> <input id="encode" class="engr-choice" type="radio" name="encoder-pick"/> <label for="encode">ENCODE</label> </div> <button id="cancel-action">Cancel</button> </div> jquery: $(function(){ $('#engr-choice').buttonset(); $('#cancel-action') .button() .click(function() { $('#engr-choice').html(''); $('#endorse-edit').hide(); $('#engr-choice').wrapInner('<input id="endorse" class="engr-choice" type="radio" name="encoder-pick"/><label for="endorse">ENDORSEMENT</label>'); $('#engr-choice input').removeAttr('checked'); $('#engr-choice').buttonset('refresh'); return false; }); $('#endorse').click(function(){ $('#engr-choice').html(''); $('#engr-choice').wrapInner('<div id="endorse-edit"><a href="">Edit</a></div>'); $('#endorse-edit').button(); return false; }); }); A: Since your element is generated "on the fly", through JavaScript, your $('#endorse').click(.. event won't work, as that element did not exist in the DOM. So in order to add events to elements created on the fly, you need to use event delegation; change: $('#endorse').click(function(){ .. to $(document).on('click', '#endorse',function(){ ...
See: Updated jsFiddle A: You can try this: Fiddle setup $(function () { $('#engr-choice').buttonset(); $('#cancel-action').button().click(function () { $('#engr-choice').html(''); $('#endorse-edit').hide(); $('#engr-choice').append('<input id="endorse" class="engr-choice" type="radio" name="encoder-pick"/> <label for="endorse">ENDORSEMENT</label>'); $('#engr-choice input').prop('checked', false); return false; }); $('#engr-action').on('click', '#endorse', function () { $('#engr-choice').html(''); $('#engr-choice').wrapInner('<div id="endorse-edit"><a href="">Edit</a></div>'); $('#endorse-edit').button(); }); }); As you are inserting HTML elements via JavaScript/jQuery, direct binding of events won't be available for them, so you need to use event delegation, that is, delegate the event to the closest static parent, which in your case is #engr-action; or you can use $(document), which is always available to delegate the events.
{ "language": "en", "url": "https://stackoverflow.com/questions/17228645", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Read XML with attribute names in Scala I have the following XML: <TABLES> <TABLE attrname="Red"> <ROWDATA> <ROW Type="solid" track="0" Unit="0"/> </ROWDATA> </TABLE> <TABLE attrname="Blue"> <ROWDATA> <ROW Type="light" track="0" Unit="0"/> <ROW Type="solid" track="0" Unit="0"/> <ROW Type="solid" track="0" Unit="0"/> </ROWDATA> </TABLE> I am using Spark and Scala. I want to read each field in the ROW tag and differentiate by the attribute names. Currently the code below just reads all the values inside the ROW tag but I want to read them based on the attribute names. val df = session.read .option("rowTag", "ROW") .xml(filePath) df.show(10) df.printSchema() Thanks in advance. A: Check below code. val spark = SparkSession.builder().master("local").appName("xml").getOrCreate() import com.databricks.spark.xml._ import org.apache.spark.sql.functions._ import spark.implicits._ val xmlDF = spark.read .option("rowTag", "TABLE") .xml(xmlPath) .select(explode_outer($"ROWDATA.ROW").as("row"),$"_attrname".as("attrname")) .select( $"row._Type".as("type"), $"row._VALUE".as("value"), $"row._Unit".as("unit"), $"row._track".as("track"), $"attrname" ) xmlDF.printSchema() xmlDF.show(false) Schema root |-- type: string (nullable = true) |-- value: string (nullable = true) |-- unit: long (nullable = true) |-- track: long (nullable = true) |-- attrname: string (nullable = true) Sample Data +-----+-----+----+-----+--------+ |type |value|unit|track|attrname| +-----+-----+----+-----+--------+ |solid|null |0 |0 |Red | |light|null |0 |0 |Blue | |solid|null |0 |0 |Blue | |solid|null |0 |0 |Blue | +-----+-----+----+-----+--------+
{ "language": "en", "url": "https://stackoverflow.com/questions/67642800", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: CSS wordwrapping and overflow not breaking paragraph I have a paragraph next to a heading. Both are display:inline-block, but when the page is compressed through a resolution reduction, the entire paragraph falls to the next line. I have tried word wrapping and overflow but they seem to have no effect. Is there something else I can use? The code below does not reflect the word wrapping or overflow. .header_class_name { display: inline-block } .para_class_name { display: inline-block } <h4 class="header_class_name">my title here</h4> <p class="para_class_name">paragraph here</p> A: You can use display: inline; inside of display: inline-block;. .header_class_name { display: inline; } .para_class_name { display: inline; word-break: break-word; } <h4 class="header_class_name">my title here</h4> <p class="para_class_name">Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.</p> A: edit - adding a solution based on your comment: try using white-space: nowrap;, and if that doesn't work, you can try setting the containing div to a higher width. You can try doing several things: * *wrap both divs with <nobr> *make the parent div a flexbox with flex-direction: row; flex-wrap: nowrap; the problem with this solution is that they would 'leak' outside your viewport width, so I would consider allowing line breaks/making the font smaller/using an ellipsis A: You can use the min-width property, so when the window shrinks, the layout should stay firm.
{ "language": "en", "url": "https://stackoverflow.com/questions/70751766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to decode unicode string to unicode value I have a program in Python 2.7 that does the following: * *Ask the user for input (in non-English characters, e.g. Hebrew, English) *Split each character of the sentence into a list. (The input can be a small paragraph, or an email) *Convert the characters to Unicode values. So in the end every item of the list is a unicode escape, e.g. "\u0391", that can be manipulated as a string. I started quite well but I can't split the letters into the list nor print the right unicode value. Gr_text = unicode(raw_input("Type your message below:\n"), 'unicode-escape') Gr = Gr_text.split() print Gr Example input: Ενα απλο παραδειγμα. The input (translated as "A simple example") is in the Greek language without intonations. This sentence should be transformed into a list as ['\u0395', '\u03bd', '\u03b1','\u0020', '\u03b1', '\u03c0', '\u03bb', '\u03bf','\u0020', '\u03c0', '\u03b1', '\u03c1', '\u03b1', '\u03b4', '\u03b5', '\u03b9', '\u03b3', '\u03bc', '\u03b1','\u0020',] To point out, I also want to convert spaces and special characters. Then each item of the list is a string holding a unicode escape, not a plain letter, so I can manipulate it and give it another value. A: I have tested this and it works for me but your mileage may vary. import sys, locale Gr_text = raw_input('Type your message below:\n').decode(sys.stdin.encoding or locale.getpreferredencoding(True)) Gr = Gr_text.split() print Gr "Full Disclosure" credit goes to https://stackoverflow.com/a/477496/1427800
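To illustrate the transformation the question asks for, here is a small sketch in Python 3 (where str is already Unicode, so no raw_input decoding is needed); the variable names are mine, not from the original post:

```python
# Turn a string (including spaces) into a list of '\uXXXX' escape strings,
# one entry per character, as the question's expected output shows.
text = u"Ενα απλο"
escapes = ["\\u%04x" % ord(ch) for ch in text]
print(escapes[:4])  # ['\\u0395', '\\u03bd', '\\u03b1', '\\u0020']
```

Each element is now a plain 6-character string like `\u0395` that can be compared or rewritten independently, including the `\u0020` entries for spaces.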
{ "language": "en", "url": "https://stackoverflow.com/questions/33188609", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Random number generator that doesn't use rand()/srand() C functions I'm developing some library in C that can be used by various user applications. The library should be completely "transparent" - a user application can init it and finalize it, and it's not supposed to see any change in the running application. The problem is - I'm using the C srand()/rand() functions in the library initialization, which means that the library does affect the user's application - if a user generates random numbers, they will be affected by the fact that rand() was already called. So, can anyone point to some simple non-GPL alternative to the rand() random number generator in C? It doesn't have to be really strong - I'm not doing any crypto with the numbers. I was thinking of writing some small and really simple generator (something like take the time and XOR it with something and do something with some prime number and bla bla bla), but I was wondering if someone has a pointer to a more decent generator. A: Such a generator produces the next number by keeping some state and modifying the state every time you call the function. A function like this is called a pseudorandom number generator. An old method of creating a PRNG is the linear congruential generator, which is easy enough: static int rand_state; int rand(void) { rand_state = (rand_state * 1103515245 + 12345) & 0x7fffffff; return rand_state; } As you can see, this method allows you to predict the next number in the series if you know the previous number. There are more sophisticated methods. Various types of pseudorandom number generators have been designed for specific purposes. There are secure PRNGs which are slow but hard to predict even if you know how they work, and there are big PRNGs like Mersenne Twister which have nice distribution properties and are therefore useful for writing Monte Carlo simulations.
As a rule of thumb, a linear congruential generator is good enough for writing a game (how much damage does the monster deal) but not good enough for writing a simulation. There is a colorful history of researchers who have chosen poor PRNGs for their programs; the results of their simulations are suspect as a result. A: If C++ is also acceptable for you, have a look at Boost. http://www.boost.org/doc/libs/1_51_0/doc/html/boost_random/reference.html It does not only offer one generator, but several dozen, and gives an overview of speed, memory requirement and randomness quality.
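To see the statefulness the answer describes, here is the same linear congruential recurrence sketched in Python (the explicit seed parameter is my own addition; the multiplier and increment are the ones from the C snippet above):

```python
class LCG:
    """Minimal linear congruential generator mirroring the C snippet."""

    def __init__(self, seed=0):
        self.state = seed

    def rand(self):
        # Same update rule as the C version: state = (state*a + c) mod 2**31
        self.state = (self.state * 1103515245 + 12345) & 0x7FFFFFFF
        return self.state

gen = LCG(seed=42)
values = [gen.rand() for _ in range(3)]
```

Because the next value depends only on the stored state, two generators created with the same seed produce identical sequences, which is exactly why knowing one output lets you predict the rest.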
{ "language": "en", "url": "https://stackoverflow.com/questions/12897992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How can I have the same height for the color bar? I would like to plot some data, however the color bar is taller than the plot. How can I fix this? import numpy as np import matplotlib.pyplot as plt import scipy.interpolate fig, ax = plt.subplots() # Generate data: x = np.linspace(1, 1, 10) y = np.linspace(1,100, 10) for i in range(10, 100, 10): x = np.append(x, np.linspace(i, i, 10)) y = np.append(y, np.linspace(1, 100, 10)) z = np.random.uniform(-5, 5, size=100) # Set up a regular grid of interpolation points xi, yi = np.linspace(x.min(), x.max(), 100), np.linspace(y.min(), y.max(), 100) xi, yi = np.meshgrid(xi, yi) # Interpolate rbf = scipy.interpolate.Rbf(x, y, z, function='linear') zi = rbf(xi, yi) s = ax.imshow(zi, vmin=z.min(), vmax=z.max(), origin='lower', extent=[x.min(), x.max(), y.min(), y.max()]) plt.xlim([0,200]) #s = ax.scatter(x, y, c=z, marker = 's') plt.colorbar(mappable=s, ax=ax) plt.show() A: The easiest solution (for your example) is to remove the line plt.xlim([0,200]) But since you've put it there, I assume that you really want/need it there. So then, you have to manually adapt the height of the colorbar: cb = plt.colorbar(mappable=s, ax=ax) plt.draw() posax = ax.get_position() poscb = cb.ax.get_position() cb.ax.set_position([poscb.x0, posax.y0, poscb.width, posax.height]) Using the shrink argument of colorbar as @MaxNoe suggests might also do the trick. But you will have to fiddle around to get the right value.
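Besides removing the xlim call or repositioning the colorbar axes by hand, a common alternative (not mentioned in the answer above, so treat it as a separate suggestion) is to carve the colorbar axes out of the image axes with mpl_toolkits, which keeps the two heights locked together:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable

fig, ax = plt.subplots()
im = ax.imshow(np.random.uniform(-5, 5, size=(100, 100)), origin="lower")

# Steal a 5%-wide strip from the right of `ax` for the colorbar,
# so both axes always share the same vertical extent.
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.1)
fig.colorbar(im, cax=cax)
```

Because the divider lays out both axes from the same box, the colorbar tracks the image height even when the figure is resized.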
{ "language": "en", "url": "https://stackoverflow.com/questions/26897467", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: File_get_contents not evaluating to false when file does not exist I'm trying to test an exception in my code. public function testGetFileThrowsException(){ $this->expectException(FileNotFoundException::class); $file = "db.json"; $this->review->getData($file); } The "db.json" file doesn't exist. My goal is to have the getData() method throw the FileNotFoundException. Here is the getData() code: public function getData($path){ $contents = file_get_contents($path); if($contents === false){ throw new FileNotFoundException; } return $contents; } The problem is that instead of evaluating to false and throwing the exception, the file_get_contents function returns: 1) CompanyReviewTest::testGetFileThrowsException file_get_contents(db.json): failed to open stream: No such file or directory So the test doesn't run successfully. Any ideas on why this happens? A: file_get_contents() generates an E_WARNING level error (failed to open stream) which is what you'll want to suppress as you're already handling it with your exception class. You can suppress this warning by adding PHP's error control operator @ in front of file_get_contents(), example: <?php $path = 'test.php'; if (@file_get_contents($path) === false) { echo 'false'; die(); } echo 'true'; ?> The above echoes false; without the @ operator it returns both the E_WARNING and the echoed false. It may be the case that the warning is interfering with your throw, but without seeing the code for that it's hard to say. A: You have 2 solutions. The poor one is to hide the error like this: public function getData($path){ $contents = @file_get_contents($path); if($contents === false){ throw new FileNotFoundException; } return $contents; } Or check whether the file exists first (a better solution, I guess): public function getData($path){ if(file_exists($path) === false){ throw new FileNotFoundException; } return file_get_contents($path); }
{ "language": "en", "url": "https://stackoverflow.com/questions/44042711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Excel VBA insert row plus copy last row formula The macro below just inserts the row and changes the color for certain offsets. I need to copy the formulas from the previous row's cells in columns H, M, and N. Any ideas? Sub button() LastRow = ActiveSheet.Cells(Rows.Count, "D").End(xlUp).Row Range("D" & LastRow + 1).EntireRow.Insert With Range("D" & Rows.Count).End(xlUp).Offset(1) .Value = .Offset(-1).Value + 1 .Offset(, -1).Interior.ColorIndex = 0 .Offset(, -2).Interior.ColorIndex = 0 .Offset(, -3).Interior.ColorIndex = 0 End With End Sub A: So now it's working: Sub Prideti_produkta() LastRow = ActiveSheet.Cells(Rows.Count, "D").End(xlUp).Row Range("D" & LastRow + 1).EntireRow.Insert Range("H" & LastRow + 1).FillDown Range("K" & LastRow + 1).FillDown Range("M" & LastRow + 1).FillDown Range("N" & LastRow + 1).FillDown With Range("D" & Rows.Count).End(xlUp).Offset(1) .Value = .Offset(-1).Value + 1 .Offset(, -1).Interior.ColorIndex = 0 .Offset(, -2).Interior.ColorIndex = 0 .Offset(, -3).Interior.ColorIndex = 0 End With End Sub
{ "language": "en", "url": "https://stackoverflow.com/questions/60270416", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Apply Function on DataFrame Index What is the best way to apply a function over the index of a Pandas DataFrame? Currently I am using this verbose approach: pd.DataFrame({"Month": df.reset_index().Date.apply(foo)}) where Date is the name of the index and foo is the name of the function that I am applying. A: A lot of answers are returning the Index as an array, which loses information about the index name etc (though you could do pd.Series(index.map(myfunc), name=index.name)). It also won't work for a MultiIndex. The way that I worked with this is to use "rename": mix = pd.MultiIndex.from_tuples([[1, 'hi'], [2, 'there'], [3, 'dude']], names=['num', 'name']) data = np.random.randn(3) df = pd.Series(data, index=mix) print(df) num name 1 hi 1.249914 2 there -0.414358 3 dude 0.987852 dtype: float64 # Define a few dictionaries to denote the mapping rename_dict = {i: i*100 for i in df.index.get_level_values('num')} rename_dict.update({i: i+'_yeah!' for i in df.index.get_level_values('name')}) df = df.rename(index=rename_dict) print(df) num name 100 hi_yeah! 1.249914 200 there_yeah! -0.414358 300 dude_yeah! 0.987852 dtype: float64 The only trick with this is that your index needs to have unique labels b/w different multiindex levels, but maybe someone more clever than me knows how to get around that. For my purposes this works 95% of the time. A: You can convert an index using its to_series() method, and then either apply or map, according to your needs. ret = df.index.map(foo) # Returns pd.Index ret = df.index.to_series().map(foo) # Returns pd.Series ret = df.index.to_series().apply(foo) # Returns pd.Series All of the above can be assigned directly to a new or existing column of df: df["column"] = ret Just for completeness: pd.Index.map, pd.Series.map and pd.Series.apply all operate element-wise. I often use map to apply lookups represented by dicts or pd.Series. apply is more generic because you can pass any function along with additional args or kwargs. 
The differences between apply and map are further discussed in this SO thread. I don't know why pd.Index.apply was omitted. A: As already suggested by HYRY in the comments, Series.map is the way to go here. Just set the index to the resulting series. Simple example: df = pd.DataFrame({'d': [1, 2, 3]}, index=['FOO', 'BAR', 'BAZ']) df d FOO 1 BAR 2 BAZ 3 df.index = df.index.map(str.lower) df d foo 1 bar 2 baz 3 Index != Series As pointed out by @OP, the df.index.map(str.lower) call returns a numpy array. This is because dataframe indices are based on numpy arrays, not Series. The only way of making the index into a Series is to create a Series from it. pd.Series(df.index.map(str.lower)) Caveat The Index class now subclasses the StringAccessorMixin, which means that you can do the above operation as follows df.index.str.lower() This still produces an Index object, not a Series. A: Assuming that you want to make a column in your current DataFrame by applying your function "foo" to the index, you could write... df['Month'] = df.index.map(foo) To generate the series alone you could instead do ... pd.Series({x: foo(x) for x in df.index})
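Putting the pieces of the advice above together, here is a runnable sketch (the column name "lower" is illustrative, not from the original posts):

```python
import pandas as pd

df = pd.DataFrame({"d": [1, 2, 3]}, index=["FOO", "BAR", "BAZ"])

# Map directly over the index, or go through to_series() when a
# Series (with apply/map semantics and extra args) is wanted.
df["lower"] = df.index.map(str.lower)
as_series = df.index.to_series().apply(str.lower)
```

The first form is enough when the result is just assigned to a column; the second keeps the index name and lets you pass additional arguments through apply.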
{ "language": "en", "url": "https://stackoverflow.com/questions/20025325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "124" }
Q: Call component from another module not working Module to be shared exports: [ PatientClinicalTabComponent ], which is having a emitter patient id Module to be Imported imports: [ PatientModule ] Html Page <ng-template ngbNavContent> <app-patient-clinical-tab [patientId]='patientId'> </app-patient-clinical-tab> </ng-template> But it showing error when the project is build Error: projects/order/src/app/order-create-edit-tab/order-create-edit-tab.component.html:149:11 - error NG8001: 'app-patient-clinical-tab' is not a known element: * *If 'app-patient-clinical-tab' is an Angular component, then verify that it is part of this module. *If 'app-patient-clinical-tab' is a Web Component then add 'CUSTOM_ELEMENTS_SCHEMA' to the '@NgModule.schemas' of this component to suppress this message. 149 <app-patient-clinical-tab [patientId]='patientId'> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ projects/order/src/app/order-create-edit-tab/order-create-edit-tab.component.ts:15:16 15 templateUrl: './order-create-edit-tab.component.html', ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Error occurs in the template of component OrderCreateEditTabComponent. Error: projects/order/src/app/order-create-edit-tab/order-create-edit-tab.component.html:149:37 - error NG8002: Can't bind to 'patientId' since it isn't a known property of 'app-patient-clinical-tab'. * *If 'app-patient-clinical-tab' is an Angular component and it has 'patientId' input, then verify that it is part of this module. *If 'app-patient-clinical-tab' is a Web Component then add 'CUSTOM_ELEMENTS_SCHEMA' to the '@NgModule.schemas' of this component to suppress this message. 
*To allow any property add 'NO_ERRORS_SCHEMA' to the '@NgModule.schemas' of this component. 149 <app-patient-clinical-tab [patientId]='patientId'> ~~~~~~~~~~~~~~~~~~~~~~~ projects/order/src/app/order-create-edit-tab/order-create-edit-tab.component.ts:15:16 15 templateUrl: './order-create-edit-tab.component.html', ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Error occurs in the template of component OrderCreateEditTabComponent A: you have two options: *declare PatientClinicalTabComponent in PatientModule (and nowhere else) and just use it inside PatientModule *create a new module called PatientClinicalTabModule, declare PatientClinicalTabComponent inside PatientClinicalTabModule, and then import PatientClinicalTabModule inside PatientModule. This will solve your problem.
{ "language": "en", "url": "https://stackoverflow.com/questions/73862344", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What's the difference between overflow: auto and overflow: clip? Simple question: one element has overflow:auto and another element has overflow:clip. What is the difference? .some-div{ overflow: auto; } .another-div{ overflow: clip; } A: From the specification If the computed value of overflow on a block box is neither visible nor clip nor a combination thereof, it establishes an independent formatting context for its contents. The creation of a formatting context is the main difference. Here is a demo .box { border:2px solid; margin:10px; } .box div { float:left; width:50px; height:50px; background:blue; } <div class="box" style="overflow:auto"> <div></div> text </div> <div class="box" style="overflow:clip"> <div></div> text </div> Notice how in the second case, the div remains collapsed because no block formatting context is created to contain the float element.
{ "language": "en", "url": "https://stackoverflow.com/questions/72791248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: nHibernate statistics per session I know I can get global statistics for nHibernate using these techniques http://nhibernate.info/blog/2008/10/26/exploring-nhibernate-statistics-part-1-simple-data-fetching.html What I'm after is a way to get statistics for the current session in the current thread. EG, I want to know how many entities were loaded in my session, how many db queries were made, etc. I see Hibernating Rhinos can break down stats per session, so NH must be storing it in some form? dave A: Instead of using ISessionFactory.Statistics, just use ISession.Statistics. class Program { static void Main(string[] args) { ISession session = NHibernateHelper.GetSession(); var stats = session.Statistics; Console.WriteLine("Entity count: {0}", stats.EntityCount); Console.WriteLine("Collection count: {0}", stats.CollectionCount); Console.ReadLine(); } } Statistics at the session level are limited when compared with the session factory level though.
{ "language": "en", "url": "https://stackoverflow.com/questions/23525341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Regex to remove whitespace in between quotes but not between words inside the quotes? I am programming in Java. I am struggling to transform this: Text0 Text1 " Text2 Text3 Text4 " Text5 Text6 into this: Text0 Text1 "Text2 Text3 Text4" Text5 Text6 I have tried lookaheads and lookbehinds: (?<=\")\s+(\w*\s*\w*)\s+(?=\") manages to match all the text inside the quotes, but when switching to: (?<=\")\s+(\W*\S*\W*)\s+(?=\") I get an error. Not sure why. My short knowledge of regex limits me. Help would be appreciated. A: It's easier not to use (just) regex. Split the string on quotes (-1 to keep any trailing empty parts): String[] parts = str.split("\"", -1); Trim the odd-numbered elements: for (int i = 1; i < parts.length; i += 2) { parts[i] = parts[i].trim(); } Join the parts again: String newStr = String.join("\"", parts);
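The same split-on-quotes, trim-odd-parts, rejoin idea from the answer is easy to check in Python (a sketch, assuming the quotes in the input are balanced):

```python
def trim_inside_quotes(s):
    # After splitting on '"', parts at odd indices sit between quote
    # pairs; strip leading/trailing whitespace only from those.
    parts = s.split('"')
    parts[1::2] = [p.strip() for p in parts[1::2]]
    return '"'.join(parts)

print(trim_inside_quotes('Text0 Text1 " Text2 Text3 Text4 " Text5 Text6'))
# Text0 Text1 "Text2 Text3 Text4" Text5 Text6
```

Text outside the quotes, including the spaces between words inside the quotes, is left untouched; only the padding next to the quote characters is removed.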
{ "language": "en", "url": "https://stackoverflow.com/questions/47477247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: monaco editor theme can not be changed dynamically I want to update monaco editor theme on the fly, but I found it does not work: this.editorOptions = { ...this.editorOptions, readOnly: true, // this.readOnly, value: this.code, language: 'java', theme: 'vs' }; this.currentEditor.updateOptions(this.editorOptions); If I change the readOnly, it worked fine, but theme is NOT updated at all. the create logic is like this: this.editorOptions = { ...this.editorOptions, readOnly: this.readOnly, value: this.code, language: this.updatedType.toLowerCase(), //theme: 'vs-dark' theme: t === 'light' ? 'dv-light-theme' : 'dv-dark-theme' }; this.currentEditor = monaco.editor.create(this._editorContainer.nativeElement, this.editorOptions); Please help and show how you can update the theme on the fly dynamically. A: I just learned from https://blog.expo.dev/building-a-code-editor-with-monaco-f84b3a06deaf that to set the theme you call monaco.editor.setTheme('<theme-name>'). I was incorrectly calling setTheme on my editor instance.
{ "language": "en", "url": "https://stackoverflow.com/questions/69181035", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Connecting to W3Schools sample WebService via jQuery I am trying to connect W3Schools sample webservice via jQuery Ajax but it's not working for me. Here is the JS: function ConnectToWebService() { var pdata = "Celsius:123"; $.ajax({ type: "POST", dataType: "text", data:pdata, contentType: "application/text; charset=utf-8", url: "http://www.w3schools.com/webservices/tempconvert.asmx?op=CelsiusToFahrenheit", success: function (msg) { $('#divToBeWorkedOn').html(msg.d); }, error: function (e) { alert("could not connect to service"); } }); } A: The error on the page says $.mobile is undefined. Include the proper URL to where $.mobile is defined and try again. A: This line doesn't work: $.mobile.allowCrossDomainPages = false; If you take it off your javascript will work. Just so you know, I'm getting here that "could not connect to service". Next time insert some logs or alerts in your code to debug. I just put one before and one after the line that was not working to see if the ajax request was being sent and saw that this line was the problem. (In chrome ctrl+shift+c opens debug window, open console and you can see js logs (console.log). A lot better than alert for debug) Ps: For cross domain ajax call use jsonp, as Ehsan Sajjad commented: * *jQuery AJAX cross domain *Make cross-domain ajax JSONP request with jQuery Ps2: I never used this, but it might be useful: Cross-origin Ajax
{ "language": "en", "url": "https://stackoverflow.com/questions/25182486", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: .NET desktop app to run on all screen sizes I have to develop a .NET app using C# that runs on all screen sizes (8", 10", 14", 22", etc.). So whatever the screen size is, the app should work & display properly. How should I start with this? With this in mind I believe using WPF would be the best rather than WinForms (please correct me if I am wrong). I can give weight to each component & text size and handle that, but what about the actual window size? For that height & width I can't define a fixed number (like 300*250) or so. The window size should also be based on the screen size. Can anyone help me work this out? WPF or WinForms? A: Use relative size/location instead of absolute (example -> Use Grid.RowDefinition = */Auto, instead of fixed size, Use stackpanel, use dock panel) Automatic layout overview Resolution independent or monitor size independent WPF apps Same question on MSDN with links in answer Metro Apps are supposed to run on different form factors. You can look at Guidelines for the different form factors on metro UI. It will help in understanding the challenges, and how to plan/resolve these challenges.
{ "language": "en", "url": "https://stackoverflow.com/questions/10631374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is PhantomJS one process per browser instance just like Chrome or is it single process? Google Chrome works in a single process per browser instance mode. This is a problem if the number of browser instances goes up to a very large number. I wish to do a lot of testing with PhantomJS with many browser instances and am worried about this issue. A: PhantomJS is single process, just like node.js, and will never spawn something to process requests. Basically, everything is shared in the same instance (web pages, html ressources, ...) You can spawn custom process, using execFile/spawn modules.
{ "language": "en", "url": "https://stackoverflow.com/questions/20160251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Android Material Design on KitKat 4.4 I am starting to learn Android development. I read on the Google dev site that Material Design is only available on Android 5.0. I wonder why some applications on Android 4.4 have material design. I am using Android 4.4. Any guides or tips for me? Thanks. A: There is a design backward compatibility library that brings important material design components to Android 2.1 and above. See: http://android-developers.blogspot.fr/2015/05/android-design-support-library.html Generally speaking, Android always provides some support libraries to provide backward compatibility to previous versions of Android. You will find all you need here: https://developer.android.com/topic/libraries/support-library/features.html A: First, remember that Material Design isn't only code or components; it's a whole concept about how to design your app to get the best UI/UX. And yes, Android 5 and above have some pretty new features and components, some of which you can use on Android <5 via support libraries that Google or someone else provides. For example, look at these two libraries: com.android.support:appcompat-v7 com.android.support:design These are the most useful libraries from Google for getting material components into Android <5 development.
{ "language": "en", "url": "https://stackoverflow.com/questions/39394729", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to compare two URLs in java? Here's a simple problem - given two urls, is there some built-in method, or an Apache library that decides whether they are (logically) equal? For example, these two urls are equal: http://stackoverflow.com http://stackoverflow.com/ A: URL::equals reference URL urlOne = new URL("http://stackoverflow.com"); URL urlTwo = new URL("http://stackoverflow.com/"); if( urlOne.equals(urlTwo) ) { // .... } Note from docs - Two URL objects are equal if they have the same protocol, reference equivalent hosts, have the same port number on the host, and the same file and fragment of the file. Two hosts are considered equivalent if both host names can be resolved into the same IP addresses; else if either host name can't be resolved, the host names must be equal without regard to case; or both host names equal to null. Since hosts comparison requires name resolution, this operation is a blocking operation. Note: The defined behavior for equals is known to be inconsistent with virtual hosting in HTTP. So, instead you should prefer URI::equals reference as @Joachim suggested. A: While URI.equals() (as well as the problematic URL.equals()) does not return true for these specific examples, I think it's the only case where equivalence can be assumed (because there is no empty path in the HTTP protocol). The URIs http://stackoverflow.com/foo and http://stackoverflow.com/foo/ can not be assumed to be equivalent. Maybe you can use URI.equals() wrapped in a utility method that handles this specific case explicitly. 
A: The following may work for you - it validates that 2 urls are equal, allows the parameters to be supplied in different orders, and allows a variety of options to be configured, that being: * *Is the host case sensitive *Is the path case sensitive *Are query string parameters case sensitive *Are query string values case sensitive *Is the scheme case sensitive You can test it like so: class Main { public static void main(String[] args) { UrlComparer urlComparer = new UrlComparer(); expectResult(false, "key a case different", urlComparer.urlsMatch("//test.com?A=a&B=b", "//test.com?a=a&b=b")); expectResult(false, "key a case different", urlComparer.urlsMatch("https://WWW.TEST.COM?A=1&b=2", "https://www.test.com?b=2&a=1")); expectResult(false, "key a value different", urlComparer.urlsMatch("/test?a=2&A=A", "/test?a=A&a=2")); expectResult(false, "key a value different", urlComparer.urlsMatch("https://WWW.TEST.COM?A=a&b=2", "https://www.test.com?b=2&A=1")); expectResult(false, "null", urlComparer.urlsMatch("/test", null)); expectResult(false, "null", urlComparer.urlsMatch(null, "/test")); expectResult(false, "port different", urlComparer.urlsMatch("//test.com:22?A=a&B=b", "//test.com:443?A=a&B=b")); expectResult(false, "port different", urlComparer.urlsMatch("https://WWW.TEST.COM:8443", "https://www.test.com")); expectResult(false, "protocol different", urlComparer.urlsMatch("http://WWW.TEST.COM:2121", "https://www.test.com:2121")); expectResult(false, "protocol different", urlComparer.urlsMatch("http://WWW.TEST.COM?A=a&b=2", "https://www.test.com?b=2&A=a")); expectResult(true, "both null", urlComparer.urlsMatch(null, null)); expectResult(true, "host and scheme different case", urlComparer.urlsMatch("HTTPS://WWW.TEST.COM", "https://www.test.com")); expectResult(true, "host different case", urlComparer.urlsMatch("https://WWW.TEST.COM:443", "https://www.test.com")); expectResult(true, "identical urls", urlComparer.urlsMatch("//test.com:443?A=a&B=b", 
"//test.com:443?A=a&B=b")); expectResult(true, "identical urls", urlComparer.urlsMatch("/test?a=A&a=2", "/test?a=A&a=2")); expectResult(true, "identical urls", urlComparer.urlsMatch("https://www.test.com", "https://www.test.com")); expectResult(true, "parameter order changed", urlComparer.urlsMatch("https://www.test.com?a=1&b=2&c=522%2fMe", "https://www.test.com?c=522%2fMe&b=2&a=1")); expectResult(true, "parmeter order changed", urlComparer.urlsMatch("https://WWW.TEST.COM?a=1&b=2", "https://www.test.com?b=2&a=1")); } public static void expectResult(boolean expectedResult, String msg, boolean result) { if (expectedResult != result) throw new RuntimeException(msg); } } UrlComparer.java import java.net.URI; import java.util.HashMap; import java.util.List; import java.util.Map; import java.util.Objects; import java.util.TreeMap; import org.apache.http.NameValuePair; import org.apache.http.client.utils.URLEncodedUtils; public class UrlComparer { private boolean hostIsCaseSensitive = false; private boolean pathIsCaseSensitive = true; private boolean queryStringKeysAreCaseSensitive = true; private boolean queryStringValuesAreCaseSensitive = false; private boolean schemeIsCaseSensitive = false; public boolean urlsMatch(String url1, String url2) { try { if (Objects.equals(url1, url2)) return true; URI uri1 = new URI(url1); URI uri2 = new URI(url2); // Compare Query String Parameters Map<String, String> mapParams1 = getQueryStringParams(uri1); Map<String, String> mapParams2 = getQueryStringParams(uri2); if (!mapsAreEqual(mapParams1, mapParams2, getQueryStringValuesAreCaseSensitive())) return false; // Compare scheme (http or https) if (!stringsAreEqual(uri1.getScheme(), uri2.getScheme(), getSchemeIsCaseSensitive())) return false; // Compare host if (!stringsAreEqual(uri1.getHost(), uri2.getHost(), getHostIsCaseSensitive())) return false; // Compare path if (!stringsAreEqual(uri1.getPath(), uri2.getPath(), getPathIsCaseSensitive())) return false; // Compare ports if 
(!portsAreEqual(uri1, uri2)) return false; return true; } catch (Exception e) { return false; } } protected Map<String, String> getQueryStringParams(URI uri) { Map<String, String> result = getListAsMap(URLEncodedUtils.parse(uri, "UTF-8"), getQueryStringKeysAreCaseSensitive()); return result; } protected boolean stringsAreEqual(String s1, String s2, boolean caseSensitive) { // Eliminate null cases if (s1 == null || s2 == null) { if (s1 == s2) return true; return false; } if (caseSensitive) { return s1.equals(s2); } return s1.equalsIgnoreCase(s2); } protected boolean mapsAreEqual(Map<String, String> map1, Map<String, String> map2, boolean caseSensitiveValues) { for (Map.Entry<String, String> entry : map1.entrySet()) { String key = entry.getKey(); String map1value = entry.getValue(); String map2value = map2.get(key); if (!stringsAreEqual(map1value, map2value, caseSensitiveValues)) return false; } for (Map.Entry<String, String> entry : map2.entrySet()) { String key = entry.getKey(); String map2value = entry.getValue(); String map1value = map2.get(key); if (!stringsAreEqual(map1value, map2value, caseSensitiveValues)) return false; } return true; } protected boolean portsAreEqual(URI uri1, URI uri2) { int port1 = uri1.getPort(); int port2 = uri2.getPort(); if (port1 == port2) return true; if (port1 == -1) { String scheme1 = (uri1.getScheme() == null ? "http" : uri1.getScheme()).toLowerCase(); port1 = scheme1.equals("http") ? 80 : 443; } if (port2 == -1) { String scheme2 = (uri2.getScheme() == null ? "http" : uri2.getScheme()).toLowerCase(); port2 = scheme2.equals("http") ? 
80 : 443; } boolean result = (port1 == port2); return result; } protected Map<String, String> getListAsMap(List<NameValuePair> list, boolean caseSensitiveKeys) { Map<String, String> result; if (caseSensitiveKeys) { result = new HashMap<String, String>(); } else { result = new TreeMap<String, String>(String.CASE_INSENSITIVE_ORDER); } for (NameValuePair param : list) { if (caseSensitiveKeys) { if (!result.containsKey(param.getName())) result.put(param.getName(), param.getValue()); } else { result.put(param.getName(), param.getValue()); } } return result; } public boolean getSchemeIsCaseSensitive() { return schemeIsCaseSensitive; } public void setSchemeIsCaseSensitive(boolean schemeIsCaseSensitive) { this.schemeIsCaseSensitive = schemeIsCaseSensitive; } public boolean getHostIsCaseSensitive() { return hostIsCaseSensitive; } public void setHostIsCaseSensitive(boolean hostIsCaseSensitive) { this.hostIsCaseSensitive = hostIsCaseSensitive; } public boolean getPathIsCaseSensitive() { return pathIsCaseSensitive; } public void setPathIsCaseSensitive(boolean pathIsCaseSensitive) { this.pathIsCaseSensitive = pathIsCaseSensitive; } public boolean getQueryStringKeysAreCaseSensitive() { return queryStringKeysAreCaseSensitive; } public void setQueryStringKeysAreCaseSensitive(boolean queryStringKeysAreCaseSensitive) { this.queryStringKeysAreCaseSensitive = queryStringKeysAreCaseSensitive; } public boolean getQueryStringValuesAreCaseSensitive() { return queryStringValuesAreCaseSensitive; } public void setQueryStringValuesAreCaseSensitive(boolean queryStringValuesAreCaseSensitive) { this.queryStringValuesAreCaseSensitive = queryStringValuesAreCaseSensitive; } } A: sameFile public boolean sameFile(URL other)Compares two URLs, excluding the fragment component. Returns true if this URL and the other argument are equal without taking the fragment component into consideration. Parameters: other - the URL to compare against. 
Returns: true if they reference the same remote object; false otherwise. Also, please go through this link: http://download.oracle.com/javase/6/docs/api/java/net/URL.html#sameFile(java.net.URL) As I am unable to add a comment (the browser is throwing a JavaScript error), I am adding my comment here; apologies for the inconvenience. //this is what I suggested >URL url1 = new URL("http://stackoverflow.com/foo"); >URL url2 = new URL("http://stackoverflow.com/foo/"); >System.out.println(url1.sameFile(url2)); // this is suggested by Joachim Sauer >URI uri = new URI("http://stackoverflow.com/foo/"); >System.out.println(uri.equals("http://stackoverflow.com/foo")); // Both are giving the same result, so Joachim Sauer, please check once.
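For readers outside Java: the normalization rules the UrlComparer above implements (case-insensitive scheme and host, default ports, order-insensitive query parameters) can be sketched compactly with the Python standard library. This is an illustrative sketch, not part of the answers above; the helper names are made up, and it ignores edge cases such as repeated query keys.

```python
from urllib.parse import urlsplit, parse_qsl

# Hypothetical helper: compare two URLs after normalizing the components
# that are conventionally case- or order-insensitive.
DEFAULT_PORTS = {"http": 80, "https": 443}

def normalize(url):
    parts = urlsplit(url)
    scheme = (parts.scheme or "http").lower()
    host = (parts.hostname or "").lower()
    port = parts.port or DEFAULT_PORTS.get(scheme)
    path = parts.path or "/"                   # treat an empty path as "/"
    query = frozenset(parse_qsl(parts.query))  # parameter order is irrelevant
    return (scheme, host, port, path, query)

def urls_match(a, b):
    return normalize(a) == normalize(b)
```

With this sketch, http://stackoverflow.com and http://stackoverflow.com/ compare equal, while /foo and /foo/ remain distinct, matching the caveat in the second answer.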
{ "language": "en", "url": "https://stackoverflow.com/questions/5402485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Is this correct behaviour of RecyclerView on Android 7.0+? I have a list of data with size = 7, but when I try to display all the elements, it displays only 6, and in debug mode the last position bound in the onBindViewHolder method is 5. I thought there was some bug with my list, so I added an 8th element, but it still displays only 6 elements. On Android lower than 7.0 everything works properly. Adapter public class BottomNavigationAdapter extends RecyclerView.Adapter<BottomNavigationAdapter.MyViewHolder> { private ArrayList<BottomNavigationData> bottomNavigationDatas = new ArrayList<>(); private Context mContext; public BottomNavigationAdapter(ArrayList<BottomNavigationData> bnl, Context context) { bottomNavigationDatas = bnl; mContext = context; } public class MyViewHolder extends RecyclerView.ViewHolder { ImageView imageView; TextView txtview; public MyViewHolder(View view) { super(view); imageView = (ImageView) view.findViewById(R.id.ivBottomItemIcon); txtview = (TextView) view.findViewById(R.id.tvBottomTitle); } } @Override public BottomNavigationAdapter.MyViewHolder onCreateViewHolder(ViewGroup parent, int viewType) { View itemView = LayoutInflater.from(parent.getContext()).inflate(R.layout.bottom_navigation_item, parent, false); return new MyViewHolder(itemView); } @Override public void onBindViewHolder(BottomNavigationAdapter.MyViewHolder holder, int position) { BottomNavigationData currentNavigationData = bottomNavigationDatas.get(position); String image_url = Constants.SERVER_HOST + "/" + currentNavigationData.image; Glide.with(mContext).load(image_url).into(holder.imageView); holder.txtview.setText(currentNavigationData.name); holder.imageView.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { } }); } @Override public int getItemCount() { return (bottomNavigationDatas != null) ?
bottomNavigationDatas.size() : 0; } recycle view initialisation rvBottomNavigation = (RecyclerView) view.findViewById(R.id.rvBottomNavigation); LinearLayoutManager horizontalLayoutManager = new LinearLayoutManager(getActivity(), LinearLayoutManager.HORIZONTAL, false); rvBottomNavigation.setLayoutManager(horizontalLayoutManager); bottomNavigationAdapter = new BottomNavigationAdapter(bottomNavigationDatas, getActivity().getApplicationContext()); rvBottomNavigation.setAdapter(bottomNavigationAdapter); bottomNavigationAdapter.notifyDataSetChanged(); my output on android 7.0+: output on lower than 7.0: So as you can see, i have one more item at the end. Code is exactly the same. How can i fix it? EDIT added full code of adapter item.xml <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="wrap_content" android:layout_height="match_parent" android:gravity="center" android:orientation="vertical" android:layout_marginTop="7dp" android:layout_marginLeft="@dimen/small_margin" android:layout_marginRight="@dimen/small_margin" > <ImageView android:id="@+id/ivBottomItemIcon" android:layout_gravity="center_horizontal" android:layout_width="30dp" android:layout_height="30dp" /> <TextView android:id="@+id/tvBottomTitle" android:textColor="@android:color/white" android:layout_gravity="center_horizontal" android:gravity="center" android:layout_width="wrap_content" android:layout_height="match_parent" /> </LinearLayout> my_fragment.xml <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" android:orientation="vertical" tools:context=".activities.GiftsActivity"> <mfp.avdm.chudobox.custom.HeaderGridView android:id="@+id/gvGoods" android:layout_width="match_parent" android:layout_height="match_parent" android:layout_above="@+id/footer" android:fitsSystemWindows="true" android:horizontalSpacing="3dp" 
android:numColumns="2" android:scrollbars="none" android:stretchMode="columnWidth" android:verticalSpacing="3dp" /> <LinearLayout android:id="@+id/footer" android:layout_width="match_parent" android:layout_height="60dp" android:layout_alignParentBottom="true" android:background="@drawable/toolbare"> <HorizontalScrollView android:layout_width="match_parent" android:layout_height="wrap_content" android:scrollbars="none"> <android.support.v7.widget.RecyclerView android:id="@+id/rvBottomNavigation" android:layout_gravity="center_vertical" android:layout_width="wrap_content" android:layout_height="wrap_content" /> </HorizontalScrollView> </LinearLayout> </RelativeLayout> A: The code seems fine, but I am sure this is not the way to do it: * *You should null-check your arrayList in your activity itself, then proceed to set the adapter. *For the adapter, you should provide an activityContext rather than the applicationContext; adapters often hold listeners to open activities or to show toasts, and in that case it should be the activityContext, not the applicationContext. *For your problem, I suggest setting the layout manager after setting the adapter. In this way, you won't need to call notifyDataSetChanged(). Updated part In your xml, you are adding the RecyclerView inside a horizontal scrollview, which is the reason it's giving trouble. Take your recyclerview outside the horizontal scrollview because you are setting the LinearLayoutManager with horizontal orientation in your code anyway. Then set both height and width of the recyclerview to match_parent. Also, if you don't want to add anything else in the footer view, you can remove the linearLayout and set your recyclerview as the footer with width=match_parent and height=60dp. Don't forget to delete the horizontalscrollview code.
{ "language": "en", "url": "https://stackoverflow.com/questions/47809274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Disposal of Shared members in a ServicedComponent with ActivationOption.Server For several reasons I need to create a COM+ component in .Net Framework 4. The intent is to host the component in its own process (dllhost.exe), hence the usage of ActivationOption.Server. My component code needs to persist data between object activations, which is maintained by a worker thread. This worker thread and its data are held in static (shared) members of my base class. The shared data is independent of the caller, its security context, transactions, etc. Also, the worker thread performs background processing on the data. I need to clean up the data and orderly terminate the worker thread when the dllhost process is disposed. Since there are no static (shared) destructors, I don't know how to do it. Is there anything I could implement while inheriting ServicedComponent? Any other ideas? Thank you. Here's some code to start: Imports System.EnterpriseServices <Assembly: ApplicationName("MySender")> <Assembly: ApplicationActivation(ActivationOption.Server)> <ClassInterface(ClassInterfaceType.None), ProgId("MySender.Sender")> _ <Transaction(EnterpriseServices.TransactionOption.NotSupported)> _ Public Class Sender Inherits ServicedComponent Implements SomeLib.IMsgSender Shared worker As myWorker Shared sync As New Object Public Sub MyInstanceMethod(msg as string) Implements SomeLib.IMsgSender.SendMessage SyncLock sync If worker Is Nothing Then worker = New myWorker worker.StartThread() End If End SyncLock worker.Process(msg) End Sub 'Something like this does not exist!' Shared Sub Dispose() SyncLock sync If worker IsNot Nothing Then worker.StopThread() End If End SyncLock End Sub End Class A: The AppDomain.ProcessExit event will fire before unloading the domain. 
If the code to run doesn't take too long, it could be used like this: Imports System.EnterpriseServices <Assembly: ApplicationName("MySender")> <Assembly: ApplicationActivation(ActivationOption.Server)> <ClassInterface(ClassInterfaceType.None), ProgId("MySender.Sender")> _ <Transaction(EnterpriseServices.TransactionOption.NotSupported)> _ Public Class Sender Shared Sub New AddHandler AppDomain.CurrentDomain.ProcessExit, AddressOf MyDisposalCode End Sub '.... Shared Sub MyDisposalCode(sender as Object, e as EventArgs) 'My disposal code End Sub End Class It's important to notice that .Net will enforce a 2 second timeout on this code.
{ "language": "en", "url": "https://stackoverflow.com/questions/32659436", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: String Is Empty When Read So basically I call a MYSQL query that grabs a secret key that was generated and saved to the database. Then it sets a string called "ServerCryptKey" to the output of the query which should be "yQnK9xDCxaLIGwzpsKEXeJR2Iz5ZoHnWyLHHikkQv5zbC8B5Sf36ZU9HteHSW5Ov". $SQLGetUsers = $odb -> query("SELECT * FROM settings"); while ($getInfo = $SQLGetUsers -> fetch(PDO::FETCH_ASSOC)) { $NewServerHash = $getInfo['newhash']; } I can call the string fine and it will read the above ^, but whenever I have it call it from this function it reads nothing and shows it's an empty string... function encrypt($data) { $secret = $ServerCryptKey; //Generate a key from a hash $key = md5(utf8_encode($secret), true); //Take first 8 bytes of $key and append them to the end of $key. $key .= substr($key, 0, 8); //Pad for PKCS7 $blockSize = mcrypt_get_block_size('tripledes', 'ecb'); $len = strlen($data); $pad = $blockSize - ($len % $blockSize); $data .= str_repeat(chr($pad), $pad); //Encrypt data $encData = mcrypt_encrypt('tripledes', $key, $data, 'ecb'); return base64_encode($encData); } function decrypt($data) { $secret = $ServerCryptKey; //Generate a key from a hash $key = md5(utf8_encode($secret), true); //Take first 8 bytes of $key and append them to the end of $key. $key .= substr($key, 0, 8); $data = base64_decode($data); $data = mcrypt_decrypt('tripledes', $key, $data, 'ecb'); $block = mcrypt_get_block_size('tripledes', 'ecb'); $len = strlen($data); $pad = ord($data[$len-1]); return substr($data, 0, strlen($data) - $pad); } If anyone can help me figure out why this doesn't grab the string, please let me know, thanks!
{ "language": "en", "url": "https://stackoverflow.com/questions/42449277", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Angular bootstrap button click function I am using a simple form of my template. In the form a validation is already added; I just need to check when the user clicks the button after all required fields are filled. How is it possible? <section class="contact-area pb-80" style="margin-top:10%"> <div class="container"> <div class="section-title"> <h2>Your Information</h2> <div class="bar"></div> <p>Please provide the correct information we will contact you with your information.</p> </div> <div class="row h-100 justify-content-center align-items-center"> <div class="col-lg-6 col-md-12"> <img src="assets/img/1.png" alt="image"> </div> <div class="col-lg-6 col-md-12"> <form id="contactForm"> <div class="row"> <div class="col-lg-12 col-md-12"> <div class="form-group"> <input type="text" name="name" id="name" class="form-control" required placeholder="Enter your name"> </div> </div> <div class="col-lg-12 col-md-12"> <div class="form-group"> <input type="email" name="email" id="email" class="form-control" required placeholder="Enter your email"> </div> </div> <div class="col-lg-12 col-md-6"> <div class="form-group"> <input type="text" name="phone_number" id="phone_number" required class="form-control" placeholder="Enter your number (Whats app)"> </div> </div> <div class="col-lg-12 col-md-6"> <div class="form-group"> <input type="text" name="msg_subject" id="msg_subject" class="form-control" required placeholder="Enter your skype name"> </div> </div> <div class="col-lg-12 col-md-12"> <button type="submit" class="btn btn-primary">Send Message</button> </div> </div> </form> </div> </div> </div> </section> This is my code. I simply need to check when the user clicks Send Message, but only after all fields are filled in the form. Validation is working fine, but the problem is: how can I check the button? I have created a simple function like this: submit(){ console.log('submit'); } If I add (click)="submit()" on the button, then every time the user clicks it calls the function.
I want this function to run only when all the fields are filled. A: Inner template: <form #myForm="ngForm"> <input ngModel name="name" type="text" required /> <input ngModel name="email" type="email" required /> <input ngModel name="phone-number" type="text" required /> <button (click)="onSubmit(myForm)">Submit</button> </form> Inner .ts: onSubmit(form: NgForm) { console.log(form.invalid); } Be sure that you have imported FormsModule into your module: import { FormsModule } from '@angular/forms'; @NgModule({ // ... imports: [..., FormsModule, ...], // ... }) A: Important: import { FormsModule } from '@angular/forms'; into your app-module and add it to the imports array. @NgModule({ imports: [FormsModule], }) Create a template variable in the form element like myForm given below, and on the button tag make use of the property binding [disabled]="!myForm.form.valid". Now the Submit button will only work when each form element is filled. Template file: <form #myForm="ngForm" method="POST" (ngSubmit)="submitForm(myForm)"> <div> <label>Name</label> <input name="name" type="text" required ngModel/> </div> <div> <label>Email</label> <input name="email" type="email" required ngModel/> </div> <div> <label>Phone no</label> <input name="phone" type="number" required ngModel/> </div> <button type="submit" [disabled]="!myForm.form.valid">Submit</button> <pre>{{myForm.value | json}}</pre> </form> You can find my code snippet on StackBlitz
{ "language": "en", "url": "https://stackoverflow.com/questions/60922131", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What is the most efficient way to get a list of locations sorted by proximity, given a location? For example, say I have a database of locations (latitude/longitude) and a point. How can I grab the top 25 nearest locations from the database of locations? Also, is there a library, or a resource where I can read up on various geospatial operations like the above? Thanks! A: What you want to do sounds like a nearest neighbour search. K-d trees are an efficient data structure to achieve this. The CGAL library has spatial searching functions if you're looking for a library for C++.
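Absent a spatial index, the simplest correct approach is a brute-force scan: compute the great-circle distance to every row and keep the k smallest. The sketch below is illustrative only (the function names are made up, and a k-d tree or a database-side geospatial index, as the answer suggests, will scale far better):

```python
import heapq
import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(a, b):
    """Distance in km between two (lat, lon) points given in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(h))

def nearest(point, locations, k=25):
    """Return the k locations closest to point, nearest first (O(n log k))."""
    return heapq.nsmallest(k, locations, key=lambda loc: haversine_km(point, loc))
```

heapq.nsmallest avoids sorting the whole table when k is small relative to the number of rows; for repeated queries over a large static dataset, building a k-d tree once amortizes better.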
{ "language": "en", "url": "https://stackoverflow.com/questions/7325068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Grouping data using a value from a deep subarray How can I assign first level keys using a column value from the third level of a multidimensional array? My input: $array = [ [ ["ID" => 2, "vendor_id" => "37", "order_id" => 776], ], [ ["ID" => 2, "vendor_id" => "37", "order_id" => 786], ] ]; My current output is like this: array(1) { [787]=> array(2) { [0]=> array(40) { ["ID"]=> string(1) "1" ["vendor_id"]=> string(2) "37" ["order_id"]=> string(3) "776" } [1]=> array(40) { ["ID"]=> string(1) "2" ["vendor_id"]=> string(2) "37" ["order_id"]=> string(3) "787" } } } I want to group the value of order_id separately as a key - the end result would look like this: array(1) { [776]=> array(2) { [0]=> array(40) { ["ID"]=> string(1) "2" ["vendor_id"]=> string(2) "37" ["order_id"]=> string(3) "776" } } [787]=> array(2) { [0]=> array(40) { ["ID"]=> string(1) "2" ["vendor_id"]=> string(2) "37" ["order_id"]=> string(3) "787" } } } A: Get the current item or use reset and extract the entire columns indexing by order_id: $result = array_column(current($array), null, 'order_id'); If there could be multiple arrays, then just loop and append: $result = []; foreach($array as $v) { $result += array_column($v, null, 'order_id'); } A: you can use that function //$array is your input array //$mergedArray is the result wanted $mergedArray = array_reduce($array, function($acc, $val) { foreach ($val as $order) { $acc[$order['order_id']] = $order; } return $acc; }, []); you can try it on http://sandbox.onlinephpfunctions.com/ <?php $array = [ 787 => [ 0 => [ "ID" => 1, "vendor_id" => "37", "order_id" => 776], 1 => [ "ID" => 2, "vendor_id" => "37", "order_id" => 787], 2 => [ "ID" => 1, "vendor_id" => "37", "order_id" => 790], ], 734 => [ 0 => [ "ID" => 1, "vendor_id" => "37", "order_id" => 722], 1 => [ "ID" => 2, "vendor_id" => "37", "order_id" => 735], 2 => [ "ID" => 1, "vendor_id" => "37", "order_id" => 734], ], ]; $t = array_reduce($array, function($acc, $val) { foreach ($val as $order) 
{ $acc[$order['order_id']] = $order; } return $acc; }, []); var_dump($t); A: The other answers do not maintain the multi-level structure described as your desired output. The order_id values must be used as first level keys, and the second level keys must be incremented/indexed as they are encountered. This will allow for multiple entries which share the same first level key. foreach() inside of a foreach(): (Demo) $result = []; foreach ($array as $group) { foreach ($group as $row) { $result[$row['order_id']][] = $row; } } var_export($result); array_reduce() with foreach(): (Demo) var_export( array_reduce( $array, function ($carry, $group) { foreach ($group as $row) { $carry[$row['order_id']][] = $row; } return $carry; } ) );
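For comparison only (this is not part of the PHP answers above): the nested-loop grouping idea translates directly to other languages. A Python sketch with a made-up function name:

```python
from collections import defaultdict

# Group rows from a list of row-lists by their "order_id" value,
# mirroring the foreach-inside-foreach answer above.
def group_by_order_id(groups):
    result = defaultdict(list)
    for group in groups:
        for row in group:
            result[row["order_id"]].append(row)
    return dict(result)
```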
{ "language": "en", "url": "https://stackoverflow.com/questions/59289636", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: SharePoint - Load a form's view based on user I am using a data connection list to load desired views in an InfoPath form on SharePoint. I have a permission list with 2 columns: usernames and control group. My form on the main list loads a specific view based on what the username and group are in the permission list. You have to filter out the group based on the form's username() function to match the username column and set that as a condition to (on form load) change it to a specific view. All this works, but the problem comes in when you have a user with multiple control groups. The filter only returns the first instance it finds. I can't think of a way to fix this. Maybe load the other list as a repeating table into the form, but then how would I reference that table in the conditions of a form load rule? Or is there a way to get a field filter to look past the first item it finds? Update: I forgot to mention that I have to use a field to hold your filtered username:id:group aka group[title=username()] and then use that in the form load conditions. I think this is where the problem is, as this filter is what doesn't store all instances of the user's id from the control list, but only the first. SharePoint 2010 with forms created in InfoPath 2010 A: Are you querying the data from InfoPath or using Visual Studio? If you are querying in InfoPath, check the condition so that Display name matches username(), and then query the data.
{ "language": "en", "url": "https://stackoverflow.com/questions/20505189", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can I remove space between children in the row? I would like to remove the space between the icon and the text. How can I remove space between children in the Row? Widget topSection() { return Row( mainAxisAlignment: MainAxisAlignment.end, children: [ Padding( padding: EdgeInsets.only(left: 20.0, top: 180), child: MaterialButton( onPressed: () {}, color: Colors.black, textColor: Colors.white, child: Icon( Icons.done, size: 24, ), shape: CircleBorder(), )), Expanded( child: Padding( padding: EdgeInsets.only(top: 180), child: Column( crossAxisAlignment: CrossAxisAlignment.start, children: <Widget>[ Padding( padding: EdgeInsets.only(left: 4), child: Text("Wellness Coaching", style: TextStyle( fontSize: 18, fontFamily: "LatoR", fontWeight: FontWeight.w900)), ), Padding( padding: EdgeInsets.only(left: 4), child: Text("Connect to your data", style: TextStyle(fontSize: 18, fontFamily: "LatoR")), ) ], ), )), ], ); } A: mainAxisAlignment: MainAxisAlignment.end, Try deleting this line.
{ "language": "en", "url": "https://stackoverflow.com/questions/62502238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-3" }
Q: How to use pillow module in pyscript to open screenshot using an html onclick button? <button id="button" type = "button" class="btn btn-primary" pys-onClick="run_python"> NEXT </button> <py-script> from PIL import Image op = Element("output") def run_python(*args,**kwargs): image = Image.open("home/saikumar/Desktop/screenshot/selenium1.jpg") image.show() </py-script> I am trying to open a screenshot image which is present in my local directory using the PIL module by clicking a button, but I am unable to open it and am getting a "file not found" error. I need to open the screenshot by clicking the NEXT button. Error case: Uncaught PythonError: Traceback (most recent call last): File "<exec>", line 4, in run_python File "/lib/python3.10/site-packages/PIL/Image.py", line 3068, in open fp = builtins.open(filename, "rb") FileNotFoundError: [Errno 44] No such file or directory: 'home/saikumar/Desktop/screenshot/selenium1.jpg' at new_error (pyodide.asm.js:14:238191) at pyodide.asm.wasm:0xedbcb at pyodide.asm.wasm:0xedccc at Module._pythonexc2js (pyodide.asm.js:14:932707) at Module.callPyObjectKwargs (pyproxy.gen.ts:374:12) at Module.callPyObject (pyproxy.gen.ts:384:17) at PyProxyClass.apply (pyproxy.gen.ts:1145:19) at Object.apply (pyproxy.gen.ts:1022:18) new_error @ pyodide.asm.js:14 $wrap_exception @ pyodide.asm.wasm:0xedbcb $pythonexc2js @ pyodide.asm.wasm:0xedccc Module._pythonexc2js @ pyodide.asm.js:14 Module.callPyObjectKwargs @ pyproxy.gen.ts:374 Module.callPyObject @ pyproxy.gen.ts:384 apply @ pyproxy.gen.ts:1145 apply @ pyproxy.gen.ts:1022
{ "language": "en", "url": "https://stackoverflow.com/questions/73622503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: KeyEvent.KEYCODE_VOLUME_UP(DOWN) registering double hits I have a simple app where I do the following: public void onClick(View v){ switch(v.getId()) { case R.id.buttonup: onButtonUp(); break; case R.id.buttondown: onButtonDown(); break; } } public boolean dispatchKeyEvent(KeyEvent event){ int keyCode = event.getKeyCode(); switch (keyCode) { case KeyEvent.KEYCODE_VOLUME_UP: onButtonUp(); return true; case KeyEvent.KEYCODE_VOLUME_DOWN: onButtonDown(); return true; default: return super.dispatchKeyEvent(event); } } void onButtonUp(){ increment_some_static_class_variable; } void onButtonDown(){ decrement_some_static_class_variable; } The thing is, whenever I press the volume buttons, the onButtonUp and onButtonDown functions are called twice. This does not happen when I press on-screen buttons (handled in onClick(View)). I didn't find anybody having this issue so I am asking the folks here. I am new to Android and this is my first application. Using Log, I found that both calls to onButtonUp and onButtonDown are coming from the dispatchKeyEvent function. What could be wrong here? I hope I explained the problem well. Suggestions/solutions are most welcome. A: KeyEvent can represent multiple actions, specifically both ACTION_DOWN and ACTION_UP. Since you aren't checking the action in your dispatchKeyEvent callback you are calling your button click methods for both DOWN and UP events on the volume buttons. Try like this: public boolean dispatchKeyEvent(KeyEvent event){ int keyCode = event.getKeyCode(); if(event.getAction() == KeyEvent.ACTION_DOWN){ switch (keyCode) { case KeyEvent.KEYCODE_VOLUME_UP: onButtonUp(); return true; case KeyEvent.KEYCODE_VOLUME_DOWN: onButtonDown(); return true; } } return super.dispatchKeyEvent(event); }
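The underlying pattern is language-agnostic: a physical key press produces two events (press and release), and a handler that ignores the action field fires twice per press. A minimal Python sketch of the idea (illustrative only; the constants and names below are made up, not Android APIs):

```python
# A key event carries both a code and an action: DOWN when the key is
# pressed, UP when it is released. Filtering on one action fires once
# per press instead of twice.
ACTION_DOWN, ACTION_UP = 0, 1

def dispatch(events, handlers):
    """Invoke a handler once per press: react to ACTION_DOWN, ignore ACTION_UP."""
    for key, action in events:
        if action != ACTION_DOWN:
            continue  # the release half of the same press
        if key in handlers:
            handlers[key]()
```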
{ "language": "en", "url": "https://stackoverflow.com/questions/15778008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What are the next steps after compiling PYBIND11_MODULE in a C++ file I was trying to use pybind11 to wrap the following small C++ test programme into a Python module, so I can call the test C++ function from Python files. My problem is: while the C++ file compiled successfully, I have no clue as to what steps to take next to import the newly created module file in Python. I tried to run "from example import add" in a test Python file in Spyder but received an error message saying there is no module named example. I'm using Windows 10 (x64), Python 3.7 and Visual Studio 2017 Community. Can someone please help? Thank you very much! #include <iostream> //#include <Aspose.Cells.h> #include <pybind11/pybind11.h> void print(const char*); int add(int i, int j) { return i + j; } PYBIND11_MODULE(example, m) { m.doc() = "pybind11 example plugin"; // optional module docstring m.def("add", &add, "A function which adds two numbers"); } int main() { const char *x = "C Plus plus is wonderful."; char *z; char b = 'z'; z = &b; int num = 10; int* a = 0; print(x); } void print(const char* z) { std::cout << "pointer z is" << z << "\n"; std::cin.get(); } UPDATE: I followed Stuart's suggestion below when building my test C++ programme. I made two attempts: in the first attempt, I changed Target Extension to ".pyd"; in the second attempt, I kept Target Extension as "dll".
In both attempts, I received the same error message from Visual Studio, which seems to suggest that the DLL file being built cannot be started (as shown in the screenshot that immediately follows) Error Messages for Starting DLL Programme However, the actual building of the DLL file seemed successful, as I can see one DLL file and one Python Extension Module file, with filenames and paths listed as follows: C:\Users\rmili\source\repos\ConsoleApplication5\x64\Debug\ConsoleApplication5.dll and C:\Users\rmili\source\repos\ConsoleApplication5\x64\Debug\ConsoleApplication5 The Visual Studio output message upon building is pasted at the end. My problem is: I created a Test.py file in the same directory (C:\Users\rmili\source\repos\ConsoleApplication5\x64\Debug) and tried to run it in Spyder after including just the one-line command "import example". Spyder returned an error message saying "No module named example". Can anyone please help? Thanks a lot! 1>------ Build started: Project: ConsoleApplication5, Configuration: Debug x64 ------ 1>LINK : C:\Users\rmili\source\repos\ConsoleApplication5\x64\Debug\ConsoleApplication5.dll not found or not built by the last incremental link; performing full link 1> Creating library C:\Users\rmili\source\repos\ConsoleApplication5\x64\Debug\ConsoleApplication5.lib and object C:\Users\rmili\source\repos\ConsoleApplication5\x64\Debug\ConsoleApplication5.exp 1>ConsoleApplication5.vcxproj -> C:\Users\rmili\source\repos\ConsoleApplication5\x64\Debug\ConsoleApplication5.dll ========== Build: 1 succeeded, 0 failed, 0 up-to-date, 0 skipped ========== UPDATE2: I followed every detail of Stuart's instructions in his Update 2.
I got example.pyd (as shown in the following screenshot) However, I got an error message when running in Spyder, as follows: (Sorry, I only managed to copy the second half of the Spyder output message as it's very hard to do text selection in the Spyder console) File "C:\Users\rmili\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "C:/Users/rmili/source/repos/ConsoleApplication5/x64/Debug/Test.py", line 9 d = "C:\Users\rmili\source\repos\ConsoleApplication5\x64\Debug" ^ SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape I changed the backslashes "\" to forward slashes "/" in the value that's being assigned to "d", and got the following error again: File "C:\Users\rmili\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "C:/Users/rmili/source/repos/ConsoleApplication5/x64/Debug/Test.py", line 12, in <module> import example ModuleNotFoundError: No module named 'example' UPDATE3: As shown in the following screenshot, the example module cannot be found in the Windows command line prompt. [Unable to find example module in Windows command line prompt] A: Make sure the compiled output file is named example.pyd (or has a symlink of that name pointing to it), and try running python from the same directory. Update: How to build a .pyd in Visual Studio On Windows, compiled Python modules are simply DLL files, but they have a .pyd file extension. You mentioned that your C++ file compiles successfully. Did you compile it as an executable (.exe), or as a .dll? You should compile it as a DLL, but change the file extension to .pyd. The Visual Studio documentation explains how to change your project to create a DLL. Here's what it says: * *Open the project's Property Pages dialog box.
For details, see Set C++ compiler and build properties in Visual Studio. *Click the Configuration Properties folder. *Click the General property page. *Modify the Configuration Type property. Also, on that same settings page, you can find an option to change the Target Extension property. Change it to .pyd. (Or simply rename the file yourself after it is built.) Update 2 I think you need to change three settings: * *Target Name * *Change to example *Target Extension * *Change to .pyd *Configuration Type * *Change to Dynamic Library (.dll) Also, I recommend deleting (or commenting out) everything from example.cpp except for the code shown below. (I don't know if the presence of a main() function may cause problems, so just remove it.) After that, building your project should produce the following file: C:\Users\rmili\source\repos\ConsoleApplication5\x64\Debug\example.pyd Then, from the Spyder console, try this: import os d = "C:\\Users\\rmili\\source\\repos\\ConsoleApplication5\\x64\\Debug" os.chdir(d) import example example.add(1,2) I don't have a Windows machine to test with. But in case it's useful, here's how I compiled your example on my Mac. (On Mac and Linux, they use the extension .so instead of .pyd.) // example.cpp #include <pybind11/pybind11.h> int add(int i, int j) { return i + j; } PYBIND11_MODULE(example, m) { m.doc() = "pybind11 example plugin"; m.def("add", &add, "A function which adds two numbers"); } $ # Compile $ clang++ -I${CONDA_PREFIX}/include -I${CONDA_PREFIX}/include/python3.7m -undefined dynamic_lookup -shared -o example.so example.cpp $ # Test $ python -c "import example; print(example.add(10,20))" 30 A: I have found the answer to my problem: * *Make sure all steps I described previously in my post are done *This is what I missed - it is important to make sure the file type of "example" is Python Extension Module, as shown in the following screenshot.
As shown in the screenshots of my updates, initially the Type of my "example.pyd" file was just "File". I managed to convert it to Python Extension Module by adding "cp35-win_amd64." in the file extension, resulting in the file name "examplelib.cp35-win_amd64.pyd", and then removing the same text that was added.
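A detail underlying both answers: Python only imports a compiled extension when its filename is the module name followed by one of the suffixes the interpreter recognizes ('.pyd'-based on Windows, '.so'-based on Linux/macOS, often carrying an ABI tag such as cp38-win_amd64). The recognized suffixes can be listed from the standard library, which is a quick way to check what filename a given build must produce:

```python
import importlib.machinery

# Suffixes this interpreter will accept for a compiled extension module.
# On Windows these are '.pyd'-based (e.g. '.cp38-win_amd64.pyd', '.pyd');
# on Linux/macOS they are '.so'-based.
suffixes = importlib.machinery.EXTENSION_SUFFIXES
print(suffixes)

# For a module named 'example', any of these filenames would be importable
# (provided its directory is on sys.path):
candidates = ["example" + s for s in suffixes]
print(candidates)
```

If the compiled file's name plus suffix is not in this candidate list, `import example` raises ModuleNotFoundError no matter where the file lives.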
{ "language": "en", "url": "https://stackoverflow.com/questions/63054260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: scaling up angular firestore app by associating data to each user I created an app using Angular 7, and crud with firestore. Everything is working fine with one user. Now, I am trying to scale it, and associate data with each logged user. I can't find detailed material on the web. I have users in collection. I want to create a collection of 'vacations' and nest it under each user, for his collection of vacations. I believe the first step, is to get the current logged user uid, and update whatever function I might have, to edit the collection, then .doc('current user id'). This is code I used to get current user's uid: this.userID = this.firestore.collection('users').doc(this.auth().user.uid); errors in pre-compiler: 1- Type 'AngularFirestoreDocument<{}>' is not assignable to type 'string' 2- Cannot invoke an expression whose type lacks a call signature. Type 'AuthService' has no compatible call signatures. This is currently how the data is structured: This is the auth service: export class AuthService { user$: Observable<User>; constructor( public afAuth: AngularFireAuth, public afs: AngularFirestore, public router: Router ) { // Get the auth state, then fetch the Firestore user document or return null this.user$ = this.afAuth.authState.pipe( switchMap(user => { // Logged in if (user) { return this.afs.doc<User>(`users/${user.uid}`).valueChanges(); } else { // Logged out return of(null); } }) ) } googleLogin() { const provider = new auth.GoogleAuthProvider() return this.oAuthLogin(provider); } public oAuthLogin(provider) { return this.afAuth.auth.signInWithPopup(provider) .then((credential) => { this.updateUserData(credential.user) }) } public updateUserData(user) { // Sets user data to firestore on login const userRef: AngularFirestoreDocument<User> = this.afs.doc(`users/${user.uid}`); const data = { uid: user.uid, email: user.email, displayName: user.displayName, photoURL: user.photoURL } return userRef.set(data, { merge: true }) } } Appreciate your help. 
A: Google's official answer can be found here - Get the currently signed-in user - Firebase Below is a function that returns a promise of type string. The promise resolves with the user's uid, which is returned from onAuthStateChanged() along with the rest of the user's Firebase auth object (displayName...etc). getCurrentUser(): Promise<string> { var promise = new Promise<string>((resolve, reject) => { this.afAuth.auth.onAuthStateChanged(returnedUser => { if (returnedUser) { resolve(returnedUser.uid); } else { reject(null); } }); }) return promise } You would call it in the constructor or ngOnInit: userDoc: User = null; constructor() { this.getCurrentUser().then((userID: string) => { //here you can use the id to get the user's firestore doc this.afs.collection('users').doc(userID).valueChanges() .subscribe(userFirestoreDoc => { // remember to subscribe this.userDoc = userFirestoreDoc; }) }).catch(nullID => { //when there is not a current user this.userDoc = null }) } To add a collection for 'vacations' nested in the user's doc you need to add a sub-collection to that user's firestore doc. I would advise only doing this once the user gets/adds their first vacation. You can simply set a doc in the sub-collection, and if the sub-collection doesn't already exist firestore will first create it and then add the new doc (vacation), so this is the only code you need to set the new collection and new vacation inside that collection: this.afs.collection('users').doc(this.userDoc.uid).collection('vacations').add(vacationObject).then(returnedVacation => { }).catch(error => { }) A: users → lastLogin This field on the Google auth user object is updated on every new login with the time of the last successful login
{ "language": "en", "url": "https://stackoverflow.com/questions/56013795", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Laravel 5 dynamically run migrations so I have created my own blog package in a structure of Packages/Sitemanager/Blog I have a service provider that looks like the following: namespace Sitemanager\Blog; use Illuminate\Support\ServiceProvider as LaravelServiceProvider; class BlogServiceProvider extends LaravelServiceProvider { /** * Indicates if loading of the provider is deferred. * * @var bool */ protected $defer = false; /** * Bootstrap the application events. * * @return void */ public function boot() { $this->handleConfigs(); $this->handleMigrations(); $this->handleViews(); $this->handleRoutes(); } /** * Register the service provider. * * @return void */ public function register() { // Bind any implementations. $this->app->make('Sitemanager\Blog\Controllers\BlogController'); } /** * Get the services provided by the provider. * * @return array */ public function provides() { return []; } private function handleConfigs() { $configPath = __DIR__ . '/config/blog.php'; $this->publishes([$configPath => config_path('blog.php')]); $this->mergeConfigFrom($configPath, 'blog'); } private function handleTranslations() { $this->loadTranslationsFrom(__DIR__.'/lang', 'blog'); } private function handleViews() { $this->loadViewsFrom(__DIR__.'/views', 'blog'); $this->publishes([__DIR__.'/views' => base_path('resources/views/vendor/blog')]); } private function handleMigrations() { $this->publishes([__DIR__ . '/migrations' => base_path('database/migrations')]); } private function handleRoutes() { include __DIR__.'/routes.php'; } } Now, what i would like to do is run the migrations dynamically if they have never been run before or within an installation process i suppose. I've seen in older documentation you could so something like this: Artisan::call('migrate', array('--path' => 'app/migrations')); However, this is invalid in laravel 5, how can I approach this? 
A: Artisan::call('migrate', array('--path' => 'app/migrations')); will work in Laravel 5, but you'll likely need to make a couple tweaks. First, you need a use Artisan; line at the top of your file (where use Illuminate\Support\ServiceProvider... is), because of Laravel 5's namespacing. (You can alternatively do \Artisan::call - the \ is important). You likely also need to do this: Artisan::call('migrate', array('--path' => 'app/migrations', '--force' => true)); The --force is necessary because Laravel will, by default, prompt you for a yes/no in production, as it's a potentially destructive command. Without --force, your code will just sit there spinning its wheels (Laravel's waiting for a response from the CLI, but you're not in the CLI). I'd encourage you to do this stuff somewhere other than the boot method of a service provider. These can be heavy calls (relying on both filesystem and database calls you don't want to make on every pageview). Consider an explicit installation console command or route instead. A: After publishing the package: php artisan vendor:publish --provider="Packages\Namespace\ServiceProvider" You can execute the migration using: php artisan migrate Laravel automatically keeps track of which migrations have been executed and runs new ones accordingly. If you want to execute the migration from outside of the CLI, for example in a route, you can do so using the Artisan facade: Artisan::call('migrate') You can pass optional parameters such as force and path as an array to the second argument in Artisan::call. Further reading: * *https://laravel.com/docs/5.1/artisan *https://laravel.com/docs/5.2/migrations#running-migrations A: For the Laravel 7(and probably 6): use Illuminate\Support\Facades\Artisan; Artisan::call('migrate'); will greatly work.
{ "language": "en", "url": "https://stackoverflow.com/questions/37953783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: logits and labels must be broadcastable, labels and logits dimension mismatch I am new to Tensorflow and making a multiclass classifier. My dataset has 6 classes with 150x150 images. I am getting an error about a mismatch between the dimensions of the logits and labels. I have seen other such questions on Stack Overflow and my problem doesn't seem to be any of them. Could anyone tell me what the error is? Also note that when constructing the model I do not need to specify the batch size for the output layer (it shows as None in model.summary() and is determined automatically later), so what things could affect it? The error says: logits and labels must be broadcastable: logits_size=[40,6] labels_size=[10,6] The model is defined as follows: batch_size = 10 epochs = 100 IMG_HEIGHT = 150 IMG_WIDTH = 150 IMG_CHANNEL = 3 classes = 6 inputs = Input(shape=(IMG_HEIGHT, IMG_WIDTH ,3)) conv1 = Conv2D(96, 11, strides=(4,4) , padding='valid', activation='relu')(inputs) pool1 = MaxPooling2D(pool_size=(3, 3), strides=(2,2), padding='valid', data_format=None)(conv1) a = tf.keras.layers.Lambda(tf.nn.local_response_normalization) lrn1 = a(pool1) conv2 = Conv2D(256, 5, padding='same', strides=(1,1) , activation='relu')(lrn1) pool2 = MaxPooling2D(pool_size=(3, 3), strides=(2,2), padding='valid', data_format=None)(conv2) b = tf.keras.layers.Lambda(tf.nn.local_response_normalization) lrn2 = b(pool2) conv3 = Conv2D(384, 3, padding='same', strides=(1,1), activation='relu')(lrn2) conv4 = Conv2D(384, 3, padding='same', strides=(1,1), activation='relu')(conv3) conv5 = Conv2D(256, 3, padding='same', strides=(1,1), activation='relu')(conv4) conv6 = MaxPooling2D(pool_size=(3, 3), strides=(2,2), padding='valid', data_format=None)(conv5) flat1 = Flatten()(conv6) dense1 = Dense(4096, activation='relu')(flat1) drop1 = Dropout(0.5)(dense1) dense2 = Dense(4096, activation='relu')(drop1) drop2 = Dropout(0.5)(dense2) dense3 = Dense(classes,activation='softmax')(drop2) model = Model(inputs=inputs, outputs=dense3, name="one") opt = SGD(lr=0.1,
momentum=0.1) model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy']) # Generator Declaration train_image_generator = ImageDataGenerator( rescale=1./255, rotation_range=45, width_shift_range=.15, height_shift_range=.15, horizontal_flip=True, zoom_range=0.5 ) # Generator for our training data validation_image_generator = ImageDataGenerator( rescale=1./255 ) # Generator for our validation data train_data_gen = train_image_generator.flow_from_directory( batch_size=batch_size, directory=train_dir, shuffle=True, class_mode='categorical', target_size=(IMG_HEIGHT, IMG_WIDTH) ) val_data_gen = validation_image_generator.flow_from_directory( batch_size=batch_size, directory=val_dir, class_mode='categorical', ) history = model.fit( train_data_gen, steps_per_epoch=3000//batch_size, epochs=epochs, validation_data=val_data_gen, validation_steps=3000//batch_size ) A: It seems you are using AlexNet, but on small images. If you are using the AlexNet architecture on smaller images, you need to resize the images or change the pool and stride hyperparameters, because with the current shape something funky happens in conv5, affecting the architecture.
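To see what the 150x150 input does to this AlexNet-style stack, the spatial size can be traced layer by layer with the usual output-size formula (floor((n - k) / s) + 1 for 'valid' padding; ceil(n / s) for 'same'). This sketch only checks spatial shapes — it does not by itself explain the [40,6] vs [10,6] batch mismatch, though it is worth noting that the validation generator omits target_size, which defaults to (256, 256) in flow_from_directory:

```python
import math

def out_size(n, k, s, padding="valid"):
    # Spatial output size of a conv/pool layer for a square input of size n,
    # kernel k, stride s.
    if padding == "same":
        return math.ceil(n / s)
    return (n - k) // s + 1  # 'valid'

n = 150
n = out_size(n, 11, 4)         # conv1: 11x11, stride 4, 'valid' -> 35
n = out_size(n, 3, 2)          # pool1 -> 17
n = out_size(n, 5, 1, "same")  # conv2, 'same' stride 1 -> 17
n = out_size(n, 3, 2)          # pool2 -> 8
# conv3..conv5 are 'same' with stride 1, so the size stays 8
n = out_size(n, 3, 2)          # final pooling layer -> 3
print(n)                       # 3, so Flatten sees 3*3*256 = 2304 features
```

Tracing shapes like this is a cheap sanity check before training; the same layer sizes on AlexNet's usual 227x227 input give 55 → 27 → 13 → 6 instead.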
{ "language": "en", "url": "https://stackoverflow.com/questions/62779735", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Add dimension to pandas DataFrame I have a pandas DataFrame that includes a pipe-separated string in one of the fields. I've split this into a list inside an apply and added it to the DataFrame. The number and content of the values in the pipe-separated string vary. df = DataFrame([{'wibble': 'a', 'pipestring': 'aa|aaa|aaa'}, {'wibble': 'b', 'pipestring': 'bb|bbbb|bbb|bbbbbb'}]) df['pipelist'] = df['pipestring'].map(lambda x: x.split('|')) I'm pretty new to pandas so could be completely wrong about this, but I think this would be better represented via a DataFrame with multiple index levels so I can take advantage of panda's indexing and other (fabulous) tools. However I can't figure out how to do this. Any pointers / advice on what I should be doing instead much appreciated. A: What is your computational goal more specifically? Here's a way to split your data up and create a combined frame In [44]: x = df['pipestring'].apply(lambda x: pd.Series(x.split('|'))) In [45]: x Out[45]: 0 1 2 3 0 aa aaa aaa NaN 1 bb bbbb bbb bbbbbb In [46]: df.join(x).set_index(['wibble']) Out[46]: pipestring pipelist 0 1 2 3 wibble a aa|aaa|aaa [aa, aaa, aaa] aa aaa aaa NaN b bb|bbbb|bbb|bbbbbb [bb, bbbb, bbb, bbbbbb] bb bbbb bbb bbbbbb A: The quickest way to get started with that is to stack your dataframe: In [44]: df = df.stack() In [45]: df.ix[0, 'pipelist'] Out[45]: ['aa', 'aaa', 'aaa'] In [46]: df Out[46]: 0 pipestring aa|aaa|aaa wibble a pipelist [aa, aaa, aaa] 1 pipestring bb|bbbb|bbb|bbbbbb wibble b pipelist [bb, bbbb, bbb, bbbbbb] Does that get you where you want to be?
{ "language": "en", "url": "https://stackoverflow.com/questions/15390280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: MaxJsonLength error in NewtonSoft JsonConvert.SerializeObject I have an ASP.NET project, with some AJAX calling a webmethod, that should return JSON. If the DataSet has about 100 rows, there is no problem. But with 1000 rows, it throws the error: Error during serialization or deserialization using JSON JavaScriptSerializer. The size of the string exceeds the value set in the maxJsonLength property [WebMethod(EnableSession = true)] public static string PublicWebMethod() { DataSet ds = new DataSet(); // in reality this does a monstrous query if (ds.Tables[0].Rows.Count > 0) { return JsonConvert.SerializeObject(clsUtil.ToArray(ds.Tables[0])); } else { return "false"; } } } How can I solve this by setting a configuration in the method? I don't want to change the web.config A: I had a similar issue and it took a while to figure out the problem and fix. Please include the following code in your web.config: <system.web.extensions> <scripting> <webServices> <jsonSerialization maxJsonLength="50000000" /> </webServices> </scripting> </system.web.extensions>
{ "language": "en", "url": "https://stackoverflow.com/questions/53657225", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Multiple web docker containers on a host What is the best practice concerning multiple web containers running on a single host. How would I assign these subdomains subdomain1.mydomain.com to container1 and subdomain2.mydomain.com to container2 when both containers run on host1 with ports 50000 and 50001? How would this configuration work if I wanted to assign paths to both containers e.g.: mydomain.com/pathnameleadingtocontainer1 , mydomain.com/pathnameleadingtocontainer2 again with both containers running on host1 with ports 50000 and 50001?
{ "language": "en", "url": "https://stackoverflow.com/questions/46452578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to process the files in NodeJS sent from client FormData? I use HTML5 FormData to upload files from the client to Node Server. In the node server, I have included multiparty module to process the uploaded, but not sure of how it works. server.js: var parms = urlLib.parse(req.url, true); var form = new multiparty.Form(); form.parse(req, function (err,fields,files) { console.log(fields, files); }); I want to know how to write the code for processing the uploaded file. I have tried using fs.readFile and fs.createReadStream, but node threw an error "file path must be of type String" but the uploaded files are blob object. So, how do i proceed further?
{ "language": "en", "url": "https://stackoverflow.com/questions/39064542", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: socket.io client not able to connect and receive messages emit by server I am emitting messages from socket.io server running on port 8001 but my socket.io client not able to connect and receive these messages my index.html (client): <script src="https://cdn.socket.io/socket.io-4.0.0.js"></script> <script> //var socket = io(); //var socket = io.connect('http://localhost:8001'); var socket = io('http://localhost:8001', { transports : ['websocket'] }); socket.on('connect', function(){ console.log("connected"); socket.on("message", data => { console.log(data); }); }); </script> My nodejs server code: const app = require("express")(); const server = require("http").createServer(app); const io = require("socket.io")(server, { cors: { origin: '*', } }); io.on("connection", () => { console.log("Connected!"); }); var redis = require('redis'); //var url = "redis://:@localhost:6379"; //var redis = require('redis-url').connect(); //var client = redis.createClient(url); var client = redis.createClient(); //var client = redis.createClient(); client.on("error", function(error) { console.error(error); }); client.subscribe('notification'); client.on('message', function(channel, msg) { console.log("Message received: "+msg); io.sockets.emit(msg); }); console.log('starting server on 8001...'); server.listen(8001); My node js server console logs: starting server on 8001... Message received: from laravel A: io.sockets.send(msg); this worked for me. also make sure you are using the same version of socket.io on both client and server
{ "language": "en", "url": "https://stackoverflow.com/questions/66648712", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: OpenVPN Test cases I am looking for an answer to this: how can I automatically disconnect a VPN client after a specific idle time or a specific connection duration? There are many clients connected to my OpenVPN server, but they forget to disconnect the VPN client, or they stay connected for a long time while doing nothing. I am using OpenVPN Access Server v2.4.12
{ "language": "en", "url": "https://stackoverflow.com/questions/72880702", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: PHP site/script map with dependencies Just inherited a large PHP 5.3 site and wondered if there were some sort of crawler or site map tool that would identify the files and their dependencies. A: You can determine PHP version and extension dependencies with PEAR's PHP_CompatInfo package. As for PEAR packages the app might be using, you can see what's installed using pear list -a I don't know of a tool that will tell you which external script dependencies are in use other than grep.
{ "language": "en", "url": "https://stackoverflow.com/questions/5930074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Difference in the Validation and Evaluation Accuracy I am using a pre-trained GoogLeNet fine-tuned on my dataset for classifying 11 classes. The validation dataset gives a "loss3/top1" of 86.5%, but when I evaluate the performance on my evaluation dataset it gives me 77% accuracy. Whatever changes I made in train_val.prototxt, I made the same changes in deploy.prototxt. Is this difference between the validation and evaluation accuracy normal, or did I do something wrong? Any suggestions? A: In order to get a fair estimate of your trained model on the validation dataset you need to set test_iter and the validation batch size in a meaningful manner. test_iter should be set to: Val_data / test_batch_size Where Val_data is the size of your validation dataset and test_batch_size is the batch_size value set for the validation phase.
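To make the formula concrete with made-up numbers (neither figure is from the question): if the validation set had 1,000 images and the validation-phase batch_size were 25, test_iter would need to be 40 so that one full pass over the validation set is averaged into the reported accuracy; rounding up covers a ragged last batch:

```python
import math

val_data = 1000        # hypothetical validation-set size
test_batch_size = 25   # hypothetical batch_size of the validation phase

test_iter = math.ceil(val_data / test_batch_size)
print(test_iter)  # 40

# If test_iter were smaller, only part of the validation set would be
# averaged into the reported accuracy, which can make it disagree with
# an evaluation run over the full dataset.
images_scored = test_iter * test_batch_size
print(images_scored >= val_data)  # True
```

A too-small test_iter is one common, mundane reason a framework's reported validation accuracy differs from an offline evaluation over the whole set.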
{ "language": "en", "url": "https://stackoverflow.com/questions/37674651", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Resolving Promises sequentially angular I need to execute var1 before var2 in Angular. Please let me know how I can achieve this. Thanks for your help. let var1 = Promise.resolve(database.query(sql)) .then((res) => { someVar = res; return someVar; }); let var2 = Promise.resolve(database.query(sqlanother)) .then((resanother) => { someanotherVar = resanother; return someanotherVar; });
{ "language": "en", "url": "https://stackoverflow.com/questions/67798079", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Why check if value of an array is an array? I came across the following code in the book PHP Solutions, 2nd Edition <?php foreach ($_POST as $key => $value) { // assign to temporary variable and strip whitespace if not an array $temp = is_array($value) ? $value : trim($value); // if empty and required, add to $missing array if (empty($temp) && in_array($key, $required)) { $missing[] = $key; } if (in_array($key, $expected)) { // otherwise, assign to a variable of the same name as $key ${$key} = $temp; } } ?> My question is in regards to this line: $temp = is_array($value) ? $value : trim($value); If $value is the value held in the array (user inputted content from a form), why is it critical to determine whether or not the value is an array? Is it for security? A: "why is it critical to determine whether or not the value is an array?" Because trim expects a string and the function might not work when passing an array. It has nothing to do with security.
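For illustration only, the same guard translated into Python (not PHP): strip whitespace when the value is a string, and pass lists — e.g. multi-select form fields — through untouched:

```python
def clean(value):
    # Mirror of the PHP ternary: is_array($value) ? $value : trim($value)
    # Lists (multi-select form fields) pass through; strings get stripped.
    return value.strip() if isinstance(value, str) else value

post = {"name": "  Alice  ", "interests": ["php", "python"]}  # sample form data
cleaned = {key: clean(val) for key, val in post.items()}
print(cleaned)  # {'name': 'Alice', 'interests': ['php', 'python']}
```

The point is the same in both languages: a string-manipulation function applied to an array either errors or misbehaves, so the type check guards against multi-valued form fields, not against attackers.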
{ "language": "en", "url": "https://stackoverflow.com/questions/12919009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Simple regex redirect not working I'm trying to setup a simple regex redirect. I have the following: ^.+?testurl\.com/folder-path/(\w+?$)/secondfolder Which should then redirect to http://www.testurl.com/folder-path/$1 This works if I don't add the /secondfolder to the end of the regex, but as soon as I add /secondfolder it doesn't work so I presume my syntax is wrong. Can anybody shed any light on what is wrong about the /secondfolder part of this? Thanks! A: $ denotes the end of the string (while ^ marks the start), so you should put it at the end of the string. ^.+?testurl\.com/folder-path/(\w+?)/secondfolder$
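The effect of the misplaced anchor can be checked directly. The check below uses Python's re purely for convenience — the original rule presumably lives in a web server's rewrite configuration, so only the pattern itself carries over, not the surrounding syntax:

```python
import re

url = "http://www.testurl.com/folder-path/myfolder/secondfolder"

broken = r"^.+?testurl\.com/folder-path/(\w+?$)/secondfolder"
fixed = r"^.+?testurl\.com/folder-path/(\w+?)/secondfolder$"

# With '$' inside the group, the string would have to END immediately
# after the folder name, so '/secondfolder' can never follow — the
# pattern cannot match any input at all.
print(re.search(broken, url))  # None

m = re.search(fixed, url)
print(m.group(1))  # 'myfolder'
```

Moving `$` to the very end lets the capture group stop at the next `/` while still anchoring the whole pattern to the end of the URL.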
{ "language": "en", "url": "https://stackoverflow.com/questions/21022168", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Failed to bundle asset error with CDK + Lambda I have this project structure (where control is the name and root of my project): control |_ src |_ control_loader -> this has a function inside called also control_loader |_ utils |_ some_helper_function.py |_ __init__.py |_ __init__.py |_ lib |_ some-cdk-where-i-declare-a-lambda.ts |_ requirements.txt Inside some-cdk-where-i-declare-a-lambda.ts I have this (among all the other necessary stuff): new Function(this, `${this.appName}${this.stageName}ControlLambdaLoader`, { code: Code.fromAsset(path.join(__dirname, '../src'), { bundling: { image: Runtime.PYTHON_3_8.bundlingImage, command: [ 'bash', '-c', 'pip install -r requirements.txt -t /asset-output && cp -au . /asset-output', ], }, }), runtime: Runtime.PYTHON_3_8, handler: 'control_loader.control_loader', vpc, vpcSubnets: vpc.selectSubnets({ subnetType: SubnetType.PRIVATE_WITH_NAT, }), }); However, upon running cdk synth, I get the following: (venv) PS C:\Users\rodri\Documents\control> cdk synth npx: installed 15 in 1.145s Bundling asset controlPipelineStack/controlBetaDeployStage/controlbetaStack/controlbeta/controlbetaControlLambdaLoader/Code/Stage... Failed to bundle asset controlPipelineStack/controlBetaDeployStage/controlbetaStack/controlbeta/controlbetaControlLambdaLoader/Code/Stage, bundle output is located at C:\Users\rodri\Documents\control\cdk.out\asset.059c3b383943a1fadd3d933b670a7d351991e742d24a9785474b35c846267fde-error: Error: spawnSync docker ENOENT This is a very cryptic error. I know the bundling is done by Docker to push the dependencies as a zip asset, but any idea where this is failing? I also tried changing the location of requirements.txt to inside src and that didn't help. I can deploy everything if I remove the Lambda. What am I doing wrong? Also, how do I make the bundle include some_helper_function.py as well? Thanks!
{ "language": "en", "url": "https://stackoverflow.com/questions/72080147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Implementing a scheduler for a backup program For a backup program I am doing, I have already finished the GUI. Now I want to do the functional requirements. Each backup can have schedules. There are predefined settings like every Sunday or Monday, but the user can also specify his own schedules. As I have never done anything like this, I was wondering what a good approach would be to run a backup every x hours or days. I was thinking about using threads or writing a service, but both fields are totally new to me. What would be the best method here? A: If threaded development and service development are both totally new then I think you will struggle to implement this in a useful way. Even so... Scheduler-type applications are best run as services, because otherwise you need the user to be logged in to be running the application. Services run independently of the user being logged in. Because of this, however, services don't have a user interface, so your GUI needs to package up the details of the schedule into a configuration file somewhere, then signal the service to re-load that configuration file so that the service will then know what to do and when to do it. The service will normally spawn a worker thread to do pretty much everything, and that worker thread needs to be able to respond to the service being shut down (read up on AutoResetEvent to see how this might be done across threads). The thread will then wait until either an event occurs or the appropriate time arrives, and then do whatever it has to do. None of this is actually complicated, but I suggest you do some digging into multithreaded programming first. A: I agree with ColinM, services are best for this scheduler type of application. You have to combine the service with your application to run your code at scheduled intervals. See the article for more details - http://msdn.microsoft.com/en-us/magazine/cc163821.aspx
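The wait-until-an-event-or-a-timeout loop the first answer describes (AutoResetEvent in .NET) looks roughly like the following — sketched in Python rather than C# purely for illustration, with threading.Event.wait(timeout) playing the role of the event wait and the schedule interval as the timeout; the class name and interval are made up for the sketch:

```python
import threading
import time

class BackupWorker:
    """Hypothetical worker loop for a scheduler service (illustrative only)."""

    def __init__(self, interval):
        self.interval = interval            # seconds between backup runs
        self.runs = 0
        self._stop = threading.Event()      # plays the role of AutoResetEvent
        self._thread = threading.Thread(target=self._loop, daemon=True)

    def start(self):
        self._thread.start()

    def _loop(self):
        # wait() returns True when stop was signalled, False on timeout,
        # so one "backup" runs per interval until the service shuts down.
        while not self._stop.wait(timeout=self.interval):
            self.runs += 1                  # a real service would back up here

    def stop(self):
        self._stop.set()                    # signal shutdown across threads
        self._thread.join()

w = BackupWorker(interval=0.05)
w.start()
time.sleep(0.3)                             # let a few "backups" happen
w.stop()
print(w.runs >= 1)                          # True
```

The key property, in either language, is that the worker never sleeps blindly: a shutdown signal interrupts the wait immediately instead of stalling until the next scheduled run.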
{ "language": "en", "url": "https://stackoverflow.com/questions/17421676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Current Status of Sproutcore/Ember/Blossom/Sencha and Mobile devices (or alt frameworks) I've been looking over Sproutcore, Ember, Blossom, and other competitive framework efforts (e.g. Sencha) to select one for an HTML5 client-side application project. The state, information, and documentation from these projects are a bit fragmented and in need of clarity, so I am presenting this to the community. My project is to be a native-like HTML5 application with desktop-level complexity, in need of a complete application framework that will work well on desktops and run with good speed on mobile devices with touch awareness. The widgets should be native-like (not web-like), but customizable so as to be unique to the application. Questions/framework requirements: * *Native vs. web-style applications. The framework should make it easy to build native-like user experiences, with the ability to create a custom native feel (not just wholly imitating Mac/Windows/iOS). Some of the text surrounding Ember indicates it is really meant for web-style apps - which, given it has no UI layer, maybe goes without saying. For frameworks like Sencha, can they easily accommodate custom widgets? *Mobile appropriateness. The framework should be appropriate for mobile devices and have facilities for touch input and gestures. Several notes I've seen in my research indicate that Sproutcore and Blossom aren't very appropriate for mobile, and that Ember is better geared towards mobile (size?). It isn't clear whether the touch/mobile libraries are very developed in Sproutcore/Blossom and if they will be supported in their current state going forward (and Blossom's compile-to-native is not acceptable). On the other hand, for frameworks like Sencha, do they have the facility to work well on desktop as well as mobile? *Framework completeness. 
The framework should be a fairly complete application framework, with desktop-like OO expectations and management for automatically and efficiently syncing, managing, and serializing the data model with the server. Not sure if there is much difference between Ember and Sproutcore; how do other efforts like Sencha stack up? A: Your question covers a lot of ground. I will pick some quotes and answer them directly. My project is to be a native-like HTML5 application with desktop level complexity in need of a complete application framework Ember.js specifically bills itself as a "web-style" framework, not an RIA framework. That said, you can build anything you want, but you would be trailblazing. Sproutcore bills itself as an RIA framework. You have complete control over the DOM, so if you can do it in the browser, you can do it in Sproutcore. Ext-Js is also a good application framework for desktops (Sencha Touch is for mobile). If you like the way its examples look, then it's a good choice. You can of course customize the DOM and write your own widgets. Blossom is basically Sproutcore with a canvas-based view layer. It just went into beta, so you would definitely be trailblazing if you went with it. So, you can basically use any of the frameworks you mentioned for the RIA part of your enterprise. I would eliminate Ember.js simply because the framework itself positions itself for web-style (e.g. Twitter) as opposed to RIA (e.g. iCloud) apps, which is not what you want. The widgets should be native-like (not web-like), but customizable so to be unique to the application. All three of your remaining options can do this. If you like Sencha's widgets, it's a good choice. I don't know if they are native enough for you. That said, with any of the remaining frameworks you can customize the DOM to your heart's content. Mobile Appropriateness. Framework should be appropriate for mobile devices This is a tough one. 
Sencha Touch (which is separate from but similar to Ext-Js) is very popular and gets the job done. It is performant too; a non-trivial app ran fine on my original Droid (which surprised me). Sproutcore is very heavyweight. It has mobile support (i.e. for touch events) but you need to be very careful about the DOM you create, so as not to overwhelm the browser. I wouldn't choose Sproutcore for mobile, although you could if you are very careful. and blossom compile to native is not acceptable That does not seem reasonable to me. To be clear, NONE of these frameworks run natively on mobile devices; they ALL run in the browser. Blossom comes closest, as the canvas API is mapped directly to the native API, giving you a truly native app. The only way you could get closer would be to use Objective-C/Java for iOS and Android. So basically, at this point you're left with Sencha (Ext-Js) and Blossom. Blossom is still in beta; you would be trailblazing if you tried it. Sencha is established, has great support (Blossom support is good on IRC), and a large developer base. So Sencha is the choice, unless you really want to be cutting edge and take a little risk. A: Troy. Indeed, Ember can run with another view-layer framework such as jQuery Mobile, which can provide an "app-like" look and feel. There is a GitHub project: https://github.com/LuisSala/emberjs-jqm. In my view, if you need very cool animation you can use Blossom. If you want to build an app, SC or Ember should be OK. I'll choose Ember because it's loosely coupled.
{ "language": "en", "url": "https://stackoverflow.com/questions/9695033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: LINQ ignore incorrect type being assigned to property So, I have my own mapper (NOT AutoMapper) which maps models to each other. You give it the model you'd like to map to, and with the Map method you push an object in. Besides this, I wrote the Extend method, which functions as an override for the Map method to, for example, add properties which are not available in the object being mapped. Problem: The problem here is that my public Mapper<T> Extend(Func<T, T> func) method doesn't like the different types. Possible solutions: There are 2 solutions I'm thinking of: * *Ignore the error and map the value within my Extend method, which isn't possible as far as I know due to the expression being executed immediately. *Create a LINQ method which maps the value for me, e.g. q => q.Ownership = obj.Ownerships.First().Map(). Question: How can I resolve this error and achieve what I want?
{ "language": "en", "url": "https://stackoverflow.com/questions/26358093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Wrapper Function for a Recursive Function that takes a "pass by reference" value Here's the question: I'm trying to do, Node* foo(Node *& ptr, other args) { // some work here } In the wrapper function, I have to declare a temporary value (which is meaningless): Node* wrapper( ... ) { Node* p = nullptr; return foo(p, other args); } Is there any way to get rid of the first line in the wrapper function? Thanks!! A: I guess that in foo you assign ptr some value (otherwise the *& would have no purpose). You cannot pass nullptr directly, and you have to declare a pointer as you have shown in the wrapper, because nullptr is an rvalue. An rvalue is an expression, or an "unnamed object", and you cannot take the address of it. There is more information here: Why don't rvalues have an address?.
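The constraint can be shown in a few lines. The body of foo below is hypothetical (the original's recursive logic isn't shown); it only illustrates why the parameter must be a named lvalue:

```cpp
#include <cassert>

struct Node { int value; Node* next; };

// Hypothetical body: foo may rebind ptr, which is the whole reason
// it takes Node*& instead of Node*.
Node* foo(Node*& ptr, int v) {
    if (ptr == nullptr)
        ptr = new Node{v, nullptr};  // rebinding is only visible to the caller via the reference
    return ptr;
}

// foo(nullptr, v) would not compile: an rvalue cannot bind to Node*&,
// so the wrapper has to introduce a named pointer first.
Node* wrapper(int v) {
    Node* p = nullptr;
    return foo(p, v);
}
```

If foo never actually needs to rebind the caller's top-level pointer, an overload taking a plain Node* would remove the temporary; otherwise the named variable in the wrapper is unavoidable.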
{ "language": "en", "url": "https://stackoverflow.com/questions/18973964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: GCC cosine optimization and sigbus I have a program that writes to some FPGA memory. When the program is compiled with optimizations (either O1, O2 or O3) it crashes with a bus error. However, there is no crash when the program is compiled without optimizations. Here is a minimal example which crashes: void set_reg(const uint32_t *data) { uint32_t val = data[0]; dvm.write32(dac_map, 0, static_cast<uint32_t>(8192 * cos(val))); // crash } When I add volatile there is no more crash: void set_reg(const uint32_t *data) { volatile uint32_t val = data[0]; dvm.write32(dac_map, 0, static_cast<uint32_t>(8192 * cos(val))); // no crash } If I remove the call to cos there is no problem. This seems related to GCC: program doesn't work with compilation option -O3. When I add the flag -ffloat-store, the program doesn't crash. However, on the actual code, which is more complicated, adding the flag -ffloat-store doesn't solve the problem. I don't understand what kind of optimization GCC does here and how it leads to a SIGBUS. If anyone could explain that, it would be useful for debugging. Thank you. NB: 1) GCC version is arm-linux-gnueabihf-g++ (Ubuntu/Linaro 5.3.1-14ubuntu2) 5.3.1 20160413 2) The function write32 is defined as void write32(MemMapID id, uint32_t offset, uint32_t value) { ASSERT_WRITABLE *(volatile uintptr_t *) (GetBaseAddr(id) + offset) = value; }
{ "language": "en", "url": "https://stackoverflow.com/questions/37970290", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: split textarea value properly jQuery regex I am having problems properly splitting a textarea value. My current snippet splits each line that starts with "-" and displays it as the value of a span element, but it won't collect the value of the next line when that line does not start with "-". For example if I paste this text into the textarea: - first match rest of first match - second match - third match The script should output: <span style="color:red;">- first match rest of first match </span><br> <span style="color:red;">- second match</span><br> <span style="color:red;">- third match</span><br> $(document).ready(function() { const regex = /^\s*-\s*/; $("#txt").keyup(function() { const entered = $('#textarea').val() const lines = entered.split(/\n/); let spans = ""; for (const line of lines) { if (regex.test(line)) { spans += "<span style='color:red;'>- " + line.replace(regex, '') + "</span><br/>"; } } $(".results").html(spans); }); }); .row { background: #f8f9fa; margin-top: 20px; padding: 10px; } .col { border: solid 1px #6c757d; } <script src="https://code.jquery.com/jquery-3.5.1.slim.min.js" integrity="sha384-DfXdz2htPH0lsSSs5nCTpuj/zy4C+OGpamoFVy38MVBnE+IbbVYUew+OrCXaRkfj" crossorigin="anonymous"></script> <div class="container"> <div class="row"> <div class="col-12"> <form> <textarea id="textarea" rows="5" cols="60" placeholder="Type something here..."></textarea> </form> </div> <div class="col-12 results"></div> </div> </div> So, basically the script should split the textarea value from a line that starts with "-" until the next line which starts with "-". The code snippet is also available here: https://jsfiddle.net/zecaffe/f7zv3udh/1/ A: Why not just split on \n-? 
$(document).ready(function() { $("#textarea").keyup(function() { const entered = $('#textarea').val() const lines = entered.split(/\n-/); let spans = ""; lines.forEach((l,i)=>{ // remove the first - if(i===0 && l[0]==="-") l = l.slice(1) spans += "<span style='color:red;'>- " + l + "</span><br/>"; }) $(".results").html(spans); }); }); .row { background: #f8f9fa; margin-top: 20px; padding: 10px; } .col { border: solid 1px #6c757d; } <script src="https://code.jquery.com/jquery-3.5.1.slim.min.js" integrity="sha384-DfXdz2htPH0lsSSs5nCTpuj/zy4C+OGpamoFVy38MVBnE+IbbVYUew+OrCXaRkfj" crossorigin="anonymous"></script> <div class="container"> <div class="row"> <div class="col-12"> <form> <textarea id="textarea" rows="5" cols="60" placeholder="Type something here..."></textarea> </form> </div> <div class="col-12 results"></div> </div> </div>
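The same grouping can also be done by folding continuation lines into the last "-" entry; here is a DOM-free sketch of that logic (function name is illustrative, not from the original snippet):

```javascript
// Group textarea lines: a line starting with "-" opens a new entry,
// any other non-empty line is appended to the previous entry.
function groupEntries(text) {
  const entries = [];
  for (const line of text.split(/\n/)) {
    if (/^\s*-\s*/.test(line)) {
      entries.push('- ' + line.replace(/^\s*-\s*/, ''));
    } else if (entries.length > 0 && line.trim() !== '') {
      entries[entries.length - 1] += ' ' + line.trim();
    }
  }
  return entries;
}

const sample = '- first match\nrest of first match\n- second match\n- third match';
console.log(groupEntries(sample));
// → ['- first match rest of first match', '- second match', '- third match']
```

Each returned entry can then be wrapped in a span exactly as in the answer above.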
{ "language": "en", "url": "https://stackoverflow.com/questions/65173969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: ansible.vars.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'private_ip' From: Best way to launch aws ec2 instances with ansible - name: Add the newly created EC2 instance(s) to the local host group (located inside the directory) local_action: lineinfile dest="/etc/ansibles/aws/hosts" regexp={{ item.private_ip }} insertafter="[webserver]" line={{ item.private_ip }} with_items: "{{ ec2.instances }}" creates this error: fatal: [localhost]: FAILED! => { "failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'ansible.vars.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'private_ip'\n\n I have defined the variable private_ip: under vars, with a value A: I think that the private_ip property in the code above refers to a property of the ec2 variable that's used to catch the returned values from the ec2 module (from the previous step), not the one that you defined elsewhere. - name: Launch the new EC2 Instance local_action: ec2 group={{ security_group }} instance_type={{ instance_type}} image={{ image }} wait=true region={{ region }} keypair={{ keypair }} count={{count}} register: ec2 (this is where the variable is defined!!!) Essentially Ansible is complaining that the ec2 variable hasn't got the attribute 'private_ip', so check the preceding code and see how that variable gets defined. In the example above you're trying to get the private IP address from AWS. Is that really what you want? Most of the time you want the public IP address, since that's what you will use to connect to the ec2 machine, provision it, deploy your app, etc...
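To make the dependency concrete, here is a sketch of how the two tasks fit together (task names are illustrative, and it switches to item.public_ip as the answer suggests): the second task can only use item.private_ip or item.public_ip because the first task registered its result as ec2.

```yaml
- name: Launch the new EC2 instance(s)
  local_action: ec2 group={{ security_group }} instance_type={{ instance_type }} image={{ image }} wait=true region={{ region }} keypair={{ keypair }} count={{ count }}
  register: ec2   # this is what makes ec2.instances (and item.*_ip below) exist

- name: Add the newly created instance(s) to the local host group
  local_action: lineinfile dest="/etc/ansibles/aws/hosts" regexp={{ item.public_ip }} insertafter="[webserver]" line={{ item.public_ip }}
  with_items: "{{ ec2.instances }}"
```

If the register line is missing, or a different variable name is registered, the loop variable has no private_ip/public_ip attribute and Ansible raises exactly the error shown above.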
{ "language": "en", "url": "https://stackoverflow.com/questions/45895990", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Message/Folder permissions Is there a way to change folder and/or message permissions? I noticed that folders created in the root folder of a user are not visible - is this a bug or a feature? Thank you. A: I sent a POST request with: URL= outlook.office.com/api/v2.0/me/MailFolders/root/childfolders content inside: { "DisplayName": "ExampleName" } The folder was created successfully, and is reachable via the APIs. However it is not visible in the Outlook web UI nor in the Outlook application. Is this by design? – The end-point for the root folder is incorrect; there is no need to use the "root" keyword, and that end-point gives an error when I try to create a folder. Here is a sample to create a folder under the root folder for your reference: POST: https://outlook.office.com/api/v2.0/me/MailFolders Header: authorization: bearer {token} content-type: application/json BODY: {"DisplayName":"FolderName"} And we will get the 201 status code and a response like the figure below:
{ "language": "en", "url": "https://stackoverflow.com/questions/36452849", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: calculate values In an order form, each row has 3 fields: quantity, price, total. How do I create a function so that when a number changes, the row subtotal is calculated and so is the overall total? Can anyone suggest a way? A: You'd need to add a listener to each row so that when the price or quantity is updated, you can get the new quantity and price and update the total column. In jQuery, something like: $('.row').on('change', function() { var quantity = $('.quantity', this).val(), // get the new quantity price = $('.price', this).val(), // get the new price total = price*quantity; $('.total', this).val(total); //set the row total var totals = $.map($('.row .total'), function(tot) { return parseFloat($(tot).val()) || 0; // get each row total into an array (tot is a DOM element, so wrap it; parse so the sum is numeric) }).reduce(function(p,c){return p+c},0); // sum them $('#total').val(totals); // set the complete total }); This assumes that each order form row container has the class row, each quantity has the class quantity, each row total has the class total and the order form total has the id total.
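Stripped of the DOM plumbing, the calculation itself is just a map and a sum; a minimal sketch (function name is illustrative):

```javascript
// Each row: { quantity, price }. Returns per-row totals and the grand total.
function computeTotals(rows) {
  const rowTotals = rows.map(r => r.quantity * r.price);
  const grandTotal = rowTotals.reduce((sum, t) => sum + t, 0);
  return { rowTotals, grandTotal };
}

const order = [
  { quantity: 2, price: 3.5 },
  { quantity: 1, price: 10 },
];
console.log(computeTotals(order));
// → { rowTotals: [ 7, 10 ], grandTotal: 17 }
```

The jQuery handler above is doing exactly this, plus reading the inputs and writing the results back into the form fields.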
{ "language": "en", "url": "https://stackoverflow.com/questions/12786618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-4" }
Q: published bot on skype but it's not working I have a bot that I've created by merging LUIS and QnA together in a single project using Microsoft Bot Builder. I published the bot to an EC2 instance from Visual Studio and I'm using the Bot Framework Emulator for testing, and it works perfectly (MUST use ngrok for tunneling). Now I want to deploy the bot on Skype. I logged into the Bot Framework Portal and I registered my bot. Now comes the configuration part. I'm not quite sure what to set as the HTTP endpoint here. I found this in the Bot Framework documentation: Complete the Configuration section of the form. Provide your bot's HTTPS messaging endpoint. This is the endpoint where your bot will receive HTTP POST messages from Bot Connector. If you built your bot by using the Bot Builder SDK, the endpoint should end with /api/messages. * *If you have already deployed your bot to the cloud, specify the endpoint generated from that deployment. *If you have not yet deployed your bot to the cloud, leave the endpoint blank for now. You will return to the Bot Framework Portal later and specify the endpoint after you've deployed your bot. When I published from Visual Studio, from the Azure App Service Activity window, I found this line: Start Web Deploy Publish the Application/package to https://ec2-00-000-000-00.compute-1.amazonaws.com:PORT/msdeploy.axd?site=bots ... I used that address for the Messaging Endpoint in the configuration and I published my app. However, when I'm testing it on Skype, I'm not receiving any messages from the bot. I don't know what the problem is exactly; does this have something to do with ngrok? Or am I missing a step here - is there something else I should be doing to deploy the bot on Skype? Maybe something to do with the app ID/password that I need to use ... I really don't know. Would really appreciate an explanation of how this works exactly. 
I don't really understand how the whole deployment procedure works exactly; it feels like I'm swimming in murky waters. A: Your endpoint is going to be the root of your deployed web application instance, plus the route that your bot is listening on. For example, one of my bots is deployed to the free version of Azure Web Sites. The URL for a site such as this is https://APPLICATION_NAME.azurewebsites.net and the route that the bot listens on is the default /api/messages. This makes the endpoint https://APPLICATION_NAME.azurewebsites.net/api/messages. If you connect directly to your app's endpoint, you should at least get a JSON dump with an error message. To make sure your site is getting deployed, drop an HTML file into the root of EC2 and see if you can access it.
{ "language": "en", "url": "https://stackoverflow.com/questions/44923000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Posting on same form in JSP I want to search for a product... so I have made a form... but my products are being retrieved in the doGet() method, and when I search for a product, the doPost() method is called... So what should I do? A: It's actually unclear what your problem is. If you want the form submit to be idempotent/bookmarkable, then just remove method="post" from the HTML <form> element. Don't forget to remove the doPost() method from the servlet as well. Or if you actually want to let the form submit to a different servlet, then just create another servlet, register/map it the same way but on a slightly different URL pattern, and finally change the action URL of the HTML <form> element.
{ "language": "en", "url": "https://stackoverflow.com/questions/5763993", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Custom Gradle Plugin Exec task with extension does not use input properly I am following the Writing Custom Plugins section of the Gradle documentation, specifically the part about Getting input from the build. The following example provided by the documentation works exactly as expected: apply plugin: GreetingPlugin greeting.message = 'Hi from Gradle' class GreetingPlugin implements Plugin<Project> { void apply(Project project) { // Add the 'greeting' extension object project.extensions.create("greeting", GreetingPluginExtension) // Add a task that uses the configuration project.task('hello') << { println project.greeting.message } } } class GreetingPluginExtension { def String message = 'Hello from GreetingPlugin' } Output: > gradle -q hello Hi from Gradle I'd like to have the custom plugin execute an external command (using the Exec task), but when giving the task a type (including types other than Exec, such as Copy), the input to the build stops working properly: // previous and following sections omitted for brevity project.task('hello', type: Exec) { println project.greeting.message } Output: > gradle -q hello Hello from GreetingPlugin Does anyone know what the issue could be? A: It is not related to the type of the task; it's a typical << misunderstanding. 
When you write project.task('hello') << { println project.greeting.message } and execute gradle hello, the following happens: configuration phase * *apply custom plugin *create task hello *set greeting.message = 'Hi from Gradle' execution phase * *run task with empty body *execute << closure { println project.greeting.message } in this scenario the output is Hi from Gradle When you write project.task('hello', type: Exec) { println project.greeting.message } and execute gradle hello, the following happens: configuration phase * *apply custom plugin *create exec task hello *execute task init closure println project.greeting.message *set greeting.message = 'Hi from Gradle' (too late, it was printed in step 3) the rest of the workflow does not matter. So, small details matter. Here's an explanation of the same topic. Solution: void apply(Project project) { project.afterEvaluate { project.task('hello', type: Exec) { println project.greeting.message } } }
{ "language": "en", "url": "https://stackoverflow.com/questions/38004295", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to obtain enrollment certificate from CA in PHP I'm writing a service that is supposed to receive a DER-format PKCS#10 certificate request in Base64 encoding from a mobile device and then return certificates obtained from the CA. I'm trying to use "https://CA-server/certsrv/mscep/mscep.dll?operation=PKIOperation&Message=urlencoded request" $ca_link_device="https://..../certsrv/mscep/mscep.dll"; $URL=$ca_link_device."?operation=PKIOperation&Message=".urlencode($BinarySecurityToken)."="; $ch3 = curl_init(); curl_setopt($ch3, CURLOPT_RETURNTRANSFER, true); curl_setopt($ch3, CURLOPT_URL, $URL); curl_setopt($ch3, CURLOPT_HEADER, 0); $cert = curl_exec($ch3); but the data that is returned contains an empty envelope. I'm most probably doing something really dumb, but unfortunately my knowledge of certificate management is close to zero. I have been trying to google around, but there are so many technical documents out there that I don't know where to start and what is relevant to me and what is not. All help much appreciated. Edit: According to one piece of documentation I should wrap my PKCS10 request into PKCS7. According to Microsoft, PKCS10 should be fine and PKCS7 is only used for certificate renewal. Which should I believe? A: In the end we just dropped the mscep.dll approach and used curl to directly send a POST with the needed parameters to the ...certsrv/certfnsh.asp page. Then we parsed the returned HTML and obtained the link for certificate download. Not a nice solution, but it worked for us.
{ "language": "en", "url": "https://stackoverflow.com/questions/14497060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to create this in flutter enter image description here I have tried different ways like transform and rotation but was not able to do it A: I think you've been getting some downvotes because you didn't share any of your previous attempts, on which a solution could be built. I think what you might want to do is use a CustomClipper to create the desired shape. I wrote you a simple one that makes a wedge shape with just 4 points, but you can use Path.arc to make it more like the image you provided: class WedgeClipper extends CustomClipper<Path> { final bool isRight; WedgeClipper({this.isRight = false}); @override Path getClip(Size size) { Path path = Path(); // mirror the clip accordingly when it's left or right aligned if (isRight) { path.addPolygon([ Offset(0, size.height / 4), Offset(size.width, 0), Offset(size.width, size.height), Offset(0, 3 * size.height / 4) ], true); } else { path.addPolygon([ Offset(0, 0), Offset(size.width, size.height / 4), Offset(size.width, 3 * size.height / 4), Offset(0, size.height) ], true); } return path; } @override bool shouldReclip(covariant CustomClipper<Path> oldClipper) { return true; } } Then to use it in your code, you can do something like: @override Widget build(BuildContext context) { return Scaffold( appBar: AppBar( title: const Text('Wedge List View'), ), body: ListView.builder( itemBuilder: (context, index) { bool isRightAligned = index % 2 == 0; return Padding( child: ClipPath( child: Material( child: SizedBox( height: 80, width: double.maxFinite, child: Row( mainAxisAlignment: isRightAligned ? MainAxisAlignment.end : MainAxisAlignment.start, children: isRightAligned ? 
[ Text('Tile to the right side'), SizedBox(width: 10), Image.network('https://upload.wikimedia.org/wikipedia/commons/b/b1/VAN_CAT.png'), ] : [ Image.network('https://upload.wikimedia.org/wikipedia/commons/b/b1/VAN_CAT.png'), SizedBox(width: 10), Text('Tile to the left side'), ],), ), color: Color(0xffdddddd), ), clipper: WedgeClipper( isRight: isRightAligned), // alternating left/right clips ), padding: EdgeInsets.symmetric(horizontal: 8.0), ); }, ), ); } It basically draws a rectangle, clips it with the CustomClipper and renders what's left. The result looks like this:
{ "language": "en", "url": "https://stackoverflow.com/questions/71409897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-3" }
Q: Inserting image to database I am trying to fetch an image from the database; insertion of images is successful, but they are not displaying when I fetch them. Here's my code for uploading an image include("p1-connect.php"); if(isset($_POST['upload'])){ $file = $_FILES['picture']; $fileName = $_FILES['picture']['name']; $fileTmpName = $_FILES['picture']['tmp_name']; $fileSize = $_FILES['picture']['size']; $fileError = $_FILES['picture']['error']; $fileType = $_FILES['picture']['type']; $fileExt = explode('.', $fileName); $fileActualExt = strtolower(end($fileExt)); $allowed = array('jpg','jpeg','png'); //allowed file types if(in_array($fileActualExt,$allowed)) { if($fileError === 0){ if($fileSize < 1000000){ $fileNameNew = uniqid('',true).".".$fileActualExt; $fileDestination = 'uploads/'.$fileNameNew; $query = "INSERT INTO image(image,image_name) VALUES('$fileName','$fileTmpName')"; mysqli_query($connect,$query); $images = addslashes(file_get_contents($_FILES['picture']['tmp_name'])); move_uploaded_file($fileTmpName, $fileDestination); }else{ echo "Your file is too big"; } } else{ echo "An error uploading your files"; } } else{ echo "You cannot upload files of this type"; }} For fetching the image <body> <form method="post" action="" enctype='multipart/form-data'> <input type='file' name='picture' /> <input type='submit' value='Upload Image' name='upload'> <?php $sql = "SELECT * FROM image"; $result =mysqli_query($connection,$sql); while($row = mysqli_fetch_array($result)){ $imagee = $row['image'];} ?> <tr> <td><img src="<?php echo $imagee; ?>" width="175" height="200"/></td> </tr> </form></body> Only broken image icons are showing. My 'image' table includes: id - int image - blob image_name - varchar Thank you in advance
{ "language": "en", "url": "https://stackoverflow.com/questions/49721424", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }