doc_23535200
My live site is an https site. My test site URLs work fine with $url = route('admin.taskmanager.taskmanager.edit', [$task->id]); which generates http://example.com/app/taskmanager/taskmanagers/89/edit, and the same URL on the live site is generated as https://example.com/app/taskmanager/taskmanagers/89/edit PROBLEM: the live site URL above returns a 502 gateway error, but when I add /en to the URL, like https://example.com/en/app/taskmanager/taskmanagers/89/edit, it works. Both sites run on nginx with the same conf files. The test (http) site automatically adds /en, but the live (https) site does not add /en to the URLs generated from routes, and those URLs throw a 502 error. Any help? @matiaslauriti my router is as below: $router->group(['prefix' => '/taskmanager'], function(Router $router) { $router->bind('taskmanager', function($id) { return app('Modules\Taskmanager\Repositories\TaskmanagerRepository')->find($id); }); $router->get('taskmanagers/{taskmanager}/edit', [ 'as' => 'admin.taskmanager.taskmanager.edit', 'uses' => 'TaskmanagerController@edit', 'middleware' => 'can:taskmanager.taskmanagers.access' ]);
doc_23535201
A B C D E 1 YRI_1 YRI_2 10761 0 2 YRI_3 YRI_3 7825 0 3 YRI_1 YRI_4 9880 0 4 YRI_1 Medit_1 79707 0 5 YRI_2 Medit_2 73865 0 6 YRI_2 Medit_3 77165 0 7 YRI_3 Medit_4 76428 0 8 YRI_3 CHB_1 8273 0 9 YRI_2 CHB_2 10668 0 10 YRI_1 CHB_3 8391 0 I would like to obtain: A B C D E 2 YRI_3 YRI_3 7825 0 4 YRI_1 Medit_1 79707 0 5 YRI_2 Medit_2 73865 0 9 YRI_2 CHB_2 10668 0 i.e. I would like to keep those rows whose numbers in columns B and C match exactly; for example, YRI_1 / Medit_1 both have a "1", so it is a wanted row, but I would not like to keep, for example, YRI_1 / Medit_10, since this is "10", although it contains a "1". I tried with awk: for i in {1..4} do awk '$2=="*$i"||$3=="*$i" {print $1,$2,$3,$4,$5}' table > desired_table done where $i was supposed to be substituted in each iteration by the next number in the list 1..4 (note that the single quotes prevent the shell from expanding $i at all), and I intended * to mean anything, because I am only interested in the number (but I guess this is not the way to do it with awk). A: You can use this awk command: awk 'split($2, a, /_/) && split($3, b, /_/) && a[2] == b[2]' file A B C D E 2 YRI_3 YRI_3 7825 0 4 YRI_1 Medit_1 79707 0 5 YRI_2 Medit_2 73865 0 9 YRI_2 CHB_2 10668 0 * *We use 2 split functions to split $2 and $3 and then compare the 2nd fields of the resulting arrays for equality. *split returns the number of elements in the resulting array. By using awk 'split($2, a, /_/) && split($3, b, /_/) && we are making sure that split is returning non-zero values. A: Remove all chars except digits ([^0-9]) from the relevant columns and print the row if they match: awk 'NR==1 || (gensub(/[^0-9]/,"","g",$2)==gensub(/[^0-9]/,"","g",$3))' file A B C D E 2 YRI_3 YRI_3 7825 0 4 YRI_1 Medit_1 79707 0 5 YRI_2 Medit_2 73865 0 9 YRI_2 CHB_2 10668 0
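For readers more at home outside awk, the same suffix-matching filter can be sketched in Python (a hypothetical cross-check, not from the original answers; the rows are inlined here instead of being read from the `table` file):

```python
# Keep rows where the numeric suffix after "_" in columns 2 and 3 matches exactly.
rows = [
    "1 YRI_1 YRI_2 10761 0",
    "2 YRI_3 YRI_3 7825 0",
    "3 YRI_1 YRI_4 9880 0",
    "4 YRI_1 Medit_1 79707 0",
]

def suffix(field):
    # "Medit_10" -> "10"; comparing whole suffixes avoids the "1" vs "10" pitfall
    return field.rsplit("_", 1)[-1]

kept = [r for r in rows if suffix(r.split()[1]) == suffix(r.split()[2])]
print(kept)  # rows 2 and 4 survive
```

Exact string comparison of the suffixes is the same idea as comparing `a[2] == b[2]` after the awk `split`.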
doc_23535202
Currently I'm calling the cuda kernels within a C++ static function wrapper, so I can call the kernels from a .cpp file (not .cu), like this: //kernels.cu: //kernel definition __global__ void kernelCall_kernel( dataRow* in, dataRow* out, void* additionalData){ //Do something }; //kernel handler, so I can compile this .cu and link it with the main project and call it within a .cpp file extern "C" void kernelCall( dataRow* in, dataRow* out, void* additionalData){ int blocksize = 256; dim3 dimBlock(blocksize); dim3 dimGrid(ceil(tableSize/(float)blocksize)); kernelCall_kernel<<<dimGrid,dimBlock>>>(in, out, additionalData); } If I call the handler as a normal function, the data printed is right. //streamProcessing.cpp //allocations and definitions of data omitted //copy data to GPU cudaMemcpy(data_d,data_h,tableSize,cudaMemcpyHostToDevice); //call: kernelCall(data_d,result_d,null); //copy data back cudaMemcpy(result_h,result_d,resultSize,cudaMemcpyDeviceToHost); //show result: printTable(result_h,resultSize);// this just iterates over and shows the data But to allow parallel copy and execution of data on the GPU I need to create a thread, so when I call it by creating a new boost::thread: //allocations, definitions of data, copy data to GPU omitted //call: boost::thread* kernelThreadOwner = new boost::thread(kernelCall, data_d,result_d,null); kernelThreadOwner->join(); //Copy data back and print omitted I just get garbage when printing the result at the end. Currently I'm just using one thread, for testing purposes, so there should not be much difference between calling it directly and creating a thread. I have no clue why calling the function directly gives the right result while creating a thread does not. Is this a problem with CUDA & boost? Am I missing something? Thank you in advance. A: The problem is that (pre CUDA 4.0) CUDA contexts are tied to the thread in which they were created. When you are using two threads, you have two contexts.
The context that the main thread is allocating and reading from, and the context used by the thread which runs the kernel, are not the same. Memory allocations are not portable between contexts. They are effectively separate memory spaces inside the same GPU. If you want to use threads in this way, you either need to refactor things so that one thread only "talks" to the GPU, and communicates with the parent via CPU memory, or use the CUDA context migration API, which allows a context to be moved from one thread to another (via cuCtxPushCurrent and cuCtxPopCurrent). Be aware that context migration isn't free and there is latency involved, so if you plan to migrate contexts around frequently, you might find it more efficient to change to a different design which preserves context-thread affinity.
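The "one thread only talks to the GPU" refactor the answer recommends can be sketched language-agnostically: a single worker thread owns the device context for its whole lifetime, and other threads submit work and receive results through queues in host memory. This is a hypothetical Python illustration of the threading pattern only (no CUDA involved; doubling stands in for a kernel launch):

```python
import threading
import queue

tasks, results = queue.Queue(), queue.Queue()

def gpu_worker():
    # In a real program this thread would create the CUDA context once
    # and keep it until shutdown, so all device memory lives in one context.
    while True:
        item = tasks.get()
        if item is None:          # shutdown sentinel
            break
        results.put(item * 2)     # stand-in for kernel launch + copy-back

worker = threading.Thread(target=gpu_worker)
worker.start()
for x in (1, 2, 3):
    tasks.put(x)
tasks.put(None)
worker.join()
out = [results.get() for _ in range(3)]
print(out)  # [2, 4, 6]
```

Because only the worker ever touches the device, the allocation/readback mismatch between contexts described above cannot occur.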
doc_23535203
cat config.json | grep -Po '"server"\s*:\s*"([^"]*)"' But I just want the part within (parentheses). I can't use a look-behind because it's variable-length. What are my options? Sample input 1: {"debug":false,"server":"dev-dutch","env":"dev"} Sample input 2: { "debug": false, "server": "dev-dutch", "env": "dev" } Desired output for both: dev-dutch I know there are probably safer/better ways to parse JSON, but I want to do this in shell, and it should run on both Ubuntu and FreeBSD without installing any external programs, so I'm OK with a grep hack. A: With GNU grep: grep -Po '"server": *"\K[^"]+' file
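If a scripting runtime happens to be available, the same extraction is easy without look-behind tricks: a capture group plays the role of grep's \K. A small hypothetical Python sketch (sample strings inlined rather than read from config.json):

```python
import re

samples = [
    '{"debug":false,"server":"dev-dutch","env":"dev"}',
    '{\n "debug": false,\n "server": "dev-dutch",\n "env": "dev"\n}',
]

# The capture group returns only the quoted value, regardless of the
# variable-length whitespace around the colon.
pat = re.compile(r'"server"\s*:\s*"([^"]*)"')
values = [pat.search(s).group(1) for s in samples]
print(values)  # ['dev-dutch', 'dev-dutch']
```

The `\s*` handles both the compact and the pretty-printed input, which is exactly why a fixed-length look-behind could not.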
doc_23535204
(1) What's the sequence of firmware/software that gets called during x86 system bootstrap? x86 power-on reset-->coreboot-->SeaBIOS->GRUB->Linux kernel? (2) If we use mini-SATA as non-volatile storage, how should the GRUB binary and configuration file be stored on mSATA, in the MBR or something? (3) How should the Linux kernel initrd be stored, in a filesystem or on a raw disk? I recall from PowerPC development that there is no constraint on where the kernel and ramdisk are stored in flash; u-boot just needs the address to bring the kernel up. A: It depends. Your proposed flow with SeaBIOS and GRUB is certainly possible, but grub2 can act as a coreboot payload, too - in that case it's coreboot->grub->Linux. Or if you don't expect to change the kernel all the time, or if you go for kexec(), you can do coreboot->Linux, with Linux in flash. Assuming you're going for a boot flow involving grub2, let's look at the other questions: With SeaBIOS, grub2 would be stored in the MBR and some spare sectors, as with PCBIOS. With grub2 as payload, it's stored in flash. No matter where grub2 resides, its configuration file, Linux kernel and initrd are best stored in a filesystem. grub2 provides drivers for nearly every modern filesystem, and it's the easiest way to maintain them from within the OS.
doc_23535205
Per my understanding, I can't reference a .NET Standard library from a .NET Framework library. I searched all over the internet but I couldn't find any way other than migrating my whole service to .NET Core / .NET Standard. Is there any other way to upgrade to the latest version of the Azure Storage SDKs? A: Is there any other way to upgrade to the latest version of the Azure Storage SDKs? A direct upgrade from version 6 to 12 is not possible, as SDK version 12 is actually quite different from older versions (9 or below). Firstly, the SDK is now split into many SDKs and there are different SDKs for each service (Blobs, Files and Queues). Thus you would need to reference different NuGet packages in your source code. Secondly, there have been many breaking changes in the SDKs, thus simply referencing the NuGet packages for version 12 is not sufficient. You will need to rewrite the code, unfortunately.
doc_23535206
I am having trouble coding the last row and column boxes. Basically, if say for Row 1, there is even a single 'Red', the last box on row 1 (1,6) should automatically turn red. If there is no red, but there is an 'amber', the last box should turn 'amber'. If there is a 'red' and an 'amber', 'red' should be given the priority. Similar logic for columns as well. What I have tried so far: Within the userform code: Private Sub Txt_Score_1_1_Change() 'This is for row 1 column 1 on the matrix' Call ScoreChange.ScoreChange("Txt_Score_1_1") Within a module: Public Sub ScoreChange(ctrlName As String) If Scorecard.Controls(ctrlName).Value = "R" Then Scorecard.Controls(ctrlName).BackColor = vbRed ElseIf Scorecard.Controls(ctrlName).Value = "G" Then Scorecard.Controls(ctrlName).BackColor = vbGreen ElseIf Scorecard.Controls(ctrlName).Value = "A" Then Scorecard.Controls(ctrlName).BackColor = vbYellow Else Scorecard.Controls(ctrlName).BackColor = vbWhite End If For i = 1 To 5 For j = 1 To 5 If Scorecard.Controls("Txt_Score_" & i & "_" & j).Value <> "" Then If Scorecard.Controls("Txt_Score_" & i & "_" & j).Value = "R" Then Scorecard.Controls("Txt_Score_" & i & "_6").Value = "R" Scorecard.Controls("Txt_Score_6_" & j).Value = "R" ElseIf Scorecard.Controls("Txt_Score_" & i & "_" & j).Value = "A" Then Scorecard.Controls("Txt_Score_" & i & "_6").Value = "A" Scorecard.Controls("Txt_Score_6_" & j).Value = "A" End If End If Next j Next i End Sub The above works to change the individual colours of the combo boxes when changed but falls apart for the 'total'/'all up' boxes. What I think needs to be done to achieve the above is that I need to write a code that recognises when all the combo boxes for a specific row/column have been filled, and then stores those values in an array, and recognises within the array, the value for the last box. Any help on how to achieve this will be appreciated. 
Also, apologies if something similar has been posted elsewhere, but I did a lot of research and couldn't find anything. Thanks. A: I think there might be a simpler way of attacking this task, and certainly an easier way of consuming all the ComboBox_Change events. If I understand your question correctly, you are saying that you have a matrix of 5 by 5 'child' ComboBoxes. You then have 5 'parent' controls that change based on the selection of the rows' children, and 5 'parent' controls that do the same for the columns' children. What you could do, therefore, is create two classes. I've called them clsChild and clsParent. The child class traps the change event and then notifies the row and column parents that a change has occurred. The parent class contains a list of its children and runs the colouring rules based on the children's selection. In terms of the rules, I've created an Enum of your colours where Red is the lowest and White is the highest, so you simply take the lowest 'score' of any of the children to colour the parent control. I've kept the same naming conventions as your post for the ComboBoxes, but I don't see why the 'parent' controls are ComboBoxes - surely you wouldn't want a user to be able to change them? I've taken the liberty then of making them Labels with the naming convention Lbl_Score_R1 ... R5 for rows and Lbl_Score_C1 ... C5 for columns. The beauty of this method is that you only need to tie up the relationships between children and parents once, and simply pass the control objects between them. This will avoid having to do your awkward string manipulation every time a change event occurs. So, the code... i. Insert a new class and call it clsChild.
Add the following code: Option Explicit Private WithEvents mCtrl As MSForms.ComboBox Private mMum As clsParent Private mDad As clsParent Private mLight As Lights Public Property Set Mum(val As clsParent) Set mMum = val Set mMum.ChildInLine = Me End Property Public Property Set Dad(val As clsParent) Set mDad = val Set mDad.ChildInLine = Me End Property Public Property Set Ctrl(val As MSForms.ComboBox) Set mCtrl = val With mCtrl .List = Array("R", "A", "G", "W") .ListIndex = 3 End With End Property Public Property Get Light() As Lights Light = mLight End Property Private Property Let Light(val As Lights) mLight = val With mCtrl Select Case mLight Case Lights.Red: .BackColor = vbRed Case Lights.Amber: .BackColor = vbYellow Case Lights.Green: .BackColor = vbGreen Case Lights.White: .BackColor = vbWhite End Select End With If Not mMum Is Nothing Then mMum.ConsumeChildChanged If Not mDad Is Nothing Then mDad.ConsumeChildChanged End Property Private Sub mCtrl_Change() Select Case mCtrl.Value Case Is = "R": Light = Red Case Is = "A": Light = Amber Case Is = "G": Light = Green Case Else: Light = White End Select End Sub ii. 
Insert another new class and call it clsParent and add the following code: Option Explicit Private mCtrl As MSForms.Label Private mChildren As Collection Private mLight As Lights Public Property Set Ctrl(val As MSForms.Label) Set mCtrl = val Set mChildren = New Collection End Property Public Property Set ChildInLine(val As clsChild) mChildren.Add val End Property Public Sub ConsumeChildChanged() Dim lowest As Lights Dim oChild As clsChild lowest = White For Each oChild In mChildren With oChild If .Light < lowest Then lowest = .Light End If End With Next Light = lowest End Sub Private Property Get Light() As Lights Light = mLight End Property Private Property Let Light(val As Lights) mLight = val With mCtrl Select Case mLight Case Lights.Red: .BackColor = vbRed Case Lights.Amber: .BackColor = vbYellow Case Lights.Green: .BackColor = vbGreen Case Else: .BackColor = vbWhite End Select End With End Property iii. At the top of any Module add the following: Public Enum Lights Red Amber Green White End Enum iv. And finally add the following to your UserForm code: Option Explicit Private mMum(1 To 5) As clsParent Private mDad(1 To 5) As clsParent Private mChild(1 To 5, 1 To 5) As clsChild Private Sub UserForm_Initialize() Dim i As Integer, j As Integer For i = 1 To 5 Set mMum(i) = New clsParent Set mMum(i).Ctrl = Me.Controls("Lbl_Score_R" & i) Set mDad(i) = New clsParent Set mDad(i).Ctrl = Me.Controls("Lbl_Score_C" & i) Next For i = 1 To 5 For j = 1 To 5 Set mChild(i, j) = New clsChild With mChild(i, j) Set .Ctrl = Me.Controls("Txt_Score_" & i & "_" & j) Set .Mum = mMum(i) Set .Dad = mDad(j) End With Next Next End Sub
doc_23535207
<bean id="sessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean"> <property name="hibernateProperties"> <props> <prop key="hibernate.hbm2ddl.auto">update</prop> <prop key="hibernate.connection.pool_size">10</prop> <prop key="hibernate.show_sql">true</prop> <prop key="hibernate.transaction.auto_close_session">true</prop> <prop key="hibernate.transaction.flush_before_completion">true</prop> <prop key="current_session_context_class">true</prop> <!--HSQL--> <prop key="hibernate.connection.datasource">java:comp/env/jdbc/xxx</prop> <prop key="hibernate.dialect">org.hibernate.dialect.SQLServerDialect</prop> <prop key="hibernate.transaction.factory_class">org.hibernate.transaction.JDBCTransactionFactory</prop> </props> </property> <property name="annotatedClasses"> <list> <value>com.mytest.examples.Person</value> <value>com.mytest.examples.Customer</value> <value>com.mytest.examples.Employee</value> </list> </property> The second sessionFactory points to the second database as follows <bean id="sessionFactory2" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean"> <property name="hibernateProperties"> <props> <prop key="hibernate.hbm2ddl.auto">update</prop> <prop key="hibernate.connection.pool_size">10</prop> <prop key="hibernate.show_sql">true</prop> <prop key="hibernate.transaction.auto_close_session">true</prop> <prop key="hibernate.transaction.flush_before_completion">true</prop> <prop key="current_session_context_class">true</prop> <!--HSQL--> <prop key="hibernate.connection.datasource">java:comp/env/jdbc/yzz</prop> <prop key="hibernate.dialect">org.hibernate.dialect.SQLServerDialect</prop> <prop key="hibernate.transaction.factory_class">org.hibernate.transaction.JDBCTransactionFactory</prop> </props> </property> <property name="annotatedClasses"> <list> <value>com.mytest.examples.Container</value> <value>com.mytest.examples.Credentials</value> </list> </property> Now when I try and make use of 
these factories using the following Session session = getHibernateTemplate().getSessionFactory().openSession(); I always get the first factory by default and am able to access all the tables in its schema but not the second one. How do I specify which factory I want to use? A: The HibernateTemplate constructor takes a SessionFactory as argument. I don't know what getHibernateTemplate() does in your code, but it should return a HibernateTemplate built with one of the SessionFactory beans you defined (either by declaring them in the spring context xml file, or by constructing them in Java from one of the injected session factories). Note that, as the documentation of HibernateTemplate says (in bold): As of Hibernate 3.0.1, transactional Hibernate access code can also be coded in plain Hibernate style. Hence, for newly started projects, consider adopting the standard Hibernate3 style of coding data access objects instead, based on SessionFactory.getCurrentSession(). I would inject the session factory directly, and use the Hibernate API directly. HibernateTemplate doesn't bring much over the Hibernate API and often gets in the way, IMHO. (For example by not providing an equivalent to Query.uniqueResult()).
doc_23535208
$customerId = $_GET['u']; $password = $_GET['p']; $localCustomer = Mage::getModel('customer/customer') ->getCollection() ->addAttributeToSelect('customer_id') ->addAttributeToFilter('customer_id', $customerId) ->load(); $customer = Mage::getModel('customer/customer')->load($localCustomer->getData()[0]['entity_id']); umask(0); ob_start(); session_start(); Mage::app('default'); Mage::getSingleton("core/session", array("name" => "frontend")); $session = Mage::getSingleton('customer/session'); $session->login($customer->getData('customer_id'), $password); $session->setCustomerAsLoggedIn($customer); header('Location: '.$forwardUrl); This works nicely in Firefox and Chrome, but for some reason, it doesn't in Internet Explorer 11 and I can't seem to understand why. If I query data from customer/session after this, it is there, but as soon as I navigate to the start page, I'm no longer logged in. Only in Internet Explorer; it works perfectly with normal browsers. Any ideas or hints as to why this is happening? I'm getting desperate. A: Try this <?php function loginUser( $email, $password ) { require_once ("app/Mage.php"); umask(0); ob_start(); session_start(); Mage::app('default'); Mage::getSingleton("core/session", array("name" => "frontend")); $websiteId = Mage::app()->getWebsite()->getId(); $store = Mage::app()->getStore(); $customer = Mage::getModel("customer/customer"); $customer->website_id = $websiteId; $customer->setStore($store); try { $customer->loadByEmail($email); $session = Mage::getSingleton('customer/session')->setCustomerAsLoggedIn($customer); $session->login($email, $password); }catch(Exception $e){ } } ?>
doc_23535209
This wouldn't normally be an issue but recently I noticed that the class list is unreadable because the long class names make it too wide to fit on a screen. Is there a way to make Doxygen break names of classes in the class list to more lines? Is there perhaps a way to hide specializations of a template class from class list while retaining the general template class? Is there a better solution? I managed to find a silly work-around by hiding the classes in a namespace and then immediately importing this namespace to the global namespace, so that the names of those classes would not appear on the list, unless the namespace is clicked or detail level is increased. The obvious downside is that the classes now do not appear on the list (some of those are fairly important and I'd like them to be there). I could also remove the following style: .directory td.entry { white-space: nowrap; } This can be done by saving this: .directory td.entry { white-space: normal; } as modify.css and specifying it under HTML_EXTRA_STYLESHEET. There are more word wrap tags however (e.g. flex-wrap), so additional editing might be required. A: Finally, I went with modifying the css. I ended up using: .directory td.entry { white-space: normal; /*width: 50%;*/ /* does not work, makes "Related Pages" look bad */ min-width: 512px; /* better, unless you have a 640x480 screen */ } I saved this as doxygen_modify.css and specified path to it in HTML_EXTRA_STYLESHEET (note that if named doxygen.css, it will not be renamed automatically and instead it will be replaced by the main style sheet - and hence you will not see any changes).
doc_23535210
Originally I installed Linux Mint 17.2 (and later updated to 17.3) to do the job, however there seemed to be some conflict with my box causing intermittent booting problems (it would often just stop dead and the monitor would go into power save; this didn't happen all the time but enough to be a problem)... So I decided to update to Linux Mint 18 to see if the problem persisted, which I'm happy to say it does not - no boot problems at all. However, I've instead run into a new problem :( Since Linux Mint 18 has PHP 7 by default in its repositories, it is not compatible with what I'm trying to run, so I've been attempting to get PHP 5.6 installed, which I believe I have done; at least it tells me I have it installed when I check the version in the terminal. php -v PHP 5.6.27-1+deb.sury.org~xenial+1 (cli) Copyright (c) 1997-2016 The PHP Group Zend Engine v2.6.0, Copyright (c) 1998-2016 Zend Technologies with Zend OPcache v7.0.6-dev, Copyright (c) 1999-2016, by Zend Technologies I achieved this with the following: $ sudo apt-get install python-software-properties $ sudo add-apt-repository ppa:ondrej/php $ sudo apt-get update $ sudo apt-get install -y php5.6 php5.6-mcrypt php5.6-gd http://tecadmin.net/install-laravel-framework-on-ubuntu/ However, following the same walk-through, I am unable to install as described because I get the following error: Package libapache2-mod-php5 is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source E: Package 'libapache2-mod-php5' has no installation candidate and yet when I install apache2 on its own $ sudo apt-get install apache2 while Apache works, PHP fails to work at all. I created a page with the phpinfo(); function and I simply get a blank page :( Can somebody please help me shed some light on this? Thanks for any help in advance :( Regards, A: Try enabling the versioned module instead: sudo a2enmod php5.6 sudo service apache2 restart
doc_23535211
How can I access or directly make a list of all foo::bar values from the hieradata file (below) in a Puppet module with lookup() or a better way? --- foo::bar: 'some uniq name': baz: 12345 ... 'another uniq name': baz: 54321 ... So if it were possible to use wildcards, the key path would look like this -> foo::bar::*::baz. A: This requires the use of the lookup function, a lambda iterator, and hash access syntax, so it actually is not that easy, although the code may make it seem that way. We need to iterate over the values of the keys inside the foo::bar hash. We can start off with that via: lookup('foo::bar', Hash).each |String $key, Hash $value| { # first $key is 'some uniq name' string # first $value is 'some uniq name' hash } Now we need to access the values for the baz key inside each nested hash. We can do that with the normal syntax for accessing values of keys inside a hash: lookup('foo::bar', Hash).each |String $key, Hash $value| { $value['baz'] # first value is 12345 } However, we need to store these values in a variable so they are retained instead of being discarded after exiting the lambda scope. Therefore, we need a variable to store the return value of the lambda iterator, and we need a lambda iterator that returns a modified array: $bazes = lookup('foo::bar', Hash).map |String $key, Hash $value| { $value['baz'] } Thus achieving the goal of storing an array (or list, as you put it) of all the baz values inside the hieradata. Although the code is short, it is arguably not that simple. Helpful documentation - lookup: https://puppet.com/docs/puppet/5.2/hiera_use_function.html lambda iterator map: https://puppet.com/docs/puppet/5.3/function.html#map accessing hash values: https://puppet.com/docs/puppet/5.3/lang_data_hash.html#accessing-values
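The shape of the map-over-nested-hash step is easy to see outside Puppet. A hypothetical Python sketch, modelling the foo::bar hiera hash as a plain dict (names invented for illustration):

```python
# The hiera hash: top-level keys are the uniq names, each mapping to a
# nested hash that carries the 'baz' value we want to collect.
foo_bar = {
    "some uniq name": {"baz": 12345},
    "another uniq name": {"baz": 54321},
}

# Equivalent of Puppet's map |$key, $value| { $value['baz'] }:
# iterate over the nested hashes and collect each 'baz'.
bazes = [value["baz"] for value in foo_bar.values()]
print(sorted(bazes))  # [12345, 54321]
```

As in the Puppet version, the outer keys are only iteration handles; only the nested 'baz' values survive into the result list.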
doc_23535212
import React from 'react'; import PropTypes from 'prop-types'; const AbTest = ({ components, criteriaToMatch }) => { let componentToRender; components.forEach((component) => { if (component.criteria === criteriaToMatch) { componentToRender = component.instance; } }); return componentToRender; }; AbTest.propTypes = { components: PropTypes.arrayOf(PropTypes.shape({ instance: PropTypes.func, criteria: PropTypes.any, })), criteriaToMatch: PropTypes.oneOfType([ PropTypes.bool, PropTypes.string, PropTypes.number, ]), }; export default AbTest; You'll then use it like this: import MyComponentA from '../my-component-a'; import MyComponentB from '../my-component-b'; <AbTest components={[ { instance: MyComponentA, criteria: 'A' }, { instance: MyComponentB, criteria: 'B' }, ]} criteriaToMatch="A" /> So you'll pass it an array of components each having some criteria, and which ever matches gets rendered. But I'm getting an error saying: Functions are not valid as a React child A: From AbTest component, you must return the component instance like const AbTest = ({ components, criteriaToMatch }) => { let ComponentToRender; components.forEach((component) => { if (component.criteria === criteriaToMatch) { ComponentToRender = component.instance; } }); return <ComponentToRender />; };
doc_23535213
my products Adapter public class productsAdapter extends RecyclerView.Adapter<productsAdapter.MyViewHolder> { private Context mContext; private List<products> productList; public class MyViewHolder extends RecyclerView.ViewHolder { public TextView title, count; public ImageView thumbnail, cart; private Context context; public MyViewHolder(View view) { super(view); context = itemView.getContext(); title = (TextView) view.findViewById(R.id.title); count = (TextView) view.findViewById(R.id.count); thumbnail = (ImageView) view.findViewById(R.id.thumbnail); cart = (ImageView) view.findViewById(R.id.cart); } } public productsAdapter(Context mContext, List<products> productList) { this.mContext = mContext; this.productList = productList; } @Override public MyViewHolder onCreateViewHolder(ViewGroup parent, int viewType) { View itemView = LayoutInflater.from(parent.getContext()) .inflate(R.layout.products_card, parent, false); return new MyViewHolder(itemView); } @Override public void onBindViewHolder(final MyViewHolder holder, final int position) { final products p = productList.get(position); holder.title.setText(p.getName()); holder.count.setText(p.getPrice() + "L.E"); Glide.with(mContext).load(p.getThumbnail()).into(holder.thumbnail); holder.thumbnail.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { Intent intent = new Intent(v.getContext(), product_details.class); Bundle bundle = new Bundle(); bundle.putString("img", p.getThumbnail()); bundle.putString("name", p.getName()); bundle.putInt("price", p.getPrice()); intent.putExtras(bundle); v.getContext().startActivity(intent); } }); holder.cart.setImageResource(R.drawable.ic_shopping_cart_black_24dp); holder.cart.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View view) { holder.cart.setImageResource(R.drawable.ic_add_cart); SharedPreferences preferences = PreferenceManager.getDefaultSharedPreferences(mContext); SharedPreferences.Editor editor = 
preferences.edit(); Gson gson = new Gson(); String jsonFavorites = gson.toJson(productList); editor.putString(FAVORITES, jsonFavorites); editor.commit(); } }); } public int getItemCount() { return productList.size(); } } my cart activity public class CartActivity extends AppCompatActivity { Button l; ImageView imv; Toolbar t; RecyclerView rv; RecyclerView.LayoutManager layoutmanager; RecyclerView.Adapter adapter; List<products> cartitems; ArrayList<products> selected_items_list = new ArrayList<>(); SharedPreference sharedPreference; public static final String MyPREFERENCES = "MyPrefs"; int countt = 0; boolean edit_mode = false; TextView counterr; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_cart); rv = (RecyclerView) findViewById(R.id.mycartrecycler); layoutmanager = new LinearLayoutManager(this); rv.setLayoutManager(layoutmanager); SharedPreferences preferences = PreferenceManager.getDefaultSharedPreferences(this); String jsonFavorites = preferences.getString(FAVORITES, null); Gson gson = new Gson(); products[] favoriteItems = gson.fromJson(jsonFavorites, products[].class); cartitems = Arrays.asList(favoriteItems); cartitems = new ArrayList<products>(cartitems); adapter = new CartAdapter(cartitems, CartActivity.this); rv.setAdapter(adapter); } } A: Because you are adding the whole list to SharedPreferences - look at what you did below: SharedPreferences preferences = PreferenceManager.getDefaultSharedPreferences(mContext); SharedPreferences.Editor editor = preferences.edit(); Gson gson = new Gson(); String jsonFavorites = gson.toJson(productList); 1) If you want only the specific selected result, then you have to make a POJO class for the product containing details like the title, count and thumbnail, and make getter and setter methods for all of its fields. In the onClick method of the cart you have to serialize that POJO object and read it back in your cart activity. That's how you can get your desired output. 2) The second way is to store each detail as a separate SharedPreferences field, like editor.putString("title", title.getText().toString()); and store the count and thumbnail as values in the same way. I hope it will help you... A: You are saving the whole productList in SharedPreferences. Try it like this: final Product product = productList.get(position); // Some other code holder.cart.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View view) { holder.cart.setImageResource(R.drawable.ic_add_cart); SharedPreferences preferences = PreferenceManager.getDefaultSharedPreferences(mContext); SharedPreferences.Editor editor = preferences.edit(); Gson gson = new Gson(); String jsonFavorites = gson.toJson(product); editor.putString(FAVORITES, jsonFavorites); editor.commit(); } }); In CartActivity: Gson gson = new Gson(); Product product = gson.fromJson(jsonFavorites, Product.class);
doc_23535214
android.os.Process.killProcess(android.os.Process.myPid()); System.exit(1); System.exit(0); All of these work, but the operating system shows me a message saying that the application has stopped, with a button that says "restart application". Is there a way to avoid that? A: Did you try this.finishAffinity(); or finishAndRemoveTask(); ? Using System.exit(0); is a bad way to finish an app.
doc_23535215
if(count >= SIZE) { Ship *hold = new Ship[SIZE]; // to hold for a while value of list for(int q = 0; q < SIZE; q++) hold[q] = list[q]; delete[] list; //delete list Ship *list = new Ship[SIZE + 10]; //set a new size for list for(int q = 0; q < SIZE - 10; q++) list[q] = hold[q]; delete[] hold; } A: There is no need for hold, just copy the old list to the new one. But the real mistakes in your code are that you don't change SIZE but you assume that it has changed Ship *list = new Ship[SIZE + 10]; //set a new size for list for(int q = 0; q < SIZE - 10; q++) should be Ship *list = new Ship[SIZE + 10]; //set a new size for list for(int q = 0; q < SIZE; q++) ... SIZE += 10; and that you declare a new list variable, when you should be changing the existing list variable. Ship *list = new Ship[SIZE + 10]; //set a new size for list should be list = new Ship[SIZE + 10]; //set a new size for list Here's your code with all mistakes fixed if(count >= SIZE) { Ship *new_list = new Ship[SIZE + 10]; //make the new list for(int q = 0; q < SIZE; q++) //copy from the old list new_list[q] = list[q]; delete[] list; //delete the old list list = new_list; //use the new list SIZE += 10; //set a new size for list } Now here's the same code using std::vector vec.resize(vec.size() + 10); std::vector is a little bit easier (actually it's a whole lot easier). You should use it. A: be careful when working with pointers. 
// -------------- include directives: #include <iostream> #include <string> // -------------- your class definitions class ship { public: ship() {} virtual ~ship() {} }; // ------- declaration state #define ADDITIONAL_RESERVED_SLOTS (10) unsigned int arraySize{3}; unsigned int shipCount{}; // start with zero count ship* shipArray{nullptr}; // it's very important to null-initialize pointers void addNewShip(const ship& new_ship); // function prototype Put these lines in your initialization block of code: // ------- allocate the array dynamically with 3 slots shipArray = new ship[arraySize]; // new[] default-constructs every element // (don't memset a class with virtual functions -- it would clobber the vtable pointer) Add a new ship to your array by defining a generic function like this: void addNewShip(const ship& new_ship) { if(++shipCount > arraySize) { arraySize += ADDITIONAL_RESERVED_SLOTS; // allocate a new array ship* pNewShipArray = new ship[arraySize]; // copy the previous data into the new array element-wise (not memcpy, // because ship is not trivially copyable) for(unsigned int i = 0; i < shipCount - 1; ++i) pNewShipArray[i] = shipArray[i]; // delete the previous array delete[] shipArray; // now you can use shipArray shipArray = pNewShipArray; } // add the new ship into your array shipArray[shipCount - 1] = new_ship; // be careful! if you have raw pointers inside your ship class you must handle them too. }
doc_23535216
Source : HighMaps for Angular Component.ts file below displayMapChart(){this.mapChart = new MapChart({ chart: { map: 'custom/world-robinson' }, title: { text: "Regions" }, mapNavigation: { enabled: true, buttonOptions: { alignTo: 'spacingBox' } }, colorAxis: { min: 0 }, series: [ { type: 'map', name: 'Text here', states: { hover: { color: '#BADA55' } }, dataLabels: { enabled: true, format: '{point.name}' }, allAreas: false, data: [ ['fo', 0], ['um', 1], ['us', 2], ['jp', 3], ['sc', 4], ['in', 5], ['fr', 6], ['fm', 7], ['cn', 8], ['pt', 9], ['sw', 10], ['sh', 11], ['br', 12], ['ki', 13], ['ph', 14], ['mx', 15], ['es', 16], ['bu', 17], ['mv', 18], ['sp', 19], ['gb', 20], ['gr', 21], ['as', 22], ['dk', 23], ['gl', 24], ['gu', 25], ['mp', 26], ['pr', 27], ['vi', 28], ['ca', 29], ['st', 30], ['cv', 31], ['dm', 32], ['nl', 33], ['jm', 34], ['ws', 35], ['om', 36], ['vc', 37], ['tr', 38], ['bd', 39], ['lc', 40], ['nr', 41], ['no', 42], ['kn', 43], ['bh', 44], ['to', 45], ['fi', 46], ['id', 47], ['mu', 48], ['se', 49], ['tt', 50], ['my', 51], ['pa', 52], ['pw', 53], ['tv', 54], ['mh', 55], ['cl', 56], ['th', 57], ['gd', 58], ['ee', 59], ['ag', 60], ['tw', 61], ['bb', 62], ['it', 63], ['mt', 64], ['vu', 65], ['sg', 66], ['cy', 67], ['lk', 68], ['km', 69], ['fj', 70], ['ru', 71], ['va', 72], ['sm', 73], ['kz', 74], ['az', 75], ['tj', 76], ['ls', 77], ['uz', 78], ['ma', 79], ['co', 80], ['tl', 81], ['tz', 82], ['ar', 83], ['sa', 84], ['pk', 85], ['ye', 86], ['ae', 87], ['ke', 88], ['pe', 89], ['do', 90], ['ht', 91], ['pg', 92], ['ao', 93], ['kh', 94], ['vn', 95], ['mz', 96], ['cr', 97], ['bj', 98], ['ng', 99], ['ir', 100], ['sv', 101], ['sl', 102], ['gw', 103], ['hr', 104], ['bz', 105], ['za', 106], ['cf', 107], ['sd', 108], ['cd', 109], ['kw', 110], ['de', 111], ['be', 112], ['ie', 113], ['kp', 114], ['kr', 115], ['gy', 116], ['hn', 117], ['mm', 118], ['ga', 119], ['gq', 120], ['ni', 121], ['lv', 122], ['ug', 123], ['mw', 124], ['am', 125], ['sx', 126], ['tm', 127], ['zm', 
128], ['nc', 129], ['mr', 130], ['dz', 131], ['lt', 132], ['et', 133], ['er', 134], ['gh', 135], ['si', 136], ['gt', 137], ['ba', 138], ['jo', 139], ['sy', 140], ['mc', 141], ['al', 142], ['uy', 143], ['cnm', 144], ['mn', 145], ['rw', 146], ['so', 147], ['bo', 148], ['cm', 149], ['cg', 150], ['eh', 151], ['rs', 152], ['me', 153], ['tg', 154], ['la', 155], ['af', 156], ['ua', 157], ['sk', 158], ['jk', 159], ['bg', 160], ['qa', 161], ['li', 162], ['at', 163], ['sz', 164], ['hu', 165], ['ro', 166], ['ne', 167], ['lu', 168], ['ad', 169], ['ci', 170], ['lr', 171], ['bn', 172], ['iq', 173], ['ge', 174], ['gm', 175], ['ch', 176], ['td', 177], ['kv', 178], ['lb', 179], ['dj', 180], ['bi', 181], ['sr', 182], ['il', 183], ['ml', 184], ['sn', 185], ['gn', 186], ['zw', 187], ['pl', 188], ['mk', 189], ['py', 190], ['by', 191], ['cz', 192], ['bf', 193], ['na', 194], ['ly', 195], ['tn', 196], ['bt', 197], ['md', 198], ['ss', 199], ['bw', 200], ['bs', 201], ['nz', 202], ['cu', 203], ['ec', 204], ['au', 205], ['ve', 206], ['sb', 207], ['mg', 208], ['is', 209], ['eg', 210], ['kg', 211], ['np', 212] ] } ]});} Template.html <mat-card class="md-elevation-z7"> <div style="height: 346px;" [chart]="mapChart"></div> </mat-card> A: Perhaps you don't load map data. Check this description from highcharts-angular official wrapper, which I recommend you to use: https://github.com/highcharts/highcharts-angular#to-load-a-map-for-highmaps. Below you will find the online example of Angular with highcharts map (using highcharts-angular wrapper). 
app.module.ts: import { BrowserModule } from "@angular/platform-browser"; import { NgModule } from "@angular/core"; import { HighchartsChartModule } from "highcharts-angular"; import { ChartComponent } from "./chart.component"; import { AppComponent } from "./app.component"; @NgModule({ declarations: [AppComponent, ChartComponent], imports: [BrowserModule, HighchartsChartModule], providers: [], bootstrap: [AppComponent] }) export class AppModule {} chart.component.ts: import { Component, OnInit } from "@angular/core"; import * as Highcharts from 'highcharts'; import HC_map from 'highcharts/modules/map'; HC_map(Highcharts); require("./worldmap")(Highcharts); @Component({ selector: "app-chart", templateUrl: "./chart.component.html" }) export class ChartComponent implements OnInit { title = "app"; chart; updateFromInput = false; Highcharts = Highcharts; chartConstructor = "mapChart"; chartCallback; chartOptions = { chart: { map: 'myMapName' }, title: { text: 'Highmaps basic demo' }, subtitle: { text: 'Source map: <a href="http://code.highcharts.com/mapdata/custom/world.js">World, Miller projection, medium resolution</a>' }, mapNavigation: { enabled: true, buttonOptions: { alignTo: 'spacingBox' } }, colorAxis: { min: 0 }, series: [{ name: 'Random data', states: { hover: { color: '#BADA55' } }, dataLabels: { enabled: true, format: '{point.name}' }, allAreas: false, data: [ ['fo', 0], ['um', 1], ['us', 2], ['jp', 3], ['sc', 4], ['in', 5], ['fr', 6], ['fm', 7], ['cn', 8], ['pt', 9], ['sw', 10], ['sh', 11], ['br', 12], ['ki', 13], ['ph', 14], ['mx', 15], ['es', 16], ['bu', 17], ['mv', 18], ['sp', 19], ['gb', 20], ['gr', 21], ['as', 22], ['dk', 23], ['gl', 24], ['gu', 25], ['mp', 26], ['pr', 27], ['vi', 28], ['ca', 29], ['st', 30], ['cv', 31], ['dm', 32], ['nl', 33], ['jm', 34], ['ws', 35], ['om', 36], ['vc', 37], ['tr', 38], ['bd', 39], ['lc', 40], ['nr', 41], ['no', 42], ['kn', 43], ['bh', 44], ['to', 45], ['fi', 46], ['id', 47], ['mu', 48], ['se', 49], ['tt', 50], ['my', 
51], ['pa', 52], ['pw', 53], ['tv', 54], ['mh', 55], ['cl', 56], ['th', 57], ['gd', 58], ['ee', 59], ['ag', 60], ['tw', 61], ['bb', 62], ['it', 63], ['mt', 64], ['vu', 65], ['sg', 66], ['cy', 67], ['lk', 68], ['km', 69], ['fj', 70], ['ru', 71], ['va', 72], ['sm', 73], ['kz', 74], ['az', 75], ['tj', 76], ['ls', 77], ['uz', 78], ['ma', 79], ['co', 80], ['tl', 81], ['tz', 82], ['ar', 83], ['sa', 84], ['pk', 85], ['ye', 86], ['ae', 87], ['ke', 88], ['pe', 89], ['do', 90], ['ht', 91], ['pg', 92], ['ao', 93], ['kh', 94], ['vn', 95], ['mz', 96], ['cr', 97], ['bj', 98], ['ng', 99], ['ir', 100], ['sv', 101], ['sl', 102], ['gw', 103], ['hr', 104], ['bz', 105], ['za', 106], ['cf', 107], ['sd', 108], ['cd', 109], ['kw', 110], ['de', 111], ['be', 112], ['ie', 113], ['kp', 114], ['kr', 115], ['gy', 116], ['hn', 117], ['mm', 118], ['ga', 119], ['gq', 120], ['ni', 121], ['lv', 122], ['ug', 123], ['mw', 124], ['am', 125], ['sx', 126], ['tm', 127], ['zm', 128], ['nc', 129], ['mr', 130], ['dz', 131], ['lt', 132], ['et', 133], ['er', 134], ['gh', 135], ['si', 136], ['gt', 137], ['ba', 138], ['jo', 139], ['sy', 140], ['mc', 141], ['al', 142], ['uy', 143], ['cnm', 144], ['mn', 145], ['rw', 146], ['so', 147], ['bo', 148], ['cm', 149], ['cg', 150], ['eh', 151], ['rs', 152], ['me', 153], ['tg', 154], ['la', 155], ['af', 156], ['ua', 157], ['sk', 158], ['jk', 159], ['bg', 160], ['qa', 161], ['li', 162], ['at', 163], ['sz', 164], ['hu', 165], ['ro', 166], ['ne', 167], ['lu', 168], ['ad', 169], ['ci', 170], ['lr', 171], ['bn', 172], ['iq', 173], ['ge', 174], ['gm', 175], ['ch', 176], ['td', 177], ['kv', 178], ['lb', 179], ['dj', 180], ['bi', 181], ['sr', 182], ['il', 183], ['ml', 184], ['sn', 185], ['gn', 186], ['zw', 187], ['pl', 188], ['mk', 189], ['py', 190], ['by', 191], ['cz', 192], ['bf', 193], ['na', 194], ['ly', 195], ['tn', 196], ['bt', 197], ['md', 198], ['ss', 199], ['bw', 200], ['bs', 201], ['nz', 202], ['cu', 203], ['ec', 204], ['au', 205], ['ve', 206], ['sb', 207], ['mg', 208], 
['is', 209], ['eg', 210], ['kg', 211], ['np', 212] ] }] }; constructor() { const self = this; this.chartCallback = chart => { self.chart = chart; }; } ngOnInit() {} update_chart() { const self = this, chart = this.chart; chart.showLoading(); setTimeout(() => { chart.hideLoading(); self.chartOptions.series = [ { data: [10, 25, 15] } ]; self.updateFromInput = true; }, 2000); } } chart.component.html: <div class="boxChart__container"> <div> <highcharts-chart id="container" [Highcharts]="Highcharts" [constructorType]="chartConstructor" [options]="chartOptions" [callbackFunction]="chartCallback" [(update)]=updateFromInput [oneToOne]="true" style="width: 100%; height: 400px; display: block;"> </highcharts-chart> </div> </div> Demo: * *https://stackblitz.com/edit/angular-ax7gdr
doc_23535217
logs/foo/per_process_dir/journal.log My little daemon is written in Python and uses the watchdog module to watch over the files (inotify is used by the module on Linux). I simply ask the module to monitor the foo/ subdirectory (recursively) and it notifies me whenever a journal is appended... This all works but... The whole logs/ directory is rotated when the application is restarted -- and I'd like my daemon to notice this automatically so that there'd be no need to restart it too. I expected to receive a "moved" event -- when logs/ is renamed to logs-Sunday/, for example -- but it does not happen... The daemon is small currently and I loathe having to enlarge it by adding the code for watching the logs/ folder separately. Is there some other way, perhaps? A: If you want to catch renames of the logs directory, you will need to attach your observer to its parent directory. That is, if your logs directory is actually appname/logs, then instead of calling, e.g.: observer.schedule(event_handler, 'appname/logs', recursive=True) You would use: observer.schedule(event_handler, 'appname', recursive=True) (And you will subsequently need to filter events and ignore those that are outside of the logs directory.) This happens because your filesystem observer is attached to the logs directory. When you rename the logs directory, your observer continues to monitor it...under the new name. That is, the observer is attached to the inode, not to the path.
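The event-filtering step mentioned in the answer can be kept separate from the watchdog-specific code. Below is a hedged sketch of just that filter (plain stdlib; 'appname' is a placeholder, and the schedule call in the comments follows the answer above):

```python
import os.path

def in_logs_subtree(event_path, parent_dir, logs_name="logs"):
    """True if event_path lies under parent_dir/logs_name."""
    logs_root = os.path.join(os.path.abspath(parent_dir), logs_name)
    path = os.path.abspath(event_path)
    return path == logs_root or path.startswith(logs_root + os.sep)

# In the real daemon you would schedule the observer on the parent directory:
#     observer.schedule(event_handler, 'appname', recursive=True)
# and inside the handler's on_any_event(), skip events where
#     not in_logs_subtree(event.src_path, 'appname')
# Renames of logs/ itself then arrive as moved events on the parent watch.
```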
doc_23535218
//Kuvaesitys <?php function tf_kuvaesitys($wp_customize) { $wp_customize->add_section('tf-kuvaesitys-section', array( 'title' => 'Kuvaesitys' )); $wp_customize->add_setting('tf-kuvaesitys-kuvayksi'); $wp_customize->add_control( new WP_Customize_Cropped_Image_Control($wp_customize, 'tf-kuvaesitys-kuvayksi-control', array( 'label' => 'Kuva 1', 'section' => 'tf-kuvaesitys-section', 'settings' => 'tf-kuvaesitys-kuvayksi', 'width' => 1500, 'height' => 573 ))); $wp_customize->add_setting('tf-kuvaesitys-kuvakaksi'); $wp_customize->add_control( new WP_Customize_Cropped_Image_Control($wp_customize, 'tf-kuvaesitys-kuvakaksi-control', array( 'label' => 'Kuva 2', 'section' => 'tf-kuvaesitys-section', 'settings' => 'tf-kuvaesitys-kuvakaksi', 'width' => 1500, 'height' => 573 ))); $wp_customize->add_setting('tf-kuvaesitys-kuvakolme'); $wp_customize->add_control( new WP_Customize_Cropped_Image_Control($wp_customize, 'tf-kuvaesitys-kuvakolme-control', array( 'label' => 'Kuva 3', 'section' => 'tf-kuvaesitys-section', 'settings' => 'tf-kuvaesitys-kuvakolme', 'width' => 1500, 'height' => 573 ))); $wp_customize->add_setting('tf-kuvaesitys-kuvanelja'); $wp_customize->add_control( new WP_Customize_Cropped_Image_Control($wp_customize, 'tf-kuvaesitys-kuvanelja-control', array( 'label' => 'Kuva 4', 'section' => 'tf-kuvaesitys-section', 'settings' => 'tf-kuvaesitys-kuvanelja', 'width' => 1500, 'height' => 573 ))); } add_action('customize_register', 'tf_kuvaesitys'); ?> Now Media Library shows only list view (grid view is hanging with loader rolling) and when I upload images in Media Library it says "//Kuvaesitys" under "Maximum upload file size: 64MB" and in blue bar it says "Crunching...". When uploading image in Customize view it gives this error message: "An error occurred in the upload. Please try again later." My wild guess is there's something in the php above that is doing this but I can't find out what.
doc_23535219
{ "SdAff": { "userId": " svc_gen_nsm ", "id": "IFMS12345", "additionalaffecteditems": [ { "itemType": "NODE-ID", "ItemName": "22BT_ORNC03", "restoreTime": " 25-JUL-2018 14:11:48" }, { "ItemType": "CCT", "ItemName": "A_circuit_id", "restoreTime": " 25-JUL-2018 14:11:48" },.....] } } The additionalaffecteditems array can have multiple values, and I will have to read the values one by one and insert them into the DB using Hibernate. My entity class name is SdAff, where I have the DB mapping for each item. Can someone suggest how to accomplish this. Thanks !!
doc_23535220
IIS 7.0, IIS 7.5, and IIS 8.0 define several HTTP status codes that indicate a more specific cause of a 401 error. The following specific HTTP status codes are displayed in the client browser but are not displayed in the IIS log: * *401.1 - Logon failed. *401.2 - Logon failed due to server configuration. *401.3 - Unauthorized due to ACL on resource. *401.4 - Authorization failed by filter. *401.5 - Authorization failed by ISAPI/CGI application. I need some custom extended HTTP codes like these: * *401.10 - User not found *401.11 - User password mismatch *401.12 - User account is locked *401.13 - User account is expired *... How to return these extended 401 error codes from .NET MVC application? A: How to return these extended 401 error codes from .NET MVC application? You cannot return them from your application in the sense that the user will receive them, as HTTP does not support substatus codes. They are used only for IIS logging and tracing, only the "major" status code is returned. A: Use Response.StatusCode and Response.SubStatusCode A: In your action method you set the status codes manually: Response.StatusCode = 401; Response.SubStatusCode = 10; Additionally, you can use the HttpStatusCode enumeration in System.Net: Response.StatusCode = (int)HttpStatusCode.Unauthorized;
doc_23535221
if [ -f "$(brew --prefix)/opt/bash-git-prompt/share/gitprompt.sh" ];then source "$(brew --prefix)/opt/bash-git-prompt/share/gitprompt.sh" fi but I get this error Missing end to balance this if statement .bash_profile (line 2): if [ -f "$(brew --prefix)/opt/bash-git- prompt/share/gitprompt.sh" ]; then ^ from sourcing file .bash_profile called on standard input Does anyone have an idea why? I have the code from here https://github.com/magicmonty/bash-git-prompt A: This error message: Missing end to balance this if statement .bash_profile (line 2): if [ -f "$(brew --prefix)/opt/bash-git- prompt/share/gitprompt.sh" ]; then ^ from sourcing file .bash_profile called on standard input is generated by the fish shell. The .bash_profile file is intended only to be executed (sourced) by the bash shell. fish is a different shell, with different syntax; it's not compatible with bash. If you're using fish as your interactive shell, and you want some commands to be executed automatically when you start a new shell, you'll need to translate the bash-specific commands to fish syntax and add them to your fish startup file. (Not a lot of people use fish, so providers of software packages aren't likely to provide startup commands in fish syntax -- but this package apparently does; see chepner's answer.) A: Although the linked repository contains a script for fish, the README does not provide any directions for how to use that script. Not having used fish in several years, I think what you want to do is add if status --is-login source (brew --prefix)"/opt/bash-git-prompt/share/gitprompt.fish" end to ~/.config/fish/config.fish instead. The if status command prevents the file from being unnecessarily sourced if you aren't starting an interactive shell.
doc_23535222
Simplified I have the following situation: one table rates with columns rate and converted generated with the scaffold statement within ruby. The column rate will be shown in the view, but the converted column not which will only be used in the background of the application. Now it's my goal to ask the user for a rate given in the field rate so the application can calculate the converted and store this value in the database. It's a simple calculation: converted = 1 / rate. Please advice :). A: Well there are a ton of ways to do that, including but not limited to putting the logic in the model (probably cleanest), in the controller and in the view itself with a hidden field and javascript. I would do it at the model level with a callback. For example, add this to your rates.rb model: before_save :calculate_converted_rate private def calculate_converted_rate self.converted = 1.0 / self.rate end Which as the code might suggest sets the converted field to the inverse of the rate field each time before a Rates instance is saved. More information on ActiveRecord callbacks: http://api.rubyonrails.org/classes/ActiveRecord/Callbacks.html A: Define your own mutator method in your model. This will allow you to take the input from your user, do what you want with it, then store it in the database. To illustrate: class Rate < ActiveRecord::Base def rate=(value) write_attribute :rate, value write_attribute :converted, 1.0 / value end end This will override Rails' default rate= method, which is what gets called when your model is passed your rate data from (assumedly) your form. Construct your model's form normally, with a <%= f.text_field :rate %> or whatever you need to do. Note the 1.0 / value, not 1 / value - this will ensure the result of that expression is a float, and that you won't lose any precision. If you divide an integer by an integer in Ruby, you'll always get an integer.
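A standalone sketch of the same mutator pattern in plain Ruby — no ActiveRecord here, so instance variables stand in for write_attribute, and the value is coerced with to_f since form params arrive as strings (that coercion is my addition, not part of the answer above):

```ruby
# Plain-Ruby illustration of overriding the rate= writer so a derived
# `converted` field stays in sync with `rate` on every assignment.
class Rate
  attr_reader :rate, :converted

  def rate=(value)
    v = value.to_f          # form params come in as strings; coerce first
    @rate = v
    @converted = 1.0 / v    # derived field, recomputed on each write
  end
end

r = Rate.new
r.rate = "4"                # simulate a string value submitted by a form
```

In the Rails version the two write_attribute calls play the role of the instance-variable assignments, and the same float-division caveat applies.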
doc_23535223
On execution, the job-start Windows batch file invokes the dimtableinsert job and then, after it finishes, invokes fact_dim_combine. This takes just minutes to run in Talend Open Studio, but when I invoke the batch file via the Task Scheduler the process takes hours to finish. Time Taken: Manual -- 5 minutes; Automation -- 4 hours (on invoking the Windows batch file). Can someone please tell me what is wrong with this automation process A: The easiest way to find out what is taking so much time is to add some logs to your job. First, add some tWarn calls at the start and finish of each of the subjobs (dimtableinsert and fact_dim_combine) to learn which one is the longest. Then add more logs before/after the components inside the jobs. This way you should have a better idea of what is responsible for the slowdown (DB access, writing of some files, etc.) A: The delay in execution is likely a latency issue. Talend might be installed on the same server as the database instance, so whenever you execute the job in Talend it completes as expected; but the scheduler might be installed on another server, and when you call the job through the scheduler it takes longer to insert the data. * *Make sure your scheduler and database instance are on the same server *Execute the job directly in the Windows terminal and check whether you have the same issue
doc_23535224
$('#loadmore').click(function() { $.ajax({ url: 'includes/loadmorebuilds.php', success: function(html) { $("#content").append(html); } }); }); To load more items into a container which is sorted with jQuery Masonry. However, when they are appended they do not follow the rest of the items and break the masonry style. The new items work as a masonry layout, but they just don't continue on from the first ones. They start their own line. I have searched and found this information on their site: msnry.appended( elements ) // or with jQuery $container.masonry( 'appended', elements ) I just need help modifying my original script to make the appended items work with masonry. thanks, craig. A: What you can do is append your incoming html to the masonry container and then 'inform' masonry about it. Take a look at this fiddle. So in your example, your success callback could look something like: //... success: function(html) { var content = $("#content"), elements = $(html); // would make sense to reference your masonry container through // a variable used earlier in the script, but that should work too // append the same jQuery elements you notify masonry about, so the // nodes in the DOM and the nodes passed to 'appended' are identical content.append(elements).masonry('appended', elements); } //... There are some other ways to do this, but it should suffice for what you described. What's important is that you pass elements that are of 'Type: Element, NodeList, or Array of Elements', so you might want to check what you're actually receiving.
doc_23535225
queue = cl.CommandQueue(ctx,properties=cl.command_queue_properties.PROFILING_ENABLE) <other codes> for i in range(N): events.append(prg3.butterfly(queue,(len(twid),),None,twid_dev,<buffers>)) events[i].wait() for i in range(N): elapsed = elapsed + 1e-9*(event[i].profile.end - event[i].profile.start) print elapsed While time module can be used like this, k=time.time() for i in range(N): event = prg3.butterfly(queue,(len(twid),),None,twid_dev,<buffers>) print time.time()-k Since both of these give totally different results for N=20, ( while answer remains same and correct!), I have the following questions. * *What exactly does event profiling do and is it adding the time spent in event.wait() ? *Since answer is same without event.wait() in case 2, is it the right amount of time spent in just executing the Kernel? Please enlighten me about the right way to benchmark OpenCL programs in python. A: Your second case is only capturing the time taken to enqueue the kernel, not to actually run it. These enqueue kernel calls return as soon as the kernel invocation has been placed in the queue - the kernel will be run asynchronously with your host code. To time the kernel execution as well, just add a call to wait until all enqueued commands have finished: k=time.time() for i in range(N): event = prg3.butterfly(queue,(len(twid),),None,twid_dev,<buffers>) queue.finish() print time.time()-k Your first case is correctly timing the time spent inside kernel execution, but is unnecessarily blocking the host in-between each kernel invocation. You could just use queue.finish() again once all commands have been enqueued: for i in range(N): events.append(prg3.butterfly(queue,(len(twid),),None,twid_dev,<buffers>)) queue.finish() for i in range(N): elapsed = elapsed + 1e-9*(event[i].profile.end - event[i].profile.start) print elapsed Both of these approaches should return almost identical times.
doc_23535226
A: You can launch the Activity with the FLAG_ACTIVITY_NO_HISTORY flag and that should keep it out of the backstack. A: I would recommend putting some kind of user validation in the admin activity, which will disable the normal layout and display some kind of message that tells the user they are not allowed to make changes.
doc_23535227
I've been sending mail with this for quite some time and it used to work fine. Since yesterday it stopped working and the error message says "error": { "code": 400, "message": "Invalid value at 'message.raw' (TYPE_BYTES), Base64 decoding failed for \"[base64 encoded message with CRLF after every 76th character]\"", "errors": [ { "message": "Invalid value at 'message.raw' (TYPE_BYTES), Base64 decoding failed for \"[base64 encoded message with CRLF after every 76th character]\"", "reason": "invalid" } ], "status": "INVALID_ARGUMENT" } It works when I remove the CRLF. Any thoughts? Ref: https://www.rfc-editor.org/rfc/rfc2822#section-2.1.1 My code $msg = new Swift_Message(); $msg->setCharset('UTF-8') ->addTo(/*recipient*/) ->setSubject(/*sbject*/) ->addPart(/*text content*/, "text/plain") ->addPart(/*html content*/, "text/html"); $base64 = (new Swift_Mime_ContentEncoder_Base64ContentEncoder)->encodeString($msg->toString()); $base64_msg = rtrim(strtr($base64, '+/', '-_'), '='); $mailer = $this->_getGmailService();// new Google_Service_Gmail(new Google_Client()) $message = new Google_Service_Gmail_Message(); $message->setRaw($base64_msg); $message->setThreadId($threadId); $mailer->users_messages->send('me', $message); A: I used base64_encode($message->toString()); instead of (new Swift_Mime_ContentEncoder_Base64ContentEncoder)->encodeString($msg->toString()); A: The library method you're using does a Base64 encoding and you need a Base64URL encoded string as stated in the documentation
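For reference, a hedged Python illustration of the same fix (the accepted PHP base64_encode change above is the direct equivalent): the API's raw field wants one unwrapped base64url string, not MIME-style base64 folded at 76 characters.

```python
import base64

def gmail_raw(mime_bytes):
    """Encode an RFC 822 message for the Gmail API's `raw` field:
    base64url alphabet ('+' -> '-', '/' -> '_') and no CRLF wrapping."""
    return base64.urlsafe_b64encode(mime_bytes).decode("ascii")

raw = gmail_raw(b"Subject: hi\r\n\r\nhello")
```

urlsafe_b64encode never inserts line breaks, which is exactly the property the Swift_Mime_ContentEncoder_Base64ContentEncoder output lacked.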
doc_23535228
My Code is as fallows AddDeadline Activity private MobileServiceClient mClient; private EditText title; public EditText editText; public EditText editDate; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_add_deadline); try { mClient = new MobileServiceClient( "https://craigsapp.azure-mobile.net/", "BTkcgnFQvevAdmmRteHCmhHPzdGydq84", this ); } catch (MalformedURLException e) { e.printStackTrace(); } editText = (EditText)findViewById(R.id.editText2); editDate = (EditText)findViewById(R.id.editText3); getSupportActionBar().setDisplayHomeAsUpEnabled(true); final ListDeadlines lst= new ListDeadlines(); lst.txtInput=(EditText)findViewById(R.id.txtinput); ImageView btAdd=(ImageView)findViewById(R.id.btnAdd); btAdd.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { title= (EditText) findViewById(R.id.txtinput); Item item = new Item(); item.Text = title.getText().toString(); Intent returnIntent = getIntent(); returnIntent.putExtra("result", item.Text = title.getText().toString()); setResult(RESULT_OK, returnIntent); mClient.getTable(Item.class).insert(item, new TableOperationCallback<Item>() { public void onCompleted(Item entity, Exception exception, ServiceFilterResponse response) { if (exception == null) { // Insert succeeded } else { // Insert failed } } }); finish(); } }); ListDeadline Activity public ArrayList<String> arrayList; public ArrayAdapter<String> adapter; public ArrayAdapter<String> newad; public EditText txtInput; public EditText txtDate; public int ADD_DEADLINE_REQUEST=0; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_list_deadlines); final ListView listView = (ListView)findViewById(R.id.listv); String[] items= {""}; arrayList=new ArrayList<>(Arrays.asList(items)); adapter=new ArrayAdapter<String>(this,R.layout.list_item,R.id.txtitem,arrayList); 
listView.setAdapter(adapter); getSupportActionBar().setDisplayHomeAsUpEnabled(true); } @Override protected void onActivityResult(int requestCode, int resultCode, Intent data) { super.onActivityResult(requestCode, resultCode, data); Item newitem = new Item(); if (requestCode == ADD_DEADLINE_REQUEST){ if (resultCode == RESULT_OK) { String item = data.getStringExtra("result"); arrayList.add(item); adapter.notifyDataSetChanged(); Toast toast = Toast.makeText(getApplicationContext(), "Deadline has been added", Toast.LENGTH_SHORT); toast.setGravity(Gravity.TOP | Gravity.CENTER_HORIZONTAL, 100, 0); toast.show(); } } } A: You'll have to query Azure for the text and then populate your list. Or if you want to save the text locally, you can do it using SharedPreferences. A: As @pooja said, you can try to get the text by querying Azure Mobile Services or from local storage. However, to keep the local and cloud data consistent, you need to add Offline Data Sync to your Android Mobile Services app; please see the official document https://azure.microsoft.com/en-us/documentation/articles/mobile-services-android-get-started-offline-data/. There are two demo videos below. * *https://channel9.msdn.com/Shows/Cloud+Cover/Episode-155-Offline-Storage-with-Donna-Malayeri *http://azure.microsoft.com/documentation/videos/azure-mobile-services-offline-enabled-apps-with-donna-malayeri/ (For Windows, but the feature discussion applies to all platforms.) And there is an Android sample for offline data sync on GitHub you can refer to; please see https://github.com/Azure/mobile-services-samples/tree/master/TodoOffline/Android.
doc_23535229
So far, it goes as follows: import requests from bs4 import BeautifulSoup import json import re url = 'www.html_code_url' page = requests.get(url) soup = BeautifulSoup(page.content, 'html.parser') for script in soup.findAll('script'): if 'required_json_content' in script.get_text(): json_text = script.get_text().replace('unnecessary_stuff','') I also replace other tags that come along when I extract the JSON. However, there is a portion of the text which I can't really remove. It goes something like this, right after the JSON:
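One hedged option (a sketch, not guaranteed for every page): strip any trailing <name>.push({...}) call with a regex. The non-greedy match relies on the pushed object not containing a literal "})" before its end — true for the snippet in the question, but worth checking against your real pages:

```python
import re

def drop_push_calls(script_text):
    """Remove trailing `<identifier>.push({...})` calls such as the
    something.push(...) residue shown in the question."""
    # DOTALL lets the pushed object span lines; the non-greedy match stops
    # at the first '})', which works as long as no inner '})' occurs.
    return re.sub(r"\w+\.push\(\{.*?\}\)\s*;?", "", script_text,
                  flags=re.DOTALL)
```

Running json.loads on the remainder is a good sanity check that only the wanted JSON survived.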
doc_23535230
In schemas.py the fields are defined as follows: class Message(BaseModel): title: str id: int datim: Optional[datetime] to_id: Optional[int] from_id: Optional[int] body: Optional[str] class Config: orm_mode = True So why does it complain about 'body'? A: It turned out I had an extra argument in my function for @app.post() that was not used. Removing that solved the problem. A: This is for anyone who still has issues with this, I had a one to one relationship but had not specified uselist=False while defining the relationship.
doc_23535231
I have tried to get this problem solved for several hours now and my head is aching (especially because I already solved it earlier but can't remember how; the script in which I use my solution is stored on a computer at school). OK, so this is my problem. In a given sequence of A's, T's, G's and C's (yup, that's DNA) I have to find all the amino acids and count how many there are of them. In layman's terms it comes down to this. I have to search the sequence for certain patterns (also called codons); these are three-letter sequences of A's and/or T's and/or G's and/or C's. Each amino acid has at least one codon associated with it. My job is to count the number of occurrences of each amino acid. In the second table you'll see the amino acids to the left and the associated codons to the right. I have a dictionary set up like so: aaDic = {'ttt': 'F', 'tct': 'S', 'tat': 'Y', 'tgt': 'C', 'ttc': 'F', 'tcc': 'S', 'tac': 'Y', 'tgc': 'C', 'tta': 'L', 'tca': 'S', 'taa': '*', 'tga': '*', 'ttg': 'L', 'tcg': 'S', 'tag': '*', 'tgg': 'W', 'ctt': 'L', 'cct': 'P', 'cat': 'H', 'cgt': 'R', 'ctc': 'L', 'ccc': 'P', 'cac': 'H', 'cgc': 'R', 'cta': 'L', 'cca': 'P', 'caa': 'Q', 'cga': 'R', 'ctg': 'L', 'ccg': 'P', 'cag': 'Q', 'cgg': 'R', 'att': 'I', 'act': 'T', 'aat': 'N', 'agt': 'S', 'atc': 'I', 'acc': 'T', 'aac': 'N', 'agc': 'S', 'ata': 'I', 'aca': 'T', 'aaa': 'K', 'aga': 'R', 'atg': 'M', 'acg': 'T', 'aag': 'K', 'agg': 'R', 'gtt': 'V', 'gct': 'A', 'gat': 'D', 'ggt': 'G', 'gtc': 'V', 'gcc': 'A', 'gac': 'D', 'ggc': 'G', 'gta': 'V', 'gca': 'A', 'gaa': 'E', 'gga': 'G', 'gtg': 'V', 'gcg': 'A', 'gag': 'E', 'ggg': 'G' } I can of course count the number of occurrences of each codon, but since there's more than one codon associated with each amino acid I really need the sum of specific codons. for codons in aaDic: s.count(codons) (s is the sequence of a,t,c,g in the code above).
For example: tta,ttg,ctt,ctc,cta,ctg are all associated with the amino acid 'L', so I need the sum of all occurrences of tta,ttg,ctt,ctc,cta,ctg to get the total number of occurrences of the amino acid 'L'. I hope I am clear enough, it's a little hard to explain, especially after trying to do it so long for yourself and failing at it (which usually indicates you have little to no idea of what you are doing, at least that's the case with me :D) EDIT: Let me try to make myself somewhat more clear: * *We are given a sequence consisting exclusively of the letters A, T, C and G. *We have to parse this sequence three by three. Suppose the sequence is "TTCTTACTC" we get "TTC", "TTA", "CTC" *We now look up these keys in the dictionary and we find the associated amino acids: TTC is F TTA is L CTC is L *We need to count and store in a list the number of F's, L's and any other value (FLIMVSPTAY*HQNKDECWRSG) in the dictionary. The desired output would be a dictionary like so: {L:total no. of the amino acid 'L' in the sequence, S:total no. of the amino acid 'S' in the sequence, ...} A: If you use Python 2.7 or above, you can use collections.Counter to count the amino acids. First, split your base sequence into codons, then count the amino acids corresponding to each codon: import collections base_seq = "atcgtgagt" codons = [base_seq[i:i + 3] for i in range(0, len(base_seq), 3)] amino_acid_counts = collections.Counter(aaDict[c] for c in codons) Note that the generator expression (aaDict[c] for c in codons) generates a sequence of amino acids, regardless of which codons they were encoded by.
If you use an earlier version of Python, you can also use a plain dictionary for counting: amino_acid_counts = dict.fromkeys(aaDict.values(), 0) for c in codons: amino_acid_counts[aaDict[c]] += 1 A: If you don't have 2.7+, you can still use a defaultdict (note that it has to be filled from the sequence's codons, not from the keys of aaDic, or you would just be counting how many codons map to each amino acid): counts = collections.defaultdict(int) for c in codons: counts[aaDic[c]] += 1 A: Try the following: y = {} for x in aaDic.items(): y[x[1]] = [] for x in aaDic.items(): y[x[1]].append(x[0]) Then you can find all the values with X keys with: xkv = [ k for k in y.keys() if len(y[k]) == X ] A: Using codons split from @sven-marnach: base_seq = "atcgtgagt" # split sequence, 3 by 3 codons = [base_seq[i:i + 3] for i in range(0, len(base_seq), 3)] # for each codon we have, obtain its associated amino acid from aaDic amino_acids = list(map(aaDic.get, codons)) # here, amino_acids is ['I', 'V', 'S'] i_count = amino_acids.count('I') # and so on Then you can assemble your resulting dict with: aa_names = set(aaDic.values()) return dict((aa_name, amino_acids.count(aa_name)) for aa_name in aa_names)
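Putting the pieces above together, here is a minimal end-to-end sketch. It uses only a three-codon subset of the table for brevity; the full aaDic from the question works identically:

```python
from collections import Counter

# Subset of the codon table from the question; the full 64-entry
# aaDic from the question works the same way.
aa_dic = {'ttc': 'F', 'tta': 'L', 'ctc': 'L'}

def amino_acid_counts(seq, table):
    """Split seq into non-overlapping codons of three bases and
    count the amino acids they encode."""
    codons = [seq[i:i + 3] for i in range(0, len(seq), 3)]
    return Counter(table[c] for c in codons)

counts = amino_acid_counts('ttcttactc', aa_dic)
# counts == Counter({'L': 2, 'F': 1})
```

Counter is a dict subclass, so the result already has the shape the question asks for ({'L': 2, 'F': 1}).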
doc_23535232
MemcachedClient client1 = new MemcachedClient(new BinaryConnectionFactory(), AddrUtil.getAddresses("172.22.65.111:11211 172.22.65.11:11211")); and MemcachedClient client2 = new MemcachedClient(new BinaryConnectionFactory(), AddrUtil.getAddresses("172.22.65.111:11212 172.22.65.11:11212")); Here I am specifying that client2 is listening on the other port 11212, but I am getting java.net.ConnectException: Connection refused: no further information, due to the client2 declaration. I have installed memcached and then executed the commands memcached -p 11211 -d start and memcached -p 11212 -d start in the CMD. A: Changing the memcached.conf file worked for me when I was having similar problems. It seems that memcached is ignoring the options you give it and just using the options which are in the file.
doc_23535233
|---------------------|----------------------------------| | Cost | Combo | |---------------------|----------------------------------| | 12 | ['apples', 'bananas', 'carrots'] | |---------------------|----------------------------------| | 7 | ['apples', 'carrots'] | |---------------------|----------------------------------| The 'Cost' column is a function of the cost of the individual 'Combo' items; 'Combo' is a list of items. If the price of 'bananas' changes, then I want to modify the 'Cost' of any Combo with bananas accordingly. While I can step through each row using iterrows, checking each Combo to see if bananas is in it, I was wondering if there is a faster method I could use to achieve the same effect. And what if I were to update more than one item in the combo, such as bananas and carrots?
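Since the question has no answer above, here is a hedged sketch of a vectorized alternative to iterrows. It assumes a hypothetical prices dict mapping each item name to its cost, and that Cost is simply the sum of the combo's item prices (both assumptions, not stated in the question):

```python
import pandas as pd

# Hypothetical per-item prices; the question implies Cost derives
# from the Combo items, assumed here to be a simple sum.
prices = {'apples': 3, 'bananas': 5, 'carrots': 4}

df = pd.DataFrame({
    'Cost': [12, 7],
    'Combo': [['apples', 'bananas', 'carrots'], ['apples', 'carrots']],
})

def update_item_price(df, item, new_price):
    """Recompute Cost only for rows whose Combo contains `item`."""
    prices[item] = new_price
    mask = df['Combo'].apply(lambda combo: item in combo)
    df.loc[mask, 'Cost'] = df.loc[mask, 'Combo'].apply(
        lambda combo: sum(prices[i] for i in combo))
    return df

update_item_price(df, 'bananas', 6)
# Row 0 (the only combo with bananas) is recomputed: Cost == [13, 7]
```

To update several items at once (bananas and carrots), update `prices` for all of them first and build the mask with `any(i in combo for i in changed_items)` before recomputing once.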
doc_23535234
UKOS.getToken = async function () { if ( !UKOS.token || new Date().getTime() > UKOS.token.issued_at + 1000 * (UKOS.token.expires_in - 15) ) UKOS.token = await fetch(/* CGI Script */).then((response) => response.json() ); console.log(UKOS.token.access_token); return UKOS.token.access_token; }; // ... fetch(UKOS.WMTS + "&request=GetCapabilities&version=2.0.0", { headers: { Authorization: "Bearer " + UKOS.getToken() }, }).then(/* etc */); I have read that the explicit use of 'async' and 'await' shouldn't be necessary when using 'fetch', but their use definitely alters the behaviour, though neither version works, as follows: With 'async' and 'await' as above: Error messages come from the access token being returned as 'undefined' before it could be returned by the CGI script, but after these the console.log statement, which has waited for the token, prints the right answer. Without these keywords: The console.log prints undefined before the error messages resulting from the undefined return value. This is being tested using FF 96.0.3, but PaleMoon 29.4.4.4 does the same, IE dies much earlier, I suspect because it doesn't understand modern scripting syntax. How can I force the return to wait for the correct value? A: async functions always return promises. So code like { Authorization: "Bearer " + UKOS.getToken() } is going to be trying to concatenate a string with a promise, which is not what you want. You need to either call .then on the promise and put your code in the callback: UKOS.getToken() .then(token => { return fetch(UKOS.WMTS + "&request=GetCapabilities&version=2.0.0", { headers: { Authorization: "Bearer " + token }, }) }) .then(/* etc */) Or you need to put your code in an async function, and await the promise: async function someFunction() { const token = await UKOS.getToken(); const response = await fetch(UKOS.WMTS + "&request=GetCapabilities&version=2.0.0", { headers: { Authorization: "Bearer " + token }, }) /* etc */ }
doc_23535235
2017-10-05 22:02:02,564 luigi-interface WARNING Failed pinging scheduler 2017-10-05 22:02:03,129 requests.packages.urllib3.connectionpool INFO Starting new HTTP connection (126): localhost 2017-10-05 22:02:03,130 luigi-interface ERROR Failed connecting to remote scheduler 'http://localhost:8082' Traceback (most recent call last): ... File "/home/develop/data_warehouse/venv/local/lib/python2.7/site-packages/requests/sessions.py", line 585, in send r = adapter.send(request, **kwargs) File "/home/develop/data_warehouse/venv/local/lib/python2.7/site-packages/requests/adapters.py", line 467, in send raise ConnectionError(e, request=request) ConnectionError: HTTPConnectionPool(host='localhost', port=8082): Max retries exceeded with url: /api/add_worker (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f15128cb3d0>: Failed to establish a new connection: [Errno 111] Connection refused',)) 2017-10-05 22:02:03,180 luigi-interface INFO Worker Worker(salt=150908931, workers=3, host=etl2, username=develop, pid=18019) was stopped. 
Shutting down Keep-Alive thread Traceback (most recent call last): File "app_metadata.py", line 1567, in <module> luigi.run() File "/home/develop/data_warehouse/venv/local/lib/python2.7/site-packages/luigi/interface.py", line 210, in run return _run(*args, **kwargs)['success'] File "/home/develop/data_warehouse/venv/local/lib/python2.7/site-packages/luigi/interface.py", line 238, in _run return _schedule_and_run([cp.get_task_obj()], worker_scheduler_factory) File "/home/develop/data_warehouse/venv/local/lib/python2.7/site-packages/luigi/interface.py", line 197, in _schedule_and_run success &= worker.run() File "/home/develop/data_warehouse/venv/local/lib/python2.7/site-packages/luigi/worker.py", line 867, in run self._add_worker() File "/home/develop/data_warehouse/venv/local/lib/python2.7/site-packages/luigi/worker.py", line 652, in _add_worker self._scheduler.add_worker(self._id, self._worker_info) File "/home/develop/data_warehouse/venv/local/lib/python2.7/site-packages/luigi/rpc.py", line 219, in add_worker return self._request('/api/add_worker', {'worker': worker, 'info': info}) File "/home/develop/data_warehouse/venv/local/lib/python2.7/site-packages/luigi/rpc.py", line 146, in _request page = self._fetch(url, body, log_exceptions, attempts) File "/home/develop/data_warehouse/venv/local/lib/python2.7/site-packages/luigi/rpc.py", line 138, in _fetch last_exception luigi.rpc.RPCError: Errors (3 attempts) when connecting to remote scheduler 'http://localhost:8082' It looks like the worker tries to ping the central scheduler but fails, then crashes, so later tasks are all blocked and cannot run successfully. Someone else also met a similar error, but their resolution does not work for me. Github - Failed connecting to remote scheduler #1894 A: I would try making the timeout a little longer if your central scheduler is getting overloaded. You could also increase retries and retry wait time.
in luigi.cfg [core] rpc-connect-timeout=60.0 #default is 10.0 rpc-retry-attempts=10 #default is 3 rpc-retry-wait=60 #default is 30 You may also want to add a watch to have the scheduler process automatically restart on crash. A: Have you configured the central scheduler properly? See the docs: https://luigi.readthedocs.io/en/stable/central_scheduler.html If not, try using the local scheduler by specifying --local-scheduler from the command line.
doc_23535236
I have been using setdiff() from rgeos for many years, but I've found a quirk in redoing my work. What I basically want to do is to overlay a transparent hole over an Akima generated image map. I find that I need to specify one of the hole vertices slightly larger than the polygon in which the hole is to be cut. Here is the example script: library(rgeos) Changing 'maxy' to 2.1 provides the desired effect. Why not for 1.9? maxy <- 1.9 maxy <- 2.1 x1 <- c(1, 1, 2, 2, 1) y1 <- c(1, 2, 2, 1, 1) xy1 <- cbind(x1, y1) p1 <- as(xy1, "gpc.poly") plot(p1, main="p1") x2 <- c(1.1, 1.5, 1.9, 1.1) y2 <- c(1.1, maxy, 1.1, 1.1) # NOTE: 'maxy' is here! xy2 <- cbind(x2, y2) p2 <- as(xy2, "gpc.poly") plot(p2, main="p2") plot(setdiff(p1, p2), poly.args = list(col = "grey"), main="setdiff(p1, p2)", asp=1) If this cannot be fixed, is there another way to accomplish the same? Thanks ahead of time.
doc_23535237
. . . from apache_beam.options.pipeline_options import PipelineOptions class DataflowOptions(PipelineOptions): @classmethod def _add_argparse_args(cls, parser): parser.add_value_provider_argument( '--table_name', help='Name of table on BigQuery') def run(argv=None): pipeline_options = PipelineOptions() dataflow_options = pipeline_options.view_as(DataflowOptions) with beam.Pipeline(options=pipeline_options) as pipeline: table_spec = bigquery.TableReference( projectId='MyProyectId', datasetId='MyDataset', tableId=str(dataflow_options.table_name)) p = (pipeline | 'Read Table' >> beam.io.Read(beam.io.BigQuerySource(table_spec))) if __name__ == '__main__': run() But when I launch the job, I get the following error: Workflow failed. Causes: S01:Read Table+Batch Users/ParDo(_GlobalWindowsBatchingDoFn)+Hash Users+Upload to Ads failed., BigQuery getting table "RuntimeValueProvider(option: table_name, type: str, default_value: None)" from dataset "MyDataset" in project "MyProject" failed., BigQuery execution failed., Error: Message: Invalid table ID "RuntimeValueProvider(option: table_name, type: str, default_value: None)". HTTP Code: 400 I read this answer, but it is from 2017 — isn't there something newer by now? A: From the documentation as mentioned here, the TableReference takes the following parameters (dataset_ref, table_id). From your code snippet it looks like the parentheses are incorrectly placed. with beam.Pipeline(options=pipeline_options) as pipeline: dataset_ref = bigquery.DatasetReference('my-project-id', 'some_dataset') table_spec = bigquery.TableReference(dataset_ref, tableId=str(dataflow_options.table_name))
doc_23535238
A: This is a tough one, but I can think of a solution which involves invoking a script immediately after a merge operation (but before any manual resolving). First off, the cache of conflict resolutions (.git/rr-cache) is stored by blob hash rather than by file path. There is nothing to indicate which file the resolutions actually came from, so I think hacking away at that directory would not be a viable solution. This quote from maintainer Junio Hamano also hits on the fact that rerere is per-merge and not per-file: There is no "I do not care if there are good resolutions remembered that do not have anything to do with the current merge, just remove all of them"---that is what "rm -fr .git/rr-cache" is for. ...which of course is not useful for you because selectively deleting files from .git/rr-cache is not well suited for automation in your use-case. The feature to exploit may be the forget sub-command which takes a pathspec. The "forgetting" can only happen during the context of a merge however, which makes it fundamentally different than something like .gitignore which you can apply statically. But, conceivably you could have a post-merge hook that invokes a script which: Iterates over the conflicted files: git diff --name-only --diff-filter=U For every file that you want to "un-rerere", forget it: git rerere forget -- path/to/file ...and restore the conflict: git checkout -m path/to/file Then, continue on with the merge per-usual leaving the rerere resolutions for the files you want remembered intact. The list of files to have "rerere remembered" could be checked into the repository (perhaps in a file called .reremember) and queried from the script.
doc_23535239
Raw data 2013-05-07T17:41:06+00:00 source=HEROKU_POSTGRESQL_VIOLET addon=postgres-metric-68904 sample#current_transaction=1873 sample#db_size=26219348792bytes sample#tables=13 sample#active-connections=92 sample#waiting-connections=1 sample#index-cache-hit-rate=0.99723 sample#table-cache-hit-rate=0.99118 sample#load-avg-1m=1.42 sample#load-avg-5m=1.45 sample#load-avg-15m=1.34 sample#read-iops=0 sample#write-iops=2.875 sample#memory-total=1692568kB sample#memory-free=73876kB sample#memory-cached=1344128kB sample#memory-postgres=22388kB I want to calculate a percentage using the values below. sample#memory-total=1692568kB sample#memory-free=73876kB sample#memory-cached=1344128kB sample#memory-postgres=22388kB A: Divide the reported memory total by the plan-specific amount of RAM, then multiply by 100 for a percentage. (sample#memory-total / available mem in kb) * 100
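A small sketch of that idea in practice: parse the sample#memory-* fields out of the raw log line with a regex and express each as a percentage. The denominator here is memory-total from the log itself; substitute the plan-specific RAM figure from the answer if that is the denominator you need.

```python
import re

# The memory fields from the raw log line in the question.
line = ("sample#memory-total=1692568kB sample#memory-free=73876kB "
        "sample#memory-cached=1344128kB sample#memory-postgres=22388kB")

# Pull each sample#memory-*=<n>kB field into a dict of ints (kB).
fields = {name: int(kb) for name, kb in
          re.findall(r'sample#(memory-[a-z-]+)=(\d+)kB', line)}

# Percentage of the reported total for each field.
total = fields['memory-total']
percentages = {name: round(kb / total * 100, 2)
               for name, kb in fields.items()}
# percentages['memory-postgres'] == 1.32
```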
doc_23535240
I have a collection C, with an array of attributes A1. Each attribute has an array of subattributes A2. How can I add a subdocument to a specific C.A1 subdocument ? A: Here is an example. db.docs.insert({_id: 1, A1: [{A2: [1, 2, 3]}, {A2: [4, 5, 6]}]}) If you know the index of the subdocument you want to insert, you can use dot notation with the index (starting from 0) in the middle: db.docs.update({_id: 1}, {$addToSet: {'A1.0.A2': 9}}) This results in: { "A1" : [ { "A2" : [ 1, 2, 3, 9 ] }, { "A2" : [ 4, 5, 6 ] } ], "_id" : 1 } A: Yes, this is possible. If you post an example I can show you more specifically what the update query would look like. But here's a shot: db.c.update({ A1: value }, { $addToSet: { "A1.$.A2": "some value" }}) I haven't actually tried this (I'm not in front of a Mongo instance right now) and I'm going off memory, but that should get you pretty close. A: Yes, $push can be used to do the same. Try below given code. db.c.update({ A1: value }, { $push: { "A1.$.A2": num }});
doc_23535241
<T> T notUsedRandomItem(List<T> allItems, List<T> usedItems) { return allItems.stream() .filter(item -> !usedItems.contains(item)) .sorted((o1, o2) -> new Random().nextInt(2) - 1) .findFirst() .orElseThrow(() -> new RuntimeException("Did not find item!")); } Function might be used like this... System.out.println( notUsedRandomItem( Arrays.asList(1, 2, 3, 4), Arrays.asList(1, 2) ) ); // Should print either 3 or 4 Edit: Collected suggested implementations and tested efficiency by running them against Person lists. edit2: Added missing equals method to Person class. import java.util.*; import java.util.concurrent.TimeUnit; import java.util.function.BiFunction; import java.util.stream.Collectors; import java.util.stream.IntStream; import static java.util.stream.Collectors.toList; class Functions { <T> T notUsedRandomItemOriginal(List<T> allItems, List<T> usedItems) { return allItems.stream() .filter(item -> !usedItems.contains(item)) .sorted((o1, o2) -> new Random().nextInt(2) - 1) .findFirst() .orElseThrow(() -> new RuntimeException("Did not find item!")); } <T> T notUsedRandomItemByAominè(List<T> allItems, List<T> usedItems) { List<T> distinctItems = allItems.stream() .filter(item -> !usedItems.contains(item)) .collect(toList()); if (distinctItems.size() == 0) throw new RuntimeException("Did not find item!"); return distinctItems.get(new Random().nextInt(distinctItems.size())); } <T> T notUsedRandomItemByEugene(List<T> allItems, List<T> usedItems) { // this is only needed because your input List might not support removeAll List<T> left = new ArrayList<>(allItems); List<T> right = new ArrayList<>(usedItems); left.removeAll(right); return left.get(new Random().nextInt(left.size())); } <T> T notUsedRandomItemBySchaffner(List<T> allItems, List<T> usedItems) { Set<T> used = new HashSet<>(usedItems); List<T> all = new ArrayList<>(allItems); Collections.shuffle(all); for (T item : all) if (!used.contains(item)) return item; throw new RuntimeException("Did not find item!"); 
} } public class ComparingSpeedOfNotUsedRandomItemFunctions { public static void main(String[] plaa) { runFunctionsWith(100); runFunctionsWith(1000); runFunctionsWith(10000); runFunctionsWith(100000); runFunctionsWith(200000); } static void runFunctionsWith(int itemCount) { TestConfiguration testConfiguration = new TestConfiguration(); Functions functions = new Functions(); System.out.println("Function execution time with " + itemCount + " items..."); System.out.println("Schaffner: " + testConfiguration.timeSpentForFindingNotUsedPeople( itemCount, (allPeople, usedPeople) -> functions.notUsedRandomItemBySchaffner(allPeople, usedPeople) )); System.out.println("Eugene: " + testConfiguration.timeSpentForFindingNotUsedPeople( itemCount, (allPeople, usedPeople) -> functions.notUsedRandomItemByEugene(allPeople, usedPeople) )); System.out.println("Aominè: " + testConfiguration.timeSpentForFindingNotUsedPeople( itemCount, (allPeople, usedPeople) -> functions.notUsedRandomItemByAominè(allPeople, usedPeople) )); System.out.println("Original: " + testConfiguration.timeSpentForFindingNotUsedPeople( itemCount, (allPeople, usedPeople) -> functions.notUsedRandomItemOriginal(allPeople, usedPeople) )); } } class TestConfiguration { Long timeSpentForFindingNotUsedPeople(int numberOfPeople, BiFunction<List<Person>, List<Person>, Person> function) { ArrayList<Person> people = new ArrayList<>(); IntStream.range(1, numberOfPeople).forEach(i -> people.add(new Person())); Collections.shuffle(people); List<Person> halfOfPeople = people.stream() .limit(numberOfPeople / 2) .collect(Collectors.toList()); Collections.shuffle(halfOfPeople); long before = System.nanoTime(); Person foundItem = function.apply(people, halfOfPeople); long after = System.nanoTime(); // Return -1 if function do not return valid answer if (halfOfPeople.contains(foundItem)) return (long) -1; return TimeUnit.MILLISECONDS.convert(after - before, TimeUnit.NANOSECONDS); } class Person { public final String name = 
UUID.randomUUID().toString(); @Override public boolean equals(Object o) { if (this == o) return true; if (o == null || getClass() != o.getClass()) return false; Person person = (Person) o; return name != null ? name.equals(person.name) : person.name == null; } @Override public int hashCode() { return name != null ? name.hashCode() : 0; } } } Results: Function execution time with 100 items... Schaffner: 0 Eugene: 1 Aominè: 2 Original: 5 Function execution time with 1000 items... Schaffner: 0 Eugene: 14 Aominè: 13 Original: 5 Function execution time with 10000 items... Schaffner: 2 Eugene: 564 Aominè: 325 Original: 348 Function execution time with 20000 items... Schaffner: 3 Eugene: 1461 Aominè: 1418 Original: 1433 Function execution time with 30000 items... Schaffner: 3 Eugene: 4616 Aominè: 2832 Original: 4567 Function execution time with 40000 items... Schaffner: 4 Eugene: 10889 Aominè: 4903 Original: 10394 Conclusion When list size reach 10000 items then so far only Schaffner's implementation is usable. And because it's fairly simple to read I will pick it as the most elegant solution. A: I can think of this, but no idea what-so-ever how it will scale compared to your existing solution: <T> T notUsedRandomItem(List<T> allItems, List<T> usedItems) { // this is only needed because your input List might not support removeAll List<T> left = new ArrayList<>(allItems); List<T> right = new ArrayList<>(usedItems); left.removeAll(right); return left.get(new Random().nextInt(left.size())); } One thing to keep in mind is that sorted is a stateful operation, so it will sort the entire "diff", but you only retrieve one element from that. Also your Comparator is wrong, for the same two values o1 and o2 you might say they are different - this can break in mysterious ways. 
A: You should use HashSets to improve performance: <T> T notUsedRandomItem(List<T> allItems, List<T> usedItems) { Set<T> used = new HashSet<>(usedItems); Set<T> all = new HashSet<>(allItems); all.removeIf(used::contains); // or all.removeAll(used) if (all.isEmpty()) throw new RuntimeException("Did not find item!"); int skip = new Random().nextInt(all.size()); Iterator<T> it = all.iterator(); for (int i = 0; i < skip; i++) it.next(); return it.next(); } This removes elements from the all set if they belong to the used set. As Set.removeIf and Set.contains are being used, the removal of elements is optimal w.r.t performance. Then, a random number of elements is skipped in the resulting set, and finally, the next element of the set is returned. Another approach is to shuffle the all list first and then simply iterate and return the first element that doesn't belong to the used set: <T> T notUsedRandomItem(List<T> allItems, List<T> usedItems) { Set<T> used = new HashSet<>(usedItems); List<T> all = new ArrayList<>(allItems); Collections.shuffle(all); for (T item : all) if (!used.contains(item)) return item; throw new RuntimeException("Did not find item!"); } EDIT: Checking the last snippet of code, I now realize that there's no need to shuffle the whole list. Instead, you could randomize the indices of the allItems list and return the first element that doesn't belong to the used set: <T> T notUsedRandomItem(List<T> allItems, List<T> usedItems) { Set<T> used = new HashSet<>(usedItems); return new Random().ints(allItems.size(), 0, allItems.size()) .mapToObj(allItems::get) .filter(item -> !used.contains(item)) .findAny() .orElseThrow(() -> new RuntimeException("Did not find item!")); } A: The Comparator you've passed into the sorted intermediate operation seems wrong and strange way to use a Comparator to my eyes anyway; which relates to what @Eugene has mentioned in his post. 
Thus, I'd recommend you avoid any type of pitfalls and always use an API the way it's intended to be used; nothing more. if you really want a random element from the said list; the only way that is possible is to find all the distinct elements of the two lists. so we cannot improve the speed in this aspect. once this is done we simply need to generate a random integer within the range of the list containing the distinct elements and index into it given there is at least one element contained in it. Though I have to admit there are probably better ways to accomplish the task at hand without the use of streams; here's how I have modified your code slightly to remove the misuse of .sorted((o1, o2) -> new Random().nextInt(2) - 1). <T> T notUsedRandomItem(List<T> allItems, List<T> usedItems) { List<T> distinctItems = allItems.stream() .filter(item -> !usedItems.contains(item)) .collect(toList()); if(distinctItems.size() == 0) throw new RuntimeException("Did not find item!"); return distinctItems.get(new Random().nextInt(distinctItems.size())); }
doc_23535242
There are the following classes: public class Tag { public string Id { get; set; } public string Tag { get { return Id; } } } public class AnEntity { [...] public virtual ICollection<Tag> Tags{ get; set;} } I query the database this way: var query = from entity in db.AnEntities where [...] select new { [...] Tags = entity.Tags.Select(tag => tag.Id).ToList() } If I execute the query now with await query.ToListAsync() the performance is quite reasonable. However if I want to filter the search and return only the AnEntities that contain all of a given set of tags, the performance is really bad. I do the "contains all query" this way: var filters = new List<string>(){"Filter1", "Filter2"}; from filtent in query where filters.Intersect(filtent.Tags).Count() == filters.Count select filtent; Is there a cleverer way to do this 'contains all' query? Edit: I also thought about replacing the Tags with a comma-separated string but I'm not sure how I can run a 'contains all' query directly on the database. This is important, because after the filter query there are some more queries and I don't want to receive entities from the database that don't contain all the given filters. Is there a way I could do it with a comma-separated string or something like that? Many thanks for hints! A: This is because EF creates a monstrous query with Intersect. It somehow has to translate the list filters into SQL and it does that by UNION-ing single-value SELECTs for each element in the list. And that happens a number of times in one query. I've experienced that statements like Intersect, Except and sometimes All and Any are often better avoided and replaced by a solution using Contains, because Contains is always translated into an IN statement. In your case that would be from filtent in query where filtent.Tags.All(f => filters.Contains(f) ) select filtent I'm sure this will produce a much better query that also performs much better.
In this case, the All statement is translated into a harmless NOT EXISTS.
doc_23535243
I am new to Node and JavaScript and I am trying to understand the different uses of package.json and package-lock.json. Before you read any further, no, I am not merely just asking for a summary of what their difference is here. After doing some homework, my understanding of them is as follows: * *you want to commit both to source control, so neither should be mentioned in the .gitignore *package.json describes your project and can do some lightweight dependency management, for instance, specifying that you want the latest version of the fizzbuzz package, or you want the latest 3.10.x version of the fizzbuzz package *package-lock.json is purely for dependency management and goes into detail about which specific dependencies your project should use; for instance if you specify you want the latest 3.10.x version of fizzbuzz in your package.json file, the package-lock.json file might specify fizzbuzz-3.10.24, etc. *you do directly modify/edit your package.json file, but you only let NPM and perhaps other command line tools modify your package-lock.json (hence no human being should ever edit package-lock.json) Are these statements correct? If not, can someone please provide some details as to how/where my understanding is going awry? A: Small answer Your understanding is correct. To run a basic Nodejs project you only need package.json file on your project, I mean it's required. The package.json is used to keep the dependencies of the project. Which also defines project properties like description, author, license information, scripts, etc. The package-lock.json is used to keep dependencies in a specific version number. It records the exact version of each installed package which allows you to install the same version of packages on different environments. Brief answer Why package-lock.json is created? When you install a package in your project using the below command. 
For example, npm install node-sass --save will install the exact latest version of that package in your project and save the dependency in the package.json with a caret (^) sign. "node-sass": "^6.0.0" Caret (^) means it will accept any higher version within the same major version. Here, package-lock.json is created for locking the dependency to the installed version, in this case 6. What is the use of package-lock.json? As mentioned above it records the exact version of each installed package which allows you to re-install them. This allows you to generate the same results in different environments. For that, we should use the package-lock.json file to install dependencies. Why should we commit package-lock.json with our project source code (to Git)? During deployment, when you run npm i (or npm install) on your server or whatever environment with the same package.json file without the package-lock.json, the installed packages might have a higher version than what you had intended. In that case, if your code targeted a specific version of some of those packages you might have a problem. References https://docs.npmjs.com/cli/v7/configuring-npm/package-lock-json
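To make the caret rule concrete, here is a simplified sketch of the range check. It deliberately ignores pre-release tags and npm's special handling of 0.x versions (where the caret is stricter), so it is an illustration of the idea rather than a full semver implementation:

```python
def satisfies_caret(version, base):
    """Rough check of npm's ^base range: same major version, and
    version >= base.  Ignores pre-release tags and the 0.x rules."""
    v = tuple(map(int, version.split('.')))
    b = tuple(map(int, base.split('.')))
    return v[0] == b[0] and v >= b

# ^6.0.0 accepts 6.1.2 but rejects 7.0.0
assert satisfies_caret('6.1.2', '6.0.0')
assert not satisfies_caret('7.0.0', '6.0.0')
```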
doc_23535244
(setq mu4e-sent-messages-behavior 'delete) delete These three tests return false: (eq 'mu4e-sent-messages-behavior 'delete) nil (equal 'mu4e-sent-messages-behavior 'delete) nil (equal 'mu4e-sent-messages-behavior "delete") nil And this one returns true, but with the member function for lists: (if (member mu4e-sent-messages-behavior '(delete)) t nil) t If the user keeps the setting at the default set in the code: (defcustom mu4e-sent-messages-behavior 'sent ... ) then member also fails: (when (member mu4e-sent-messages-behavior '(sent)) t nil) nil What is wrong with my tests, and how can I test for the value of a variable set by the user? A: Don't quote the variable name when passing it to eq: (eq mu4e-sent-messages-behavior 'delete) The problem with this piece of code: (when (member mu4e-sent-messages-behavior '(sent)) t nil) is that when will either return nil (if the condition is false) or the last value of the body (if the condition is true), which in this case is nil - so this piece of code will always return nil. Use if instead of when, and you should see it returning t.
doc_23535245
The issue I am having is that they have onboarded a new customer with a new feed. The feeds the customer is now sending me are correct, although their XML files are missing the declaration at the top.

Example of the missing declaration:

    <?xml version="1.0" encoding="UTF-8"?>

This is stopping the feed from processing and displaying on the website. If I manually add the declaration and re-upload, it works. Now the customer is saying they have never had the headers and it works for other 3rd-party companies. Is there a way to add the header once the files are received? The feed processor & FTP server are in C#, .NET 3.1.

    public static T Deserialize<T>(XmlReader reader)
    {
        var emptyNamespaces = new XmlSerializerNamespaces(new[] { XmlQualifiedName.Empty, new XmlQualifiedName("xs", "http://www.w3.org/2001/XMLSchema") });
        object retVal = null;
        XmlSerializer serializer = new XmlSerializer(typeof(T));
        var settings = new XmlReaderSettings();
        settings.IgnoreComments = true;
        settings.ConformanceLevel = ConformanceLevel.Document;
        settings.DtdProcessing = DtdProcessing.Parse;
        // using (StringReader stringReader = new StringReader(objectXml))
        // using (var xmlReader = XmlReader.Create(stringReader, settings))
        {
            retVal = serializer.Deserialize(reader);
            return (T)retVal;
        }
    }

    public static T Deserialize<T>(XDocument xdoc)
    {
        var emptyNamespaces = new XmlSerializerNamespaces(new[] { XmlQualifiedName.Empty, new XmlQualifiedName("xs", "http://www.w3.org/2001/XMLSchema") });
        object retVal = null;
        XmlSerializer serializer = new XmlSerializer(typeof(T));
        var settings = new XmlReaderSettings();
        settings.IgnoreComments = true;
        settings.ConformanceLevel = ConformanceLevel.Document;
        settings.DtdProcessing = DtdProcessing.Parse;
        using (StringReader stringReader = new StringReader(xdoc.ToString()))
        using (var xmlReader = XmlReader.Create(stringReader, settings))
        {
            retVal = serializer.Deserialize(xmlReader);
            return (T)retVal;
        }
    }

    public static T Deserialize<T>(string objectXml)
    {
        var emptyNamespaces = new XmlSerializerNamespaces(new[] { XmlQualifiedName.Empty, new XmlQualifiedName("xs", "http://www.w3.org/2001/XMLSchema") });
        object retVal = null;
        XmlSerializer serializer = new XmlSerializer(typeof(T));
        var settings = new XmlReaderSettings();
        settings.IgnoreComments = true;
        settings.ConformanceLevel = ConformanceLevel.Document;
        settings.DtdProcessing = DtdProcessing.Parse;
        using (StringReader stringReader = new StringReader(objectXml))
        using (var xmlReader = XmlReader.Create(stringReader, settings))
        {
            retVal = serializer.Deserialize(xmlReader);
            return (T)retVal;
        }
    }

    // public static T Deserialize<T>(string objectXml, string xmlNamespace)
    // {
    //     var emptyNamespaces = new XmlSerializerNamespaces(new[] { XmlQualifiedName.Empty, new XmlQualifiedName("xs", xmlNamespace) });
    //     object retVal = null;
    //     XmlSerializer serializer = new XmlSerializer(typeof(T));
    //     var settings = new XmlReaderSettings();
    //     settings.IgnoreComments = true;
    //     settings.ConformanceLevel = ConformanceLevel.Document;
    //     settings.DtdProcessing = DtdProcessing.Parse;
    //     using (StringReader stringReader = new StringReader(objectXml))
    //     using (var xmlReader = XmlReader.Create(stringReader, settings))
    //     {
    //         retVal = serializer.Deserialize(xmlReader);
    //         return (T)retVal;
    //     }
    // }

    public static T Deserialize<T>(Stream xmlStream)
    {
        var emptyNamespaces = new XmlSerializerNamespaces(new[] { XmlQualifiedName.Empty, new XmlQualifiedName("xs", "http://www.w3.org/2001/XMLSchema") });
        object retVal = null;
        XmlSerializer serializer = new XmlSerializer(typeof(T));
        var settings = new XmlReaderSettings();
        settings.IgnoreComments = true;
        settings.ConformanceLevel = ConformanceLevel.Document;
        settings.DtdProcessing = DtdProcessing.Ignore;
        using (var xmlReader = XmlReader.Create(xmlStream, settings))
        {
            retVal = serializer.Deserialize(xmlReader);
            return (T)retVal;
        }
    }
    }
    }

    public static class XmlUtil
    {
        public static string SerializeToString<T>(T value)
        {
            var emptyNamespaces = new XmlSerializerNamespaces(new[] { XmlQualifiedName.Empty });
            var serializer = new XmlSerializer(value.GetType());
            var settings = new XmlWriterSettings();
            settings.Indent = true;
            settings.OmitXmlDeclaration = false;
            using (var stream = new StringWriter())
            using (var writer = XmlWriter.Create(stream, settings))
            {
                // This adds our DocType
                writer.WriteDocType("propertyList", null, "http://reaxml.realestate.com.au/propertyList.dtd", null);
                serializer.Serialize(writer, value, emptyNamespaces);
                return stream.ToString();
            }
        }

        public static string SerializeToStringUTF8<T>(T value)
        {
            var emptyNamespaces = new XmlSerializerNamespaces(new[] { XmlQualifiedName.Empty });
            var serializer = new XmlSerializer(value.GetType());
            var settings = new XmlWriterSettings();
            settings.Indent = true;
            settings.Encoding = Encoding.UTF8;
            settings.OmitXmlDeclaration = false;
            using (var stream = new StringWriterWithEncoding(Encoding.UTF8))
            using (var writer = XmlWriter.Create(stream, settings))
            {
                // This adds our DocType
                writer.WriteDocType("propertyList", null, "http://reaxml.realestate.com.au/propertyList.dtd", null);
                serializer.Serialize(writer, value, emptyNamespaces);
                return stream.ToString();
            }
        }
    }
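On the question itself: the simplest fix is to normalise each file on receipt, prepending a declaration only when one is absent. The feed processor is C#/.NET, but the check is language-agnostic; here it is as a Python sketch (the function name and the UTF-8 default are my own assumptions, not part of the codebase above):

```python
def ensure_xml_declaration(data: bytes, encoding: str = "UTF-8") -> bytes:
    """Prepend an XML declaration if the document does not start with one."""
    stripped = data.lstrip()  # tolerate leading whitespace before the root element
    if stripped.startswith(b"<?xml"):
        return data  # declaration already present; leave the file untouched
    declaration = ('<?xml version="1.0" encoding="%s"?>\n' % encoding).encode("ascii")
    return declaration + data

# A feed file as the customer sends it, with no declaration:
feed = b"<propertyList><property id='89'/></propertyList>"
fixed = ensure_xml_declaration(feed)
```

The same logic ported to C# would run between the FTP download and the call to `Deserialize<T>`. Note the function is idempotent, so it is safe to run on every incoming file.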
doc_23535246
I have a CSV file of exposures for days of the year, e.g. 01/11/2002 (DMY). I want these imported into Stata and for it to recognise the date variables. I've been using:

    insheet using "FILENAME", comma

But by doing this I am only getting the dates as labels rather than names of the variables. I guess this is because Stata doesn't allow variable names to start with numbers. I have tried reformatting the cells as Dates in Excel and importing, but then Stata thinks the whole column is a Date and changes the exposure data into dates. Any advice on the best course of action is appreciated...

A: As commented elsewhere, I too think you probably have a dataset that is best formatted as panel data. However, I address first the specific problem I think you have according to your question. Then I show some code in case you are interested in switching to a panel structure.

Here is an example CSV file, opened both as a spreadsheet and in a text editor. Imagine the ; are ,. This is related to my system's language settings.

Running this (substitute delimiter(";") for comma, in your case):

    clear all
    set more off
    insheet using "D:\xlsdates.csv", delimiter(";")

results in what I think is the problem you describe: dates as variable labels. You would like to have the dates as variable names. One solution is to use a loop and strtoname() to rename the variables based on the variable labels. The following goes after importing with insheet:

    foreach var of varlist * {
        local j = "`: variable l `var''"
        local newname = strtoname("`j'", 1)
        rename `var' `newname'
    }

The result is a dataset with the dates as (legal) variable names. The function strtoname() will substitute out the illegal characters for _'s. See help strtoname.
Now, if you want to work with a panel structure, one way would be:

    clear all
    set more off
    insheet using "D:\xlsdates.csv", delimiter(";")

    * Rename variables
    foreach var of varlist * {
        local j = "`: variable l `var''"
        local newname = strtoname("`j'", 1)
        rename `var' `newname'
    }

    * Generate ID
    generate id = _n

    * Change to long format
    reshape long _, i(id) j(dat) string

    * Sensible name
    rename _ metric

    * Generate new date variable
    gen dat2 = date(dat, "DMY", 2050)
    format dat2 %d

    list, sepby(id)

As you can see, there's no need to do anything beforehand in Excel or in an editor. Stata seems to be enough in this case.

Note: I've reused code from http://www.stata.com/statalist/archive/2008-09/msg01316.html.

A further note on performance: a CSV file with 122 variables or days (columns) and 10,000 observations or subjects (rows) + 1 header row will produce 1,220,000 observations after the reshape. I have tested this on an old machine with a 1.79 GHz AMD processor and 640 MB RAM, and the reshape takes approximately 8 minutes. Stata 12 has a hard limit of 2,147,483,647 observations (although available RAM determines whether you can actually reach it), and Stata SE one of 32,767 variables.

A: There seems to be some confusion here between the names that variables may have, the values that variables may have and the types that they may have. Thus, the statement "Stata doesn't allow variables to start with numbers" appears to be a reference to Stata's rules for variable names; if it were true, numeric variables would be impossible.

Stata has no variable (i.e. storage) type that is a date. Strictly, it has no concept of a date variable, but dates may be held as strings or numbers. Dates may be held as strings insofar as any text indicating a date is likely to be a string that Stata can hold. This is flexible, but not especially useful. For almost all useful work, dates need to be converted to integers and then assigned a display format that matches their content to be readable by people.
Stata has various conventions here, e.g. that daily dates are held as integers with 0 meaning 1 January 1960. It seems likely in your case that daily dates are being imported as strings: if so, the function date() (also known as daily()) may be used to convert to an integer date. The example here just uses the minimal default display format for daily dates: friendlier formats exist.

    . set obs 1
    obs was 0, now 1

    . gen sdate = "12/03/12"
    . gen ndate = daily(sdate, "DMY", 2050)
    . format ndate %td

    . l

         +----------------------+
         |    sdate       ndate |
         |----------------------|
      1. | 12/03/12   12mar2012 |
         +----------------------+

If your variable names are being misread, as guessed by @ChrisP, you may need to tell us more. A short and concrete example is worth more than a longer verbal description.
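The convention of storing a daily date as an integer offset from 1 January 1960 is easy to verify outside Stata. This Python sketch mimics daily() for a DMY string with a two-digit-year pivot (my own approximation of the pivot rule, not Stata source code):

```python
from datetime import date

STATA_EPOCH = date(1960, 1, 1)  # Stata daily date 0

def stata_daily(text: str, pivot: int = 2050) -> int:
    """Parse a D/M/YY or D/M/YYYY string into days since 1 Jan 1960."""
    day, month, year = (int(p) for p in text.split("/"))
    if year < 100:
        # Two-digit year: choose the century so the full year does not exceed the pivot
        year += 2000 if (2000 + year) <= pivot else 1900
    return (date(year, month, day) - STATA_EPOCH).days

# 12 March 2012, as in the Stata session above
offset = stata_daily("12/03/12")
```

Formatting that integer back with %td is then purely a display concern, which is the point of the answer.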
doc_23535247
Example: imagine you have this

    struct C { /* a class */ };

    template<class T>
    struct S {
        typedef C type; // S<T>::type is a type
    };

What's bothering me is this:

    template<class T>
    struct Typedef {
        typedef typename S<T>::type MyType; // needs typename
    };

    template<class T>
    struct Inheritance : S<T>::type // doesn't need typename
    { };

In both cases the parser should expect a type, so it could parse S<T>::type as one. Why does it only do so for inheritance, and not for typedefs? The pattern seems the same to me:

    typedef $type$ $new_symbol$;
    class $new_symbol$ : $type$ { $definition$ };

Or is there a usage of typedef I'm not aware of, which makes this ambiguous?

PS: I'm pretty sure this has already been asked, but I can't find it (there's a lot of noise related to the typename keyword). This question is only about the syntax, not whether it's better to use inheritance or typedefs. I apologize in advance if there's a duplicate.

A: The reason is that the typedef syntax is more variable than the inheritance syntax. Normally you would write typedef first, and the type name second. But the order is actually unimportant. To wit, the following is a valid typedef:

    int typedef integer_type;

Now consider what happens if we use a dependent name:

    S<T>::type typedef integer_type;

Without doing some non-trivial lookahead, the parser cannot know that S<T>::type refers to a type name here (because it hasn't yet seen the typedef), so by the disambiguation rules it infers a value. For consistency in the grammar, there is no special case for prefixed typedef (which, you are right, is unambiguous). There could be a special case, but there simply isn't.
doc_23535248
    import nltk
    f = open('word-freq-utf8-new.txt', 'rU')
    text = f.read()
    text1 = text.split()
    abst = nltk.Text(text1)
    abst.concordance('سلام')

A: The nltk does not yet work really well with unicode, although they are working on it. As a bit of a quick fix, you can create a subclass for the concordance and overwrite the print_concordance method to make sure you are encoding/decoding at the right times for processing and display purposes. Here is a really quick fix, assuming you have already imported the nltk (I am using as an example part of a unicode Greek text):

    >>> import re
    >>> tokens = re.findall(ur'\w+', t.decode('utf-8'), flags=re.U)  # I did this to make sure I was working with a decoded text. If you are working with an encoded text, skip this. `t` is the equivalent of your `text`.
    >>> class ConcordanceIndex2(nltk.ConcordanceIndex):
            'Extends the ConcordanceIndex class.'
            def print_concordance(self, word, width=75, lines=25):
                half_width = (width - len(word) - 2) // 2
                context = width // 4  # approx number of words of context
                offsets = self.offsets(word)
                if offsets:
                    lines = min(lines, len(offsets))
                    print("Displaying %s of %s matches:" % (lines, len(offsets)))
                    for i in offsets:
                        if lines <= 0:
                            break
                        left = (' ' * half_width +
                                ' '.join([x.decode('utf-8') for x in self._tokens[i-context:i]]))  # decoded here for display purposes
                        right = ' '.join([x.decode('utf-8') for x in self._tokens[i+1:i+context]])  # decoded here for display purposes
                        left = left[-half_width:]
                        right = right[:half_width]
                        print(' '.join([left, self._tokens[i].decode('utf-8'), right]))  # decoded here for display purposes
                        lines -= 1
                else:
                    print("No matches")

If you are working with a decoded text, you will need to encode the tokens like so:

    >>> concordance_index = ConcordanceIndex2([x.encode('utf-8') for x in tokens], key=lambda s: s.lower())  # encoded here to match an encoded text
    >>> concordance_index.print_concordance(u'\u039a\u0391\u0399\u03a3\u0391\u03a1\u0395\u0399\u0391\u03a3'.encode('utf-8'))
    Displaying 1 of 1 matches:
    ΚΑΙΣΑΡΕΙΑΣ ΕΚΚΛΗΣΙΑΣΤΙΚΗ ΙΣΤΟΡΙΑ Euse

Otherwise, you can simply do this:

    >>> concordance_index = ConcordanceIndex2(tokens, key=lambda s: s.lower())
    >>> concordance_index.print_concordance('\xce\x9a\xce\x91\xce\x99\xce\xa3\xce\x91\xce\xa1\xce\x95\xce\x99\xce\x91\xce\xa3')
    Displaying 1 of 1 matches:
    ΚΑΙΣΑΡΕΙΑΣ ΕΚΚΛΗΣΙΑΣΤΙΚΗ ΙΣΤΟΡΙΑ Euse
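In Python 3, nltk handles unicode strings natively, so the encode/decode gymnastics above become unnecessary. As a separate illustration (my own sketch, not part of the answer), the offset-and-context logic that print_concordance implements fits in a few lines:

```python
def concordance(tokens, word, context=3):
    """Return one 'left word right' line per occurrence of word in tokens."""
    lines = []
    for i, tok in enumerate(tokens):
        if tok == word:
            left = " ".join(tokens[max(0, i - context):i])
            right = " ".join(tokens[i + 1:i + 1 + context])
            # filter(None, ...) drops empty left/right at the text boundaries
            lines.append(" ".join(filter(None, [left, word, right])))
    return lines

tokens = "سلام دنیا سلام".split()
hits = concordance(tokens, "سلام")
```

Each hit line shows up to `context` tokens on either side of the match, which is essentially what the nltk method does before padding for column alignment.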
doc_23535249
Let's say that we have stored some data in the table. For example:

    Column A   Column B
    x1         y1
    x2         y2
    x3         y3

If I try to send some data like (x1, y4), will y1 be set to y4? If not, is there a setting to do it from phpMyAdmin? Or do I need to use UPDATE?

A: You should use UPDATE or INSERT ... ON DUPLICATE KEY UPDATE.

More info @ http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html

A: Trying to insert a row with an existing primary key will cause an error, and that's exactly why no one uses things that might be duplicated as primary keys. To change the value of the existing row, you must use UPDATE; there is no other solution. But if you want to insert a new row, my advice is to go mainstream: either use an auto-incremented id as the primary key, or use something like a GUID (Windows only).
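For comparison, the same upsert idea exists in SQLite as the ON CONFLICT clause (an assumption here is SQLite 3.24+, which recent Python builds bundle). This sketch uses the stdlib sqlite3 module purely to illustrate the concept, not MySQL's exact syntax:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a TEXT PRIMARY KEY, b TEXT)")
conn.execute("INSERT INTO t VALUES ('x1', 'y1')")

# Re-sending (x1, y4): a plain INSERT would raise an integrity error because
# the key x1 already exists, but an upsert updates the existing row instead.
conn.execute(
    "INSERT INTO t VALUES ('x1', 'y4') "
    "ON CONFLICT(a) DO UPDATE SET b = excluded.b"
)
row = conn.execute("SELECT b FROM t WHERE a = 'x1'").fetchone()
```

The MySQL equivalent of the conflict branch is `ON DUPLICATE KEY UPDATE b = VALUES(b)`.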
doc_23535250
    const EventEmitter = require('events');
    const WebSocket = require('ws');

    const myEmitter = new EventEmitter();
    const ws = new WebSocket('wss://someurl');

    ws.on('message', (data) => {
        // ... /* preprocess and do the mongodb stuff */
        myEmitter.emit('someevent', data);
    });

My question is: how can I listen for such an event in my React client? If I stick with this approach, do I need to pass myEmitter to my React components? I am new to React, so please let me know if there is a better way to solve the problem.

A: Do I need to pass in myEmitter to my React components? No... your client-side and server-side code should be separate. You can use a client-side library like socket.io.

If you're going to be listening for a bunch of different events in different components, consider using an enhancer-style wrapper:

    function withSocket (event?, onEvent?) { // note: this is TS
      return (Component) => {
        class WithSocketEvent extends Component {
          constructor (props) {
            super(props)
            this.socket = io.connect(SOCKET_ENDPOINT)
          }
          componentDidMount () {
            if (event && onEvent) {
              this.socket.on(event, onEvent)
            }
          }
          componentWillUnmount () {
            this.socket && this.socket.close()
          }
          render () {
            return (
              <Component { ...this.props } socket={ this.socket } />
            )
          }
        }
        return WithSocketEvent
      }
    }

    // usage
    class HasSocketEvent extends Component {
      componentDidMount () {
        // handle the event in the component
        this.props.socket.on("someEvent", this.onSocketEvent)
      }
      onSocketEvent = (event) => { }
      render () { }
    }

    // handle the event outside the component
    export default withSocket("someEvent", function () {
      // do something
    })(HasSocketEvent)

    // or
    export default withSocket()(HasSocketEvent)
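Node's EventEmitter in the snippet above is plain publish/subscribe, and the crux of the answer is that this emitter lives in the server process, so a browser client can only observe its events over a transport such as socket.io. For reference, the emitter half of the pattern fits in a few lines of any language; here is a Python sketch with hypothetical names:

```python
class Emitter:
    """Minimal publish/subscribe emitter, analogous to Node's EventEmitter."""

    def __init__(self):
        self._handlers = {}

    def on(self, event, handler):
        # Register a callback for the named event
        self._handlers.setdefault(event, []).append(handler)

    def emit(self, event, *args):
        # Invoke every callback registered for the event, in order
        for handler in self._handlers.get(event, []):
            handler(*args)

received = []
em = Emitter()
em.on("someevent", received.append)
em.emit("someevent", "payload")
```

Nothing outside the process can call `on`, which is exactly why the React client needs a socket connection rather than a reference to `myEmitter`.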
doc_23535251
I want to create a UUID, but I can't find any C# version of 'CFUUIDXXXX'. Does anyone know what the class name is?

If it doesn't exist, I have a question: is looking through Xamarin's API documentation the fastest way to know whether it exists or not? And if it doesn't exist, does that mean Xamarin.iOS doesn't expose every class from the iOS native API? Is that correct?

I also know about Guid, but I was told that there is no guarantee it's unique. And there is also "UIDevice.CurrentDevice.IdentifierForVendor". Is it good enough to use as a UUID?

Thanks.

A: You can use NSUUID (in Xamarin, NSUuid) instead. This is the new younger cousin to CFUUID. NSUUID just popped up in iOS 6. It is pretty much exactly the same as CFUUID except it has a nice, modern Objective-C interface. Here is a good comparison of all Id types: https://possiblemobile.com/2013/04/unique-identifiers/
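On the uniqueness worry: random (version 4) UUIDs, which is what Guid.NewGuid() and NSUUID produce in practice, are not unique by guarantee but carry 122 random bits, so collisions are negligible for any realistic workload. As a neutral illustration of the same primitive, here is the Python stdlib equivalent (an analogy only, not the Xamarin API):

```python
import uuid

# A version-4 UUID: 122 random bits in the canonical 36-character form
u = uuid.uuid4()
text = str(u)

# Round-tripping through the string form preserves identity
same = uuid.UUID(text)
```

The birthday-bound argument is what justifies "good enough": you would need on the order of 2**61 generated ids before a collision becomes likely.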
doc_23535252
    /root/document/{ids}?fields={fields}
    /root/externaldocument/{ids}?fields={fields}

to map to the same interface member:

    Documents GetDocuments(string ids, string fields)

I have tried putting a wildcard into a literal URL segment:

    [OperationContract]
    [WebGet(UriTemplate = "/root/*document/{ids}?fields={fields}")]
    Documents GetDocuments(string ids, string fields)

However, this is not valid and I get the following exception:

    The UriTemplate '/root/*document/{ids}?fields={fields}' is not valid; the wildcard ('*') cannot appear in a variable name or literal... Note that a wildcard segment, either a literal or a variable, is valid only as the last path segment in the template

If I wrap the wildcard segment in a template brace:

    [OperationContract]
    [WebGet(UriTemplate = "/root/{*document}/{ids}?fields={fields}")]
    Documents GetDocuments(string ids, string fields)

Then I get an exception because there is no such input parameter in the method arguments:

    Operation 'GetDocuments' in contract 'IAPIv2' has a UriTemplate that expects a parameter named 'DOCUMENTS', but there is no input parameter with that name on the operation.

My workaround is simply to have two entries, pointing to different methods, and then have the methods call a common implementation:

    [OperationContract]
    [WebGet(UriTemplate = "/root/document/{ids}?fields={fields}")]
    Documents GetDocuments(string ids, string fields)

    [OperationContract]
    [WebGet(UriTemplate = "/root/externaldocument/{ids}?fields={fields}")]
    Documents GetExternalDocuments(string ids, string fields)

But this seems kind of ugly. I have read the documentation and cannot find this point specifically addressed. Is there any way I can have a wildcard literal segment in WCF? Or is this not possible in WCF?

A: As it turned out, the two entry points needed to have slightly different functionality. So I needed to capture which URL was used to enter the method.
What I ended up doing was the following:

    [OperationContract]
    [WebGet(UriTemplate = "/root/{source}ocuments/{ids}?fields={fields}")]
    DocumentCollection GetDocumentsById(string source, string ids, string fields);

Both URLs:

    /root/document/{ids}?fields={fields}
    /root/externaldocument/{ids}?fields={fields}

map to the same URL template, and thus I needed only a single entry with a single UriTemplate in my interface. The "source" input parameter captures either "d", if the second segment is "documents", or "externald", if the second segment is "externaldocuments". Thus, by inspecting this input parameter, the method can react appropriately depending upon which URL was used to reach it.

Note that I could not use the following for the UriTemplate:

    [WebGet(UriTemplate = "/root/{source}documents/{ids}?fields={fields}")]

because in this case the incoming URL

    /root/document/{ids}?fields={fields}

would not match the template, even though the template matches if an empty string ("") is used for the source input parameter. Apparently the UriTemplate matching algorithm requires there to be at least one character in a parameter capturing group for there to be a match.
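That last restriction, that a template parameter must consume at least one character, mirrors the difference between + and * in a regular expression. The following Python sketch is only an analogy (it does not claim WCF's matcher is regex-based), but it shows why a `{source}ocuments` segment covers both URLs while `{source}documents` cannot match the plain one:

```python
import re

# "{source}ocuments" analogue: the parameter must capture at least 1 character
good = re.compile(r"^/root/(?P<source>.+)ocuments/(?P<ids>[^/?]+)$")
m1 = good.match("/root/documents/89")          # source should be "d"
m2 = good.match("/root/externaldocuments/89")  # source should be "externald"

# "{source}documents" analogue: (.+) would have to capture an empty string
# before "documents", which it cannot, so the plain URL fails to match
bad = re.compile(r"^/root/(?P<source>.+)documents/(?P<ids>[^/?]+)$")
m3 = bad.match("/root/documents/89")
```

Swapping `.+` for `.*` in the second pattern would make the plain URL match, which is exactly the behavior UriTemplate does not offer for parameters.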
doc_23535253
Views: login.blade.php

    <script>
    $(document).ready(function(){
        $("#submit").click(function(e){
            e.preventDefault();
            email = $("#email").val();
            password = $("#password").val();
            $.ajax({
                type:"POST",
                data:{"email":email, "password":password, "_token":"{{csrf_token()}}"},
                url:"{{URL::to('login_redirect')}}",
                success:function(data){
                    if (typeof data !== 'object') {
                        data = JSON.parse(data);
                    }
                    if (data.redirect) {
                        window.location.replace(data.redirect);
                    } else {
                        $("#success").html('<p style="color:red;">' + data.error + '</p>');
                    }
                }
            });
        });
    });
    </script>

    <div id="success"></div>
    <input type="text" name="email" id="email" placeholder="Email">
    <input type="password" name="password" id="password" placeholder="Password">
    <input type="submit" name="submit" id="submit" class="btn btn-primary">

Controllers: Mycontroller.php

    <?php
    namespace App\Http\Controllers;

    use Illuminate\Http\Request;
    use App\Http\Controllers\Controller;
    use App\Http\Requests;
    use Auth;
    use Session;
    use DB;

    class Mycontroller extends Controller
    {
        public function login_redirect(Request $request)
        {
            $email = $request->input('email');
            $password = $request->input('password');
            $sql = DB::table('user')->where('email', '=', $email)->where('password', '=', $password)->count();
            if($sql > 0) {
                $query = DB::table('user')->where('email', '=', $email)->where('password', '=', $password)->get();
                Session::put('user', $query);
                if (!isset($_POST)) {
                    header("Location: dashboard");
                } else {
                    echo json_encode(array('redirect' => "dashboard"));
                }
            } else {
                echo json_encode(array('error' => 'Wrong email or password or may be your account not activated.'));
            }
        }

        public function dashboard()
        {
            $user = Session::get('user');
            return view('user.dashboard', ['data' => $user]);
        }

        public function logout(Request $request)
        {
            Auth::logout();
            Session::flush();
            return redirect('/login');
        }
    }

Views: dashboard.php

    <?php
    if(empty($data)) {
        header('location:{{url("login")}}');
    }
    ?>
    @if (is_array($data) || is_object($data))
    @foreach($data as $row)
        <h3>Welcome, {{ $row->username }}</h3>
    @endforeach
    @endif
    <a href="{{url('logout')}}">Logout</a>

Now, the problem is that when I click on the logout button it redirects me to the login page, which is fine, but when I browse directly to the dashboard URL without logging in, it is accessible again, which is wrong. I want that once a user logs out, the dashboard can't be accessed directly. How can I do this? Please help me. Thank you.

A: You need to check the user's login status in the dashboard. As you are using Laravel, why not use middleware? Middleware provide a convenient mechanism for filtering HTTP requests entering your application. For example, Laravel includes a middleware that verifies that the user of your application is authenticated. If the user is not authenticated, the middleware will redirect the user to the login screen. However, if the user is authenticated, the middleware will allow the request to proceed further into the application.

For a simpler fix, you can try the below:

    public function dashboard()
    {
        $user = Session::get('user');
        if($user)
            return view('user.dashboard', ['data' => $user]);
        else
            return redirect('/login');
    }
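The guard in that answer (look up the session, render the view or redirect) is a generic pattern that middleware implements once for many routes. Here it is as a framework-free Python sketch with hypothetical names, purely to show the control flow, not Laravel's API:

```python
def require_login(view):
    """Wrap a view so it redirects to /login when no user is in the session."""
    def guarded(session):
        if session.get("user"):
            return view(session)
        # No user in the session: short-circuit before the view runs
        return ("redirect", "/login")
    return guarded

@require_login
def dashboard(session):
    return ("render", "user.dashboard", session["user"])

out_anon = dashboard({})                                  # logged out
out_user = dashboard({"user": {"username": "alice"}})     # logged in
```

Applying the guard as a decorator (middleware) keeps the redirect logic out of every individual controller method.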
doc_23535254
There is only one call to the database, and after that the combo keeps using the existing store from cache. How can I make it reload from the database every time (or at least every time we reopen the display)? Below is the code.

    //store
    Ext.define('NetworkStore', {
        extend: 'Ext.data.Store',
        alias: 'NetworkStore',
        fields: ['Id', 'value'],
        storeId: 'NetworkStore',
        autoLoad: true,
        proxy: {
            type: 'ajax',
            useDefaultXhrHeader: false,
            actionMethods: {
                create: "POST",
                read: "GET",
                update: "POST",
                destroy: "POST"
            },
            headers: { 'Content-Type': 'application/x-www-form-urlencode' },
            limitParam: false,
            startParam: false,
            pageParam: false,
            extraParams: { Style: 1 },
            url: 'url',
            reader: { type: 'json' }
        }
    });

    xtype: 'combo',
    name: 'NetworkIDList',
    store: Ext.create('NetworkStore').load({ params: { Style: 3 } }),

A: The official docs on lastQuery offer:

    listeners: {
        beforequery: function (qe) {
            delete qe.combo.lastQuery;
        }
    }

Here is the full source:

    /**
     * @property {String} lastQuery
     * The value of the match string used to filter the store. Delete this property to force
     * a requery. Example use:
     *
     *     var combo = new Ext.form.field.ComboBox({
     *         ...
     *         queryMode: 'remote',
     *         listeners: {
     *             // delete the previous query in the beforequery event or set
     *             // combo.lastQuery = null (this will reload the store the next time it expands)
     *             beforequery: function(qe){
     *                 delete qe.combo.lastQuery;
     *             }
     *         }
     *     });
     *
     * To make sure the filter in the store is not cleared the first time the ComboBox trigger
     * is used configure the combo with `lastQuery=''`. Example use:
     *
     *     var combo = new Ext.form.field.ComboBox({
     *         ...
     *         queryMode: 'local',
     *         triggerAction: 'all',
     *         lastQuery: ''
     *     });
     */

A: Add the desired logic to the list expand event handler. Fiddle

    xtype: 'combo',
    name: 'NetworkIDList',
    listeners: {
        expand: function () {
            this.getStore().load()
        }
    },
    store: Ext.create('NetworkStore').load({ params: { Style: 3 } })

A: If we pass the store as given below, it will always get the store from the db, not from the cache.
    store: {
        type: 'NetworkStore',
        proxy: {
            extraParams: { Style: 3 }
        }
    }

A: Set remoteFilter: true if you want to load data from the server side on each expand of the combobox, or false if you want to filter locally after the data is loaded once. If the remoteFilter property is not set in the store, it defaults to false, so setting this property to true might solve the issue.

A: First, you need to convert your desire into systematic logic. "At least every time we reopen the display" means:

* Listen to the widget's event which corresponds to "when we reopen the display"
* Reload the widget's store

    {
        xtype: 'combo',
        name: 'NetworkIDList',
        store: Ext.create('NetworkStore').load({ params: { Style: 3 } }),
        //Add this:
        afterRender: function(){
            //Just do this to load the store
            this.getStore().load();
        }
    }

afterRender is one sample listener that can be declared as a method. There are many other event listeners where you can place your code to load the widget's store again and again as you want. See https://docs.sencha.com/extjs/7.0.0/modern/Ext.Widget.html#method-afterRender

And .load() is the ProxyStore method that can be used for the first manual load or to reload the store. See https://docs.sencha.com/extjs/7.0.0/modern/Ext.data.ProxyStore.html#method-load

Hope it helps.
doc_23535255
As you know, any user can dump a database from a phone and see its structure and data. Does any way exist to prevent database dumps from Android phones? I want to avoid that anyone can look at the database data (and then copy it) using any SQLite explorer. Thank you.

A: Use a password in the connection string:

    Data Source=filename;Version=3;Password=myPassword;

Source: http://www.connectionstrings.com/sqlite

Or use encryption (search here for SO answers).
doc_23535256
With Bootstrap I tried filtering with text and it is working, but I couldn't figure out how to do it with a checkbox.

    $(document).ready(function () {
        $("#myInput").on("keyup", function () {
            var value = $(this).val().toLowerCase();
            $("#activityTable tr").filter(function () {
                $(this).toggle($(this).text().toLowerCase().indexOf(value) > -1)
            });
        });
    });

The above code only works with text information. Does anyone know how to update this code for a checkbox, please?

A: You can give an id to the checkbox and then get its state. Let's say its id is chkBox; then you will have to write:

    var isChecked = $("#chkBox").is(':checked');

and then use it in your filter function:

    $(this).toggle($(this).text().toLowerCase().indexOf(value) > -1 && (isChecked && $(this).find('input:checkbox').is(':checked')));

And if what you meant was to show only rows where the checkbox state is checked when the user types in the filter box, then:

    $(this).toggle($(this).text().toLowerCase().indexOf(value) > -1 && $(this).find('input:checkbox').is(':checked'));

Now this will look in the tr for a checkbox, and only rows where it is checked will be toggled on.
doc_23535257
Tables:

    post
    tag
    ref_post_tag

post and tag have a many-to-many relationship.

Entities

Post

    @Entity
    @Table(name = "post")
    public class Post implements Serializable {

        private static final long serialVersionUID = 1783734013146305964L;

        public enum Status {
            DRAFT, REMOVED, LIVE;
        }

        @Id
        @Column(name = "id")
        @GeneratedValue(strategy = GenerationType.AUTO)
        private String id;

        @Column(name = "title")
        private String title;

        @Column(name = "create_time")
        private LocalDateTime createTime;

        @Column(name = "update_time")
        private LocalDateTime updateTime;

        @Column(name = "content")
        private String content;

        @Column(name = "status")
        @Enumerated(EnumType.STRING)
        private Status status;

        @ManyToMany
        @JoinTable(
            name = "ref_post_tag",
            joinColumns = @JoinColumn(name = "post_id", referencedColumnName = "id"),
            inverseJoinColumns = @JoinColumn(name = "tag_id", referencedColumnName = "id"))
        private List<Tag> tagList;

        ...
    }

Tag

    @Entity
    @Table(name = "tag")
    public class Tag implements Serializable {

        private static final long serialVersionUID = -7015657012681544984L;

        @Id
        @Column(name = "id")
        @GeneratedValue(strategy = GenerationType.AUTO)
        private Integer id;

        @Column(name = "name")
        private String name;

        @Column(name = "description")
        private String description;

        @ManyToMany(mappedBy = "tagList")
        private List<Post> postList;

        public Integer getId() {
            return id;
        }

        ...
    }

Tag Repo

    public interface TagRepo extends CrudRepository<Tag, Integer> {
    }

Service implementation

    @Service
    public class TagServiceImpl implements TagService {

        @Autowired
        private TagRepo tagRepo;

        @Override
        public void addTag(Tag tag) {
            tagRepo.save(tag);
        }

        @Override
        public Tag getTag(Integer id) {
            Tag tag = tagRepo.findOne(id);
            return tag;
        }

        @Override
        public List<Tag> findAllTags() {
            return CollectionUtil.toArrayList(tagRepo.findAll());
        }
    }

Sample test (updated)

    @RunWith(SpringJUnit4ClassRunner.class)
    @SpringApplicationConfiguration(classes = TestContextConfiguration.class)
    @Transactional
    public abstract class ServiceTest {
    }

    public class TagServiceTest extends ServiceTest {

        @Autowired
        private TagService tagService;

        @Autowired
        private TagRepo tagRepo;

        @Test
        @Transactional
        public void addTag() throws Exception {
            Tag tag = new Tag();
            tag.setName("new tag");
            tag.setDescription("this is a new tag");
            tagService.addTag(tag);
            Tag tagCreated = tagRepo.findOne(tag.getId());
            assertNotNull(tagCreated);
            assertEquals(tagCreated.getName(), tag.getName());
        }

        @Test
        public void getTag() throws Exception {
            Tag tag = tagService.getTag(1); // tag "java" has an ID of "1"
            assertNotNull(tag);
            assertEquals(tag.getName(), "java");
            assertEquals(143, tag.getPostList().size()); // 143 posts under tag "java"
        }
    }

Question

The sample test case passes, which means that the postList in the fetched Tag is also eagerly fetched and filled. Do Spring Data repository methods fetch eagerly by default? If yes, what is the best way to change this to lazy fetching?
doc_23535258
It is a very simple class that has methods for:

* creating a file
* opening a file
* closing a file
* saving a file
* writing to a file
* etc.

There are some options I use when writing my text to a file, such as font size, color and max words. These options are passed to the constructor. Should I split this class into two separate classes:

* fileMaker
* fileMakerOptions

and have a fileMakerOptions object hold responsibility for text size etc. and pass it to the constructor of fileMaker? Or would it be best to have fileMaker encapsulate everything related to making the file, including the style options?

A: If your class is simple and there are only a few options, then no, you should not create a new class. Over-refactoring is very bad.

A: You should provide more information about your main class's responsibility, as well as the number of options you want to handle. Anyway, the general idea behind creating an additional class is to remove the option-handling responsibility from your main class's responsibility. For me: if the number of options grows beyond 4-5, then I create a separate option entity.

A: I only split up classes when an existing class becomes too difficult or gets too many properties or functions, OR when I know beforehand that it will be extended a lot. In your case it depends on the complexity and the number of functions/properties.
doc_23535259
    mvn clean compile

For some classes I get multiple versions of the same class, for example:

    ./classes/com/.../MyClass$1$1.class
    ./classes/com/.../MyClass$1$10.class
    ./classes/com/.../MyClass$1$11.class
    ./classes/com/.../MyClass$1$12.class
    ./classes/com/.../MyClass$1$13.class
    ./classes/com/.../MyClass$1$14.class

etc.

Why is this happening?

A: The $1 notation is for anonymous inner classes: the compiler emits a separate numbered .class file for each one, so these are not multiple versions of the same class. Running javac directly will result in the same sort of output.
doc_23535260
For example, the default value of the input box is False, but when the user clicks on the save button, the value in the input box should change to True. I need help with the condition logic.

    function change() {
        var change = document.getElementById("check");
        if (change.value == "false") {
            document.test.savereport = "True";
            document.test.submit();
        } else {
            change.value = "false";
        }
    }

    <form name="test" method="post">
        <input type="text" name="savereport" value="False" />
    </form>
    <div align="center">
        <button type="button" value="false" id="check" onclick="change()" />Save</button>
    </div>

A: I'm rusty with the exact syntax, but the following code should do:

    function change() {
        var change = document.getElementById("check");
        if (change.value == "1") {
            change.value = false
            document.test.savereport = "True";
            document.test.submit();
        } else {
            change.value = true
        }
    }

A: I don't understand why you need to go through all the trouble of reading values for this particular requirement. You are reading the value, comparing it to False, leaving it as is if so, or, if True, changing it to False. Hence, essentially, you are just leaving the value as False no matter what the earlier value was. So simply change the value in savereport upon click of the button.
<script> function change() { document.getElementsByName("savereport")[0].value = 'True'; } </script> <form name="test" method="post"> <input type="text" name="savereport" value="False" /> </form> <div align="center"> <button type="button" value="false" id="check" onclick="change()" >Save</button> </div> A: Apart from removing self-closing of button tag, replace document.test.savereport = "True"; with document.test.elements[ "savereport" ].value = "true" Demo function change() { var change = document.getElementById("check"); if (change.value == "false") { document.test.elements["savereport"].value = "True"; //document.test.submit(); } else { change.value = "true"; } } <form name="test" method="post"> <input type="text" name="savereport" value="False" /> </form> <div align="center"> <button type="button" value="false" id="check" onclick="change()"> Save </button> </div> A: The code at Question does not set the .value of <input type="text" name="savereport" value="False" /> element. You can set the value of <input type="text" name="savereport" value="false" /> element to "false" and evaluate boolean false as a string to check for equality. <form name="test" method="post"> <input type="text" name="savereport" value="false" /> </form> <div align="center"> <button type="button" value="false" id="check" onclick="change()">Save</button> <script> var input = document.test.savereport; function change() { if (input.value === String(false)) { input.value = true; // document.test.submit(); } else { input.value = false; } console.log(input.value); } </script> </div>
doc_23535261
This is LoginCtrl (the login controller): 'use strict'; angular.module('dreamflow').controller('LoginCtrl', ['$scope', 'LoginService', function($scope, LoginService) { $scope.title = "Login"; $scope.master = {} $scope.login = function() { var user = { username: $scope.username, password: $scope.password }; LoginService(user); }; } ]); This is LoginService: angular.module('dreamflow') .factory('LoginService', function($http, $location, $rootScope) { return function(user) { $http.post('/login',{ username: user.username, password: user.password }).then(function(response) { if (response.data.success) { console.log(response.data); $rootScope.user = response.data.user; $location.url('/'); } else { console.log(response.data.errorMessage); $location.url('/'); } }); }; }); In the above code the user details come back after checking the success of the response, and then we are redirected to the home page. I want to access the user details arriving in $rootScope.user in the home page's Angular controller. A: You can have a service which will hold the login username, and the service will be injected into both controllers as such: jsfiddle with '$scope' Also, I find that using 'this' instead of '$scope' is helpful in not mixing up controller scopes between each other in case you use more than one controller in the same place. There are also other reasons. HTML: <div ng-app="myApp"> <div ng-controller="ControllerOne as one"> <h2>ControllerOne:</h2> Change testService.loginName: <input type='text' ng-model='one.myService.loginName'/> <br><br> myName: {{one.myService.loginName}} </div> <hr> <div ng-controller="ControllerTwo as two"> <h2>ControllerTwo:</h2> myName: {{two.myService.loginName}} </div> </div> JS: app.service('testService', function(){ this.loginName = "abcd"; }); app.controller('ControllerOne', function($scope, testService){ this.myService = testService; }); app.controller('ControllerTwo', function($scope, testService){ this.myService = testService; }); jsfiddle with 'this'
doc_23535262
This is my code: .js file function addVote(steward_id, league_id, user_id, vote) { $.ajax({ type: "POST", async: false, url: "submit.php", data: "form=addVote&steward_id=" + steward_id + "&league_id=" + league_id + "&user_id=" + user_id + "&vote=" + vote }).success(function( msg ) { $('.success').css("display", ""); $(".success").fadeIn(1000, "linear"); $('.success_text').fadeIn("slow"); $('.success_text').html(msg); setTimeout(function(){location.reload()},1200); }); } .php button part <button type="button" class="btn-danger" onclick="addVote(2,2,2,2)">DSQ (<?php echo $dqs; ?> votes)</button> submit.php case 'addVote': $steward_id = $_POST['steward_id']; $league_id = $_POST['league_id']; $user_id = $_POST['user_id']; $vote = $_POST['vote']; $ez->addVote($steward_id, $league_id, $user_id, $vote); break; the php function: function addVote($steward_id, $league_id, $user_id, $vote){ $this->link->query("INSERT INTO `lg_vote` (`steward_id`, `user_id`, `league_id`, `vote`) VALUES ('$steward_id','$league_id', '$user_id', '$vote')"); return; } Anyone? Thanks! A: Connect to mysql database first and then insert to table. function addVote($steward_id, $league_id, $user_id, $vote){ // connect to mysql database first ;) $this->link->query("INSERT INTO `lg_vote` (`steward_id`, `user_id`, `league_id`, `vote`) VALUES ('$steward_id','$league_id', '$user_id', '$vote')"); return; }
doc_23535263
Function.prototype.apply2 = function( self, arguments ){ switch( arguments.length ){ case 1: this.call( self, arguments[0] ); break; case 2: this.call( self, arguments[0], arguments[1] ); break; case 3: this.call( self, arguments[0], arguments[1], arguments[2] ); break; case 4: this.call( self, arguments[0], arguments[1], arguments[2], arguments[3] ); break; case 5: this.call( self, arguments[0], arguments[1], arguments[2], arguments[3], arguments[4] ); break; case 6: this.call( self, arguments[0], arguments[1], arguments[2], arguments[3], arguments[4], arguments[5] ); break; case 7: this.call( self, arguments[0], arguments[1], arguments[2], arguments[3], arguments[4], arguments[5], arguments[6] ); break; case 8: this.call( self, arguments[0], arguments[1], arguments[2], arguments[3], arguments[4], arguments[5], arguments[6], arguments[7] ); break; case 9: this.call( self, arguments[0], arguments[1], arguments[2], arguments[3], arguments[4], arguments[5], arguments[6], arguments[7], arguments[8] ); break; case 10: this.call( self, arguments[0], arguments[1], arguments[2], arguments[3], arguments[4], arguments[5], arguments[6], arguments[7], arguments[8], arguments[9] ); break; default: this.apply( self, arguments ); break; } }; So does anyone know why? A: Referencing the ECMAScript Language Specification 5.1 Edition (June 2011): 15.3.4.3 Function.prototype.apply (thisArg, argArray) When the apply method is called on an object func with arguments thisArg and argArray, the following steps are taken: * *If IsCallable(func) is false, then throw a TypeError exception. *If argArray is null or undefined, then return the result of calling the [[Call]] internal method of func, providing thisArg as the this value and an empty list of arguments. *If Type(argArray) is not Object, then throw a TypeError exception. *Let len be the result of calling the [[Get]] internal method of argArray with argument "length". *Let n be ToUint32(len). *Let argList be an empty List. 
*Let index be 0. *Repeat while index < n *Let indexName be ToString(index). *Let nextArg be the result of calling the [[Get]] internal method of argArray with indexName as the argument. *Append nextArg as the last element of argList. *Set index to index + 1. *Return the result of calling the [[Call]] internal method of func, providing thisArg as the this value and argList as the list of arguments. 15.3.4.4 Function.prototype.call (thisArg [ , arg1 [ , arg2, … ] ] ) When the call method is called on an object func with argument thisArg and optional arguments arg1, arg2 etc, the following steps are taken: * *If IsCallable(func) is false, then throw a TypeError exception. *Let argList be an empty List. *If this method was called with more than one argument then in left to right order starting with arg1 append each argument as the last element of argList *Return the result of calling the [[Call]] internal method of func, providing thisArg as the this value and argList as the list of arguments. As we can see, the format in which apply is specified is notably heavier and needs to do a lot more due to the need to change the format in which the arguments are given and how they are finally needed. There are a number of checks in apply which are not necessary in call due to the difference of input formatting. Another key point is the manner in which arguments are looped over (steps 4-12 in apply, implied in step 3 of call): the whole set-up for looping is executed in apply regardless of how many arguments there actually are, in call all of this is done only if needed. Additionally it's worthwhile noting that the way in which step 3 in call is implemented isn't specified, which would help explain the drastic differences in different browser behavior. So to shortly recap: call is faster than apply because the input parameters are already formatted as necessary for the internal method. Be sure to read the comments below for further discussion.
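A quick way to see the semantic equivalence described above, plus a deliberately naive timing loop — treat any numbers as illustrative only, since modern engines optimize both paths heavily and results vary between engines:

```javascript
// call and apply differ only in how arguments are passed; the results
// are identical for the same argument list.
function sum(a, b, c) {
  return a + b + c;
}
const args = [1, 2, 3];

if (sum.call(null, 1, 2, 3) !== sum.apply(null, args)) {
  throw new Error("call and apply disagree");
}

// Naive micro-benchmark: same work, two invocation styles.
function time(label, fn) {
  const start = Date.now();
  for (let i = 0; i < 1e6; i++) fn();
  console.log(label + ": " + (Date.now() - start) + " ms");
}

time("call ", () => sum.call(null, 1, 2, 3));
time("apply", () => sum.apply(null, args));
```

The spec steps quoted above explain the shape of any gap: apply must read `length`, coerce it, and copy every element of the array into an argument list before the internal [[Call]], while call hands its arguments over directly.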
doc_23535264
How can I preconfigure this in Docker so I do not have to do it each time? A: Use the Kibana create saved objects API to write a script and run it in a container or something like that (note the JSON payload must be single-quoted so the inner double quotes survive, and Kibana listens on port 5601 by default): kibana_endpoint=localhost kibana_system_user_password="12345678" curl -X POST \ "${kibana_endpoint}:5601/api/saved_objects/index-pattern/logstash" \ --header "kbn-xsrf: true" \ --header "Content-Type: application/json" \ --user "kibana_system:${kibana_system_user_password}" \ --data \ '{ "attributes": { "title": "logstash-*", "timeFieldName": "@timestamp" } }' A: You can use an index template: next time you create an index whose name matches the pattern defined in your template, it will have the settings and mappings defined in the template.
doc_23535265
var ninja = ninject.Get<Ninja>(); But why can't I do this: Type ninjaType = typeof(Ninja); var ninja = ninject.Get<ninjaType>(); What's the correct way of specifying the type outside the call to Get? A: Specifying type arguments is not a runtime thing, it's statically compiled. The type must be known at compile time. In your scenario, it is potentially unknown, or computed at runtime. Through reflection it is possible to construct a method call where you specify the type arguments, but it's unlikely you want to do that. Also, most containers should have an overload that would look something like this: Type ninjaType = typeof(Ninja); var ninja = (Ninja)ninject.Get(ninjaType); Finally, most containers should provide ways to specify in the container configuration, which type should be provided on certain conditions. I know that Ninject has a pretty DSL to conditionally specify which type should be returned under what circumstances. This would mean however, to code against an abstraction and let the container decide what is returned: class DependencyConsumer { ctor(IWarrior warrior) { //Warrior could be a ninja, because e.g. you told NInject //that the dependency should be filled that way for this class } } A: Since the purpose of the T is to specify the type you want. Ninject receives your type T and calls typeof(T) on your behalf. I think that this way your code is shorter. Don't you think?
doc_23535266
int val; if (!(cin >> val)); I don't understand what the if(!()) stands for, and also why I am still able to type a character. A: what does the ! mean after ... ! is a unary operator. In this case it is the logical NOT operator. If the expression implicitly converted to bool results in true, then !expression is false, and if the expression implicitly converted to bool results in false, then !expression is true. A: The ! operator in C++ is called the NOT operator; it toggles true/false. In the given code fragment: int val; if (!(cin >> val)); the "if" condition will be true when the input value (cin >>) is "not" an integer. It can be clarified further by the following code: int val; if (!(cin >> val)) cout << "not an integer"; else cout << "integer"; A: I think this should work as: if(!(cin>>val)) // this is similar to if((cin>>val)==false) This means the input was not an integer or something similar. I think what is really required is checking isdigit: int val; cin >> val; if(!isdigit(val)) { // goes in if false, otherwise it skips the if statement. } This is just an assumption (could be wrong). If a correct answer is needed, more detail is required, like what the goal is.
doc_23535267
# panel.html.erb <% if content.content_type == "image" && content.content_image_url =~ URI::ABS_URI %> <%= image_tag content.content_image.pinboard_thumb %> <% elsif content.content_image? == false && content.content_value =~ URI::ABS_URI %> <%= image_tag content.content_value %> <% else %> <%= auto_link content.content_value %> <% end %> I'm wondering where I should move this logic from the if/else block. How do I do that the Rails way? Move it to a helper? Or is there a better way? The code isn't working. A: Yes, a helper is the right place, as the code seems to be about rendering only, so there is no business logic there. If you see yourself putting a lot of stuff like this into helpers, you could have a look at http://github.com/jcasimir/draper which is a gem that implements the presenter pattern.
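One way the helper could look (a sketch — the module and method names are invented, image_tag/auto_link are the Rails helpers from the view above, and URI::ABS_URI is approximated with a simple absolute-URL regexp so the decision logic can be unit-tested outside Rails):

```ruby
module ContentsHelper
  ABSOLUTE_URL = %r{\Ahttps?://}  # simplified stand-in for URI::ABS_URI

  # Pure decision function: returns which rendering path to take, so the
  # view (or a test) asks one question instead of three.
  def content_display_mode(content)
    if content.content_type == "image" && content.content_image_url =~ ABSOLUTE_URL
      :thumb
    elsif !content.content_image? && content.content_value =~ ABSOLUTE_URL
      :remote_image
    else
      :text
    end
  end

  def display_content(content)
    case content_display_mode(content)
    when :thumb        then image_tag content.content_image.pinboard_thumb
    when :remote_image then image_tag content.content_value
    else                    auto_link content.content_value
    end
  end
end
```

The view then reduces to `<%= display_content(content) %>`, and the branching can be tested without rendering anything — which is also a natural stepping stone toward the Draper presenter the answer mentions.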
doc_23535268
Traceback (most recent call last): File "/Users/user/PycharmProjects/SSW-540/P8-hajayi.py", line 34, in alphabetCount[ch] += 1 KeyError: '\n' I also noticed that the print statements are repeating and I think it's probably because they are in the for loop. I do not want them to repeat. Also the from and import in "from operator import itemgetter" and "import operator" do not glow blue like the first import. Is it because of the version of Python I'm using? I'm using Python 3.6 from string import punctuation from operator import itemgetter import operator fileName = input('Enter the file name') file = open(fileName, 'r') punc_translator = str.maketrans({key: None for key in punctuation}) documentFile = str(file.read()).translate(punc_translator).lower() print(documentFile) alphabetCount = { "a": 0, "b": 0, "c": 0, "d": 0, "e": 0, "f": 0, "g": 0, "h": 0, "i": 0, "j": 0, "k": 0, "l": 0, "m": 0, "n": 0, "o": 0, "p": 0, "q": 0, "r": 0, "s": 0, "t": 0, "u": 0, "v": 0, "w": 0, "x": 0, "y": 0, "z": 0 } totalWords = 0 totalDistinctWords = 0 for ch in documentFile: if ch != ' ': alphabetCount[ch] += 1 allWords = documentFile.split(' ') wordsCountDict = dict() for word in allWords: totalWords += 1 if word in wordsCountDict.keys(): wordsCountDict[word] += 1 else: wordsCountDict[word] = 1 totalDistinctWords += 1 print(totalWords) print(totalDistinctWords) sortedWordsCount = sorted(wordsCountDict.items(), key=operator.itemgetter(1), reverse=True) sortedCharactersCount = sorted(alphabetCount.items(), key=operator.itemgetter(1),reverse=True) print('The summary of document: ') print("Total words is: " + str(totalWords)) print(totalDistinctWords) print('Most Frequent Characters:') print(sortedCharactersCount) print('Most Frequent Words:') print(sortedWordsCount)
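The KeyError itself comes from the input text: after stripping punctuation the file still contains newline characters (and possibly digits), and '\n' is not a key in alphabetCount. One sketch of a fix is to count only alphabetic characters — collections.Counter also removes the need to pre-build the 26-key dict:

```python
from collections import Counter

def count_letters(text):
    # Only letters are counted, so '\n', digits, and spaces can never
    # raise a KeyError the way alphabetCount[ch] += 1 did.
    return Counter(ch for ch in text.lower() if ch.isalpha())

sample = "Hello,\nWorld 123"
counts = count_letters(sample)
print(counts["l"])             # 3
print(counts.most_common(1))   # [('l', 3)]
```

For the word counts, replacing documentFile.split(' ') with documentFile.split() would likewise stop newlines from being glued onto words, and a Counter over the words gives totalWords (sum of counts), totalDistinctWords (len), and the sorted frequencies via most_common() for free.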
doc_23535269
<form method="post" name="Form1" action="default.asp" onsubmit="return processField();"> <input type="hidden" name="hiddentext1"> <input type="text" name="text1"> <input type="submit" value="Submit" id="button1"> </form> And what I want is to call ProcessField function on the submit of the form. I know that ProcessField function works fine - tested it using an inline call. But now I want to attach the event via JavaScript. Below is my JavaScript code: <script type="text/javascript"> if (window.addEventListener){ window.addEventListener('load', attachFormSubmit, false); } else if (window.attachEvent){ window.attachEvent('onload', attachFormSubmit ); } function attachFormSubmit(){ theForm = document.getElementById("Form1"); alert("attaching form submit"); if (theForm.addEventListener){ theForm.addEventListener('submit', CleanUpEID, false); } else if (theForm.attachEvent){ theForm.attachEvent('onsubmit', CleanUpEID); } } function ProcessField() { alert("processing field"); if (this.text1.value == '') { alert ("Please enter the value") this.text1.focus() return false } this.hiddentext1.value = this.text1.value; // Disable button with ($('button1')) { value = "Processing ..."; disabled = true; } return true; } </script> I have two issues with the above script: * *it attaches the event to the form multiple times - every time page reloads. I suspect there is a better place for my code but cannot figure it out. *Keyword "this" in processField function comes up undefined. It works fine if I replace "this" with form name, but I was wondering what needs to be done in order for keyword "this" to work in this case. I'd really appreciate if someone could point me in the right direction. Thanks. A: EDIT OP had a series of questions that involve some fundamental concepts of JavaScript. I have provided some links that will hopefully answer those questions. See the end of this post. Rewrote the code to demonstrate it actually works with a test server. 
With the syntax errors, you'd never get it working. The biggest error is you call processField() but you define a function called ProcessField(); JavaScript is case-sensitive. In order to function for your purposes, you'll have to change the form's action of course. I had to validate its input for a min of 5 and a max of 15 alphanumerics due to the test server's limits, so you'll want to change that as well probably. * *it attaches the event to the form multiple times - every time the page reloads. I suspect there is a better place for my code but cannot figure it out You are adding/attaching the eventListener/Handler to the window, which makes your submit event global; plus you didn't provision any way to prevent default behavior, so any form and form elements that by default are triggered by a submit event will pop up on the event chain. I added the eventListener to the form and then used stopPropagation(); to prevent any unintentional triggering during the bubbling phase. *Keyword "this" in processField function comes up undefined. It works fine if I replace "this" with form name, but I was wondering what needs to be done in order for keyword "this" to work in this case. See explanation above concerning the typo of processField.* Btw, I didn't bother adding the cross-browser crap attachEvent because IE8 is dead, and that 1% of the world can use IE9. If you want to cater to that 1% just apply attachEvent as you did in your code.
http://glpjt.s3.amazonaws.com/so/34775593.html <!doctype html> <html> <head> <meta charset="utf-8"> <title>34775593</title> </head> <body> <form method="post" id="Form1" name="Form1" action="http://www.hashemian.com/tools/form-post-tester.php/so_post"> <input type="hidden" id="hiddentext1" name="hiddentext1"> <input type="text" name="text1" id="text1" placeholder="Submit 5 to 15 alphanumerics"> <input type="submit" value="Submit" id="button1"> </form> <p>Once post is submited, server's response is returned with value of #hiddentext1 and #text1</p> <div class="serverData"> <a href="http://www.hashemian.com/tools/form-post-tester.php/data.php/so_post">Server-side Results</a> </div> <script> var form = document.getElementById('Form1'); var htxt = document.getElementById('hiddentext1'); var btn = document.getElementById('button1'); form.addEventListener('submit', processField, false); function processField(e) { var txt = document.getElementById('text1'); var str = txt.value; var alnum = str.length; alert("processing field"); if (alnum < 5 || alnum > 15) { alert("Please enter a value of 5 to 15 alphanumeric values "); txt.focus(); return false; } htxt.value = str; // Disable button this.setAttribute('disabled', true); this.value = "Processing ..."; e.stopPropagation(); } </script> </body> </html> References addEventListener() Bubbling & capturing Loading <script> Getting elements
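One more detail on issue 1 (multiple attachment), illustrated here outside the browser — a sketch runnable in Node 15+ where EventTarget and Event are global: addEventListener ignores an exact duplicate of the same (type, listener, capture) triple, so re-attaching the same named function is harmless, whereas passing a fresh anonymous function each time stacks up one handler per call.

```javascript
// addEventListener deduplicates identical (type, listener) pairs.
const target = new EventTarget();
let count = 0;
function onPing() { count += 1; }

target.addEventListener("ping", onPing);
target.addEventListener("ping", onPing);    // same reference: ignored
target.addEventListener("ping", () => {});  // new reference: a 2nd handler
target.dispatchEvent(new Event("ping"));
console.log(count); // 1 — onPing ran once despite two attach calls
```

So as long as attachFormSubmit registers the same named function, reloads cannot pile up duplicate submit handlers; the duplicates only appear with inline closures.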
doc_23535270
All is running, but quite often and absolutely random (even if I scroll very slowly), I get this error: I/flutter (10990): EXCEPTION CAUGHT BY IMAGE RESOURCE SERVICE I/flutter (10990): The following HttpException was thrown resolving an image codec: I/flutter (10990): Connection closed before full header was received, uri = I/flutter (10990): http://www.attilofficina.altervista.org/phpbackend/JOB/000004/mockup/000004_017.jpg I/flutter (10990): I/flutter (10990): When the exception was thrown, this was the stack: I/flutter (10990): #0 NetworkImage._loadAsync (package:flutter/src/painting/image_provider.dart:525:41) I/flutter (10990): <asynchronous suspension>. And those images are skipped! Is there a way to handle this error and force reload of those images missing? Special thanks in advance I've also tried with paginated listView, with minimum number of pages to be load at a time, but this doesn't solve and the error always returns randomly. Here flutter doctor -v [√] Flutter (Channel stable, v1.7.8+hotfix.3, on Microsoft Windows [Versione 10.0.17134.885], locale it-IT) • Flutter version 1.7.8+hotfix.3 at C:\src\flutter • Framework revision b712a172f9 (7 days ago), 2019-07-09 13:14:38 -0700 • Engine revision 54ad777fd2 • Dart version 2.4.0 [√] Android toolchain - develop for Android devices (Android SDK version 28.0.3) • Android SDK at C:\Users\Mussa.DESKTOP-HFFLS0G\AppData\Local\Android\sdk • Android NDK location not configured (optional; useful for native profiling support) • Platform android-28, build-tools 28.0.3 • Java binary at: C:\Program Files\Android\Android Studio\jre\bin\java • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01) • All Android licenses accepted. 
[√] Android Studio (version 3.4) • Android Studio at C:\Program Files\Android\Android Studio • Flutter plugin version 37.0.1 • Dart plugin version 183.6270 • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01) [√] VS Code (version 1.36.0) • VS Code at C:\Users\Mussa.DESKTOP-HFFLS0G\AppData\Local\Programs\Microsoft VS Code • Flutter extension version 3.2.0 [√] Connected device (1 available) • Android SDK built for x86 • emulator-5554 • android-x86 • Android P (API 27) (emulator) • No issues found! A: I was having exactly the same issue and was never able to figure out the cause of the issue. I ended up using the Cached network image library which fixed my problems with image loading. A: This issue still exists even though the issue on GitHub was closed. This error is really random and we can't know when it will occur. My solution to this issue is to use FadeInImage to detect when the image request fails with "Connection closed before full header was received, URI =", and handle it in the imageErrorBuilder method. I created a new Widget called UrlImage: FadeInImage.memoryNetwork( placeholder: kTransparentImage, image: imageURL, imageErrorBuilder: (context, error, stacktrace) { // Handle Error for the 1st time return FadeInImage.memoryNetwork( placeholder: kTransparentImage, image: imageURL, imageErrorBuilder: (context, error, stacktrace) { // Handle Error for the 2nd time return FadeInImage.memoryNetwork( fit: BoxFit.cover, placeholder: kTransparentImage, image: imageURL, imageErrorBuilder: (context, error, stacktrace) { // Handle Error for the 3rd time to return text return Center(child: Text('Image Not Available')); }, ); }, ); For the full Widget in more detail, you can see the code on my Gist: https://gist.github.com/Robihamanto/5e0dd358d4da90603683ca74430aff8a A: It may be because the widget is reused in the widget tree and thus interrupts the HTTP request. You could then use a key in your FutureBuilder or Image widget.
child: FutureBuilder<File>( key: ValueKey(imageUrl), // or use UniqueKey() ... A: The issue still exists as of writing this. I tried with CachedNetworkImage package and the result is same. Finally I ended up using extended_image and it works like charm. Have a look at the plugin here- https://pub.dev/packages/extended_image I have completed ported my application to use this package. Loading, Error, Success events also can be handled easily like CachedNetworkImage
doc_23535271
I would like to update this column to allow NULLs. I have the following script to do this however I would like to check first if the column is already NULL (or NOT NULL), as it may have been changed previously. ALTER TABLE [dbo].[aud] ALTER COLUMN [actname] nvarchar(50) NULL Any help appreciated. A: select is_nullable from sys.columns c inner join sys.tables t on t.object_id = c.object_id where t.name = 'aud' and c.name = 'actname' Will give you a BIT representing whether it is nullable or not. So you could switch on this like IF EXISTS(SELECT * from sys.columns c inner join sys.tables t on t.object_id = c.object_id where t.name = 'aud' and c.name = 'actname' AND is_nullable = 1) BEGIN --What to do if nullable END ELSE BEGIN --What to do if not nullable END END That of course assumes that the table and column exist at all... A: There isn't really a need to do that, because if it's already Nullable, changing a column from Nullable to Nullable will have no negative effect. However you can do it with this query: SELECT is_nullable FROM sys.columns WHERE object_id=object_id('YourTable') AND name = 'yourColumn' A: Use COLUMNPROPERTY to get column property . You may write something like SELECT COLUMNPROPERTY(OBJECT_ID('dbo.aud'),'actname','AllowsNull') AS 'AllowsNull'; For more information please visit this link
doc_23535272
import { ApolloClient, InMemoryCache, HttpLink } from "@apollo/client" //import { RestLink } from "apollo-link-rest" import fetch from "isomorphic-fetch" const xml2js = require("xml2js") const parseXmlResponseToJson = xml => { // The function is not being fired const { parseString } = xml2js let jsonFeed = null parseString(xml, function (err, result) { jsonFeed = result }) return jsonFeed } const restLink = new HttpLink({ uri: "https://cors-anywhere.herokuapp.com/https://www.w3schools.com/xml/note.xml", responseTransformer: async response => response.text().then(xml => parseXmlResponseToJson(xml)), fetch, }) export const client = new ApolloClient({ link: restLink, cache: new InMemoryCache(), }) The provider looks like this import React from "react" import { client } from "../apollo" import { ApolloProvider as Provider } from "@apollo/client" function ApolloProvider({ children }) { return <Provider client={client}>{children}</Provider> } export default ApolloProvider And the query I try to use looks like this import React from "react" import gql from "graphql-tag" import { useQuery } from "@apollo/client" const APOLLO_QUERY = gql` { note { to from heading body } } ` const Component = () => { const { loading, error, data } = useQuery(APOLLO_QUERY) console.log("Apollo data", loading, error, data) The error returned is Unexpected token < in JSON at position 2 // ... I have been struggling with this for days. What am I doing wrong? I am trying to make it work with the w3schools very simple XML example https://www.w3schools.com/xml/note.xml
doc_23535273
If I precompute the drill down levels I'm interested in, I have to put those values into a separate fact-table / measure group for each drill down level, or don't I? Is it possible to do this in a way that is transparent to the end user? So it should look like there is only one fact table and SSAS automatically selects the value from the correct fact table based on the drill-down level? A: I found the answer in a Microsoft forum: http://social.msdn.microsoft.com/Forums/en-US/sqlanalysisservices/thread/d5998b23-936b-4e7b-b593-bd164b20c022 On the Calculate tab you can define a scope statement: In this really trivial example, internet sales amount will be shown when reseller sales amount is chosen (at calendar quarters). scope([Measures].[Reseller Sales Amount], [Date].[Calendar].[Calendar Quarter]); this=[Measures].[Internet Sales Amount]; end scope;
doc_23535274
@commands.Cog.listener() async def on_member_join(member): role = discord.utils.get(member.guild.roles, name="Unverified") await member.add_roles(role) channel = discord.utils.get(member.guild.channels, name="welcome") embed = discord.Embed(title=f"Welcome {member}", color=discord.Colour.blue()) await channel.send(embed=embed) Traceback Traceback (most recent call last): File "C:\Users\Zsombor\AppData\Local\Programs\Python\Python36-32\lib\site-packages\discord\client.py", line 343, in _run_event await coro(*args, **kwargs) TypeError: on_member_join() takes 1 positional argument but 2 were given A: You don't get a Context parameter for the on_member_join event, only a Member parameter. Therefore, you'll need to access the Guild in some other way. Fortunately, this can easily be done; Member objects have a guild attribute. You can also simplify grabbing the Unverified role and welcome channel by using discord.utils.get. So, with those changes: @commands.Cog.listener() async def on_member_join(self, member): role = discord.utils.get(member.guild.roles, name="Unverified") await member.add_roles(role) channel = discord.utils.get(member.guild.channels, name="welcome") embed = discord.Embed(title=f"Welcome {member}", color=discord.Colour.blue()) await channel.send(embed=embed)
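The TypeError can be reproduced without discord at all — it is ordinary Python method binding: a function defined inside a class receives the instance as its implicit first argument, so a cog listener written with only (member) is really called with two arguments. A hypothetical FakeCog class (name invented) shows the same error and the fix from the answer:

```python
class FakeCog:
    def on_member_join(member):        # missing `self` — the bug
        return member

try:
    FakeCog().on_member_join("Alice")  # Python passes (instance, "Alice")
except TypeError as exc:
    print(exc)  # the familiar "takes 1 positional argument but 2 were given"

class FixedCog:
    def on_member_join(self, member):  # `self` restored, as in the answer
        return member

print(FixedCog().on_member_join("Alice"))  # Alice
```

Inside a Cog, discord.py invokes the listener as a bound method, so the first slot is always self and the Member lands in the second — exactly why the answer's signature is `(self, member)`.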
doc_23535275
this.pictureBox1.ImageLocation = "d:\\*.png"; The directory always consists of a single .png file, though it changes name periodically. A: You can't use wildcards on a PictureBox, however Directory.GetFiles does support them. So you could use that like so: string[] files = Directory.GetFiles(@"D:\", "*.png"); if (files.Length > 0) { // File(s) were found. You can now either decide // which one to display or just display the first // one pictureBox1.ImageLocation = files[0]; } else { // No files found. Display a default image or something } A: The ImageLocation property is the path to a single image resource (file or url). You can use Directory.GetFiles to enumerate the file(s) in the target folder using wildcards. A: No. ImageLocation must specify the location of a single file to be displayed. If you wish to display multiple images, you will need multiple picture box controls.
doc_23535276
E0209 23:21:41.300842 6 token_source.go:152] Unable to rotate token: failed to read token file "/var/run/secrets/kubernetes.io/serviceaccount/token": open /var/run/secrets/kubernetes.io/serviceaccount/token: permission denied E0209 23:21:41.316286 6 token_source.go:152] Unable to rotate token: failed to read token file "/var/run/secrets/kubernetes.io/serviceaccount/token": open /var/run/secrets/kubernetes.io/serviceaccount/token: permission denied Nginx is running as uid 101 but the serviceaccount directory is owned by the root user. How do I fix this error? Thanks A: It's kinda odd; in my tests I haven't experienced such an error. You could use securityContext and set * *fsGroup: 101 or *runAsUser/runAsGroup But the ingress-nginx chart already sets an appropriate securityContext (for example to bind on 80/443), so it should work. A: As @sfgroups mentioned, the solution is to make nginx run not as uid 101 but as 0. Add a flag to the installation command: --set controller.image.runAsUser=0. Read: helm-nginx-ingress-installation.
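The first answer's securityContext route, sketched as a pod spec fragment (field values taken from the answer; this is not a full manifest, and the container name is assumed): fsGroup makes mounted volumes — including the projected service-account token — group-readable by the gid the nginx worker runs as, which avoids falling back to running the controller as root.

```yaml
spec:
  securityContext:
    fsGroup: 101          # volumes (incl. the token) become group gid 101
  containers:
    - name: nginx-ingress-controller
      securityContext:
        runAsUser: 101
        runAsGroup: 101
```

Between the two answers, this one keeps the container unprivileged, whereas `--set controller.image.runAsUser=0` trades the permission error for a root-running ingress.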
doc_23535277
On certain views in my Rails app, I want to have other settings, like placeholders for example. I'm not sure if version 4 includes a lot of changes compared to some of the older ones, but here are some other stackoverflow references that I've used: How do I add a placeholder attribute to an instance of CKEditor? This one in particular suggests using CKEDITOR.replace("myeditor" , config ); to replace the placeholder, but that doesn't work for me. When I replace "myeditor" with the ID of my textarea element, such as this: CKEDITOR.replace("randomTextAreaID" , {placeholder: "Hello World"} );, I get this error: Uncaught The editor instance "randomTextAreaID" is already attached to the provided element. There are other suggestions that appear as "hacks" from 2012, but that is 6 years ago. Can't imagine that setting a placeholder for a text editor would be this difficult. ** EDIT ** So I got it to work like this: var editor = CKEDITOR.instances['randomTextAreaID']; if (editor) { editor.destroy(true); } CKEDITOR.replace( 'randomTextAreaID', { height: 20, }); but it still looks like trying to set a placeholder isn't working. I've also downloaded the placeholder plugin and placed it here: views/assets/javascripts/ckeditor/plugins/placeholder and still no luck. No javascript loading errors, just simply not working. A: If you wish to use placeholders, please add this plugin to your CKEditor build. Since the placeholder plugin requires plugins which also have a couple of dependencies, I would recommend using the online builder. Simply select your package (e.g. standard), add the placeholder plugin, and the builder will take care of dependencies. Next download the minified version. Once you have that working you need to add the placeholder in the instance configuration (config.js is the global configuration, while configuration in the replace method is instance specific): CKEDITOR.replace( 'randomTextAreaID', { height: 20, extraPlugins : 'placeholder' }); Placeholders can be inserted using the placeholder dialog.
They will also auto-change (e.g. your initial content) text like [[any text]] to placeholder. To use more technical terms - this text will be upcasted to placeholder widget. Please also see the placeholder demo in action: https://sdk.ckeditor.com/samples/placeholder.html A: ckeConfig = { extraPlugins: 'placeholder' }; --------------------------------------------------------------------- <ckeditor id="editor1" [config]="ckeConfig" (namespaceLoaded)="onNamespaceLoaded($event)" [(data)]="editorData"> </ckeditor> ---------------------------------------------------------------------------- public onNamespaceLoaded( event ) { event.on('dialogDefinition', function(event) { if ('placeholder' == event.data.name) { var input = event.data.definition.getContents('info').get('name'); input.id= 'aps-placeholders', input.type = 'select'; input.items = [ ['Company'], ['Email'], ['First Name'], ['Last Name'] , ['City'] , ['Province'] , ['Province']]; input.setup = function() { this.setValue('Company'); }; } }); } Add this method in ts , it will work fine if anyone has the same issue with cdkplaceholder ,Above solution is for angular(2+)
doc_23535278
Suddenly I am getting the following error:

```
SqlException: The INSERT permission was denied on the object '', database '', schema 'dbo'.
```

I checked my user, which is db_owner in the database, and added my user directly to the table itself. I am still getting the error. How can I see in SQL Profiler which account executed the query? This is a virtual machine and everything is installed using my account.

Edit 1:

* The SQL Profiler is running inside the same machine where SQL Server is installed. It is a completely isolated environment; everything runs inside the same machine.
* Everything is installed under one account, and that is my account. No other account is used.
* SQL Server Agent (MSSQLServer) is running under NT Service\SQLServeragent
* SQL Server: NT Service\MSSQLSERVER
* My account has sysadmin and db_owner rights on the database
* I gave direct access to the tables as well, but still no luck

A: In terms of capturing the offending batch/query in SQL Profiler, I'd suspect that perhaps you aren't capturing the correct events. If an exception is being thrown, the only way you'd be able to see which SQL batch/statement caused the exception would be by including "starting" events (in addition to the more commonly captured "completed" events). Running a trace with the following events should allow you to see which procedure/statement is throwing the exception:

* SQL:BatchStarting
* SQL:BatchCompleted
* SQL:StmtStarting
* SQL:StmtCompleted
* RPC:Starting
* RPC:Completed
* SP:Starting
* SP:Completed
* SP:StmtStarting
* SP:StmtCompleted
* Exception

You mentioned you're using EF, so you could likely safely ignore events 5 and 6, and also 7-9 if you're not actually executing a sproc. Be sure you capture all associated columns in the trace as well (that should be the default if you are running the trace from the Profiler tool).
The Exception class will include the actual error in your trace, which should let you see the immediately preceding statement within the same SPID that threw the exception. You must include the starting events in addition to the completed events, because an exception will prevent the associated completed events from firing in the trace.
doc_23535279
Edit: One thing I have noticed is that if I change keyup to blur it will work fine. I need this to run on keyup, though. It will also accept .25 by itself, but not a number followed by a decimal, for example 2.25.

```js
$(document).on('keyup', '.Monday', findTotalMon);

function findTotalMon() {
    var arr = document.getElementsByClassName('Monday');
    var tot = 0;
    for (var i = 0; i < arr.length; i++) {
        if (parseFloat(arr[i].value)) {
            var newValue;
            newValue = (.25 * Math.round(4 * arr[i].value));
            arr[i].value = newValue;
            tot += parseFloat(newValue);
        }
    }
    document.getElementById('totalHoursMon').value = tot;
    if (tot === 0) {
        document.getElementById('totalHoursMon').value = '';
    }
}
```

```html
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"></script>
<input class="form-control full-width Monday" name="monday" id="monday" type="number" step="any" /><br><br>
<input class="form-control full-width totalBox" name="total" id="totalHoursMon" type="text" readonly="readonly"/>
```

A: The issue lies in the use of keyup and the overwriting of your field's value with the newly parsed/rounded float. After typing your decimal separator, the input is parsed and rounded and the separator disappears, because "1." or "1," is parsed to 1. One way to solve this is to only overwrite the field when the rounded input is different from the original input.

```js
$(document).on('keyup', '.Monday', findTotalMon);

function findTotalMon() {
    var arr = document.getElementsByClassName('Monday');
    var originalValue, newValue, tot = 0;
    for (var i = 0; i < arr.length; i++) {
        originalValue = arr[i].value;
        if (parseFloat(originalValue)) {
            newValue = (.25 * Math.round(4 * arr[i].value));
            tot += parseFloat(newValue);
            if (newValue != originalValue) {
                // we're only overwriting input when the rounded value is different than the original
                arr[i].value = newValue;
            }
        }
    }
    document.getElementById('totalHoursMon').value = tot;
    if (tot === 0) {
        document.getElementById('totalHoursMon').value = '';
    }
}
```

```html
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"></script>
<input class="form-control full-width Monday" name="monday" id="monday" type="number" step="any" /><br><br>
<input class="form-control full-width totalBox" name="total" id="totalHoursMon" type="text" readonly="readonly"/>
```

On a side note: using keyup and overwriting your user's input is really annoying. For instance, while testing, I realized you can't use backspace to correct your input. I know you said you had to use keyup, but I strongly advise you to use blur or keypress instead, for your users' sake. :)
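For reference, the rounding rule both snippets rely on (round to the nearest quarter) can be sketched on its own. This is an illustration in Python, not part of the original jQuery code; note that Python's `round()` uses banker's rounding at exact .5 boundaries, whereas JavaScript's `Math.round` rounds half up, so the two can differ at values like 2.125.

```python
# Round a number of hours to the nearest 0.25:
# multiply by 4, round to the nearest integer, then multiply by 0.25.
def round_to_quarter(value):
    return 0.25 * round(4 * value)

print(round_to_quarter(2.3))   # 2.25
print(round_to_quarter(2.4))   # 2.5
print(round_to_quarter(0.1))   # 0.0
```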
doc_23535280
Example: I have selected the date as "Thu Feb 22 2018 00:00:00 GMT+0530 (India Standard Time)". This is the value I get when I log it in the console. But when I debug in C#, the value has changed: {21-02-2018 18:30:00}. How do I handle the TypeScript Date object when passing it to an API method and displaying it back?

My TypeScript model:

```ts
export class Visitor {
    public id: number;
    public firstname: string;
    public lastname: string;
    public dob: Date;
    public genderId: number;
    public age: number;
}
```

and my C# model:

```csharp
public class Visitor
{
    public int Id { get; set; }
    public string Lastname { get; set; }
    public string Firstname { get; set; }
    public int Age { get; set; }
    public DateTime DOB { get; set; }
}
```

A: As long as you are passing a valid Date object you should be fine. Your problem probably lies in the Culture settings on the .NET side. Can you please try doing this in your controller action:

```csharp
var culture = new CultureInfo("gu-IN");
CultureInfo.DefaultThreadCurrentCulture = culture;
CultureInfo.DefaultThreadCurrentUICulture = culture;
```

Since you provide no code, this is merely a guess.

A: When you print to the console, the browser prints the date in the local time zone, which is India in your case. When you retrieve the value on the server, C# treats the date as UTC. That is why you see a difference of 5.30 hours: India time is UTC + 5.30 hours.

If you store the date value in UTC on the server side, then convert the Date object's value on the client side to UTC using a library such as Moment.js. When retrieving the value from the server, send the UTC value and convert it to the local time zone, again using a library such as Moment.js. I think this strategy will work.
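The 5.30-hour shift described in the second answer can be reproduced in a few lines (shown in Python for illustration only; the original stack is TypeScript and C#): the same instant rendered in IST versus UTC.

```python
from datetime import datetime, timezone, timedelta

# India Standard Time is UTC+05:30
ist = timezone(timedelta(hours=5, minutes=30))

# "Thu Feb 22 2018 00:00:00 GMT+0530" as a timezone-aware datetime
local_midnight = datetime(2018, 2, 22, 0, 0, 0, tzinfo=ist)

# The identical instant, expressed in UTC, is the value the server sees
as_utc = local_midnight.astimezone(timezone.utc)
print(as_utc)  # 2018-02-21 18:30:00+00:00
```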
doc_23535281
Thread stack https-jsse-nio-8443-exec-5 at com.vaadin.flow.dom.Element$$Lambda$1668.get$Lambda(Lcom/vaadin/flow/component/internal/PendingJavaScriptInvocation;)Lcom/vaadin/flow/function/SerializableConsumer; (Unknown Source) at java.lang.invoke.DirectMethodHandle$Holder.invokeStatic(Ljava/lang/Object;Ljava/lang/Object;)Ljava/lang/Object; (Unknown Source) at java.lang.invoke.Invokers$Holder.linkToTargetMethod(Ljava/lang/Object;Ljava/lang/Object;)Ljava/lang/Object; (Unknown Source) at com.vaadin.flow.dom.Element.lambda$scheduleJavaScriptInvocation$9c7dc614$1(Lcom/vaadin/flow/internal/StateNode;Lcom/vaadin/flow/component/internal/PendingJavaScriptInvocation;Lcom/vaadin/flow/component/UI;)V (Element.java:1493) at com.vaadin.flow.dom.Element$$Lambda$1316.accept(Ljava/lang/Object;)V (Unknown Source) at com.vaadin.flow.internal.StateNode.runWhenAttached(Lcom/vaadin/flow/function/SerializableConsumer;)V (StateNode.java:895) at com.vaadin.flow.dom.Element.scheduleJavaScriptInvocation(Ljava/lang/String;Ljava/util/stream/Stream;)Lcom/vaadin/flow/component/page/PendingJavaScriptResult; (Element.java:1493) at com.vaadin.flow.dom.Element.callJsFunction(Ljava/lang/String;[Ljava/io/Serializable;)Lcom/vaadin/flow/component/page/PendingJavaScriptResult; (Element.java:1392) at com.vaadin.flow.component.combobox.ComboBoxDataController$UpdateQueue.lambda$enqueue$0(Ljava/lang/String;[Ljava/io/Serializable;)V (ComboBoxDataController.java:109) at com.vaadin.flow.component.combobox.ComboBoxDataController$UpdateQueue$$Lambda$1789.run()V (Unknown Source) at com.vaadin.flow.component.combobox.ComboBoxDataController$UpdateQueue$$Lambda$1793.accept(Ljava/lang/Object;)V (Unknown Source) at java.util.ArrayList.forEach(Ljava/util/function/Consumer;)V (ArrayList.java:1541) at com.vaadin.flow.component.combobox.ComboBoxDataController$UpdateQueue.commit(I)V (ComboBoxDataController.java:103) at 
com.vaadin.flow.data.provider.DataCommunicator.passivateInactiveKeys(Ljava/util/Set;Lcom/vaadin/flow/data/provider/ArrayUpdater$Update;Z)V (DataCommunicator.java:1304) at com.vaadin.flow.data.provider.DataCommunicator.performUpdate(Ljava/util/Set;Lcom/vaadin/flow/internal/Range;Lcom/vaadin/flow/internal/Range;Lcom/vaadin/flow/data/provider/DataCommunicator$Activation;)V (DataCommunicator.java:1225) at com.vaadin.flow.data.provider.DataCommunicator.flush()V (DataCommunicator.java:1171) at com.vaadin.flow.data.provider.DataCommunicator.lambda$requestFlush$7258256f$1(Lcom/vaadin/flow/internal/ExecutionContext;)V (DataCommunicator.java:1103) at com.vaadin.flow.data.provider.DataCommunicator$$Lambda$1416.accept(Ljava/lang/Object;)V (Unknown Source) at com.vaadin.flow.internal.StateTree.lambda$runExecutionsBeforeClientResponse$2(Lcom/vaadin/flow/internal/StateTree$BeforeClientResponseEntry;)V (StateTree.java:392) at com.vaadin.flow.internal.StateTree$$Lambda$1720.accept(Ljava/lang/Object;)V (Unknown Source) at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(Ljava/lang/Object;)V (ForEachOps.java:183) at java.util.stream.ReferencePipeline$2$1.accept(Ljava/lang/Object;)V (ReferencePipeline.java:177) at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(Ljava/util/function/Consumer;)V (ArrayList.java:1655) at java.util.stream.AbstractPipeline.copyInto(Ljava/util/stream/Sink;Ljava/util/Spliterator;)V (AbstractPipeline.java:484) at java.util.stream.AbstractPipeline.wrapAndCopyInto(Ljava/util/stream/Sink;Ljava/util/Spliterator;)Ljava/util/stream/Sink; (AbstractPipeline.java:474) at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(Ljava/util/stream/PipelineHelper;Ljava/util/Spliterator;)Ljava/lang/Void; (ForEachOps.java:150) at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(Ljava/util/stream/PipelineHelper;Ljava/util/Spliterator;)Ljava/lang/Object; (ForEachOps.java:173) at 
java.util.stream.AbstractPipeline.evaluate(Ljava/util/stream/TerminalOp;)Ljava/lang/Object; (AbstractPipeline.java:234) at java.util.stream.ReferencePipeline.forEach(Ljava/util/function/Consumer;)V (ReferencePipeline.java:497) at com.vaadin.flow.internal.StateTree.runExecutionsBeforeClientResponse()V (StateTree.java:389) at com.vaadin.flow.server.communication.UidlWriter.encodeChanges(Lcom/vaadin/flow/component/UI;Lelemental/json/JsonArray;)V (UidlWriter.java:390) at com.vaadin.flow.server.communication.UidlWriter.createUidl(Lcom/vaadin/flow/component/UI;ZZ)Lelemental/json/JsonObject; (UidlWriter.java:174) at com.vaadin.flow.server.communication.UidlRequestHandler.createUidl(Lcom/vaadin/flow/component/UI;Z)Lelemental/json/JsonObject; (UidlRequestHandler.java:158) at com.vaadin.flow.server.communication.UidlRequestHandler.writeUidl(Lcom/vaadin/flow/component/UI;Ljava/io/Writer;Z)V (UidlRequestHandler.java:146) at com.vaadin.flow.server.communication.UidlRequestHandler.synchronizedHandleRequest(Lcom/vaadin/flow/server/VaadinSession;Lcom/vaadin/flow/server/VaadinRequest;Lcom/vaadin/flow/server/VaadinResponse;)Z (UidlRequestHandler.java:116) at com.vaadin.flow.server.SynchronizedRequestHandler.handleRequest(Lcom/vaadin/flow/server/VaadinSession;Lcom/vaadin/flow/server/VaadinRequest;Lcom/vaadin/flow/server/VaadinResponse;)Z (SynchronizedRequestHandler.java:40) at com.vaadin.flow.server.VaadinService.handleRequest(Lcom/vaadin/flow/server/VaadinRequest;Lcom/vaadin/flow/server/VaadinResponse;)V (VaadinService.java:1564) at com.vaadin.flow.server.VaadinServlet.service(Ljavax/servlet/http/HttpServletRequest;Ljavax/servlet/http/HttpServletResponse;)V (VaadinServlet.java:364) at com.vaadin.cdi.CdiVaadinServlet.service(Ljavax/servlet/http/HttpServletRequest;Ljavax/servlet/http/HttpServletResponse;)V (CdiVaadinServlet.java:67) at javax.servlet.http.HttpServlet.service(Ljavax/servlet/ServletRequest;Ljavax/servlet/ServletResponse;)V (HttpServlet.java:733) at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(Ljavax/servlet/ServletRequest;Ljavax/servlet/ServletResponse;)V (ApplicationFilterChain.java:231) at org.apache.catalina.core.ApplicationFilterChain.doFilter(Ljavax/servlet/ServletRequest;Ljavax/servlet/ServletResponse;)V (ApplicationFilterChain.java:166) at org.apache.tomcat.websocket.server.WsFilter.doFilter(Ljavax/servlet/ServletRequest;Ljavax/servlet/ServletResponse;Ljavax/servlet/FilterChain;)V (WsFilter.java:53) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(Ljavax/servlet/ServletRequest;Ljavax/servlet/ServletResponse;)V (ApplicationFilterChain.java:193) at org.apache.catalina.core.ApplicationFilterChain.doFilter(Ljavax/servlet/ServletRequest;Ljavax/servlet/ServletResponse;)V (ApplicationFilterChain.java:166) at org.apache.catalina.core.StandardWrapperValve.invoke(Lorg/apache/catalina/connector/Request;Lorg/apache/catalina/connector/Response;)V (StandardWrapperValve.java:202) at org.apache.catalina.core.StandardContextValve.invoke(Lorg/apache/catalina/connector/Request;Lorg/apache/catalina/connector/Response;)V (StandardContextValve.java:97) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(Lorg/apache/catalina/connector/Request;Lorg/apache/catalina/connector/Response;)V (AuthenticatorBase.java:542) at org.apache.catalina.core.StandardHostValve.invoke(Lorg/apache/catalina/connector/Request;Lorg/apache/catalina/connector/Response;)V (StandardHostValve.java:143) at org.apache.catalina.valves.ErrorReportValve.invoke(Lorg/apache/catalina/connector/Request;Lorg/apache/catalina/connector/Response;)V (ErrorReportValve.java:92) at org.apache.catalina.valves.AbstractAccessLogValve.invoke(Lorg/apache/catalina/connector/Request;Lorg/apache/catalina/connector/Response;)V (AbstractAccessLogValve.java:690) at org.apache.catalina.valves.RemoteIpValve.invoke(Lorg/apache/catalina/connector/Request;Lorg/apache/catalina/connector/Response;)V (RemoteIpValve.java:764) at 
org.apache.catalina.core.StandardEngineValve.invoke(Lorg/apache/catalina/connector/Request;Lorg/apache/catalina/connector/Response;)V (StandardEngineValve.java:78) at org.apache.catalina.connector.CoyoteAdapter.service(Lorg/apache/coyote/Request;Lorg/apache/coyote/Response;)V (CoyoteAdapter.java:343) at org.apache.coyote.http11.Http11Processor.service(Lorg/apache/tomcat/util/net/SocketWrapperBase;)Lorg/apache/tomcat/util/net/AbstractEndpoint$Handler$SocketState; (Http11Processor.java:374) at org.apache.coyote.AbstractProcessorLight.process(Lorg/apache/tomcat/util/net/SocketWrapperBase;Lorg/apache/tomcat/util/net/SocketEvent;)Lorg/apache/tomcat/util/net/AbstractEndpoint$Handler$SocketState; (AbstractProcessorLight.java:65) at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(Lorg/apache/tomcat/util/net/SocketWrapperBase;Lorg/apache/tomcat/util/net/SocketEvent;)Lorg/apache/tomcat/util/net/AbstractEndpoint$Handler$SocketState; (AbstractProtocol.java:888) at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun()V (NioEndpoint.java:1597) at org.apache.tomcat.util.net.SocketProcessorBase.run()V (SocketProcessorBase.java:49) at java.util.concurrent.ThreadPoolExecutor.runWorker(Ljava/util/concurrent/ThreadPoolExecutor$Worker;)V (ThreadPoolExecutor.java:1128) at java.util.concurrent.ThreadPoolExecutor$Worker.run()V (ThreadPoolExecutor.java:628) at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run()V (TaskThread.java:61) at java.lang.Thread.run()V (Thread.java:829)
doc_23535282
background_splash.xml:

```xml
<?xml version="1.0" encoding="utf-8"?>
<layer-list xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:drawable="@color/dark_blue"/>
    <item android:top="100dp">
        <bitmap
            android:gravity="top"
            android:src="@mipmap/img_logo"/>
    </item>
</layer-list>
```

As you can see, I place the logo with a 100dp margin from the top. Then I try to do the same in my fragment layout:

fragment_start.xml:

```xml
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:background="@mipmap/bg_create_account">

    <ImageView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:src="@mipmap/img_logo"
        android:layout_alignParentTop="true"
        android:layout_centerHorizontal="true"
        android:layout_marginTop="100dp" />
</RelativeLayout>
```

But the logo in the layout appears lower than the logo on the splash screen. I thought the problem was the default margin of the Activity, but if I set:

```xml
<dimen name="activity_horizontal_margin">0dp</dimen>
<dimen name="activity_vertical_margin">0dp</dimen>
```

nothing happens. I always see the logo "jump" from top to bottom by about 10-20dp. How can I avoid it?

EDIT: My activity xml:

```xml
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <LinearLayout
        android:id="@+id/container"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:orientation="vertical"/>
</LinearLayout>
```

EDIT 2: I tried to pick the distance manually, and if I set `<item android:top="125dp">` (or 126dp) and leave `android:layout_marginTop="100dp"`, I see no "jump". So the difference is 25 or 26dp, but where does it come from?

EDIT 3: According to the answer from Bryan, the issue exists only on Android 4.4 (API 19) and above.
To avoid it I overrode styles.xml in the values-19 folder with:

```xml
<item name="android:windowTranslucentStatus">true</item>
```

A: It seems the drawable that you use for the splash screen does not take the size of the status bar into account, but the Activity does. This is the ~25dp difference you are observing, though this height of ~25dp is not guaranteed to be the same on all devices.

A: Maybe the problem is the ActionBar size; try adding this to the splash screen:

Without AppCompat:

```xml
android:paddingTop="?android:attr/actionBarSize"
```

With AppCompat:

```xml
android:paddingTop="?attr/actionBarSize"
```

Or, if you want (although it may be considered a bad practice), you can set a negative padding in the Activity layout using data binding:

```xml
android:paddingTop="@{-1 * ?android:actionBarSize}"
```
doc_23535283
Here's the code:

```r
url <- 'http://myneta.info/uttarpradesh2017/index.php?action=summary&subAction=candidates_analyzed&sort=candidate#summary'
webpage <- read_html(url)
candidate_info <- html_nodes(webpage, xpath='//*[@id="main"]/div/div[2]/div[2]/table')
candidate_info <- html_table(candidate_info)
head(candidate_info)
```

But I am getting no output. Can you suggest what I am doing wrong?

A: That site has some very broken HTML. But it's workable. I find it better to target nodes in a slightly less fragile way. The XPath below finds the table by its content. `html_table()` croaks (or took forever and I didn't want to wait), so I ended up building the table "manually".

```r
library(rvest)

# helper to clean column names
mcga <- function(x) {
  make.unique(gsub("(^_|_$)", "", gsub("_+", "_", gsub("[[:punct:][:space:]]+", "_", tolower(x)))), sep = "_")
}

pg <- read_html("http://myneta.info/uttarpradesh2017/index.php?action=summary&subAction=candidates_analyzed&sort=candidate#summary")

# target the table
tab <- html_node(pg, xpath=".//table[contains(thead, 'Liabilities')]")

# get the rows so we can target columns
rows <- html_nodes(tab, xpath=".//tr[td[not(@colspan)]]")

# make a data frame
do.call(
  cbind.data.frame,
  c(lapply(1:8, function(i) {
    html_text(html_nodes(rows, xpath=sprintf(".//td[%s]", i)), trim=TRUE)
  }), list(stringsAsFactors=FALSE))
) -> xdf

# make nicer names; get the header to get column names
xdf <- setNames(xdf, mcga(html_text(html_nodes(tab, "th"))))

str(xdf)
## 'data.frame': 4823 obs. of 8 variables:
##  $ sno          : chr "1" "2" "3" "4" ...
##  $ candidate    : chr "A Hasiv" "A Wahid" "Aan Shikhar Shrivastava" "Aaptab Urf Aftab" ...
##  $ constituency : chr "ARYA NAGAR" "GAINSARI" "GOSHAINGANJ" "MUBARAKPUR" ...
##  $ party        : chr "BSP" "IND" "Satya Shikhar Party" "Islam Party Hind" ...
##  $ criminal_case: chr "0" "0" "0" "0" ...
##  $ education    : chr "12th Pass" "10th Pass" "Graduate" "Illiterate" ...
##  $ total_assets : chr "Rs 3,94,24,827 ~ 3 Crore+" "Rs 75,106 ~ 75 Thou+" "Rs 41,000 ~ 41 Thou+" "Rs 20,000 ~ 20 Thou+" ...
##  $ liabilities  : chr "Rs 58,46,335 ~ 58 Lacs+" "Rs 0 ~" "Rs 0 ~" "Rs 0 ~" ...
```
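For readers more comfortable with Python, here is a rough port of the `mcga` column-name cleaner (an approximation, not part of the original R answer: it omits the `make.unique` de-duplication step, and the character classes differ slightly from `[[:punct:][:space:]]`).

```python
import re

def mcga(name):
    """Lower-case a header, collapse punctuation/whitespace runs to a
    single underscore, and trim leading/trailing underscores."""
    cleaned = re.sub(r"[^\w]+", "_", name.lower())
    cleaned = re.sub(r"_+", "_", cleaned)
    return cleaned.strip("_")

print(mcga("Criminal Case"))   # criminal_case
print(mcga("Total Assets "))   # total_assets
```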
doc_23535284
The command I used for that is:

```
python3 manage.py makemigrations
```

I also tried:

```
python3 manage.py makemigrations polls
```

and it shows "App 'polls' could not be found. Is it in INSTALLED_APPS?"

Here is the INSTALLED_APPS section of my settings.py file:

```python
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'polls.apps.PollsConfig',
]
```

I know some people simply put 'polls'; that doesn't work either.

Here is my folder structure:

Here is my apps.py:

```python
from django.apps import AppConfig

class PollsConfig(AppConfig):
    name = 'polls'
```

Here is my urls.py:

```python
from django.urls import path
from . import views

urlpatterns = [
    path('', views.index, name='index'),
    path('new', views.new),
]
```

I am using Django 2.1 and Python 3.8.
doc_23535285
```sql
SELECT *
FROM obj
WHERE ObjectType = 'user'
  AND ((( valueOne > 6.13661152336E-318 ) and ( valueTwo < 1.68611310981 )
```

The problem is obviously that valueOne is too small and hence cannot be represented within normal 64-bit machine precision. The problem is that I read these values from a file and I do not have control over what input data I get. I would implement a rounding procedure to deal with this, but I am not sure what the smallest representable number (in absolute value) is in SQL Server 2012 Express. Can anyone help me with this?

A: decimal and numeric (fixed precision and scale numbers): when maximum precision is used, valid values are from -10^38 + 1 through 10^38 - 1.

float and real:

* float range: -1.79E+308 to -2.23E-308, 0, and 2.23E-308 to 1.79E+308
* real range: -3.40E+38 to -1.18E-38, 0, and 1.18E-38 to 3.40E+38

int, bigint, smallint, and tinyint:

* bigint range: -2^63 (-9,223,372,036,854,775,808) to 2^63-1 (9,223,372,036,854,775,807)
* int range: -2^31 (-2,147,483,648) to 2^31-1 (2,147,483,647)
* smallint range: -2^15 (-32,768) to 2^15-1 (32,767)
* tinyint range: 0 to 255

6.13661152336E-318 is outside the range representable as a native SQL Server supported type. You can try to use a CLR user-defined type; you'll need a custom CLR library to manipulate such extreme values.
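To see why the value is out of range, a quick check (illustrative, in Python, which unlike SQL Server's float can hold IEEE-754 subnormals) shows that 6.13661152336E-318 lies below the smallest *normal* double:

```python
import sys

tiny = 6.13661152336e-318

# Smallest normal IEEE-754 double: ~2.2250738585072014e-308
print(sys.float_info.min)

# tiny is below that threshold, so it is a subnormal value,
# which SQL Server's float type cannot store.
print(tiny < sys.float_info.min)  # True
print(tiny > 0.0)                 # True
```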
doc_23535286
This works well with a server-rendered MVC application, but I can't get it to work with a SPA using hello.js. I was fiddling with the URLs to get the login page from IdentityServer displayed. The login logic runs fine, but I think some small piece is missing to properly redirect to my client app with an access token.

This is the code I use to initialize hello.js:

```js
const AuthorityUrl = 'http://localhost:5000';

hello.init({
    'openidconnectdemo': {
        oauth: {
            version: '2',
            auth: AuthorityUrl + '/Account/Login',
            grant: AuthorityUrl + '/Grant'
        },
        scope_delim: ' '
    }
});

hello.init({ openidconnectdemo: this.authConfig.clientId });
```

And this is the login code:

```js
hello.login('openidconnectdemo', {
    redirect_uri: this.authConfig.redirectUrl,
    scope: `openid profile`,
    response_type: 'token',
    display: 'page',
});
```

In this code in AccountController, IdentityServer tries to redirect to the application, but this fails because ReturnUrl is null:

```csharp
if (Url.IsLocalUrl(model.ReturnUrl))
{
    return Redirect(model.ReturnUrl);
}
else if (string.IsNullOrEmpty(model.ReturnUrl))
{
    return Redirect("~/");
}
else
{
    // user might have clicked on a malicious link - should be logged
    throw new Exception("invalid return URL");
}
```

Do I have to modify IdentityServer to get them to work together, or is there something missing on the client side? Thanks for any advice!

A: I'd probably recommend using oidc-client-js, as it is a full implementation of OIDC and should be much easier to get working. That said, it looks like your first issue here is that the auth URL should point at the authorize endpoint and not the sign-in UI, i.e. it should be /connect/authorize rather than /account/login.
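To make the suggested fix concrete, here is a hedged sketch (in Python, purely illustrative; the `redirect_uri` value is made up and the `client_id` is only assumed to match the hello.js name above) of the kind of authorize request the client should end up issuing once `auth` points at `/connect/authorize`:

```python
from urllib.parse import urlencode

AUTHORITY = "http://localhost:5000"

params = {
    "client_id": "openidconnectdemo",            # assumed; matches the hello.js key above
    "redirect_uri": "http://localhost:8080/cb",  # made-up value for illustration
    "response_type": "token",
    "scope": "openid profile",
}

# The implicit-flow request goes to IdentityServer's authorize endpoint,
# which then redirects to its own login UI and back with the token.
url = AUTHORITY + "/connect/authorize?" + urlencode(params)
print(url)
```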
doc_23535287
This is my controller code:

```php
public function courseDetails($id)
{
    $courseByID = DB::table('courses')
        ->join('course_outlines', 'course_outlines.course_id', '=', 'courses.id')
        ->where('courses.id', $id)
        ->select('courses.*', 'course_outlines.title1', 'course_outlines.class1')
        ->first();

    return view('frontEnd.pages.courseDetails', ['courseByID' => $courseByID]);
}
```

This is my view code:

```html
<div class="card">
  <div class="card-body">
    <h2 class="">Course Outline</h2>
    <div class="outline_section">
      <h4><span class="outline_header">{{$courseByID->title1}}</span></h4>
      <ul>
        <li>{{$courseByID->class1}}</li>
      </ul>
    </div>
  </div>
</div>
```

I want to show it like that, but using just one row. The dd($courseByID) result:

```
{#386 ▼
  +"id": 60
  +"course_name": "In a vero pariatur."
  +"trainer_name": "Ward Jaskolski"
  +"tution_fee": "25590"
  +"duration": "9"
  +"total_student": null
  +"total_class": "11"
  +"batch_no": "9"
  +"shift": "day"
  +"type": "short"
  +"hours": "112"
  +"start_date": "1979-11-02"
  +"deadline": "1998-12-24"
  +"status": "1"
  +"created_at": null
  +"updated_at": null
  +"title1": "HTML & CSS"
  +"class1": """
    * PC De-assembly and Assembly,
    * Bus Architecture & Interfaces,
    * BIOS, Processor & Motherboard,
    * Operating Systems and Installation (Windows XP/7, Ubuntu), Partitioning & Formatting Hard Disk,
    * Laptop De-assembly and Assembly,
    * Installing adapters,
    * Computer Networking Fundamentals,
    * Networking Media & Hardware,
    * Diagnostics and Troubleshooting
    * Sharing file within LAN
    """
}
```

A: You can do it like this:

```html
<div class="card">
  <div class="card-body">
    <h2 class="">Course Outline</h2>
    <div class="outline_section">
      <h4><span class="outline_header">{{$courseByID->title1}}</span></h4>
      <ul>
        @foreach(str_replace(['""','*'], ['',''], explode(",", $courseByID->class1)) as $key => $val)
          <li>{{ $val }}</li>
        @endforeach
      </ul>
    </div>
  </div>
</div>
```

Above code tested here.
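The explode/str_replace pipeline from the answer can be traced step by step (shown in Python with a shortened sample string for illustration, not the Blade code itself): split on commas, strip the quote and asterisk characters, and trim whitespace.

```python
# Shortened stand-in for the class1 text from the dd() dump above
raw = '"" * PC De-assembly and Assembly, * Bus Architecture & Interfaces ""'

# Mirror str_replace(['""','*'], ['',''], ...) applied per exploded piece
items = [p.replace('""', '').replace('*', '').strip() for p in raw.split(',')]

print(items)  # ['PC De-assembly and Assembly', 'Bus Architecture & Interfaces']
```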
doc_23535288
A: If I understand your question correctly, then this PowerShell script may help: Effective AD Group Membership https://gallery.technet.microsoft.com/scriptcenter/Effective-AD-Group-b4759085
doc_23535289
```csharp
string binary = "10011101";
MessageBox.Show(Convert.ToInt32(binary, 2).ToString("X"));
```

I will get the output 9D. This is the simplest code I can write in C# for converting binary to hexadecimal. Is there any way to do the same in SQL (using some built-in function, if one is available)? Is it fn_varbintohexstr? Is there any bitwise operation which will help do the same conversion? Thanks.

A:

```sql
select master.sys.fn_varbintohexstr(@binvalue)
```

Should do the trick.

A: I'm not aware of a built-in function which will convert a stream of bits into hex. One solution would be to write a CLR function to carry out the conversion using the .NET code you have above. Below is a T-SQL implementation, which splits the string into individual characters and then converts them to hex. If you have your own numbers table, the CTE can be discarded:

```sql
DECLARE @vch_string VARCHAR(MAX)
DECLARE @chr_delim CHAR(1)

SET @chr_delim = ','
SET @vch_string = '10011101'

-- replace nums_cte with a join to a numbers table if you have one,
-- since it will be far more efficient.
;WITH nums_cte AS
(
    SELECT 1 AS n
    UNION ALL
    SELECT n + 1 FROM nums_cte WHERE n < LEN(@vch_string)
)
SELECT CAST(SUM(CAST(SUBSTRING(s, n, 1) AS INT) * POWER(2, n - 1)) AS VARBINARY(1)) AS hexvalue
FROM (SELECT REVERSE(@vch_string) AS s) AS D
JOIN nums_cte ON n <= LEN(s)
OPTION (MAXRECURSION 0);
```
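For comparison (a side note, not a SQL answer), the same conversion takes two lines in Python: parse the bit string in base 2, then format the result as hex.

```python
binary = "10011101"
value = int(binary, 2)          # 157
print(format(value, "X"))       # 9D, matching the C# output above
```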
doc_23535290
I found an extension library for Apache Beam that does it, but I can't find a way to save the sketch itself to BigQuery, so that I can use it later with a merge function and other functions over some sliding time window (see this link).

My code:

```java
.apply("hll-count", Combine.perKey(ApproximateDistinct.ApproximateDistinctFn
        .create(StringUtf8Coder.of())))
.apply("reify-windows", Reify.windows())
.apply("to-table-row", ParDo.of(new DoFn<ValueInSingleWindow<KV<GroupByData, HyperLogLogPlus>>, TableRow>() {
    @ProcessElement
    public void processElement(ProcessContext processContext) {
        ValueInSingleWindow<KV<GroupByData, HyperLogLogPlus>> windowed = processContext.element();
        KV<GroupByData, HyperLogLogPlus> keyData = windowed.getValue();
        GroupByData key = keyData.getKey();
        HyperLogLogPlus hyperLogLogPlus = keyData.getValue();
        if (key != null) {
            TableRow tableRow = new TableRow();
            tableRow.set("country_code", key.countryCode);
            tableRow.set("event", key.event);
            tableRow.set("profile", key.profile);
            tableRow.set("occurrences", hyperLogLogPlus.cardinality());
```

I just found how to do `hyperLogLogPlus.cardinality()`, but how can I write the buffer itself to BigQuery in a way that lets me run a merge function (and others) on it later? Using `hyperLogLogPlus.getBytes` also didn't work for merging.

A: Currently this functionality is not supported by Apache Beam, but there are people working on it. To be specific: the extension library in Apache Beam you mentioned depends on this HyperLogLog implementation. The sketches produced by this library are not consistent with the sketches computed by Google Cloud BigQuery, so it wouldn't make sense to merge the sketches in BigQuery.

A: Since this question was first asked in April 2019, a BigQuery-compatible implementation of the HLL sketch has been released, as noted in the GCP blog post "Using HLL++ to speed up count-distinct in massive datasets". The post has illustrative code snippets showing how to save the HLL sketches to BigQuery as well as to GCS files.
Quoting the relevant parts of the post:

[The Google implementation of HyperLogLog] was added to BigQuery in 2017 and has recently been open sourced and made directly available in Apache Beam as of version 2.16. That means it's available for use in Cloud Dataflow ...

Note: As of version 2.16, there are several implementations of approximate count algorithms. We recommend the use of HllCount.java, especially if you need sketches and/or need compatibility with Google Cloud BigQuery.

From section 3 of the post, "Storing the sketches in BigQuery":

BigQuery supports HLL++ via the HLL_COUNT functions, and BigQuery's sketches are fully compatible with Beam's, so it's easy to interoperate with sketch objects across both systems. In the example below we will: 1. Pre-aggregate data into sketches in Beam; 2. Store the sketches in BigQuery as byte[] columns along with some metadata about the time interval; 3. Run a rollup query in BigQuery, which can extract the results at interactive speed, thanks to the sketches that were pre-computed in Beam.
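To illustrate why storing raw sketch bytes only helps if both sides agree on the format, here is a toy HyperLogLog sketch in Python. It is NOT compatible with BigQuery's HLL++ or with the Beam library above; it only demonstrates the core property the question relies on: merging two sketches is an element-wise max of their registers, which is exactly equivalent to sketching the union directly.

```python
import hashlib

P = 14          # precision: 2**14 registers
M = 1 << P

def _hash(item):
    # 64-bit hash derived from SHA-1, for illustration only
    return int.from_bytes(hashlib.sha1(str(item).encode()).digest()[:8], "big")

def add(registers, item):
    h = _hash(item)
    idx = h & (M - 1)                        # low P bits pick a register
    rest = h >> P
    rank = (64 - P) - rest.bit_length() + 1  # leading zeros in the rest, plus 1
    registers[idx] = max(registers[idx], rank)

def merge(a, b):
    # HLL merge: element-wise max of the two register arrays
    return [max(x, y) for x, y in zip(a, b)]

r1, r2 = [0] * M, [0] * M
for i in range(1000):
    add(r1, i)
for i in range(500, 1500):
    add(r2, i)

merged = merge(r1, r2)

# Sketching the union directly yields the same registers: merging is lossless.
r_union = [0] * M
for i in range(1500):
    add(r_union, i)

print(merged == r_union)  # True
```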
doc_23535291
create table #temptable (userid int, UserName nvarchar(50), WardId bigint, ZoneId bigint, WardName nvarchar(255))

insert into #temptable exec GetWardWiseHierarchyUser @wardid --354

SELECT distinct WardId FROM #temptable

declare @StringWardId nvarchar(max)
select @StringWardId = (select stuff((select ',' + cast(wardId as nvarchar(50)) from #temptable FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'), 1, 1, ''))

This SP gives the following output:

WardId
10054
10056
10057
10058

But I want the output like this:

WardId
10054,10056,10057,10058

Please help me solve this problem.

A: COALESCE just replaces the NULL on the first iteration with '', so the list builds up without a leading separator (note the cast, since WardId is a bigint, not a string):

DECLARE @WardId VARCHAR(8000)
SELECT @WardId = COALESCE(@WardId + ', ', '') + CAST(WardId AS VARCHAR(50)) FROM tbl
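Whichever SQL mechanism is used (the FOR XML PATH/STUFF trick or the COALESCE variable build-up), the operation being asked for is simply joining the distinct ward ids with commas. As an illustrative sketch in Python, just to show the target shape of the result:

```python
# The rollup the question asks for: one comma-separated string
# instead of one ward id per row.
ward_ids = [10054, 10056, 10057, 10058]

string_ward_id = ",".join(str(w) for w in ward_ids)
print(string_ward_id)  # 10054,10056,10057,10058
```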
doc_23535292
My browser gives me a 404: https://i.imgur.com/to38Sb2.png Can you help me? This is my code:

<?php
$url = "https://www.instagram.com/asdsdfsvxd"; //This user doesn't exist
echo $url;
$file_headers = @get_headers($url);
echo $file_headers[0];
if($file_headers[0] !== 'HTTP/1.1 404 Not Found') {
    echo "exists";
} else {
    echo "not exists";
}
?>

A: Solution: don't forget the trailing / (slash). Without it, the first response is most likely a redirect to the slash URL, so $file_headers[0] holds the redirect's status line rather than the 404.

$url = "https://www.instagram.com/asdsdfsvxd/";

Output:

$ php test.php
https://www.instagram.com/asdsdfsvxd/HTTP/1.1 404 Not Foundnot exists
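Comparing against the literal string 'HTTP/1.1 404 Not Found' is fragile in general: the server might answer with a different protocol version, a different reason phrase, or a redirect first. A more robust idea is to extract the numeric status code from the status line; a small Python sketch of that idea (the `parse_status` helper is ours, not a PHP or Python stdlib function):

```python
def parse_status(status_line: str) -> int:
    """Extract the numeric status code from an HTTP status line
    such as 'HTTP/1.1 404 Not Found' or 'HTTP/2 301'."""
    parts = status_line.split()
    if len(parts) < 2 or not parts[1].isdigit():
        raise ValueError(f"not an HTTP status line: {status_line!r}")
    return int(parts[1])

print(parse_status("HTTP/1.1 404 Not Found"))  # 404
print(parse_status("HTTP/2 301"))              # 301
```

Checking `parse_status($file_headers[0]) == 404` (or its PHP equivalent) would work regardless of protocol version or reason phrase.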
doc_23535293
2021-08-05 15:57:16.427 29800-30293/? E/AndroidRuntime: FATAL EXCEPTION: mqt_native_modules
Process: com.shuket.worldmart.bakchon, PID: 29800
java.lang.SecurityException: getLine1NumberForDisplay: Neither user 11034 nor current process has android.permission.READ_PHONE_STATE, android.permission.READ_SMS, or android.permission.READ_PHONE_NUMBERS
    at android.os.Parcel.createExceptionOrNull(Parcel.java:2385)
    at android.os.Parcel.createException(Parcel.java:2369)
    at android.os.Parcel.readException(Parcel.java:2352)
    at android.os.Parcel.readException(Parcel.java:2294)
    at com.android.internal.telephony.ITelephony$Stub$Proxy.getLine1NumberForDisplay(ITelephony.java:10831)
    at android.telephony.TelephonyManager.getLine1Number(TelephonyManager.java:5349)
    at android.telephony.TelephonyManager.getLine1Number(TelephonyManager.java:5317)
    at com.learnium.RNDeviceInfo.RNDeviceModule.getPhoneNumberSync(Unknown Source:66)
    at com.learnium.RNDeviceInfo.RNDeviceModule.getPhoneNumber(Unknown Source:0)
    at java.lang.reflect.Method.invoke(Native Method)
    at com.facebook.react.bridge.JavaMethodWrapper.invoke(Unknown Source:147)
    at com.facebook.react.bridge.JavaModuleWrapper.invoke(Unknown Source:21)
    at com.facebook.react.bridge.queue.NativeRunnable.run(Native Method)
    at android.os.Handler.handleCallback(Handler.java:938)
    at android.os.Handler.dispatchMessage(Handler.java:99)
    at com.facebook.react.bridge.queue.MessageQueueThreadHandler.dispatchMessage(Unknown Source:0)
    at android.os.Looper.loop(Looper.java:246)
    at com.facebook.react.bridge.queue.MessageQueueThreadImpl$4.run(Unknown Source:37)
    at java.lang.Thread.run(Thread.java:923)
doc_23535294
= Table.AddColumn(#"Changed Type", "Sumif", each if [Column2] = 2 or [Column2] = 1 then [Column3] + [Column4] else 0)

let
    Source = Folder.Files... #"C:\Users...
    #"Imported Excel" = Excel.Workbook(#"C:\...
    SegPL_Chart = #"Imported Excel"{[Name="SegPL_Chart"]}[Data],
    #"Removed Top Rows" = Table.Skip(SegPL_Chart, 12),
    #"Removed Alternate Rows" = Table.AlternateRows(#"Removed Top Rows", 1, 1, 90),
    #"Promoted Headers" = Table.PromoteHeaders(#"Removed Alternate Rows"),
    #"Filtered Rows" = Table.SelectRows(#"Promoted Headers", each ([Col1] = "1" or [Col1] = "2")),
    #"Table Group = Table.Group(#"Filtered Rows", {}, List.TransformMany(Table.ColumnNames(#"Filtered Rows",(x)=>{each if x = "Names" then "Totals" else List.Sum(Table.Column(_,x))},(x,y)=>{x,y})),
    #"append" = Table.Combine({#"Filtered Rows", #"Table Group"})
in
    #"append"

It gives an error: "in" Token comma needed..? What else do I need to do to bring in the total rows?

A: You can use several steps to create several helper columns with intermediate results of conditional sums. Then you can create a new column, sum up all the intermediate results, and delete the helper columns with the intermediate results. Keep in mind that unlike Excel, calculations in Power Query always return constants, and you can then delete calculated columns you no longer need. So:

* Create helper column 1 with the complicated IF-and-sum scenario
* Create helper column 2 with the complicated IF-and-sum scenario
* Create a total column that adds column 1 + column 2
* Delete the helper columns and keep only the total column

A: This gives me exactly the result I was looking for, but it is a DAX formula in Power Pivot:

=SUMX(FILTER('TableName', [ColName] = 1), 'TableName'[ColName2])

So I would be glad to convert it to a Power Query formula.
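The intent of both the M "Sumif" column and the DAX SUMX/FILTER expression is a conditional sum: filter rows on one column, then total values from the others. As a language-neutral illustration in Python (column names and data are invented for the example):

```python
# Rows kept where Col1 is 1 or 2, then two value columns summed per
# row and totalled - the same shape as the Sumif/SUMX logic above.
rows = [
    {"Col1": 1, "Col3": 10, "Col4": 5},
    {"Col1": 2, "Col3": 20, "Col4": 5},
    {"Col1": 3, "Col3": 99, "Col4": 1},  # excluded by the filter
]

total = sum(r["Col3"] + r["Col4"] for r in rows if r["Col1"] in (1, 2))
print(total)  # 40
```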
doc_23535295
var sheetController = showBottomSheet(
    context: context,
    builder: (context) => BottomSheet(
        onClosing: () {},
        builder: (BuildContext context) {
          return Container(
              width: 100.0,
              child: FlatButton(
                  onPressed: () {
                    Navigator.pop(context, data);
                  }));
        }));

sheetController.closed.then((value) async {
  print(value);
});

I want to get data, but it prints null.

A: Change your code to this:

var sheetController = showBottomSheet(
    context: context,
    builder: (context) => displayBottomSheet(number));

sheetController.closed.then((value) async {
  await print(value);
});
doc_23535296
$types = DB::table('cemetery_charge_types')
    ->join('payment_details', 'invoices.id', '=', 'payment_details.payable_id')
    ->join('invoices', 'cemetery_charge_types.id', '=', 'invoices.charge_type_id')
    ->select('cemetery_charge_types.*', DB::raw('SUM(payment_details.amount) as amount'), 'invoices.*')
    ->whereIn('payment_details.payment_id', $payments->pluck('id'))
    ->get();

I have made sure all of the column names are correct, along with their tables, but I get the following error: Column not found: 1054 Unknown column 'invoices.id' in 'on clause'.... I have a table with charge types, which can have multiple invoices, which in turn can have multiple payment_details.

A: You need to change the order of your joins: the first join's ON clause references invoices.id before invoices has been joined, and joins are resolved left to right. Like this:

$types = DB::table('cemetery_charge_types')
    ->join('invoices', 'cemetery_charge_types.id', '=', 'invoices.charge_type_id')
    ->join('payment_details', 'invoices.id', '=', 'payment_details.payable_id')
    ->select('cemetery_charge_types.*', DB::raw('SUM(payment_details.amount) as amount'), 'invoices.*')
    ->whereIn('payment_details.payment_id', $payments->pluck('id'))
    ->get();
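The rule MySQL enforces here can be sketched as a small check: each ON clause may only reference tables that are already part of the join at that point. This is purely illustrative Python (not Laravel code); the join lists mirror the query in the question:

```python
# Minimal check of the rule behind the error: an ON clause may only
# reference tables that have already been joined (left to right).
def first_invalid_join(base_table, joins):
    """joins: list of (new_table, [tables referenced in ON clause])."""
    available = {base_table}
    for new_table, referenced in joins:
        available.add(new_table)
        missing = [t for t in referenced if t not in available]
        if missing:
            return new_table, missing
    return None

# Original order from the question: payment_details is joined first,
# but its ON clause references invoices, which is not joined yet.
original = [
    ("payment_details", ["invoices", "payment_details"]),
    ("invoices", ["cemetery_charge_types", "invoices"]),
]
print(first_invalid_join("cemetery_charge_types", original))
# → ('payment_details', ['invoices'])

# Corrected order from the answer: every ON clause only references
# tables already available, so no "unknown column" error.
fixed = [
    ("invoices", ["cemetery_charge_types", "invoices"]),
    ("payment_details", ["invoices", "payment_details"]),
]
print(first_invalid_join("cemetery_charge_types", fixed))  # → None
```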
doc_23535297
[ { "userId":"user123", "name":"John", "card":{ "amount":1000.0, "sentMoneyList":[ { "creationDate":"2019-08-07T00:00:00.000+0000", "shopId":"merchant1", "loyaltyPoint":200, "amount":250 }, { "creationDate":"2019-01-07T00:00:00.000+0000", "shopId":"merchant2", "loyaltyPoint":100, "amount":99 } ], "receivedMoneyList":[ { "creationDate":"2019-09-07T00:00:00.000+0000", "amount":40 }, { "creationDate":"2019-03-07T00:00:00.000+0000", "amount":500 } ] } } ] I want to build a timeline of received and sent money of all users starting from a given date. In case of startDate is "2019-02-01T00:00:00.000+0000", the ouput of my request should be like this: [ { "userId":"user123", "name":"John", "card":{ "amount":1000.0, "sentMoneyList":[ { "creationDate":"2019-08-07T00:00:00.000+0000", "shopId":"merchant1", "loyaltyPoint":200, "amount":250 } ] } }, { "userId":"user123", "name":"John", "card":{ "amount":1000.0, "receivedMoneyList":[ { "creationDate":"2019-09-07T00:00:00.000+0000", "amount":40 } ] } }, { "userId":"user123", "name":"John", "card":{ "amount":1000.0, "receivedMoneyList":[ { "creationDate":"2019-03-07T00:00:00.000+0000", "amount":500 } ] } } ] Here the java code that tries to get this result: Criteria criteriaClient = new Criteria(); MatchOperation matchOperation = match(criteriaClient.orOperator( Criteria.where("card.sentMoneyList.creationDate").gte(startDate), Criteria.where("card.receivedMoneyList.creationDate").gte(startDate))); UnwindOperation unwindSent = Aggregation.unwind("card.sentMoneyList"); UnwindOperation unwindReceived = Aggregation.unwind("card.receivedMoneyList"); Aggregation aggregation = Aggregation.newAggregation(unwindSent, unwindReceived, matchOperation); List<UserDTO> result = mongoTemplate.aggregate( aggregation, "users", UserDTO.class).getMappedResults(); It gives an empty List. what is missing in the query in order to get the result above ? 
Thanks

A: You can achieve the expected output with $facet, which lets you categorize the incoming data. Here I collect the sentMoneyList entries into a sentMoney array and the receivedMoneyList entries into receivedMoney, then concatenate and unwind them to produce the output.

public List<Object> test() {
    Aggregation aggregation = Aggregation.newAggregation(
        facet(
            p -> new Document("$project",
                new Document("card.receivedMoneyList", 0)
            ),
            a -> new Document("$addFields",
                new Document("card.sentMoneyList",
                    new Document("$filter",
                        new Document("input", "$card.sentMoneyList")
                            .append("cond",
                                new Document("$gte", Arrays.asList("$$this.creationDate", "2019-02-01T00:00:00.000+0000"))
                            )
                    )
                )
            ),
            unwind("$card.sentMoneyList")
        ).as("sentMoney").and(
            p -> new Document("$project",
                new Document("card.sentMoneyList", 0)
            ),
            a -> new Document("$addFields",
                new Document("card.receivedMoney",
                    new Document("$filter",
                        new Document("input", "$card.receivedMoney")
                            .append("cond",
                                new Document("$gte", Arrays.asList("$$this.creationDate", "2019-02-01T00:00:00.000+0000"))
                            )
                    )
                )
            ),
            unwind("$card.receivedMoney")
        ).as("receivedMoney"),
        p -> new Document("$project",
            new Document("combined",
                new Document("$concatArrays", Arrays.asList("$sentMoney", "$receivedMoney"))
            )
        ),
        unwind("$combined"),
        replaceRoot("combined")
    ).withOptions(AggregationOptions.builder().allowDiskUse(Boolean.TRUE).build());

    return mongoTemplate.aggregate(aggregation, mongoTemplate.getCollectionName(Users.class), Object.class).getMappedResults();
}

First, I suggest you use Object.class to get the aggregated result and return it as a List<Object>. If that works fine, then you can convert this model to UserDTO.class, which should be structured the same as the output. You have hard-coded the target collection users, which is not good practice; use mongoTemplate.getCollectionName(YOUR_TARGET_COLLECTION.class) instead.

Note: I've not tried this code, but it is written based on a working Mongo playground.
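The core of the $filter stages above is keeping only the sub-array entries whose creationDate is at or after the start date. The same selection in plain Python, using a list of dicts as a stand-in for the documents (fixed-width ISO-8601 timestamps with the same offset compare correctly as strings, which is also why $gte works on them here):

```python
# Keep only money-list entries at or after the start date, mirroring
# the $filter cond ($gte on creationDate) used in the aggregation.
start_date = "2019-02-01T00:00:00.000+0000"

sent_money_list = [
    {"creationDate": "2019-08-07T00:00:00.000+0000", "amount": 250},
    {"creationDate": "2019-01-07T00:00:00.000+0000", "amount": 99},
]

kept = [e for e in sent_money_list if e["creationDate"] >= start_date]
print(kept)  # only the 2019-08-07 entry survives
```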
doc_23535298
http://geoss.compusult.net/wes/serviceManagerCSW/csw?request=GetCapabilities&service=CSW

How would I create the proxy classes for the service? More information HERE.

EDIT #1: The hosting service above is implementing an OGC standard (CSW). The schemas for this standard are hosted HERE. And they have some WSDLs HERE. If I were to place the URL of one of those WSDLs into Visual Studio's "add reference/service", I would get a list of web operations and it would generate a reference. However, that will not work because it does not know about the true hosting provider. So I'm not quite sure what to do.

EDIT #2: This is what it generated:

EDIT #3: Following John Saunders's comment to check for errors, I got the following:

Custom tool error: Failed to generate code for the service reference 'ServiceReference1'. Please check other error and warning messages for details. D:\temp\WebApplication2\WebApplication2\Service References\ServiceReference1\Reference.svcmap 1 1 WebApplication2

So I checked the warnings and I saw a few warnings similar to this:

Warning 1 Custom tool warning: Cannot import wsdl:portType Detail: An exception was thrown while running a WSDL import extension: System.ServiceModel.Description.XmlSerializerMessageContractImporter Error: Schema with target namespace 'http://www.opengis.net/cat/csw/2.0.2' could not be found. XPath to Error Source: //wsdl:definitions[@targetNamespace='http://www.opengis.net/cat/csw/2.0.2/requests']/wsdl:portType[@name='csw'] D:\temp\WebApplication2\WebApplication2\Service References\ServiceReference1\Reference.svcmap 1 1 WebApplication2

And these are the same types of warnings I've been getting when doing anything with these schemas in .NET. Aren't schemas supposed to work with any language? Here are some examples of my heartache with .NET and these schemas: HERE HERE HERE

So I'm not sure whether to blame the schemas or .NET for not being able to deal with such large schemas.
Even though Marc and Basiclife both provided answers that would normally have worked on "normal" schemas, John is getting the answer because he helped me troubleshoot it when I did not think it was a troubleshooting issue, but rather something I was missing. I should have known; even with these schemas, I thought creating the client wouldn't be as big of a deal.

A: Right-click on your project, and go to Add Service Reference. Click Advanced at the bottom-left, then Add Web Reference at the bottom left again. When you put in the URL, it will look up the available services, which you can select and give a name for within your project.

Edit: Once you have the wsdl imported, you can change its base address via your app config.

Edit 2: I am also used to WCF services :-) To change the URL, right-click on your reference, go to Properties, and change the Web Reference URL to whatever you need.

Edit 3: When I add the reference, the following warning message appears in my Error List:

Warning 1 Custom tool warning: DiscoCodeGenerator unable to initialize code generator. No code generated. s:\dev\Sandbox\Sandbox\Web References\net.opengis.schemas\Reference.map

This is why Reference.cs is empty.

A: Right-click the project, select Add Service Reference. Enter the URL provided...

EDIT: Add the reference to the WSDL. This is what VS needs to create the proxies. Once the proxies have been created, you can edit the URL it actually uses to access the service in the web.config / app.config file for your application.

A: Use the actual WSDLs to generate the service reference. Then, when you open the proxy class, use the constructor overload that includes an EndpointAddress. That will include the URL of the actual service. The URL in the WSDL is only a hint.
doc_23535299
def run
  @entries.keep_if { |i| valid_entry?(i) }.each do |e|
    begin
      unique_id = get_uniqueid e
      cdr_record = Cdr.find_by_uniqueid(unique_id).first
      recording = cdr_record.nil? ? NullAsteriskRecording.new : AsteriskRecording.new(cdr_record, e)
      recording.set_attributes
      recording.import
    rescue Exception => e
      fail_status
    end
  end
end

fail_status is a private method that updates the instance variable to :failed. Through breaking some other things, I've basically verified this code works, but I want a test in place as well. Currently, I've got the following:

context "in which an exception is thrown" do
  before do
    @recording = double("asterisk_recording")
    @recording.stub(:import).and_raise("error")
  end

  it "should set #status to :failed" do
    # pending "Update instance variable in rescue block(s) of #run"
    subject.run
    subject.status.should eq :failed
  end
end

But the test always fails. The rescue block is never evaluated (I checked with a puts statement that would be evaluated when I hardcoded in a raise statement). Am I using the double feature wrong here? Or am I doing myself in by stubbing out an exception, so the rescue block never gets run?

A: You set up @recording in your before block, but the code you have posted for your run method will not use that @recording instance, and therefore the call to recording.import in the run method will not raise an exception. In your run method, recording can end up being either an instance of NullAsteriskRecording or an AsteriskRecording. If you know that it is going to be an AsteriskRecording, as your current test implies, one approach would be to change your before block to the following:

before do
  AsteriskRecording.any_instance.stub(:import).and_raise("error")
end
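The fix in the answer hinges on stubbing the method on whatever instance run constructs, rather than on a detached double that run never touches. The same idea in Python with unittest.mock (the Recorder class and all names here are invented for illustration; patching the class attribute plays the role of RSpec's any_instance):

```python
from unittest import mock

# Stand-in for AsteriskRecording: run() builds its own instance,
# so stubbing a detached object is never exercised.
class Recorder:
    def import_data(self):
        return "imported"

def run():
    status = "ok"
    try:
        Recorder().import_data()  # instance created inside run()
    except Exception:
        status = "failed"
    return status

# Detached double: run() never sees it, so no exception is raised.
detached = mock.Mock()
detached.import_data.side_effect = RuntimeError("error")
unpatched_status = run()
print(unpatched_status)  # "ok" - the stub was not the instance run() used

# Patch the class itself (the any_instance analogue): now the
# instance run() constructs raises, and the rescue path executes.
with mock.patch.object(Recorder, "import_data",
                       side_effect=RuntimeError("error")):
    patched_status = run()
print(patched_status)  # "failed"
```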