doc_23525600
|
Set-Mailbox <identity> -UserSMimeCertificate <MultiValuedProperty>
The problem is, I have the S/MIME certificate as a .pfx file. How do I convert the .pfx file to a <MultiValuedProperty>?
A: A pfx file is a PKCS#12 file. userSMIMECertificate is designed to hold a PKCS#7 signed message, which contains the public certificate but can also hold intermediate certificates as well as information about the client's cipher capabilities (hence it is multi-valued).
Because the content of userSMIMECertificate is a signed message, the private key is required to create the signature.
Please see this question and its answers for details.
You can use openssl to create such a signed message. To create a signed message, include some additional certificates and read the private key from another file:
openssl smime -sign -in in.txt -text -out mail.msg -signer mycert.pem -inkey mykey.pem -certfile mycerts.pem
To convert pfx to pem:
openssl pkcs12 -in mykey.pfx -out mykey.pem
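If you want to sanity-check the conversion end to end, here is a rough sketch (all file names are placeholders, and a throwaway self-signed certificate stands in for your real one):

```shell
# Create a throwaway key + self-signed cert (placeholder subject).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=test" \
    -keyout mykey_only.pem -out mycert_only.pem -days 1
# Bundle them into a PKCS#12 (.pfx) file, like the one you already have.
openssl pkcs12 -export -inkey mykey_only.pem -in mycert_only.pem \
    -out mykey.pfx -passout pass:secret
# Convert the .pfx back to PEM; -nodes leaves the private key unencrypted.
openssl pkcs12 -in mykey.pfx -out mykey.pem -nodes -passin pass:secret
```

The resulting mykey.pem should contain both a private-key block and a certificate block, which is what the smime -sign invocation above needs.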
The Windows Certificate Manager (certmgr) may also be able to perform the conversion if you import the .pfx (check "allow re-exporting private key") and then export the private and public keys separately.
A PKCS#7 signed message may also be created using an email client. See above mentioned question and its answers for details.
| |
doc_23525601
|
For example, in the attached image, the deltas are between 0.015 and 0.13. The current scale doesn't show the real scenario, since all cell sizes are equal.
Is there a way to place the ticks in their realistic positions, such that cell sizes would also change accordingly?
Alternatively, is there another method to generate this figure such that it would provide a realistic representation of the tick values?
A: As mentioned in the comments, a Seaborn heatmap uses categorical labels. However, the underlying structure is a pcolormesh, which can have different sizes for each cell.
As also mentioned in the comments, updating the private attributes of the pcolormesh isn't recommended. Moreover, the heatmap can be created directly by calling pcolormesh.
Note that if there are N cells, there will be N+1 boundaries. The example code below assumes you have x-positions for the centers of the cells; it then calculates boundaries midway between successive cells, repeating the first and last spacings for the outer edges.
The ticks and tick labels for the x and y axes can be set from the given x-values. The example code assumes the original values indicate the centers of the cells.
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
sns.set()
N = 10
xs = np.random.uniform(0.015, 0.13, N).cumsum().round(3)  # some random x values
values = np.random.rand(N, N) # a random matrix
# set bounds in the middle of successive cells, add extra bounds at start and end
bounds = (xs[:-1] + xs[1:]) / 2
bounds = np.concatenate([[2 * bounds[0] - bounds[1]], bounds, [2 * bounds[-1] - bounds[-2]]])
fig, ax = plt.subplots()
ax.pcolormesh(bounds, bounds, values)
ax.set_xticks(xs)
ax.set_xticklabels(xs, rotation=90)
ax.set_yticks(xs)
ax.set_yticklabels(xs, rotation=0)
plt.tight_layout()
plt.show()
PS: In case the ticks are meant to be the boundaries, the code can be simplified. One extra boundary is needed, for example a zero at the start:
bounds = np.concatenate([[0], xs])
ax.tick_params(bottom=True, left=True)
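As a standalone sanity check (with made-up center positions), the boundary construction above can be verified numerically:

```python
import numpy as np

# Three hypothetical cell centers; N centers should yield N+1 boundaries.
xs = np.array([1.0, 2.0, 4.0])
bounds = (xs[:-1] + xs[1:]) / 2  # midpoints between successive centers
# extrapolate the outer edges by repeating the first and last spacing
bounds = np.concatenate([[2 * bounds[0] - bounds[1]], bounds, [2 * bounds[-1] - bounds[-2]]])
print(bounds)  # [0.  1.5 3.  4.5]
```

The resulting array can be passed directly as the first two arguments of pcolormesh, so each cell gets its true width.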
| |
doc_23525602
|
Until now, I was following this approach:
https://vikrammnit.wordpress.com/2016/03/28/facing-navigation-drawer-item-onclick-lag/
So I navigate in onDrawerClosed instead of in onNavigationItemSelected to avoid the glitch.
This has been a very common issue, and it is back again: using the Navigation Component, navigation is laggy again, and I don't see a way to implement it in onDrawerClosed.
These are some older answers from before the Navigation Component:
Navigation Drawer lag on Android
DrawerLayout's item click - When is the right time to replace fragment?
Thank you very much.
A: I was tackling this issue as I wrote this answer. After some testing, I concluded that code executed in the fragment right after it is created (like initializing a RecyclerView adapter and populating it with data, or configuring the UI) causes the drawer to lag, since it all happens at the same time.
The best idea I have is similar to some older solutions that rely on onDrawerClosed: delay the execution of the fragment's code until the drawer has closed. The fragment's layout becomes visible before the drawer is closed, so it will still look fast and responsive.
Note that I'm also using navigation component.
First, we are going to create an interface and implement it in our fragments.
interface StartFragmentListener {
fun configureFragment()
}
In activity setup DrawerListener like:
private fun configureDrawerStateListener(){
psMainNavDrawerLayout.addDrawerListener(object: DrawerLayout.DrawerListener{
override fun onDrawerStateChanged(newState: Int) {}
override fun onDrawerSlide(drawerView: View, slideOffset: Float) {}
override fun onDrawerOpened(drawerView: View) {}
override fun onDrawerClosed(drawerView: View) {
notifyDrawerClosed()
}
})
}
To notify a fragment that the drawer has been closed and it can do operations that cause lag:
private fun notifyDrawerClosed(){
val currentFragment =
supportFragmentManager.findFragmentById(R.id.psMainNavHostFragment)
?.childFragmentManager?.primaryNavigationFragment
    if (currentFragment is StartFragmentListener)
        currentFragment.configureFragment()
}
In case you are not navigating to the fragment from the drawer (for example, when pressing the back button), you also need to notify the fragment to do its work. We will register a FragmentLifecycleCallbacks listener:
private fun setupFragmentLifecycleCallbacksListener(){
supportFragmentManager.findFragmentById(R.id.psMainNavHostFragment)
?.childFragmentManager?.registerFragmentLifecycleCallbacks(object : FragmentManager.FragmentLifecycleCallbacks() {
override fun onFragmentActivityCreated(fm: FragmentManager, f: Fragment, savedInstanceState: Bundle?) {
super.onFragmentActivityCreated(fm, f, savedInstanceState)
if (!psMainNavDrawerLayout.isDrawerOpen(GravityCompat.START)) {
if (f is StartFragmentListener)
f.configureFragment()
}
}
}, true)
}
In fragment:
class MyFragment: Fragment(), MyActivity.StartFragmentListener {
private var shouldConfigureUI = true
...
override fun onDetach() {
super.onDetach()
shouldConfigureUI = true
}
override fun configureFragment() {
if(shouldConfigureUI){
shouldConfigureUI = false
//do your things here, like configuring UI, getting data from VM etc...
configureUI()
}
}
}
A similar solution could be implemented with a shared view model.
A: Avoid the lag caused while changing Fragment/Activity in onNavigationItemSelected (Android)
The Navigation Drawer is one of the most common options used in applications; when we have more than five options, we usually go for the navigation menu.
I have seen in many applications that changing an option from the navigation menu visibly lags. Some people on Stack Overflow recommend using a Handler, like the code below:
private void openDrawerActivity(final Class className) {
new Handler().postDelayed(new Runnable() {
@Override
public void run() {
ProjectUtils.genericIntent(NewDrawer.this, className, null, false);
}
}, 200);
}
But with the above code it is still not smooth. I wondered why we add a Handler and whether there is another solution; after a lot of R&D, I figured out that we need to change the fragment/activity when the drawer is about to close.
For more details and a full solution, see https://android.jlelse.eu/avoid-the-lag-caused-while-changing-fragment-activity-onnavigationitemselected-android-28bcb2528ad8. It is really helpful.
Hope you find a better solution in it!
| |
doc_23525603
|
If I was using a framework like React or Ember, the rendered template would automatically be inserted into the DOM (in the least invasive way due to virtual DOM diffing). But these libraries just give me a string of HTML. How do I use that?
Is it as simple as finding the desired parent DOM node and setting innerHTML? Is there a DOM diffing library not tied into React that I can use?
[edit] If I rerender the text and it's the same, inserting into the DOM should ideally be idempotent and not even disturb event handlers.
A:
...I have the string of several nested elements already, and want to add it to the DOM.
You can use the .innerHTML property, but it has problems. A better alternative is a similar method, .insertAdjacentHTML(). It doesn't have the problems that innerHTML has, it's faster, and it gives you options to place your string before/after the element or to prepend/append it inside the element.
Signature
element.insertAdjacentHTML(position, text);
position determines where the text goes relative to the element. It must be one of the following values:
*beforebegin*// <== insert before the element
<element>
*afterbegin*// <== insert before the element's content (prepend)
<child>Content</child>
<child>Content</child>
<child>Content</child>
<child>Content</child>
Content
*beforeend*// <== insert after the element's content (append)
</element>
*afterend* // <== insert after the element
Snippet
html,
body {
height: 100%;
width: 100%;
background: black;
font: 400 12px/1.2 Consolas;
}
main {
height: auto;
width: 90vw;
border: 3px dashed red;
background: black;
color: white;
}
section {
height: auto;
width: 100%;
border: 2px dotted white;
background: rgba(181, 111, 0, .6);
}
div {
border: 1px solid white;
background: rgba(255, 30, 30, .3);
}
fieldset {
display: table-row;
width: 90%;
}
.bb {
height: 30px;
color: gold;
border-color: gold;
}
.ab {
height: 30px;
color: lightgreen;
border-color: lightgreen;
}
.be {
height: 30px;
color: #0022ef;
border-color: #0022ef;
}
.ae {
height: 30px;
color: violet;
border-color: violet;
}
<!doctype html>
<html>
<head>
<link href='style.css' rel='stylesheet'>
</head>
<body>
<header>
<fieldset>
<button onclick='bb()'>main beforebegin</button>
<button onclick='ab()'>main afterbegin</button>
<button onclick='be()'>main beforeend</button>
<button onclick='ae()'>main afterend</button>
</fieldset>
</header>
<main id='core' class='topic'>
<article class='category'>
<section id='I'>
<p>CONTENT</p>
<p>CONTENT</p>
<p>CONTENT</p>
<p>CONTENT</p>
</section>
<section id='1I'>
<p>CONTENT</p>
<p>CONTENT</p>
<p>CONTENT</p>
<p>CONTENT</p>
</section>
</article>
<article class='category'>
<section id='III'>
<p>CONTENT</p>
<p>CONTENT</p>
<p>CONTENT</p>
<p>CONTENT</p>
</section>
</article>
</main>
<footer class='footer'>
</footer>
<script>
function bb() {
document.querySelector('main').insertAdjacentHTML('beforebegin', '<div class="bb">This div has been inserted at position beforebegin</div>');
}
function ab() {
document.querySelector('.topic').insertAdjacentHTML('afterbegin', '<div class="ab">This div has been inserted at position afterbegin</div>');
}
function be() {
document.querySelector('#core').insertAdjacentHTML('beforeend', '<div class="be">This div has been inserted at position beforeend</div>');
}
function ae() {
document.querySelector('main#core.topic').insertAdjacentHTML('afterend', '<div class="ae">This div has been inserted at position afterend</div>');
}
</script>
</body>
</html>
| |
doc_23525604
|
I'm sure there is a simple way to do this; it's just a pain, as I need to ensure the page URL doesn't change, e.g. details.aspx stays as-is but can show either the current content or previously versioned content.
cheers in advance
A: Views are supposed to be "stupid" (and have a single responsibility)
I don't think you fully understand the ASP.NET MVC conceptual model. Views are supposed to be stupid and contain only as much logic as strictly needed. Divide and conquer is the rule here: if you have two different views of particular data, you build two customised views as well.
The controller is supposed to be the smart guy here, so giving views the power to make decisions isn't the right way of doing it.
The decision is almost certainly based on application model state anyway, so it's up to the controller to decide which view to display and to provide the correct model for that particular view.
It's nothing uncommon to return various views from the same controller action. It's not written in stone that each controller action should have exactly one view; that way we'd get bloated views with too much code, making them unmaintainable and blowing the whole separation of concerns of the MVC model.
So when you'd like to return particular view from your controller you can always provide its name when returning from the controller action:
return View("ViewName", model);
I suggest you analyse and refactor your process.
A: Can you create a "parent" View Model instead which contains both contentModel and revertedContent objects? Send this new View Model to the view, and check whether the revertedContent member is null.
public class ParentViewModel
{
public contentModel content { get; set; }
public revertedContent reverted { get; set; }
}
Then the view
<%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<YourNamespace.ParentViewModel>" %>
...
<% if (Model.reverted == null) { %>
//do regular content here
<% } else { %>
//do reverted content here
<% } %>
A: David's answer is one direction. I'd look at doing something a little higher up the food chain--like make the controller pick the view as well as supplying a viewmodel.
A: I suggest changing your view model design to allow for this scenario: instead of having two unrelated view models, make sure both fit under the same type.
Only you will know which design makes sense for your application.
I'll take a blind guess and suggest you can always use ContentViewModel in your view. Have an IsRevertedInfo property in it that you can check in the view to display any extra information.
| |
doc_23525605
|
I have an interface
public interface InventoryRepository
{
List<Location> GetLocations();
List<InventoryItem> GetItems();
List<InventoryItem> GetItems(int LocationId);
HomeMadeItem GetHomeMadeItem(int ItemId);
StoreBoughtItem GetStoreBoughtItem(int ItemId);
Location GetLocation(int LocationId);
int SaveHomeMadeItem(HomeMadeItem item);
string DeleteItem(int id);
}
My MVC app is going to be happy as a pig in ... well you know, so long as this contract is satisfied. Woot, all is good.
So you see, I have a DeleteItem method that is implemented thusly:
public string DeleteItem(int ItemId)
{
using (var context = new InventoryEF())
{
var item = context.InventoryItems.Find(ItemId);
if (item != null)
{
try
{
context.InventoryItems.Remove(item);
context.SaveChanges();
return "";
}
catch (Exception e)
{
Console.WriteLine(e.ToString());
return e.InnerException.ToString();
}
}
else
{
return "Id not found for delete";
}
}
}
I can very easily devise a unit test that tests the return value on successful deletion or item not found. I do not know how to devise a unit test that demonstrates the correct return value for a thrown exception.
Moq was suggested. But my original question was not clear enough for me to be certain that this is the correct approach.
Does anyone have a suggestion as to how to devise this unit test?
A: using (var context = new InventoryEF())
That is a problem, since you are instantiating a dependency directly. You need to make the context injectable somehow (which is part of a different discussion).
Once InventoryEF is an injected dependency, you can mock any of the methods the context uses and make them throw through the Moq framework, so you can test your exception handling.
Something like this (please double-check the syntax):
var mockContext = new Mock<IInventoryEF>();
mockContext.Setup(c=>c.SaveChanges()).Throws<Exception>();
As pointed out in another reply, you can also use either the ExpectedException attribute or, if you use FluentAssertions, inline exception expectations.
A: If all you want is to catch a specific exception, and you aren't using MOQ already, I'd suggest the ExpectedExceptionAttribute:
[TestMethod]
[ExpectedException(typeof(IDNotFoundException))]
public void WhicheverTestMethod()
{
..
}
Where of course the type of exception can be whatever you want. I'm fairly sure you can check against the message as well.
In general, I'd also suggest not catching type Exception and returning the InnerException, because there may be no inner exception (in which case e.InnerException.ToString() itself throws a NullReferenceException), and you can't really test all possible exceptions because Exception is extendable.
| |
doc_23525606
|
I made the ajax call from frontend using the below code:
useEffect(() => {
fetch("/api/orderCreateWebhooks")
.then((response) => response.json())
.then(({ fact }) => console.log(fact))
.catch((error) => {
});
}, []);
And In my backend code, I have below code.
Route::get('/api/orderCreateWebhooks', function (Request $request) {
/** @var AuthSession */
$session = $request->get('shopifySession'); // Provided by the shopify.auth middleware, guaranteed to be active
$client = new Rest($session->getShop(), $session->getAccessToken());
$data = [
"webhook"=> [
"topic"=> "orders/create",
"address"=> "https://9918-110-44-127-202.ngrok.io/",
"format"=> "json"
]
];
$result = $client->post('/admin/api/2022-04/webhooks.json',$data);
return response($result->getDecodedBody());
})->middleware('shopify.auth:online');
With this ajax call, the request is instead redirected to the /api/auth?shop= URL with a 500 HTTP response code. Is there anything missing that needs to be done?
There is no well documentation on this from Shopify as well. Any help would be appreciated.
UPDATE:
After looking into the network requests, I found that the requests are being redirected with a 302 status code. Please find the screenshot below. It seems like an authentication issue. How can we authenticate the API requests we are making?
Thank you
A: It seems like you are missing authentication in your original code. You would need to provide an authorization token (Authorization: Bearer <JWT>) in your request, or pass the Shopify session in the query parameters.
useEffect(() => {
fetch("/api/orderCreateWebhooks")
.then((response) => response.json())
.then(({ fact }) => console.log(fact))
.catch((error) => {
});
}, []);
would become something like,
useEffect(() => {
fetch("/api/orderCreateWebhooks?shopifySession=",
{ method: 'get'
})
.then((response) => response.json())
.then(({ fact }) => console.log(fact))
.catch((error) => {
});
}, []);
Mentioned below is a general format of fetch requests
fetch('URL_GOES_HERE', {
method: 'post',
headers: new Headers({
'Authorization': 'Basic '+btoa('username:password'),
'Content-Type': 'application/x-www-form-urlencoded'
}),
body: 'A=1&B=2'
});
| |
doc_23525607
|
$(function() {
$( "#datepicker" ).datepicker({
onSelect: function(dateText, inst){
$("input[name='date']").val(dateText);}
});
} );
I am unable to store it either locally or into my database, or even alert it. I have received errors such as [object Object] and [object htmlinputelement].
I would really appreciate if someone can give me a step-by-step on how I can store the value of the datepicker.
The reference for my datepicker is: https://jqueryui.com/datepicker/
& I'm developing it on eclipse
A: You can use AJAX. Assuming you have a form with an input for the date whose id is, say, date:
var date = $('#date').val();
$.ajax({
type: 'post',
url: 'your_script.php',
data: {date: date},
success: function(response){
console.log(response);
}
})
You also need a server-side script.
UPDATE:
I've made you a fiddle that alerts the date after selecting it; it contains an HTML form with an input for the datepicker. LINK.
A simple way to store the value from the datepicker is to save it in your browser's local storage using JavaScript's localStorage, as shown in the fiddle.
A: Since you're using an ID to initialize datepicker, I'll assume there's only one instance of datepicker whose value you want.
You can just use val()
var datetime = $("#datepicker").val()
or
var datetime = document.getElementById("datepicker").value
| |
doc_23525608
|
public List<EntityA> getAllEntityAByAAndDPaginated(
Collection<Long> a,
Collection<Long> d,
int numberOfResults,
int page
) {
TypedQuery<EntityA> q = entityManager.createNamedQuery(EntityA.GET_BY_A_AND_D,
EntityA.class);
q.setParameter(A, a);
q.setParameter(D, d);
q.setMaxResults((page + 1) * numberOfResults * d.size()); // without this, it works!
return a != null && !a.isEmpty() && d != null && !d.isEmpty()
? q.getResultList() : new ArrayList<>();
}
For performance reasons, I am only loading (page + 1) * numberOfResults * d.size() results. But when executing this, I get a NullPointerException of the following form in the last line of the code snippet:
java.lang.NullPointerException
at java.base/java.util.ArrayList.<init>(ArrayList.java:179)
at org.eclipse.persistence.mappings.ForeignReferenceMapping.prepareNestedJoinQueryClone(ForeignReferenceMapping.java:2455)
at org.eclipse.persistence.mappings.OneToOneMapping.valueFromRowInternalWithJoin(OneToOneMapping.java:1814)
at org.eclipse.persistence.mappings.ForeignReferenceMapping.valueFromRow(ForeignReferenceMapping.java:2177)
at org.eclipse.persistence.mappings.ForeignReferenceMapping.buildCloneFromRow(ForeignReferenceMapping.java:341)
at org.eclipse.persistence.internal.descriptors.ObjectBuilder.buildAttributesIntoWorkingCopyClone(ObjectBuilder.java:2007)
at org.eclipse.persistence.internal.descriptors.ObjectBuilder.buildWorkingCopyCloneFromRow(ObjectBuilder.java:2260)
at org.eclipse.persistence.internal.descriptors.ObjectBuilder.buildObjectInUnitOfWork(ObjectBuilder.java:858)
at org.eclipse.persistence.internal.descriptors.ObjectBuilder.buildObject(ObjectBuilder.java:745)
at org.eclipse.persistence.internal.descriptors.ObjectBuilder.buildObject(ObjectBuilder.java:699)
at ...
According to the exception stack trace, it seems it tried somewhere to add null values to the result list. When I delete the line q.setMaxResults((page + 1) * numberOfResults * d.size());, it works fine, i.e. this call must somehow cause the NPE.
The database of EntityA does not contain any null values.
How can I fix this?
| |
doc_23525609
|
When I check in my console -> Network -> page.html:
Status Code is OK
Response Headers:
HTTP/1.1 200 OK
Pragma: no-cache
Refresh: 1; URL=http://balbla.com/page.html
Connection: Close
Content-Type: text/html
Then in the Response tab: <HTML></HTML>
My question is: how can I validate in the success function that data is not an empty HTML page before running the commands below? Otherwise, I will end up with a blank page.
$.ajax({
type: "POST",
url: 'http://balbla.com/page.html',
data: 'country=' + user_country,
success: function(data) {
$pageWrap.fadeOut(1000, function() {
$('link[href="css/main.css"]').attr('href', $css);
$pageWrap.hide().html(data).fadeIn(500);
$.getScript($scripts);
});
},
error: function(xhr, textStatus, error) {
alert('Error Fecthing page') ; }
});
Thanks
A: Do this:
success: function(data) {
if(data !=="<HTML></HTML>"){
$pageWrap.fadeOut(1000, function() {
$('link[href="css/main.css"]').attr('href', $css);
$pageWrap.hide().html(data).fadeIn(500);
$.getScript($scripts);
});
}
}
A: You can strip the HTML tags off your response string and then check its length.
The tag-stripping function is taken from: Strip HTML from Text JavaScript
function strip(html)
{
var tmp = document.createElement("DIV");
tmp.innerHTML = html;
return tmp.textContent || tmp.innerText || "";
}
Then in your ajax function:
success: function(data) {
    if (strip(data).length > 0) {
        // content is not empty
    }
}
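A DOM-free variant is also possible (a rough sketch; a regex is not a full HTML parser, but it is fine for a quick emptiness check and also works outside the browser):

```javascript
// Strip anything that looks like a tag, then trim whitespace.
// Caveat: a regex is not a robust HTML parser; use it only for simple checks.
function stripTags(html) {
  return html.replace(/<[^>]*>/g, "").trim();
}

console.log(stripTags("<HTML></HTML>").length); // 0 -> treat the page as empty
console.log(stripTags("<p>hello</p>"));         // "hello"
```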
| |
doc_23525610
|
I want to create a Dictionary of string, string, with "Id" as the key for each entry and the contents of the list as the value.
i.e.
myList= { "string1", "string2", ...etc }
and therefore
myDictionary = {{"Id1", "string1"}, {"Id2", "string2"}, ...etc}
I have been trying to create a dictionary using the List.ToDictionary method but to no avail
List.ToDictionary(Of String, String)("Id", Function(p) p.key)
Any help is much appreciated.
A: Try something like this (note that each key must be unique, so an index is appended to "Id"):
Dim list As New List(Of String) From {"string1", "string2"}
Dim pairs = list.Select(Function(s, i) New With {.Key = "Id" & (i + 1), .Value = s})
Dim dict As IDictionary(Of String, String) = pairs.ToDictionary(Function(p) p.Key, Function(p) p.Value)
A:
I want to create a Dictionary of string, string, with "Id" as the key
for each entry
This is impossible. Every entry in the dictionary must have a unique key.
| |
doc_23525611
|
I've set up WAMP to allow remote connections to my site so others can view it, but
the URIs don't work as intended for other people visiting the site.
When other people use the navbar on the site, the URI doesn't change, but the correct content displays.
E.g.:
For me: www.mysite.com/pencils
Others: www.mysite.com
I'm using .htaccess, but I don't think that is the problem; I believe the problem is inside WAMP somewhere.
But to be sure, here's my .htaccess
# remove .php
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME}\.php -f
RewriteRule ^(.*)$ $1.php
# always route to www
RewriteCond %{HTTP_HOST} ^mysite\.com
RewriteRule ^(.*)$ http://www.mysite.com/$1 [R=permanent,L]
Anyone know what may be the problem?
A: I've got it to work.
The problem was my web host (provider), which I used to forward my domain and DNS to my own server.
I got a different domain service (and a new domain), used its included DNS service to point to my server, changed the virtual hosts etc. in WAMP, and then it worked.
| |
doc_23525612
|
@Override
public RemoteViews getViewAt(int position)
{
if (mWidgetItems == null || position > mWidgetItems.size() - 1)
{
return getLoadingView();
}
WidgetUiUser user = mWidgetItems.get(position);
RemoteViews views = new RemoteViews(mContext.getPackageName(), R.layout.widget_grid_item);
views.setTextViewText(R.id.widgetLocation, user.getShortLocation());
views.setViewVisibility(R.id.widgetLocation, View.VISIBLE);
Intent srcIntent = ProfileActivity.createIntent(MyApplication.getInstance(), user, PageSourceHelper.Source.SOURCE_WIDGET);
srcIntent.setAction(IntentRoutingActivity.getUniqueIntentAction()); //required for this activity to work properly
views.setOnClickFillInIntent(R.id.widgetGridItemLayout, IntentRoutingActivity.createWidgetFillInIntent(srcIntent));
// Fetch bitmap synchronously
Bitmap bitmap = mImageFetcher.getBitmap(user.getImageUrl());
views.setImageViewBitmap(R.id.widgetImage, bitmap);
return views;
}
Everything works fine and all grid items get displayed correctly, providing the synchronous image load executes quickly.
However, if I run the app on a slow network where image downloads can take a few seconds, I quickly see that many of my grid items fail to load completely. I added some logcat output, and could see that the adapter fails to request the grid items that are not loading.
So for instance, my widget shows a 3x3 grid of users, and the entire GridView holds 80 or so. When working correctly, I see the adapter request around the first 20 grid items, and they all load correctly. When image loads are slow, I see the adapter request items 0, 3, 5, and 8 (for example, it varies) but no requests are made for any other items. If I start to scroll the GridView, more items start to get requested but many items still fail to request at all.
It seems that taking too much time in getViewAt() somehow breaks the display behaviour, despite the fact that the Android documentation says it is fine to perform long operations in this method. Has anyone seen this problem or know of a solution?
| |
doc_23525613
|
Anyways, I'm trying to create a form for Merchants to sign in. I have a MerchantSessionsController that tries to create a new session based on signin form input:
This is what I have in my
app/views/merchant_sessions/new.html.erb
<%= form_for(:merchant_session, :url => merchant_sessions_path) do |f| %>
<div class="field">
<%= f.label :userName %><br />
<%= f.text_field :userName %>
</div>
<div class="field">
<%= f.label :password %><br />
<%= f.password_field :password %>
</div>
<div class="actions">
<%= f.submit "Sign in" %>
</div>
<% end %>
The file app\controllers\merchant_sessions_controller.rb contains:
def create
merchant = Merchant.find_by_userName(params[:userName])
if merchant && merchant.authenticate(params[:password])
merchant_session[:merchant_id] = merchant.id
    redirect_to root_url, :notice => "Merchant has been logged in"
else
flash.now[:error] = "Invalid username or password."
@title = "Merchant Signin"
render "new"
end
end
Unfortunately, the params[:userName] and params[:password] keep getting passed as nil, even though on the debug output on the merchants signin page, I see that the userName and password are definitely being passed in.
--- !ruby/hash:ActiveSupport::HashWithIndifferentAccess
utf8: ✓
authenticity_token: 8WsOviJyY1kktPq9dDO+OFePdSKf2uGLY3Pnc4bU2tc=
merchant_session: !ruby/hash:ActiveSupport::HashWithIndifferentAccess
userName: asd
password: ddsad
commit: Sign in
action: create
controller: merchant_sessions
I've also tried accessing the params[:action] parameter, which worked fine. Why are the userName and password parameters nil? I had changed the name of the MerchantSessionsController (formerly just SessionsController), but I don't think that should be the problem.
A: You're looking in the wrong place for the username and password, note the specific structure of your YAML dump:
merchant_session: !ruby/hash:ActiveSupport::HashWithIndifferentAccess
userName: asd
password: ddsad
and your form:
<%= form_for(:merchant_session, :url => merchant_sessions_path) do |f| %>
You want to look at params[:merchant_session][:userName] and params[:merchant_session][:password] instead of params[:userName] and params[:password]:
def create
mparams = params[:merchant_session]
merchant = Merchant.find_by_userName(mparams[:userName])
if merchant && merchant.authenticate(mparams[:password])
#...
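The nesting can be reproduced in plain Ruby (no Rails needed; the hash below mimics the params dump above, using string keys where Rails would give you a HashWithIndifferentAccess that also accepts symbols):

```ruby
# Plain-Ruby sketch of the params structure produced by
# form_for(:merchant_session, ...) -- the fields are nested one level down.
params = {
  "merchant_session" => { "userName" => "asd", "password" => "ddsad" }
}

p params["userName"]                      # nil  (top level has no such key)
p params["merchant_session"]["userName"]  # "asd"
```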
| |
doc_23525614
|
The cURL request, copied from the Google developer tool:
curl \
'https://people.googleapis.com/v1/contactGroups?key=[YOUR_API_KEY]' \
--header 'Authorization: Bearer [YOUR_ACCESS_TOKEN]' \
--header 'Accept: application/json' \
--compressed
And my php cURL request:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "https://people.googleapis.com/v1/contactGroups");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'GET');
$headers = ["Authorization: Bearer {$access_token}", "Accept: application/json"];
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
$data = curl_exec($ch);
$result = json_decode($data, true);
var_dump($result); die();
The returned outputs of the requests are in the same format, just the content differs. Does anyone have experience trying to get a list of contact groups/labels from the people api?
| |
doc_23525615
|
I also followed the official link -> [Angular JS Eclipse Github setup link][1]
I installed "Angular JS" from Eclipse -> Help Menu -> Eclipse Marketplace.
It got installed and Eclipse restarted.
Then: Eclipse -> New -> Static Web Project.
Then I converted the project to Angular: right-click the project, Configure -> Convert to Angular JS project.
Then I added a new HTML file: right-click the project, New -> HTML file.
Simple code:
<!DOCTYPE html>
<html lang="en-US" ng-app>
<script src="http://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js"></script>
<body>
<div>
<p>Name : <input type="text" ng-model="name"></p>
<h1>Hello {{name}}</h1>
</div>
</body>
</html>
It works fine and runs in the browser well.
My problem is: why doesn't it show completion help when I type "ng-" and press Control+Space in Eclipse on my Mac? When I move the cursor over "ng", it shows help for the "div" tag rather than "ng" help. I want to just type "ng-" and get the list of available directives by pressing Control+Space.
I understand there may be some Angular installation problem, but none of the existing solutions solved my issue.
| |
doc_23525616
|
Is it possible to get the current page roles in my code (without parsing the yml file 'by hand')?
Thanks
A: It's a duplicate of: Symfony2 get to the access_control parameters located in the security.yml
use Symfony\Component\Yaml\Yaml;
$file = sprintf("%s/config/security.yml", $this->container->getParameter('kernel.root_dir'));
$parsed = Yaml::parse(file_get_contents($file));
$access = $parsed['security']['access_control'];
| |
doc_23525617
|
I have a HorizontalScroll view outlined in my main_activity.xml and I want to add LinearLayout containers to it with data, each time a user presses a button. So my scroll view will start with nothing, and each button push will add another entry to it.
I tried something with a separate layout for the containers to be added, that uses an inflater, but then saw indications that I should just define it all in the main activity layout. Maybe that was the correct route. Here is the pertinent part of my layout:
<HorizontalScrollView
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:id="@+id/ResultView">
<LinearLayout
android:layout_width="match_parent"
android:layout_height="match_parent"
android:orientation="horizontal"
android:gravity="center"
android:onClick="onShowResult"
android:clickable="true"
android:visibility="gone"
android:id="@+id/ResultContainer">
<!--successes-->
<LinearLayout
android:orientation="horizontal"
android:weightSum="2"
android:gravity="center"
android:layout_width="wrap_content"
android:layout_height="wrap_content">
<TextView
android:text="@string/success_label"
android:gravity="end"
android:layout_width="0dp"
android:layout_height="wrap_content"
android:id="@+id/Success_Label"
android:layout_weight="1"
android:clickable="false" />
<TextView
android:text=""
android:gravity="end"
android:layout_width="0dp"
android:layout_height="wrap_content"
android:id="@+id/Successes"
android:layout_weight="1"
android:clickable="false" />
</LinearLayout>
<!--tens-->
<LinearLayout
android:orientation="horizontal"
android:weightSum="2"
android:gravity="center"
android:layout_width="wrap_content"
android:layout_height="wrap_content">
<TextView
android:text="@string/tens_result_label"
android:gravity="end"
android:layout_width="0dp"
android:layout_height="wrap_content"
android:id="@+id/Tens_Result_Label"
android:layout_weight="1"
android:clickable="false" />
<TextView
android:text=""
android:gravity="end"
android:layout_width="0dp"
android:layout_height="wrap_content"
android:id="@+id/Tens_Result"
android:layout_weight="1"
android:clickable="false" />
</LinearLayout>
<!--ones-->
<LinearLayout
android:orientation="horizontal"
android:weightSum="2"
android:gravity="center"
android:layout_width="wrap_content"
android:layout_height="wrap_content">
<TextView
android:text="@string/ones_result_label"
android:gravity="end"
android:layout_width="0dp"
android:layout_height="wrap_content"
android:id="@+id/Ones_Result_Label"
android:layout_weight="1"
android:clickable="false" />
<TextView
android:text=""
android:gravity="end"
android:layout_width="0dp"
android:layout_height="wrap_content"
android:id="@+id/Ones_Result"
android:layout_weight="1"
android:clickable="false" />
</LinearLayout>
</LinearLayout>
And here is the code that I started playing with to implement it in my onClick for the button (the entries themselves will eventually be clickable). This is the only pertinent part of the onClick, as the rest is just doing calculations, not creating the view being added.
//create and insert results
//start by getting the result view we plan to insert to
HorizontalScrollView resultview = (HorizontalScrollView)findViewById(R.id.ResultView);
//pull up the container to insert
LinearLayout resultcontainer = (LinearLayout)findViewById(R.id.ResultContainer);
//get and set the results
//add the container to the resultsview
resultview.addView(resultcontainer);
resultcontainer.setVisibility(View.VISIBLE);
This causes my app to crash, and the debug output seems to indicate it's something with an onClick, but I wasn't clear on it. Specifically, it looks like the addView in the second-to-last line causes the crash.
I'm fairly certain I'm not understanding something very fundamental and basic about how to do this, but like I said, kinda just diving in and learning as I go.
I also need to edit the text of the TextViews in the containers I'm adding, but one thing at a time.
Any help or pointers in the right direction would be appreciated.
So, my question, since apparently there was some confusion: what is the best way to add LinearLayouts to a HorizontalScrollView, such that I can access and modify TextViews within those LinearLayouts?
Ideally, I'd like to create the objects being added initially via an XML layout, and not build them entirely programmatically. Once created and modified, I'll add them to my HorizontalScrollView.
| |
doc_23525618
|
For example, short alpha code representing the insurance (e.g., 'BCBS' for 'Blue Cross Blue Shield'):
txtDesc.text = "Blue Cross Blue Shield";
string Code = //This must be BCBS..
Is it possible? Please help me. Thanks!
A: Without Regex:
string input = "Blue Cross Blue Shield";
string output = new string(input.Where(Char.IsUpper).ToArray());
Response.Write(output);
A: You can try to use the 'Replace lowercase characters with star' implementation, but change '*' to '' (blank).
So the code would look something like this:
txtDesc.Text = "Blue Cross Blue Shield";
string TargetString = txtDesc.Text;
string MainString = TargetString;
for (int i = 0; i < TargetString.Length; i++)
{
if (char.IsLower(TargetString[i]))
{
TargetString = TargetString.Replace( TargetString[ i ].ToString(), string.Empty );
}
}
Console.WriteLine("The string {0} has converted to {1}", MainString, TargetString);
A: I'd map the value to your abbreviation in a dictionary like:
Dictionary<string, string> valueMap = new Dictionary<string, string>();
valueMap.Add("Blue Cross Blue Shield", "BCBS");
string Code = "";
if(valueMap.ContainsKey(txtDesc.Text))
Code = valueMap[txtDesc.Text];
else
// Handle
But if you still want the functionality you mention, use LINQ:
string newString = new string(txtDesc.Text.Where(c => char.IsUpper(c)).ToArray());
A: string Code = Regex.Replace(txtDesc.text, "[a-z]", "");
A: string caps = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
string.Join("",
"Blue Cross Blue Shield".Select(c => caps.IndexOf(c) > -1 ? c.ToString() : "")
.ToArray());
A: Well you could use a regular expression to remove everything that wasn't capital A-Z:
using System;
using System.Text.RegularExpressions;
class Program
{
static void Main( string[] args )
{
string input = "Blue Cross Blue Shield 12356";
Regex regex = new Regex("[^A-Z]");
string output = regex.Replace(input, "");
Console.WriteLine(output);
}
}
Note that this would also remove any non-ASCII characters. An alternative regex would be:
Regex regex = new Regex(@"[^\p{Lu}]");
... I believe that should cover upper-case letters of all cultures.
A: Rather than matching on all capitals, I think the specification would require matching the first character from all the words. This would allow for inconsistent input but still be reliable in the long run. For this reason, I suggest using the following code. It uses an aggregate on each Match from the Regex object and appends the value to a string object called output.
string input = "Blue Cross BLUE shield 12356";
Regex regex = new Regex("\\b\\w");
string output = regex.Matches(input).Cast<Match>().Aggregate("", (current, match) => current + match.Value);
Console.WriteLine(output.ToUpper()); // outputs BCBS1
A: string Code = new String(txtDesc.text.Where(c => IsUpper(c)).ToArray());
A: This isn't perfect but should work (and passes your BCBS test):
private static string AlphaCode(String Input)
{
List<String> capLetter = new List<String>();
foreach (Char c in Input)
{
if (char.IsLetter(c))
{
String letter = c.ToString();
if (letter == letter.ToUpper()) { capLetter.Add(letter); }
}
}
return String.Join(String.Empty, capLetter.ToArray());
}
And this version will handle strange input scenarios (this makes sure the first letter of each word is capitalized).
private static string AlphaCode(String Input)
{
String capCase = System.Globalization.CultureInfo.CurrentCulture.TextInfo.ToTitleCase(Input.ToString().ToLower());
List<String> capLetter = new List<String>();
foreach (Char c in capCase)
{
if (char.IsLetter(c))
{
String letter = c.ToString();
if (letter == letter.ToUpper()) { capLetter.Add(letter); }
}
}
return String.Join(String.Empty, capLetter.ToArray());
}
A: Here is my variant:
var input = "Blue Cross Blue Shield 12356";
var sb = new StringBuilder();
foreach (var ch in input) {
if (char.IsUpper(ch)) { // only keep uppercase
sb.Append(ch);
}
}
sb.ToString(); // "BCBS"
I normally like to use regular expressions, but I don't know how to select "only uppercase" in them without [A-Z] which will break badly on characters outside the English alphabet (even other Latin characters! :-/)
Happy coding.
But see Mr. Skeet's answer for the regex way ;-)
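For comparison with the LINQ and regex answers above, the same uppercase filter can be sketched in Python (an illustrative aside, not part of the original thread); str.isupper is Unicode-aware, so like the \p{Lu} pattern it also keeps capital letters outside the English alphabet:

```python
def initials_only(s):
    # keep only the characters that are uppercase letters
    return ''.join(c for c in s if c.isupper())

print(initials_only("Blue Cross Blue Shield"))        # BCBS
print(initials_only("Blue Cross Blue Shield 12356"))  # digits dropped too: BCBS
```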
| |
doc_23525619
|
A: I didn't understand what you meant by the last few lines, but when you have a specific output like yours, you usually have to write your own.
def sci_not(v, err, rnd=1):
    # shift so the mantissa has two digits; negate the shift for display
    power = -int(('%E' % v)[-3:]) + 1
    return '({0} +/- {1})e{2}'.format(
        round(v * 10**power, rnd), round(err * 10**power, rnd), -power)
This does the trick
>>> v = .01342
>>> err = .0004
>>> sci_not(v,err)
'(13.4 +/- 0.4)e-3'
EDIT : You can put in the ± character if you make the string unicode, but the results only look pretty when you use a print statment.
Replace the previous return statement with
return u'({0} \u00B1 {1})e{2}'.format(
    round(v * 10**power, rnd), round(err * 10**power, rnd), -power)
This returns
>>> sci_not(v,err)
u'(13.4 \xb1 0.4)e-3'
>>> print sci_not(v,err)
(13.4 ± 0.4)e-3
A: I don't know if this is exactly what you're looking for, but you can display numbers in scientific notation using .format
v = 0.01342
err = 0.0004
print ('({:.2e}'.format(float(v)) + ' +/- ' + '{:.2e}'.format(float(err)) + ')')
Will output the following:
(1.34e-02 +/- 4.00e-04)
the .2 of {:.2e} specifies the precision, which prevents any overly ugly numbers
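As a quick sanity check of that format spec (standard CPython behavior, shown as a hedged sketch):

```python
v = 0.01342
err = 0.0004

# .2e means exponential notation with two digits after the decimal point
assert '{:.2e}'.format(v) == '1.34e-02'
assert '{:.2e}'.format(err) == '4.00e-04'

# the combined string produced by the answer above
combined = '({:.2e} +/- {:.2e})'.format(v, err)
print(combined)  # (1.34e-02 +/- 4.00e-04)
```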
| |
doc_23525620
|
<tr class="simplehighlight" onclick="window.location.href='./games/game191.php';">
<td>06/09/2007</td><td>Jennifer Woods Memorial Grand Prix</td><td>C54</td>
<td>Nikolayev, Igor</td><td>2431</td><td>Parry, Matt</td><td>2252</td><td class="text-center">1-0</td></tr>
I want to read in a delimited file, make an array, and populate the table: (sample record)
game191|06/09/2007|Jennifer Woods Memorial Grand Prix|C54|Nikolayev, Igor|2431|Parry, Matt|2252|1-0
I tried this, but it only displays the last record from the datafile (/games.csv)
<?php
// open delimited data file "|" if needed and read.
//game191|06/09/2007|Jennifer Woods Memorial Grand Prix|C54|Nikolayev, Igor|2431|Parry, Matt|2252|1-0
if(!isset($_SESSION['games_array'])) {$file = $_SERVER['DOCUMENT_ROOT'].'/games.csv';
$fp = fopen($file,"r"); $list = fread($fp, filesize($file));
$_SESSION['games_array'] = explode("\n",$list); fclose($fp);}
// extract variables from each tuple by iteration
foreach ($_SESSION['games_array'] as $v);{
$token = explode("|", $v);
//write the table row and table data
echo "<tr class=\"simplehighlight\" onclick=\"window.location.href='./games/";
echo $token[0]; echo ".php';\">";
echo "<td>";echo $token[1];echo "</td>"; echo "<td>";echo $token[2];echo "</td>";
echo "<td>";echo $token[3];echo "</td>"; echo "<td>";echo $token[4];echo "</td>";
echo "<td>";echo $token[5];echo "</td>"; echo "<td>";echo $token[6];echo "</td>";
echo "<td>";echo $token[7];echo "</td>";
echo "<td class=\"text-center\">"; echo $token[8];echo "</td>";
echo "</tr>";};
?>
What am I missing?
A: foreach ($_SESSION['games_array'] as $v);{
should be
foreach ($_SESSION['games_array'] as $v) {
A: Try with file():
$lines = file('filename');
foreach ($lines as $line_num => $line) {
echo "Line #<b>{$line_num}</b> : " . htmlspecialchars($line) . "<br />\n";
}
A: do you have the right permissions to access that file? is your php error_reporting set to display all errors? you could try using only relative paths, have you tried that? try using file_get_contents...
A: It was simpler than I thought... I erred in reusing old junk code from another script.
I found this and it works!
<?php
$text = file('games.csv'); foreach($text as $line) {$token = explode("|", $line);
echo "<tr class=\"simplehighlight\" onclick=\"window.location.href='./games/";
echo $token[0]; echo ".php';\">";
echo "<td>";echo $token[1];echo "</td>"; echo "<td>";echo $token[2];echo "</td>";
echo "<td>";echo $token[3];echo "</td>"; echo "<td>";echo $token[4];echo "</td>";
echo "<td>";echo $token[5];echo "</td>"; echo "<td>";echo $token[6];echo "</td>";
echo "<td>";echo $token[7];echo "</td>";
echo "<td class=\"text-center\">"; echo $token[8]; echo "</td>";
echo "</tr>";};
?>
Let me know if you see any improvements or faster functions. Thank you !
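One possible improvement, offered as a suggestion rather than a fix: a delimiter-aware CSV reader saves the manual explode step (in PHP, fgetcsv with '|' as the separator plays that role). Here is the same parse sketched with Python's stdlib csv module, using the sample record from the question:

```python
import csv
import io

record = ("game191|06/09/2007|Jennifer Woods Memorial Grand Prix|C54|"
          "Nikolayev, Igor|2431|Parry, Matt|2252|1-0\n")

# the '|' delimiter keeps the embedded commas inside single fields
rows = list(csv.reader(io.StringIO(record), delimiter='|'))
print(rows[0][0], rows[0][1], rows[0][-1])  # game191 06/09/2007 1-0
```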
| |
doc_23525621
|
#if DEBUG
#include <iostream>
#include <ostream>
#define LOG(x) std::cout << x << std::endl;
#else
#define LOG(x) // LOG(x) is replaced with nothing in non-debug
#endif
How would an equivalent function look like that allows this?:
LOG("This is a Test message" << " with " << testVariable << " a variable");
my current implementation looks like this:
template <typename T>
inline void logD(const T& x) {
if constexpr (Debug) {
std::cout << x << std::endl;
}
};
but I get the following error:
error C2296: '<<': illegal, left operand has type 'const char [25]'
replacing << with + for concatenating doesnt help either
error C2110: '+': cannot add two pointers
A: With the help of Mooing_Duck I made the function a variadic template and simply use the parameter pack.
template <typename ...T>
inline void logD(const T&... x) {
if constexpr (DebugBuild) {
(std::cout << ... << x) << std::endl;
}
};
You call the function with the pieces of content separated by commas.
logD("This is a ","test with a ",variable," variable");
A: The first part of the function argument must be a well-defined type that can be used with standard streams, e.g.:
std::string testVariable = "test";
LOG(std::string("This is a Test message") + " with " + testVariable + " a variable");
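As a side note, the C++ parameter pack above corresponds to *args in Python; a minimal sketch of the same pattern, assuming a module-level DEBUG flag (my addition, mirroring DebugBuild):

```python
DEBUG = True  # assumed flag, analogous to DebugBuild in the C++ version

def log_d(*parts):
    # concatenate all parts, like the (std::cout << ... << x) fold expression
    if DEBUG:
        print(*parts, sep='')

log_d("This is a ", "test with a ", 42, " variable")
# prints: This is a test with a 42 variable
```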
| |
doc_23525622
|
Structure:
apps:
*
*project_a
*project_b
Project_a and project_b are both --sup applications and I want to use the GenServer from project_a in project_b. I've included the project in my deps.exs file, but I don't know what to do next...
If I open the observer I see both applications in the menu, but I keep getting errors because project_b can't use project_a.
Does anyone know what I'm forgetting?
A: I forgot to add project_a in the mix.exs file from project_b.
It is not enough to add it as a dep; it must also be added in the def application part.
see: https://github.com/josevalim/kv_umbrella for an example.
A: While your answer to your own question is correct, I can tell you from experience that writing an adapter to your service in the other application is a good practice intended to more loosely couple the 2 applications together and to avoid circular references.
What do I mean by this? Take the public API portion of the GenServer you are sending a message to and move it to another module in another application. You will find this is very similar to writing a facade for an HTTP API. The public API part of a GenServer is actually run from the calling process, even tough it is within the GenServer's module, so moving it to another module is just fine.
Please forgive any syntax problems etc in the following code as I am pulling it out of my head.
Change something like this:
defmodule App1.Calculator do
use GenServer
def add( num1, num2 ), do: GenServer.call( App1.Calculator, {:add, num1, num2})
def handle_call({:add, num1, num2}, _from, state) do
{:reply, {:ok, num1+num2}, state}
end
end
To:
defmodule App1.Calculator do
use GenServer
def handle_call({:add, num1, num2}, _from, state) do
{:reply, {:ok, num1+num2}, state}
end
end
defmodule Service.Calculator do
def add(num1, num2), do: GenServer.call(process_name, {:add, num1, num2})
# Just an example of how you might have named your node and calculator process
def process_name, do: {:calculator, :"app1@127.0.0.1"}
end
Where Service.Calculator is in a 3rd app named Service that can be depended on by App1 and App2 without creating a circular reference.
Why might you have to create a circular reference? As soon as you are doing things asynchronously using cast from App1 to App2, then App2 will have to send a message with results to App1, and without the 3rd Service application, you would create a circular reference between App1 and App2. Not to mention that when you start releasing two nodes (one for each app), there is no need to include the entirety of the other app's compiled code just to get an adapter to a service.
| |
doc_23525623
|
func addFavouriteColumn() -> Bool
{
let querySql = "ALTER TABLE 'Media' ADD COLUMN Favourite INTEGER DEFAULT 0"
guard let queryStatement = try? prepareStatement(sql: querySql) else {
return false
}
defer {
sqlite3_finalize(queryStatement)
}
return sqlite3_step(queryStatement) == SQLITE_DONE
}
A: The problem was that sqlite3_step(queryStatement) returned 8, which represents SQLITE_READONLY, because the database file was in the bundle. I fixed it by moving the database file to the Documents folder so it is read-write; the query now works and the new column is added to the table.
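Both ingredients of this answer, the ALTER TABLE ... ADD COLUMN with a default and the SQLITE_READONLY failure on a non-writable file, can be reproduced with Python's stdlib sqlite3 module (an illustrative sketch, not the Swift/C API used in the thread):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'media.db')

# create a small Media table, like the one in the question
con = sqlite3.connect(path)
con.execute('CREATE TABLE Media (id INTEGER PRIMARY KEY)')
con.execute('INSERT INTO Media (id) VALUES (1)')
con.commit()

# read-write file: the ALTER succeeds and the default fills existing rows
con.execute('ALTER TABLE Media ADD COLUMN Favourite INTEGER DEFAULT 0')
assert con.execute('SELECT Favourite FROM Media').fetchone() == (0,)
con.commit()
con.close()

# read-only connection (like a file inside the app bundle): the same kind
# of statement fails with SQLITE_READONLY
ro = sqlite3.connect('file:{}?mode=ro'.format(path), uri=True)
try:
    ro.execute('ALTER TABLE Media ADD COLUMN Other INTEGER DEFAULT 0')
    readonly_failed = False
except sqlite3.OperationalError:
    readonly_failed = True
ro.close()
print(readonly_failed)  # True
```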
| |
doc_23525624
|
Also, is there any way we could reverse an array without a temp value?
for (int i = 0; i < numbers.length / 2; i++) { // why divide by 2
int temp = numbers[i];
    numbers[i] = numbers[numbers.length - 1 - i]; // what this does?
    numbers[numbers.length - 1 - i] = temp;
}
A: Simply put, this loop is swapping the position of the values in the array.
For example, take this array [1, 2, 3, 4].
The loop starts by setting the element at i to the variable temp.
Then, the element at the mirrored position from the end (numbers.length - 1 - i) is copied into the current position i, thus replacing 1. In other words, it selects the furthest element from itself that hasn't been swapped yet. Right after that, the value saved in temp is written into that mirrored position, where 4 used to be.
The first iteration causes the array to look like [4, 2, 3, 1]
The second iteration causes the array to look like [4, 3, 2, 1]
The array is now reversed. But notice that we only iterated half the length of the array. There is no need to keep iterating and if we were to go any further, we would get an array out of bounds error.
(The -1 is needed because .length returns the number of elements in the array, while array indices start at 0.)
A: Let's say your array has a length of 10; the first iteration would do this:
i = 0
int temp = numbers[0];
numbers[0] = numbers[10 - 1 - 0]; // first value becomes the last value in the array
numbers[10 - 1 - 0] = temp; // last value becomes the previously first value in the array
2 values change in one step, so you need (length / 2) steps to get the job done.
If you want to swap two integers without using an additional variable, you can use the xor bitwise operator (^)
An example:
int x = -10;
int y = 125;
x ^= y;
y ^= x;
x ^= y;
More details about the theory behind it on wikipedia
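Putting the two answers together, the half-length loop plus the XOR trick gives a temp-free reversal; here is a sketch in Python (illustrative only; in real Java or Python code a plain temp variable or tuple swap is clearer):

```python
def reverse_in_place(numbers):
    """Reverse a list of ints using XOR swaps, so no temp variable is needed."""
    n = len(numbers)
    for i in range(n // 2):          # half the length: each pass swaps two slots
        j = n - 1 - i                # mirror index, counted from the end
        numbers[i] ^= numbers[j]
        numbers[j] ^= numbers[i]     # numbers[j] now holds the old numbers[i]
        numbers[i] ^= numbers[j]
    return numbers

print(reverse_in_place([1, 2, 3, 4]))  # [4, 3, 2, 1]
```

Note that the XOR swap only works on integers, and only because i and j always refer to two distinct slots here; for an odd length, the middle element is simply left untouched.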
| |
doc_23525625
|
Is it configurable in Kafka Server configurations?
A: I don't think it's possible to accomplish this. One of the key differences between Kafka and other messaging systems is that Kafka relies on the underlying OS's file system to handle storage.
Another unconventional choice that we made is to avoid explicitly
caching messages in memory at the Kafka layer. Instead, we rely on
the underlying file system page cache. Whitepaper
So Kafka automatically writes messages to disk, so it retains them by default. This is a conscious decision the designers of Kafka have made that they believe is worth the tradeoffs.
If you're asking this because you're worried that writing to disk may be slower than keeping things in memory, it isn't a problem in practice:
We have found that both the production and the
consumption have consistent performance linear to the data size,
up to many terabytes of data. Whitepaper
So the size of the data that you've retained doesn't impact how fast the system is.
| |
doc_23525626
|
This is my seekBar:
<?xml version="1.0" encoding="utf-8"?>
<layout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools">
<SeekBar
android:id="@+id/seekBar"
android:layout_width="match_parent"
android:layout_height="match_parent">
</SeekBar>
</layout>
This is my class:
open class UtilClass(activity: Activity) {
...
val seekBar = activity.findViewById<SeekBar>(R.id.seekBar)
..
}
The seekbar is inflated using DataBindingUtil in its own Fragment:
class SeekBarFragment(): Fragment {
...
override fun onCreateView(
inflater: LayoutInflater,
container: ViewGroup?,
savedInstanceState: Bundle?
): View? {
val binding: SeekbarBinding = DataBindingUtil.inflate(inflater, R.layout.seekbar, container, false)
...
}
UtilClass is instantiated here:
class MainActivity : AppCompatActivity() {
private lateinit var mUtilClass: UtilClass
....
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
mUtilClass = UtilClass(requireActivity())
retainInstance = true
var fm = fragmentManager
var fragment: Fragment? = fm!!.findFragmentById(R.id.seek_container)
if (fragment == null) {
fragment = SeekBarFragment.newInstance()
fm.beginTransaction()
.add(R.id.seek_container, fragment)
.commit()
}
}
Binding is created here:
class SeekBarFragment: Fragment() {
companion object {
fun newInstance(): SeekBarFragment {
return SeekBarFragment()
}
}
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
super.onCreate(savedInstanceState)
}
override fun onCreateView(
inflater: LayoutInflater,
container: ViewGroup?,
savedInstanceState: Bundle?
): View? {
val binding: SeekbarBinding = DataBindingUtil.inflate(inflater, R.layout.seekbar, container, false)
...
}
A: You should look the SeekBar up on the inflated parent layout view.
override fun onCreateView(
inflater: LayoutInflater,
container: ViewGroup?,
savedInstanceState: Bundle?
): View {
val v: View = inflater.inflate(R.layout.seekbar, container, false)
val seekBar = v.findViewById<SeekBar>(R.id.seekBar)
return v
}
| |
doc_23525627
|
this.transactionTemplate.execute(new TransactionCallback<E>() {
@Override
public E doInTransaction(TransactionStatus status) {
// update entities
TransactionSynchronizationManager.registerSynchronization(new NotificationTransactionSynchronization(){
@Override
public void afterCommit() {
// do some post commit work
int i = notifier.notifyAllListeners();
}
});
}
});
my test class:
@Test
public void testHappyPath() {
context.checking(new Expectations() {
{
allowing(platformTransactionManager).getTransaction(definition);
will((returnValue(status)));
oneOf(platformTransactionManager).commit(status);
//next line never gets hit... so the test fails...
//if i remove it will pass but i need to check that it works...
oneOf(mockNotifier).notifyAllListeners();
}
});
this.TestClass.process();
context.assertIsSatisfied();
}
A: Recently I got to the point where I had to test code which was using transactional hooks and after some investigation I got to following solution:
src:
public void methodWithTransactionalHooks() {
//...
TransactionSynchronizationManager.registerSynchronization(
new TransactionSynchronizationAdapter() {
public void afterCommit() {
// perform after commit synchronization
}
}
);
//...
}
test:
@Transactional
@Test
public void testMethodWithTransactionalHooks() {
// prepare test
// fire transaction synchronizations explicitly
for(TransactionSynchronization transactionSynchronization
: TransactionSynchronizationManager.getSynchronizations()
){
transactionSynchronization.afterCommit();
}
// verify results
}
Tests by default are set to roll back, so afterCommit synchronizations won't be fired. To test it, an explicit call is necessary.
A: I'm not sure I understand, but if you have a mock transaction manager, then who would be calling the notifier?
A: I ran into the same issue, in my case adding
@Rollback(false)
to the test method helped.
See https://stackoverflow.com/a/9817815/1099376
| |
doc_23525628
|
Expected behavior
Stated warning should NOT be displayed.
Actual behavior
Every time I make a change and trigger a re-deployment, I get an error like:
WARN[0064] image [gcr.io/wired-benefit-XXXXX/demoapp] is not used by the deployment
Yet the image is modified with the updated change, so I'm not sure what the error is indicating.
Information
*
*Skaffold version: version... v1.15.0
*Operating system: ... MacOS Catalina 10.15.16
*Contents of skaffold.yaml:
apiVersion: skaffold/v2beta8
kind: Config
metadata:
name: demoapp
build:
artifacts:
- image: gcr.io/wired-benefit-293406/demoapp
deploy:
kubectl:
manifests:
- k8*.yml
Content of K8s manifests :
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: demoapp
name: demoapp
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: demoapp
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
labels:
app: demoapp
spec:
containers:
- image: gcr.io/wired-benefit-293406/demoapp
imagePullPolicy: IfNotPresent
name: demoapp
restartPolicy: Always
apiVersion: v1
kind: Service
metadata:
labels:
app: demoapp
name: demoapp-svc
spec:
ports:
- port: 80
protocol: TCP
targetPort: 3000
selector:
app: demoapp
type: LoadBalancer
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: demoapp
spec:
maxReplicas: 5
minReplicas: 1
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: demoapp
targetCPUUtilizationPercentage: 80
Steps to reproduce the behavior
*
*a very basic starter demo app
*skaffold dev
*Any change ... the docker build by skaffold is successful, and the image is even pushed to the registry
But changes are not being reflected. It could be a tag-related problem. When I manually set the image tag to latest in the deployment, the app change works.
A: As I said in the comment:
Is your K8S manifest a single file with Deployment, Service and HPA inside of it? I ran it exactly as you've pasted it (and encountered the same warning) and it lacked the --- in between the resources.
Talking specifically about the content included in Content of K8s manifests, this file is missing three dashes (---) between the resources.
It could be fixed either by:
*
*splitting the resources into multiple files (by following skaffold.yaml and its template k8*.yml):
*
*k8s-deployment.yaml
*k8s-service.yaml
*k8s-hpa.yaml
*adding the --- between each of the resource in Content of K8s manifests (example):
DEPLOYMENT
---
SERVICE
---
HPA
You can read more about --- in YAML files by following this StackOverflow answer:
*
*Stackoverflow.com: why — (3 dashes/hyphen) in yaml file?
As for the reproduction. I used the the official getting started guide:
*
*Skaffold.dev: Docs: Quickstart
I copied the Content of K8s manifests into the k8s-pod.yaml and changed the line (this file does not have --- between the resources):
- image: gcr.io/PROJECT-NAME/demoapp
Running below command with:
*
*$ skaffold dev
Listing files to watch...
- gcr.io/PROJECT-NAME/demoapp
Generating tags...
- gcr.io/PROJECT-NAME/demoapp -> gcr.io/PROJECT-NAME/demoapp:<--REDACTED-->
Checking cache...
- gcr.io/PROJECT-NAME/demoapp: Not found. Building
Building [gcr.io/PROJECT-NAME/demoapp]...
Sending build context to Docker daemon 3.072kB
<--REDACTED-->
<--REDACTED-->: Pushed
<--REDACTED-->: Layer already exists
<--REDACTED-->: digest: <--REDACTED--> size: 739
Tags used in deployment:
- gcr.io/PROJECT-NAME/demoapp -> gcr.io/PROJECT-NAME/demoapp:<--REDACTED-->
Starting deploy...
WARN[0023] image [gcr.io/PROJECT-NAME/demoapp] is not used by the deployment
- horizontalpodautoscaler.autoscaling/demoapp created
Waiting for deployments to stabilize...
Deployments stabilized in 198.216977ms
Press Ctrl+C to exit
Watching for changes...
Focusing on:
WARN[0023] image [gcr.io/PROJECT-NAME/demoapp] is not used by the deployment
- horizontalpodautoscaler.autoscaling/demoapp created
As you can see only the HPA object was created. Deployment and Service was not created. It's also showing the same warning as yours.
Running $ kubectl apply -f k8s-pod.yaml will yield the same results!
Editing the k8s-pod.yaml file to include --- and running $ skaffold dev once again should produce output similar to the one below:
Listing files to watch...
- gcr.io/PROJECT-NAME/demoapp
Generating tags...
- gcr.io/PROJECT-NAME/demoapp -> gcr.io/PROJECT-NAME/<--REDACTED-->
Checking cache...
- gcr.io/PROJECT-NAME/demoapp: Not found. Building
<--REDACTED-->
<--REDACTED-->: Pushed
<--REDACTED-->: Layer already exists
<--REDACTED-->: digest: <--REDACTED--> size: 739
Tags used in deployment:
- gcr.io/PROJECT-NAME/demoapp -> gcr.io/PROJECT-NAME/demoapp:<--REDACTED-->
Starting deploy...
- deployment.apps/demoapp created
- service/demoapp-svc created
- horizontalpodautoscaler.autoscaling/demoapp created
Waiting for deployments to stabilize...
- deployment/demoapp is ready.
Deployments stabilized in 5.450197785s
Press Ctrl+C to exit
Watching for changes...
[demoapp] Hello World with ---!
[demoapp] Hello World with ---!
[demoapp] Hello World with ---!
As you can see above all of the resources were created, there was no warning about the deployment not using an image and also the app responded.
Additional resources:
*
*Kubernetes: Working with objects: Kubernetes objects
| |
doc_23525629
|
If you're using git, then any time you regularly commit your work like a good developer, you can no longer see which lines or files you have modified.
It would be nice if I could tell PhpStorm "please consider the deploy-candidate branch to be 'the last update', so I can easily see what I've actually changed in my current task's BUG-9463-Fix-Login-Button branch."
So: Is there a setting to show all changes in your local branch as "edited"?
Notes:
1) As an alternative, a correct answer could be a git incantation to fool the editor. I'd be OK with having to undo that incantation each time before I committed, and redo it after I push, since I can just automate that in .gitconfig anyway.
2) Ideally it'd also show which files were changed in your branch in the "local changes" tab, but while that'd be a bonus, it's not the focus of this question, nor required in a correct answer.
3) While it shows similar information, the undeniably awesome diff tool is no substitute for line-status indicators.
A: There's this request submitted to the tracker: https://youtrack.jetbrains.com/issue/IDEA-24398
And it appears that someone has made a plugin partially addressing the request: https://plugins.jetbrains.com/plugin/10083-git-scope
| |
doc_23525630
|
It's a Next.js app that only has basic CRUD functions (I'm following Brad Traversy's Next.js course on Udemy).
In this component I want to make a request to my Strapi backend to fetch user data
import React from 'react'
import Layout from '../components/Layout'
import { parseCookies } from '@/helpers/index'
import { API_URL } from '../../config'
const Dashboard = ({ events }) => {
return (
<Layout title='User Dashboard'>
<h1>Your events</h1>
{events && events.length && events.map((el, i) => <div>{el.name}</div>)}
</Layout>
)
}
export default Dashboard
export async function getServerSideProps({ req }) {
const { token } = parseCookies(req)
const res = await fetch(`${API_URL}/events/me`,
{
method: 'GET',
headers: {
Authorization: `Bearer ${token}`
}
})
const events = await res.json()
return {
props: {
events
}
}
}
the helper method should extract the cookie from the request and return the token to getServerSideProps
import cookie from 'cookie'
export function parseCookies(req) {
console.log('///// REQ IN HELPER', cookie.parse(req.headers.cookie))
return cookie.parse(req ? req.headers.cookie || '' : '')
}
Instead of a token the method returns this
{
_xsrf: '2|07438526|dd1d3c86869ab7209b159b127acbead9|1629292796',
'username-localhost-8888': '2|1:0|10:1629300070|23:username-localhost-8888|44:OThhNzc0YWY4MTA4NDFmZWFlYWM3MWE2MmEyNmUzYjI=|5c1a7386f172dee14bf53281f3e3ba9a6fb7e1cf067e7438529ca8f4160214f6'
}
Here is an example of how I set the cookie (in this case after login):
import { API_URL } from '../../config/index'
import cookie from 'cookie'
export default async (req, res) => {
if (req.method === 'POST') {
const { identifier, password } = req.body
const strapiRes = await fetch(`${API_URL}/auth/local`,
{
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify({ identifier, password })
})
const data = await strapiRes.json()
if (strapiRes.ok) {
res.setHeader('Set-Cookie',
cookie.serialize('token', data.jwt),
{
httpOnly: true,
maxAge: 60 * 60 * 24 * 7,
sameSite: 'strict',
path: '/'
})
res.status(200).json({ user: data.user })
} else {
res.status(data.statusCode).json({ message: data.message[0].messages[0].message })
}
}
else {
res.setHeader('Allow', ['POST'])
res.status(405).json({ message: `Method ${req.method} is not allowed` })
}
}
As far as I understand it, the cookie should now be stored server side and sent automatically with each request. The result I get from cookie.parse() suggests the token cookie is undefined.
In other components getting the cookie from the header is no problem and works as it should - it's just in this dashboard-component where it does not seem to work.
Does anyone of you know how to fix that?
A: I found my mistake. For everyone else who hits this problem, here is what I did wrong: the call to cookie.serialize was wrong. The object with the options needs to be an argument of cookie.serialize, not of setHeader.
wrong version
res.setHeader('Set-Cookie',
cookie.serialize('token', data.jwt),
{
httpOnly: true,
maxAge: 60 * 60 * 24 * 7,
sameSite: 'strict',
path: '/'
})
working version:
res.setHeader('Set-Cookie',
cookie.serialize('token', data.jwt,
{
httpOnly: true,
maxAge: 60 * 60 * 24 * 7,
sameSite: 'strict',
path: '/'
}))
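To see why the placement matters, here is a minimal sketch of a serializer (a simplified stand-in for the npm cookie package, for illustration only): options passed as the third argument of serialize end up in the Set-Cookie string, while an extra object passed to res.setHeader is silently ignored.

```javascript
// Simplified stand-in for cookie.serialize, for illustration only.
function serialize(name, value, options = {}) {
  let str = `${name}=${encodeURIComponent(value)}`;
  if (options.httpOnly) str += '; HttpOnly';
  if (options.maxAge !== undefined) str += `; Max-Age=${options.maxAge}`;
  if (options.path) str += `; Path=${options.path}`;
  if (options.sameSite) str += `; SameSite=${options.sameSite}`;
  return str;
}

// Options passed to serialize become part of the Set-Cookie header value:
const right = serialize('token', 'abc123', { httpOnly: true, path: '/' });
// Options left out (as in the wrong version) are silently dropped:
const wrong = serialize('token', 'abc123');

console.log(right); // token=abc123; HttpOnly; Path=/
console.log(wrong); // token=abc123
```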
A: And I found another problem that might help others: a faulty cookie saved in my browser made my application crash. After deleting that cookie, everything worked as it should.
| |
doc_23525631
|
org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'qualityAuditTokenService' defined in URL [jar:file:/home/www/webapps/ROOT/WEB-INF/lib/product-service-2.0.1-SNAPSHOT.jar!/com/company/product/services/QualityAuditTokenService.class]: Unsatisfied dependency expressed through constructor argument with index 0 of type [com.company.workflow.dao.TokenDao]: : No qualifying bean of type [com.company.workflow.dao.TokenDao] found for dependency: expected at least 1 bean which qualifies as autowire candidate for this dependency. Dependency annotations: {}; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type [com.company.workflow.dao.TokenDao] found for dependency: expected at least 1 bean which qualifies as autowire candidate for this dependency. Dependency annotations: {}
at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:742)
at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:196)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1114)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1017)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:504)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:475)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:304)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:228)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:300)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:195)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:700)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:760)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:482)
at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:403)
at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:306)
at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:106)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4811)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5251)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:147)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:725)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:701)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:717)
at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1092)
at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1834)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type [com.company.workflow.dao.TokenDao] found for dependency: expected at least 1 bean which qualifies as autowire candidate for this dependency. Dependency annotations: {}
at org.springframework.beans.factory.support.DefaultListableBeanFactory.raiseNoSuchBeanDefinitionException(DefaultListableBeanFactory.java:1100)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:960)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:855)
at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:806)
at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:734)
... 28 more
However if I install exactly the same web application to Tomcat 8.0.33 installed on an NFS mount running on Java 1.8.0_92 on Centos 6.7 it works just fine. It also works just fine if I install it to Tomcat 7.0.69 on ext3 on Centos, Tomcat 8.0.33 on ext4 on Ubuntu and NTFS on Windows. So it's just throwing this error running in Tomcat 8.0.33 on ext3 on Centos. It wouldn't be so much of a problem if this weren't our live deployment environment.
So this is clearly not one of the standard "missing annotations" or "bean class missing from JAR" type problems although I am happy to hear suggestions in this vein in case I missed something.
The strange thing about this deployment is that the Spring beans are created in a different order on the different file systems. In the versions that work, the following appears in the logfile with Spring logging maxed:
DEBUG DefaultListableBeanFactory:449 - Creating instance of bean 'tokenDaoHbm'
DEBUG DefaultListableBeanFactory:249 - Returning cached instance of singleton bean 'sessionFactory'
DEBUG DefaultListableBeanFactory:249 - Returning cached instance of singleton bean 'searchSessionFactory'
DEBUG DefaultListableBeanFactory:750 - Autowiring by type from bean name 'tokenDaoHbm' via constructor to bean named 'sessionFactory'
DEBUG DefaultListableBeanFactory:523 - Eagerly caching bean 'tokenDaoHbm' to allow for resolving potential circular references
DEBUG DefaultListableBeanFactory:249 - Returning cached instance of singleton bean 'org.springframework.transaction.config.internalTransactionAdvisor'
DEBUG DefaultListableBeanFactory:249 - Returning cached instance of singleton bean 'org.springframework.cache.config.internalCacheAdvisor'
DEBUG AnnotationTransactionAttributeSource:108 - Adding transactional method 'TokenDaoHbm.update' with attribute: PROPAGATION_REQUIRED,ISOLATION_DEFAULT; ''
DEBUG InfrastructureAdvisorAutoProxyCreator:551 - Creating implicit proxy for bean 'tokenDaoHbm' with 0 common interceptors and 1 specific interceptors
DEBUG JdkDynamicAopProxy:117 - Creating JDK dynamic proxy: target source is SingletonTargetSource for target object [com.company.product.dao.hibernate.TokenDaoHbm@4a51d9f9]
DEBUG DefaultListableBeanFactory:477 - Finished creating instance of bean 'tokenDaoHbm'
This is the bean that would satisfy the dependency had it been created - in the version that throws the exception, this bean creation is notable for its absence.
TL;DR
So, how can the OS, file system type and/or network latency change the order in which Spring creates beans (or otherwise break its dependency analysis)? Surely this is something enshrined in the WAR file (and the version of Spring it is packaged with)?
I have tried to influence bean creation via @ComponentScan and @Qualifier to no avail - are there other approaches?
This problem bears a resemblance to the one linked below, but there is no posted solution (and they are having the problem with Tomcat 7 not 8).
Need help debugging Tomcat 7 application error
Any help with this greatly appreciated, as this one is really vexing me! :-D
A: I now have a solution to this (hence posting an answer) but it's butt fugly and I still have no explanation for why this is necessary (so I'll save the accepted tick for either a more elegant solution or a complete explanation).
It turns out my issue is related to issue 57129 raised on the ASF bugzilla:
https://bz.apache.org/bugzilla/show_bug.cgi?id=57129
However, in the case outlined there, multiple JAR files within a WAR contain different versions of the same class file. This means changing the order will change the application behaviour - not desirable.
In my case, the class in question, TokenDaoHbm, only exists once in the WAR file. It's just that, if the Tomcat class loader hasn't loaded the product-dao-hibernate-2.0.1-SNAPSHOT JAR file by the time Spring comes to instantiate qualityAuditTokenService bean, then you get a NoSuchBeanDefinitionException. Surely Spring and/or Tomcat must know all classes must be loaded before bean instantiation can commence?
So, to fix my problem, I placed the following in the application's Tomcat context.xml (META-INF/context.xml) inside the WAR, following the advice from Mark Thomas in the ASF bug report:
<Resources>
<PreResources className="org.apache.catalina.webresources.FileResourceSet"
base="${catalina.base}/webapps/ROOT/WEB-INF/lib/product-dao-hibernate-2.0.1-SNAPSHOT.jar"
webAppMount="/WEB-INF/lib/product-dao-hibernate-2.0.1-SNAPSHOT.jar" />
</Resources>
If anyone can shed further light on this I am happy to mark them as the accepted answer.
| |
doc_23525632
|
Error LNK2005 mkl_serv_allocate already defined in mkl_intel_thread_dll.lib(mkl_intel_thread.dll) test C:\Users\user1\test\build\mkl_core.lib(mkl_memory_patched.obj)
Error LNK2005 mkl_serv_malloc already defined in mkl_intel_thread_dll.lib(mkl_intel_thread.dll) test C:\Users\user1\test\build\mkl_core.lib(mkl_memory_patched.obj)
Error LNK2005 mkl_serv_deallocate already defined in mkl_intel_thread_dll.lib(mkl_intel_thread.dll) test C:\Users\user1\test\build\mkl_core.lib(mkl_memory_patched.obj)
Error LNK2005 mkl_serv_free already defined in mkl_intel_thread_dll.lib(mkl_intel_thread.dll) test C:\Users\user1\test\build\mkl_core.lib(mkl_memory_patched.obj)
Error LNK2005 mkl_serv_format_print already defined in mkl_intel_thread_dll.lib(mkl_intel_thread.dll) test C:\Users\user1\test\build\mkl_core.lib(mkl_msg_support.obj)
Error LNK2005 mkl_serv_inspector_suppress already defined in mkl_intel_thread_dll.lib(mkl_intel_thread.dll) test C:\Users\user1\test\build\mkl_core.lib(mkl_semaphore.obj)
Error LNK2005 mkl_serv_inspector_unsuppress already defined in mkl_intel_thread_dll.lib(mkl_intel_thread.dll) test C:\Users\user1\test\build\mkl_core.lib(mkl_semaphore.obj)
Error LNK2005 mkl_serv_thread_yield already defined in mkl_intel_thread_dll.lib(mkl_intel_thread.dll) test C:\Users\user1\test\build\mkl_core.lib(mkl_semaphore.obj)
Error LNK2005 mkl_serv_unlock already defined in mkl_intel_thread_dll.lib(mkl_intel_thread.dll) test C:\Users\user1\test\build\mkl_core.lib(mkl_semaphore.obj)
Error LNK2019 unresolved external symbol __kmpc_global_thread_num referenced in function mkl_lapack_dgetrf test C:\Users\test\build\mkl_intel_thread.lib(dgetrf_par.obj)
Error LNK2001 unresolved external symbol __kmpc_global_thread_num test C:\Users\test\build\mkl_intel_thread.lib(d__scal_drv.obj)
Error LNK2019 unresolved external symbol __kmpc_ok_to_fork referenced in function mkl_lapack_dgetrf test C:\Users\test\build\mkl_intel_thread.lib(dgetrf_par.obj)
Error LNK2001 unresolved external symbol __kmpc_ok_to_fork test C:\Users\test\build\mkl_intel_thread.lib(d__scal_drv.obj)
Error LNK2019 unresolved external symbol __kmpc_push_num_threads referenced in function mkl_lapack_dgetrf test C:\Users\test\build\mkl_intel_thread.lib(dgetrf_par.obj)
Error LNK2001 unresolved external symbol __kmpc_push_num_threads test C:\Users\test\build\mkl_intel_thread.lib(d__scal_drv.obj)
Error LNK2019 unresolved external symbol __kmpc_fork_call referenced in function mkl_lapack_dgetrf test C:\Users\test\build\mkl_intel_thread.lib(dgetrf_par.obj)
Error LNK2001 unresolved external symbol __kmpc_fork_call test C:\Users\test\build\mkl_intel_thread.lib(d__scal_drv.obj)
Error LNK2019 unresolved external symbol __kmpc_serialized_parallel referenced in function mkl_lapack_dgetrf test C:\Users\test\build\mkl_intel_thread.lib(dgetrf_par.obj)
Error LNK1120 16 unresolved externals test C:\Users\user1\test\build\Debug\test.exe
Is this due to the following CMake configuration?
cmake_minimum_required(VERSION 3.11)
PROJECT(MYPROJECT)
set(MKL_INCLUDE_DIRS "C:/Program Files (x86)/IntelSWTools/compilers_and_libraries/windows/mkl/include")
set(MKL_LIBRARIES "C:/Program Files (x86)/IntelSWTools/compilers_and_libraries/windows/mkl/lib/intel64")
include_directories(${MKL_INCLUDE_DIRS})
add_executable(test MACOSX_BUNDLE)
target_link_libraries(
  test
  "${MKL_LIBRARIES}/mkl_blas95_ilp64.lib"
  "${MKL_LIBRARIES}/mkl_blas95_lp64.lib"
  "${MKL_LIBRARIES}/mkl_core.lib"
  "${MKL_LIBRARIES}/mkl_core_dll.lib"
  "${MKL_LIBRARIES}/mkl_intel_ilp64.lib"
  "${MKL_LIBRARIES}/mkl_intel_ilp64_dll.lib"
  "${MKL_LIBRARIES}/mkl_intel_lp64.lib"
  "${MKL_LIBRARIES}/mkl_intel_lp64_dll.lib"
  "${MKL_LIBRARIES}/mkl_intel_thread.lib"
  "${MKL_LIBRARIES}/mkl_intel_thread_dll.lib"
  "${MKL_LIBRARIES}/mkl_lapack95_ilp64.lib"
  "${MKL_LIBRARIES}/mkl_lapack95_lp64.lib"
  "${MKL_LIBRARIES}/mkl_rt.lib"
  "${MKL_LIBRARIES}/mkl_sequential.lib"
  "${MKL_LIBRARIES}/mkl_sequential_dll.lib"
  "${MKL_LIBRARIES}/mkl_tbb_thread.lib"
  "${MKL_LIBRARIES}/mkl_tbb_thread_dll.lib"
)
I have the bin folder containing the DLLs in the system PATH:
C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_2019.0.117\windows\redist\intel64_win\mkl
A: You are linking libraries that must not be combined. The LNK2005 duplicate-symbol errors come from linking both the static (.lib) and DLL import (_dll.lib) variants, and more than one threading layer, at the same time. You can suppress the duplicates by setting the link flag
LINK_FLAGS "/force:multiple"
but the cleaner fix is to link only one library per layer. You are also missing something necessary: the unresolved __kmpc_* symbols come from the Intel OpenMP runtime, so please also link libiomp5md.lib. You can refer to this post.
| |
doc_23525633
|
What I did was first make a datetime object in the US/Eastern timezone, convert it to UTC, and then convert it back to US/Eastern. I expected the first and last US/Eastern datetime objects to be identical, but it turned out that the two are printed differently.
What am I missing here?
Code:
from datetime import datetime
import pytz
tz_local = pytz.timezone('US/Eastern')
tz_utc = pytz.utc
datestring = '20210701'
timestring = '04:00:00'
hour, minute, sec = timestring.split(':')
hour, minute, sec = list(map(int, [hour, minute, sec]))
# Make naive datetime object from raw strings
date_naive = datetime.strptime(datestring, '%Y%m%d')
time_naive = date_naive.replace(hour=hour, minute=minute, second=sec)
# Add local timezone information US/Eastern
time_local = time_naive.replace(tzinfo=tz_local)
# Convert to UTC timezone
time_utc = time_local.astimezone(tz_utc)
# Revert to US/Eastern Timezone
time_local_rev = time_utc.astimezone(tz_local)
print(time_local.strftime('%Y-%m-%d %H:%M:%S %Z%z'))
print(time_local_rev.strftime('%Y-%m-%d %H:%M:%S %Z%z'))
Outputs:
2021-07-01 04:00:00 LMT-0456
2021-07-01 04:56:00 EDT-0400
Solution
As @MrFuppes noted, using the .localize method instead of .replace solved the issue, as follows:
# Add local timezone information US/Eastern
time_local = tz_local.localize(time_naive)
Generated
2021-07-01 04:00:00 EDT-0400
2021-07-01 04:00:00 EDT-0400
A: If you can use Python 3.9 or higher, use the built-in zoneinfo library to avoid the "localize-trap". EX:
from datetime import datetime
from zoneinfo import ZoneInfo
tz_local = ZoneInfo('US/Eastern')
tz_utc = ZoneInfo('UTC')
datestring = '20210701'
timestring = '04:00:00'
# make a datetime object and set the time zone with replace:
dt_local = datetime.strptime(datestring+timestring, "%Y%m%d%H:%M:%S").replace(tzinfo=tz_local)
dt_utc = dt_local.astimezone(tz_utc)
print(dt_local)
# 2021-07-01 04:00:00-04:00
print(dt_utc)
# 2021-07-01 08:00:00+00:00
With Python < 3.9, you can also use zoneinfo via backports, or you can use dateutil to handle time zones. It's safe to set tzinfo directly with both; no extra localize step is needed.
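As a minimal stdlib-only illustration of the round trip, using a fixed -04:00 offset in place of a full US/Eastern zone (no DST rules are involved, which is exactly why plain .replace() is safe here, unlike with pytz zone objects):

```python
from datetime import datetime, timedelta, timezone

# Fixed offset standing in for EDT; real zones need zoneinfo or pytz.localize.
eastern_edt = timezone(timedelta(hours=-4), "EDT")

naive = datetime(2021, 7, 1, 4, 0, 0)
local = naive.replace(tzinfo=eastern_edt)   # unambiguous: no DST transitions
utc = local.astimezone(timezone.utc)
round_trip = utc.astimezone(eastern_edt)

print(local.isoformat())       # 2021-07-01T04:00:00-04:00
print(round_trip.isoformat())  # 2021-07-01T04:00:00-04:00
```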
| |
doc_23525634
|
* documents
* document_revision_fields
I need to migrate some of the columns of some of the records (based on the value of one of the columns) inside the document_revision_fields to a new table document_fields.
This is how the new table is being created:
await knex.schema
.createTable('document_fields', (table) => {
table.increments()
table
.text('sid')
.unique()
table.text('applicationId')
table
.foreign('applicationId')
.references('id')
.inTable('applications')
.onDelete('CASCADE')
table.text('documentId')
table
.foreign('documentId')
.references('id')
.inTable('documents')
.onDelete('CASCADE')
table.text('digitalAssetId')
table
.foreign(['digitalAssetId', 'applicationId'])
.references(['id', 'applicationId'])
.inTable('digital_assets')
.onDelete('CASCADE')
table.enu('type', ['text', 'embed', 'slideshow'])
table.text('value')
table.text('copy')
table.text('header')
table.enu('status', ['active', 'inactive', 'deleted'])
table.integer('order')
table.text('meta')
table.timestamp('createdAt').defaultTo(knex.fn.now())
table.timestamp('updatedAt').defaultTo(knex.fn.now())
})
This is how I'm selecting the rows to be migrated
const rows = await knex('document_revision_fields')
.select(
'document_revision_fields.sid',
'document_revision_fields.applicationId',
'document_revision_fields.documentId',
'document_revision_fields.digitalAssetId',
'document_revision_fields.type',
'document_revision_fields.value',
'document_revision_fields.copy',
'document_revision_fields.header',
'document_revision_fields.status',
'document_revision_fields.order',
'document_revision_fields.meta'
)
.join('documents', 'documents.activeRevisionId', 'document_revision_fields.revisionId')
And finally, this is how I'm trying to migrate these rows to the new table
return knex.batchInsert('document_fields', rows)
Everything up until the batchInsert does work. In fact I can console.log the rows variable and I get the expected result.
When I try to batchInsert these rows in the new table, however, I get this error
Failed to migrate schema.
error: new row for relation "document_fields" violates check constraint "document_fields_type_check"
detail: "Failing row contains (52808, Nmw-5qMsV, 06217620-aee9-11e9-9f1a-4bff6f1a2985, f046adc0-3214-11ea-8056-31c443663220, null, paragraph, <p>Here's President Donald Trump's tweet for reference. </p..., null, null, active, 5, {}, 2020-05-21 07:20:24.022705+02, 2020-05-21 07:20:24.022705+02)."
Can you spot any issue at all?
A: From what I can tell, you have set up type to be an enum:
table.enu('type', ['text', 'embed', 'slideshow'])
However, you are trying to insert either null or paragraph, and neither of them are part of the enum.
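One way to handle this is to coerce legacy values into the new enum before calling batchInsert. A hypothetical sketch follows; the mapping paragraph -> text and the fallback value are assumptions, so adjust them to whatever the old values actually mean in your data:

```javascript
const ALLOWED_TYPES = ['text', 'embed', 'slideshow'];
const TYPE_MAP = { paragraph: 'text' }; // assumed mapping for legacy values

function coerceType(row) {
  const mapped = TYPE_MAP[row.type] ?? row.type;
  // Fall back to 'text' rather than violating the check constraint.
  return { ...row, type: ALLOWED_TYPES.includes(mapped) ? mapped : 'text' };
}

// Example rows standing in for the result of the select above:
const rows = [
  { sid: 'a', type: 'paragraph' },
  { sid: 'b', type: null },
  { sid: 'c', type: 'embed' },
];
const fixed = rows.map(coerceType);
console.log(fixed.map((r) => r.type)); // [ 'text', 'text', 'embed' ]
// then: return knex.batchInsert('document_fields', fixed)
```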
| |
doc_23525635
|
"NetworkError: 404 Not Found - http://localhost/drupal/tooltip/css/layout.css"
my files path like this
/* mytheme.info */
stylesheets[all][] = tooltip/css/layout.css
stylesheets[all][] = tooltip/css/typo.css
These files are in my theme folder:
Folder: MYTHEME
File: mytheme.info
Folder: tooltip
Folder: css
File: layout.css
File: typo.css
A: The URL is wrong: localhost/drupal/tooltip/css/layout.css should be localhost/drupal/sites/all/themes/YOUR_THEME_FOLDER/tooltip/css/layout.css.
| |
doc_23525636
|
I can successfully emulate the call using postman. DataSource definition below:
constructor(
injector: Injector,
private _route: ActivatedRoute,
private _router: Router
) {
super(injector);
let authString = "Bearer " + getToken();
this.gridDataSource = {
store: {
type: 'odata',
key: 'Id',
keyType: "Int32",
version: 4,
url: 'http://localhost:21021/odata/Roles'
},
select: [
'Id',
'Name',
'DisplayName'
],
beforeSend: (e) => {
e.headers = {
"Content-Type": "application/json",
"Authorization": authString
}
}
}
}
Postman request image
I'm assuming I have the beforeSend header definition formatted incorrectly; any help is much appreciated.
xhr screenshot
A: The beforeSend function has to be inside the store object.
this.dataSource = {
store: {
type: 'odata',
key: 'Id',
url: 'https://localhost:44350/odata/Books',
version: 4,
withCredentials: true,
beforeSend: (e) => {
e.headers = {
"Authorization": `Bearer ${jwtToken}`
};
}
    }
};
A: Try encoding with btoa
let authString = "Bearer " + window.btoa(getToken()+ ':');
A: I was able to get it working by defining the datasource as follows:
this.gridDataSource = new DataSource({
store: new ODataStore({
url: "http://localhost:21021/odata/Roles",
key: "Id",
keyType: "Int32",
version: 4,
beforeSend: (e) => {
e.headers = {
"Content-Type": "application/json",
"Authorization": 'Bearer ' + abp.auth.getToken(),
};
}
})
})
| |
doc_23525637
|
A: You can refer to my case:

* Machine A (192.168.1.1): CentOS
  * Download and install SonarQube and MySQL at http://192.168.1.1:9000
* Machine B (192.168.1.2): CentOS
  * SonarQube Runner
  * Jenkins at http://192.168.1.2:8080
  * SCM like ClearCase, Git
  * Build tools (JDK, Maven, Ant, ...)
A: There are lots of links on the SonarQube/Jenkins sites describing how to set this up; you can find the details there.
| |
doc_23525638
|
The Event has a start_date property.
I need to get the IDs of the duplicated events that started from 2 months ago until now. A duplicated event is an event that is at the same location at the same hour.
// Create the pipeline
pipeline := []bson.M{
bson.M{
"$match": bson.M{
"start_date": bson.M{"$gt": time.Now().AddDate(0, -2, 0)},
},
},
bson.M{
"$group": bson.M{
"_id": bson.M{
"_location_id": "$_location_id",
"start_date": "$start_date",
},
"docs": bson.M{"$push": "$_id"},
"count": bson.M{"$sum": 1},
},
},
bson.M{
"$match": bson.M{
"count": bson.M{"$gt": 1.0},
},
},
}
Am I missing something ?
I checked in database and I do have events that have a start_date matching my criteria, with that request db.events.find({}).sort({ "start_date": -1}).limit(1); and that one db.events.find({"start_date": { "$gt": ISODate("2019-05-16T00:00:00.0Z")}}).limit(1)
Version : MongoDB shell version v3.4.6
A:
I checked in database and I do have events that have a start_date matching my criteria, with that request
MongoDB stores time in UTC by default, and will convert any local time representations into this form.
This means that if you are in a UTC+2 timezone, your query filter would by default be in local time while the documents in the database are in UTC. You need to convert your time to UTC. For example, say you have the following documents in a collection:
{ "start_date": ISODate("2019-05-27T00:00:00Z"), "location_id": 1 },
{ "start_date": ISODate("2019-05-28T00:00:00Z"), "location_id": 2 },
{ "start_date": ISODate("2019-05-24T00:00:00Z"), "location_id": 1 },
You can perform $match for dates two months ago as below:
pipeline := mongo.Pipeline{
{{"$match", bson.D{
{"start_date", bson.D{
{"$gt", time.Now().AddDate(0, -2, 0).UTC()},
},
},
}}},
}
cursor, err := collection.Aggregate(context.Background(), pipeline)
defer cursor.Close(context.Background())
for cursor.Next(context.Background()) {
var doc bson.M
err := cursor.Decode(&doc)
if err != nil {
log.Fatal(err)
}
fmt.Println(doc)
}
Note the conversion to UTC after the calculation. If today is already the 24th of July 2019, then the query will not match the third document; instead you would need to query with 2 months and 1 day ago.
Another tip is you can debug the date sent to the server by printing the pipeline, i.e:
fmt.Println(pipeline)
// [[{$match [{start_date [{$gt 2019-05-24 05:19:47.382049 +0000 UTC}]}]}]]
If you have a static date value, you can also construct the date (still in UTC) as below example:
filterDate := time.Date(2019, 5, 24, 0, 0, 0, 0, time.UTC)
pipeline := mongo.Pipeline{
{{"$match", bson.D{
{"start_date", bson.D{
{"$gt", filterDate},
},
},
}}},
}
fmt.Println(pipeline)
All of the snippets above are written for MongoDB Go driver v1.0.x.
| |
doc_23525639
|
a14,2T,50,33.26:a14,3T,50,33.26,a14,4,50,33.26,
a17,2T,50,33.26:a17,3T,50,33.26,a17,4,50,33.26,
a4,2T,50,33.26:a4,3T,48,33.26,a4,4,49,33.26,
a3,2T,50,33.26:a3,3T,47,33.26,a3,4,48,33.26,
a9,2T,50,33.26:a9,3T,50,33.26,a9,4,50,33.26,
a2,2T,50,31.48:a2,3T,50,33.26,a2,4,50,33.26,
a12,2T,50,33.26:a12,3T,49,33.26,a12,4,49,33.26,
a10,2T,50,33.26:a10,3T,50,33.26,a10,4,50,33.26,
a11,2T,50,31.48:a11,3T,50,33.26,a11,4,50,33.26,
a8,2T,50,33.26:a8,3T,50,33.26,a8,4,49,33.26,
a16,2T,50,33.26:a16,3T,50,33.26,a16,4,50,33.26,
a6,2T,50,33.26:a6,3T,50,33.26,a6,4,49,33.26,
a13,2T,50,33.26:a13,3T,49,33.26,a13,4,50,33.26,
a5,2T,50,31.48:a5,3T,50,33.26,a5,4,50,33.26,
a15,2T,50,31.48:a15,3T,50,33.26,a15,4,50,33.26,
a7,2T,50,33.26:a7,3T,50,33.26,a7,4,50,33.26,
a1,2T,50,33.26:a1,3T,45,33.26,a1,4,48,33.26,
I want the string to be :
a1,a1,a1,a3,a7,a9,a5,a5,a6,a8,a0,a2
The order doesn't really matter, but I want the ones that are for example a1 to be near each other. Any idea how I can do this? I have no idea how to Google it either, although I tried.
Any help would be much appreciated!
Edit: The string is dynamic
A: Here you go! Change the string to an array, then sort it, and convert back!
<?php
$string = 'a1
a3
a7
a9
a5
a6
a1
a8
a0
a2
a5';
$array = explode("\n", $string);
sort($array);
$string = implode(',',$array);
echo $string;
Which outputs:
a0,a1,a1,a2,a3,a5,a5,a6,a7,a8,a9
Check it out here: https://3v4l.org/rIQ1e
| |
doc_23525640
|
URL normalization (or URL canonicalization) is the process by which URLs are modified and standardized in a consistent manner. The goal of the normalization process is to transform a URL into a normalized or canonical URL so it is possible to determine if two syntactically different URLs are equivalent.
Strategies include adding trailing slashes, https => http, etc. The Wikipedia page lists many.
Got a favorite method of doing this in Java? Perhaps a library (Nutch?), but I'm open. Smaller and fewer dependencies is better.
I'll handcode something for now and keep an eye on this question.
EDIT: I want to aggressively normalize to count URLs as the same if they refer to the same content. For example, I ignore the parameters utm_source, utm_medium, utm_campaign. For example, I ignore subdomain if the title is the same.
A: Have you taken a look at the URI class?
http://docs.oracle.com/javase/7/docs/api/java/net/URI.html#normalize()
A: Because you also want to identify URLs which refer to the same content, I found this paper from the WWW2007 pretty interesting: Do Not Crawl in the DUST: Different URLs with Similar Text. It provides you with a nice theoretical approach.
A: No, there is nothing in the standard libraries to do this. Canonicalization includes things like decoding unnecessarily encoded characters, converting hostnames to lowercase, etc.
e.g. http://ACME.com/./foo%26bar becomes:
http://acme.com/foo&bar
URI's normalize() does not do this.
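A quick check of what normalize() actually covers: it removes "." and ".." path segments, but leaves the host's case and percent-escapes untouched, which is why it is not full canonicalization on its own.

```java
import java.net.URI;

public class NormalizeDemo {
    public static void main(String[] args) {
        // normalize() collapses "." and ".." path segments only:
        URI u = URI.create("http://ACME.com/a/./b/../foo%26bar").normalize();
        System.out.println(u); // http://ACME.com/a/foo%26bar  (host case and %26 kept)
    }
}
```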
A: The RL library:
https://github.com/backchatio/rl
goes quite a ways beyond java.net.URL.normalize().
It's in Scala, but I imagine it should be useable from Java.
A: I found this question last night, but there wasn't the answer I was looking for, so I made my own. Here it is in case somebody in the future wants it:
import java.io.UnsupportedEncodingException;
import java.net.MalformedURLException;
import java.net.URI;
import java.net.URISyntaxException;
import java.net.URL;
import java.net.URLDecoder;
import java.net.URLEncoder;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;
/**
* - Covert the scheme and host to lowercase (done by java.net.URL)
* - Normalize the path (done by java.net.URI)
* - Add the port number.
* - Remove the fragment (the part after the #).
* - Remove trailing slash.
* - Sort the query string params.
* - Remove some query string params like "utm_*" and "*session*".
*/
public class NormalizeURL
{
public static String normalize(final String taintedURL) throws MalformedURLException
{
final URL url;
try
{
url = new URI(taintedURL).normalize().toURL();
}
catch (URISyntaxException e) {
throw new MalformedURLException(e.getMessage());
}
        final String path = url.getPath().replaceAll("/$", ""); // replaceAll: String.replace() does not take a regex
final SortedMap<String, String> params = createParameterMap(url.getQuery());
final int port = url.getPort();
final String queryString;
if (params != null)
{
// Some params are only relevant for user tracking, so remove the most commons ones.
for (Iterator<String> i = params.keySet().iterator(); i.hasNext();)
{
final String key = i.next();
if (key.startsWith("utm_") || key.contains("session"))
{
i.remove();
}
}
queryString = "?" + canonicalize(params);
}
else
{
queryString = "";
}
return url.getProtocol() + "://" + url.getHost()
+ (port != -1 && port != 80 ? ":" + port : "")
+ path + queryString;
}
/**
* Takes a query string, separates the constituent name-value pairs, and
* stores them in a SortedMap ordered by lexicographical order.
* @return Null if there is no query string.
*/
private static SortedMap<String, String> createParameterMap(final String queryString)
{
if (queryString == null || queryString.isEmpty())
{
return null;
}
final String[] pairs = queryString.split("&");
final Map<String, String> params = new HashMap<String, String>(pairs.length);
for (final String pair : pairs)
{
if (pair.length() < 1)
{
continue;
}
String[] tokens = pair.split("=", 2);
for (int j = 0; j < tokens.length; j++)
{
try
{
tokens[j] = URLDecoder.decode(tokens[j], "UTF-8");
}
catch (UnsupportedEncodingException ex)
{
ex.printStackTrace();
}
}
switch (tokens.length)
{
case 1:
{
if (pair.charAt(0) == '=')
{
params.put("", tokens[0]);
}
else
{
params.put(tokens[0], "");
}
break;
}
case 2:
{
params.put(tokens[0], tokens[1]);
break;
}
}
}
return new TreeMap<String, String>(params);
}
/**
* Canonicalize the query string.
*
* @param sortedParamMap Parameter name-value pairs in lexicographical order.
* @return Canonical form of query string.
*/
private static String canonicalize(final SortedMap<String, String> sortedParamMap)
{
if (sortedParamMap == null || sortedParamMap.isEmpty())
{
return "";
}
final StringBuffer sb = new StringBuffer(350);
final Iterator<Map.Entry<String, String>> iter = sortedParamMap.entrySet().iterator();
while (iter.hasNext())
{
final Map.Entry<String, String> pair = iter.next();
sb.append(percentEncodeRfc3986(pair.getKey()));
sb.append('=');
sb.append(percentEncodeRfc3986(pair.getValue()));
if (iter.hasNext())
{
sb.append('&');
}
}
return sb.toString();
}
/**
* Percent-encode values according the RFC 3986. The built-in Java URLEncoder does not encode
* according to the RFC, so we make the extra replacements.
*
* @param string Decoded string.
* @return Encoded string per RFC 3986.
*/
private static String percentEncodeRfc3986(final String string)
{
try
{
return URLEncoder.encode(string, "UTF-8").replace("+", "%20").replace("*", "%2A").replace("%7E", "~");
}
catch (UnsupportedEncodingException e)
{
return string;
}
}
}
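As a quick sanity check, the RFC 3986 helper above can be exercised on its own. This is a minimal, self-contained sketch (the class name is made up); note how space, `*` and `~` differ from plain URLEncoder output:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

class Rfc3986Demo {
    // Same logic as percentEncodeRfc3986 above: URLEncoder gets us most of
    // the way, then we patch the three characters it handles differently.
    public static String percentEncodeRfc3986(final String string) {
        try {
            return URLEncoder.encode(string, "UTF-8")
                    .replace("+", "%20")   // RFC 3986 encodes space as %20, not +
                    .replace("*", "%2A")   // * must be escaped
                    .replace("%7E", "~");  // ~ is unreserved and stays literal
        } catch (UnsupportedEncodingException e) {
            return string;
        }
    }

    public static void main(String[] args) {
        // URLEncoder alone would produce "a+b*c%7Ed"
        System.out.println(percentEncodeRfc3986("a b*c~d")); // a%20b%2Ac~d
    }
}
```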
A: You can do this with the Restlet framework using Reference.normalize(). You should also be able to remove the elements you don't need quite conveniently with this class.
A: In Java, normalize parts of a URL
Example of a URL: https://i0.wp.com:55/lplresearch.com/wp-content/feb.png?ssl=1&myvar=2#myfragment
protocol: https
domain name: i0.wp.com
subdomain: i0
port: 55
path: /lplresearch.com/wp-content/feb.png
query: ?ssl=1
parameters: &myvar=2
fragment: #myfragment
Code to do the URL parsing:
import java.util.*;
import java.util.regex.*;
public class regex {
public static String getProtocol(String the_url){
Pattern p = Pattern.compile("^(http|https|smtp|ftp|file|pop)://.*");
Matcher m = p.matcher(the_url);
// matches() must run before group(), otherwise group() throws IllegalStateException
return m.matches() ? m.group(1) : null;
}
public static String getParameters(String the_url){
Pattern p = Pattern.compile(".*(\\?[-a-zA-Z0-9_.@!$&''()*+,;=]+)(#.*)*$");
Matcher m = p.matcher(the_url);
return m.matches() ? m.group(1) : null;
}
public static String getFragment(String the_url){
Pattern p = Pattern.compile(".*(#.*)$");
Matcher m = p.matcher(the_url);
return m.matches() ? m.group(1) : null;
}
public static void main(String[] args){
String the_url =
"https://i0.wp.com:55/lplresearch.com/" +
"wp-content/feb.png?ssl=1&myvar=2#myfragment";
System.out.println(getProtocol(the_url));
System.out.println(getFragment(the_url));
System.out.println(getParameters(the_url));
}
}
Prints
https
#myfragment
?ssl=1&myvar=2
You can then push and pull on the parts of the URL until they are up to muster.
A: I have a simple way to solve it. Here is my code:
public static String normalizeURL(String oldLink)
{
    int pos = oldLink.indexOf("://");
    if (pos < 0) {
        return oldLink; // no "://" found, nothing to normalize
    }
    return "http" + oldLink.substring(pos);
}
| |
doc_23525641
|
var time = new Date().getHours();
if(time == 11) {
alert("this works");
}
but that only detects the user's local time. How can I check Pacific time specifically?
A: Use a combination of getTimezoneOffset() and the PST offset (-7, currently Pacific Daylight Time)!
var offset = new Date().getTimezoneOffset() / 60;
var localHour = new Date().getHours();
var PSTHour = localHour + offset - 7;
PSTHour = ((PSTHour % 24) + 24) % 24; // wrap into the 0-23 range
console.log('PST hour: ' + PSTHour);
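Hardcoding -7 breaks twice a year when daylight saving flips. A sketch of a more robust alternative (assumes a runtime with full Intl time zone data, such as a modern browser or Node) is to let the engine resolve America/Los_Angeles itself:

```javascript
// Let the runtime handle PST/PDT instead of hardcoding the offset.
function pacificHour(date = new Date()) {
  const fmt = new Intl.DateTimeFormat("en-US", {
    timeZone: "America/Los_Angeles", // IANA zone tracks the PST/PDT switch
    hour: "numeric",
    hourCycle: "h23",                // hours as 0-23
  });
  return Number(fmt.format(date));
}

console.log("Pacific hour: " + pacificHour());
```

With this, the original check becomes `if (pacificHour() === 11) { ... }` and keeps working across DST transitions.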
| |
doc_23525642
|
"require": {
"components/jquery": "1.9.*"
},
This has created a components folder with jquery in it.
My question is:
*Can I specify where jquery is downloaded to? I'd prefer it to go in public/src.
*The above downloads lots of things like require-built, require.js, jquery-migrate. Is there a way to specify just downloading jquery?
A: *That components/jquery package uses RobLoach/component-installer to put the files somewhere useful, and the README file has explanations of how to affect this: https://github.com/RobLoach/component-installer
*No, if you want to use this package for jquery, these are dependencies, even if you don't need them.
You should probably look into "Bower" or "Component", which are package manager for frontend dependencies like Javascripts.
| |
doc_23525643
|
I have the following interface:
public interface MyInterface{
/**
* Some contract
**/
public String convert(String arg);
}
Now I have a bunch of MyInterface implementations: MyInterface1Impl, MyInterface2Impl, MyInterface3Impl, etc...
The issue is that I want to test the general contract defined in JavaDoc as well as details of a specific implementation. An I'd write something like this:
public class MyInterfacecontractTestSuite{
@Test(dataProvider="someDataProvider")
public void testContract(MyInterface mi, String arg){
//test
}
}
and, for instance
public class MyInterface1ImplTest{
//test cases for the implementation MyInterface1Impl
}
But it looks a little bit messy in that the cases for MyInterface1Impl are not all in the MyInterface1ImplTest class which might be confusing for a bit.
But putting a duplicate test case for the general contract in every implementation test class is weird as well.
So, I'm not qiute sure if I'm on the right way. Maybe there's some better solution for testing such contracts.
A: Why not having something like:
public abstract class MyInterfacecontractTestSuite {
@Test(dataProvider="someDataProvider")
public void testContract(String arg) {
//test against getMi()
}
public abstract MyInterface getMi();
}
public class MyInterface1ImplTest extends MyInterfacecontractTestSuite {
private MyInterface1Impl mi; // init somewhere
public MyInterface getMi() {
return mi;
}
//test cases for the implementation MyInterface1Impl
}
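A runnable toy version of this pattern may make it concrete. Everything here is made up for illustration: the "convert never returns null" contract, the upper-case implementation, and the class names; in a real suite the contract test would be a TestNG/JUnit method instead of a hand-rolled check.

```java
// The interface under test, as in the question.
interface MyInterface {
    String convert(String arg);
}

// The abstract base owns the contract test, written once.
abstract class MyInterfaceContractTest {
    // Each implementation's test class supplies its instance here.
    abstract MyInterface getMi();

    // Hypothetical contract: convert must never return null.
    void testConvertNeverReturnsNull() {
        if (getMi().convert("abc") == null) {
            throw new AssertionError("contract violated: convert returned null");
        }
    }
}

// One concrete test class per implementation; it inherits the contract
// tests and adds its own implementation-specific cases alongside.
class UpperCaseImplTest extends MyInterfaceContractTest {
    @Override
    MyInterface getMi() {
        return arg -> arg.toUpperCase(); // toy implementation
    }

    public static void main(String[] args) {
        UpperCaseImplTest t = new UpperCaseImplTest();
        t.testConvertNeverReturnsNull(); // inherited contract test
        System.out.println("contract holds: convert(\"abc\") = " + t.getMi().convert("abc"));
    }
}
```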
| |
doc_23525644
|
url: http://localhost:15672/api/policies/vhost123/DLX
body:
{
"pattern":".*",
"definition": {
"dead-letter-exchange":"DLX123"
},
"priority":0,
"apply-to": "all"
}
Is there any way to do this within the C# driver?
A: There doesn't appear to be a way to do this through the C# interface but you can do it directly with a call to the API. Here is my solution:
internal static void DeleteDLXPolicyOnVhost()
{
const string policyURL = "http://localhost:15672/api/policies/vhost123/DLX";
using (var client = new HttpClient())
{
var byteArray = Encoding.ASCII.GetBytes("username123:password123");
client.DefaultRequestHeaders.Authorization = new System.Net.Http.Headers.AuthenticationHeaderValue("Basic", Convert.ToBase64String(byteArray));
var response = client.DeleteAsync(policyURL).Result;
switch (response.StatusCode)
{
case HttpStatusCode.NoContent:
Console.WriteLine("Old DLX policy successfully deleted");
break;
case HttpStatusCode.NotFound:
Console.WriteLine("DLX policy was not found");
break;
default:
{
var content = response.Content;
throw new Exception(string.Format("Unhandled API response code of {0}, content: {1}", response.StatusCode, content));
}
}
}
}
| |
doc_23525645
|
A: NFC reading and writing logic should be independent of what you use to construct the view layer of your app – whether that's UIKit or SwiftUI doesn't matter.
Check out the documentation for the Core NFC framework, which allows for reading and writing NFC tags, and check other iOS NFC questions/answers on Stack Overflow, as this question has been asked before.
The Core NFC framework presents a native system interface to read/write to NFC tags on your behalf, without requiring you to implement any UI specifically.
iOS 11 Core NFC - any sample code?
Apple also have a sample project and somewhat of a guide: https://developer.apple.com/documentation/corenfc/building_an_nfc_tag-reader_app
If you structured your app such that each view has a view model class containing logic relating to that view, you might choose to start the appropriate NFC reader/writer session in a button press handler function or perhaps when your view appears. You should be able to implement the appropriate delegate callback methods in your view model class.
| |
doc_23525646
|
A: As long as you're on Windows 7+
DisplaySwitch.exe /clone
will duplicate displays.
This will extend displays:
DisplaySwitch.exe /extend
Hope this helps.
Also you can use Win+P if you want a quick shortcut.
A: There is no general solution for this using batch files. However, Nvidia drivers do provide an option for this, and other manufacturers may have similar options; I would not know, because all machines I work with have Nvidia cards.
The documentation can be found here. As per the Nvidia documentation, setting dual monitor mode would be:
rundll32.exe NvCpl.dll,dtcfg setview 1 dualview AA DA
And setting both views the same would be:
rundll32.exe NvCpl.dll,dtcfg setview 2 clone AA DA
| |
doc_23525647
|
adapter = new SimpleAdapter(
this, list_data, R.layout.list_item_detail,
new String[]{"title","desc","icon"},
new int[]{R.id.title, R.id.desc, R.id.icon}
);
listview.setAdapter(adapter);
private List<Map<String, Object>> list_data_add(String title, String desc, Bitmap icon) {
List<Map<String, Object>> list = new ArrayList<Map<String, Object>>();
Map<String, Object> map;
map = new HashMap<String, Object>();
map.put("title", title);
map.put("desc", desc);
map.put("icon", icon);
list.add(map);
return list;
}
Hi, the icon is of Bitmap type, but this way it doesn't show any image in the ListView. However, if I change icon to an int type, set icon = R.drawable.icon_folder, and pass that to list_data_add to create the HashMap, it does show an Android drawable resource image in the ListView.
So, could anyone help me solve this? Thanks!
A: OK, I got it: SimpleAdapter does not accept Bitmap values; creating a BaseAdapter instead works fine.
| |
doc_23525648
|
Here I have created a client using the command from the command prompt, and now I am trying to create a SOAP message. I am very new to this concept and unable to find the correct way; does anyone have an idea regarding this?
A: Sample code to create a SOAP request:
using System;
using System.IO;
using System.Net;
using System.Xml;
namespace UsingSOAPRequest
{
public class Program
{
static void Main(string[] args)
{
//creating object of program class to access methods
Program obj = new Program();
Console.WriteLine("Please Enter Input values..");
//Reading input values from console
int a = Convert.ToInt32(Console.ReadLine());
int b = Convert.ToInt32(Console.ReadLine());
//Calling InvokeService method
obj.InvokeService(a, b);
}
public void InvokeService(int a, int b)
{
//Calling CreateSOAPWebRequest method
HttpWebRequest request = CreateSOAPWebRequest();
XmlDocument SOAPReqBody = new XmlDocument();
//SOAP Body Request
SOAPReqBody.LoadXml(@"<?xml version=""1.0"" encoding=""utf-8""?>
<soap:Envelope xmlns:soap=""http://schemas.xmlsoap.org/soap/envelope/"" xmlns:xsi=""http://www.w3.org/2001/XMLSchema-instance"" xmlns:xsd=""http://www.w3.org/2001/XMLSchema"">
<soap:Body>
<Addition xmlns=""http://tempuri.org/"">
<a>" + a + @"</a>
<b>" + b + @"</b>
</Addition>
</soap:Body>
</soap:Envelope>");
using (Stream stream = request.GetRequestStream())
{
SOAPReqBody.Save(stream);
}
//Getting response from request
using (WebResponse Serviceres = request.GetResponse())
{
using (StreamReader rd = new StreamReader(Serviceres.GetResponseStream()))
{
//reading stream
var ServiceResult = rd.ReadToEnd();
//writing stream result on console
Console.WriteLine(ServiceResult);
Console.ReadLine();
}
}
}
public HttpWebRequest CreateSOAPWebRequest()
{
//Making Web Request
HttpWebRequest Req = (HttpWebRequest)WebRequest.Create(@"http://localhost/Employee.asmx");
//SOAPAction
Req.Headers.Add(@"SOAPAction:http://tempuri.org/Addition");
//Content_type
Req.ContentType = "text/xml;charset=\"utf-8\"";
Req.Accept = "text/xml";
//HTTP method
Req.Method = "POST";
//return HttpWebRequest
return Req;
}
}
}
| |
doc_23525649
|
device_name = tf.test.gpu_device_name()
if not device_name:
raise SystemError('GPU device not found')
print('Found GPU at: {}'.format(device_name))
it returns Found GPU at: /device:GPU:0 followed by Metal device set to: Apple M1 Max.
Furthermore, the profiler shows that I'm only using the host:
TF Op Placement (it is generally desired to have more ops on device):
Host: 100.0%
Device: 0.0%
Is TensorFlow not using the GPU during the training even if it can detect it? How can I fix this issue and use the GPU?
| |
doc_23525650
|
I can do this; what I can't do is make the new movieclip selectable and draggable.
This is how I create the new movieclip
new_btn.addEventListener(MouseEvent.CLICK, newMc);
function newMc (event:MouseEvent):void {
var mc:MovieClip = new MovieClip();
mc.graphics.beginFill(0xFF0000);
mc.graphics.drawRect(0, 0, 660, 590);
mc.graphics.endFill();
mc.x = 15;
mc.y = 15;
workArea_mc.addChild(mc);
}
How do I make the new movieclip selectable and draggable?
A: First, add an event listener to the MovieClip that you want to drag, and in the listener call the startDrag() function of the MovieClip class.
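A minimal sketch of that listener approach, applied to the mc created in newMc above (event names are from the standard flash.events API; stopping the drag on MOUSE_UP is an assumption about the desired behaviour):

```actionscript
// Make each new clip draggable: press starts the drag, release stops it.
mc.addEventListener(MouseEvent.MOUSE_DOWN, function(e:MouseEvent):void {
    MovieClip(e.currentTarget).startDrag();
});
mc.addEventListener(MouseEvent.MOUSE_UP, function(e:MouseEvent):void {
    MovieClip(e.currentTarget).stopDrag();
});
```

Add these listeners inside newMc, before workArea_mc.addChild(mc).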
| |
doc_23525651
|
(base) C:\Users\LENOVO>pyspark
usage: jupyter [-h] [--version] [--config-dir] [--data-dir] [--runtime-dir] [--paths] [--json] [subcommand]
jupyter: error: one of the arguments --version subcommand --config-dir --data-dir --runtime-dir --paths is required
when i execute spark-shell, it is working fine
A: There was an issue with the installation: PySpark did not yet support Python 3.8, so I installed a previous version of Python.
| |
doc_23525652
|
Here is a link to a mocked up version of my site, as the visual will help best convey the issue I am seeing - https://pub.s7.exacttarget.com/c1i011a1jlr. This issue is occurring on Slides 2 and 3.
Thank you,
Allison
A: $('#PGECarousel').on('slide.bs.carousel', function(e) {
if (e.from < 2) {
$('.carousel-item').addClass('noAnim');
}
}).on('slid.bs.carousel', function(e) {
$('.carousel-item').removeClass('noAnim');
})
will, most likely, fix your issue. You also need this in CSS:
.carousel-item.noAnim {
transition-duration: 0s;
}
On the side, I'd also add these to your styles:
h2:focus {
border: none;
padding: 0;
}
.slide-container {
min-height: 383px;
display: flex;
flex-direction: column;
box-sizing: border-box;
}
.carousel-item {
transition-timing-function: cubic-bezier(.5,0,.2,1);
}
@media (max-width: 767px) {
.slide-container {
min-height: 0;
}
}
Bluntly put, you shouldn't use Bootstrap carousel. Any popular one out there requires a lot less and delivers a lot more (Slick, Flexslider, Owl).
As for the extra CSS, I'd say the padding:0 on h2:focus is a must, the rest are more or less details.
Note: I haven't tested it on IE or Edge (I don't have them on this system), but the above disables CSS transition for the slides that caused the trembling.
| |
doc_23525653
|
I tried with this:
try (CallableStatement callableStatement =
dbConn.prepareCall("{ CALL prueba1(?,?) }")) {
callableStatement.setString(1,mun);
callableStatement.setString(2,edo);
callableStatement.execute();
callableStatement.close();
}
And with this...
try (Connection conn = DBConnection.createConnection(); PreparedStatement pstmt = conn.prepareStatement("{call prueba1(?,?)}")) {
pstmt.setString(1,mun);
pstmt.setString(2,edo);
ResultSet rs = pstmt.executeQuery();
} catch (SQLException e) {
System.out.println(e.getMessage());
}
But it's still not working... Any help would be great.
A: In your prepared statement try switching:
ResultSet rs = pstmt.executeQuery();
With:
pstmt.executeUpdate();
Since your stored procedure actually returns void, there is no result set to return; the call is effectively a directive, like an insert or update, with no return value.
| |
doc_23525654
|
import java.util.Random;
/*
* To change this license header, choose License Headers in Project Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
/**
*
* @author yolk3
*/
public class clcikerUI extends javax.swing.JFrame {
/**
* Creates new form clcikerUI
*/
public clcikerUI() {
initComponents();
}
/**
* This method is called from within the constructor to initialize the form.
* WARNING: Do NOT modify this code. The content of this method is always
* regenerated by the Form Editor.
*/
@SuppressWarnings("unchecked")
// <editor-fold defaultstate="collapsed" desc="Generated Code">
private void initComponents() {
textArea1 = new java.awt.TextArea();
jPanel1 = new javax.swing.JPanel();
jLabel1 = new javax.swing.JLabel();
jButton1 = new javax.swing.JButton();
jLabel2 = new javax.swing.JLabel();
jTextField1 = new javax.swing.JTextField();
jButton2 = new javax.swing.JButton();
jTextField2 = new javax.swing.JTextField();
jLabel3 = new javax.swing.JLabel();
jLabel4 = new javax.swing.JLabel();
setDefaultCloseOperation(javax.swing.WindowConstants.EXIT_ON_CLOSE);
jPanel1.setBorder(javax.swing.BorderFactory.createTitledBorder("David Clicker"));
jLabel1.setHorizontalAlignment(javax.swing.SwingConstants.CENTER);
jLabel1.setIcon(new javax.swing.ImageIcon(getClass().getResource("/newpackage/david (2).JPG"))); // NOI18N
jButton1.setText("Click");
jButton1.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jButton1ActionPerformed(evt);
}
});
jLabel2.setFont(new java.awt.Font("Tahoma", 0, 24)); // NOI18N
jLabel2.setText("Click Count");
jTextField1.setEditable(false);
jTextField1.setFont(new java.awt.Font("Tahoma", 0, 48)); // NOI18N
jTextField1.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jTextField1ActionPerformed(evt);
}
});
jButton2.setText("Exit");
jButton2.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jButton2ActionPerformed(evt);
}
});
jTextField2.setEditable(false);
jTextField2.setFont(new java.awt.Font("Tahoma", 0, 18)); // NOI18N
jTextField2.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jTextField2ActionPerformed(evt);
}
});
jLabel4.setFont(new java.awt.Font("Tahoma", 0, 36)); // NOI18N
jLabel4.setHorizontalAlignment(javax.swing.SwingConstants.CENTER);
jLabel4.setText("David Clicker 2016");
javax.swing.GroupLayout jPanel1Layout = new javax.swing.GroupLayout(jPanel1);
jPanel1.setLayout(jPanel1Layout);
jPanel1Layout.setHorizontalGroup(
jPanel1Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addComponent(jLabel4, javax.swing.GroupLayout.Alignment.TRAILING, javax.swing.GroupLayout.DEFAULT_SIZE, javax.swing.GroupLayout.DEFAULT_SIZE, Short.MAX_VALUE)
.addGroup(jPanel1Layout.createSequentialGroup()
.addContainerGap(javax.swing.GroupLayout.DEFAULT_SIZE, Short.MAX_VALUE)
.addGroup(jPanel1Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.TRAILING)
.addGroup(jPanel1Layout.createSequentialGroup()
.addGroup(jPanel1Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addComponent(jLabel3)
.addGroup(jPanel1Layout.createSequentialGroup()
.addGroup(jPanel1Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING, false)
.addComponent(jTextField1)
.addComponent(jLabel2, javax.swing.GroupLayout.DEFAULT_SIZE, 194, Short.MAX_VALUE))
.addGap(51, 51, 51)
.addComponent(jLabel1, javax.swing.GroupLayout.PREFERRED_SIZE, 530, javax.swing.GroupLayout.PREFERRED_SIZE)))
.addGap(63, 63, 63)
.addComponent(jTextField2, javax.swing.GroupLayout.PREFERRED_SIZE, 233, javax.swing.GroupLayout.PREFERRED_SIZE)
.addGap(12, 12, 12))
.addComponent(jButton2, javax.swing.GroupLayout.PREFERRED_SIZE, 93, javax.swing.GroupLayout.PREFERRED_SIZE)))
.addGroup(jPanel1Layout.createSequentialGroup()
.addGap(401, 401, 401)
.addComponent(jButton1, javax.swing.GroupLayout.PREFERRED_SIZE, 239, javax.swing.GroupLayout.PREFERRED_SIZE)
.addGap(0, 0, Short.MAX_VALUE))
);
jPanel1Layout.setVerticalGroup(
jPanel1Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(jPanel1Layout.createSequentialGroup()
.addContainerGap()
.addComponent(jLabel4)
.addGap(48, 48, 48)
.addGroup(jPanel1Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.TRAILING)
.addGroup(jPanel1Layout.createSequentialGroup()
.addGap(83, 83, 83)
.addComponent(jTextField2, javax.swing.GroupLayout.PREFERRED_SIZE, 218, javax.swing.GroupLayout.PREFERRED_SIZE)
.addPreferredGap(javax.swing.LayoutStyle.ComponentPlacement.RELATED, javax.swing.GroupLayout.DEFAULT_SIZE, Short.MAX_VALUE)
.addComponent(jButton2, javax.swing.GroupLayout.PREFERRED_SIZE, 45, javax.swing.GroupLayout.PREFERRED_SIZE)
.addContainerGap())
.addGroup(jPanel1Layout.createSequentialGroup()
.addComponent(jLabel3)
.addGroup(jPanel1Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(jPanel1Layout.createSequentialGroup()
.addGap(88, 88, 88)
.addComponent(jLabel2, javax.swing.GroupLayout.PREFERRED_SIZE, 67, javax.swing.GroupLayout.PREFERRED_SIZE)
.addGap(18, 18, 18)
.addComponent(jTextField1, javax.swing.GroupLayout.PREFERRED_SIZE, 60, javax.swing.GroupLayout.PREFERRED_SIZE))
.addGroup(jPanel1Layout.createSequentialGroup()
.addGap(8, 8, 8)
.addComponent(jLabel1, javax.swing.GroupLayout.PREFERRED_SIZE, 651, javax.swing.GroupLayout.PREFERRED_SIZE)))
.addPreferredGap(javax.swing.LayoutStyle.ComponentPlacement.RELATED)
.addComponent(jButton1, javax.swing.GroupLayout.PREFERRED_SIZE, 85, javax.swing.GroupLayout.PREFERRED_SIZE)
.addGap(0, 221, Short.MAX_VALUE))))
);
javax.swing.GroupLayout layout = new javax.swing.GroupLayout(getContentPane());
getContentPane().setLayout(layout);
layout.setHorizontalGroup(
layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(layout.createSequentialGroup()
.addContainerGap()
.addComponent(jPanel1, javax.swing.GroupLayout.DEFAULT_SIZE, javax.swing.GroupLayout.DEFAULT_SIZE, Short.MAX_VALUE)
.addContainerGap())
);
layout.setVerticalGroup(
layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(layout.createSequentialGroup()
.addContainerGap()
.addComponent(jPanel1, javax.swing.GroupLayout.DEFAULT_SIZE, javax.swing.GroupLayout.DEFAULT_SIZE, Short.MAX_VALUE))
);
pack();
}// </editor-fold>
private void jButton1ActionPerformed(java.awt.event.ActionEvent evt) {
clickNum++;
jTextField1.setText(Integer.toString(clickNum));
Random rand = new Random();
jTextField2.setText(array[rand.nextInt(array.length)]); // nextInt(array.length-1) would never pick the last entry
count++;
if(count>10&&count<20){
jLabel1.setIcon(new javax.swing.ImageIcon(getClass().getResource("/newpackage1/david2 (2).JPG")));
}
else if(count>20&&count<30){
jLabel1.setIcon(new javax.swing.ImageIcon(getClass().getResource("/newpackage2/WIN_20160205_10_17_02_Pro - Copy.JPG")));
}
else if(count>30){
jLabel1.setIcon(new javax.swing.ImageIcon(getClass().getResource("/newpackage/david (2).JPG")));
count=0;
}
}
private int count = 0;
private int clickNum;
private String []array = {"David Approves","MLG David", "Easy Sevens", "Notice Me Senpai", "Please Mark, No", "Ib 45", "Eazy 7s", /*8*/"Help Me, David, Help Me!!!", "Deez Coding Skeels", "Daniel is Terrible at League", "CSGO",/*12*/ "Subscribe to Big Little Tuna", "Subsribe to Dat Fish", "Please David, Notice me", "Master Juang!!!" };
private void jButton2ActionPerformed(java.awt.event.ActionEvent evt) {
System.exit(0);
}
private void jTextField1ActionPerformed(java.awt.event.ActionEvent evt) {
}
private void jTextField2ActionPerformed(java.awt.event.ActionEvent evt) {
}
/**
* @param args the command line arguments
*/
public static void main(String args[]) {
/* Set the Nimbus look and feel */
//<editor-fold defaultstate="collapsed" desc=" Look and feel setting code (optional) ">
/* If Nimbus (introduced in Java SE 6) is not available, stay with the default look and feel.
* For details see http://download.oracle.com/javase/tutorial/uiswing/lookandfeel/plaf.html
*/
try {
for (javax.swing.UIManager.LookAndFeelInfo info : javax.swing.UIManager.getInstalledLookAndFeels()) {
if ("Nimbus".equals(info.getName())) {
javax.swing.UIManager.setLookAndFeel(info.getClassName());
break;
}
}
} catch (ClassNotFoundException ex) {
java.util.logging.Logger.getLogger(clcikerUI.class.getName()).log(java.util.logging.Level.SEVERE, null, ex);
} catch (InstantiationException ex) {
java.util.logging.Logger.getLogger(clcikerUI.class.getName()).log(java.util.logging.Level.SEVERE, null, ex);
} catch (IllegalAccessException ex) {
java.util.logging.Logger.getLogger(clcikerUI.class.getName()).log(java.util.logging.Level.SEVERE, null, ex);
} catch (javax.swing.UnsupportedLookAndFeelException ex) {
java.util.logging.Logger.getLogger(clcikerUI.class.getName()).log(java.util.logging.Level.SEVERE, null, ex);
}
//</editor-fold>
/* Create and display the form */
java.awt.EventQueue.invokeLater(new Runnable() {
public void run() {
new clcikerUI().setVisible(true);
}
});
}
// Variables declaration - do not modify
private javax.swing.JButton jButton1;
private javax.swing.JButton jButton2;
private javax.swing.JLabel jLabel1;
private javax.swing.JLabel jLabel2;
private javax.swing.JLabel jLabel3;
private javax.swing.JLabel jLabel4;
private javax.swing.JPanel jPanel1;
private javax.swing.JTextField jTextField1;
private javax.swing.JTextField jTextField2;
private java.awt.TextArea textArea1;
// End of variables declaration
}
| |
doc_23525655
|
I wanted to capture the trailing "555" in the request URL.
I tried using the logic below (Python) in Lambda, but unfortunately I cannot capture the "555" part since queryStringParameters is null:
productid=list(event['queryStringParameters'].keys())[0]
The event doesn't have any trace of the product id:
{
"resource": "/list",
"path": "/list",
"httpMethod": "GET",
"headers": null,
"multiValueHeaders": null,
"queryStringParameters": null,
"multiValueQueryStringParameters": null,
"pathParameters": null,
"stageVariables": null,
"requestContext": {
"resourceId": "aazo5x",
"resourcePath": "/list",
"httpMethod": "GET",
"extendedRequestId": "brfc9EVHvHcF5WQ=",
"requestTime": "16/Nov/2022:05:46:07 +0000",
"path": "/list",
"accountId": "1234",
"protocol": "HTTP/1.1",
"stage": "test-invoke-stage",
"domainPrefix": "testPrefix",
"requestTimeEpoch": 1668577567573,
"requestId": "17da8762-6369-4f54-9c67-dddd68795",
"identity": {
"cognitoIdentityPoolId": null,
"cognitoIdentityId": null,
"apiKey": "test-invoke-api-key",
"principalOrgId": null,
"cognitoAuthenticationType": null,
"userArn": "arn:aws:iam::12345:user/12345",
"apiKeyId": "test-invoke-api-key-id",
"userAgent": "aws-internal/3 aws-sdk-java/1.12.302 Linux/5.10.144-111.639.amzn2int.x86_64 OpenJDK_64-Bit_Server_VM/25.352-b08 java/1.8.0_352 vendor/Oracle_Corporation cfg/retry-mode/standard",
"accountId": "12345",
"caller": "AIDAYQECU",
"sourceIp": "test-invoke-source-ip",
"accessKey": "AIDAYQECU",
"cognitoAuthenticationProvider": null,
"user": "AADSWSCCO3L4N"
},
"domainName": "testPrefix.testDomainName",
"apiId": "31u00z1fz9"
},
"body": null,
"isBase64Encoded": false
}
Is it possible to capture just the trailing number after the ? (i.e. "555")
without using the ?key=value query string format?
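For what it's worth, with the REST (v1) proxy payload shown above, invoking the deployed stage with /list?555 should arrive as queryStringParameters: {"555": ""} — the null in the event above comes from the console's test invoke, which sent no query string at all. A hedged sketch of extracting such a bare token (the function name is made up, and the event shape is assumed from the payload in the question):

```python
def extract_product_id(event):
    """Return the bare query token (e.g. '555' from /list?555), or None.

    Assumes the API Gateway REST (v1) proxy event shape shown above,
    where a bare '?555' arrives as {'555': ''}.
    """
    params = event.get("queryStringParameters") or {}
    for key, value in params.items():
        if value == "":  # a key with no '=value' part is the bare token
            return key
    return None


print(extract_product_id({"queryStringParameters": {"555": ""}}))  # 555
print(extract_product_id({"queryStringParameters": None}))         # None
```

A path parameter (/list/{productid}, read via event['pathParameters']['productid']) would be the more conventional API Gateway design for this.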
| |
doc_23525656
|
From other questions here, I've gotten up to using Runtime.exec(cmdArray, null, workingDirectory); but I keep getting "CreateProcess error=2, The system cannot find the file specified". I've checked, and both the path and file exist, so I don't know what is going wrong. Here is the code I'm using.
String [] fileName = {"mp3wrap.exe", "Clear_10", "*.mp3"};
String dirName = "E:\\Music\\New Folder\\zz Concatinate\\Clear_10";
try {
Runtime rt = Runtime.getRuntime();
Process pr = rt.exec(fileName, null, new File(dirName));
BufferedReader input = new BufferedReader(new InputStreamReader
(pr.getInputStream()));
String line = null;
while ((line = input.readLine()) != null) {
System.out.println(line);
}//end while
int exitVal = pr.waitFor();
System.out.println("Exited with error code " + exitVal);
}//end try
catch (Exception e) {
System.out.println(e.toString());
e.printStackTrace();
}//end catch`
I'm getting this error:
java.io.IOException: Cannot run program "mp3wrap.exe" (in directory "E:\Music\New Folder\zz Concatinate\Clear_10"): CreateProcess error=2, The system cannot find the file specified
A: Give the whole path to mp3wrap.exe.
Java doesn't use the PATH to find mp3wrap.
--
Update after comment:
Okay - rereading the question, he asks how to start the program from inside the directory. If the program needs it, you have to start the Java program while being in this directory.
You might still have to give the whole path to the program, or start it with an indication to search for it in the current dir. I remember that in Windows, the current dir is always searched. Other systems differ here, so you would indicate the current dir with a dot, which works on Windows too: "./mp3wrap".
A: Alternatively you might want to try using ProcessBuilder.start(). You can set env variables, the working directory and any args you want to pass to the process that is spawned by the start() method. Look at the Java docs for a sample invocation.
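A runnable sketch of the ProcessBuilder route. The helper name is made up, and "java -version" stands in for mp3wrap.exe so the sketch runs anywhere; substitute the full path to mp3wrap.exe and its arguments:

```java
import java.io.File;
import java.io.IOException;

class ProcessBuilderDemo {
    // Equivalent of rt.exec(cmdArray, null, workingDir), with stderr merged
    // into stdout so one reader sees everything the process prints.
    public static int run(File workingDir, String... command) {
        try {
            ProcessBuilder pb = new ProcessBuilder(command);
            pb.directory(workingDir);
            pb.redirectErrorStream(true);
            Process p = pb.start();
            p.getInputStream().transferTo(System.out); // Java 9+
            return p.waitFor();
        } catch (IOException | InterruptedException e) {
            e.printStackTrace();
            return -1;
        }
    }

    public static void main(String[] args) {
        // Stand-in command; for the question this would be something like
        // run(new File("E:\\Music\\...\\Clear_10"), "C:\\full\\path\\mp3wrap.exe", "Clear_10", "*.mp3")
        int exit = run(new File("."), "java", "-version");
        System.out.println("Exited with code " + exit);
    }
}
```

One caveat with either API: no shell is involved, so a `*.mp3` wildcard is passed to the program literally; it may only work if the program expands wildcards itself.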
| |
doc_23525657
|
I'm trying to add a banner that changes every time a page refreshes. I have set up 2 examples in my database called "link 1" and "link 2". I will want to add more as and when I get them.
What I want to happen is this:
I want to display one of the 2 images on my site and when the user refreshes the page, it will select one of the 2 images and this should continue every time the page refreshes.
I'm testing this out in a page called banner.php before I move it to my footer.php and make it live.
I currently have this code In my banner.php page:
<?PHP
include_once('include/connection.php');
// Edit this number to however many links you want displaying
$num_displayed = 1 ;
// Select random rows from the database
global $pdo;
$query = $pdo->prepare ("SELECT * FROM banners ORDER BY RAND() LIMIT $num_displayed");
$query->execute();
// For all the rows that you selected
while ($row = execute($result))
{
// Display them to the screen...
echo "<a href=\"" . $row["link"] . "\">
<img src=\"" . $row["image"] . "\" border=0 alt=\"" . $row["text"] . "\">
</a>" ;
}
?>
<br /><br /><br />
But I am getting this error code:
Fatal error: Call to undefined function execute() in banner.php on line 13
My connection page is used by other pages so I know it works.
Please can someone help me with what I am doing wrong?
If you need any more info then please ask and I will add it to this post.
Thank you.
Kev
A: replace this
while ($row = execute($result))
with this:
while ($row = $query->fetch())
EDIT
This makes it easier to read.
while ($row = $query->fetch()) :
// Display them to the screen...
?>
<a href="<?php echo $row['link']; ?>">
<img src="<?php echo $row['image']; ?>" border="0" alt="<?php echo $row['text'];?>">
</a>
<?php endwhile; ?>
<br/>
<br/>
<br/>
| |
doc_23525658
|
I have not gotten very far, I am just trying to make a square of random size and random color appear on the screen, but I can't even manage that. See my code below:
<script type="text/javascript">
function Shape () {
this.x = Math.floor(Math.random()*850);
this.y = Math.floor(Math.random()*850);
this.draw();
}
Shape.prototype.draw = function() {
var shapeHtml = '<div></div>';
var widthAndHeight = Math.floor(Math.random()*400);
var left = Math.floor(Math.random()*850);
var top = Math.floor(Math.random()*850);
this.shapeElement = $(shapeHtml);
this.shapeElement.css({
position: "relative",
left: this.left,
top: this.top,
width: widthAndHeight,
height: widthAndHeight,
});
$("body").append(this.shapeElement);
}
Shape.prototype.colour = function() {
var colours = '0123456789ABCDEF'.split('');
var randomColour = "#";
for (i = 0; i < 6; i++) {
randomColour+=colours[Math.floor(Math.random()*16)];
};
this.shapeElement.css({backgroundColor: 'randomColour'});
}
var square = new Shape();
</script
So far, no square will appear on the screen. All that happens is a div of a random size is appended, but it is always in the upper-left position and has no background color. The console is not helping me because it is not showing any errors in my code. I am finding the transition to OOP extremely confusing. Any help in understanding why this won't work would be greatly appreciated!
A: Several small errors:
Warning: function Shape sets up x and y properties that are not used.
Error: Shape.prototype.draw defines variables left and top but refers to them as this.left and this.top in the CSS object initializer. As properties they are undefined - take out the two this. qualifiers.
Error: Shape.prototype.colour is not called, so the DIV elements are transparent. Insert a call to this.colour() after, say, setting the CSS.
Error: The css initialiser object value for background color should be the variable name, randomColour not the string literal 'randomColour'. Remove the quote marks from around the identifier.
Severe warning: the for loop in the colour function does not declare i and creates it as an implicit global variable. Insert "use strict"; at the beginning of script files or function bodies to generate an error for undeclared variables.
In summary, none of the errors generate messages on the console (undefined CSS values are ignored), but together they prevent the code from working.
A: There are a number of issues.
1) The colour() method is never called.
2) Referring to this.top and this.left inside the css construct won't work either, since those properties were never set.
3) randomColour is a variable, not a string literal.
Fixed the issues and embedded the code here. Have a look.
function Shape () {
this.x = Math.floor(Math.random()*850);
this.y = Math.floor(Math.random()*850);
}
Shape.prototype.draw = function() {
var shapeHtml = '<div></div>';
var widthAndHeight = Math.floor(Math.random()*400);
var left = Math.floor(Math.random()*850);
var top = Math.floor(Math.random()*850);
this.shapeElement = $(shapeHtml);
this.shapeElement.css({
'margin-left': left,
'margin-top': top,
'width': widthAndHeight,
'height': widthAndHeight,
});
$("body").append(this.shapeElement);
}
Shape.prototype.colour = function() {
var colours = '0123456789ABCDEF'.split('');
var randomColour = "#";
for (i = 0; i < 6; i++) {
randomColour+=colours[Math.floor(Math.random()*16)];
};
this.shapeElement.css({backgroundColor: randomColour});
}
$(document).ready(function() {
var square = new Shape();
square.draw();
square.colour();
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Shape</title>
</head>
<body>
<div></div>
</body>
</html>
| |
doc_23525659
|
[
key => data => [
key1 => data => [...]
=> expires => 123456
key2 => data => [...]
=> expires => 123456
]
=> expires => 123456
]
This can be many levels deep (sometimes 10-15 levels).
What I would like to do is return only the values of data, so for instance, to create an array like so
[
key => key1 => [...]
=> key2 => [...]
]
how would I do this?
EDIT
print_r of the structure
Array (
[key] =>
Array (
[data] =>
Array (
[key1] =>
Array (
[data] => Array ( ... )
[expires] => 12345678)
[key2] =>
Array (
[data] => Array ( ... )
[expires] => 12345678
)
)
[expires] => 12345678
)
)
A: You can use a recursive function like this:
function extractValues($array, &$result){
foreach($array as $key1 => $val1){
if($key1 == 'data' && is_array($val1)) {
foreach($val1 as $key2 => $val2){
if(is_array($val2)) {
$result[$key2] = array();
extractValues($val2, $result[$key2]);
} else {
$result[$key2] = $val2;
}
}
} else if(!in_array($key1, array('expires'))) {
$result[$key1] = array();
extractValues($val1, $result[$key1]);
}
}
return $result;
}
It will work on an array structure like this:
$test = array(
'key' => array(
'data' => array(
'key1' => array(
'data' => array(
'key11' => 11,
'key12' => 12,
),
),
'key2' => array(
'data' => array(
'key21' => 21,
'key22' => 22,
),
),
),
'expires' => 12345,
),
);
$result = array();
extractValues($test, $result);
var_dump($result);
This is the result of the var_dump(); hope it meets your requirement:
array(1) {
["key"]=>
array(2) {
["key1"]=>
array(2) {
["key11"]=>
int(11)
["key12"]=>
int(12)
}
["key2"]=>
array(2) {
["key21"]=>
int(21)
["key22"]=>
int(22)
}
}
}
| |
doc_23525660
|
What is the best or recommended method for building this singleton type class in Swift?
Or has Swift even come up with something for this yet?
| |
doc_23525661
|
public void setClockTimeScheduler(Context ctx) {
// TODO Auto-generated method stub
Calendar current = Calendar.getInstance();
Calendar cal = Calendar.getInstance();
cal.set(cal.get(Calendar.YEAR),
cal.get(Calendar.MONTH),
cal.get(Calendar.DAY_OF_MONTH),
cal.get(Calendar.HOUR),
cal.get(Calendar.MINUTE)+2, 00);
if(cal.compareTo(current) <= 0){
//The set Date/Time already passed
Toast.makeText(ctx, "Invalid Date/Time"+cal.compareTo(current)+"Or"+current,
Toast.LENGTH_LONG).show();
}else{
setAlarmScheduler(cal,ctx);
}
A: First of all, you are adding two minutes to the current time, so the if condition can never be true. The if statement is useless; remove it and use the code below. It will surely help you!
public void setClockTimeScheduler(Context ctx){
Calendar cal = Calendar.getInstance();
cal.add(Calendar.MINUTE,2);
setAlarmScheduler(cal, ctx);
}
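As a side note, Calendar.add() rolls minute overflow into hours, days, and so on automatically. A small stand-alone sketch (the Main class name is just for illustration) confirms that the adjusted calendar always compares after the current one, so an "already passed" branch would be unreachable:

```java
import java.util.Calendar;

public class Main {
    public static void main(String[] args) {
        Calendar current = Calendar.getInstance();
        Calendar cal = (Calendar) current.clone(); // identical instant to start from
        cal.add(Calendar.MINUTE, 2); // rolls over into hours/days/years as needed
        // cal is strictly later than current, so compareTo(current) is positive
        System.out.println(cal.compareTo(current) > 0); // prints "true"
    }
}
```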
A: Are you sure that you are running this code with the right parameters? Because I have tried your code and it worked for me (I mean I never get into the "Invalid Date/Time" block). I did some modifications:
public void setClockTimeScheduler(Context ctx){
Calendar current = Calendar.getInstance();
Calendar cal = Calendar.getInstance();
cal.set(Calendar.MINUTE, current.get(Calendar.MINUTE)+2);
if(cal.compareTo(current) <= 0){
//The set Date/Time already passed
Log.d("MainActivity", "Invalid Date/Time" + cal.compareTo(current) + "Or" + current);
}else{
// setAlarmScheduler(cal, ctx);
}
}
| |
doc_23525662
|
When I try the following command this is what I get:
(my_env) crigano@crigano-desktop:~$ python3.8 -m pip install numpy ninja pyyaml mkl mkl-include setuptools cmake cffi typing
Collecting numpy
Using cached numpy-1.20.2-cp38-cp38-manylinux2014_aarch64.whl (12.7 MB)
Collecting ninja
Using cached ninja-1.10.0.post2.tar.gz (25 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... done
Collecting pyyaml
Using cached PyYAML-5.4.1-cp38-cp38-manylinux2014_aarch64.whl (818 kB)
ERROR: Could not find a version that satisfies the requirement mkl
ERROR: No matching distribution found for mkl
A: If you just want to use PyTorch on the bare-metal Jetson Nano, simply install it with NVIDIA's pre-compiled binary wheel. Other packages can be found in the Jetson Zoo.
MKL is developed by Intel "to optimize code for current and future generations of Intel® CPUs and GPUs." [PyPI]. Apparently it does run on other x86-based chips like AMD's (although Intel has historically intentionally crippled the library for non-Intel chips [Wikipedia]), but unsurprisingly Intel is not interested in supporting ARM devices and has not ported MKL to ARM architectures.
If your goal is to use MKL for math optimization in numpy, openblas is a working alternative for ARM. libopenblas-base:arm64 and libopenblas-dev:arm64 come pre-installed on NVIDIA's "L4T PyTorch" Docker images. You can confirm that numpy detects them with numpy.__config__.show(). This is what I get using numpy 1.12 in Python 3.6.9 on the l4t-pytorch:r32.5.0-pth1.6-py3 image:
blas_mkl_info:
NOT AVAILABLE
blis_info:
NOT AVAILABLE
openblas_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/lib/aarch64-linux-gnu']
language = c
define_macros = [('HAVE_CBLAS', None)]
blas_opt_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/lib/aarch64-linux-gnu']
language = c
define_macros = [('HAVE_CBLAS', None)]
lapack_mkl_info:
NOT AVAILABLE
openblas_lapack_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/lib/aarch64-linux-gnu']
language = c
define_macros = [('HAVE_CBLAS', None)]
lapack_opt_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/lib/aarch64-linux-gnu']
language = c
define_macros = [('HAVE_CBLAS', None)]
So presumably it will use openblas in place of MKL for math optimization. If your use case is also for numpy optimization, you can likewise use openblas and shouldn't need MKL... which is fortunate, since it isn't available anyway.
| |
doc_23525663
|
A: Instead of recursive CTEs and while loops, has anyone considered a more set-based approach? Note that this function was written for the question, which was based on SQL Server 2008 and comma as the delimiter. In SQL Server 2016 and above (and in compatibility level 130 and above), STRING_SPLIT() is a better option.
CREATE FUNCTION dbo.SplitString
(
@List nvarchar(max),
@Delim nvarchar(255)
)
RETURNS TABLE
AS
RETURN ( SELECT [Value] FROM
(
SELECT [Value] = LTRIM(RTRIM(SUBSTRING(@List, [Number],
CHARINDEX(@Delim, @List + @Delim, [Number]) - [Number])))
FROM (SELECT Number = ROW_NUMBER() OVER (ORDER BY name)
FROM sys.all_columns) AS x WHERE Number <= LEN(@List)
AND SUBSTRING(@Delim + @List, [Number], DATALENGTH(@Delim)/2) = @Delim
) AS y
);
GO
If you want to avoid the limitation of the length of the string being <= the number of rows in sys.all_columns (9,980 in model in SQL Server 2017; much higher in your own user databases), you can use other approaches for deriving the numbers, such as building your own table of numbers. You could also use a recursive CTE in cases where you can't use system tables or create your own:
CREATE FUNCTION dbo.SplitString
(
@List nvarchar(max),
@Delim nvarchar(255)
)
RETURNS TABLE WITH SCHEMABINDING
AS
RETURN ( WITH n(n) AS (SELECT 1 UNION ALL SELECT n+1
FROM n WHERE n <= LEN(@List))
SELECT [Value] = SUBSTRING(@List, n,
CHARINDEX(@Delim, @List + @Delim, n) - n)
FROM n WHERE n <= LEN(@List)
AND SUBSTRING(@Delim + @List, n, DATALENGTH(@Delim)/2) = @Delim
);
GO
But you'll have to append OPTION (MAXRECURSION 0) (or MAXRECURSION <longest possible string length if < 32768>) to the outer query in order to avoid errors with recursion for strings > 100 characters. If that is also not a good alternative then see this answer as pointed out in the comments, or this answer if you need an ordered split string function.
(Also, the delimiter will have to be NCHAR(<=1228). Still researching why.)
More on split functions, why (and proof that) while loops and recursive CTEs don't scale, and better alternatives, if you're splitting strings coming from the application layer:
*
*Splitting strings
A: Finally the wait is over: in SQL Server 2016 they have introduced a split string function, STRING_SPLIT
select * From STRING_SPLIT ('a,b', ',') cs
All the other methods to split a string, like XML, tally table, while loop, etc., have been blown away by this STRING_SPLIT function.
Here is an excellent article with performance comparison : Performance Surprises and Assumptions : STRING_SPLIT
A: if you replace
WHILE CHARINDEX(',', @stringToSplit) > 0
with
WHILE LEN(@stringToSplit) > 0
you can eliminate that last insert after the while loop!
CREATE FUNCTION dbo.splitstring ( @stringToSplit VARCHAR(MAX) )
RETURNS
@returnList TABLE ([Name] [nvarchar] (500))
AS
BEGIN
DECLARE @name NVARCHAR(255)
DECLARE @pos INT
WHILE LEN(@stringToSplit) > 0
BEGIN
SELECT @pos = CHARINDEX(',', @stringToSplit)
if @pos = 0
SELECT @pos = LEN(@stringToSplit)
SELECT @name = SUBSTRING(@stringToSplit, 1, @pos-1)
INSERT INTO @returnList
SELECT @name
SELECT @stringToSplit = SUBSTRING(@stringToSplit, @pos+1, LEN(@stringToSplit)-@pos)
END
RETURN
END
A: The often used approach with XML elements breaks in case of forbidden characters. This is an approach to use this method with any kind of character, even with the semicolon as delimiter.
The trick is to first use SELECT SomeString AS [*] FOR XML PATH('') to get all forbidden characters properly escaped. That's the reason why I replace the delimiter with a magic value: to avoid trouble with ; as the delimiter.
DECLARE @Dummy TABLE (ID INT, SomeTextToSplit NVARCHAR(MAX))
INSERT INTO @Dummy VALUES
(1,N'A&B;C;D;E, F')
,(2,N'"C" & ''D'';<C>;D;E, F');
DECLARE @Delimiter NVARCHAR(10)=';'; --special effort needed (due to entities coding with "&code;")!
WITH Casted AS
(
SELECT *
,CAST(N'<x>' + REPLACE((SELECT REPLACE(SomeTextToSplit,@Delimiter,N'§§Split$me$here§§') AS [*] FOR XML PATH('')),N'§§Split$me$here§§',N'</x><x>') + N'</x>' AS XML) AS SplitMe
FROM @Dummy
)
SELECT Casted.ID
,x.value(N'.',N'nvarchar(max)') AS Part
FROM Casted
CROSS APPLY SplitMe.nodes(N'/x') AS A(x)
The result
ID Part
1 A&B
1 C
1 D
1 E, F
2 "C" & 'D'
2 <C>
2 D
2 E, F
A: All the string-splitting functions that use some kind of looping (iteration) have bad performance. They should be replaced with a set-based solution.
This code performs excellently.
CREATE FUNCTION dbo.SplitStrings
(
@List NVARCHAR(MAX),
@Delimiter NVARCHAR(255)
)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
(
SELECT Item = y.i.value('(./text())[1]', 'nvarchar(4000)')
FROM
(
SELECT x = CONVERT(XML, '<i>'
+ REPLACE(@List, @Delimiter, '</i><i>')
+ '</i>').query('.')
) AS a CROSS APPLY x.nodes('i') AS y(i)
);
GO
A: The easiest way to do this is by using XML format.
1. Converting string to rows without table
QUERY
DECLARE @String varchar(100) = 'String1,String2,String3'
-- To change ',' to any other delimiter, just change ',' to your desired one
DECLARE @Delimiter CHAR = ','
SELECT LTRIM(RTRIM(Split.a.value('.', 'VARCHAR(100)'))) 'Value'
FROM
(
SELECT CAST ('<M>' + REPLACE(@String, @Delimiter, '</M><M>') + '</M>' AS XML) AS Data
) AS A
CROSS APPLY Data.nodes ('/M') AS Split(a)
RESULT
x---------x
| Value |
x---------x
| String1 |
| String2 |
| String3 |
x---------x
2. Converting to rows from a table which have an ID for each CSV row
SOURCE TABLE
x-----x--------------------------x
| Id | Value |
x-----x--------------------------x
| 1 | String1,String2,String3 |
| 2 | String4,String5,String6 |
x-----x--------------------------x
QUERY
-- To change ',' to any other delimiter, just change ',' before '</M><M>' to your desired one
DECLARE @Delimiter CHAR = ','
SELECT ID,LTRIM(RTRIM(Split.a.value('.', 'VARCHAR(100)'))) 'Value'
FROM
(
SELECT ID,CAST ('<M>' + REPLACE(VALUE, @Delimiter, '</M><M>') + '</M>' AS XML) AS Data
FROM TABLENAME
) AS A
CROSS APPLY Data.nodes ('/M') AS Split(a)
RESULT
x-----x----------x
| Id | Value |
x-----x----------x
| 1 | String1 |
| 1 | String2 |
| 1 | String3 |
| 2 | String4 |
| 2 | String5 |
| 2 | String6 |
x-----x----------x
A: I've used this SQL before which may work for you:-
CREATE FUNCTION dbo.splitstring ( @stringToSplit VARCHAR(MAX) )
RETURNS
@returnList TABLE ([Name] [nvarchar] (500))
AS
BEGIN
DECLARE @name NVARCHAR(255)
DECLARE @pos INT
WHILE CHARINDEX(',', @stringToSplit) > 0
BEGIN
SELECT @pos = CHARINDEX(',', @stringToSplit)
SELECT @name = SUBSTRING(@stringToSplit, 1, @pos-1)
INSERT INTO @returnList
SELECT @name
SELECT @stringToSplit = SUBSTRING(@stringToSplit, @pos+1, LEN(@stringToSplit)-@pos)
END
INSERT INTO @returnList
SELECT @stringToSplit
RETURN
END
and to use it:-
SELECT * FROM dbo.splitstring('91,12,65,78,56,789')
A: I had to write something like this recently. Here's the solution I came up with. It's generalized for any delimiter string and I think it would perform slightly better:
CREATE FUNCTION [dbo].[SplitString]
( @string nvarchar(4000)
, @delim nvarchar(100) )
RETURNS
@result TABLE
( [Value] nvarchar(4000) NOT NULL
, [Index] int NOT NULL )
AS
BEGIN
DECLARE @str nvarchar(4000)
, @pos int
, @prv int = 1
SELECT @pos = CHARINDEX(@delim, @string)
WHILE @pos > 0
BEGIN
SELECT @str = SUBSTRING(@string, @prv, @pos - @prv)
INSERT INTO @result SELECT @str, @prv
SELECT @prv = @pos + LEN(@delim)
, @pos = CHARINDEX(@delim, @string, @pos + 1)
END
INSERT INTO @result SELECT SUBSTRING(@string, @prv, 4000), @prv
RETURN
END
A: If you need a quick ad-hoc solution for common cases with minimum code, then this recursive CTE two-liner will do it:
DECLARE @s VARCHAR(200) = ',1,2,,3,,,4,,,,5,'
;WITH
a AS (SELECT i=-1, j=0 UNION ALL SELECT j, CHARINDEX(',', @s, j + 1) FROM a WHERE j > i),
b AS (SELECT SUBSTRING(@s, i+1, IIF(j>0, j, LEN(@s)+1)-i-1) s FROM a WHERE i >= 0)
SELECT * FROM b
Either use this as a stand-alone statement or just add the above CTEs to any of your queries and you will be able to join the resulting table b with others for use in any further expressions.
edit (by Shnugo)
If you add a counter, you will get a position index together with the List:
DECLARE @s VARCHAR(200) = '1,2333,344,4'
;WITH
a AS (SELECT n=0, i=-1, j=0 UNION ALL SELECT n+1, j, CHARINDEX(',', @s, j+1) FROM a WHERE j > i),
b AS (SELECT n, SUBSTRING(@s, i+1, IIF(j>0, j, LEN(@s)+1)-i-1) s FROM a WHERE i >= 0)
SELECT * FROM b;
The result:
n s
1 1
2 2333
3 344
4 4
A: I take the xml route by wrapping the values into elements (M but anything works):
declare @v nvarchar(max) = '100,201,abcde'
select
a.value('.', 'varchar(max)')
from
(select cast('<M>' + REPLACE(@v, ',', '</M><M>') + '</M>' AS XML) as col) as A
CROSS APPLY A.col.nodes ('/M') AS Split(a)
A: I needed a quick way to get rid of the +4 from a zip code.
UPDATE #Emails
SET ZIPCode = SUBSTRING(ZIPCode, 1, (CHARINDEX('-', ZIPCODE)-1))
WHERE ZIPCode LIKE '%-%'
No proc... no UDF... just one tight little inline command that does what it must. Not fancy, not elegant.
Change the delimiter as needed, etc, and it will work for anything.
A: A solution using a CTE, if anyone should need that (apart from me, who obviously did, that is why I wrote it).
declare @StringToSplit varchar(100) = 'Test1,Test2,Test3';
declare @SplitChar varchar(10) = ',';
with StringToSplit as (
select
ltrim( rtrim( substring( @StringToSplit, 1, charindex( @SplitChar, @StringToSplit ) - 1 ) ) ) Head
, substring( @StringToSplit, charindex( @SplitChar, @StringToSplit ) + 1, len( @StringToSplit ) ) Tail
union all
select
ltrim( rtrim( substring( Tail, 1, charindex( @SplitChar, Tail ) - 1 ) ) ) Head
, substring( Tail, charindex( @SplitChar, Tail ) + 1, len( Tail ) ) Tail
from StringToSplit
where charindex( @SplitChar, Tail ) > 0
union all
select
ltrim( rtrim( Tail ) ) Head
, '' Tail
from StringToSplit
where charindex( @SplitChar, Tail ) = 0
and len( Tail ) > 0
)
select Head from StringToSplit
A: This is more narrowly-tailored. When I do this I usually have a comma-delimited list of unique ids (INT or BIGINT), which I want to cast as a table to use as an inner join to another table that has a primary key of INT or BIGINT. I want an in-line table-valued function returned so that I have the most efficient join possible.
Sample usage would be:
DECLARE @IDs VARCHAR(1000);
SET @IDs = ',99,206,124,8967,1,7,3,45234,2,889,987979,';
SELECT me.Value
FROM dbo.MyEnum me
INNER JOIN dbo.GetIntIdsTableFromDelimitedString(@IDs) ids ON me.PrimaryKey = ids.ID
I stole the idea from http://sqlrecords.blogspot.com/2012/11/converting-delimited-list-to-table.html, changing it to be in-line table-valued and cast as INT.
create function dbo.GetIntIDTableFromDelimitedString
(
@IDs VARCHAR(1000) --this parameter must start and end with a comma, eg ',123,456,'
--all items in list must be perfectly formatted or function will error
)
RETURNS TABLE AS
RETURN
SELECT
CAST(SUBSTRING(@IDs,Nums.number + 1,CHARINDEX(',',@IDs,(Nums.number+2)) - Nums.number - 1) AS INT) AS ID
FROM
[master].[dbo].[spt_values] Nums
WHERE Nums.Type = 'P'
AND Nums.number BETWEEN 1 AND DATALENGTH(@IDs)
AND SUBSTRING(@IDs,Nums.number,1) = ','
AND CHARINDEX(',',@IDs,(Nums.number+1)) > Nums.number;
GO
A: There is a correct version on here, but I thought it would be nice to add a little fault tolerance in case there is a trailing comma, and to make it usable not as a function but as part of a larger piece of code. Just in case you're only using it one time and don't need a function. This is also for integers (which is what I needed it for), so you might have to change your data types.
DECLARE @StringToSeperate VARCHAR(10)
SET @StringToSeperate = '1,2,5'
--SELECT @StringToSeperate IDs INTO #Test
DROP TABLE #IDs
CREATE TABLE #IDs (ID int)
DECLARE @CommaSeperatedValue NVARCHAR(255) = ''
DECLARE @Position INT = LEN(@StringToSeperate)
--Add Each Value
WHILE CHARINDEX(',', @StringToSeperate) > 0
BEGIN
SELECT @Position = CHARINDEX(',', @StringToSeperate)
SELECT @CommaSeperatedValue = SUBSTRING(@StringToSeperate, 1, @Position-1)
INSERT INTO #IDs
SELECT @CommaSeperatedValue
SELECT @StringToSeperate = SUBSTRING(@StringToSeperate, @Position+1, LEN(@StringToSeperate)-@Position)
END
--Add Last Value
IF (LEN(LTRIM(RTRIM(@StringToSeperate)))>0)
BEGIN
INSERT INTO #IDs
SELECT SUBSTRING(@StringToSeperate, 1, @Position)
END
SELECT * FROM #IDs
A: I modified +Andy Robinson's function a little bit. Now you can select only the required part from the returned table:
CREATE FUNCTION dbo.splitstring ( @stringToSplit VARCHAR(MAX) )
RETURNS
@returnList TABLE ([numOrder] [tinyint] , [Name] [nvarchar] (500)) AS
BEGIN
DECLARE @name NVARCHAR(255)
DECLARE @pos INT
DECLARE @orderNum INT
SET @orderNum=0
WHILE CHARINDEX('.', @stringToSplit) > 0
BEGIN
SELECT @orderNum=@orderNum+1;
SELECT @pos = CHARINDEX('.', @stringToSplit)
SELECT @name = SUBSTRING(@stringToSplit, 1, @pos-1)
INSERT INTO @returnList
SELECT @orderNum,@name
SELECT @stringToSplit = SUBSTRING(@stringToSplit, @pos+1, LEN(@stringToSplit)-@pos)
END
SELECT @orderNum=@orderNum+1;
INSERT INTO @returnList
SELECT @orderNum, @stringToSplit
RETURN
END
Usage:
SELECT Name FROM dbo.splitstring('ELIS.YD.CRP1.1.CBA.MDSP.T389.BT') WHERE numOrder=5
A: Simples
DECLARE @String varchar(100) = '11,21,84,85,87'
SELECT * FROM TB_PAPEL WHERE CD_PAPEL IN (SELECT value FROM STRING_SPLIT(@String, ','))
-- EQUIVALENTE
SELECT * FROM TB_PAPEL WHERE CD_PAPEL IN (11,21,84,85,87)
A: Here is a version that can split on a pattern using PATINDEX, a simple adaptation of the post above. I had a case where I needed to split a string that contained multiple separator chars.
alter FUNCTION dbo.splitstring ( @stringToSplit VARCHAR(1000), @splitPattern varchar(10) )
RETURNS
@returnList TABLE ([Name] [nvarchar] (500))
AS
BEGIN
DECLARE @name NVARCHAR(255)
DECLARE @pos INT
WHILE PATINDEX(@splitPattern, @stringToSplit) > 0
BEGIN
SELECT @pos = PATINDEX(@splitPattern, @stringToSplit)
SELECT @name = SUBSTRING(@stringToSplit, 1, @pos-1)
INSERT INTO @returnList
SELECT @name
SELECT @stringToSplit = SUBSTRING(@stringToSplit, @pos+1, LEN(@stringToSplit)-@pos)
END
INSERT INTO @returnList
SELECT @stringToSplit
RETURN
END
select * from dbo.splitstring('stringa/stringb/x,y,z','%[/,]%');
result looks like this
stringa
stringb
x
y
z
A: Personally, I use this function:
ALTER FUNCTION [dbo].[CUST_SplitString]
(
@String NVARCHAR(4000),
@Delimiter NCHAR(1)
)
RETURNS TABLE
AS
RETURN
(
WITH Split(stpos,endpos)
AS(
SELECT 0 AS stpos, CHARINDEX(@Delimiter,@String) AS endpos
UNION ALL
SELECT endpos+1, CHARINDEX(@Delimiter,@String,endpos+1)
FROM Split
WHERE endpos > 0
)
SELECT 'Id' = ROW_NUMBER() OVER (ORDER BY (SELECT 1)),
'Data' = SUBSTRING(@String,stpos,COALESCE(NULLIF(endpos,0),LEN(@String)+1)-stpos)
FROM Split
)
A: I have developed a double splitter (it takes two split characters) as requested here. It could be of some value in this thread, seeing as it's the most referenced for queries relating to string splitting.
CREATE FUNCTION uft_DoubleSplitter
(
-- Add the parameters for the function here
@String VARCHAR(4000),
@Splitter1 CHAR,
@Splitter2 CHAR
)
RETURNS @Result TABLE (Id INT,MId INT,SValue VARCHAR(4000))
AS
BEGIN
DECLARE @FResult TABLE(Id INT IDENTITY(1, 1),
SValue VARCHAR(4000))
DECLARE @SResult TABLE(Id INT IDENTITY(1, 1),
MId INT,
SValue VARCHAR(4000))
SET @String = @String+@Splitter1
WHILE CHARINDEX(@Splitter1, @String) > 0
BEGIN
DECLARE @WorkingString VARCHAR(4000) = NULL
SET @WorkingString = SUBSTRING(@String, 1, CHARINDEX(@Splitter1, @String) - 1)
--Print @workingString
INSERT INTO @FResult
SELECT CASE
WHEN @WorkingString = '' THEN NULL
ELSE @WorkingString
END
SET @String = SUBSTRING(@String, LEN(@WorkingString) + 2, LEN(@String))
END
IF ISNULL(@Splitter2, '') != ''
BEGIN
DECLARE @OStartLoop INT
DECLARE @OEndLoop INT
SELECT @OStartLoop = MIN(Id),
@OEndLoop = MAX(Id)
FROM @FResult
WHILE @OStartLoop <= @OEndLoop
BEGIN
DECLARE @iString VARCHAR(4000)
DECLARE @iMId INT
SELECT @iString = SValue+@Splitter2,
@iMId = Id
FROM @FResult
WHERE Id = @OStartLoop
WHILE CHARINDEX(@Splitter2, @iString) > 0
BEGIN
DECLARE @iWorkingString VARCHAR(4000) = NULL
SET @IWorkingString = SUBSTRING(@iString, 1, CHARINDEX(@Splitter2, @iString) - 1)
INSERT INTO @SResult
SELECT @iMId,
CASE
WHEN @iWorkingString = '' THEN NULL
ELSE @iWorkingString
END
SET @iString = SUBSTRING(@iString, LEN(@iWorkingString) + 2, LEN(@iString))
END
SET @OStartLoop = @OStartLoop + 1
END
INSERT INTO @Result
SELECT MId AS PrimarySplitID,
ROW_NUMBER() OVER (PARTITION BY MId ORDER BY Mid, Id) AS SecondarySplitID ,
SValue
FROM @SResult
END
ELSE
BEGIN
INSERT INTO @Result
SELECT Id AS PrimarySplitID,
NULL AS SecondarySplitID,
SValue
FROM @FResult
END
RETURN
END
Usage:
--FirstSplit
SELECT * FROM uft_DoubleSplitter('ValueA=ValueB=ValueC=ValueD==ValueE&ValueA=ValueB=ValueC===ValueE&ValueA=ValueB==ValueD===','&',NULL)
--Second Split
SELECT * FROM uft_DoubleSplitter('ValueA=ValueB=ValueC=ValueD==ValueE&ValueA=ValueB=ValueC===ValueE&ValueA=ValueB==ValueD===','&','=')
Possible Usage (Get second value of each split):
SELECT fn.SValue
FROM uft_DoubleSplitter('ValueA=ValueB=ValueC=ValueD==ValueE&ValueA=ValueB=ValueC===ValueE&ValueA=ValueB==ValueD===', '&', '=')AS fn
WHERE fn.mid = 2
A: A recursive CTE-based solution:
declare @T table (iden int identity, col1 varchar(100));
insert into @T(col1) values
('ROOT/South America/Lima/Test/Test2')
, ('ROOT/South America/Peru/Test/Test2')
, ('ROOT//South America/Venuzuala ')
, ('RtT/South America / ')
, ('ROOT/South Americas// ');
declare @split char(1) = '/';
select @split as split;
with cte as
( select t.iden, case when SUBSTRING(REVERSE(rtrim(t.col1)), 1, 1) = @split then LTRIM(RTRIM(t.col1)) else LTRIM(RTRIM(t.col1)) + @split end as col1, 0 as pos , 1 as cnt
from @T t
union all
select t.iden, t.col1 , charindex(@split, t.col1, t.pos + 1), cnt + 1
from cte t
where charindex(@split, t.col1, t.pos + 1) > 0
)
select t1.*, t2.pos, t2.cnt
, ltrim(rtrim(SUBSTRING(t1.col1, t1.pos+1, t2.pos-t1.pos-1))) as bingo
from cte t1
join cte t2
on t2.iden = t1.iden
and t2.cnt = t1.cnt+1
and t2.pos > t1.pos
order by t1.iden, t1.cnt;
A: With all due respect to @AviG, this is a bug-free version of the function devised by him that returns all the tokens in full.
IF EXISTS (SELECT * FROM sys.objects WHERE type = 'TF' AND name = 'TF_SplitString')
DROP FUNCTION [dbo].[TF_SplitString]
GO
-- =============================================
-- Author: AviG
-- Amendments: Parameterized the delimiter and included the missing chars in the last token - Gemunu Wickremasinghe
-- Description: Table-valued function that breaks the delimited string by the given delimiter and returns a table with the split results
-- Usage
-- select * from [dbo].[TF_SplitString]('token1,token2,,,,,,,,token969',',')
-- 969 items should be returned
-- select * from [dbo].[TF_SplitString]('4672978261,4672978255',',')
-- 2 items should be returned
-- =============================================
CREATE FUNCTION dbo.TF_SplitString
( @stringToSplit VARCHAR(MAX) ,
@delimeter char = ','
)
RETURNS
@returnList TABLE ([Name] [nvarchar] (500))
AS
BEGIN
DECLARE @name NVARCHAR(255)
DECLARE @pos INT
WHILE LEN(@stringToSplit) > 0
BEGIN
SELECT @pos = CHARINDEX(@delimeter, @stringToSplit)
if @pos = 0
BEGIN
SELECT @pos = LEN(@stringToSplit)
SELECT @name = SUBSTRING(@stringToSplit, 1, @pos)
END
else
BEGIN
SELECT @name = SUBSTRING(@stringToSplit, 1, @pos-1)
END
INSERT INTO @returnList
SELECT @name
SELECT @stringToSplit = SUBSTRING(@stringToSplit, @pos+1, LEN(@stringToSplit)-@pos)
END
RETURN
END
A: This is based on Andy Robertson's answer; I needed a delimiter other than a comma.
CREATE FUNCTION dbo.splitstring ( @stringToSplit nvarchar(MAX), @delim nvarchar(max))
RETURNS
@returnList TABLE ([value] [nvarchar] (MAX))
AS
BEGIN
DECLARE @value NVARCHAR(max)
DECLARE @pos INT
WHILE CHARINDEX(@delim, @stringToSplit) > 0
BEGIN
SELECT @pos = CHARINDEX(@delim, @stringToSplit)
SELECT @value = SUBSTRING(@stringToSplit, 1, @pos - 1)
INSERT INTO @returnList
SELECT @value
SELECT @stringToSplit = SUBSTRING(@stringToSplit, @pos + LEN(@delim), LEN(@stringToSplit) - @pos)
END
INSERT INTO @returnList
SELECT @stringToSplit
RETURN
END
GO
And to use it:
SELECT * FROM dbo.splitstring('test1 test2 test3', ' ');
(Tested on SQL Server 2008 R2)
EDIT: correct test code
A: The easiest way:
*
*Install SQL Server 2016
*Use STRING_SPLIT https://msdn.microsoft.com/en-us/library/mt684588.aspx
It works even in express edition :).
A: ALTER FUNCTION [dbo].func_split_string
(
@input as varchar(max),
    @delimiter as varchar(10) = ';'
)
RETURNS @result TABLE
(
id smallint identity(1,1),
csv_value varchar(max) not null
)
AS
BEGIN
DECLARE @pos AS INT;
DECLARE @string AS VARCHAR(MAX) = '';
WHILE LEN(@input) > 0
BEGIN
SELECT @pos = CHARINDEX(@delimiter,@input);
IF(@pos<=0)
select @pos = len(@input)
IF(@pos <> LEN(@input))
SELECT @string = SUBSTRING(@input, 1, @pos-1);
ELSE
SELECT @string = SUBSTRING(@input, 1, @pos);
INSERT INTO @result SELECT @string
SELECT @input = SUBSTRING(@input, @pos+len(@delimiter), LEN(@input)-@pos)
END
RETURN
END
A: You can Use this function:
CREATE FUNCTION SplitString
(
@Input NVARCHAR(MAX),
@Character CHAR(1)
)
RETURNS @Output TABLE (
Item NVARCHAR(1000)
)
AS
BEGIN
DECLARE @StartIndex INT, @EndIndex INT
SET @StartIndex = 1
IF SUBSTRING(@Input, LEN(@Input), 1) <> @Character
BEGIN
SET @Input = @Input + @Character
END
WHILE CHARINDEX(@Character, @Input) > 0
BEGIN
SET @EndIndex = CHARINDEX(@Character, @Input)
INSERT INTO @Output(Item)
SELECT SUBSTRING(@Input, @StartIndex, @EndIndex - 1)
SET @Input = SUBSTRING(@Input, @EndIndex + 1, LEN(@Input))
END
RETURN
END
GO
A: Here is an example that you can use as a function; you can also put the same logic in a procedure.
--SELECT * from [dbo].fn_SplitString ;
CREATE FUNCTION [dbo].[fn_SplitString]
(@CSV VARCHAR(MAX), @Delimeter VARCHAR(100) = ',')
RETURNS @retTable TABLE
(
[value] VARCHAR(MAX) NULL
)AS
BEGIN
DECLARE
@vCSV VARCHAR (MAX) = @CSV,
@vDelimeter VARCHAR (100) = @Delimeter;
IF @vDelimeter = ';'
BEGIN
SET @vCSV = REPLACE(@vCSV, ';', '~!~#~');
SET @vDelimeter = REPLACE(@vDelimeter, ';', '~!~#~');
END;
SET @vCSV = REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(@vCSV, '&', '&amp;'), '<', '&lt;'), '>', '&gt;'), '''', '&apos;'), '"', '&quot;');
DECLARE @xml XML;
SET @xml = '<i>' + REPLACE(@vCSV, @vDelimeter, '</i><i>') + '</i>';
INSERT INTO @retTable
SELECT
x.i.value('.', 'varchar(max)') AS COLUMNNAME
FROM @xml.nodes('//i')AS x(i);
RETURN;
END;
A: /*
Answer to T-SQL split string
Based on answers from Andy Robinson and AviG
Enhanced functionality ref: LEN function not including trailing spaces in SQL Server
This 'file' should be valid as both a markdown file and an SQL file
*/
CREATE FUNCTION dbo.splitstring ( --CREATE OR ALTER
@stringToSplit NVARCHAR(MAX)
) RETURNS @returnList TABLE ([Item] NVARCHAR (MAX))
AS BEGIN
DECLARE @name NVARCHAR(MAX)
DECLARE @pos BIGINT
SET @stringToSplit = @stringToSplit + ',' -- this should allow entries that end with a `,` to have a blank value in that "column"
WHILE ((LEN(@stringToSplit+'_') > 1)) BEGIN -- `+'_'` gets around LEN trimming terminal spaces. See URL referenced above
SET @pos = COALESCE(NULLIF(CHARINDEX(',', @stringToSplit),0),LEN(@stringToSplit+'_')) -- COALESCE grabs first non-null value
SET @name = SUBSTRING(@stringToSplit, 1, @pos-1) --MAX size of string of type nvarchar is 4000
SET @stringToSplit = SUBSTRING(@stringToSplit, @pos+1, 4000) -- With SUBSTRING fn (MS web): "If start is greater than the number of characters in the value expression, a zero-length expression is returned."
INSERT INTO @returnList SELECT @name --additional debugging parameters below can be added
-- + ' pos:' + CAST(@pos as nvarchar) + ' remain:''' + @stringToSplit + '''(' + CAST(LEN(@stringToSplit+'_')-1 as nvarchar) + ')'
END
RETURN
END
GO
/*
Test cases: see URL referenced as "enhanced functionality" above
SELECT *,LEN(Item+'_')-1 'L' from splitstring('a,,b')
Item | L
--- | ---
a | 1
| 0
b | 1
SELECT *,LEN(Item+'_')-1 'L' from splitstring('a,,')
Item | L
--- | ---
a | 1
| 0
| 0
SELECT *,LEN(Item+'_')-1 'L' from splitstring('a,, ')
Item | L
--- | ---
a | 1
| 0
| 1
SELECT *,LEN(Item+'_')-1 'L' from splitstring('a,, c ')
Item | L
--- | ---
a | 1
| 0
c | 3
*/
| |
doc_23525664
|
Now I want to send this image along with the user's email ID. How can I send this image directly, without making the user save it on their local system?
A: I'm thinking you need some javascript magic, and because you already use HTML5 canvas, that shouldn't be a problem.
So, an onclick event on the submit button that will make an ajax request to your backend php mailer script.
var strDataURI = oCanvas.toDataURL();
// returns "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAMgAAADICAYAAACt..."
You just have to pass the strDataURI as a parameter.
Now, I think you should also save these in your database, so that the email can just contain this image tag inside:
<img src="http://www.yourdomain.com/generate_image.php?id=2" alt="Design #2" />
And that the generate_image.php script will do something like this
<?php
header('Cache-control: max-age=2592000');
header('Expires: ' . gmdate('D, d M Y H:i:s \G\M\T', time() + 2592000));
// connect to db here ..
// $id = (int)$_GET['id']; "SELECT youtable WHERE id = '{$id}'"
// and the $image variable should contain "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAMgAAADICAYAAACt..."
list($settings, $encoded_string) = explode(',', $image);
list($img_type, $encoding_method) = explode(';', substr($settings, 5));
header("Content-type: {$img_type}");
if($encoding_method == 'base64')
die(base64_decode($encoded_string)); // stop script execution and print out the image
else { // use another decoding method
}
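The same data-URI parsing can be sketched in Python for illustration (a hypothetical helper mirroring the two explode() calls above, not part of the original script):

```python
import base64

def parse_data_uri(uri):
    # "data:image/png;base64,...." -> (media type, raw bytes),
    # mirroring the two explode() calls in the PHP above.
    settings, encoded = uri.split(",", 1)
    img_type, encoding_method = settings[5:].split(";", 1)  # drop "data:"
    if encoding_method != "base64":
        raise ValueError("unsupported encoding: " + encoding_method)
    return img_type, base64.b64decode(encoded)

mime, raw = parse_data_uri("data:image/png;base64,aGVsbG8=")
print(mime)  # image/png
```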
A: if(!empty($_POST['email'])){
$email=$_POST['email'];
$image=$_POST['legoImage'];
$headers="From:".$email."\r\n";
$headers .= "MIME-Version: 1.0\r\n";
$headers .= "Content-Type: text/html; charset=ISO-8859-1\r\n";
list($settings, $encoded_string) = explode(',', $image);
list($img_type, $encoding_method) = explode(';', substr($settings, 5));
if($encoding_method == 'base64'){
$file=fopen("images/newLego.png",'w+');
fwrite($file,base64_decode($encoded_string)) ;
fclose($file);
}
$my_file = "newLego.png";
$my_path = "images/";
$my_subject = "My Design";
$my_message = "Designed by ".$email;
mail_attachment($my_file, $my_path, "myemail@gmail.com", $email, $email, $email, $my_subject, $my_message);
}
I picked up the mail_attachment() function here.
A: Assuming you've succeeded in creating an image file of your canvas using the tutorial you posted, you can use a library like PEAR's Mail_Mime to add attachments to your email.
You can refer to this question for an example using Mail_Mime.
| |
doc_23525665
|
import java.awt.Component;
import javax.swing.JFrame;
import javax.swing.JScrollPane;
import javax.swing.JTable;
public class gass extends JFrame {
String title[] ={"Box", "Weight", "Priority"};
public gass() {
int nb=interface1.BNumber;
Object[][][] data = new Object[nb][nb][nb];
int E1=0, E2=0;
for (int i=0;i<nb;i++)
{ data[i][0][0] = i+1;
E1 = (int) (Math.random() * 100);
data[0][i][0] = E1;
E2 = (int) (Math.random() * 10);
data[0][0][i] = E2;
}
for (int j=0;j<nb;j++)
{
System.out.println("*"+data[j][0][0]+"*"+data[0][j][0]+"*"+data[0][0][j]+"*");
}
JTable table = new JTable(data, title);
Component add = this.getContentPane().add(new JScrollPane(table));
this.setVisible(true);
table.setPreferredScrollableViewportSize(table.getPreferredSize());
this.setSize(800,400);
}
}
Another problem: I always get wrong data in the first cells of the Object array ***data[0][0][0] = wrong information!***
Next, a link to a description of the output of my small application. Thanks a lot for the help.
Click in this link here to get Description Image
A: The JTable constructor takes an Object[][] as argument.
This array is an array of rows. So data[i] is a row, which is an array of columns.
And each row in the array is itself an array of columns. Each column (data[i][j]) should contain some data displayed in one cell of the JTable.
In your case, this data is itself an array. Since there is no specific renderer associated to object arrays, the toString() method of your array is used to display the array in the cell. And an array's toString() method returns something like [Ljava.lang.Object;@.
You should tell us what you would like to display in each cell, to get a better answer, explaining what you should do.
EDIT:
given what you want to display, you just need a two-dimensional array:
Object[][] data = new Object[nb][3]; // nb rows, 3 columns
for (int row = 0; row < nb; row++) {
data[row][0] = row + 1; // first column: row number
    data[row][1] = (int) (Math.random() * 100); // second column: weight
    data[row][2] = (int) (Math.random() * 10); // third column: priority
}
| |
doc_23525666
|
(function($) {
$.fn.simpleSpinner = function(options) {
var settings = $.extend({
size: 'large',
step: 1,
}, $(this).data('spinner'), options);
return this.each(function(e) {
var self = $(this);
......
});
};
}(jQuery));
Then I initialize plugin like:
$(function() {
$('.spinner').simpleSpinner();
});
<input class="spinner" type="number" value="1" min="1" max="10" data-spinner='{"size":"large"}'>
<input class="spinner" type="number" value="1" min="1" max="10" data-spinner='{"size":"small"}'>
This all works fine, except that if I have more than one element, the data-attributes from the first element are applied to all of them.
I would like to be able to individually control each element using size in data-spinner without needing two instances of the plugin,
$('.spinner2').simpleSpinner();
with different class assigned to it.
A: Just move your settings initialization code inside the loop over the elements:
(function($) {
$.fn.simpleSpinner = function(options) {
return this.each(function(e) {
var $self = $(this);
var settings = $.extend({
size: 'large',
step: 1,
}, $self.data('spinner'), options);
$self.after($("<pre/>",{text:JSON.stringify(settings)}))
});
};
}(jQuery));
$(function() {
$('.spinner').simpleSpinner();
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<input class="spinner" type="number" value="1" min="1" max="10" data-spinner='{"size":"large"}'>
<input class="spinner" type="number" value="1" min="1" max="10" data-spinner='{"size":"small"}'>
<input class="spinner" type="number" value="1" min="1" max="10" data-spinner='{"size":"lorem"}'>
<input class="spinner" type="number" value="1" min="1" max="10" data-spinner='{"size":"ipsum"}'>
| |
doc_23525667
|
Is there some way that I could modify my web.config file (just for local testing) so that it would actually go to the index.html (load that up) and then to the /home/about state?
Here's my current web.config:
<configuration>
<system.web>
<compilation debug="true" targetFramework="4.5" />
<httpRuntime targetFramework="4.5" />
</system.web>
</configuration>
A: It seems you are using html5mode. In this case there's no # to keep URL changes from being sent as requests to the server.
With this configuration, you need help from the server: it must serve index.html when it receives requests for your SPA routes.
This SO answer has details on configuring URL Rewrite on web.config:
Rules go by:
<system.webServer>
<rewrite>
<rules>
<rule name="AngularJS Routes" stopProcessing="true">
<match url=".*" />
<conditions logicalGrouping="MatchAll">
<add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
<add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
<add input="{REQUEST_URI}" pattern="^/(api)" negate="true" />
</conditions>
<action type="Rewrite" url="/" />
</rule>
</rules>
</rewrite>
</system.webServer>
It assumes your API is under: /api and for any directory or file that it finds, it serves as-is.
Anything else gets rewritten to /, which, with the default document configured to index.html, will load your SPA.
Also note you need to install the URL Rewrite module for IIS (IIS Express doesn't need the module)
Another option is one of these lightweight HTTP servers npm packages.
John Papa has one: lite-server. It uses BrowserSync under the hood:
BrowserSync does most of what we want in a super fast lightweight
development server. It serves the static content, detects changes,
refreshes the browser, and offers many customizations.
When creating a SPA there are routes that are only known to the
browser. For example, /customer/21 may be a client side route for an
Angular app. If this route is entered manually or linked to directly
as the entry point of the Angular app (aka a deep link) the static
server will receive the request, because Angular is not loaded yet.
The server will not find a match for the route and thus return a 404.
The desired behavior in this case is to return the index.html (or
whatever starting page of the app we have defined). BrowserSync does
not automatically allow for a fallback page. But it does allow for
custom middleware. This is where lite-server steps in.
lite-server is a simple customized wrapper around BrowserSync to make
it easy to serve SPAs.
| |
doc_23525668
|
SomeKeyValueTypedPipe
.mapWithValue(dictForKeys) { case ((key, value), dictForKeys) =>
(dictForKeys.get.getOrElse(key, key), value) }
.mapWithValue(dictForValues) { case ((key, value), dictForValues) =>
(key, dictForValues.get.getOrElse(value, value)) }
I was just wondering whether there's a more compact way of writing this, i.e. use only 1 mapWithValue step with 2 separate ValuePipes.
A: You could create a ValuePipe of a tuple of Maps, like ValuePipe[(Map[String, String], Map[String, String])], and then use it like so:
SomeKeyValueTypedPipe
.mapWithValue(dict) { case ((key, value), (dictForKeys, dictForValues)) =>
(dictForKeys.get.getOrElse(key, key), dictForValues.get.getOrElse(value, value)) }
| |
doc_23525669
|
MSISDN, CARDNUMBER AND ACCOUNTNUMBER
I am trying to add data in the ACCOUNTNUMBER column. The value here is min 1 and max 4.
So if an MSISDN has 2 card numbers registered, its account numbers will be 1 and 2, and the next MSISDN in the following rows will start again at 1 for its first card number.
I have 10,000 rows and I cannot update all of them manually. If someone can help me with a formula, I would really appreciate it.
A: You can use the below formula in B2 and copy that cell and paste in all other rows in B
=SUMPRODUCT((A$2:A2=A2)*1)
A: You can use this formula on column B
=COUNTIF($A$2:A2,A2)
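Both formulas compute a running count of each MSISDN seen so far down the column. The same logic, sketched in Python purely for illustration:

```python
from collections import defaultdict

def account_numbers(msisdns):
    # Running count per MSISDN: the nth occurrence gets account number n,
    # mirroring =COUNTIF($A$2:A2,A2) filled down column B.
    seen = defaultdict(int)
    out = []
    for m in msisdns:
        seen[m] += 1
        out.append(seen[m])
    return out

print(account_numbers(["m1", "m1", "m2", "m2", "m2", "m3"]))  # [1, 2, 1, 2, 3, 1]
```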
| |
doc_23525670
|
Expected type
class 'protorpc.messages.MHistoryActivityViewMessage'
for field items,
found
MHistoryActivityViewMessage\n body: u'Test Here'\n activity_id: 'WS-MHistory_ag9zfm1jYS1kZXJyaWNrLTFyGQsSEE1IaXN0b3J5QWN0aXZpdHkYsbyuBAw'
[...]
I have run across this issue many times where the expected type is 'absolute.path.to.MyMessage' but the found type is 'MyMessage'.
I'm stumped as to why I get this error. I have gotten this error many times. One workaround that sometimes works is moving the protorpc message definition higher up in the file.
Any hints or ideas?
| |
doc_23525671
|
data = list(a = factor(c(1,1,2,2,3,NA,NA)),
b = factor(c("a","b","b")),
c = factor(c(3,4,NA,3)))
data = lapply(data, FUN = function(x) {
if (any(is.na(x))) {
x = addNA(x)
levels(x)[length(levels(x))] = "Missing"
}
})
Any help would be appreciated.
A: We can try
lapply(data, function(x) {
if(anyNA(x)) {
levels(x) <- c(levels(x), "Missing")
x[is.na(x)] <- "Missing"
x}
else x
})
#$a
#[1] 1 1 2 2 3 Missing Missing
#Levels: 1 2 3 Missing
#$b
#[1] a b b
#Levels: a b
#$c
#[1] 3 4 Missing 3
#Levels: 3 4 Missing
| |
doc_23525672
|
I am trying to enable ModelsBuilder in API mode.
As far as I've learned, in my version of Umbraco, Umbraco.ModelsBuilder.Api should already be part of Umbraco.Core and installed.
However when I check in my Developer > ModelsBuilders tab I can see this:
ModelsBuilder is enabled, with the following configuration:
The models factory is enabled.
The API is enabled but not installed.
External tools such as Visual Studio cannot use the API.
No models mode is specified: models will not be generated.
Models namespace is Umbraco.Web.PublishedContentModels.
Static mixin getters are enabled. The pattern for getters is "Get{0}".
Tracking of out-of-date models is not enabled.
In my web.config I have this:
<add key="Umbraco.ModelsBuilder.Enable" value="true" />
<add key="Umbraco.ModelsBuilder.EnableApi" value="true" />
<add key="Umbraco.ModelsBuilder.ModelsMode" value="Nothing" />
Since it says:
The API is enabled but not installed.
I tried to install Umbraco.ModelsBuilder.Api but have not found any information which ModelsBuilder versions are compatible with Umbraco 7.6 so I installed the latest 8.0.4 into my start Web project.
I installed the VS extension and created a separate project to hold the models, e.g. MyProject.Umbraco.Models, right-clicked the Builder.cs file and clicked 'Run custom tool' (which I previously set up in the Builder.cs properties).
It does something but at the end it throws an error:
UmbracoModelsBuilder: Starting v8.0.4 10/05/2019 18:29:01.
UmbracoModelsBuilder: UmbracoModelsBuilder failed to generate code: UnsupportedMediaTypeException: No MediaTypeFormatter is available to read an object of type 'IDictionary`2' from content with media type 'text/html'.
UmbracoModelsBuilder: at System.Net.Http.HttpContentExtensions.ReadAsAsync[T](HttpContent content, Type type, IEnumerable`1 formatters, IFormatterLogger formatterLogger, CancellationToken cancellationToken)
at Umbraco.ModelsBuilder.Api.ApiClient.GetModels(Dictionary`2 ourFiles, String modelsNamespace)
at Umbraco.ModelsBuilder.CustomTool.CustomTool.UmbracoModelsBuilder.GenerateRaw(String wszInputFilePath, String wszDefaultNamespace, IntPtr[] rgbOutputFileContents, UInt32& pcbOutput, String& errMsg)
Anyone can help with this?
EDIT
I have created my Api project and installed UmbracoCore, ModelsBuilder and ModelsBuilder.Api - same versions as my Web project (I think it's 3.0.7 ModelsBuilder version, not 8.0.4 as I originally tried) - so no installation is missing in the API or Web project.
| |
doc_23525673
|
I see that some kind of unit test framework is used. It probably somehow calls task0 but the IDE shows no references to task0 except one from todoTask0. The only reference to todoTask0 is in task0. So we have circular references but nowhere do I find an external reference to call up one of these functions.
Can someone explain to me how to get the Kotlin Koans running in the IntelliJ IDE?
A: The easiest way is to install the Kotlin Edu plugin. You may read this JB blog post for additional info.
You could also run all koans tests without the plugin. IDEA allows you to run applications and tests directly from the IDE by clicking the Run icon near the test or application definition:
A: already quite old question but I also struggled a bit. The way to do it is how they described it in their github repo (maybe they changed that since last time you checked)
https://github.com/Kotlin/kotlin-koans
How to build and run tests
Working with the project using Intellij IDEA or Android Studio:
Import the project as Gradle project. To build the project and run
tests use 'test' task on Gradle panel.
What I did:
*
*Clone from github via File -> new project from version control -> github
*After that was done I also could not run anything
*File -> New Project from existing soure -> Choose your folder
*Import Project from external model -> choose Gradle
*No need to change anything, after that it worked for me
A: Follow the documentation:
*Open up the project in IntelliJ IDEA or your favorite editor. Note: If
IntelliJ IDEA prompts you to update the Kotlin library, just click
yes.
*Run a test. Make it pass
You can trigger a test run by opening a file (e.g. kotlin-koans/test/i_introduction/_0_Hello_World/_00_Start.kt) and hitting:
You can find more information about running tests in IntelliJ in the documentation.
A: In my case, it was a zsh issue, which can be solved as follows:
*
*add setopt no_nomatch at the end of the file ~/.zshrc;
*then run source ~/.zshrc
A: You can run them by clicking the Check Task button. :)
| |
doc_23525674
|
<repositories>
<repository>
<id>MDM</id>
<url>My devnexus URL</url>
<name>My Repo</name>
</repository>
</repositories>
<dependencies>
...
<dependency>
<groupId>com.melissadata</groupId>
<artifactId>mdPhone</artifactId>
<version>2.0</version>
</dependency>
...
</dependencies>
This std-web also has a dependency of another project:
<dependency>
<groupId>com.proj.std</groupId>
<artifactId>std-api</artifactId>
<version>2.0</version>
</dependency>
When I clean install the std-api project, it is successful. And when I clean install the std-web project, I get this error:
Failed to execute goal on project std-web: Could not resolve dependencies for project com.proj.std:std-web:war:2.0: Failed to collect dependencies at com.proj.std:std-api:jar:2.0 -> com.melissadata:mdPhone:jar:3.0: Failed to read artifact descriptor for com.melissadata:mdPhone:jar:3.0: Could not transfer artifact com.melissadata:mdPhone:pom:3.0 from/to MDM (): PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target -> [Help 1]
I have verified thoroughly enough to confirm that the specific dependency com.melissadata:mdPhone:jar:3.0 is not specified anywhere in either the std-api or std-web pom files. Both the std-api and std-web projects point to the same JDK, which means they share the same cacerts. There should be no issue with the devnexus certificate, since std-api builds successfully as well.
Please help throwing some light on how to fix this.
| |
doc_23525675
|
I'm just using
System.out.print("(。•́︿•̀。)");
But this ends up printing (ᄑᄀ¬タ꼬チᄌ¬タ꼬タᄑᄀ)
I also tried using \ to escape formatting in case that was the issue, like:
System.out.print("(\。\•́\︿\•̀\。)");
But this resulted in an illegal escape character error.
A: All files on all modern systems are stored as a sequence of bytes. Every byte is a numeric value from zero to 2⁸−1 (that is, 0 to 255).
Since all files consist of bytes, characters in text files (including Java source files) need to be stored as bytes. When the characters consist only of ASCII characters, this is easy to do: there are only 128 ASCII characters, so each character corresponds to one of 128 values, and each character is represented by one byte value in the file.
However, when you want to represent characters which are not ASCII characters, like 。•́︿•̀。, the system where you’re working must know how you want those characters to be represented as bytes. There are hundreds of thousands of character values in existence, so one character cannot possibly fit into the byte value range.
The method for translating characters to bytes (and bytes to characters) is called a Charset, also known as a character encoding.
If you have non-ASCII characters in your source code, you need to tell the Java compiler how the file is representing those characters as bytes. You need to tell the compiler what character encoding was used to save your source file.
On non-Windows systems, files are almost always saved using the UTF-8 encoding. In fact, it appears your file was saved as a UTF-8 file, which usually is the right thing to do. UTF-8 is extremely common and is the best encoding to use in almost every case.
However, when you compiled your code, the Java compiler mistakenly believed that your file is in windows-1252 or a similar windows-12nn encoding. These windows encodings are one-byte encodings, capable of representing no more than 256 characters. The characters 。•́︿•̀。 are not valid in these encodings.
Your source file is a UTF-8 file, so 。 is represented as three bytes in that file. However, the Java compiler didn’t know that; it thought each of those bytes was a separate character, because it was assuming a Windows encoding where every byte represents one character.
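The effect is easy to reproduce; here is a small Python illustration (for demonstration only) of UTF-8 bytes being misread as a one-byte Windows encoding:

```python
s = "。"                              # one character (U+3002)
utf8_bytes = s.encode("utf-8")
print(len(utf8_bytes))                # 3 -> three bytes in UTF-8
# Misreading each byte as its own windows-1252 character yields
# three garbage characters instead of the original one:
print(utf8_bytes.decode("cp1252", errors="replace"))
```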
You need to tell the compiler which encoding was used to save your source file. If you’re building on the command line, you can use the -encoding compiler option. For example:
javac -encoding UTF-8 MyProgram.java
If you are using an IDE, you will need to open your project’s properties, find the compiler option for the character encoding, and change it to UTF-8.
| |
doc_23525676
|
I've just installed the Android development bundle on my Windows 8 laptop. I'm trying to install the first app "hello world" (http://developer.android.com/training/basics/firstapp/creating-project.html) to my Nexus 7 (2012) version 4.4.2 but it's not working.
Using the logcat viewer (whilst the build is taking place) I can see it is reporting:
"Couldn't load memtrack module (No such file or directory)" followed by
" failed to load memtrack module: -2 at run time."
manifest.xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.example.myfirstapp"
android:versionCode="1"
android:versionName="1.0" >
<uses-sdk
android:minSdkVersion="8"
android:targetSdkVersion="19" />
<application
android:allowBackup="true"
android:icon="@drawable/ic_launcher"
android:label="@string/app_name"
android:theme="@style/AppTheme" >
<activity
android:name="com.example.myfirstapp.MainActivity"
android:label="@string/app_name" >
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
Console output:
[2014-04-04 15:18:35 - myFirstApplication] Dx
trouble writing output: already prepared
[2014-04-04 15:18:43 - myFirstApplication] ------------------------------
[2014-04-04 15:18:43 - myFirstApplication] Android Launch!
[2014-04-04 15:18:43 - myFirstApplication] adb is running normally.
[2014-04-04 15:18:43 - myFirstApplication] No Launcher activity found!
[2014-04-04 15:18:43 - myFirstApplication] The launch will only sync the application package on the device!
[2014-04-04 15:18:43 - myFirstApplication] Performing sync
[2014-04-04 15:18:44 - myFirstApplication] Automatic Target Mode: using device 'mydeviceinfo'
[2014-04-04 15:18:44 - myFirstApplication] Uploading myFirstApplication.apk onto device 'mydeviceinfo'
[2014-04-04 15:18:44 - myFirstApplication] Installing myFirstApplication.apk...
[2014-04-04 15:18:47 - myFirstApplication] Success!
[2014-04-04 15:18:47 - myFirstApplication] \myFirstApplication\bin\myFirstApplication.apk installed on device
[2014-04-04 15:18:47 - myFirstApplication] Done!
"
I've started looking at a similar post, Couldn't load memtrack module Logcat Error, but I can't see what I would need to do for this situation.
A: If there is a newer version (compared to the version code defined in your manifest.xml) of the same application in the target emulator, you will get the error you mentioned.
Couldn't load memtrack module (No such file or directory)
failed to load memtrack module: -2
A: I faced this error several days ago. If your real device or emulator already contains an older version of the application you are trying to run, uninstall the old version and run again. It resolved my issue.
A: I was facing the same issue. Make sure you're running the project in the right emulator; sometimes Eclipse doesn't ask you which emulator it will use.
Go to the Run menu, Run Configurations, in the left list pick Android Application and then the application that you're trying to run. In the left panel pick the "Target" tab and choose the option "Always prompt to pick device" and try to run again.
Make sure to pick the right emulator. In my case I had recently installed BlueStacks, and Eclipse was trying to run the application on it without prompting me, instead of running it on the regular emulator.
A: I got the same error because of a missing semicolon (";") in the Java file. So do check that code statements are properly terminated.
| |
doc_23525677
|
var pdf = new jsPDF('p', 'mm', 'a4');
pdf.text(30, 30, 'Hello world!');
pdf.save('hello_world.pdf');
here's an example code.
When I run this it downloads the file but doesn't show the print page. All I want is to show the print page instead of downloading the file and then printing it.
Thank You!!!
A: Just use doc.output()
var doc = new jsPDF();
doc.text(20, 20, 'Hello world!');
doc.text(20, 30, 'This is client-side Javascript, pumping out a PDF.');
doc.addPage();
doc.text(20, 20, 'Do you like that?');
// Output as Data URI
doc.output('datauri');
CHROME
var doc = new jsPDF();
doc.text(20, 20, 'Hello world!');
doc.text(20, 30, 'This is client-side Javascript, pumping out a PDF.');
doc.addPage();
doc.text(20, 20, 'Do you like that?');
var base64string = doc.output('datauristring');
debugBase64( base64string );
function debugBase64(base64URL){
var win = window.open();
win.document.write('<iframe src="' + base64URL + '" frameborder="0" style="border:0; top:0px; left:0px; bottom:0px; right:0px; width:100%; height:100%;" allowfullscreen></iframe>');
}
| |
doc_23525678
|
Main Frame
- CView derived class
- CWnd derived class
--- CMFCTabCtrl derived class
---- CDialog derived class
The CMFCTabCtrl can hold in turn the CWnd derived class and so on and so on...
If you think of it as a tree of windows lets define the above to be at depth 0.
The problem occurs when the depth of the tree is 1, meaning:
Main Frame
- CView derived class
- CWnd derived class
--- CMFCTabCtrl derived class
----- CWnd derived class
------- CMFCTabCtrl derived class
-------- CDialog derived class
I added the following code to my application:
extern HHOOK hHook = nullptr;
LRESULT CALLBACK HookProc(int nCode, WPARAM wParam, LPARAM lParam)
{
    return CallNextHookEx(hHook, nCode, wParam, lParam);
}
hHook = SetWindowsHookEx(WH_CALLWNDPROC, &HookProc, AfxGetInstanceHandle(), GetCurrentThreadId());
I then ran the application and resized the main frame, I noticed the following:
*
*In the case where the tree depth is 0 the WM_ERASEBKGND message is received in the dialog.
*In the case where the tree depth is 1 the WM_ERASEBKGND message is not received in the dialog.
I hope my explanation was clear enough.
It seems odd that setting the hook would affect the behavior in such a dramatic way.
Did any of you encounter this sort of problem before?
A: I think I found the problem.
Each additional level of window nesting increases kernel stack usage during a resize, until there isn't enough stack left to call the wndproc, and we stop receiving messages.
More details can be found here:
http://blogs.msdn.com/b/alejacma/archive/2008/11/20/controls-won-t-get-resized-once-the-nesting-hierarchy-of-windows-exceeds-a-certain-depth-x64.aspx
| |
doc_23525679
|
ClassLoader currentCls=Thread.currentThread().getContextClassLoader();
InputStream template = currentCls.getResourceAsStream("system/CompartmentTerraformTemplate.json");
ObjectMapper _mapper = new ObjectMapper();
JsonNode obj=_mapper.readTree(template);
String templateString = obj.toString();
InputStream data=currentCls.getResourceAsStream("system/RequestSample.json");
HashMap<String,Object> mapRec=_mapper.readValue(data, HashMap.class);
StringSubstitutor sub = new StringSubstitutor(mapRec);
String finalString = sub.replace(templateString);
finalString=finalString.replace("=", ":");
System.out.println(finalString);
}
sample record for map
{
"compartment_id":"ocid1.compartment.oc1..adgnljndfgvbcoasdbffbvovafeooves34r3",
"description":"Testing for handling the JSON TF configuration",
"name":"compt_abhi",
"defined_tags":{"new":"users","Toy":"story"}
}
sample String Template in which value should be replaced
{
"resource":{
"oci_identity_compartment":{
"compt_req":{
"compartment_id":"${compartment_id}",
"description":"${description}",
"name":"${name}",
"defined_tags":"${defined_tags:-{}}",
"freeform_tags":"${freeform_tags:-{}}"
}
}
}
}
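The `${key}` / `${key:-default}` replacement that StringSubstitutor performs on this template can be sketched in Python with a regular expression (a hypothetical helper for illustration only; it does not cover StringSubstitutor's escaping rules):

```python
import re

def substitute(template, values):
    # Replace ${key} with values[key]; ${key:-default} falls back to
    # the default when the key is absent; unknown keys without a
    # default are left untouched.
    def repl(m):
        key, default = m.group(1), m.group(2)
        if key in values:
            return str(values[key])
        return default if default is not None else m.group(0)
    return re.sub(r"\$\{([^}:]+)(?::-([^}]*))?\}", repl, template)

print(substitute('"name":"${name}"', {"name": "compt_abhi"}))
print(substitute('"x":"${freeform_tags:-{}}"', {}))
```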
| |
doc_23525680
|
I was able to create a Turing machine that adds two unary numbers, and one that adds two binary numbers.
I have a general idea of how to solve this problem:
While first number > 0:
Decrement first number.
Increment the second number.
How do you actually decrement a decimal number?
A:
This way, we can add two numbers.
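As for decrementing the decimal number itself: the head scans to the rightmost digit, borrows through any 0s (turning them into 9s), and decrements the first nonzero digit it finds. A sketch of that digit-by-digit rule, in Python for illustration:

```python
def decrement_decimal(s):
    # Assumes s represents a number > 0 (as in the loop above).
    # Scan from the rightmost digit: each 0 becomes 9 (a borrow) and
    # the head moves left; the first nonzero digit is decremented.
    digits = list(s)
    i = len(digits) - 1
    while digits[i] == "0":
        digits[i] = "9"
        i -= 1
    digits[i] = str(int(digits[i]) - 1)
    return "".join(digits)

print(decrement_decimal("100"))  # 099
print(decrement_decimal("7"))    # 6
```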
| |
doc_23525681
|
But I don't know how to do the same for Flow. It doesn't seem to accept Babel plugins. Maybe it parses code completely on its own and doesn't even use Babel? I don't know. Does Flow itself take plugins? I don't know.
Is there some simple way for me to plug a function into SOME tool or other in order to arbitrarily transform my code just before Flow sees it? Or would I actually have to dig into Flow's source code and alter it in order to accomplish this? (I am not interested enough to do that.)
A:
[Flow] doesn't seem to accept Babel plugins.
No, flow has support for some proposed ECMAScript features (some of which were previously behind flow config options), but it does not have any kind of plugin system.
Maybe it parses code completely on its own and doesn't even use Babel?
It does. Flow implements a fully-fledged JavaScript -> AST parser that can be used entirely independently of the flow type checker.
Does Flow itself take plugins? I don't know.
No. I think that this is the most relevant issue suggesting this possibility:
one example I could imagine is if you wanted to put some Babel transforms in front of flow, to transpile experimental/custom language features back to ES6 without having to write it to a JS file and then reparse it again.
In this case he's saying that if flow could take any arbitrary compatible AST as input and perform type checking on it, then babel transforms could be performed on that AST before it was fed into flow.
Is there some simple way for me to plug a function into SOME tool or other in order to arbitrarily transform my code just before Flow sees it? Or would I actually have to dig into Flow's source code and alter it in order to accomplish this? (I am not interested enough to do that.)
The answer is hinted at above. The tool you describe would generally be babel. Basically you could run babel transforms over your code to remove constructions that flow does not recognize, then check the resulting intermediary JavaScript files using flow. By its nature, this solution precludes the possibility of something like LSP or other real-time checking, as you would always be performing a full check as part of a two-step process.
At the end of the day, there are very few situations in which such an approach would be worthwhile. The case would need to be highly non-standard.
| |
doc_23525682
|
I have sorted the arrays, but I think this is still not helping much.
Here's my pseudocode:
sort boxA
sort boxB
for i in boxA:
for j in boxB:
if i+j < value:
break
else if i+j > value:
count+=1
print the count
set count = 0
The question asks for the number of combinations of one item from boxA and one from boxB whose sum is greater than or equal to the value.
Input:
5 3 1200 #number of boxA | number of boxB | value
100 110 160 750 1030 #number of boxA
400 500 500 #number of boxB
Output:
5
Explanation:
There are five ways to combine boxA and boxB so that the sum >= value:
1. 750 + 500
2. 750 + 500
3. 1030 + 400
4. 1030 + 500
5. 1030 + 500
BoxA and boxB can each have 500,000 items in their lists. I think this kind of test case is what makes my algorithm exceed the time limit.
Can you suggest a more efficient algorithm that passes the time limit for this problem? Thank you.
A: For each box with a items in A, the number of boxes in B whose item count plus a is larger than or equal to a certain value d equals the number of boxes in B whose item count is at least (d - a).
So, first sort array B; then for each box with value x in A, use binary search to find the first index in B at which the item count is larger than or equal to d - x. Add (n - index) to the final result, where n is the number of items in B.
Time complexity is O(m log n)
Example:
We have two array A is {1,5,9,2,4,5} and B is {1,3,3,4,5,6,7,8};
We want to find the two boxes that has sum larger than 7 for example.
So, for each element in A
1 -> we use binary search to find the index of the smallest element greater than or equal to (7 - 1) in B, which is at index 5, so we add (8 - 5) to the result (with 8 being the number of elements in B).
5 -> we need to find (7 - 5) in B -> the search gives index 1 -> add (8 - 1) to the result.
...
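This approach can be sketched in Python with bisect (illustrative only; the example data comes from this thread):

```python
from bisect import bisect_left

def count_pairs(A, B, d):
    # For each a in A, the b in B with a + b >= d are exactly those
    # with b >= d - a; bisect_left finds the first such index in
    # sorted B, so (n - index) of them qualify.
    B = sorted(B)
    n = len(B)
    return sum(n - bisect_left(B, d - a) for a in A)

print(count_pairs([100, 110, 160, 750, 1030], [400, 500, 500], 1200))  # 5
print(count_pairs([1, 5, 9, 2, 4, 5], [1, 3, 3, 4, 5, 6, 7, 8], 7))   # 36
```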
A: You may use something like the following: Live example
std::size_t Count(std::vector<int>& A, std::vector<int>& B, int N)
{
std::vector<int>& a = A.size() < B.size() ? A : B;
std::vector<int>& b = A.size() < B.size() ? B : A;
std::sort(b.begin(), b.end());
std::size_t res = 0;
for (int e : a)
{
auto it = std::lower_bound(b.begin(), b.end(), N - e);
res += std::distance(it, b.end());
}
return res;
}
A: You need the divide and conquer strategy. Suppose the ranges [b1, e1) and [b2, e2) are sorted, the following procedure does the work.
#include <iterator>
template<class It1, class It2, class T>
size_t do_work(It1 b1, It1 e1, It2 b2, It2 e2, T const t){
if (b1 == e1 || b2 == e2) return 0;
auto const n1 = std::distance(b1, e1);
auto const n2 = std::distance(b2, e2);
if (n1 > n2) return do_work(b2, e2, b1, e1, t);// always divide the shorter sequence
auto const l11 = n1 / 2, l12 = n1 - l11;
auto const l21 = n2 / 2, l22 = n2 - l21;
auto const m1 = std::next(b1, l11);
auto const m2 = std::next(b2, l21);
if (*m1 + *m2 > t){
return do_work(b1, m1, b2, e2, t) + do_work(m1, e1, b2, m2, t) + l12 * l22;
}
else{
auto const _m1 = std::next(m1);
return do_work(b1, _m1, std::next(m2), e2, t) + do_work(_m1, e1, b2, e2, t);
}
}
If It1 and It2 are random access iterator, the time complexity is about O(n log2(n)), where n = max(n1, n2).
A: I'll assume you can sort both arrays and that you can traverse each one either forward or backward.
sort boxA
sort boxB
let a = first number in boxA // the smallest number in the set
let b = last number in boxB // the largest
let total = 0
let subtotal = 0
while a exists
{
while (b exists) and (a + b >= value)
{
let b = previous number in boxB // "Move" the boxB iterator one place
// toward the start of the array.
let subtotal = subtotal + 1 // Now subtotal is the number of times
// we have "moved" the boxB iterator since
// the algorithm started to execute.
}
// Now subtotal is the number of numbers b in boxB such that a + b >= value.
total += subtotal
let a = next number in boxA
}
print total
If either boxA or boxB contains duplicate entries (the same numeric value more than once), "next number" means "next copy of a number", not "next unique number", and similarly with "previous number".
I could have written while a exists ... let a = next number in boxA as for a in boxA instead, but I wanted to emphasize the relationship between the way this algorithm treats boxA and the way it treats boxB: it iterates one time through boxA (in the forward direction) and one time through boxB (in the backward direction) concurrently.
In particular, unlike a typical nested loop control structure, we do not set the iterator over boxB "back to the start" for each new value from boxA.
Instead, during the entire course of execution of the algorithm the "inner loop" can iterate only as many times as the number of numbers in boxB.
The running time of the algorithm is therefore the time it takes to sort the two arrays
plus an additional O(n), where n is the size of the larger array.
Of course the worst-case cost if the algorithm receives unsorted arrays is O(n log n) due to the sorting, but it is still faster (by a constant factor) than an algorithm that requires an additional O(n log n) steps after the arrays are sorted.
And if we assume the arrays are already sorted (for some other reason), then the algorithm runs in just O(n) time.
Update:
As pointed out in the comments, if you know how many items are in boxA in the first place (easily determined in O(1) time for some data structures--in particular, the "array" in commonly-used languages), you can add logic that will break out of the boxA loop when all the numbers in boxB have been visited. Instead of iterating an additional remaining_size_of_A times, just do
total += remaining_size_of_A * subtotal.
(Note that at this point, subtotal is equal to the size of boxB.)
This can save a few steps.
| |
doc_23525683
|
I have some experience with AngularJS and promises and have searched Stackoverflow but cannot find any solution to this problem.
HTML:
<div ng-repeat="area in vm.areas">
<span>{{area.name}}</span>
<span>Close to: </span> <span>{{vm.getCity(area)}}</span>
</div>
JS:
vm.getCity = function(area){
var center = getCenter(area.paths);
getAddress(center[0], center[1]).then(function(city){
console.log(city);
return city;
})
}
function getAddress (latitude, longitude) {
return $q(function (resolve, reject) {
var request = new XMLHttpRequest();
var method = 'GET';
var url = 'http://maps.googleapis.com/maps/api/geocode/json?latlng=' + latitude + ',' + longitude + '&sensor=true';
var async = true;
request.open(method, url, async);
request.onreadystatechange = function () {
if (request.readyState == 4) {
if (request.status == 200) {
var data = JSON.parse(request.responseText);
var results = data.results;
var returnString = ""
for(var i=0; i<results.length; i++){
var types = results[i].types;
if(types[0] === 'locality'){
returnString = results[i].address_components[0].long_name;
}
}
resolve(returnString);
}
else {
reject(request.status);
}
}
};
request.send();
});
};
I can see that the correct cities are showing up in the console but they are not visible in the view. Any help greatly appreciated!
A: Any reason why you're not using Angular's $http service? By not using it and making a custom HTTP call, Angular has no idea you're doing it. Therefore, when you get your data, Angular has no idea something has changed, and doesn't refresh the data or the view.
If you use Angular's $http service, you'll stay inside its ecosystem and make it aware of your changes.
$http({
method: 'GET',
url: 'http://maps.googleapis.com/maps/api/geocode/json?latlng=' + latitude + ',' + longitude + '&sensor=true'
}).then(function successCallback(response) {
// var data = ......
}, function errorCallback(response) {
});
| |
doc_23525684
|
Uncaught ReferenceError: $ is not defined
I have tried placing a JQuery script within the app and it does not work.
I just want the data to append to the #resultContainer when the page is loaded
app/views/locations/show.html.erb
<div id="resultContainer"></div>
app/assets/javascripts/application.js
var _PremiumApiBaseURL = 'http://api.worldweatheronline.com/premium/v1/';
var _PremiumApiKey = 'APIKEY';
//Get Marine Weather Data
function JSONP_MarineWeather(input) {
var url = _PremiumApiBaseURL + "marine.ashx?q=" + input.query +
"&format=" + input.format +
"&fx=" + input.fx +
"&key=" + _PremiumApiKey +
"&tide=yes&";
jsonP(url, input.callback);
}
// Helper
function jsonP(url, callback) {
$.ajax({
type: 'GET',
url: url,
async: false,
contentType: "application/json",
jsonpCallback: callback,
dataType: 'jsonp',
success: function (json) {
console.dir('success');
},
error: function (e) {
console.log(e.message);
}
});
}
var resultContainer = $('#resultContainer');
var output = '';
$(document).ready(function () {
GetMarineWeather();
});
function GetMarineWeather(e) {
var marineWeatherInput = {
query: '26.13,-80.10',
format: 'JSON',
fx: '',
callback: 'MarineWeatherCallback'
};
JSONP_MarineWeather(marineWeatherInput);
e.preventDefault();
}
function MarineWeatherCallback(marineWeather) {
var allDataToday = marineWeather.data.weather[0]
output = "<br/> Date: " + allDataToday.date;
output += "<br/> Min Temp (f): " + allDataToday.mintempF;
output += " - Max Temp (f): " + allDataToday.maxtempF;
output += "<br/>";
//6AM
output += "<br/> Time: 6AM";
output += " - Surf: " + allDataToday.hourly[2].swellHeight_ft + "ft";
output += " - Swell: " + allDataToday.hourly[2].swellDir16Point + " " + allDataToday.hourly[2].swellPeriod_secs + "sec";
resultContainer.empty();
resultContainer.html(output);
}
Help
A: Make sure you've properly included the jQuery library before calling jQuery functions, or check for conflicting JavaScript libraries that share jQuery's $ alias.
<script src="http://code.jquery.com/jquery-latest.min.js" type="text/javascript"></script>
If any other JavaScript library's $ variable conflicts with jQuery, you can use the jQuery.noConflict() method to avoid the clash.
Eg.
var jq = jQuery.noConflict();
jq( "div p" ).hide(); //Instead of $( "div p" ).hide();
A: If you can access jQuery by typing jQuery, you can alias it yourself with $ = jQuery;
| |
doc_23525685
|
df = pandas.DataFrame([[2001, "Jack", 77], [2005, "Jack", 44], [2001, "Jill", 93]],columns=['Year','Name','Value'])
Year Name Value
0 2001 Jack 77
1 2005 Jack 44
2 2001 Jill 93
For each unique Name, I would like to keep the row with the largest
Year value. In the above example I would like to get the table
Year Name Value
0 2005 Jack 44
1 2001 Jill 93
I tried solving this question with groupby + (apply):
df.groupby('Name', as_index=False)\
.apply(lambda x: x.sort_values('Value').head(1))
Year Name Value
0 1  2005  Jack     44
1 2 2001 Jill 93
Not the best approach, but I'm more interested in what is happening, and why. The result has a MultiIndex that looks like this:
MultiIndex(levels=[[0, 1], [1, 2]],
labels=[[0, 1], [0, 1]])
I'm not looking for a workaround. I'm actually more interested to know why this happens, and how I can prevent it without changing my approach.
A: IIUC, use group_keys=False:
df.groupby('Name', group_keys=False).apply(lambda x: x.sort_values('Value').head(1))
Output:
Year Name Value
1 2005 Jack 44
2 2001 Jill 93
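Not part of either answer's code, but the mechanism behind the MultiIndex can be shown directly: groupby(...).apply concatenates the per-group results and, with the default group_keys=True, prepends the group key as an extra index level, which is exactly where the MultiIndex comes from. A minimal sketch (variable names are mine):

```python
import pandas as pd

df = pd.DataFrame(
    [[2001, "Jack", 77], [2005, "Jack", 44], [2001, "Jill", 93]],
    columns=["Year", "Name", "Value"],
)

def pick(g):
    # Same per-group operation as in the question.
    return g.sort_values("Value").head(1)

with_keys = df.groupby("Name").apply(pick)                      # key prepended
without_keys = df.groupby("Name", group_keys=False).apply(pick)

print(with_keys.index.nlevels)     # 2: (Name, original row label)
print(without_keys.index.nlevels)  # 1: original row labels only
```

So group_keys controls only whether the group key is added as an index level; the per-group computation is identical either way.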
A: use .reset_index(drop=True)
df.groupby('Name').apply(lambda x: x.sort_values('Value').head(1)).reset_index(drop=True)
| |
doc_23525686
|
So everytime I rake db:migrate a few migrations are run and then it stops at a "Table already exists" or "Column already exists"
Is there a way to tell rake what is going on, or even better, a argument that I can pass to rake db:migrate to tell it to ignore "already exists" errors and just move the hell on.
A: You can specify a force: true argument on create_table to force it to drop the table and recreate it which usually gets round these errors. However, that seems like a bit of a brute force way to solve the problem.
The problem usually occurs when you dump the schema from production and load it locally. The production schema doesn't know anything about new tables locally and it copies the entire list of schema_migrations so the fact that you've run a migration locally is lost.
One way to get round this problem is to skip the schema_migrations table from the dump. You'll still get all your production data loaded locally but you won't overwrite your local list of migrations.
With mysqldump it's a question of adding an --ignore-table=your_db.schema_migrations parameter (the table has to be qualified with the database name).
However, you need to be careful with any way that you solve this problem. It's very easy to skip a migration because it creates a table and forget that it also adds a column to an existing table.
I generally prefer a manual approach and either add the values to schema_migrations for the migrations I know have happened locally. Or just comment out the bits of a migration that are causing problems whilst I rerun them locally. Neither is particularly nice but I know I haven't missed any steps locally that way.
| |
doc_23525687
|
I want that the 2 textviews will be side by side, so the second view will start at the end of the first view.
Please help :-)
A: This is the simplest approach; if you also want the two views to share the space equally, you can use layout_constraintHorizontal_chainStyle.
<?xml version="1.0" encoding="utf-8"?>
<android.support.constraint.ConstraintLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
android:layout_width="match_parent"
android:layout_height="match_parent">
<TextView
android:id="@+id/text_view1"
android:layout_width="0dp"
android:layout_height="wrap_content"
android:text="Random 1"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toTopOf="parent" />
<TextView
android:id="@+id/text_view2"
android:layout_width="0dp"
android:layout_height="wrap_content"
android:text="Random 1"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toEndOf="@+id/text_view1"
app:layout_constraintTop_toTopOf="parent" />
</android.support.constraint.ConstraintLayout>
Have a look at this: https://medium.com/@loutry/guide-to-constraintlayout-407cd87bc013. Chain style is generally used when you want views to spread equally, either horizontally or vertically. Chains are controlled by attributes set on the first element of the chain (the "head" of the chain), which is the left-most widget for horizontal chains and the top-most widget for vertical chains.
A: Try this... Just adjust to top and bottom of the views according to your layout.
<TextView
android:id="@+id/leftView"
android:layout_width="0dp"
android:layout_height="wrap_content"
android:layout_marginEnd="5dp"
android:text="Left View"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintEnd_toStartOf="@id/rightView"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toTopOf="parent" />
<TextView
android:id="@+id/rightView"
android:layout_width="0dp"
android:layout_height="wrap_content"
android:layout_marginStart="5dp"
android:text="Right View"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toEndOf="@id/leftView"
app:layout_constraintTop_toTopOf="parent" />
A: Try something like this:
<androidx.constraintlayout.widget.ConstraintLayout
android:layout_width="match_parent"
android:layout_height="wrap_content">
<TextView
android:id="@+id/txt1"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
app:layout_constraintLeft_toLeftOf="parent"/>
<TextView
android:layout_width="0dp"
android:layout_height="wrap_content"
app:layout_constraintLeft_toRightOf="@id/txt1"
app:layout_constraintRight_toRightOf="parent"/>
</androidx.constraintlayout.widget.ConstraintLayout>
| |
doc_23525688
|
I want to be able to add a section where I can add an alt tag to the image. Because I use some images more than once on different pages, using the standard WordPress alt tags would mean uploading the image multiple times.
I have used visual composer on other sites that has been customized by the theme author and it has this function already but I can't seem to get it to work on the standard visual composer.
I added this code into the PHP file:
array(
'type' => 'textfield',
'heading' => __( 'Image ALT tag', 'js_composer' ),
'param_name' => 'image_alt',
'holder' => 'alt',
'description' => __( 'Enter the image alt text.', 'js_composer' ),
),
And it worked as far as adding the alt tag text box but this didn't translate to the front end where it was still pulling the alt tag from the title tag added in the media library.
Am I missing a function or something here?
A: Edit this file:
js_composer/include/templates/shortcodes/vc_single_image.php
Add this line:
$img['thumbnail'] = str_replace( '<img ', '<img alt="' . $image_alt . '" ', $img['thumbnail'] );
Somewhere before this line:
$wrapperClass = 'vc_single_image-wrapper ' . $style . ' ' . $border_color;
| |
doc_23525689
|
dgvReport.DataSource = new DataView(dt, "StudentID = " + txtSearch.Text, "StudentID", DataViewRowState.CurrentRows);
Any help would be appreciated. Thank you.
A: You can use the RowFilter property for this:
DataView dataView = new DataView(dt);
dataView.RowFilter = "age > 14 and age < 19";
dgvReport.DataSource = dataView;
You will get the rows with ages from 15 through 18.
| |
doc_23525690
|
list = ['bill gates','elon musk','aamir khan','larry page']
allPosts = Post.objects.filter(author=list)
When I change list filter can work dynamically
A: You can use __in:
my_list = ['bill gates','elon musk','aamir khan','larry page']
allPosts = Post.objects.filter(author__in=my_list)
Note: Never use python built in functions (Ex: list) as variable names. It would be better to avoid the need for this by choosing a different variable name.
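For intuition, author__in translates to a SQL IN (...) clause. The sqlite3 sketch below (my own illustration, not Django code; the table and column names are made up) shows the equivalent query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE post (author TEXT, title TEXT)")
conn.executemany(
    "INSERT INTO post VALUES (?, ?)",
    [("bill gates", "p1"), ("elon musk", "p2"), ("someone else", "p3")],
)

authors = ["bill gates", "elon musk", "aamir khan", "larry page"]
placeholders = ", ".join("?" for _ in authors)  # one ? per list entry
rows = conn.execute(
    f"SELECT title FROM post WHERE author IN ({placeholders})", authors
).fetchall()
# only posts whose author appears in the list are returned
```

This is why the filter works for any list contents: the ORM just rebuilds the IN clause from whatever list you pass.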
| |
doc_23525691
|
Usually I run
git log
Then I create a branch for each commit and investigate each sequentially.
Is there a way to create a branch for each commit and name them sequentially like 01, 02 ...etc.
A: This bash script will do the trick, creating a branch called Bn starting at B1 for each commit. I assume you don't want to do your whole repo but some A..B range (excluding A, including B), which I'm arbitrarily setting at HEAD~10..HEAD here.
A=HEAD~10
B=HEAD
counter=0
for commit in `git rev-list --reverse $A..$B`; do
counter=$((counter + 1))
git branch B$counter $commit
done
Notice the use of --reverse: without it, this loop would assign branch B1 to the most recent commit; with it, the loop assigns B1 to the oldest commit.
Now, if you want to create a branch for each commit in the current branch, replace the for line with:
for commit in `git rev-list --reverse HEAD`; do
and if you want to cover every commit in the repo, in every branch, use this for line:
for commit in `git rev-list --reverse --all`; do
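To see the loop in action you can try it in a throwaway repository (everything below, paths, names, and commit messages, is illustrative):

```shell
#!/bin/sh
set -e
# Scratch repo with three commits.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "you@example.com"
git config user.name "you"
for i in 1 2 3; do
  echo "$i" > file.txt
  git add file.txt
  git commit -q -m "commit $i"
done
# The loop from the answer, over the whole current branch:
counter=0
for commit in $(git rev-list --reverse HEAD); do
  counter=$((counter + 1))
  git branch "B$counter" "$commit"
done
git branch --list 'B*'   # B1, B2, B3, oldest commit first
```

After running it, `git log B1` shows only the first commit, B2 the first two, and so on, which is handy for the sequential investigation described in the question.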
| |
doc_23525692
|
Method:
public void setIncludeAllSubaccounts(JAXBElement<Boolean> paramJAXBElement)
{
this.includeAllSubaccounts = paramJAXBElement;
}
This does not compile:
returnMessageFilter.setIncludeAllSubaccounts(true);
A: A JAXBElement is generated as part of your model when a JAXB (JSR-222) implementation would not be able to tell what to do based on the value alone. In your example you probably had an element like:
<xsd:element
name="includeAllSubaccounts" type="xsd:boolean" nillable="true" minOccurs="0"/>
The generated property can't be boolean because boolean doesn't represent null. You could make the property Boolean but then how do you distinguish been a missing element and an element set with xsi:nil. This is where JAXBElement comes in. See below for a full example:
Foo
package forum12713373;
import javax.xml.bind.JAXBElement;
import javax.xml.bind.annotation.*;
@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
public class Foo {
@XmlElementRef(name="absent")
JAXBElement<Boolean> absent;
@XmlElementRef(name="setToNull")
JAXBElement<Boolean> setToNull;
@XmlElementRef(name="setToValue")
JAXBElement<Boolean> setToValue;
}
ObjectFactory
package forum12713373;
import javax.xml.bind.JAXBElement;
import javax.xml.bind.annotation.*;
import javax.xml.namespace.QName;
@XmlRegistry
public class ObjectFactory {
@XmlElementDecl(name="absent")
public JAXBElement<Boolean> createAbsent(Boolean value) {
return new JAXBElement<Boolean>(new QName("absent"), Boolean.class, value);
}
@XmlElementDecl(name="setToNull")
public JAXBElement<Boolean> createSetToNull(Boolean value) {
return new JAXBElement<Boolean>(new QName("setToNull"), Boolean.class, value);
}
@XmlElementDecl(name="setToValue")
public JAXBElement<Boolean> createSetToValue(Boolean value) {
return new JAXBElement<Boolean>(new QName("setToValue"), Boolean.class, value);
}
}
Demo
package forum12713373;
import javax.xml.bind.*;
public class Demo {
public static void main(String[] args) throws Exception {
JAXBContext jc = JAXBContext.newInstance(Foo.class);
ObjectFactory objectFactory = new ObjectFactory();
Foo foo = new Foo();
foo.absent = null;
foo.setToNull = objectFactory.createSetToNull(null);
foo.setToValue = objectFactory.createSetToValue(false);
Marshaller marshaller = jc.createMarshaller();
marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
marshaller.marshal(foo, System.out);
}
}
Output
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<foo>
<setToNull xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:nil="true"/>
<setToValue>false</setToValue>
</foo>
A: Thanks to NullUserException's comment, I was able to implement this in one line. It is slightly different, so I thought I'd post it for the benefit of others.
returnMessageFilter.setIncludeAllSubaccounts(new JAXBElement<Boolean>(new QName("IncludeAllSubaccounts"),
Boolean.TYPE, Boolean.TRUE));
Just to clarify, the QName is the XmlElement tag name.
Also, needed to import:
import javax.xml.bind.JAXBElement;
Edit
Better to use the convenience method in ObjectFactory class that returns the JAXBElement as Blaise suggested.
| |
doc_23525693
|
class Test(models.Model):
owner = models.ForeignKey(Member)
name = models.CharField(max_length=6)
score = models.IntegerField(validators=[MinValueValidator(1), MaxValueValidator(10)])
service = models.ForeignKey(Service)
feel = models.ForeignKey(feel)
....
If I do
t, created = Test.objects.get_or_create(owner=member)
It would throw IntegrityError: (1048, "Column 'xxx' cannot be null") .
But I just want to fill data later.It is bad to add null=True to Model field which is indeed required not null.How can I achieve my goal?
A: You just have to create the object in memory and fill out the data before you try to save it to the database. As far as the database is concerned, that row is invalid until everything is filled out.
So:
t = Test(owner=member) # Does not go to the database
#
# some other non-related code
#
# The you update your Test object with all of the fields and save
t.name = 34
t.score = 4
t.service = service # added separately
t.feel = feel # added separately
t.save()
A: If you are using a ForeignKey with get_or_create, you need to state whether or not you want to allow it to be null; if not, you'll get that error when the row is saved.
I know you said you didn't want to use NULL (however, for get_or_create it would be required).
You need to set the following in order to use the get_or_create method.
feel = models.ForeignKey(feel, null=True, blank=True, unique=False/True )
I would move away from get or create, and go the route of regularly creating the object.
Example:
obj = ModelObject(data="blah", data2="blah")  # not saved to the database yet
# do your other processing
obj.other_data = "value"
obj.save()
(Note: Model.save() takes no commit argument; that belongs to ModelForm.save(). The instance simply stays in memory until save() is called.)
All the best, happy coding.
Jody Fitzpatrick
| |
doc_23525694
|
The bottom axis is supposed to look like the top, being placed right below the tip of the highest chart. I have tried to .orient the bottom axis to both "top" and "bottom" to little avail. Any ideas?
var width = 960,
fullHeight = 850,
height = 350;
var y = d3.scale.linear()
.domain([0, d3.max(data)])
.range([height, 0]);
var axisScale = d3.scale.ordinal()
.domain(data)
.rangeBands([0, width]);
var axisScale2 = d3.scale.ordinal()
.domain(data2)
.rangeBands([0, width]);
// .range([0, 960]);
var chart = d3.select(".chart")
.attr("width", width)
.attr("height", fullHeight);
var chart1 = chart.append("g")
.attr("class", "chart-one")
.attr("height", height)
.attr("width", width);
var chart2 = chart.append("g")
.attr("class", "chart-two")
.attr("transform", function() { return "translate(0," + (height + 70) + ")"; })
.attr("height", height)
.attr("width", width);
var barWidth = width / data.length;
var bar = d3.select(".chart-one")
.selectAll("g")
.data(data)
.enter().append("g")
.attr("class", "one")
.attr("transform", function(d, i) { return "translate(" + i * barWidth + ",0)"; });
bar.append("rect")
.attr("y", function(d) { console.log(d, y(d)); return y(d) + 60; })
.attr("height", function(d) { return height - y(d); })
.attr("width", barWidth - 1);
var bar2 = d3.select(".chart-two")
.selectAll("g")
.data(data2)
.enter().append("g")
.attr("class", "two")
.attr("transform", function(d, i) { return "translate(" + i * barWidth + ",0)"; });
bar2.append("rect")
.attr("height", function(d) { return height - y(d) })
.attr("width", barWidth - 1);
var xAxis = d3.svg.axis()
.scale(axisScale)
.tickValues(data);
// .tickPadding([-10]);
// .orient("top");
var xAxis2 = d3.svg.axis()
.scale(axisScale2)
.tickValues(data2)
.tickPadding([15]);
var xAxis3 = d3.svg.axis()
.scale(axisScale)
.tickValues(data)
.tickPadding(27);
var xBotAxis = d3.svg.axis()
.scale(axisScale)
.orient("top")
.tickValues(data);
d3.select(".chart-one").append("g").attr("class", "axis").call(xAxis);
d3.select(".chart-one").append("g").attr("class", "axis").call(xAxis2);
d3.select(".chart-one").append("g").attr("class", "axis").call(xAxis3);
d3.select(".chart-two").append("g").attr("class", "axis").call(xBotAxis);
A: The orientation of the axis only affects where the labels are with respect to the line, not the overall position. If you want it to appear at the bottom, you need to move the element it's appended to there, i.e.
d3.select(".chart-two").append("g")
.attr("transform", "translate(0," + (height-10) + ")")
.attr("class", "axis")
.call(xBotAxis);
You may want to tweak the offset from the total height (10 above) and/or the height of the chart and bars to your liking.
| |
doc_23525695
|
I heard this isn't possible, but maybe it changed, or if not is there any other way around.
A: You can do it now!
context.System.device.deviceId
As far as I can tell it only works on real devices. So if you are testing in the developer's Skills Manager you don't get the field, but when used with a real Alexa device, it works.
A: This is not yet possible, but you can get the 'userId' from event.session.user.userId.
| |
doc_23525696
|
And there is a desktop computer with Linux on board (though I presume that makes no difference). I need to type an address like someaddress.com into the web browser and see the website hosted on my server.
My /etc/hosts:
127.0.0.1 localhost
105.123.123.123 someaddress.com
105.123.123.123 www.someaddress.com
But it doesn't work: I still see the real someaddress.com website. What could be wrong? It would be great if you could help me with that.
P.S. Why do I need this? There is a project with fixed links (like someaddress.com/inf) that I need to test.
A: Maybe your distribution is preferring DNS over values in /etc/hosts.
Check /etc/nsswitch.conf. It should have a hosts line something like:
hosts: files dns
Just make sure files comes before dns.
| |
doc_23525697
|
In the UI I am showing Start Date and End Date
.where('createdAt', '>=', startDate)
.where('createdAt', '<=', endDate)
The query works as expected, but when I have the same date on both fields I don't get the results on that date and instead I get an empty result.
Is there a way to fetch a single day's results when the same date is selected on both ends of a range search?
Example:
Start Date    End Date      Result
10/03/2020    12/03/2020    -> returns 11/03/2020 items
11/03/2020    11/03/2020    -> returns empty
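One common cause (an assumption on my part, since the question doesn't show how startDate and endDate are built): if both bounds parse to midnight of the same day, the range collapses to a single instant, so anything created later that day falls outside it. A plain JavaScript illustration, independent of Firestore:

```javascript
// Hypothetical dates, parsed in UTC for determinism.
const start = new Date("2020-03-11T00:00:00Z");
const end = new Date("2020-03-11T00:00:00Z"); // same calendar day as start

// A document created at 10:30 that day falls outside [start, end],
// because the range collapsed to the single instant at midnight:
const createdAt = new Date("2020-03-11T10:30:00Z");
const inRange = createdAt >= start && createdAt <= end;

// One common fix: push the end bound to the end of the day.
const endOfDay = new Date("2020-03-11T23:59:59.999Z");
const inRangeFixed = createdAt >= start && createdAt <= endOfDay;
```

If that is what's happening here, widening endDate to the end of the selected day (or using an exclusive upper bound of the next midnight) would make a single-day selection return that day's items.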
| |
doc_23525698
|
How to retrieve variables of each task in a process ?
A: For a userTask with id="taskTest" you can use this code:
ProcessEngine processEngine = ProcessEngines.getDefaultProcessEngine();
RuntimeService runtimeService = processEngine.getRuntimeService();
TaskService taskService = processEngine.getTaskService();
Map<String, Object> vars = taskService.createTaskQuery()
        .processInstanceId(pi.getId())   // pi is the ProcessInstance
        .taskDefinitionKey("taskTest")
        .singleResult()
        .getProcessVariables();
you can use this too : Variables
A: For a userTask with id="task1" you can use the TaskService (see "How to get task variables"):
ProcessEngine processEngine = ProcessEngines.getDefaultProcessEngine();
TaskService taskService = processEngine.getTaskService();
List<Task> tasks = taskService.createTaskQuery().taskDefinitionKey("task1").includeProcessVariables().orderByTaskCreateTime().desc().list();
for (Task task : tasks) {
Map<String, Object> variables = task.getProcessVariables();
}
A: If you need to read task local variables into the process instance, you will need to add a taskListener against the "complete" event. If we are talking simple variable mapping, you can use a scriptListener, otherwise a Java class.
Within the listener, you have access to the "execution" (script listener) or the DelegateTask (Java class), where you can set/get process instance variables (getVariables() and setVariable()) or local variables (getVariableLocal() and setVariableLocal()).
Hope this helps.
| |
doc_23525699
|
What field should I add to the SelectProperties collection? Or where can I find this information?
KeywordQuery keywordQuery = new KeywordQuery(SPContext.Current.Site);
keywordQuery.QueryText = queryText;
keywordQuery.ResultsProvider = SearchProvider.Default;
var selecProperties = keywordQuery.SelectProperties;
selecProperties.Add("UniqueId");
selecProperties.Add("FileLeafRef");
selecProperties.Add("ListId");
selecProperties.Add("WebId");
selecProperties.Add("Created");
selecProperties.Add("CheckoutUserOWSUSER");
SearchExecutor searchExecutor = new SearchExecutor();
ResultTableCollection resultTableCollection = searchExecutor.ExecuteQuery(keywordQuery);
ResultTable resultTable = resultTableCollection.Filter("TableType", KnownTableTypes.RelevantResults).FirstOrDefault();
DataTable dataTable = resultTable.Table;
A: I found the solution in the KeywordQuery object, in the HitHighlightedProperties property. All that is needed is to add the crawled properties (managed properties) to both SelectProperties and HitHighlightedProperties; then that field will contain XML with details about the found keyword across all available fields.