Scala how to prevent empty lines in scala interpolated string template?
I have an object with some optional attributes which I want to list in a string template (in a sort of JSON way). If the attribute is None I don't want to see anything; if it is not None I want to see a label with the value:
case class FooTest(a: String, b: Option[String], c: String)
val test = FooTest("one", Some("two"), "three");
val myTemplate = s"""
| "een" : "${test.a}"
| ${test.b.fold(""){tb => s"\"twee\": \"$tb\""}}
| "drie" : "${test.c}"
""".stripMargin
This works as expected with Some("value") for test.b, but it leaves a blank line when test.b is None:
"een" : "one"
"drie" : "three"
How can I get rid of the empty line (apart from replacing 2 subsequent newline characters with 1 newline in the resulting string)?
I would expect something like paiges: https://github.com/typelevel/paiges to provide a better way to solve this problem.
Depends on how pervasive this kind of pretty printing is. Dependencies are not free. I would probably use string templates for a handful of occasions, implement a simple pretty printer if it pops up more often than that and consider adding a dependency if it’s pervasive, I don’t have the resources to write it on my own, and have specific requirements that make that dependency fit for adoption in my project.
One possibility might be to inline the second line and explicitly add the newline, which kinda works but doesn't interact too nicely with indentation:
case class FooTest(a: String, b: Option[String], c: String)
def template(test: FooTest) = s"""
| "een" : "${test.a}"${test.b.fold(""){tb => s"\n\"twee\": \"$tb\""}}
| "drie" : "${test.c}"
""".stripMargin
println(template(FooTest("one", Some("two"), "three")))
println(template(FooTest("one", None, "three")))
Output:
"een" : "one"
"twee": "two"
"drie" : "three"
"een" : "one"
"drie" : "three"
If you want to keep the template string nice and work well with the template indentation, one possible approach is to remove the empty lines in a second pass, as follows:
case class FooTest(a: String, b: Option[String], c: String)
def template(test: FooTest) = s"""
| "een" : "${test.a}"
| ${test.b.fold(""){tb => s"\"twee\": \"$tb\""}}
| "drie" : "${test.c}"
""".stripMargin
def deleteEmptyLines(s: String) = s.replaceAll("\\n\\s*\\n", "\n")
println(deleteEmptyLines(template(FooTest("one", Some("two"), "three"))))
println(deleteEmptyLines(template(FooTest("one", None, "three"))))
Output:
"een" : "one"
"twee": "two"
"drie" : "three"
"een" : "one"
"drie" : "three"
You can play around with this code here on Scastie.
Magento - Accessing values from an Mage_Catalog_Model_Resource_Product_Collection object
I am new to magento and php and I am trying to retrieve values from an object.
$_productCollection=$this->getLoadedProductCollection();
When I do a print_r() I am getting something like below
Mage_Catalog_Model_Resource_Product_Collection Object
(
[_flatEnabled:protected] => Array
(
[1] =>
)
[_productWebsiteTable:protected] => sn_catalog_product_website
[_productCategoryTable:protected] => sn_catalog_category_product
[_addUrlRewrite:protected] => 1
[_urlRewriteCategory:protected] => 3
[_addMinimalPrice:protected] =>
[_addFinalPrice:protected] =>
[_allIdsCache:protected] =>
[_addTaxPercents:protected] => 1
[_productLimitationFilters:protected] => Array
(
[category_id] => 3
[category_is_anchor] => 1
[store_id] => 1
[use_price_index] => 1
[customer_group_id] => 0
[website_id] => 1
[visibility] => Array
(
[0] => 2
[1] => 4
)
)
)
And I need to get the category ID in this.
Can someone please help me
FYI: In Magento, collections are iterable resource objects which may contain a collection of data models. To learn about the contained items' data, you can foreach($collection as $item) and inside use $item->debug().
Trying to get the category from a product list is the long way around as a product list is part of a category. I assume you are doing this on category pages (otherwise you are dealing with more than one category, such as on a search page) in which case you can retrieve it more directly;
$category = Mage::registry('current_category');
$categoryId = $category->getId();
// to find out what other info is stored, temporarily use this
print_r($category->debug());
You are dealing with a collection, which is a container holding multiple collection items (objects), so to get data out of it you first need to iterate over it:
$_productCollection=$this->getLoadedProductCollection();
foreach($_productCollection as $product){
//display data that object contains
//print_r($product->getData());
//display category id's that product is associated with
//print_r($product->getCategoryIds());
}
What should I do when I feel the urge to use object-style polymorphic messaging in Haskell?
I have a follow-up question to this question. What's the idiomatic Haskell equivalent to a polymorphic class-level constant in an object-oriented language?
I'm experimenting with event-sourcing, using Event Store and Haskell. I've got stuck trying to figure out the logic that saves and loads events.
Event Store is based on the concept of event streams; in an object-oriented domain model there's normally a 1:1 relation between event streams and aggregates. You can organise the streams into categories; typically you'll have one category per aggregate class in your domain model. Here's a sketch of how you might model it in C#:
interface IEventStream<T> where T : Event
{
string Category { get; }
string StreamName { get; }
IEnumerable<T> Events { get; }
}
class PlayerEventStream : IEventStream<PlayerEvent>
{
public string Category { get { return "Player"; } }
public string StreamName { get; private set; }
public IEnumerable<PlayerEvent> Events { get; private set; }
public PlayerEventStream(int aggregateId)
{
StreamName = Category + "-" + aggregateId;
}
}
class GameEventStream : IEventStream<GameEvent>
{
public string Category { get { return "Game"; } }
public string StreamName { get; private set; }
public IEnumerable<GameEvent> Events { get; private set; }
public GameEventStream(int aggregateId)
{
StreamName = Category + "-" + aggregateId;
}
}
class EventStreamSaver
{
public void Save(IEventStream stream)
{
CreateStream(stream.StreamName);
AddToCategory(stream.StreamName, stream.Category);
SaveEvents(stream.StreamName, stream.Events);
}
}
This code ensures a GameEvent never gets sent to a Player's event stream and vice versa, and that all the categories are correctly assigned. I'm using polymorphic constants for Category to help protect this invariant, and to make it easy to add new stream types later.
Here's my first attempt at translating this structure into Haskell:
data EventStream e = EventStream AggregateID [e]
streamName :: Event e => EventStream e -> String
streamName (EventStream aggregateID (e:events)) = (eventCategory e) ++ '-':(toString aggregateID)
class Event e where
eventCategory :: e -> String
-- and some other functions related to serialisation
instance Event PlayerEvent where
eventCategory _ = "Player"
instance Event GameEvent where
eventCategory _ = "Game"
saveEventStream :: Event e => EventStream e -> IO ()
saveEventStream stream@(EventStream id events) =
let name = streamName stream
category = eventCategory $ head events
in do
createStream name
addToCategory name category
saveEvents name events
This is pretty ugly. The type system requires eventCategory to mention e somewhere in its signature, even though it's not used anywhere in the function. It'll also fail if the stream contains no events (because I'm trying to attach the category to the type of events).
I'm aware that I'm trying to write C# in Haskell - is there a nicer way to implement polymorphic constants of this type?
Update: As requested, here are the type signatures that I think the (presently unimplemented) stubs in the do block should have:
type StreamName = String
type CategoryName = String
createStream :: StreamName -> IO ()
addToCategory :: StreamName -> CategoryName -> IO ()
saveEvents :: Event e => StreamName -> [e] -> IO ()
These functions would be responsible for communicating with the database - setting up the schema and serialising out the events.
Are you wanting a heterogeneous list (or other data structure) of Event e => e? If you're trying to implement this with Haskell type classes, then you're going about it the wrong way. You can "fake" it with existentially quantified types, but those aren't the recommended way to do it.
@bheklilr My data structures are as per my previous question: data PlayerEvent = PlayerCreated Name | NameUpdated Name. An event stream representing a Player aggregate would then be an EventStream PlayerEvent
@evanmcdonnal - BURN! In fact I'm working on a pet project myself, so I think I'm entitled to use someone else's pet project ;)
@evanmcdonnal Your comment has been flagged as not constructive. Haskell is certainly not an "academic pet project", and SO is not the place to start flame wars over languages and what they should be used for.
@BenjaminHodgson What are the types of createStream, addToCategory, and saveEvents? Since they're not defined yet, what do you think they types should be? Often in Haskell it's advantageous to figure out the types with myFunc :: <Type Signature>; myFunc = undefined, get it to compile, then start implementing the functionality.
@bheklilr I just stubbed them out to make the question read fluently. I don't think they're directly relevant to the question. And since I'm still learning about Event Store, I might have got the steps completely wrong ;). But anyway, see my edit for the type signatures I'm imagining for those actions.
@BenjaminHodgson If I understand, you might want to have either a sum type that determines the "kind" of event (i.e. Player or Game) and then you can have a function that gives the appropriate string based on that like this: data EventType = PlayerEvent | GameEvent. Another possibility would be to have it as a field in an Event type like this: data Event = Event { eventType :: String, ... }. Also, shouldn't createStream have a type more like StreamName -> IO EventStream?
@DavidYoung I did consider introducing a new EventType type. The problem is that the compiler can't stop you from creating an EventStream with an EventType of PlayerEvent and then filling it with events related to the Game aggregate! That's why I introduced the PlayerEvent and GameEvent types in the first place. (See my previous question for more on this decision.)
@bheklilr thank god the OP has a sense of humor, don't be too defensive about Haskell please.
Try an existential, just to get a feel for what OO encoding looks like.
I have a hard time understanding the question. it would be nice to rephrase the question in hindsight.
Some people suggest existential types, but unless I am grossly misunderstanding, you want to limit certain event streams to certain types.
Well first of all,
data EventStream e = EventStream AggregateID [e]
streamName :: Event e => EventStream e -> String
streamName (EventStream aggregateID (e:events)) = (eventCategory e) ++ '-':(toString aggregateID)
Should seem strange. You call eventCategory on the first event and throw away the rest, so you assume the category of all the events is the same. But of course, eventCategory can return different strings for different values of an event. And if there are no events, you have to do eventCategory undefined.
One idea is to change the type of eventCategory:
data Proxy p = Proxy
class Event e where
eventCategory :: Proxy e -> String
Now it is impossible for the function to return different strings for different values of the event, because it has no access to an actual value. In other words, eventCategory depends only on the type, not the value.
Another possibility is to follow the c# code, namely, category is a property of a stream, not an event:
{-# LANGUAGE MultiParamTypeClasses #-}
import Data.ByteString
class Event e where
deserialize :: ByteString -> e
... other stuff
class Event e => EventStream t e where
category :: t e -> String
aggregateId :: t e -> Int
events :: t e -> [e]
name :: t e -> String
name s = category s ++ "-" ++ show (aggregateId s)
The EventStream typeclass corresponds closely with the interface.
Notice how name is inside the typeclass, but you can write it without knowing which instance you are using. You could just as easily move it out of the typeclass, but an implementation may decide it will define a custom name, which would override the default definition.
Then you define your events:
data PlayerEvent = ...
instance Event PlayerEvent where ...
data GameEvent = ...
instance Event GameEvent where ...
Now the stream types:
data PlayerEventStream e = PES Int [e]
instance EventStream PlayerEventStream PlayerEvent where
category = const "Player"
aggregateId (PES n _) = n
events (PES _ e) = e
data GameEventStream e = GES Int [e]
instance EventStream GameEventStream GameEvent where
category = const "Game"
aggregateId (GES n _) = n
events (GES _ e) = e
Notice the event types are isomorphic, but still distinct types. You can't have PlayerEvents in a GameEventStream (or rather, you can, but there is no EventStream instance for a GameEventStream containing PlayerEvents). You can even strengthen this relation:
class Event e => EventStream t e | t -> e where
This says that for a given stream type, only one event type may exist, so defining two instances like so is a type error :
instance EventStream PlayerEventStream PlayerEvent where
instance EventStream PlayerEventStream GameEvent where
The save function is trivial:
saveEventStream :: EventStream t e => t e -> IO ()
saveEventStream s = do
createStream (name s)
addToCategory (name s) (category s)
saveEvents (name s) (events s)
I have no idea if this is what you are actually looking for, but it seems to accomplish the same things as the C# code.
This seems perfect! It looks like class Event e => EventStream t e | t -> e is the key - could you explain in a little more detail what it means/how it works?
The part after the pipe, t -> e, is called a functional dependency. They have many uses, but here it is used to say that the event type e is determined uniquely by the stream type t. With the functional dependency, if you write events (PES 0 []) then the compiler knows the type of the output is PlayerEvent, even though [] is fully polymorphic.
I'd like to show you how this might be done without using type classes. I often find that the results are simpler.
First, here are a few guesses at what types you might be using. You provided some of these:
type CategoryName = String
type Name = String
type PlayerID = Int
type StreamName = String
data Move = Left | Right | Wait
data PlayerEvent = PlayerCreated Name | NameUpdated Name
data GameEvent = GameStarted PlayerID PlayerID | MoveMade PlayerID Move
Record types are often a useful replacement for classes. In this example e will either be a GameEvent or a PlayerEvent.
data EventStream e = EventStream
{ category :: String
, name :: String
, events :: [e]
}
In C# you had subclasses which override the Category property. In Haskell you can use smart constructors for this purpose. If your C# subclasses were overriding methods, then in Haskell you would include functions within the EventStream e type. Other modules in your program can be prevented from creating invalid EventStream objects by only exposing these smart constructors:
-- generalized smart constructor
mkEventStream :: CategoryName -> Int -> EventStream e
mkEventStream cat ident = EventStream cat (cat ++ "-" ++ show ident) []
playerEventStream :: Int -> EventStream PlayerEvent
playerEventStream = mkEventStream "Player"
gameEventStream :: Int -> EventStream GameEvent
gameEventStream = mkEventStream "Game"
Finally, you have defined a few functions. Here's how they might be written:
createStream :: StreamName -> IO ()
createStream = undefined
addToCategory :: StreamName -> CategoryName -> IO ()
addToCategory = undefined
saveEvents :: EventStream e -> IO ()
saveEvents = undefined
saveEventStream :: EventStream e -> IO ()
saveEventStream stream = do
createStream name'
addToCategory name' (category stream)
saveEvents stream
where
name' = name stream
This is closer to what your C# example does. The stream's name is fixed when created, and the category is part of each stream rather than linked to the type of each element.
This does seem much simpler, but how can you prevent clients from creating an EventStream PlayerEvent with a category of "game"?
My mistake, I didn't read your answer properly - you only export the smart constructors, obviously. So a "smart constructor" is kind-of like a factory method?
I'm not sure. In OO, aren't factory methods used with inheritance? As in, some block of code calls methods of a parent class to create objects which at runtime end up being instances of a child class?
Yes, your modules enforce legitimate instances of the e in EventStream e by only exporting the smart constructors; not the data constructors. Another benefit to doing things this way is that you can have several smart constructors which all create the same type. With type classes you're limited to a single instance per type.
Factory methods have a bunch of different uses. They're basically about creating an object when the instantiation logic is too complex for a simple constructor. This can include cases where you need a subclass but don't know which one you need. Things get complex when you subclass the factory itself :)
Data constructors in haskell (ie: PlayerCreated) don't allow for any logic, so I suppose a haskell "smart constructor" would map to a C# class constructor. Perhaps mkEventStream above could be considered a factory method since it doesn't know which type to put in place of e. That is to say, the e in mkEventStream's return value is determined by the caller.
Bash Rename Files Script Not Working
I have the following script and for some reason it is not working
find . -name '*---*' | while read fname3
do
new_fname3=`echo $fname3 | tr "---" "-"`
if [ -e $new_fname3 ]
then
echo "File $new_fname3 already exists. Not replacing $fname3"
else
echo "Creating new file $new_fname3 to replace $fname3"
mv "$fname3" $new_fname3
fi
done
However if I use
find . -name '*---*' | while read fname3
do
new_fname3=`echo $fname3 | tr "-" "_"`
if [ -e $new_fname3 ]
then
echo "File $new_fname3 already exists. Not replacing $fname3"
else
echo "Creating new file $new_fname3 to replace $fname3"
mv "$fname3" $new_fname3
fi
done
The script works but I end up with 3 underscores "_" how can I replace the 3 dashes "---" with a single dash?
Thanks,
In bash, you can use new_fname3=${fname3//---/-} to modify the file name.
@chepner This is the cleanest solution, avoiding the need to create a subshell. Put some double quotes around that substitute expression though.
A single expansion on the right-hand side of an assignment does not undergo word-splitting, so the quotes there would be optional.
@chepner Okay. Just being paranoid...
Have a look at man tr. tr will just replace single characters.
Use something like perl -wpe "s/---/-/" instead.
Also have a look at man 1p rename. It is doing pretty much exactly what you want:
rename 's/---/-/' *---*
With tr -s you can truncate sequences of the selected characters, though; but in this case, it's better to do it purely in shell.
Ended up modifying my script to use the rename function. - Thanks
I believe you need to change the tr for a sed substitution:
tr '---' '-' should be changed to sed -e 's/---/-/g'
As an example of the difference:
$ echo "a---b" | tr '---' '-'
tr: unrecognised option '---'
try `tr --help' for more information
$ echo "a---b" | sed -e 's/---/-/g'
a-b
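To make the difference concrete, here is a small sketch (the filename is hypothetical) comparing the approaches mentioned in this thread: tr translating single characters, tr -s squeezing repeats, bash parameter expansion, and sed:

```shell
# Hypothetical filename used only for illustration.
s='file---name.txt'

# tr translates character sets, so '---' is just the single character '-':
echo "$s" | tr '-' '_'          # every dash becomes an underscore

# tr -s squeezes runs of the same character into one:
echo "$s" | tr -s '-'           # file-name.txt

# bash parameter expansion replaces the literal string '---':
echo "${s//---/-}"              # file-name.txt

# sed handles multi-character patterns:
echo "$s" | sed 's/---/-/g'     # file-name.txt
```

Note that tr -s only works here because the replacement is the same character being squeezed; for anything more general, parameter expansion or sed is the way to go.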
After finding out which JavaScript events are defined for an element, how do we proceed?
I am new to Watir (and JavaScript as well) and have been following the related questions (and answers as well). I've also gone through the "Firebug" suggestion. I couldn't quite gather what we are supposed to do after finding out which events are fired.
In the following case, manually clicking on "Overview" suffices but I'm unable to write a code to automate the same.
<div id="overview" class="dijitTreeIsRoot dijitTreeNode dijitTreeNodeUNCHECKED dijitUNCHECKED" role="presentation" style="background-position: 0px 0px;" widgetid="dijit__TreeNode_10">
<div class="dijitTreeRow" data-dojo-attach-event="onmouseenter:_onMouseEnter, onmouseleave:_onMouseLeave, onclick:_onClick, ondblclick:_onDblClick" role="presentation" data-dojo-attach-point="rowNode" title="" style="padding-left: 0px;">
<img class="dijitTreeExpando dijitTreeExpandoLeaf" role="presentation" data-dojo-attach-point="expandoNode" alt="" src="/hcsadmin/open/dojo/resources/blank.gif">
<span class="dijitExpandoText" role="presentation" data-dojo-attach-point="expandoNodeText">*</span>
<span class="dijitTreeContent" role="presentation" data-dojo-attach-point="contentNode">
<img class="dijitIcon dijitTreeIcon dijitLeaf" role="presentation" data-dojo-attach-point="iconNode" alt="" src="/hcsadmin/open/dojo/resources/blank.gif">
<span class="dijitTreeLabel" data-dojo-attach-event="onfocus:_onLabelFocus" aria-selected="false" tabindex="-1" role="treeitem" data-dojo-attach-point="labelNode">Overview</span>
</span>
</div>
<div class="dijitTreeContainer" style="display: none;" role="presentation" data-dojo-attach-point="containerNode"></div>
</div>
Now I can flash the "Overview" using:
browser.div(:id, "overview").flash
and it flashes successfully. But as suspected,
browser.div(:id, "overview").click
does NOT work.
Please provide a solution (or even better, a code) for this.
Is the page you are testing public? Or is there a sample page that has the tree implemented in the same manner?
Since it's a dojo implementation, you may need to identify the appropriate fired event (via firebug or whatever) and use the .fire_event method: http://rubydoc.info/gems/watir-webdriver/Watir/Element#fire_event-instance_method
You're going to have to experiment a little bit, but orde already pointed to the approach to take that will probably work.
I suspect that something along these lines is probably the right answer:
browser.div(id: "overview").span(class: "dijitTreeLabel").fire_event "onfocus"
You may have to follow that line up with another one that does the click...
browser.div(id: "overview").span(class: "dijitTreeLabel").fire_event :click
...again, it's hard to know definitively what's going to work, so you'll have to experiment.
@Justinko: The page I am testing is not public. Otherwise I'd have definitely given the link to the page.
Cannot start a session with phantomjs in RSelenium
Cannot start a new session with phantomjs using rsDriver. Other browsers work fine, but when I try the phantomjs option it does not work, and I cannot fully grasp the meaning of the error output. How can I solve this?
require(RSelenium)
remDr=rsDriver(port = 4460L, browser = c("phantomjs"))
checking Selenium Server versions:
BEGIN: PREDOWNLOAD
BEGIN: DOWNLOAD
BEGIN: POSTDOWNLOAD
checking chromedriver versions:
BEGIN: PREDOWNLOAD
BEGIN: DOWNLOAD
BEGIN: POSTDOWNLOAD
checking geckodriver versions:
BEGIN: PREDOWNLOAD
BEGIN: DOWNLOAD
BEGIN: POSTDOWNLOAD
checking phantomjs versions:
BEGIN: PREDOWNLOAD
BEGIN: DOWNLOAD
BEGIN: POSTDOWNLOAD
[1] "Connecting to remote server"
Selenium message:Unable to create session from {
"desiredCapabilities": {
"browserName": "phantomjs",
"javascriptEnabled": true,
"nativeEvents": true,
"version": "",
"platform": "ANY"
},
"capabilities": {
"firstMatch": [
{
"browserName": "phantomjs"
}
]
}
}
Build info: version: '3.141.59', revision: 'e82be7d358', time: '2018-11-14T08:25:53'
System info: host: 'LAPTOP-302FGG7N', ip: '<IP_ADDRESS>', os.name: 'Windows 10', os.arch: 'amd64', os.version: '10.0', java.version: '1.8.0_231'
Driver info: driver.version: unknown
Could not open phantomjs browser.
Client error message:
Summary: SessionNotCreatedException
Detail: A new session could not be created.
Further Details: run errorDetails method
Check server log for further details.
Any solution so far?
Update RSelenium, phantomjs. Try mentioning version in rsDriver
I have been able to use phantomjs with the following code:
library(RSelenium)
library(wdman)
url <- "http://www.verema.com/vinos/portada"
port <- as.integer(4444L + rpois(lambda = 1000, 1))
pJS <- wdman::phantomjs(port = port)
remDrPJS <- remoteDriver(browserName = "phantomjs", port = port)
remDrPJS$open()
remDrPJS$navigate(url)
remDrPJS$screenshot(TRUE)
revealing module vs object literal pattern on encapsulation
I usually do component for each feature, says I have abc feature I will create below js
var AbcComponent = (function() {
    var initialize = function() {
        },
        privateMethod1 = function() {
        };
    return { init: initialize };
})();
and include it in app.js with AbcComponent.init();. A few days ago I read about OO using the object literal pattern, and now I doubt my own writing style.
How can the object literal pattern encapsulate scope, since JavaScript is function-scoped?
All module patterns that need to have truly private data must inherently use an IIFE to maintain their own private scope. Even the object literal module pattern uses this. See a comparison of some module patterns.
You can store pseudo-private data in a couple of ways with an object literal:
By convention, properties that start with an _ underscore are understood to be off-limits to the rest of the world.
{
    _privateBar: 1,
    publicFoo: 4
}
Alternatively, you can use symbols.
const privateBar = Symbol('private');
myModule[privateBar] = 1;
myModule.publicFoo = 4;
Using the latter, only a reference to the privateBar symbol object can get you the value of 1 from myModule. And no, you can't get it with myModule[Symbol('private')], because symbols are unique and Symbol('private') === Symbol('private') is false.
Unfortunately, they decided to add Object.getOwnPropertySymbols(), so an object's symbols are not truly private and are not suitable for protecting data from malicious activity.
However, in practice, most operations you perform (for of loops, etc.) will not touch symbols, and therefore it is an excellent replacement for the underscore _ convention. But sometimes there are even better ways, such as using a WeakMap.
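As a sketch of the WeakMap alternative just mentioned (the names makeCounter and privates are hypothetical), the map lives in module scope, so only code inside the module can reach the private data, and the data is garbage-collected together with the object:

```javascript
// Module-scoped WeakMap holding the private state for each object.
const privates = new WeakMap();

function makeCounter() {
  const counter = {
    increment() {
      const data = privates.get(counter);
      data.count += 1;
      return data.count;
    }
  };
  // Private state is keyed by the object itself, not stored on it.
  privates.set(counter, { count: 0 });
  return counter;
}

const c = makeCounter();
console.log(c.increment());   // 1
console.log(c.increment());   // 2
console.log(Object.keys(c));  // [ 'increment' ] - no trace of the count
```

Unlike symbols, there is no reflection API that enumerates WeakMap entries, so the count really is unreachable from outside the module.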
Using ES6 we can actually avoid the slight overhead of an IIFE thanks to lexical scoping.
const myModule = {};
{
const privateBar = 1;
myModule.publicFoo = 4;
}
I'm asking how the object literal pattern can be used for encapsulation, since they are just a bunch of objects.
@NatelyJamerson I added some more info, including that the so-called "object literal module pattern" still includes an IIFE for encapsulation. If this doesn't do it for you, maybe you could add some links to what you read and what you think the author means or what their code is doing. Then I can explain or correct it with more context.
My LinkedList is not saved as it should be (regarding its elements order)
First of all, pardon me for my poor english level. I will try to be as understandable as I can.
I am trying to re-order a playlist of music files. A Playlist is basically a LinkedList<MusicFiles> with a name.
I change the position of an element, it seems to be as it should, cool. But when I save it in the database the order doesn't change! I am doing something wrong, that's a fact, but after hours spent debugging, my mind could really use a debugger for itself...
Here is my jsf code (inside a p:datatable):
<p:commandButton title="Move Down"
ajax="false"
image="down"
action="#{playlistMBean.increasePosition(musicFile.URL)}"
onclick="show_my_playlists?face-redirect=true"/>
The backing bean code:
@Named(value = "playlistMBean")
@SessionScoped
public class PlaylistMBean implements Serializable {
@EJB
private PlaylistManager playlistManager;
private Playlist currentPlaylist;
//...
public void increasePosition(String musicURL) {
currentPlaylist.increasePosition(musicURL);
playlistManager.save(currentPlaylist);
}
//...
}
"currentPlaylist" is obviously a Playlist, so here's the code of the method in the entity bean "Playlist":
@Entity
@NamedQueries(/*...*/)
public class Playlist implements Serializable {
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
private int id;
@OneToMany(cascade= {CascadeType.REFRESH, CascadeType.MERGE},fetch= FetchType.EAGER)
@OrderBy(/* ??????????? */)
private LinkedList<MusicFile> musicList;
private String name;
//...
public void increasePosition(String url) {
if (url != null) {
MusicFile mf = getMusicFileByURL(url);
int position = getPosition(url);
if (position < musicList.size() - 1) {
musicList.remove(position);
musicList.add(position + 1, mf);
}
}
}
And finally the code of the playlist manager which should save the reordered playlist in the database:
@Stateless
@LocalBean
public class PlaylistManager {
@PersistenceContext(unitName = "Doozer-ejbPU")
private EntityManager em;
public void save(Playlist playlist) {
Playlist p = em.find(Playlist.class, playlist.getId());
if (p != null) {
em.merge(playlist);
}
else {
em.persist(playlist);
}
}
//...
}
The given playlist in this last step is the good one (reordered). So my guess is that my problem is with the entity manager (I'm such a genius, I know...).
Does anyone know why?
Might it come from the cascade type? Due to my lack of knowledge about it, I'm not sure which one I should use. I have tried CascadeType.ALL, but then it raises an exception when adding music files. CascadeType.DETACH was my first choice since I don't want to delete the musics when deleting a playlist... But here again I really don't know for sure if I know what I'm talking about :(
[Edit]: Thanks to Piotr Nowicki, my question has changed quite a lot: how can I define the @OrderBy annotation in order to sort the LinkedList according to its inner order? Is it even possible? The easy/ugly method would be to add a property position to the MusicFile entity but I'd rather not.
How is your entity annotated? Do you have any @OrderColumn or @OrderBy annotations?
I've just edited my post so it now should answer your question -> the entity does not have any annotation dealing with its order... But the list is "directly" saved so should be its inner order, shouldn't it?
Thanks to your post, Piotr, and after a little more digging, it seems you had the correct intuition!!! Without any @OrderBy annotation the elements are sorted according to their id => Do you know how to define the sort order on a list property? I wouldn't like to add a property position to the music entity, but if I have to...
did you try adding the @OrderColumn on the list property? It should automatically add an appropriate column.
My hero........ I let you answer it: it solved my problem and saved my day! ;)
<awkward_face> ;-) So if it worked, lets create an answer from the comments.
If you want to preserve the order in which elements in List are stored in the database (and further retrieved from it) you should use @OrderBy or @OrderColumn annotations.
In your case, if you just want the List to be returned in order without any advanced conditions, the @OrderColumn should be sufficient:
@OneToMany(...)
@OrderColumn
private LinkedList<MusicFile> musicList;
@OrderColumn requires JPA 2.0. It uses an extra column in the database to maintain order information; I think you should use an extra position column and let your persistence provider update it accordingly when you reorder your List. Here's a good summary about the subject: EclipseLink/Examples/JPA/2.0/OrderColumns.
To be clear: @OrderColumn(name="POSITION"), no need for anything else.
@AnthonyAccioly yes, it's JPA 2.0 feature; I guess the rest of your comment (about extra position field in the model) refers to JPA 1.0? Also you don't need to specify the @OrderColumn name attribute, as the JPA-provider can provide you a default one (in 'propertyName_ORDER' format)
I meant an extra column, not a field lol :D. It still requires an extra column in the database for JPA 2.0. You can let it apply the defaults as @Piotr mentioned, but I guess it makes more sense to call it position. It will use that column to maintain the List's inner order as you require.
Thanks again Piotr, and thanks Anthony for this information! @OrderColumn should be sufficient in my case: it creates a column named "MUSICLIST_ORDER". Meaningful enough.
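The mechanism behind @OrderColumn can be pictured without any JPA at all: the provider persists an integer position per row and sorts by it when loading the list. A framework-free sketch of that idea (all names here are illustrative, not JPA API):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class OrderColumnSketch {
    // Stand-in for a table row: the position column plus a payload field.
    record Row(int position, String title) {}

    public static void main(String[] args) {
        // Rows may come back from the database in arbitrary order...
        List<Row> stored = new ArrayList<>(List.of(
                new Row(2, "song-c"), new Row(0, "song-a"), new Row(1, "song-b")));
        // ...and the provider restores the list order via the index column.
        stored.sort(Comparator.comparingInt(Row::position));
        stored.forEach(r -> System.out.println(r.title()));
    }
}
```

Running this prints song-a, song-b, song-c — the list order encoded by the position column, independent of the rows' storage order.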
Deploy Simple Smart Contract Error: Cannot sign a transaction due to an error while fetching the most recent nonce value
I created and deployed a simple Rust "Hello Contract" example on WSL2 (Windows 11), with a testnet account created and logged in successfully, but I always get the following error:
Error:
0: Cannot sign a transaction due to an error while fetching the most recent nonce value
1: handler error: [Access key for public key ed25519:ABBukUwJxXfeBrnWD5pXNzFQLeQyuz2pj69N5tZoaucm has never been observed on the node]
Location:
/home/runner/.cargo/registry/src/index.crates.io-6f17d22bba15001f/near-cli-rs-0.11.0/src/transaction_signature_options/sign_with_legacy_keychain/mod.rs:184
I get the same error on both RPC servers: rpc.testnet.near.org and rpc.testnet.pagoda.co.
Someone told me it is sometimes a transient problem, so I tried again the next day, but that didn't help.
Does anybody have the same problem?
In sklearn logisticRegression, what is the Fit_intercept=False MEANING?
I tried to compare the logistic regression result from statsmodels with the sklearn LogisticRegression result. Actually, I tried to compare with the R result as well.
I set the option C=1e6 (effectively no penalty), but I got almost the same coefficients except for the intercept.
model = sm.Logit(Y, X).fit()
print(model.summary())
==> intercept = 5.4020
model = LogisticRegression(C=1e6,fit_intercept=False)
model = model.fit(X, Y)
===> intercept = 2.4508
So I read the user guide, which says: "Specifies if a constant (a.k.a. bias or intercept) should be added to the decision function."
What does this mean? Is this why sklearn's LogisticRegression gave a different intercept value?
Please help me.
LogisticRegression is in some aspects similar to the Perceptron Model and LinearRegression.
You multiply your weights with the data points and compare it to a threshold value b:
w_1 * x_1 + ... + w_n*x_n > b
This can be rewritten as:
-b + w_1 * x_1 + ... + w_n*x_n > 0
or
w_0 * 1 + w_1 * x_1 + ... + w_n*x_n > 0
For linear regression we keep this; for the perceptron we feed this to a chosen function; and here, for logistic regression, we pass this to the logistic function.
Instead of learning n parameters now n+1 are learned. For the perceptron it is called bias, for regression intercept.
For linear regression it's easy to understand geometrically. In the 2D case you can think about this as shifting the decision boundary by w_0 in the y direction.
or y = m*x vs y = m*x + c
So now the decision boundary does not go through (0,0) anymore.
For the logistic function it is similar: it shifts it away from the origin.
Implementation-wise, what happens is that you add one more weight and a constant 1 column to the X values, and then you proceed as normal.
import numpy as np

if fit_intercept:
    # prepend a column of ones so the extra weight acts as the intercept
    intercept = np.ones((X_train.shape[0], 1))
    X_train = np.hstack((intercept, X_train))
weights = np.zeros(X_train.shape[1])
FTP Connection Refused
I am on a Windows XP machine that is trying to connect to a Windows 2008 FTP. When trying to connect to my ftp site, I am getting a ftp: connect :Connection refused. I have confirmed that other machines are able to connect to the FTP and transfer data.
I have a batch file doing the FTPing:
I've taken the following steps to try to remedy the situation:
ping: I am able to ping and receive a response from my FTP server
tracert: I was able to do a full tracert and was able to get to my client machine to the host
firewall: There are no firewalls enabled on this machine
other FTP: I am unable to connect to any other FTP site
telnet: I am able to telnet to port 21.
Any assistance would be greatly appreciated.
edit: What I did notice that when I do a netstat, I see that port 21 is being occupied by PID 1256:
Now, if I check my task manager, I see 1256 is inetinfo.exe.
The FTP service runs in the inetinfo.exe process so your netstat output is what I would expect.
@joeqwerty would stopping that process remedy my problem?
No, that would stop the FTP service. The FTP services runs in the inetinfo.exe process, which is perfectly normal and expected.
Since other machines are able to connect to the FTP server I see two possible causes:
The XP machine IP address has been automatically (or manually) banned or blacklisted, causing the FTP server to reject any connection from this IP
A firewall or other IP-filtering software is running on the XP machine and blocks the connection to the FTP server
Yet it's strange that you are able to telnet from the XP machine to the FTP server on port 21.
Could there be some sort of port blocking on the firewall/hardware level?
Yes, a firewall is all about blocking ports. The Windows XP built-in firewall allows all outgoing connections by default, I think. Do you have any other firewall software on the XP box or on the network?
It might be on the network level. This box is sitting on someone else's network.
The other machines that can connect are also XP machines? Maybe your machine has FTP disabled? If you open a command prompt and type FTP does the ftp> come up?
Yes it does, ftp> comes up.
Is it possibly a firewall issue?
I found a site giving a ton of commands to troubleshoot FTP, but it's for Linux.
Maybe it can help you.
Google Appengine daily budget not reflected in quota
Dear AppEngine people (I understand that all AppEngine support has moved to StackOverflow - if I am mistaken then sorry for posting this here), I have a very serious problem that I hope you can help me to resolve.
Yesterday I enabled billing with a daily budget of $500 on my application (friendbazaar.appspot.com), and my billing status is "Enabled". However, I am still showing that I have maxed out my usage of recipients emailed at 100 of 100.
The quota was just reset 2 hours ago, and so I don't understand why this has not reflected the updated quotas based on the billing settings.
This is a big problem, since I recently sent out invitations to members of my other sites (over 100K people) to sign up for this new site - and since email authorization is required to complete the registration process, I am totally hosed and have basically pissed off a lot of customers by making them register and then never sending them the email to complete the process.
Please let me know if this can be fixed, and what the normal delay is for appengine quotas to reflect billing settings.
Technical questions should be posted on Stack Overflow. Billing and other questions that don't fit the SO remit should still go to the list or via other support channels.
Per the billing FAQs:
Why did my mail quota not increase after I enabled billing?
Google will wait until the first payment is cleared before increasing your mail quota. This means it will take at least seven days to get the higher quota.
https://developers.google.com/appengine/kb/billing#quota_increase
DISCLAIMER - I am not affiliated with Google in any way, nor do I believe you can get an official response from them via stackoverflow.
Thanks, for the info. That is exactly what I needed to know (but didn't want to hear that).
Now that appengine support has moved to StackOverflow (https://groups.google.com/group/google-appengine-python/browse_thread/thread/0414d42be2f4bde0) - where are we supposed to ask administrative related questions - ie. if I really had needed someone from google to address a problem?
There are definitely Googlers active on Stack Overflow (I'm one of them). Regarding billing questions, you can use the form linked at the bottom of the FAQ if you have a billing problem (https://developers.google.com/appengine/kb/billing#morehelp)
GAE no longer supports sending more than 100 emails per day, per their tech support (May 2016). Instead they are nudging people over to SendGrid. It's unfortunate that they are reducing the capabilities of GAE for something as basic as transactional email. I can see why marketing or bulk emails would be frowned upon, but for the core transactional emails most apps/sites require, it would be nice to have a native, low-cost option. Off to check out Amazon's email service!
Visual Studio Code LaTeX date time
I am using the Visual Studio Code LaTeX Workshop extension. Here is my code:
\documentclass{report}
\usepackage[yyyymmdd,hhmmss]{datetime}
\title{my title}
\author{author}
\date{Compiled at: \today-\currenttime}
\begin{document}
\maketitle
Some content
\end{document}
This gives:
However, the time is not correct: on my computer it is only January 4th, 20:00 (I am in Montréal). It is off by several hours. I know that this is because the time zone is incorrect. On Overleaf, I remember I can create a file latexmkrc with the line $ENV{'TZ'}='America/Toronto'; to resolve this. But this does not work in Visual Studio Code.
The datetime package has been obsolete for a while now, replaced by datetime2. Give a try to this one, and give additional details about your configuration if it doesn't work.
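For reference, a hedged sketch of the same document moved to datetime2 (the option and command names below are taken from the datetime2 manual as I recall them — double-check them there before relying on this):

```latex
\documentclass{report}
\usepackage[useregional]{datetime2}
\title{my title}
\author{author}
\date{Compiled at: \today\ \DTMcurrenttime}
\begin{document}
\maketitle
Some content
\end{document}
```

Note that datetime2 still reads the TeX engine's clock, so if the compiler process has a wrong TZ environment variable, you may still need to set it for the process that runs latexmk.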
What is the hash-rate of the Bitcoin network that results in the maximum difficulty?
According to https://en.bitcoin.it/wiki/Target the target is a 256-bit number.
As the total hash-rate of the Bitcoin network rises, this number decreases to increase the difficulty. How much hash-rate (in terms of GHash/s) would cause the target to reach the minimum value? Will the target ever get to 0? Does that mean it would be impossible to find a new block? (The minimum of a SHA256 hash is 0, which is not less than 0.)
The hardest possible target is all zeros plus a one (0000000...1). If such a block were ever mined (which is extremely unlikely: on average you'd need 2^255 hashes per 10 minutes, which is about 9.6e73 hashes per second), it would definitely crash the network: there can only be one hash at that difficulty, which means that there can only be one block mined at that difficulty.
Of course, if the target is zero, no block can be mined at all.
Do you mean that there's only one block with the hash zero?
Yes. Block hashes must be unique, and the hash zero is already reserved as the prevblock for the genesis block, so a block with all zeroes is an impossible block.
The network would probably crash before that, because collisions would increase as the target approached the minimum, which would also cause unconfirmed transactions to pile up.
Also if block 0000000...1 was mined (by coincidence) today while the target is much higher, it would be harmless.
This is slightly off: a block is valid when its hash is less than or equal to the target. So if the target is 0, there is still one valid block hash possible.
Parse an xml file depending on the current language
Is there a way to parse an xml file depending on the current language? For example, I have two countries.xml files. One in the values folder, and another one in the values-de folder. I want to parse the file specific to the current language.
http://developer.android.com/reference/java/util/Locale.html
The Android environment parses these files for you, and you can get to the data through various accessors. For example, if you have a set of strings in your strings.xml, you use
getText(R.string.my_string);
which will give you the string from the strings.xml for that locale (or the default one if none match).
Check the developer's guide on resource localization for more info.
Visualforce page bringing data from custom object
I am new to Visualforce and I would like to create a page that displays values from a custom object (Staff__c). I would like it to work so that each user sees the value of the Holidays Remaining field from his/her record in the Staff object (Staff records have the same name as the name of the user). Could anyone help me with this, please?
What particular problem do you have? Map your object's fields to a columns in VF's table tag?
The main problem is that I am very new to Visualforce and by now I feel slightly lost, as my skills are limited. The biggest challenge is to create the relation somehow so that Salesforce recognizes the user and, based on that, chooses the record from the Staff object.
What are you trying to achieve? As far as I understand, you are trying to build a VF page. This VF page should contain some information from the Staff__c object based on the current user. Am I right?
Yes Indeed. I would like to make the page where if John Smith enters it, it will bring only his details. If Jane Doe enters the page she will see only her details.
Staff__c object should have a link to a User.
In your controller for the VF page, you pull all the needed data based on the current user who opened the VF page.
Is there a correct way of composing Moore machines?
A Mealy machine is just a stateful function. Hence, two Mealy machines can be composed using simple function composition. A Moore machine is a restricted Mealy machine with an initial output value. Is there a way to compose two Moore machines? This is what I tried in Haskell:
type Mealy a b = a -> b -- a stateful function, probably using unsafePerformIO
type Moore a b = (b, Mealy a b) -- output must be consistent with current state
composeMoore :: (c, b -> c) -> (b, a -> b) -> (c, a -> c)
composeMoore (x, f) (_, g) = (x, f . g) -- is this correct?
composeMoore (_, f) (y, g) = (f y, f . g) -- or is this correct?
I believe that they are both wrong. In fact, I believe that it's not possible to compose two Moore machines. However, I might be wrong. Is there a correct way of composing Moore machines?
Definition: The composition of Moore machines is an operation of the type (.) :: Moore b c -> Moore a b -> Moore a c for which the law of associativity (i.e. h . (g . f) = (h . g) . f) holds true.
Note: This is just a theoretical question. I am not actually using Haskell to write stateful functions.
Haskell functions aren't supposed to be stateful and circumventing this using unsafePerformIO will rarely have the desired effect.
I am not using Haskell for writing stateful functions. I am using Haskell to demonstrate the problem succinctly. This is just a theoretical question. In practice, I might use either JavaScript or OCaml.
You have to define what you mean by "composing" machines.
I really don't see how the type of a mealy machine is different from the one of a moore machine when you are using impure functions to represent them anyway. Please come up with a theoretically sound representation (i.e. not using unsafePerformIO) and we may be able to give an appropriate answer
@AaditMShah is what you're actually asking is how you can represent the subtype relationship between moore and mealy machines using type constraints, encapsulating the fact you cannot use the transition function?
In your impure function a -> b, are those types input and output (with impure state), or does the function represent a transition between states (with impure IO)?
@Bergi A mealy machine doesn't produce an output unless given an input. A moore machine always has an output even if it is never given an input. Yes, the function a -> b is an impure function with input a, output b and some hidden state. The function can also mutate its own state. Hence, it's also a transition function.
@BenjaminGruenbaum No. All I am asking is whether there's some logical way to connect the output of a moore machine to the input of some other moore machine such that it satisfies the law of associativity.
Yes, there is. Just because something is a state machine doesn't mean it has to have uncontrolled state. When you unsafePerformIO you forsake all the information of the machine. Represent it as a 6 tuple like in the definition and composing it will become simple function composition (composing the transition function and "running both machines" together).
Your model of a Moore machine in Haskell is wrong. A Moore machine has state. Its type isn't input -> output, it's (input * state) -> (output * state) or something isomorphic.
@Gilles The state is hidden. That's why I mentioned that it's a stateful function. It's input -> output with hidden mutable state. I don't know the convention of writing the type of functions with mutable state.
@BenjaminGruenbaum I figured out why you can never have a correct composition operator for moore machines: http://stackoverflow.com/a/32171450/783743. The reason is because you can never create an identity moore machine which satisfies both the left and the right identity laws of categories. The reason why you can't create an identity moore machine is because the initial output of the moore machine would always be undefined.
No, there's no correct way of composing two moore machines. Composition has no meaning without an identity element. This is because composition along with its identity element must together form a monoid in order to be meaningful. This is the mathematical definition of a category.
A category C consists of:
A class, ob(C), of sets called objects.
A class, hom(C), of morphisms between the objects. The hom-class, hom(a, b), is the collection of all the morphisms from a to b (each morphism f is denoted as f : a -> b).
A binary operation hom(b, c) * hom(a, b) -> hom(a, c) called the composition of morphisms, for every three objects a, b and c.
In addition, they must satisfy the following laws:
Associativity: For all f : a -> b, g : b -> c and h : c -> d, the relation h . (g . f) = (h . g) . f must hold true.
Left identity: For all objects x there must be a morphism id_x : x -> x called the identity morphism for x such that for all f : a -> x the relation id_x . f = f holds true.
Right identity: For all objects x there must be a morphism id_x : x -> x called the identity morphism for x such that for all g : x -> b the relation g . id_x = g holds true.
There's no way of composing two moore machines simply because there's no identity element for the composition of two moore machines. Hence, moore machines do not form a category. A moore machine is defined as a 6-tuple (s, s0, a, b, t, g) consisting of:
A finite set of states s.
A start state s0 which is an element of s.
A finite set a called the input alphabet.
A finite set b called the output alphabet.
A transition function t : s * a -> s.
An output function g : s -> b.
The reason there's no identity element for moore machines is because the output of the moore machine doesn't depend upon its input. It only depends upon the current state. The initial state is s0. Hence, the initial output is g(s0). The initial output would be undefined for the identity element because it would have to match the type of the input which is yet unknown. Therefore, the "identity element" would be unable to satisfy the left and/or the right identity laws.
I'm not sure what's wrong with an empty initial output for the identity element?
@Bergi An empty initial value for the identity element wouldn't be able to satisfy the left and/or right identity laws of a category.
@Bergi empty initial output isn't possible. The machine would have to be nondeterministic in order for it to work. AaditMShah the initial output is an artifact and not an important part of the model (representing initial sensor reading). I'm not sure why the fixation on it (or on categories :S).
Truth be told moore and mealy machines are just not very interesting computation models, they're maybe half a lesson when instructors want to show students output but are still in DFA land :)
Also, it's trivial that when composing two moore machines (which is entirely possible as I've demonstrated in chat) the initial output would be that of the right machine. Just like if you take the function that always returns zero apply it to the result of any other function it will still only return zero. Your categorization is wrong. A moore automaton is not a function, the fact there is no identity moore automaton you can compose with both ways doesn't really prove much alone.
In fact, it's trivial to show there is no identity function since the input and output are from different sets. The input of a word n is always a word of length n+1 (in your choice of the model). You can still compose these machines just fine as I've said here: http://chat.stackoverflow.com/transcript/message/25260531#25260531. Saying you can't compose certain types of functions because there is no identity is silly, you can compose functions of the form + i for a positive integer i just fine, composing +1 with +1 is perfectly legal and well defined.
I think your result machine has to have the start output of the first argument, but also needs to apply the transition of the first on the start output of the second machine.
So your composition function would be neither of the two you've given, but rather a mix of them:
composeMoore :: (c, b -> c) -> (b, a -> b) -> (c, a -> c)
composeMoore (x, f) (y, g) = ((x; f y), f . g)
where ; is the impure computation sequencing operator (think , in JS).
I think your reasoning could benefit a lot from using a pure machine model :-)
I figured out why you can never have a correct composition operator for moore machines: http://stackoverflow.com/a/32171450/783743. I used pure mathematical reasoning to figure it out.
Restoration of Hdfs files
We have a Spark cluster built with Docker (the singularities/spark image). When we remove the containers, the data stored in HDFS is removed too. I know that is normal, but how can I solve this so that whenever I start the cluster again, the files in HDFS are restored without uploading them again?
Possible duplicate of I lose my data when the container exits
You can bind-mount a host volume for the /opt/hdfs directory on both master and worker, as below:
version: "2"
services:
  master:
    image: singularities/spark
    command: start-spark master
    hostname: master
    volumes:
      - "${PWD}/hdfs:/opt/hdfs"
    ports:
      - "6066:6066"
      - "7070:7070"
      - "8080:8080"
      - "50070:50070"
  worker:
    image: singularities/spark
    command: start-spark worker master
    volumes:
      - "${PWD}/hdfs:/opt/hdfs"
    environment:
      SPARK_WORKER_CORES: 1
      SPARK_WORKER_MEMORY: 2g
    links:
      - master
This way your HDFS files will always persist at ./hdfs (the hdfs directory in the current working directory) on the host machine.
Ref - https://hub.docker.com/r/singularities/spark/
How can i combine the index pages of multiple CRUD?
I'm trying to combine the index pages of my CRUDs into one main home page. This is what I tried; however, it comes up with an "undefined method each" error. Any ideas? I'm also not sure how to get the redirects to go to this page after creation and deletion.
<h1>Weather forecasts</h1>
<%= link_to 'Search by City', new_cityweather_path(@cityweathers) %>
<%= link_to 'Search by Postcode', new_postcodeweather_path(@postcodeweathers) %>
<h1>Listing Postcodeweathers</h1>
<table>
  <thead>
    <tr>
      <th>Postcode</th>
      <th colspan="3"></th>
    </tr>
  </thead>
  <tbody>
    <% @postcodeweathers.each do |postcodeweather| %>
      <tr>
        <td><%= postcodeweather.postcode %></td>
        <td><%= link_to 'Show', postcodeweather %></td>
        <td><%= link_to 'Edit', edit_postcodeweather_path(postcodeweather) %></td>
        <td><%= link_to 'Destroy', postcodeweather, method: :delete, data: { confirm: 'Are you sure?' } %></td>
      </tr>
    <% end %>
  </tbody>
</table>
<h1>Listing Cityweathers</h1>
<table>
  <thead>
    <tr>
      <th>City</th>
      <th colspan="3"></th>
    </tr>
  </thead>
  <tbody>
    <% @cityweathers.each do |cityweather| %>
      <tr>
        <td><%= cityweather.city %></td>
        <td><%= link_to 'Show', cityweather %></td>
        <td><%= link_to 'Edit', edit_cityweather_path(cityweather) %></td>
        <td><%= link_to 'Destroy', cityweather, method: :delete, data: { confirm: 'Are you sure?' } %></td>
      </tr>
    <% end %>
  </tbody>
</table>
Please tell us what you mean by "it doesn't work". I think it is most probably an issue with the controller. Are you sure that you are correctly loading the data into the controller's instance variables?
It comes up with an undefined method each error
You're not initializing those variables (@cityweathers, etc.) in the controller (home_controller or whatsitcalled)
You can define a separate controller, which can act as the main controller, like this:
class MainController < ApplicationController
  def index
    @postcodeweathers = PostCodeWeather.all
    @cityweathers = CityWeather.all
  end
end
Then, the template views/main/index.html.erb can use partials like this:
<h1>Weather forecasts</h1>
<%= link_to 'Search by City', new_cityweather_path(@cityweathers) %>
<%= link_to 'Search by Postcode', new_postcodeweather_path(@postcodeweathers) %>
<%= render 'post_code_weathers/index' %>
<%= render 'city_weathers/index' %>
Then, the partial views/post_code_weathers/_index.html.erb will be:
<h1>Listing Postcodeweathers</h1>
<table>
  <thead>
    <tr>
      <th>Postcode</th>
      <th colspan="3"></th>
    </tr>
  </thead>
  <tbody>
    <% @postcodeweathers.each do |postcodeweather| %>
      <tr>
        <td><%= postcodeweather.postcode %></td>
        <td><%= link_to 'Show', postcodeweather %></td>
        <td><%= link_to 'Edit', edit_postcodeweather_path(postcodeweather) %></td>
        <td><%= link_to 'Destroy', postcodeweather, method: :delete, data: { confirm: 'Are you sure?' } %></td>
      </tr>
    <% end %>
  </tbody>
</table>
And the other partial, views/city_weathers/_index.html.erb, goes like this:
<h1>Listing Cityweathers</h1>
<table>
  <thead>
    <tr>
      <th>City</th>
      <th colspan="3"></th>
    </tr>
  </thead>
  <tbody>
    <% @cityweathers.each do |cityweather| %>
      <tr>
        <td><%= cityweather.city %></td>
        <td><%= link_to 'Show', cityweather %></td>
        <td><%= link_to 'Edit', edit_cityweather_path(cityweather) %></td>
        <td><%= link_to 'Destroy', cityweather, method: :delete, data: { confirm: 'Are you sure?' } %></td>
      </tr>
    <% end %>
  </tbody>
</table>
Now, your create and delete action on the postcodeweathers controller can be:
class PostCodeWeatherController < ApplicationController
  def create
    # create code goes here
    redirect_to main_index_path
  end

  def delete
    # delete code goes here
    redirect_to main_index_path
  end
end
Same type of actions in cityweathers controller.
Have you added the additional variable inside the index method of the corresponding controller?
(NOTE: this will combine both objects into the index page of PostCodeWeather)
class PostCodeWeathersController < ApplicationController
  def index
    @postcodeweathers = PostCodeWeather.all
    @cityweathers = CityWeather.all
  end
end
This will allow you to access both sets of records in your index view.
As far as redirecting to the index page after creation and deletion, you should be able to add this inside the desired methods in the controller.
def create
  @cityweather = CityWeather.new(params[:city_weather])
  if @cityweather.save
    redirect_to action: "index"
  end
end
Computing $f \circ g$ and $g \circ f$ for functions by cases
I want to confirm my solution for the following problem from Ethan D. Bloch’s Proofs and Fundamentals.
Problem: Let $f,g: \mathbb{R} \rightarrow \mathbb{R}$ be real functions of a real variable given by
$$f(x) = \begin{cases} 1-2x, & \text{if} & 0 \leq x \\ |x|, & \text{if} & x < 0 \end{cases}, \quad \forall x \in \mathbb{R} \quad \text{and}$$
$$g(x) = \begin{cases} 3x, & \text{if} & 0 \leq x \\
x-1, & \text{if} & x < 0 \end{cases}, \quad \forall x \in \mathbb{R}.$$
Find $f \circ g$ and $g \circ f$.
Solution: We start with $f \circ g$. First, we observe that $0 \leq g(x)$, for all $x \in [0, \infty)$. So, $(f \circ g) (x) = 1-6x$, for all $x \in [0,\infty)$. Next, we observe that $g(x) < 0$ for all $x \in (-\infty,0)$. Hence, $(f \circ g)(x) = |x-1|$, for all $x \in (-\infty,0)$.
Then, we compute $g \circ f$. Following the same argument, we see that $0 \leq f(x)$, for all $x \in (-\infty,\frac{1}{2}]$. So, we have that $(g \circ f)(x) = 3-6x$, for all $x \in (-\infty,\frac{1}{2}]$. Next, we observe that $f(x) < 0$, for all $x \in (\frac{1}{2},\infty)$. Hence, we have that $(g \circ f)(x)=|x|-1$, for all $x \in (\frac{1}{2},\infty)$.
Therefore, we have that $f \circ g, g \circ f: \mathbb{R} \rightarrow \mathbb{R}$ are defined by
$$
(f \circ g)(x) = \begin{cases} 1-6x, & \text{if} & 0 \leq x \\
|x-1|, & \text{if} & x<0\end{cases}, \quad \forall x \in \mathbb{R} \quad \text{and}
$$
$$
(g \circ f)(x) = \begin{cases} 3-6x, & \text{if} & x \leq \frac{1}{2} \\
|x|-1, & \text{if} & \frac{1}{2} < x \end{cases}, \quad \forall x \in \mathbb{R}.
$$
My question:
Is this enough and correct?
Is there any online calculator (or the like) that computes these expressions?
Thank you for your attention!
Looks good to me and is enough
$g\circ f$ is wrong.
The reason that your simple approach to $f\circ g$ works is that $g(x)\geq 0$ iff $x\geq0.$
What is the mistake here?
Try $g(f(1)).$ You get $f(1)=-1,$ so $g(f(1))=-2.$ You have $g\circ f(1)=|1|-1=0.$ @AirMike
You only get $3-6x$ for $0\leq x\leq 1/2$
Basically, there should be three cases for $x$, (1) $x<0$, (2) $0\leq x\leq 1/2,$ and (3) $x>1/2.$
@ThomasAndrews thank you for your observation, I’m going to work on that!
After reading the feedback in the comments, I came with the following solution that a I believe to be the right one. So, we have that $f \circ g, g \circ f: \mathbb{R} \rightarrow \mathbb{R}$, defined by
$$
(f \circ g)(x)= \begin{cases} 1-6x, & \text{if} & 0 \leq x \\
|x-1|, & \text{if} & x < 0 \end{cases}, \quad \forall x \in \mathbb{R} \quad \text{and}
$$
$$
(g \circ f)(x)= \begin{cases} 3|x|, & \text{if} & x < 0 \\
3-6x, & \text{if} & 0 \leq x \leq \frac{1}{2} \\
-2x, & \text{if} & \frac{1}{2} < x \end{cases}, \quad \forall x \in \mathbb{R}.
$$
The problem with the initial solution was that I didn’t observe that, for $x \in (-\infty, \frac{1}{2}]$, the function $f$ is different when $x<0$ and $0 \leq x$.
Backup jobs in ARCServe fail trying to back up deleted files and directories. How can fix this?
I am using ARCServe r12 SP2. Our daily backup jobs are failing while trying to back up files and directories on a remote server that have been deleted. How can I fix this?
BackupExec will occasionally exhibit this behavior as well. Refreshing your backup selection list may resolve it.
I had this problem with an old ARCServe version. It makes a difference whether you select a path to get everything under it (like c:\ with all subdirectories), rather than just selecting some directories under a root (like d:\group\share1, d:\group\share2, etc.).
In the latter case, if share1\ is deleted, it throws an error.
N.B. In the ARCServe version I was using, I had to click twice on the [+] sign to select everything correctly.
Saving and reusing mate-terminal configuration
I installed the mate-terminal package on my Ubuntu 20.04. Then I executed the mate-terminal command from a native terminal. Next I created some profiles and opened several tabs with different profiles. I would like to save this configuration and have it automatically loaded next time I call mate-terminal.
The available documentation is not clear about how to achieve this. I used dconf dump /org/mate/ > mymate to save the configuration. My profile descriptions appear in the output file, but there is no info about my tabs.
You are going in the right direction.
To backup settings and profiles use:
dconf dump /org/mate/terminal/ > bkp
To restore use:
dconf load /org/mate/terminal/ < bkp
A practical example about profiles is available in another Q&A.
If you want to have customized tab layout - then Terminator may be a good choice. It uses the same VTE library under the hood.
Django display table in templates
I am using Django 2.0.1, and I have the following code:
Models.py:
class Category(models.Model):
    category_name = models.CharField(max_length=50)

class CategoryItems(models.Model):
    category_name = models.ForeignKey(Category, related_name='categoriesfk', on_delete=models.PROTECT)
    item_name = models.CharField(max_length=50)
    item_description = models.CharField(max_length=100)
Thereafter my views.py:
def data(request):
    categories_query = Category.objects.all()
    category_items_query = CategoryItems.objects.all()
    return render_to_response("data.html",
        {'categories_query': categories_query, 'category_items_query': category_items_query})
In the template I'm trying to display all items for each category. For example, suppose there are 4 categories, e.g. Bicycle; then it should display only the items belonging to that category, as follows:
Category 1:
Category_Item 1,
Category_Item 2,
Category_Item 3,
and so on ...
Category 2:
Category_Item 1,
Category_Item 2,
Category_Item 3,
and so on ...
I have tried to write many different for-loops, but they all display every item. I need it to show only the items for one category, then loop to the next category and show its items.
You don't need your category_items_query variable, just categories_query:
{% for category in categories_query %}
<b>{{ category.category_name }}</b><br />
{% for item in category.categoriesfk.all %}
{{ item.item_name }}<br />
{% endfor %}
{% endfor %}
Your related_name of categoriesfk is weird, it'd make more sense to be something like items.
Fantastic! So clean!
What you need is two for loops with an if to check if the second loop should belong to the first.
Try this:
{% for category in categories_query %}
{% for category_item in category_items_query %}
{% if category_item.category_name == category %}
{# do something with category_item #}
{% endif %}
{% endfor %}
{% endfor %}
I believe it would be clearer if you named the ForeignKey in CategoryItems just "category", instead of "category_name", since this field holds the category itself, not just its name. Your "if" would then be more readable and make more sense:
{% if category_item.category == category %}
Hope it helps.
A few things,
Since your model name is already Category, your field names should be like name instead of category_name.
Model names must be singular, so it should be CategoryItem instead of CategoryItems.
When you do a model_name.objects.all(), you do not get a query but a QuerySet; make your variable names describe what they hold. Currently, categories_query is misleading. You could instead use category_qs.
Now, coming to your question, you require two for loops. One to loop through the categories and then one to loop through items in a particular category.
Something like,
for category in category_qs:
    for item in category.categoriesfk.all():
        # Do something with item
You have the basic idea here, now you can convert it to real working code. Good luck!
Grouping data in the template is not the best idea, as templates should be logic-free.
It would also be good to use the database's capabilities and fetch the related set up front with prefetch_related (select_related does not follow reverse foreign keys):
category_data = Category.objects.prefetch_related('categoriesfk').all()
afterwards you could do following in template
{% for category in category_data %}
{# print category #}
{% for item in category.categoriesfk.all %}
{# print item #}
{% endfor %}
{% endfor %}
Django has a built-in regroup template tag for this functionality.
{% regroup category_items_query by category_name as categoryname_list %}
{% for category in categoryname_list %}
<strong>{{ category.grouper }}</strong><br>
{% for item in category.list %}
{{ item.item_name }}: {{ item.item_description }}<br>
{% endfor %}
{% endfor %}
You have to order your CategoryItems for this functionality to work.
category_items_query = CategoryItems.objects.all().order_by('category_name')
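The regroup tag mirrors what itertools.groupby does in plain Python. If you prefer grouping in the view instead of the template, the same idea can be sketched like this (the plain dicts below are hypothetical stand-ins for the queryset rows):

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical rows standing in for CategoryItems values
items = [
    {"category": "Bicycle", "item_name": "Wheel"},
    {"category": "Garden", "item_name": "Rake"},
    {"category": "Bicycle", "item_name": "Chain"},
]

# Like regroup, groupby only merges *adjacent* rows, so sort by the key first.
items.sort(key=itemgetter("category"))
grouped = {
    category: [row["item_name"] for row in rows]
    for category, rows in groupby(items, key=itemgetter("category"))
}
print(grouped)
```

The resulting dict maps each category to its items and can be passed to the template, keeping the template itself logic-free.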
| common-pile/stackexchange_filtered |
MySQL – Should I use last_insert_id()? or something else?
I have a table users that has an auto-incrementing id column. For every new user, I essentially need to insert three rows into three different tables: 1. user, 2. user_content, and 3. user_preferences. The rows inserted into user_content and user_preferences are referenced by ids corresponding to each user's id (held in user).
How do I accomplish this?
Should I do the INSERT INTO user query first, obtaining that auto-incremented id with last_insert_id(), and then the other two INSERT INTO queries using the obtained user id? Or, is there a more concise way to do this?
(note: I am using MySQL and PHP, and if it makes a difference, I am using a bigint to store the id values in all three tables.)
Thank you!
First, get it working, then make it fast. The most optimised code won't be worth the characters used to write it if it doesn't do the right thing.
Yes, indeed, but I do try to "measure twice, cut once" when doing things such as this.
I see 3 options:
PHP side using some *_last_insert_id (as you describe)
Create a trigger
Use a stored procedure.
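The first option is the common one: insert into user, read back the generated id, then do the dependent inserts inside one transaction so a failure can't leave a half-created user. A minimal sketch, using Python's sqlite3 standing in for MySQL/PHP (the column names are hypothetical; with MySQL you would use LAST_INSERT_ID() or the driver's insert_id/lastInsertId()):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # sqlite3 stands in for MySQL here
conn.executescript("""
    CREATE TABLE user (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT);
    CREATE TABLE user_content (user_id INTEGER, content TEXT);
    CREATE TABLE user_preferences (user_id INTEGER, theme TEXT);
""")

with conn:  # one transaction: either all three rows land, or none do
    cur = conn.execute("INSERT INTO user (name) VALUES (?)", ("alice",))
    user_id = cur.lastrowid  # the auto-incremented id just generated
    conn.execute("INSERT INTO user_content (user_id, content) VALUES (?, ?)",
                 (user_id, ""))
    conn.execute("INSERT INTO user_preferences (user_id, theme) VALUES (?, ?)",
                 (user_id, "default"))

print(user_id)
```

Wrapping the three inserts in a transaction is what makes this safe: if the second or third insert fails, the first is rolled back too.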
| common-pile/stackexchange_filtered |
@focus event does not bubble by default
Vue version:
3.2.39
When I click on the button, the focus does not bubble to the wrapper, and that is why it does not show "wrapper focus". How can I fix this?
<template>
<div tabindex="-1" @focus="onWrapperFocus" @blur="onWrapperBlur">
<button @focus="onInnerFocus" @blur="onInnerBlur">Hello</button>
</div>
</template>
<script >
export default {
setup() {
return {
onWrapperFocus() {
console.log("wrapper focus");
},
onWrapperBlur() {
console.log("wrapper blur");
},
onInnerFocus() {
console.log("inner focus");
},
onInnerBlur() {
console.log("inner blur");
},
};
},
};
</script>
Expected behavior:
inner focus
wrapper focus
As it turned out, this is not related to Vue. One just needs to read the documentation from time to time. About focusin/focusout:
https://developer.mozilla.org/en-US/docs/Web/API/Element/focusin_event
Instead of focus/blur, I use focusin/focusout, and it works the way I wanted.
There is another event phase called "capturing". It is rarely used in real code, but it can be useful in this case.
You can add the capturing phase by adding .capture after the event name like shown below:
<div tabindex="-1" @focus.capture="onWrapperFocus" @blur.capture="onWrapperBlur">
<button @focus="onInnerFocus" @blur="onInnerBlur">Hello</button>
</div>
| common-pile/stackexchange_filtered |
Law of Demeter and over-wide interfaces
The Law of Demeter makes sense in some obvious cases.
# better
dog.walk()
# worse
dog.legs().front().left().move()
dog.legs().back().right().move()
# etc.
But in other cases it seems to lead to an interface that is too wide, and doesn't really feel like it "hides" any knowledge.
# Is this worse?
print(account.user().fullName())
print(account.user().socialSecurityNumber())
# Is this better?
print(account.userFullname())
print(account.userSocialSecurityNumber())
It feels like account now has methods for things that don't really have anything to do with accounts, just to avoid a technical violation of the Law of Demeter. Instinctively, account.userFullName() is pretty "code smelly".
Is anyone aware of any more specific guidelines or refinements of the LoD that help account for the difference between the "dog" cases, where the principle clearly makes sense, and the cases where it doesn't? And how do you avoid the LoD leading to over-wide interfaces?
One principle I have heard is that it matters less in a context of immutability, but many have disputed this.
If you need a guideline, then this should work.
@laiv that's just structural thinking. That could be enforced by a static code analyzer. Java 8 Streams prove that there are good exceptions to these rules. It's best to understand why the rules exist.
What about a 3-legged dog? Or an account with 2 users?
For internal code I try to make my code follow the "worse" example. Hiding knowledge is good for consumers, but not so helpful when trying to debug or extend.
# better
dog.walk()
# worse
dog.legs().front().left().move()
dog.legs().back().right().move()
# etc.
There are two reasons why the second is worse. The first, not really directly LOD-related, is that your walk logic isn't reusable, which is a problem in the case where there are multiple places in your codebase where a dog must walk.
The LOD-related reason why this code is bad is because it forces the current consumer to know that DogLeg exists and how to operate it. The unspoken expectation here is that your consumer only knows about a Dog, and that it can be made to move around, but how that Dog moves around isn't something the consumer cares about (that's up to the Dog to manage for themselves).
However, that is not necessarily the case for your other example.
account.user().fullName()
If Account and User are both domain objects of which your consumer has public knowledge, then there's no issue with asking them to handle a User object directly.
The expectation here is that "the account refers to its owner" is part of the Account interface, and therefore returning the owner (represented by a User object) is fair game.
Comparatively, your Dog interface is not expected to include "the dog has legs", but rather "the dog is able to move around", and the legs are just an implementation detail so the dog is able to fulfill its contract (i.e. moving around). The interface itself doesn't specify the existence of legs, and therefore the consumer of Dog shouldn't be relying on the existence of legs.
In essence, a DogLeg is considered a private implementation detail, whereas a User (class) is publically known. This means that there's significantly less issue with expecting your consumer to handle a User than there is with expecting them to handle a DogLeg.
That being said, if account.user() was actually an AccountUser object which would also be considered a private implementation detail, then the same principle applies as it does for DogLeg.
This is what makes LOD so tricky to pinpoint. It's not something that is objectively true based on your code alone, it hinges on subjective context and expectation of interfaces/contracts. By renaming the code, you change the reader's implicit expectation, which can change whether something is considered an LOD violation.
dog.legs().front()
account.user().fullName()
Technically, it's the same code. But what changes is our expectation of how acceptable it is to force a consumer to directly handle a DogLeg vs forcing them to handle a User.
The meaning of LoD in this regard is not really up for debate, it is quite clearly written. Calling methods on instance variables of other objects is not allowed. It doesn't matter what the user of the code or the writer expects.
@RobertBräutigam the law is clear, but it's a rule of thumb, not an actual law. The question isn't "what does the law say?", but "should I listen to the law in this case?". It should make you ask questions like this, not automatically refactor every line with two or more dots.
@RobertBräutigam "Calling methods on instance variables of other objects is not allowed." The Law of Demeter is not a dot counting exercise. You cannot and should not judge LOD violations based on the accessing code itself. The surrounding context matters in making the accessing code applicable or non-applicable to LOD considerations.
I've made a factual observation about your comment. Whether something is or is not a LoD violation does not depend on the "reader's implicit expectation" as you've tried to argue. That's it. Am I wrong? For reference, here's the original paper.
@RobertBräutigam: It seems you misunderstood my answer. The implicit expectation of the reader of the example code does matter, because we (as the reader) have to infer what the intended contract of the Dog and Account classes is. Example code in questions is usually light on rigorous analysis documentation that would shed light on the things that we are now left to infer. So whether or not we designate this example code as a LOD violation or not, hinges on our implicit inference on what is the contract and what is a private implementation detail.
@RobertBräutigam: Just to finish that thought, based on the presented example code, it's in my opinion reasonable to infer that a dog's legs are an implementation detail and the Dog contract only stipulates movement; whereas the Account contract likely stipulates that it has an owner (represented by a User, whose contract in turn stipulates that users have a last name) and not that the account contains its owner's last name specifically. I cannot guarantee that this is correct, but this is why it's called an inference.
@Flater I like this idea of telling the objective and letting the class carry it out without worrying about implementation details: dog.goto(master) seems simpler than telling it the route. I didn't elaborate on dog in my own answer because there's no LoD issue (if a class knows about dog, we can assume it can use its public interface; we can argue about the granularity of the interface, but this is not LoD-relevant in this case, IMHO).
@Flater the law of demeter is not subjective, you simply choose whether or not it is worth fixing.
This is a great answer and illustrates why dogmatic adherence to the LoD is generally misguided.
@user949300 Hehe, "dogmatic" :) I see what you did there :)
account.getUser().getName() will cause the troubles LoD tries to protect you from if User changes, and that's not a matter of perspective. It will turn all the classes coupled to User red, whether you believe User must be accessible everywhere or not. If you need access to the user, then pass the user, not the account. How and who retrieves the user from the account is a different matter.
@Laiv: If the Account contract stipulated that it contains a reference to a User, then this is not an LOD violation. This means that the consumer who calls account.getUser() is inherently expected to be able to work with a User object, since the Account class contract isn't set up to do it for them. There is a difference between "an account contains its owner's name (and storing that name in a User is an implementation detail" and "an account contains a reference to its owner (i.e. User)". In the former, account.getUser().fullname() is an LOD violation, in the latter it is not.
@Flater then the consumer is inherently expected to know about the interface of User and so on. LoD sets the limits where this inherently should end.
@Laiv: The "inherently" ends based on how the contracts have been stipulated. LOD at its very core aims to disallow directly handling private implementation details, and instead tells you to only use the contract as it has been specified (and, by extension, to develop your classes so that their private implementation details don't leak and you adhere to their contract).
@Laiv: People always tersely summarize LOD as "don't talk to strangers", but I'd argue that it's more a matter of "don't handle others' privates, even if they've exposed them". But that's maybe a less SFW interpretation.
By this reasoning (the contract interpretation) I could argue that DogLeg is expected to be part of the Dog public contract and hence not breaking LoD either. If you leave it open to interpretation, then LoD is useless. If you want to narrow down the coupling of certain elements, you need to set a sort of rule of thumb. Rules you can break when it is convenient. The account example could be one of these convenient cases. But it doesn't mean you are not breaking LoD, because you are inherently inheriting the issues LoD is trying to prevent. This is neither good nor bad. But it is what it is.
Anyway, agreed: principles and "laws" should be read and interpreted carefully and applied in the way that best meets your needs. Even break them occasionally if needed.
@Laiv: You can definitely argue that interpretation, but my answer explicitly mentions that calling it a LOD violation is based on the inference that a dog leg is not considered part of the Dog contract. It assumes a contract like interface IDog { void Move(); } where it only stipulates that a dog can move. How that dog moves is a private implementation detail. Whether it uses legs, hoverjets, or wind sails is irrelevant as far as the contract is concerned. But I agree that OP left the contract up to inference, cfr my comment to RobertBrautigam (13 comments up from here).
The LoD or principle of least knowledge has nothing to do with immutability. It’s about decoupling systems by decreasing indirect dependencies.
In your dog case, you only need to know about dogs to make your chained invocations.
In your account case, when some module is working with accounts, it also needs to know about users, and perhaps even about addresses. So it’s not just about friends but also about friends of friends. This leads to complex and time-consuming change propagation: e.g. a change in the address class may impact user class and account class.
The way out is to tell objects what they should do without micromanaging the details:
account.print(); // account will invoke user.print()
This is of course easier said than done, because it may create other issues (e.g. bloated interfaces if you need printSummary() and printDetails()). And sometimes, you just need to know about friends of closer friends. In the end the design will not be about respecting all the "laws" but about balancing different principles to get an optimal fit.
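The "tell objects what to do" idea can be sketched as follows (hypothetical classes; the summary() method name is just illustrative):

```python
# Contrasting "reaching through" with "tell, don't ask".
class User:
    def __init__(self, full_name):
        self.full_name = full_name

    def summary(self):
        return self.full_name


class Account:
    def __init__(self, user):
        self._user = user  # the owning user is an internal detail here

    def summary(self):
        # The account delegates; callers never touch the User directly.
        return f"Account of {self._user.summary()}"


account = Account(User("Ada Lovelace"))
print(account.summary())  # consumer talks only to its immediate friend
# versus reaching through: account.user().full_name couples the consumer to User too
```

Here the consumer depends only on Account; a change to how User stores or formats its name propagates no further than Account.summary().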
Adding more methods to an interface is not always the solution to the Law of Demeter. You do not need to distinguish these cases, you need to find a different way to resolve them.
From the wikipedia page:
On the other hand, at the class level, if the LoD is not used correctly, wide (i.e. enlarged) interfaces may be developed that require introducing many auxiliary methods. This is due to poor design rather than a consequence of the LoD per se. If a wrapper method is being used, it means that the object being called through the wrapper should have been a dependency in the calling class.
Take your second example:
print(account.user().fullName())
print(account.user().socialSecurityNumber())
Does your caller here need to interact with an account at all? Remove account as a dependency and provide user as an argument instead:
print(user.fullname())
print(user.socialSecurityNumber())
Indeed, it’s all a question of balance between the different principles and needs. In fact, adding new methods like that may even create a hidden coupling (e.g. if for privacy reasons you’d get rid of social security in the user, you’d have to change a lot of methods.
Decide Who Your Neighbors Are
The premise of this law is not to avoid "chaining" of function calls (or property/field access, etc.), but rather to limit the "reach" of one class of objects. In the case of the walking dog, the caller must know about the dog, a collective structure that contains all of the legs, and about an individual leg. This requires tight coupling between the consumer and the leg details, so it's better to let the dog handle its own legs.
Starting at the beginning
User Robert Bräutigam linked to the original paper in a comment to another answer. I thought it best to transcribe the actual text of the "law" here:
For all classes C, and for all methods M attached to C, all objects to which M sends a message must be instances of classes associated with the following classes:
The argument classes of M (including C).
The instance variable classes of C.
(Objects created by M, or by functions or methods which M calls, and objects in global variables are considered as arguments of M.)
K. Lieberherr, I. Holland, and A. Riel. 1988. Object-oriented programming: an objective sense of style. In Conference proceedings on Object-oriented programming systems, languages and applications (OOPSLA '88). Association for Computing Machinery, New York, NY, USA, 323–334. DOI:https://doi.org/10.1145/62083.62113
Classes, not Instances
To apply the law, look at the classes* that are known to the consuming class C. If C stores an instance of class A, or can access an instance of A through global/hierarchical scope, or accepts an instance of A in a given method, then the damage is done. That method of C already "knows about" A, and depends on the contract it provides. That consumer method may call any method of any instance of A, no matter where it came from (or how complex it was to find it).
This is what differentiates the dog example from, say, fluent APIs. In a fluent API, you may chain a dozen function calls together, but each return value tends to be an instance of the same class! Each successive result may be a new instance, but the point is that there is only one class. You're counting classes, not instances or consecutive invocations.
*Presumably, this applies to all manner of types including specific interfaces; you don't "know about" MySqlDbConnection just because you know about DbConnection.
The Spirit of the Law
Does the dog example break the Law of Demeter? We technically don't know, but it probably does. We don't have much information about the consuming context; does it "know about" dog legs because of some instance field we can't see? If so, the law permits the consumer to call any method of any leg.
Thus, the problem with the dog example isn't that the consumer accesses a deeply nested function (a direction for one particular among a set of legs on a dog), but that the consumer has to understand the concepts of a dog and of a set of legs and of individual legs. You don't fix it by hiding method calls, you fix it by decoupling from the classes that contain those methods.
The Law of Demeter doesn't reduce coupling so much as it keeps coupling explicit. From there, you can more easily detect and address issues related to coupling.
What about Wallets?
A popular analogy (apparently created by David Bock) when talking about the Law of Demeter is that of a paper boy that is owed money by a customer. The paper boy doesn't ask for a wallet so that he can grab the money, he just asks for the money. Who cares if the customer keeps it in a wallet, and why should the paper boy have to get the money from it anyway?
Interestingly, if the paper boy has a wallet of his own, this does not break the Law of Demeter! The paper boy knows how to use a wallet, so he knows how to use the customer's wallet. The real lesson of this scenario is to assign responsibilities properly and to keep implementation details (such as wallet vs. money clip vs. wad of bills) private.
Tl;dr
Here are some possibilities for the name example, vis-à-vis the Law of Demeter, assuming account.user() returns type User.
Maybe there is nothing wrong with account.user().fullName()! If the consumer does other work involving Users, then the consumer has already "paid the price" of depending on the User class contract.
If it's not explicit now, then make it explicit! Work with the user directly - don't go through account. Instead of calling account.user(), have the consuming method explicitly depend on one User instance as a parameter. Now, even if account.user() suddenly returns a different class, that's somebody else's problem. The consumer can focus on working with a User. Notably, this only "adds" coupling to User for the consuming method in question, not the whole consuming class.
You could create a dedicated interface for situations like this. The consumer doesn't rely on the User, but only on some abstraction that carries a smaller contract than a full User.
| common-pile/stackexchange_filtered |
Migrating UI-Bootstrap modal to component based
I'm studying UI-Router and the UI-Bootstrap modal and ran into a problem. The version cloned from the UI-Bootstrap tutorial works fine, but after migrating to the component-based version the modal window does not show. Can anyone help me figure out what is wrong?
You should refactor the function declarations as follows:
From var modalDemoCtrl = function($scope, $uibModal, $log, $document)
to function modalDemoCtrl($scope, $uibModal, $log, $document)
and from var ModalInstanceCtrl = function($uibModalInstance)
to function ModalInstanceCtrl()
This way you can reference the functions from earlier code in the file, because function declarations are hoisted to the top of their scope, while var assignments are not.
Also, you shouldn't pass $uibModalInstance to ModalInstanceCtrl.
Thank you very much @lzagkaretos, after the modifications it worked (I am still trying to understand why). Sorry, I'm a newbie with reputation under 15, so I cannot vote you up.
@Oops, you can accept my answer, and when you get the chance, come back and vote up! Thanks.
| common-pile/stackexchange_filtered |
Why do I need to add index.php after downloading my web files for offline use?
I wonder what the difference is between my page offline and online.
After downloading my page for offline use, I need to add index.php after the folder path in order to make it viewable:
http://localhost/gsaconst/index.php/articles/news
my online page would be
http://www.gsa-constructionspecialist.com/articles/news
How can I remove the index.php from the offline URL and still make the page viewable, instead of getting an error message like:
Object not found!
The requested URL was not found on this server. The link on the referring page seems to be wrong or outdated. Please inform the author of that page about the error.
You need to add an .htaccess file to the offline directory:
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php/$1 [L]
Also make sure you remove index.php from the config file, i.e. set $config['index_page'] = ''; (an empty string instead of 'index.php').
You'll find the config file at application/config/config.php.
| common-pile/stackexchange_filtered |
Is it worth learning Ada instead of other languages [C++, C#]?
If I'm going to make robots, which language do you recommend?
In our university we can choose between several languages. Most of the students choose Ada just because our teacher uses it.
After some research I found out that Ada is old. What are your thoughts?
Is it worth learning?
Regarding 'old' - The latest revision of Ada is Ada 2012.
Why "instead"? Why not "alongside"?
C++ isn't old? Okay, okay, it's younger than Ada. By three years. :D
The "++" of C++ is as old as the oldest version of Ada (1983). The "C" part is more than a decade older!
your teacher using Ada is the most important reason to pick Ada. The language (s)he prefers is the language (s)he is best able to teach you.
C++ for embedded system, and C# for Windows OS application, for Ada...here is an opinion (https://www.reddit.com/r/ProgrammingLanguages/comments/eyeonv/whatever_happened_to_the_language_ada/fggvgc1/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button)
Yes. Very much so, in my opinion.
Ada is very good in two points that are often overlooked in other programming languages:
The generic system: packages, functions, and procedures can all be generic. This works together particularly well for ingraining the DRY [Don't Repeat Yourself] principle. Yes, you can go overboard on generics, but what I've seen in (other-language) projects is that cut-and-paste programming seems to be more common [and that just introduces multiple points of failure if a bug needs to be fixed in that spot of code].
Native multitasking. One of the great things about a language-level tasking feature is that multithreading doesn't feel 'tacked on' and isn't an unwieldy extension but integrated into the language.
That said, there are other good reasons to learn/use Ada. One is the picky compiler; by being so picky it forces you to start to think in ways that naturally reduce programming errors. (Like the prohibition on mixing and and or in the same conditional without parentheses.)
Another is the concept of subtypes; subtypes are the opposite of objects in OOP: instead of branching out the further you derive, you narrow the type (its acceptable values) the further you get from the base type. {Think of it in terms of proper sets: Integer is your base type, Natural are your non-negative Integers, and Positive are your positive Integers... and with Ada 2012, you can have interesting exclusions in your subtypes, say Primes, which are a subtype of Positive (sorry for being a bit vague on that last point; I'm still learning the 2012 stuff).}
With Ada 2012, you can also have pre- and post-conditions on your subprograms; this alone can make your libraries more stable. (As can null-excluding pointers.)
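As a rough analogy (not Ada, just an illustration of the narrowing idea), subtype-style range checks can be mimicked in Python. In Ada, Natural and Positive are predefined subtypes of Integer, and a violating assignment raises Constraint_Error:

```python
# Illustrative analogy only: mimic Ada's subtype narrowing with runtime checks.
def make_subtype(name, predicate):
    def check(value):
        if not predicate(value):
            # Ada would raise Constraint_Error at the assignment
            raise ValueError(f"{value} violates subtype {name}")
        return value
    return check

# Ada: subtype Natural  is Integer range 0 .. Integer'Last;
# Ada: subtype Positive is Integer range 1 .. Integer'Last;
natural = make_subtype("Natural", lambda v: v >= 0)
positive = make_subtype("Positive", lambda v: v >= 1)

print(natural(0))   # accepted
print(positive(3))  # accepted
try:
    positive(0)     # rejected: not in the narrowed range
except ValueError as exc:
    print("rejected:", exc)
```

The point of the analogy: each subtype accepts a smaller set of values than its base, and violations are caught at the boundary rather than deep inside the program.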
Peter Norvig's advice is probably relevant here:
When asked "what operating system should I use, Windows, Unix, or Mac?", my answer is usually: "use whatever your friends use." The advantage you get from learning from your friends will offset any intrinsic difference between OS, or between programming languages.
Teach Yourself Programming in Ten Years
The whole essay is a good read.
Incidentally, Ada was the first language that I learned. Learning Ada first had one major advantage: no one uses Ada, so I was forced to learn another language before doing anything else, and thereby gain perspective. One of the worst problems that beginning programmers can have is the belief that the way things are done in their first language is just The Way Things Are Done. Learning a new language is easy and immensely valuable.
If your teacher and most of your fellow students use Ada, it is probably best to go with Ada. It is a very good language, both for small and large scale programs. It is very readable and it protects you (to some degree) against a lot of common mistakes.
Ada is for software engineering. As darkestkhan said, it is not a "hack and hope for the best" language.
The advice of Paul Stansifer is very good: learn several languages. Make sure you have plenty of tools at your disposal - don't limit yourself to just one language. The perspective you gain from learning different languages is very valuable. There's definitely more than one way to skin a cat - take a look at Haskell, for example. You'll gain a whole new perspective on how to solve problems.
Language syntax is the smallest part of learning how to program. The hard part is learning how to model complex real-world problems into something that makes sense in the very limited scope of a programming language. With its strong type system, tasking model, generics, and solid tools for real-time and concurrent programming, Ada is ideally suited to this job.
Since you're asking specifically about robotics, a good book to get might be "Building Parallel, Embedded, and Real-Time Applications with Ada" by John W. McCormick, Frank Singhoff and Jerome Hugues. It's a very good read.
http://www.cambridge.org/gb/knowledge/isbn/item5659578/?site_locale=en_GB
Ada is definitely worth it. In addition to the two points written by Shark8, one has to add that almost all languages focus on ease of writing instead of ease of reading (and most often you are going to write once and read many times). Also, the culture surrounding Ada is quite different from the mainstream: it focuses on programs working correctly and on readable source, instead of "hack and make it work while counting on luck that no bugs were introduced".
As for age: oh, come on, C is even older. What is so important about the age of the language itself? You should take a look at what was put into Ada 83 before you can even speak about age being an issue (hint: mainstream languages started reaching the feature set of Ada 83 in the second half of the 90s, and many features [notably subtypes] still aren't mainstream). Besides, the newest Ada standard is from 2012; if you can call that old, then I don't know what is "new".
I have this idea that the upcoming C++20 standard may come close to Ada83 in terms of compile-time checking ability. However, the ability to generate or not generate a warning/error based on analysis of control flow is still missing and seems unlikely to ever be added. Further, the real problem with C++ is that you need to somehow disallow a plethora of unsafe 'legacy' constructs that were necessary prior to the newest standard.
C++ isn't exactly a spring chicken you know. I think it is all of one decade younger than Ada.
To make matters worse, both C++ and C# derive the core of their language from C, which is actually a decade older than Ada.
C was designed to make construction of compilers easier (not the programmer's job) back in the days when 16KB was a lot of RAM and hard drives were as big as industrial air conditioners. C++ basically started as good old late-60's C with OO features bolted on, and C# is essentially Microsoft's tweaks to C++ to make it more Java-like. At their core, C# and C++ are actually older than Ada.
There are a lot of hip new languages out there to learn (and at some point you should learn them). However, C++ and C# aren't them.
While Shark8 has some very good points about Ada as a language in general, the question was specifically about robotics. In this respect, I would say "don't bother". After more than 10 years of experience doing all sorts of robotics that range from Lego to state of the art robot arms and hands, I have never seen a robotic system that used Ada.
At the end of the day, robotics is hard. So hard that you are not going to tackle it all on your own. And that means leveraging lots of existing libraries and frameworks. And this means that you will be using their languages, interfaces, etc. Your robotic system will likely include a mountain of code, only a tiny fraction of which will be custom for your specific application. It is possible to write your tiny part in Ada and build some wrapper or interface so it can play nice with the rest of the code, but I think you will find that is a lot of work with very little benefit.
When embarking on a new robotics project, I take a pragmatic approach. I look at the existing tools, libraries, and utilities and pick the one that best suits my needs. Then I use whatever language it does. And 95% of the time that is C++ or Python.
In this case, your own "pragmatic" logic would argue for Ada, as that is what the teacher and most of the other students are using.
Yes, robotics is hard. That is actually a good reason to choose Ada, which has the preconditions and postconditions to implement Design-by-Contract (DBC, which Bertrand Meyer pioneered) and other tools to create correct code instead of quickly make a copy-and-paste coding mess. One of the people I follow, Jack Ganssle, likes Ada very much for embedded programming.
Ada is more engineering oriented than most other languages. Unlike ordinary programming languages, Ada provides a powerful capability for data engineering. That is, before an Ada developer even begins to develop algorithms, s/he designs (engineers) the data.
The Ada type model allows for the design of precise data. A trivial example is:
type Channel is range 1..135;
where every instance of this integer data type belongs to that pre-constrained range of values. This greatly simplifies the algorithms that use such instances.
The underlying principle is that when the data is engineered with precision, the related algorithms will be less complicated. This feature is rarely found in other languages.
In Ada it is common practice.
By supporting this notion of data engineering as a subset of the overall software engineering process, we can significantly reduce the Cyclomatic complexity of our overall software design.
My example is only a small taste of the full range of data engineering capabilities in Ada. However, it is that data engineering capability that makes it the ideal choice for safety-critical software, which is the niche where it is most commonly used now, in this 21st Century software environment.
It depends on what you want to do with this language. Lisp is even older than Ada and is widely used within artificial intelligence. Learn the language you want to, the language which is easy for you to learn and whose concepts are easy for you to understand. Then you can go further and learn more languages.
Well, we are going to make some advanced robots which use sensors, motors and other electronic components. Isn't it better to start with C#, for example, and learn it completely instead of learning a bit of Ada and a bit of other languages?
@Alex90 - You aren't going to learn any language completely from one or two little school projects.
"Learn the language which is easy for you to learn and whose concepts are easy for you to understand." After you've done this, then for the rest of your career, learn one paradigm you don't know every year. Learning an easy language is good when starting out, but pushing through the harder languages will teach you the most, make you well rounded, and make you much more competitive -- you will be the more professional programmer, being dedicated to tackling the hard things and being well rounded and practicing continual investment in yourself. I recommend Factor and Lisp with its macros.
Wiring up Backbone.js view and template for dialog box
In a Backbone.js template, templates/nutrients/show_template.jst.ejs, I have this:
<div id="nutrient_container">
<table class="person_nutrients">
...
<% for(var i=0; i < person[nutrientsToRender]().length; i++) { %>
<% var nutrient = y[x[i]]; %>
<tr class="deficent_nutrients">
<td>
<span class="nutrient_name"><%= I18n.t(nutrient.name) %></span>
</td>
<td><a id="show_synonyms" href="#"><%= I18n.t("Synonyms") %></a></td>
<% } %>
</table>
</div>
Then, in the Backbone.js view, views/nutrients/show_view.js, I have this:
el: 'table.person_nutrients',
parent_el: 'div#nutrient-graphs',
template: JST["backbone/templates/nutrients/show_template"],
initialize: function(options) {
...
this.render()
},
events: {
'click a#show_synonyms':'synonyms_event'
},
render: function() {
...
$(this.parent_el).append(this.template({person: this.model_object, nutrientsToRender: this.nutrientsToRender(), x: x_prep, y: y_prep}))
},
synonyms_event: function(event) {
alert("I got called");
}
Why doesn't the event (the alert box) get triggered? I click the link for "Synonyms" and all I get is the root url with a # after it. Why doesn't the Javascript match up?
you have to event.preventDefault() before your alert to make sure the link doesn't fire regularly
@imrane, thanks. Does the wiring up look right, though?
You need to either set the el or tagName for the view somewhere
@Jack, you helped me earlier today, too - thanks. The el and parent_el for the view are set (I didn't show them.) Does the wiring up look right?
It does, are you changing the view's el somewhere? Perhaps you need to re-delegate your events (call delegateEvents). You also probably want to call event.preventDefault() like @imrane mentioned.
You mention a parent_el, what are you referring to? As far as I know there is no such thing in vanilla Backbone.js, are you using a framework on top of Backbone.js?
@Jack, I added the el's which are in the view. I don't really understand what the el's do. The site starts with Ruby on Rails and then has Backbone.js after Devise authentication. If you can, please explain more about the delegate suggestion. I don't understand what that does. Thanks very much.
Is table.person_nutrients a parent of the a#show_synonyms element, or is it its sibling?
@ekeren, this is a very interesting question. I have assumed that table.person_nutrients has rows, and each row includes a a#show_synonyms element. Eventually, I need show_synonyms to be passed a specific "nutrient" referring to which row it is. I haven't yet learned how to pass the information to the alert. Does that help answer your question?
First of all don't use id='show_synonyms' on multiple elements in a document, id must be unique. use class instead. can you show the entire template file?
@ekeren, that is very interesting, yes, I made all the links "show_synonyms", and that is likely part of it. I will post the template in a moment. Thanks so much for the help.
So, @ekeren, I want to pass "nutrient" to the show_synonyms event. You are totally right that show_synonyms should have a different name, since there are going to be N of them.
I think you are misunderstanding the way a view's el works.
In backbone.js views always reference a DOM element which is its el, this DOM element can either be an existing element in the page's DOM or it can be a new element that does not exist on the DOM.
The way views in backbone.js listen to events is by delegating (binding) them to its el. If a view's el changes, or you want to change what events it listens to you can also manually (re)delegate the events by calling the delegateEvents method
There are two common patterns with views: one is where a view references an existing element on the DOM (this is done by specifying its el property or passing in an el when it's instantiated), and the second is where the view's el doesn't reference an existing element on the DOM and instead renders some HTML into its el, which is then inserted into the DOM.
Very often what's done is that you have one parent view (often a collections view) which references an existing element on the DOM, and then a bunch of child view's whose el gets appended.
In your case you probably want to split up your view into a parent view that references a higher container div, and have the child views' els appended into it.
For example
<div id="dvContainer">
<div id="synomCont"></div>
<input type="button" id="btnAdd" value="add" />
</div>
var ParentView = Backbone.View.extend({
    el: '#dvContainer',
    events: {
        "click #btnAdd": "addSyn"
    },
    addSyn: function() {
        // here you can create a model with the appropriate data, and pass
        // that in to the view
        var view = new SynomView();
        this.$el.find('#synomCont').append(view.render().el);
    }
});
Thanks, also to @ekeren. The el was part of the problem. I was able to get the code working after quite a number of changes. I will have a problem remaining, which I will post in a different thread. Thanks so much for helping me.
How do I get the current user's id in this Laravel 8 and Auth0 API?
I am working on a Laravel 8 API. I use Auth0 for user registration and login.
At registration, I need the user's id returned by Auth0 (as user_id), in order to insert it into my own users table, in a column called uid.
Also, once a user is logged in, I need to display the user's data at myapp.test/api/user-profile/show/ for which, also, I need the user_id.
For this purpose I have the code:
In routes\api.php:
Route::get('/authorization', [UserController::class, 'authorize']);
In the UserController:
public function authorize(){
$appDomain = 'https://' . config('laravel-auth0.domain');
$appClientId = config('laravel-auth0.client_id');
$appClientSecret = config('laravel-auth0.client_secret');
$appAudience = config('laravel-auth0.api_identifier');
$curl = curl_init();
curl_setopt_array($curl, array(
CURLOPT_URL => "$appDomain/oauth/token",
CURLOPT_RETURNTRANSFER => true,
CURLOPT_ENCODING => "",
CURLOPT_MAXREDIRS => 10,
CURLOPT_TIMEOUT => 30,
CURLOPT_HTTP_VERSION => CURL_HTTP_VERSION_1_1,
CURLOPT_CUSTOMREQUEST => "POST",
CURLOPT_POSTFIELDS => "{\"client_id\":\"$appClientId\",\"client_secret\":\"$appClientSecret\",\"audience\":\"$appAudience\",\"grant_type\":\"client_credentials\"}",
CURLOPT_HTTPHEADER => array(
"content-type: application/json"
),
));
$response = curl_exec($curl);
$err = curl_error($curl);
curl_close($curl);
if ($err) {
return "cURL Error #:" . $err;
} else {
return $response;
}
}
The problem
The authorize() method (taken from a tutorial on the Auth0 website), only returns the access_token info but not the user_id.
Questions
How do I get the current user's user_id?
I suppose I need to replace CURLOPT_URL => "$appDomain/oauth/token" with something, not with what?
Why are you using curl inside Laravel? Generally Guzzle or Laravel's builtin HTTP client is used: https://laravel.com/docs/8.x/http-client. Also, building your own JSON is a bad idea.
@miken32 What would you do?
Guzzle or Laravel HTTP client both accept arrays of data for posting, and build the JSON for you. https://laravel.com/docs/8.x/http-client#request-data
Please look at this community answer:
https://community.auth0.com/t/how-to-get-user-information-from-the-laravel-api-side/47021/3
Make a request to the Authentication API’s userinfo endpoint to get the user’s profile.
https://auth0.com/docs/api/authentication#user-profile
Which of the endpoints seems relevant?
I have replaced CURLOPT_URL => "$this->appDomain/oauth/token" with CURLOPT_URL => "$this->appDomain/userinfo", yet there is a problem: I get Unauthorized from Postman, even though the other protected routes are accessible.
@RazvanZamfir endpoint /userinfo, but you must also add the header Authorization: Bearer with the token that you will receive after calling the authorize() method, and it's a lot easier to use Guzzle
How do I use Guzzle?
To get the id of the currently logged in user you can use
use Auth;
$user_id = Auth::user()->id;
It will give you the id of the currently logged in user. But if you need the complete logged in user's information then you can use
$user = Auth::user();
I need the user_id from Auth0, not Laravel.
System.Data.SQLite.dll isn't found trying to add a reference
(Beginner level)
I have a DB in SQLite3.
I would like to populate a DataGridView by data from the DB.
Currently I'm at the first phase to create a connection/using.
Also, when I add this: using System.Data.SQLite; I get an error:
Error CS0234 The type or namespace name 'SQLite' does not exist in the namespace 'System.Data' (are you missing an assembly reference?)
In this tutorial it guides to add System.Data.SQLite.dll as reference - in folder Program Files (x86) should appear System.Data.Sqlite, but the folder does not exist. The dll file cannot be found, I searched for it also in the Reference Manager Browse.
(Thanks in advance)
http://blog.tigrangasparian.com/2012/02/09/getting-started-with-sqlite-in-c-part-one/
First, add System.Data.SQLite with NuGet,
or run the command in the Package Manager Console
Install-Package System.Data.SQLite
Then right click on your project references, and select Add Reference.
Select Under Assemblies, Framework.
Then left click on System.Data.SQLite.
This will add Mono.Data.SqLite to your reference.
So type in your code
using Mono.Data.Sqlite;
This reference is of the same use case as System.Data.SQLite.
You can add the assembly with NuGet. Install System.Data.SQLite.
To install System.Data.SQLite (x86/x64), run the following command in the Package Manager Console
Install-Package System.Data.SQLite
Button isn't following my touch
I have used On touch method to make a button follow my touch.
My code is
b.setX(event.getX());
b.setY(event.getY());
My button shivers when I drag it and my starting point is inside the button.
But it doesn't do so when the starting point is somewhere other than the button.
The best thing to do, I think, is to declare a View that will be your touch area. Here I use the whole screen: I put the event on the Layout, which is match_parent, so it covers the whole screen:
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
final Button btn = (Button)findViewById(R.id.second);
final RelativeLayout layout = (RelativeLayout)findViewById(R.id.layout);
layout.setOnTouchListener(new View.OnTouchListener() {
@Override
public boolean onTouch(View v, MotionEvent event) {
btn.setX(event.getX());
btn.setY(event.getY());
return true;
}
});
btn.setOnTouchListener(new View.OnTouchListener() {
@Override
public boolean onTouch(View v, MotionEvent event) {
int[] pos = new int[2];
layout.getLocationOnScreen(pos);
btn.setX(event.getRawX());
btn.setY(event.getRawY() - pos[1]);
return true;
}
});
}
xml:
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:background="@android:color/black"
android:id="@+id/layout">
<Button
android:text="OK"
android:id="@+id/second"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:background="@android:color/holo_red_dark" />
</RelativeLayout>
But still if we click the button and start dragging, it misbehaves. Doesn't it?
I edited my answer. getRawX() will get you the x and y based on the device ;).
The only problem is that it's adding the action bar and the notification bar height, so you have to find a way to get them and subtract. Like getting the height of the screen and the height of the root view, for example ;)
I already knew that but please tell me the way to get height of the notification bar and Action bar.
break navigation buttons into two rows
Just started learning CSS for a week now and I'm having an issue with this. I have a navigation bar at my footer but I want it to be responsive so that when it's in mobile mode, it will split into two rows of two buttons each (4 buttons total).
Trying to figure out how to do this for hours but can't seem to get working, not sure if it has to do with my button element but I do want to continue using this.
Here is my html code:
<footer>
<!-- include navigation buttons -->
<nav id="navigation">
<button class="nav-button" href="#" style="color: #000000; text-decoration: none;">BACK</button>
<button class="nav-button" href="#" style="color: #000000; text-decoration: none;">MAP</button>
<button class="nav-button" href="#" style="color: #000000; text-decoration: none;">AREA</button>
<button class="nav-button" href="#" style="color: #000000; text-decoration: none;">NEXT</button>
</nav>
</footer>
I'm also curious: if I want to edit the design of the button (the space between the buttons, the width of the buttons, etc. how am I supposed to do this?)
You can use the Bootstrap grid for this. On small screens, two buttons will be in one line, and on bigger screens they will all be in one line.
<footer>
<!-- include navigation buttons -->
<nav id="navigation">
<div class="row">
<button class="nav-button col-sm-6 col-md-3 col-lg-3" href="#" style="color: #000000; text-decoration: none;">BACK</button>
<button class="nav-button col-sm-6 col-md-3 col-lg-3" href="#" style="color: #000000; text-decoration: none;">MAP</button>
<button class="nav-button col-sm-6 col-md-3 col-lg-3" href="#" style="color: #000000; text-decoration: none;">AREA</button>
<button class="nav-button col-sm-6 col-md-3 col-lg-3" href="#" style="color: #000000; text-decoration: none;">NEXT</button>
</div>
</nav>
</footer>
Do it like this:
#navigation div {
display: inline-block;
}
<footer>
<!-- include navigation buttons -->
<nav id="navigation">
<div>
<button class="nav-button" href="#" style="color: #000000; text-decoration: none;">BACK</button>
<button class="nav-button" href="#" style="color: #000000; text-decoration: none;">MAP</button>
</div>
<div>
<button class="nav-button" href="#" style="color: #000000; text-decoration: none;">AREA</button>
<button class="nav-button" href="#" style="color: #000000; text-decoration: none;">NEXT</button>
</div>
</nav>
</footer>
Also, see this fiddle.
I'm also curious: if I want to edit the design of the button (the
space between the buttons, the width of the buttons, etc. how am I
supposed to do this?)
Look at margin, padding, width, height etc. These are all basic properties which are explained on the very first pages of each book about CSS.
First of all, use <a> instead of <button>...href is an attribute of <a> tag, not of <button>.
...and to achieve the result you can use Flexbox (It will also remove the spaces between your buttons)
...and media query rule for mobile screens
...and to style your link apply css to .nav-button
Stack Snippet
#navigation {
display: flex;
flex-wrap: wrap;
}
a.nav-button {
flex-basis: 25%;
background: #ececec;
padding: 10px;
box-sizing: border-box;
border: 1px solid #ccc;
text-align: center;
}
@media (max-width:480px) {
a.nav-button {
flex-basis: 50%;
}
}
<footer>
<!-- include navigation buttons -->
<nav id="navigation">
<a class="nav-button" href="#" style="color: #000000; text-decoration: none;">BACK</a>
<a class="nav-button" href="#" style="color: #000000; text-decoration: none;">MAP</a>
<a class="nav-button" href="#" style="color: #000000; text-decoration: none;">AREA</a>
<a class="nav-button" href="#" style="color: #000000; text-decoration: none;">NEXT</a>
</nav>
</footer>
have you tried CSS flex?
there are many posibility with flex.
Check this Example Code
Show uniform convergence on a compact set given local comparison
For $z \in \Bbb{C}$, $n\geq 1$ let,
$$a_n(z) := e^{-z/n}\left(e^{z/n}-1-\dfrac{z}{n}\right)$$
Then, for $z \neq 0$, $$ \left(\dfrac{n}{z}\right)^2 a_n(z) \xrightarrow[n\rightarrow +\infty]{} \dfrac{1}{2}
\quad \text{ therefore }\quad |a_n(z)| \underset{n\rightarrow +\infty}{\sim} \dfrac{1}{2} \dfrac{|z|^2}{n^2} $$
Hence the series $\sum_{n = 1}^{+\infty} a_n(z)$ converges absolutely, but given a compact subset $K$ of $\Bbb{C}$, I feel one can get more, namely,
$$\sup_{z \in K} \{ |a_n(z)| \} =: \Vert a_n\Vert_{\infty}^K = \mathcal{O} \left( \dfrac{1}{n^2} \right) $$
that would give us the uniform convergence of $\sum a_n(z)$ on any compact subset. But I cannot think of any proper explanation at the moment. Any help would be gladly appreciated.
Edit: After thinking about it, I might have found a more general approach. Let $K$ be a compact subset of $\Bbb{C}$, then as $$ f(w) := \dfrac{e^{-w}}{w^2}(e^w-1-w) \xrightarrow[w\rightarrow 0]{} \dfrac{1}{2} $$
there is $r>0$ such that $$\forall w \in D(0,r), |f(w)|\leq 1$$
As $K$ is compact, it is bounded therefore there is some $M_K>0$ such that $K \subset D(0,M_K)$. Therefore, there exists $N_K \in \Bbb{N}$ such that,
$$\forall n \geq N_K, \forall z \in K, \left| \dfrac{z}{n} \right| \leq r.$$
Hence,
$$\forall n \geq N_K, \forall z \in K, \quad \left|f\left(\dfrac{z}{n}\right)\right| \leq 1,$$
which yields,
$$\forall n \geq N_K, \forall z \in K, \quad |a_n(z)| \leq \dfrac{|z|^2}{n^2} \leq \dfrac{M_K^2}{n^2}.$$
Therefore, $$ \forall n \geq N_K, \quad \sup_{z \in K} |a_n(z)| \leq \dfrac{M_K^2}{n^2}.$$
Whence the normal convergence of $\sum a_n(z)$ on any compact subset of $\Bbb{C}$.
One can show that the function
$$
f(w) = e^{-w}(e^w-1-w)
$$
satisfies
$$ \tag{$*$}
|f(w)| \le \frac 12 |w|^2 e^{|w|}
$$
for all $w \in \Bbb C$. Setting $w = z/n$ then gives the estimate
$$
|a_n(z)| \le \frac 12 \frac{|z|^2}{n^2} e^{|z|/n}
\le \frac 12 \frac{|z|^2}{n^2} e^{|z|}
$$
and since $|z|^2e^{|z|}$ is bounded on a compact set $K$, the desired conclusion
$$
\sup_{z \in K} \{ |a_n(z)| \} = \mathcal{O} \left( \dfrac{1}{n^2} \right)
$$
follows.
It remains to prove $(*)$: First we determine the power series of the function $f$:
$$
\begin{align}
f(w) &= 1-e^{-w}(1+w) \\
& = 1 - \left( \sum_{k=0}^\infty \frac{(-1)^k}{k!}w^k\right) \cdot (1+w) \\
& = \sum_{k=2}^\infty (-1)^k\left( -\frac{1}{k!}+\frac{1}{(k-1)!}\right) w^k \\
& = \sum_{k=2}^\infty (-1)^k \frac{k-1}{k!} w^k \, .
\end{align}
$$
Now we can estimate the absolute value:
$$
\begin{align}
|f(w)| &\le \sum_{k=2}^\infty \frac{k-1}{k!} |w|^k \\
& = \frac 12 |w|^2 \sum_{k=2}^\infty \frac{2(k-1)}{k!} |w|^{k-2} \\
& = \frac 12 |w|^2 \sum_{k=0}^\infty \frac{2(k+1)}{(k+2)!} |w|^{k} \\
& \le \frac 12 |w|^2 \sum_{k=0}^\infty \frac{1}{k!} |w|^{k} \\
& = \frac 12 |w|^2 e^{|w|} \, .
\end{align}
$$
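As a quick numerical sanity check of the bound $(*)$ and of the limit $n^2 a_n(z) \to z^2/2$ (a Python sketch, not part of the proof; the sample points are arbitrary):

```python
import math
import cmath

def f(w):
    # f(w) = e^{-w} (e^w - 1 - w)
    return cmath.exp(-w) * (cmath.exp(w) - 1 - w)

# The bound (*): |f(w)| <= 1/2 |w|^2 e^{|w|}, checked at a few sample points
for w in [0.1 + 0.2j, 1 + 1j, -2 + 3j, 5 - 4j]:
    assert abs(f(w)) <= 0.5 * abs(w) ** 2 * math.exp(abs(w))

# a_n(z) = f(z/n), and n^2 a_n(z) -> z^2 / 2 as n -> infinity
z = 1 + 2j
n = 10_000
assert abs(n ** 2 * f(z / n) - z ** 2 / 2) < 1e-2
```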
Thank you for this clear and well-written answer. It perfectly solves the problem. Before marking your answer as accepted, I would like to take some time to think about this problem more thoroughly, and try to come with another approach if possible, my intuition tells me there must be another way to tackle it, but I might be wrong. Thank you again, have a nice evening.
I tried answering my question in the edit, do you think it is correct?
@Axel: Yes, I think it is.
mark search string dynamically using angular.js
How can I mark my search pattern dynamically in my html?
Example:
I'm using angular and my html looks like this:
<div>
<input type="text" ng-model="viewmodel.searchString"/>
<!--Moving over all phrases-->
<div ng-repeat="phrase in viewmodel.Phrases">
{{phrase.title}}
</div>
</div>
I want the text matching the search pattern to be marked on every change of the search string.
Can you help me?
Angular UI is a great choice. You can also do it with filter like: http://embed.plnkr.co/XbCsxmfrgmdtOAeBZPUp/preview
The essence is as commented by @Hylianpuffball, dynamically create styled 'span' tags for the matches.
.filter('highlight', function($sce) {
return function(text, phrase) {
if (phrase) text = text.replace(new RegExp('('+phrase+')', 'gi'),
'<span class="highlighted">$1</span>')
return $sce.trustAsHtml(text)
}
})
And use it like:
<li ng-repeat="item in data | filter:search.title"
ng-bind-html="item.title | highlight:search.title">
</li>
great solutions, both of them. i went for the separate filter due to modularity :)
what if initial text or search phrase contain html in them?
I believe it still works with HTML text, but for the search phrase you will need to escape the special characters so that the RegExp is valid. If that's the case, you might want to switch to a more sophisticated text matching "backend" such as Lunr instead of plain RegExp.
I am not getting the class if i implement the example as given in the plukr code :(
Just in case that someone (like me a moment ago) needs this for angular2:
highlight-pipe.ts:
import {Pipe, PipeTransform} from '@angular/core';
@Pipe({name: 'highlightPipe'})
export class HighlightPipe implements PipeTransform{
transform(text:string, filter:string) : any{
if(filter){
text = text.replace(new RegExp('('+filter+')', 'gi'), '<span class="highlighted">$1</span>');
}
return text;
}
}
and use it like this:
at top of file:
import {HighlightPipe} from './highlight-pipe';
in template where 'yourText' is the original text and 'filter' is the part you want to highlight:
<div [innerHTML]="yourText | highlightPipe: filter"/>
in component:
pipes: [HighlightPipe]
EDIT:
I updated it for RC 4
and created a plunker for testing:
http://plnkr.co/edit/SeNsuwFUUqZIHllP9nT0?p=preview
How do you define the highlighted css class?
I have a similar filter that I use in a table row populated by *ngFor.
The result I get is indeed a span with the highlight class but the style cannot be applied because the css gets modified to end with [_ngcontent-hnu-2], and the browser obviously can't find my span this way. Do you have any workaround for this? At the moment, only inline style="font-weight:bold" works because I can't use css classes this way which is pretty sad :/
I defined a regular css file with a .highlighted{} class which is referenced in the index.html, the classes did not get changed. What is modifying your css class?
I made a plunker for you to see my implementation: http://plnkr.co/edit/SeNsuwFUUqZIHllP9nT0?p=preview
@BjörnKechel, never mind! I just had to move pipes: [] from @Component to @NgModule
Try Angular UI
They have a highlight directive. You can use it as a reference to make your own (or just use it directly).
There's no 'simple' way to do this in core angular... though you could argue it is 'simple' to write your own directive that does it. Either way, you need a custom directive that can search through its contents and dynamically create styled spans based on a match. I would also recommend using an existing one if possible, like Angular UI, to avoid some headache.
Inspired by @tungd's answer but valid for multiple search terms.
.filter('highlight', function($sce) {
    return function(text, phrase) {
        if (phrase) {
            var phrases = phrase.split(" ");
            for (var i = 0; i < phrases.length; i++) {
                text = text.replace(new RegExp('(' + phrases[i] + ')', 'gi'), '~~~~~$1%%%%%');
            }
            text = text.replace(new RegExp('~~~~~', 'gi'), '<span class="bold greenTxt">');
            text = text.replace(new RegExp('%%%%%', 'gi'), '</span>');
        }
        return $sce.trustAsHtml(text);
    };
});
PS: One can always limit the input to be in non-special chars for this to be 100% bullet-proof.
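Instead of restricting the input, one could escape the search phrase before building the RegExp. This is a sketch of that idea in plain JavaScript (the escape pattern below covers the standard RegExp metacharacters):

```javascript
// One way to make the highlight filter robust to RegExp metacharacters in
// the search phrase: escape the phrase before building the RegExp.
function escapeRegExp(s) {
  return s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}

var phrase = 'a+b (c)';
var re = new RegExp('(' + escapeRegExp(phrase) + ')', 'gi');
var out = 'sum a+b (c) here'.replace(re, '<span class="highlighted">$1</span>');
// out is now 'sum <span class="highlighted">a+b (c)</span> here'
```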
Can we redirect a user into a VF page from an Apex trigger
I have a scenario where, if a user uploads a file against a particular record, the user should be redirected to a VF page after the file upload is successful.
I wrote a simple trigger in which I am redirecting to a VF page. The code is as following :-
trigger CheckAttachmentName on ContentDocumentLink (after insert) {
if(trigger.isAfter) {
if(trigger.isInsert) {
system.debug('test line');
//pageReference pdf = Page.ss;
//pdf.setRedirect(true);
//PageReference pageRef = new PageReference('/apex/ss');
PageReference pageRef = new PageReference('https://xxyyyaaaa-dev-ed--c.ap4.visual.force.com/apex/ss');
pageRef.setRedirect(true);
}
}
}
As one can see I have tried a few known ways to redirect to the VF page but have been unsuccessful. Here "ss" is the simple VF page which has a hard coded text message.
Any suggestion can be very helpful for me.
Triggers cannot redirect you to a new page. You'll need to accomplish this with something like a button which brings your users into a visualforce page.
Let's take a step back ....
Triggers execute at the database layer of the software application stack. As such, they occur whenever DML occurs to the database object. In Salesforce, this can happen in many ways:
user action from the UX (e.g. clicking Save)
DML from a scheduled flow or batchable
DML within an APEX REST class
DML from execution of a native SFDC REST or SOAP API
Dataloader
BulkAPI
Downstream action from subscribing to a platform event
DML from a LWC or Aura component
...
As such, redirecting to a VF page from this layer in the software stack does not make architectural sense.
Redirection to VF pages should be done when the initial stimulus is from a UX action and the redirection should be done by the "controller" that directly responds to the stimulus.
Any idea how I can go about solving the above issue? Basically I am trying to show a custom alert message to the user after the file is uploaded...
How is the file being uploaded? Is it through the standard UX or via some custom action?
it is getting uploaded by the standard UX way....
Use a VF or LWC component on the Lightning Page for the record that queries the record's ContentDocumentLink.createdDate to decide whether an alert should be shown
How would I solve this, considering I have no values whatsoever?
I think that SQ is straight and so have tried to use Pythagoras, which leaves me with $a + b + c = ac/b$, but I don't see any values. How could I find values?
Is this question from an Olympiad paper?
I don't know - this was given to me by my maths teacher. It may well be
If you are looking for solutions with "no values whatsoever," then you may want to contact a politician. They're good at that.
Ok, I'm in a lesson now, after it I'll try to solve it for you.
You don't need any values. The desired answer is the ratio.
The thing you're missing is $a+b=c$ which, together with $a+b+c=ac/b$ leads to $2:1:3$ @oscar6721 Indeed: $a+b$ is $RQ=ST$, but $ST$ is $c$
Notice the shorter sides of the rectangle are equal and hence $a+b=c$. Using Pythagoras, we have $$(b+c)^2+(a+b)^2=(a+c)^2 \\ (2c-a)^2+c^2=(a+c)^2 \\ 4c^2=6ac \implies \frac ac=\frac 23$$ And also $$\frac ac+\frac bc = 1\implies \frac bc =\frac 13$$ This gives $$a:b:c=2:1:3$$
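As a quick sanity check of the algebra (a Python sketch, just verifying the two relations derived above at the ratio $2:1:3$):

```python
# Numeric check of the derived ratio a : b : c = 2 : 1 : 3
a, b, c = 2, 1, 3

# the shorter sides of the rectangle are equal: a + b = c
assert a + b == c

# Pythagoras on the triangle with legs (b + c) and (a + b), hypotenuse (a + c)
assert (b + c) ** 2 + (a + b) ** 2 == (a + c) ** 2
```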
Conditions to apply Gradient Descent?
I was reading "An introduction to optimization" by Edwin KP, there is a chapter where they explain the basics of Neural Networks and as example; they use gradient descent with sigmoid function as activation function for a Neural network of 3 layers (input, hidden and output layer).
Also, in all the material that I found (book included), to ensure the convergence of the algorithm (theorems), the function to minimize has to satisfy some conditions such as: being convex, having a Lipschitz gradient, or being strongly convex.
My questions are: In practice, do we have to ensure that the function to minimize has those properties? In particular, if I use least squares with a sigmoid function (as in the example in the book), will gradient descent still work? Why?
PD: In advance, im thankful for your help and i apologize for my english.
I think you might be mixing some things here. In general, gradient descent is probably the simplest optimization algorithm there is, and it requires very few assumptions. You do have to assume that the function is continuously differentiable. However, if you seek to ensure convergence to a global minimum or you wish to derive the complexity of the algorithm, things get a bit more complicated.
Given that you make no assumptions on the function, then gradient descent can only be guaranteed to advance towards a stationary point (local min/max or saddle point), if such a point exists. The other assumptions will give you additional guarantees:
If we assume the function has a Lipschitz gradient - then we can derive iteration complexity (how much the algorithm "progresses" from one iteration to the other).
If we assume the function is convex - then if it stops, it must be at a global minimum (however, such a minimum need not exist).
If we assume the function has bounded level sets, then we are guaranteed it will converge to a stationary point. Bounded level sets can be attained in various ways, most notably if the domain is compact, or if the function is coercive or strictly convex.
Strictly convex functions admit a unique global minimum (while convex functions might have a range of minimizers).
Strongly convex functions give an improved iteration complexity.
Note that neural networks optimize cost functions that do not meet any of these assumptions; moreover, they may not even be continuously differentiable. Therefore neural networks have no theoretical guarantees, but in practice they will converge to a local minimum or saddle point, at an unknown rate of convergence. In fact, they typically use a variant of gradient descent called stochastic gradient descent (SGD), which processes the data in batches at every iteration, since calculating and storing the full gradients might be too expensive in terms of the memory and computation resources required.
Thank you so much! I can see it more clearly now. Do you have any material recommendations for studying gradient descent?
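To make the basic algorithm concrete, here is a minimal gradient descent sketch in Python (my own illustration, not from the book): on a strongly convex function with a Lipschitz gradient, a small enough constant step size converges to the global minimizer.

```python
def gradient_descent(grad, x0, step=0.1, iters=1000):
    """Plain gradient descent: x_{k+1} = x_k - step * grad(x_k)."""
    x = x0
    for _ in range(iters):
        x = x - step * grad(x)
    return x

# f(x) = (x - 3)^2 is strongly convex with Lipschitz gradient f'(x) = 2(x - 3),
# so the iterates contract toward the unique minimizer x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(x_min)   # ~3.0
```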
| common-pile/stackexchange_filtered |
iOS: Convert UnsafeMutablePointer<Int8> to String in swift?
As the title says, what is the correct way to convert UnsafeMutablePointer to String in swift?
//lets say x = UnsafeMutablePointer<Int8>
var str = x.memory.????
I tried using x.memory.description; obviously it is wrong, giving me a wrong string value.
What does the pointer point to? UTF-8 bytes? NUL-terminated?
Assuming it is a valid null-terminated C string, you can use String(CString:, encoding:).
If the pointer points to a NUL-terminated C string of UTF-8 bytes, you can do this:
import Foundation
let x: UnsafeMutablePointer<Int8> = ...
// or UnsafePointer<Int8>
// or UnsafePointer<UInt8>
// or UnsafeMutablePointer<UInt8>
let str = String(cString: x) // Swift 3+
let str = String.fromCString(x) // Swift 2
Times have changed. In Swift 3+ you would do it like this:
If you want the utf-8 to be validated:
let str: String? = String(validatingUTF8: c_str)
If you want utf-8 errors to be converted to the unicode error symbol: �
let str: String = String(cString: c_str)
Assuming c_str is of type UnsafePointer<CChar> (aka UnsafePointer<Int8>), which is what most C functions return. Note that UnsafePointer<UInt8> is a distinct type, so validatingUTF8 will not accept it directly.
this:
let str: String? = String(validatingUTF8: c_str)
doesn't appear to work with UnsafeMutablePointer<UInt8>
(which is what appears to be in my data).
This is me trivially figuring out how to do something like the C/Perl system function:
let task = Process()
task.launchPath = "/bin/ls"
task.arguments = ["-lh"]
let pipe = Pipe()
task.standardOutput = pipe
task.launch()
let data = pipe.fileHandleForReading.readDataToEndOfFile()
var unsafePointer = UnsafeMutablePointer<Int8>.allocate(capacity: data.count)
data.copyBytes(to: unsafePointer, count: data.count)
let output : String = String(cString: unsafePointer)
print(output)
//let output : String? = String(validatingUTF8: unsafePointer)
//print(output!)
if I switch to validatingUTF8 (with optional) instead of cString, I get this error:
./ls.swift:19:37: error: cannot convert value of type 'UnsafeMutablePointer<UInt8>' to expected argument type 'UnsafePointer<CChar>' (aka 'UnsafePointer<Int8>')
let output : String? = String(validatingUTF8: unsafePointer)
^~~~~~~~~~~~~
Thoughts on how to validateUTF8 on the output of the pipe (so I don't get the unicode error symbol anywhere)?
(yes, I'm not doing proper checking of my optional for the print(), that's not the problem I'm currently solving ;-) ).
| common-pile/stackexchange_filtered |
Merge into nested array of dictionary
I have something like this:
vars:
process_names:
- name: myprocess
exe:
- /usr/myexe
- name: myprocess1
exe:
- /usr/myexe1
update1:
- name: myprocess2
exe:
- /usr/myexe2
update2:
- name: myprocess1
exe:
- /opt/myexe1
Consider the process_names is the original data which I want to update in 2 different scenarios.
Scenario 1: update1 contains name as myprocess2 and that entry is not available in process_names. Hence add that block. So the process_names should become:
process_names:
- name: myprocess
exe:
- /usr/myexe
- name: myprocess1
exe:
- /usr/myexe1
- name: myprocess2
exe:
- /usr/myexe2
Scenario 2: update2 contains name as myprocess1 which is already available in process_names. Hence the existing block should be updated something like this:
process_names:
- name: myprocess
exe:
- /usr/myexe
- name: myprocess1
exe:
- /opt/myexe1
- name: myprocess2
exe:
- /usr/myexe2
I've achieved Scenario 1 with a simple array concatenation (+). However, I'm struggling with Scenario 2, where I have to perform an update operation.
So far, I've tried combine and union, but no luck, e.g. {{ process_names | union(update2) }}. I need guidance and help.
This actually becomes much easier, and is the exact same operation for both scenarios, if you transform your initial list to a dictionary before applying the changes. The name becomes the unique key and you just have to combine with your update(s) (transformed to dict(s) as well). You can transform back to a list once done.
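In plain Python terms (a conceptual sketch of the same idea, not Ansible itself), the list-to-dict merge looks like this:

```python
# Hypothetical data mirroring the playbook vars below.
process_names = [
    {"name": "myprocess",  "exe": ["/usr/myexe"]},
    {"name": "myprocess1", "exe": ["/usr/myexe1"]},
]
update2 = [{"name": "myprocess1", "exe": ["/opt/myexe1"]}]

def as_dict(items):
    # items2dict: "name" becomes the unique key
    return {d["name"]: d for d in items}

merged = {**as_dict(process_names), **as_dict(update2)}  # combine: update wins
result = list(merged.values())                           # dict2items
print(result)
```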
Using only original data
The following example process_update.yml playbook:
---
- name: update a process list
hosts: localhost
gather_facts: false
vars:
process_names:
- name: myprocess
exe:
- /usr/myexe
- name: myprocess1
exe:
- /usr/myexe1
update1:
- name: myprocess2
exe:
- /usr/myexe2
update2:
- name: myprocess1
exe:
- /opt/myexe1
tasks:
- name: Show original list for comparison
debug:
var: process_names
- name: Apply updates to my list
vars:
process_dict: "{{ process_names | items2dict(key_name='name', value_name='exe') }}"
apply_updates:
- update1
- update2
my_dict_updates: "{{ apply_updates | map('extract', vars) | flatten | items2dict(key_name='name', value_name='exe')}}"
updated_process_names: "{{ process_dict | combine(my_dict_updates) | dict2items(key_name='name', value_name='exe') }}"
debug:
var: updated_process_names
Gives:
$ ansible-playbook process_update.yml
PLAY [update a process list] ***********************************************************************************************************************************************************************************************************
TASK [Show original list for comparison] ***********************************************************************************************************************************************************************************************
ok: [localhost] => {
"process_names": [
{
"exe": [
"/usr/myexe"
],
"name": "myprocess"
},
{
"exe": [
"/usr/myexe1"
],
"name": "myprocess1"
}
]
}
TASK [Apply updates to my list] ********************************************************************************************************************************************************************************************************
ok: [localhost] => {
"updated_process_names": [
{
"exe": [
"/usr/myexe"
],
"name": "myprocess"
},
{
"exe": [
"/opt/myexe1"
],
"name": "myprocess1"
},
{
"exe": [
"/usr/myexe2"
],
"name": "myprocess2"
}
]
}
PLAY RECAP *****************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Going further
The above will only work if your list elements contain 2 keys (name and exe). If you add one or more keys to your elements, they will be lost during the list => dict transformation.
Below is an example to work around this problem. It is absolutely not bulletproof, and the best solution IMO would be to transform your original structure to a dict at the source.
The following process_update_revised.yml playbook
---
- name: update a process list
hosts: localhost
gather_facts: false
vars:
process_names:
- name: myprocess
exe:
- /usr/myexe
a_key: some value
- name: myprocess1
exe:
- /usr/myexe1
a_key: other value
update1:
- name: myprocess2
exe:
- /usr/myexe2
a_key: no value
update2:
- name: myprocess1
exe:
- /opt/myexe1
a_key: changed value
tasks:
- name: Show original dict for comparison
debug:
var: process_names
- name: Apply updates to my list
vars:
process_dict: "{{ process_names | groupby('name') | map('flatten') | items2dict(key_name=0, value_name=1) }}"
apply_updates:
- update1
- update2
my_dict_updates: "{{ apply_updates | map('extract', vars) | flatten | groupby('name') | map('flatten') | items2dict(key_name=0, value_name=1) }}"
updated_process_names: "{{ process_dict | combine(my_dict_updates) | dict2items | map(attribute='value') }}"
debug:
var: updated_process_names
Gives:
$ ansible-playbook process_update_revised.yml
PLAY [update a process list] ***********************************************************************************************************************************************************************************************************
TASK [Show original dict for comparison] ***********************************************************************************************************************************************************************************************
ok: [localhost] => {
"process_names": [
{
"a_key": "some value",
"exe": [
"/usr/myexe"
],
"name": "myprocess"
},
{
"a_key": "other value",
"exe": [
"/usr/myexe1"
],
"name": "myprocess1"
}
]
}
TASK [Apply updates to my list] ********************************************************************************************************************************************************************************************************
ok: [localhost] => {
"updated_process_names": [
{
"a_key": "some value",
"exe": [
"/usr/myexe"
],
"name": "myprocess"
},
{
"a_key": "changed value",
"exe": [
"/opt/myexe1"
],
"name": "myprocess1"
},
{
"a_key": "no value",
"exe": [
"/usr/myexe2"
],
"name": "myprocess2"
}
]
}
Thanks for the 2nd solution which covers another scenario I had in my mind
Please note that the best scenario is still to use a dict in the first place, since your names are unique. Using dict2items <=> items2dict is much easier than the other way around.
| common-pile/stackexchange_filtered |
How to get topic-probs matrix in bertopic modeling
I ran BERTopic to get topics for 3,500 documents. How could I get the topic-probs matrix for each document and export them to csv? When I export them, I want to export the identifier of each document too.
I tried two approaches: First, I found topic_model.visualize_distribution(probs[#]) gives the information that I want. But how can I export the topics-probs data for each document to csv?
Second, I found this thread (How to get all docoments per topic in bertopic modeling) can be useful if I can add the column for probabilities to the data frame it generates. Is there any way to do that?
Please share any other approaches that can produce and export the topic-probabilities matrix for all documents.
For your information, this is my BERTopic code. Thank you!
embedding_model = SentenceTransformer('all-mpnet-base-v2')
umap_model = UMAP(n_neighbors=15)
hdbscan_model = HDBSCAN(min_cluster_size=20, min_samples=1,
gen_min_span_tree=True,
prediction_data=True)
stopwords = list(stopwords.words('english')) + ['http', 'https', 'amp', 'com']
vectorizer_model = CountVectorizer(ngram_range=(1, 3), stop_words=stopwords)
model1 = BERTopic(
umap_model=umap_model,
hdbscan_model=hdbscan_model,
embedding_model=embedding_model,
vectorizer_model=vectorizer_model,
language='english',
calculate_probabilities=True,
verbose=True
)
topics, probs = model1.fit_transform(data)
The probs variable contains all the topic probabilities corresponding to each individual document. You can create a dataframe from those values like so:
#convert 2D array to pandas DataFrame
topic_prob_df = pd.DataFrame(probs)
#create 'data' column - or, alternatively, an identifier column for data
topic_prob_df['data'] = data
#export as csv
topic_prob_df.to_csv('topic-probs.csv')
Thank you! In this way, I got three columns showing probabilities and one column for 'data'. But how do I know which topic with which the probability values are associated? I have 56 different topics and don't know which topic probabilities those three columns mean.
According to the source code, setting calculate_probabilities to True should return the probabilities of all topics across all documents. Can you share the shape of your 'probs' array as returned by the fit_transform method?
Yes, my 'probs' array is:
array([[2.37977792e-002, 5.68686253e-002, 9.19333595e-001],
[2.13985734e-309, 1.76208444e-309, 1.00000000e+000],
[1.25879747e-309, 2.29313720e-309, 1.00000000e+000],
...,
[9.23867916e-003, 1.72322197e-002, 9.73529101e-001],
[6.63774096e-003, 1.45281342e-002, 9.78834125e-001],
[1.60348529e-002, 4.77972587e-002, 9.36167888e-001]])
And the output for model1.get_topics() shows you 56 different topics?
Yes, and now I have 53 clusters after I ran the program again. And now it works! There must have been some wrong manipulation. Thanks a lot! But it gives probs for only 52 clusters (52 different columns for each document) not 53 clusters. Is this because the program drops the cluster (-1) as (-1) group is not meaningful?
Good to know it's working!
The number of columns should be exactly equivalent to the number of topics as they are being created using the probs array which contains all topic probabilities. So, either it's a counting error (keeping in mind that indexing in python is from 0), or, as you mentioned, the -1 topic which doesn't actually count as a topic
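Putting the discussion together, here is a library-free Python sketch of the export step with topic ids as column headers (the data below is hypothetical; in the real code, probs comes from fit_transform and the topic ids from the fitted model, and pandas would be more convenient):

```python
import csv

# hypothetical stand-ins for the real outputs
doc_ids = ["doc-0", "doc-1", "doc-2"]
probs = [[0.1, 0.9], [0.7, 0.3], [0.5, 0.5]]   # one row per document
topic_ids = [0, 1]                              # topics, excluding the -1 outlier group

with open("topic-probs.csv", "w", newline="") as f:
    writer = csv.writer(f)
    # label each probability column with its topic id
    writer.writerow(["doc_id"] + [f"topic_{t}" for t in topic_ids])
    for doc_id, row in zip(doc_ids, probs):
        writer.writerow([doc_id] + list(row))
```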
| common-pile/stackexchange_filtered |
Simple question about logarithms: $\log _{\ln5}(\log^{\log 100}n)$
Can anyone tell me why the asymptotic complexity of $\log _{\ln5}(\log^{\log 100}n)$ is $\Theta(\log\log n)$?
I thought that $\log _{\ln5}(\log^{\log 100}n)$ is $\log _{\ln5}(n)$, so the asymptotic complexity is just $\Theta(\log n)$
Hint: $$\log _{\ln5}(\log^{\log 100}n) = \frac{\log ((\log n)^{\log 100})}{\log(\ln 5)} = \frac{\log 100}{\log(\ln 5)} \log (\log n)$$
I took the $\log 100$ superscript to be composition. Your interpretation sure makes this easier.
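A quick numeric sanity check of the hint (a sketch; math.log here is the natural log, but the constant factor is the point regardless of base):

```python
import math

def f(n):
    # log base ln(5) of (log n)^(log 100)
    return math.log(math.log(n) ** math.log(100), math.log(5))

# Per the hint, f(n) = [log(100) / log(ln 5)] * log(log n), so the ratio
# f(n) / log(log n) should be the same constant for every n.
c = math.log(100) / math.log(math.log(5))
for n in (1e6, 1e12, 1e24):
    print(f(n) / math.log(math.log(n)))   # ≈ c each time
```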
| common-pile/stackexchange_filtered |
Continuity ( Functions of 2 variables ).
Given ,
$$ f(x,y) = \begin{cases}
\dfrac{xy^{3}}{x^{2}+y^{6}} & (x,y)\neq(0,0) \\
0 & (x,y)=(0,0) \\
\end{cases} $$
We need to check whether the function is continuous at $(0,0)$ or not.. The solution says it is continuous at $(0,0)$.
What I tried was the following;
For the function to be continuous at the point $(0,0)$, the limit
$$\lim_{(x,y) \to (0,0)} f(x,y)$$
should exist.
Consider
$$\lim_{(x,y) \to (0,0)} \frac{xy^{3}}{x^{2}+y^{6}}$$
I choose a path $y=mx^{\frac{1}{3}}$ and approach $(0,0)$ along this path, thus the above expression becomes
$$\lim_{(x,y) \to (0,0)} \frac{xm^{3}x}{x^{2}+m^{6}x^{2}}$$
which comes out to be
$$\frac{m^{3}}{1+m^{6}}$$
Clearly the limit isn't unique and should not exist, but the solution says that the function is continuous at $(0,0)$.
How ? Can anyone help? What am I doing wrong ?
I got the same conclusion as you. It also looks pretty discontinuous at (0,0). http://www.wolframalpha.com/input/?i=plot+xy%5E3%2F%28x%5E2%2By%5E6%29
You are correct. The answer key is wrong.
You correctly proved that limit at origin doesn't exist.
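A quick numeric illustration of the two-path argument (a sketch evaluating $f$ along $y = x$ and along $y = x^{1/3}$):

```python
def f(x, y):
    return x * y**3 / (x**2 + y**6)

t = 1e-9
along_cubic = f(t, t ** (1/3))   # path y = x^(1/3): value is identically 1/2
along_line  = f(t, t)            # path y = x: value tends to 0

print(along_cubic)   # ≈ 0.5
print(along_line)    # ≈ 1e-18, essentially 0: two paths, two limits
```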
| common-pile/stackexchange_filtered |
$pull is not working for array in mongoose (MongoDB)
In mongoDB (Through mongoose), I am trying to remove an element in array of a collection and using $pull operator for that, but it is not working.
Rooms collection in mongoDB
{
"_id":"sampleObjectId",
"users"<EMAIL_ADDRESS><EMAIL_ADDRESS>}
Backend code to remove a user (mongoose using Node.js)
Rooms.update(
{ _id: "sampleRoomId" },
{
$pull: {
users<EMAIL_ADDRESS>}
},
}
)
Nothing happens to the collection on running this code; no change occurs. The code runs with no errors, and the output says "nModified": 0.
I have no idea how to remove a user from the collection.
https://mongoplayground.net/p/f58HFvHkJcC this is working as you expected
There is a typo in your update document<EMAIL_ADDRESS>instead of<EMAIL_ADDRESS>
@varman please look into this code link ,it is not working
Thanks for pointing it out, but I have just given a sample of my real code (the code given is not the real one); the typo is not the real problem @MontgomeryWatts. Please look into this link
_id is an ObjectId but you're passing the value as a string. Cast it to an ObjectId like so
I tried this as well; it works in Mongo Playground but not in my code.
{
"n": 1,
"nModified": 0,
"opTime": {
"ts": "6880754585644826625",
"t": 4
},
"electionId": "7fffffff0000000000000004",
"ok": 1,
"$clusterTime": {
"clusterTime": "6880754585644826625",
"signature": {
"hash": "oHGadmcbcv4n/jRd6B9vvPYm/2s=",
"keyId": "6877425517838991363"
}
},
"operationTime": "6880754585644826625"
}
This is my output
@Rohitkumar https://mongoplayground.net/p/EtGGLxbeo3J is working, But ObjectId s in data, but you pass String in parameter. I just did with String and working
@varman you are using string in _id, that is why it is working with string. You can check this link. It is not working here.
I think there is some error in my code, mongoDB is working fine. I will figure this out. Thank you all for helping me
// actual output from the mongo shell. This is working as expected. Check if you gave the correct value in the _id query parameter.
> db.rooms.find();
{ "_id" : "sampleObjectId", "users" : [<EMAIL_ADDRESS><EMAIL_ADDRESS>] }
> db.rooms.update(
... {_id: "sampleObjectId"},
<EMAIL_ADDRESS>... );
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
> db.rooms.find();
{ "_id" : "sampleObjectId", "users" : [<EMAIL_ADDRESS>] }
> db.version();
4.2.6
>
| common-pile/stackexchange_filtered |
Ethernet adapter with static IP configuration not accessible on start up
My home workstation motherboard has two Ethernet adapters. Both ports connect to the same local network and Internet router. The only difference is one is set as DHCP while the other has a static IP configuration.
In recent weeks (I don't really know when it began to be a problem), the adapter with the static IP address isn't able to configure itself properly on system start up (it gets stuck with one of those automatic IPv4 addresses with a /16 subnet mask). I just have to manually disable and enable the adapter before it can complete its IP configuration and recognise the network it resides on; no configuration values change.
What factors can cause the adapter to stall its static configuration?
On fresh start up
Ethernet adapter Top Ethernet:
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : Realtek PCI GBE Family Controller
Physical Address. . . . . . . . . : 90-E6-BA-1E-70-D8
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
Link-local IPv6 Address . . . . . : fe80::b024:6058:b813:c33e%9(Preferred)
Autoconfiguration IPv4 Address. . : <IP_ADDRESS>(Preferred)
Subnet Mask . . . . . . . . . . . : <IP_ADDRESS>
Default Gateway . . . . . . . . . : <IP_ADDRESS>
DHCPv6 IAID . . . . . . . . . . . : 361817786
DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-18-37-4F-6B-90-E6-BA-1E-70-D8
DNS Servers . . . . . . . . . . . : <IP_ADDRESS>
<IP_ADDRESS>
NetBIOS over Tcpip. . . . . . . . : Enabled
After disabling/enabling
Ethernet adapter Top Ethernet:
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : Realtek PCI GBE Family Controller
Physical Address. . . . . . . . . : 90-E6-BA-1E-70-D8
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
Link-local IPv6 Address . . . . . : fe80::b024:6058:b813:c33e%9(Preferred)
IPv4 Address. . . . . . . . . . . : <IP_ADDRESS>(Preferred)
Subnet Mask . . . . . . . . . . . : <IP_ADDRESS>
Default Gateway . . . . . . . . . : <IP_ADDRESS>
DHCPv6 IAID . . . . . . . . . . . : 361817786
DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-18-37-4F-6B-90-E6-BA-1E-70-D8
DNS Servers . . . . . . . . . . . : <IP_ADDRESS>
<IP_ADDRESS>
NetBIOS over Tcpip. . . . . . . . : Enabled
Generally you only get the automatic IP when DHCP fails to get you a new lease. If it's actually set static, it shouldn't change. A couple of things to try: have you tried setting it to a different IP address to see if it behaves any differently? Does it behave the same way in Safe Mode with Networking?
Perhaps edit your question and post the result of an ipconfig for the network connection in question, so we can see how it's currently set.
I have added in the ipconfig to show the state it gets itself into.
I do not know why, but if both adapters are manually configured with static IP addresses, they get their settings on system start up and are able to sense the network they're on, with Internet connectivity.
| common-pile/stackexchange_filtered |
Template template argument access
Let's say I have two classes. The first one is a simple template class Point<N, T> and the other one is Function<Point<N, T>>. Is it possible, inside class Function, to access the type T and the int N?
Here is my Point which I think it's OK
template<int N, class T>
class Point {
public:
Point() {
std::fill(std::begin(data), std::end(data), T(0));
}
Point(const std::initializer_list<T> &init) {
std::copy(init.begin(), init.end(), std::begin(data));
}
public: // just for easier testing, otherwise protected/private
T data[N];
};
and now the Function implementation that I think has some issues
template<template<int, typename> class P, int N, typename T>
class Function {
public:
T operator()(const P<N, T> &pt) {
for (int i = 0; i < N; i++) {
// do something and expect loop unrolling
}
return T(0); // for example's sake
}
T operator()(const P<N, T> &pt1, const P<N, T> &pt2) {
// force pt1 and pt2 to have the same `N` and `T`
return T(0); // for example's sake
}
};
And here is how I imagine I'd use my classes. Maybe I'm thinking too much java-like :)
typedef Point<3, float> Point3f;
typedef Point<4, float> Point4f;
Point3f pt3f({ 1.0, 2.0, 3.0 }); // Point<3, float>
Point4f pt4f({ 1.0, 2.0, 3.0, 4.0 }); // Point<4, float>
Function<Point3f> f3; // Function<Point<3, float>> f3;
float val = f3(pt3f); // no error
float val = f3(pt3f, pt3f); // no error
float val = f3(pt4f); // compile error
float val = f3(pt4f, pt3f); // compile error
How can I achieve such behaviour? I keep getting errors like "Point<3, float>" is not a class template or too few arguments for class template "Function"
template<class Point>
class Function;
template<template<int, typename> class P, int N, typename T>
class Function<P<N,T>>
to replace:
template<template<int, typename> class P, int N, typename T>
class Function
solves your syntax problem.
Thank you for your help. It works now. I was afraid this was not possible.
| common-pile/stackexchange_filtered |
HTML intellisense does not work in VS code
I am writing a Flask app and noticed that HTML IntelliSense no longer works. As you can see in the screenshot, the input tag shows no autocomplete options. The form tag also does not work; I type it all manually.
I've tried lots of extensions, but none of them work. The "html.autoClosingTags" setting is set to true.
Could someone help with that?
| common-pile/stackexchange_filtered |
Rails: How ActiveRecord generates next primary key
I am printing the id of the record in an after_save callback method, as shown below. After printing the id, I raise an exception.
def after_save_run
puts "id is #{self.id}"
raise "exception"
end
Above method is generating below output for every save call
id is 1
id is 2
id is 3
Due to the exception in the after_save method, no records are saved in the database and hence my table is empty. But then how does ActiveRecord auto-increment the primary key? How does ActiveRecord know what the last generated id was if there are no records in the table?
Does this answer your question? serial in postgres is being increased even though I added on conflict do nothing
This is because internally, the ID isn't determined by Rails - it's given by a 'sequence' in your RDBMS. This sequence is a running counter that is incremented whenever you pull a new number out of it - whether or not you actually commit that number into your table.
This means that attempting to insert a new row will increment the sequence counter, and since you're raising an exception, that row isn't saved. But that does not change the fact that a number has been pulled out of the sequence and cannot be put back into it.
The sequences are also table agnostic for the most part - you could share the same sequence across tables if you wanted to.
https://www.postgresql.org/docs/current/functions-sequence.html
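The behaviour can be imitated with a toy counter in Python (a conceptual sketch of sequence semantics, not Postgres internals): ids are consumed whether or not the insert commits.

```python
from itertools import count

# Toy stand-in for a database sequence: the counter hands out the next id
# unconditionally, regardless of whether the requesting row ever commits.
sequence = count(1)
committed_rows = []

def try_insert(row, should_fail):
    row_id = next(sequence)          # id is consumed here, unconditionally
    if should_fail:
        return None                  # "exception raised": row never committed
    committed_rows.append((row_id, row))
    return row_id

for i in range(3):
    try_insert(f"row-{i}", should_fail=True)   # three inserts that all fail

first_real_id = try_insert("row-ok", should_fail=False)
print(first_real_id)   # 4: the three failed inserts still consumed ids 1-3
```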
| common-pile/stackexchange_filtered |
Can a StringBuffer be used as a key in a HashMap?
Can a StringBuffer be used as a key in a HashMap?
If so, what is the difference between using a String and a StringBuffer as the key?
What language or library are you referring to? You ought to tag your question with it.
@jB language is java
Can a StringBuffer be used as a key in a HashMap?
No, since StringBuffer overrides neither equals nor hashCode, so it is not suitable as a HashMap key (recall that HashMap relies on those two methods to tell if a given key is present in the map).
Beyond that, StringBuffers are mutable, and you typically want Map keys to be immutable. From Map:
Note: great care must be exercised if mutable objects are used as map keys. The behavior of a map is not specified if the value of an object is changed in a manner that affects equals comparisons while the object is a key in the map. A special case of this prohibition is that it is not permissible for a map to contain itself as a key. While it is permissible for a map to contain itself as a value, extreme caution is advised: the equals and hashCode methods are no longer well defined on such a map.
No, you cannot, unless you want to distinguish between separate buffers instead of their contents. The StringBuffer class does not implement equals or hashCode which means it inherits these methods from Object. The Object implementation of these methods only distinguishes between object instances, not their contents.
In other words, if you would have two StringBuffer instances with the same contents, they would not be considered equal. Even weirder, if you would reinsert the same buffer with a different value, it would be considered equal to the previous one.
In general you should take care using mutable values as keys. Mutations will not alter the position in the Map, as the Map instance will not be notified of the change. In this case, since equals is not implemented anyway, this issue will not come up.
Yes, any object can be used as a key in a HashMap, although that may not be a good idea.
Class HashMap
Type Parameters:
K - the type of keys maintained by this map
V - the type of mapped values
From this SO answer:
When you put a key-value pair into the map, the hashmap will look at
the hash code of the key, and store the pair in the bucket of which
the identifier is the hash code of the key. (...) Looking at the above
mechanism, you can also see what requirements are necessary on the
hashCode() and equals() methods of keys (...)
Do notice, however, that StringBuffer does not override the required methods so your "key" will be the object's memory address. From the hashcode() docs:
(This is typically implemented by converting the internal address of
the object into an integer, but this implementation technique is not
required by the JavaTM programming language.)
Meaning its use as a key will be very different from a String's:
Map<String, String> hashA = new HashMap<>();
hashA.put("a", "a");
System.out.println(hashA.get("a")); // prints "a"
Map<StringBuffer, String> hashB = new HashMap<>();
StringBuffer buffer = new StringBuffer("a");
hashB.put(buffer, "a");
System.out.println(hashB.get(new StringBuffer("a"))); // prints null
System.out.println(hashB.get(buffer)); // prints "a"
All classes in Java can be used as hash keys, because all of them inherit the method hashCode from Object. Although there are some cases in which, though it might compile fine, it would be quite weird, like Connection or Streams... or StringBuffer. This is why:
The main difference between String and StringBuffer is that a String is immutable by design, and it contains a proper implementation of hashCode. StringBuffers, instead, may change, and because of this, the class does not have a proper implementation of hashCode: it does not override the default implementation inherited from Object. Now you can see the consequences: a StringBuffer's hash is neither of high quality nor coherent with its contents, which damages the result of the hashing algorithm.
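The same pitfall can be illustrated outside Java. In this Python sketch (my analogy, not the original code), a class that keeps object's default identity-based equality and hashing behaves just like StringBuffer as a map key:

```python
class Buffer:
    """Mutable buffer with no __eq__/__hash__, like Java's StringBuffer:
    it inherits identity-based equality and hashing from object."""
    def __init__(self, text):
        self.text = text

a, b = Buffer("a"), Buffer("a")
d = {a: 1}
print(b in d)   # False: same contents, but a different instance
print(a in d)   # True: only the exact same object is found
```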
| common-pile/stackexchange_filtered |
C - manually parsing char* into integer
I'm trying to parse a char* into an int without using atoi(). I walk through the string, check if it's a valid digit, then add that digit to my integer by multiplying by 10 and adding the digit. I am not accepting negative integers.
Here's the code I'm using:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define TRUE 1
#define FALSE 0
int main(int argc, char *argv[]) {
if (argc < 2)
exit(1);
char *p = argv[1];
int amount = 0;
int len = strlen((const char*) argv[1]); // calculate len beforehand
int contin = TRUE;
for (int i = 0; contin && i < len; ++i, ++p) {
switch (*p) {
case '0':
amount = (amount * 10);
break;
case '1':
amount = (amount * 10) + 1;
break;
case '2':
amount = (amount * 10) + 2;
break;
case '3':
amount = (amount * 10) + 3;
break;
case '4':
amount = (amount * 10) + 4;
break;
case '5':
amount = (amount * 10) + 5;
break;
case '6':
amount = (amount * 10) + 6;
break;
case '7':
amount = (amount * 10) + 7;
break;
case '8':
amount = (amount * 10) + 8;
break;
case '9':
amount = (amount * 10) + 9;
break;
default:
contin = FALSE;
break;
}
}
fprintf(stdout, "amount: %i\n", amount);
return 0;
}
...which works nicely. But, is there a better/more idiomatic way to do this?
EDIT: Thanks to Groo, I'm able to remove the giant switch statement:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define TRUE 1
#define FALSE 0
int main(int argc, char *argv[]) {
if (argc < 2)
exit(1);
char *p = argv[1];
int amount = 0;
int len = strlen((const char*) argv[1]); // calculate len beforehand
int contin = TRUE;
for (int i = 0; contin && i < len; ++i, ++p) {
/* handle negative integers */
if (*p == '-' && i == 0) {
fprintf(stderr, "negative integers are invalid.\n");
exit(1);
}
if (*p > 0x2F && *p < 0x3A)
amount = (amount * 10) + (*p - '0');
else {
contin = FALSE;
--p;
}
}
fprintf(stdout, "amount: %i\n", amount);
return 0;
}
I wouldn't use 0x2F and 0x3A because most people don't know ASCII by heart and because your intent is to compare then with the digits zero and nine. Use '0' and '9' -- it's easier to understand and better reflects the intent of the code.
Also note that every answer so far works because the integer value of numbers 0 to 9 are guaranteed by the C standard to be consecutive. There's no such guarantee for other characters, so you can not reliably do the same thing with hexadecimal digits.
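As an aside, the same multiply-by-ten technique, relying on that consecutive-digit guarantee, can be sketched in a few lines of Python (an illustration of the algorithm, not a replacement for the C code):

```python
def parse_uint(s):
    """Digit-by-digit parse, same idea as the C loop above: fold each
    digit in by multiplying the accumulator by 10 and adding it.
    A sketch: no overflow handling, non-negative decimal only."""
    amount = 0
    for ch in s:
        if not ('0' <= ch <= '9'):
            break                            # stop at the first non-digit
        amount = amount * 10 + (ord(ch) - ord('0'))
    return amount

print(parse_uint("1234"))   # 1234
print(parse_uint("56x7"))   # 56: parsing stops at 'x'
```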
Since command line arguments are null-terminated, we can avoid strlen() entirely by just running through the string until we hit '\0'; this also gets rid of contin and i
(*p > 0x2F && *p < 0x3A) isn't very obvious, I'd just use '0' and '9' to make it easier to understand the intention and to be consistent with the rest of the code
You could use EXIT_SUCCESS and EXIT_FAILURE instead of 0 and 1 respectively
For consistency, I'd replace your call to exit() with a simple return
I'd remove the amount: part of the print; it should be clear from the context (running this program) what the output actually is; this would also make processing the output easier
Putting it all together, we end up with this short piece:
int main(int argc, char *argv[]) {
if (argc < 2)
return EXIT_FAILURE;
char *p = argv[1];
int amount = 0;
while (*p) {
if (*p < '0')
return EXIT_FAILURE;
if (*p > '9')
return EXIT_FAILURE;
amount = (amount * 10) + (*p++ - '0');
}
fprintf(stdout, "%i\n", amount);
return EXIT_SUCCESS;
}
Note, however, that this doesn't account for:
negative numbers (starting with a -)
positive numbers starting with a +
any leading whitespace
Furthermore:
If the input starts with a number and has a letter following sometimes later, it will break out and print the number it has found until then; but if the first character found is a letter, the program prints 0 and reports success, which could be by design, but could also be considered a bug
The use of *p++ can be a bit dangerous if this was code shared with less experienced programmers, as simply changing it to ++*p would increase the value pointed to instead of the pointer itself; changing it to *(p++) might help
While this comes down to taste, I would argue to always use curly braces, even for one-liners: otherwise, adding another line later would break the code; also consistency is a plus
Adding support for leading whitespace is easy and also supports my previous point, as missing brackets would break the program:
if (*p == ' ') {
p++;
continue;
}
If you don't want to implement support for negative numbers, it might be reasonable to change the type of amount to unsigned or - even better - unsigned long long, which would give you a possible range of [0, +18446744073709551615]
Naive handling of negative integers would be quite easy to implement (but note Roland's answer for caveats with this):
int sign = 1;
if (*p == '-') {
sign = -1;
p++;
}
/* loop */
amount *= sign;
yes, the part when the program stops when it encounters an invalid digit is intentional.
Why write EXIT_FAILURE twice in 2 if statements? Why not if (*p < '0' || *p > '9') return EXIT_FAILURE;?
@OliverSchonrock both would be completely fine, as it shouldn't make a difference in performance. Personally, I find it easier to read and understand when they are separate, but that's really down to personal preference.
Sure, You're right that the ASM will be identical. I always aim for "least vertical space, while retaining readability". Being able to scan vertically on a finite height Monitor is crucial in understanding code. But you're right it is subjective. If you find || not clear, then sure.
@OliverSchonrock that makes sense. I guess it has become a habit for me to split things into the smallest and simplest possible pieces. Regarding vertical space, you would most likely hate the style that I use myself in these situations (curly brackets for one-liner, each on its own line). Feel free to edit the code to your suggested variant, by the way.
Yeah, the whole "1 line if" discussion is highly subjective. You're right though, I do use one line ifs, especially for this kind of "early return" code where it is very unlikely someone wants to add a second line. Your answer is very good, it doesn't need editing. These are subjective discussion points. Your aversion to || was new to me, but there is nothing wrong with it.
Handling negative integers is not as easy as you think. There is one integer that is a valid negative number but an invalid positive number. For int32_t that is the number -2147483648.
Good point @RolandIllig - and on that note, I've completely left out the whole topic of overflow / handling big numbers. Maybe you can elaborate on that aspect some more in your answer.
As a learning example, printing the result is perfectly acceptable, but to be usable, the program should be a function returning the parsed int.
What should the output be if I enter 12345678901234567890? That's producing an integer overflow right now.
uh, I'm new, but shouldn't this be a comment? Just asking :)
correct, when I put this into my application I'll add code that will handle an overflow.
Granted, this answer is very short. It reveals a bug in your code though and thus might be considered a review. It's only meant as an addition to the other answer, though.
In such cases we can store the answer by taking the number modulo a prime, e.g. $10^9 + 7$.
@strikersps, doesn't that just change program from Undefined Behaviour to one that produces a wrong answer? It's hard to see how that would be an improvement. You need a better strategy for dealing with overflow - I suggest it's better to return EXIT_FAILURE, since a number that's too large is the same category as one that's too small.
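Following up on that suggestion: the overflow check can be folded into the digit loop by testing before the multiply-add. This is only a sketch; the parse_uint name and the 0/1 return convention are my own, not taken from any answer above:

```c
#include <limits.h>

/* Hypothetical helper: parse a non-negative decimal integer, returning
 * 1 on success and 0 on empty input, a non-digit, or overflow. */
int parse_uint(const char *p, int *out) {
    int amount = 0;
    if (*p == '\0')
        return 0;                          /* empty string is invalid */
    for (; *p; ++p) {
        if (*p < '0' || *p > '9')
            return 0;                      /* non-digit character */
        int digit = *p - '0';
        if (amount > (INT_MAX - digit) / 10)
            return 0;                      /* amount * 10 + digit would overflow */
        amount = amount * 10 + digit;
    }
    *out = amount;
    return 1;
}
```

On failure the caller can simply return EXIT_FAILURE, which treats a too-large number the same way as any other invalid input.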
SVG generated with Angular ng-repeat is not rendered
I'm trying to generate some SVG elements through Angular ng-repeat.
I created a directive that should just import the template.
This is the code with the ng-repeat:
<svg width="600" height="600" style="border:1px solid #000000;" >
<text x="10" y="10">Project Title</text>
<c-object-draggable ng-repeat="cObject in cObjects" ng-controller="cObjectCtrl">
</c-object-draggable>
</svg>
This is what I have in the template that I'm loading via the c-object-draggable directive:
<g transform="matrix(1,0,0,1,100,100)">
<text x="1" y="1">ciao </text>
<circle cx="1" cy="1" r="20"></circle>
</g>
The result in the browser looks good; I can copy it into another file and it will be correctly rendered.
<svg width="600" height="600" style="border:1px solid #000000;">
<text x="10" y="10">Project Title</text>
<!-- ngRepeat: cObject in cObjects -->
<g transform="matrix(1,0,0,1,100,100)" ng-repeat="cObject in cObjects" ng-controller="cObjectCtrl">
<text x="1" y="1">ciao </text>
<circle cx="1" cy="1" r="20"></circle>
</g>
<!-- end ngRepeat: cObject in cObjects -->
<g transform="matrix(1,0,0,1,100,100)" ng-repeat="cObject in cObjects" ng-controller="cObjectCtrl">
<text x="1" y="1">ciao </text>
<circle cx="1" cy="1" r="20"></circle>
</g>
<!-- end ngRepeat: cObject in cObjects -->
<g transform="matrix(1,0,0,1,100,100)" ng-repeat="cObject in cObjects" ng-controller="cObjectCtrl">
<text x="1" y="1">ciao </text>
<circle cx="1" cy="1" r="20"></circle>
</g>
<!-- end ngRepeat: cObject in cObjects -->
</svg>
The first text element, generated outside the ng-repeat of my directive, is correctly displayed. But all the rest is invisible.
If I add a new element by modifying the HTML in the browser, everything gets rendered.
Any suggestion to solve this issue?
Under the hood, AngularJS uses jQuery or jqLite to create the elements that templates are replaced with.
JQuery and JQLite both use document.createElement rather than document.createElementNS with the correct SVG namespace.
In your directive you need to take over the creation of SVG elements from AngularJS.
You can inject the following helper function into your directive:
.value('createSVGNode', function(name, element, settings) {
var namespace = 'http://www.w3.org/2000/svg';
var node = document.createElementNS(namespace, name);
for (var attribute in settings) {
var value = settings[attribute];
if (value !== null && !attribute.match(/\$/) && (typeof value !== 'string' || value !== '')) {
node.setAttribute(attribute, value);
}
}
return node;
})
And make use of it in the link function rather than using a template (either external or inline) - something like:
link: function(scope, element, attrs) {
var cx = '{{x}}';
var cy = '{{y}}';
var r = '{{r}}';
var circle = createSVGNode('circle', element, attrs);
angular.element(circle).attr('ng-attr-cx', cx);
angular.element(circle).attr('ng-attr-cy', cy);
angular.element(circle).attr('ng-attr-r', r);
element.replaceWith(circle);
$compile(circle)(scope);
}
You can see an example of this working - in a piechart context - over at https://github.com/mjgodfrey83/angular-piechart/.
A fix landed in angular 1.3.0-beta8 to allow non html directive template types to be specified - see here. For an example of it being used check out angular-charts.
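With that fix (AngularJS 1.3 and later), the directive can simply declare templateNamespace: 'svg' so the compiler creates the template's nodes in the SVG namespace. A sketch of the definition object; the factory name and templateUrl below are illustrative, not from the question:

```javascript
// Sketch only: with AngularJS >= 1.3, templateNamespace: 'svg' tells $compile
// to create the template's elements with document.createElementNS in the SVG
// namespace, so they actually render inside the <svg> element.
function cObjectDraggableDirective() {
  return {
    restrict: 'E',
    replace: true,
    templateNamespace: 'svg',
    templateUrl: 'c-object-draggable.html' // the <g>...</g> template shown above
  };
}
```

It would be registered as usual, e.g. module.directive('cObjectDraggable', cObjectDraggableDirective).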
Hope that helps.
Makefile to generate objects to a different directory
I have source code in the "src/" folder, and want to write a Makefile that generates the object files in the "lib/" folder. Following is the code I tried, but it did not work:
DIRSRC=src/
DIRLIB=lib/
SRC=a.cc b.cc c.cc
OBJ=$(SRC:.cc=.o)
$(OBJ): $(DIRLIB)%.o: $(DIRSRC)%.cc
$(CC) -c $< -o $@
Apparently the error comes from the last pattern rule. I know there is a simple solution, but I'm not sure what it is.
You want to use something like:
DIRSRC=src/
DIRLIB=lib/
SRC=a.cc b.cc c.cc
# Add DIRLIB to the beginning of each entry in OBJ so that they match the static pattern rule target-pattern.
OBJ=$(addprefix $(DIRLIB),$(SRC:.cc=.o))
# Give make a default target that builds all your object files.
all: $(OBJ)
$(OBJ): $(DIRLIB)%.o: $(DIRSRC)%.cc
$(CC) -c $< -o $@
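One assumption in the rule above is that the lib/ directory already exists. If it might not, a common refinement (shown here as a sketch) is an order-only prerequisite, written after the |, which creates the directory without triggering unnecessary rebuilds:

```make
$(OBJ): $(DIRLIB)%.o: $(DIRSRC)%.cc | $(DIRLIB)
	$(CC) -c $< -o $@

$(DIRLIB):
	mkdir -p $(DIRLIB)
```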
Keep gradle daemons fresh & alive
I have a custom CI runner which builds our Gradle projects. The issue we are facing is that the Gradle daemons get stopped very quickly, so each job takes a very long time just because of starting Gradle daemons:
Starting a Gradle Daemon, 3 busy and 32 stopped Daemons could not be reused, use --status for details
Is there a way to keep some number of daemons fresh and ready? Eg. to have 20 daemons always ready?
According to official document, I quote:
Gradle Daemon: a long-lived background process that executes your builds much more quickly than would otherwise be the case. We accomplish this by avoiding the expensive bootstrapping process as well as leveraging caching, by keeping data about your project in memory. Running Gradle builds with the Daemon is no different than without. Simply configure whether you want to use it or not — everything else is handled transparently by Gradle.
It means that if you see this
Starting a Gradle Daemon, 3 busy and 32 stopped Daemons could not be reused, use --status for details.
The 32 stopped daemons are still present from previous builds, and there may be plenty of reasons for that. We already have this and this on Stack Overflow.
But about your issue: I guess your CI runner is just inside a temporary Docker container, and every time the container is disposed, all data in memory is disposed too. So why do you need the Gradle daemon?
The best solution is to disable the Gradle daemon: put the line org.gradle.daemon=false inside the gradle.properties of your project. It must be at the root of your project; if it's not there, create one.
This remark has already been discussed in this, this, this
I am using long-lived CI runners, that is why I want to keep the daemon fresh and not disable it. I have already seen the linked documents and questions, however this does not help.
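For long-lived runners, one setting worth checking is the daemon idle timeout: by default Gradle stops daemons after three hours of inactivity, and memory pressure can stop them sooner. The timeout can be raised in gradle.properties (the value is in milliseconds); the 12-hour figure below is just an illustration:

```properties
# gradle.properties (project root or ~/.gradle): keep idle daemons alive for 12 hours
org.gradle.daemon.idletimeout=43200000
```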
Operator '+' cannot be applied to operands of type MvcHtmlString
I am converting an ASP.NET MVC application to ASP.NET MVC 2 on .NET 4.0, and get this error:
Operator '+' cannot be applied to operands of type 'System.Web.Mvc.MvcHtmlString' and 'System.Web.Mvc.MvcHtmlString'
HTML = Html.InputExtensions.TextBox(helper, name, value, htmlAttributes)
+ Html.ValidationExtensions.ValidationMessage(helper, name, "*");
How can this be remedied?
You can't concatenate instances of MvcHtmlString. You will either need to convert them to normal strings (via .ToString()) or do it another way.
You could write an extension method as well, see this answer for an example: How to concatenate several MvcHtmlString instances
I am new to MVC, here is the code:
public static string ExTextBox(this HtmlHelper helper, string name, object value, bool readOnly, object htmlAttributes)
{
string HTML = "";
//if (readOnly) HTML = String.Format("{1}", name, value);
if (readOnly) HTML = value == null ? "" : value.ToString();
else HTML = System.Web.Mvc.Html.InputExtensions.TextBox(helper, name, value, htmlAttributes) + System.Web.Mvc.Html.ValidationExtensions.ValidationMessage(helper, name, "*");
return HTML;
}
@hnabih: If this is the correct answer, don't forget to mark it as such :)
Personally I use a very slim utility method, that takes advantage of the existing Concat() method in the string class:
public static MvcHtmlString Concat(params object[] args)
{
return new MvcHtmlString(string.Concat(args));
}
@Anders method as an extension method. The nice thing about this is you can append several MvcHtmlStrings together along with other values (eg normal strings, ints etc) as ToString is called on each object automatically by the system.
/// <summary>
/// Concatenates MvcHtmlStrings together
/// </summary>
public static MvcHtmlString Append(this MvcHtmlString first, params object[] args) {
return new MvcHtmlString(string.Concat(args));
}
Example call:
MvcHtmlString result = new MvcHtmlString("");
MvcHtmlString div = new MvcHtmlString("<div>");
MvcHtmlString closediv = new MvcHtmlString("</div>");
result = result.Append(div, "bob the fish", closediv);
result = result.Append(div, "bob the fish", closediv);
It would be much nicer if we could overload operator+
It appears that your Append method discards the first parameter.
Indeed. You should use return new MvcHtmlString(first + string.Concat(args)).
Here is my way:
// MvcTools.ExtensionMethods.MvcHtmlStringExtensions.Concat
public static MvcHtmlString Concat(params MvcHtmlString[] strings)
{
System.Text.StringBuilder sb = new System.Text.StringBuilder();
foreach (MvcHtmlString thisMvcHtmlString in strings)
{
if (thisMvcHtmlString != null)
sb.Append(thisMvcHtmlString.ToString());
} // Next thisMvcHtmlString
MvcHtmlString res = MvcHtmlString.Create(sb.ToString());
sb.Clear();
sb = null;
return res;
} // End Function Concat
public static MvcHtmlString Concat(params MvcHtmlString[] value)
{
StringBuilder sb = new StringBuilder();
foreach (MvcHtmlString v in value) if (v != null) sb.Append(v.ToString());
return MvcHtmlString.Create(sb.ToString());
}
Working off of @mike nelson's example this was the solution that worked best for me:
No need for extra helper methods. Within your razor file do something like:
@foreach (var item in Model)
{
MvcHtmlString num = new MvcHtmlString(item.Number + "-" + item.SubNumber);
<p>@num</p>
}
Reference request: correlation and spectral analysis of stochastic processes
I'm wondering if anyone knows of a reasonably rigorous text on stochastic processes that discusses specifically things like the autocorrelation, spectral density, and other "correlation and spectral" properites of stochastic processes. It seems like many rigorous mathematical probability & stochastic books (e.g. Karatzas & Shreve) don't seem to employ these tools whatsoever, while engineering texts (like Papoulis) seem to favor them heavily. Is there any reason for this difference, and do any interesting mathematical texts on stochastic processes make use of these tools?
You may want to look at Ash and Gardner's "Topics in Stochastic Processes." These topics are covered rigorously, though perhaps not in the level of generality you might want for all applications.
Grimmett and Stirzacker has some material. I don't know whether it's what you wanted.
How to disable new Angular project from automatically adding files to git
I want my IntelliJ IDEA not to add all files to the Version Control System (VCS), which happens when a new Angular project is created; I want to add them manually later.
Settings > Version Control > Confirmation > When files are created > Do not add
This setting is not working, the same configuration is in
File > Other Settings > Settings for New Project > Version Control > Confirmation > When files are created > Do not add
As far as I know, IntelliJ IDEA, just like Visual Studio Code or WebStorm, automatically creates a git repository when creating a new project.
Probably the only way to prevent this is using this command via terminal:
ng new projectName --skipGit=true
Doc: https://angular.io/cli/new
Silverlight play,pause and resume events
I am using the Silverlight player in our project. It is working well. These days I was looking for Silverlight events because I want to send some information to Google Analytics on the player's pause, resume and play events. I checked lots of articles; unfortunately there is no information about these events. All the articles give the same example and the same events (onError and onLoad). How can I add play, pause and resume events in JavaScript?
Please check below javascript code.
Silverlight.createObject("XXXXXXXXX", player.get(0), "xxPlayer",
    { width: playerWidth + "", height: playerHeight + "", background: "black", version: "4.0.60310.0", enableHtmlAccess: true },
    { onError: onSilverlightError, onLoad: null },
    extra, "context");
There is a general purpose CurrentStateChanged event which returns the current state of the media (playing, stopped, etc.)
MediaElement events
To access the events you need something like this:
player = sender.findName("ObjectName");
var stateChangedToken = player.addEventListener("CurrentStateChanged", onCurrentStateChanged);
Then you can fill out the onCurrentStateChanged JavaScript to do what you want.
Handling Silverlight events in JavaScript (MSDN)
Cannot pass model instance variable to mailer
I have a model that sends out invitations via email by parsing a CSV file for names and emails. I have a before_create that creates a URL and saves it as an instance variable. After the record is created, it is supposed to send the results to the mailer, along with the instance variable holding the URL. It seems the URL is not being passed to the mailer: the emails are sent successfully, but without the URL. Below are the relevant lines of code. I can confirm that the invite_token is being created, so that is not an issue.
Note: I am using the SmarterCSV gem to parse the CSV and the delayed_job gem to run a background process.
Let me explain the process:
Controller (not shown) receives the CSV and sends it to Invitation.import. The file is parsed and before the record is created, an invite token is created, then a URL is built. The email is then sent.
Thanks!
Controller: invitations_controller.rb
class InvitationsController < ApplicationController
before_filter :get_event #, only: [:create]
def new
@invitation = @event.invitations.new
end
def create
@invitation = @event.invitations.build(params[:invitation])
@invitation.event_id = params[:event_id]
if @invitation.save
flash[:success] = "Invitation confirmed!"
render 'static_pages/home'
else
render 'new'
end
end
def import_csv
@invitation = @event.invitations.new
end
def import
Invitation.import(params[:file], params[:event_id])
flash[:success] = "Invitations sent!"
redirect_to @event
end
private
def get_event
@event = Event.find(params[:event_id])
end
end
Model: Invitation.rb
class Invitation < ActiveRecord::Base
before_save { |user| user.email = user.email.downcase }
before_create :create_invite_token
before_create :build_url
@@url = "" #added 9/8/13
def self.import(file, id)
file_path = file.path.to_s
file_csv = SmarterCSV.process(file_path)
file_csv.each do |x|
x[:event_id] = id
Invitation.delay.create! x
UserMailer.delay.invitation_email(x, @@url)
end
end
def build_url
@@url = 'http://localhost:3000/confirmation/' + self.invite_token
end
private
def create_invite_token
self.invite_token = SecureRandom.urlsafe_base64
end
end
Mailer: user_mailer.rb
def invitation_email(invitation, signup_url)
@invitation = invitation
@signup_url = signup_url
mail(:to => invitation[:email], :subject => "You're invited!")
end
Invitation email:
<!DOCTYPE html>
<html>
<head>
<meta content="text/html; charset=UTF-8" http-equiv="Content-Type" />
</head>
<body>
<h2>Hi <%= @invitation[:name].split.first %>,</h2>
<p>
Click here: <%= @signup_url %>
</p>
</body>
</html>
Can you please post your controller?
@jvperrin I've added the controller. Hope it helps! Thanks.
Are the urls different for each Invitation? If they are, I would recommend creating a migration that adds a url column to the invitations table. Otherwise, I think changing the instance variable to a class variable (@@url) would work well. I think the issue that is occurring here is that the instance variable @url is not available in the self.import method because it is only defined for an instance of the Invitation class, not for the class itself.
@jvperrin I added the class variable (@@url) and unless I added it incorrectly, I am still unable to pass the invite token to the mailer.
Since the class variable is not working, I would try adding a migration that adds a url column for invitations, since I would imagine each invitation has a different url. Then you should be able to just call self.url. Hopefully that works better.
I solved this using a different approach. I created a method within the Invitation model that sends the record's information and URL to the mailer, and added an after_create callback so that the email is sent after the record is created. See below for the changes to the model.
class Invitation < ActiveRecord::Base
before_save { |user| user.email = user.email.downcase }
before_create :create_invite_token
before_create :build_url
after_create :send_invitation
def self.import(file, id)
file_path = file.path.to_s
file_csv = SmarterCSV.process(file_path)
file_csv.each do |x|
x[:event_id] = id
Invitation.delay.create! x # the after_create callback now sends the email
end
end
def build_url
@url = 'http://localhost:3000/confirmation/' + self.invite_token
end
def send_invitation
if self.accepted == false
UserMailer.delay.invitation_email(self, @url)
end
end
private
def create_invite_token
self.invite_token = SecureRandom.urlsafe_base64
end
end
Solving Dijkstra's algorithm - passing costs / parents with two edges
I have a graph like this:
# graph table
graph = {}
graph['start'] = {}
graph['start']['a'] = 5
graph['start']['b'] = 2
graph['a'] = {}
graph['a']['c'] = 4
graph['a']['d'] = 2
graph['b'] = {}
graph['b']['a'] = 8
graph['b']['d'] = 7
graph['c'] = {}
graph['c']['d'] = 6
graph['c']['finish'] = 3
graph['d'] = {}
graph['d']['finish'] = 1
graph['finish'] = {}
And I am trying to find the fastest way from S to F.
In the first example in the book, only one edge was connected to each node. In this example, for example, node D can be reached via three edges with three weights, and a cost table was used:
costs = {}
infinity = float('inf')
costs['a'] = 5
costs['b'] = 2
costs['c'] = 4
costs['d'] = # there is 3 costs to node D, which one to select?
costs['finish'] = infinity
And a parents table:
parents = {}
parents['a'] = 'start' # why not start and b since both `S` and `B` can be `A` nodes parent?
parents['b'] = 'start'
parents['c'] = 'a'
parents['d'] = # node D can have 3 parents
parents['finish'] = None
But this also works, by works I mean no error is thrown, so do I only have to name the parents from the first node S?
parents = {}
parents['a'] = 'start'
parents['b'] = 'start'
parents['finish'] = None
The code:
processed = []
def find_lowest_cost_node(costs):
    lowest_cost = float('inf')
    lowest_cost_node = None
    for node in costs:
        cost = costs[node]
        if cost < lowest_cost and node not in processed:
            lowest_cost = cost
            lowest_cost_node = node
    return lowest_cost_node
node = find_lowest_cost_node(costs)
while node is not None:
    cost = costs[node]
    neighbors = graph[node]
    for n in neighbors.keys():
        new_cost = cost + neighbors[n]
        if costs[n] > new_cost:
            costs[n] = new_cost
            parents[n] = node
    processed.append(node)
    node = find_lowest_cost_node(costs)
def find_path(parents, finish):
    path = []
    node = finish
    while node:
        path.insert(0, node)
        if node in parents:
            node = parents[node]
        else:
            node = None
    return path
path = find_path(parents, 'finish')
distance = costs['finish']
print(f'Path is: {path}')
print(f'Distance from start to finish is: {distance}')
I get:
Path is: ['finish']
Distance from start to finish is: inf
Where is my mistake and how should I add cost and parent to a node which can be visited from more than 1 node?
Edit
I do believe this is not the best approach for this problem, the best in practice solution / recommendations are welcome.
You should not have to fill the cost table, this is being built by the algorithm. Take a look at https://rosettacode.org/wiki/Dijkstra%27s_algorithm#Python, I have tried it with your graph, it delivers the correct result.
You do not need to initialise the cost table with more than costs['start'] = 0 or the parents dictionary with more than parents = {}. That is what your algorithm is going to create for you!
The only other change you need to make is to your while loop. It just needs to check whether the new node has already been detected before. If so then we check to see whether the new path is shorter and update as required; if not then we establish the new path.
while node is not None:
    cost = costs[node]
    neighbors = graph[node]
    for n in neighbors.keys():
        new_cost = cost + neighbors[n]
        if n in costs:
            if costs[n] > new_cost:
                costs[n] = new_cost
                parents[n] = node
        else:
            costs[n] = new_cost
            parents[n] = node
    processed.append(node)
    node = find_lowest_cost_node(costs)
I think there are much neater ways to deal with graphs but this is the minimal change needed to make your code work as required. Hope it's helpful!
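For reference, one of those neater ways is the standard priority-queue formulation using heapq. This is a sketch of an alternative rather than the book's algorithm, but it produces the same result on the graph from the question:

```python
import heapq

def dijkstra(graph, start, finish):
    """Shortest path over a dict-of-dicts graph like the one in the question."""
    costs = {start: 0}
    parents = {}
    queue = [(0, start)]          # (cost so far, node)
    visited = set()
    while queue:
        cost, node = heapq.heappop(queue)
        if node in visited:
            continue              # stale queue entry for an already-settled node
        visited.add(node)
        if node == finish:
            break
        for neighbor, weight in graph[node].items():
            new_cost = cost + weight
            if neighbor not in costs or new_cost < costs[neighbor]:
                costs[neighbor] = new_cost
                parents[neighbor] = node
                heapq.heappush(queue, (new_cost, neighbor))
    # walk the parents table back from finish to start
    path = [finish]
    while path[-1] != start:
        path.append(parents[path[-1]])
    path.reverse()
    return path, costs[finish]

graph = {
    'start': {'a': 5, 'b': 2},
    'a': {'c': 4, 'd': 2},
    'b': {'a': 8, 'd': 7},
    'c': {'d': 6, 'finish': 3},
    'd': {'finish': 1},
    'finish': {},
}
```

Running dijkstra(graph, 'start', 'finish') yields the path ['start', 'a', 'd', 'finish'] with distance 8, matching the output above.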
So after making the parents and costs only {} I get an error because there is no costs['finish']. If I remove it then I get the same answer.
You do need to include costs['start'] = 0 as well as I said above :P
KeyError: 'finish' if I leave parents = {} only.
Hmm I'm not sure where our code differs then. My full file is here: https://pastebin.com/HBZnuL9a
It outputs:
Path is: ['start', 'a', 'd', 'finish']
Distance from start to finish is: 8
Visual Studio Code and GIT on the MAC OS
Git not detected in the Visual Studio Code on Mac OS.
I see only "There are no active suppliers of management systems", please help.
In bash, git --version shows "git version 2.18.0", and I added "/usr/local/git/bin" to $PATH.
I checked VS Code on Windows 8 with Git and everything works. I noticed a difference: in the User Settings there is a Git section among the extensions, but on macOS there is not.
Did you install Git?
Yes, Git install with Xcode.
I deleted VS Code and set it up again; now everything works.
Make sure to launch vscode from a session where git --version works, meaning your $PATH does reference Git.
If not, check the vscode setting "git.path": you can reference directly the executable git (/full/path/to/bin/git) there.
Command in the bash: "git --version" -> git version 2.15.2 (Apple Git-101.1)
Command in the bash: "which git" -> /usr/bin/git
Command in the bash: $PATH - > /usr/bin/git:/Users/konstantin/go/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/go/bin:/usr/local/share/dotnet:/opt/X11/bin:~/.dotnet/tools:/Library/Frameworks/
Then try the git.path setting, directly within VSCode.
In Visual Studio Code I set this up in User Settings:
"git.enabled": true,
"git.path": "/usr/bin/git",
VS Code shows an error for "git.path": "Unknown configuration setting",
and I don't see Git in the Default User Settings.
Can you make sure you have the plug-in gitlens installed?
Relay on public and private network; recommendations
I have set up a relay with its own dynamic public IP directly on the Internet. The only port allowed on the public interface is the ORPort, and this is the way I want to keep it. However, it would be beneficial to also let this relay be reachable from my private network for management purposes. I would like both SSH and Control Port. The relay should not have Internet access through the private network.
What is the best way to achieve this?
Are there security concerns with this setup?
Thanks!
Is a router an option? Sounds like what you need is the local IP for management.
Recommendation: SSH over Tor. Create an v3 onion service for SSH. This will reduce your attack surface to only allow connections from Tor but not from the internet. Best practice for SSH security is to use a key to log in and not a password. Also, you can have SSH run on 2222 or some random port instead of the standard port 22 to further hide that it is actually out there.
The negative is that SSH over Tor is noticeably laggy. Not to the point of being unusable, but to the point that it can be really irritating if you are expecting near-instantaneous response times.
Using a single-hop hidden service may improve the lag a bit. It has to be noted, however, that this comes at the cost of the service no longer being anonymous (which may be acceptable in this case).
Thanks for the recommendation, but according to Tor itself this is not a good practice. When I try to configure a hidden SSH service on my relay to manage it, Tor throws this warning during startup:
[warn] Tor is currently configured as a relay and a hidden service. That's not very secure: you should probably run your hidden service in a separate Tor process, at least -- see https://trac.torproject.org/8742
Any comments to this?
The ControlPort is not safe to open just by itself, even on a private network, but an stunnel with a strict local CA and certificate verification solves this problem easily. The SSH port can be opened to your private network; for the public network, just make a hidden service for it with a client key to avoid service discovery and the corresponding attack vectors.
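To make that concrete, here is a minimal torrc sketch for a separate, management-only Tor instance (not the relay's own torrc, per the warning quoted above). The directory paths are illustrative; for v3 client authorization, each authorized client's .auth file goes into the authorized_clients subdirectory:

```
HiddenServiceDir /var/lib/tor/ssh_onion/
HiddenServiceVersion 3
HiddenServicePort 22 127.0.0.1:22
# client authorization: place client .auth files in
# /var/lib/tor/ssh_onion/authorized_clients/
```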
OpenGL Sphere Lighting/Transparency Problem
I am drawing a sphere, but when I move the viewpoint it sometimes looks as shown in the attached image. What may be the reason for it?
I thought it may be something related to normals, but could not find a solution.
Image That Shows Sphere:
You have to draw the primitives in sorted order from back to front. Read Transparency
What is the topology of your sphere? The parallel discs hint that there is something fishy in your sphere mesh. Anyway, with transparency you need to depth-sort your faces or fake it with other means; see OpenGL - How to create Order Independent transparency?
Is there an easy way to remove transparency at all?
glDisable(GL_BLEND), but my bet is that your mesh is wrong (wrongly triangulated, connected as discs instead of a surface). You can try this: sphere triangulation. Anyway, without an MCVE (minimal complete verifiable example) we can only guess ...
Pretty late, I'm sorry, but you just need to enable backface culling. glEnable(GL_CULL_FACE) should take care of it.
Append string to an existing gzipfile in Ruby
I am trying to read a gzip file and append part of it (which is a string) to another existing gzip file. The size of the string is ~3000 lines, and I will have to do this many times (~10000) in Ruby. What would be the most efficient way of doing this? The zlib library does not support appending, and using backticks (gzip -c orig_gzip >> gzip.gz) seems to be too slow. The resulting file should be a gigantic text file.
You're going to need to be more clear...
It's not clear whether you want your resulting gzip file to contain one big text file (concatenating each appended string to the text file), or a number of small files representing each appended gzip-file portion.
I would want one big text file
It's not clear what you are looking for. If you are trying to join multiple files into one gzip archive, you can't get there. Per the gzip documentation:
Can gzip compress several files into a single archive?
Not directly. You can first create a tar file then compress it:
for GNU tar: gtar cvzf file.tar.gz filenames
for any tar: tar cvf - filenames | gzip > file.tar.gz
Alternatively, you can use zip, PowerArchiver 6.1, 7-zip or Winzip. The zip format allows random access to any file in the archive, but the tar.gz format usually gives a better compression ratio.
With the number of times you will be adding to the archive, it makes more sense to expand the source, then append the string to a single file, then compress on demand or on a cycle.
You will have a large file but the compression time would be fast.
If you want to accumulate data, not separate files, in a gzip file without expanding it all, it's possible from Ruby to append to an existing gzip file, however you have to specify the "a" ("Append") mode when opening your original .gzip file. Failing to do that causes your original to be overwritten:
require 'zlib'
File.open('main.gz', 'a') do |main_gz_io|
Zlib::GzipWriter.wrap(main_gz_io) do |main_gz|
5.times do
print '.'
main_gz.puts Time.now.to_s
sleep 1
end
end
end
puts 'done'
puts 'viewing output:'
puts '---------------'
puts `gunzip -c main.gz`
Which, when run, outputs:
.....done
viewing output:
---------------
2013-04-10 12:06:34 -0700
2013-04-10 12:06:35 -0700
2013-04-10 12:06:36 -0700
2013-04-10 12:06:37 -0700
2013-04-10 12:06:38 -0700
Run that several times and you'll see the output grow.
Whether this code is fast enough for your needs is hard to say. This example artificially drags its feet to write once a second.
You might want to ask this same question on http://superuser.com as they might be able to give you good insight into how to do it more efficiently.
?? The question was not about compressing several files into an archive. The question was about appending data to a (single) compressed gzip stream. tar and zip have nothing to do with the question.
The question is about appending ~10000 gzip portions to another gzip. The gzip documentation says it doesn't work for compressing multiple files and recommends using tar.gz. I recommended expanding the main zip and appending the strings to it, recompressing as necessary.
You confused the two concepts again. Appending data to a gzip file is not compressing multiple files. It is appending data to a single file. This is exactly the same concept as appending data to an uncompressed file. There are no multiple files involved.
It sounds like your appended data is long enough that it would be efficient enough to simply compress the 3000 lines to a gzip stream and append that to the existing gzip stream. gzip has the property that two valid gzip streams concatenated is also a valid gzip stream, and that gzip stream decompresses to the concatenation of the decompressions of the two original gzip streams.
I don't understand "(gzip -c orig_gzip >> gzip.gz) seems to be too slow". That would be the fastest way. If you don't like the time spent compressing, you can reduce the compression level, e.g. gzip -1.
The zlib library actually supports quite a bit, when the low-level functions are used. You can see advanced examples of gzip appending in the examples/ directory of the zlib distribution. You can look at gzappend.c, which appends more efficiently, in terms of compression, than a simple concatenation, by first decompressing the existing gzip stream and picking up compression where the previous stream left off. gzlog.h and gzlog.c provide an efficient and robust way to append short messages to a gzip stream.
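That concatenation property is easy to verify; here is a quick check using Python's gzip module (just for illustration — the same holds for Ruby's Zlib or the gzip command line):

```python
import gzip

# Two independently compressed gzip streams...
part1 = gzip.compress(b"first chunk\n")
part2 = gzip.compress(b"second chunk\n")

# ...concatenated byte-for-byte form a valid multi-member gzip stream,
# which decompresses to the concatenation of the two originals.
combined = part1 + part2
assert gzip.decompress(combined) == b"first chunk\nsecond chunk\n"
```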
You need to open the gzipped file in binary mode (b) and also in append mode (a); in my case it is a gzipped CSV file.
file = File.open('path-to-file.csv.gz', 'ab')
gz = Zlib::GzipWriter.new(file)
gz.write("new,row,csv\n")
gz.close
If you open the file in w mode, you will overwrite the content of the file. Check the documentation for full description of open modes http://ruby-doc.org/core-2.5.3/IO.html#method-c-new
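For comparison, the same append-versus-overwrite distinction can be sketched in Python (an illustrative sketch, not part of the answer's Ruby code): opening with "ab" adds a new gzip member each time, and reading back decompresses all members in order.

```python
import gzip
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "main.gz")

# Append mode ("ab"): each write adds a new gzip member to the file.
# Opening with "wb" here would truncate and lose the earlier rows.
for row in (b"row one\n", b"row two\n"):
    with open(path, "ab") as raw:
        with gzip.GzipFile(fileobj=raw, mode="wb") as gz:
            gz.write(row)

with gzip.open(path, "rb") as f:
    content = f.read()  # all appended members, decompressed in order
```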
| common-pile/stackexchange_filtered |
Pass an array generated in Google Apps Script to a HTML page and using for basic navigation
I have a simple function who retrieves data from a spreadsheet and passes this array to a modal window.
var ui = SpreadsheetApp.getUi();
var ss = SpreadsheetApp.getActiveSpreadsheet();
var sheet = ss.getSheetByName('sheet1');
var data = sheet.getRange(2,1,sheet.getLastRow()-1,sheet.getLastColumn()).getValues();
const htmlForModal = HtmlService.createTemplateFromFile("detailedView");
htmlForModal.DATA = data;
htmlForModal.rowIndex = 0; // starts with the first array
const htmlOutput = htmlForModal.evaluate();
return ui.showModalDialog(htmlOutput, "title");
Obviously data is an array of arrays.
Inside the HTML I can use DATA values through scriptlets like this:
<p id="title">
<?= DATA[rowIndex][4] ?>
</p>
and it works. The problem is that I need a system to navigate through the array. A simple "NEXT" button to update the paragraph with a new rowIndex (rowIndex+1) each time I click it. I know I can use document.querySelector to change the innerHTML of my paragraph, but how can I change <?= DATA[rowIndex][4] ?> to <?= DATA[rowIndex+1][4] ?> ?
I tried a console.log at the beginning of the HTML to log DATA and it is filled correctly. BUT it appears to be a string of text, not an array. If DATA renders only the first time, is there a method to rebuild the array when loading the HTML and make it available for scripts inside the HTML?
Possibly I would avoid a call to google.script.run. I tried it and it works but each time I click the button it has to call google script, load the row and return it... and it is sooo slow :-( While passing the entire array into the html is fast...
Can you help me? :-)
You should request all the data from the client side once using google.script.run (see docs), save it, and change the page layout using JavaScript when the user changes the page without requesting the data again. See best practices about asynchronously loading vs adding data using templates.
// Code.gs snippet
function showDialog() {
var htmlOutput = HtmlService.createTemplateFromFile('detailedView')
.evaluate();
var ui = SpreadsheetApp.getUi();
return ui.showModalDialog(htmlOutput, 'Dialog title');
}
function getData() {
// Example based on the question code
var ss = SpreadsheetApp.getActiveSpreadsheet();
var sheet = ss.getSheetByName('sheet1');
var data = sheet.getRange(2,1,sheet.getLastRow()-1,sheet.getLastColumn()).getValues();
return data.map(function (value) {
return value[4];
})
}
<!-- detailedView.html snippet -->
<script type="text/javascript">
var data;
function onGetData(d) {
// Save the result in the global variable
data = d;
// the call to load the first page (change it to fit your code)
loadPage(0);
}
google.script.run
.withSuccessHandler(onGetData)
.getData();
</script>
I'd also recommended you to have a loading... message by default, so the user knows that the data is being processed.
References
Documentation for google.script.run
HTML Service: Best Practices
Martì, first of all thank you very much for your answer. I think we are near the solution but I would need more help...
Where do I have to put the script of detailedView? In the head section, before the closing body tag, or does it make no difference?
I suppose google.script.run won't run unless I put it into a function and load this function with an onload event. Is it right?
I don't understand the loadPage command. If it's jquery I'm not using jQuery in my script...
I tried putting the google.script.run into a function and calling it onload. It calls the function but it fails...
Ok, I made some steps ahead :-) I put the script in the head tag and I run it through a function called by body onload. It works :-). But what if I need to map other values than value[4]? If I map an array of arrays (i.e. [value[1],value[2],value[3],value[4]]), how can I retrieve those values (by row and column) in the HTML?
loadPage(0) was a placeholder for your code that shows the user the first page. It's not defined, so it will not work as-is.
As for the map, it's not necessary. Your question was only using the fourth element, and that's why I added the map. You can send the entire data to have it available on the client side.
Ok. But how can I retrieve data (unmapped) values inside HTML? In getData() I simply changed the last part with "return data;". How do I have to change onGetData() to return the array of arrays? If I console.log("data") it says "data is null".
OOOOK, I found the solution: my data array contains a "date" value. And dates are not permitted. Reference here: https://developers.google.com/apps-script/guides/html/reference/run#parameters. Once I transform this value with Utilities.formatDate all works perfectly :-)
Thank you Martì
| common-pile/stackexchange_filtered |
PSRR Analysis in LTSpice
I am performing PSRR analysis in LTSpice from this online tutorial. It shows PSRR analysis of LT 3042.
Here is my LTSpice schematic:
The thing is no matter how much I try, I only get data till 10kHz. What am I doing wrong?
P.S. Let me know, I will attach my .asc file if needed.
EDIT 1:
When I change the .tran stop time to 2 sec, it shows the initial missing data, still the range is stuck at 10kHz.
It would help a lot if you'd supply your netlist/.asc file.
[link](https://mega.nz/file/1BZCBaDC#THrNl-iTadS-ddgNkW1SlxM5FLh2aHVSzONSYJyelP0)
The problem is your .step statement
.step dec param freq 10kHz 80MHz 100
In the SPICE Error Log (see below) you can see that freq starts from 0.08, which is not what you intended. 80MHz is interpreted as 80mHz, i.e. 0.08, so the frequency sweep starts at 0.08Hz and goes up to 10kHz. This is the problem.
To fix this, change the statement to
.step dec param freq 10e3 80e6 100
You can then simulate and see that the sweep starts from freq = 10kHz and goes higher like what you want
thanks for such an elaborate explanation!
e6 works great, but if the questioner is starting to get into SPICE they should really start using meg.
Yes, I do use that now.
In LTSpice, 'M' is not a recognised suffix, you have to use 'meg'.
So your sim statement should say .step dec param freq 10kHz 80meg 6
Correction from @qrk: 'm' gets recognised as e-3, which is why you have to use 'meg'
As a suffix, "m" or "M" is recognized as e-3. "meg" or "MEG" is recognized as e6.
Yes, this worked well.
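The suffix behaviour described above can be mimicked with a small parser to see why the original statement failed. This is a simplified sketch (real SPICE also understands u, n, p, f, t and ignores trailing units such as "Hz"):

```python
import re

# Simplified sketch of SPICE-style suffix parsing (case-insensitive).
# "meg" must be tested before "m", because a bare "m" means milli (1e-3).
_SUFFIXES = [("meg", 1e6), ("g", 1e9), ("k", 1e3), ("m", 1e-3)]

def spice_to_float(text):
    match = re.match(r"\s*([0-9.]+)\s*([a-z]*)", text.lower())
    value, tail = float(match.group(1)), match.group(2)
    for suffix, scale in _SUFFIXES:
        if tail.startswith(suffix):
            return value * scale
    return value

# "80MHz" lowercases to "80mhz", so the "m" suffix wins: 80 milli = 0.08 Hz.
assert abs(spice_to_float("80MHz") - 0.08) < 1e-12
assert spice_to_float("80meg") == 80e6
assert spice_to_float("10kHz") == 10e3
```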
| common-pile/stackexchange_filtered |
Custom shape design for button in android
I'm trying to create a shape for a Button that has one side rounded and the other (right) side sloped, like the attached image, plus a shadow.
I've tried a lot, but I don't understand how to draw it or how to draw a shadow for it.
Thanks in advance.
Maybe, you should think about using canvas? You can draw your button as a sprite, and place it on canvas.
Design the shape whatever you want in any designing tool, export it as SVG. Then, in android studio right click on "drawable" folder -> new vector asset, then choose the exported SVG file from your local disc.
Now you can set the created drawable as the button background.
Here is the information about custom buttons. In a different approach you can use canvas and onClickListener.
Provide context for links
| common-pile/stackexchange_filtered |
Trying to run a buffer-overflow with Python/pwntools
I work on an online program in which I should do a buffer overflow.
When I run the program, I have to complete a sum of two numbers (generated randomly):
>>> 451389913 +<PHONE_NUMBER> =
If I put the right result, I get a "That's okay". Otherwise the console writes "Try it again".
I decompiled the program with Ghidra and got the following code in the main() function:
{
int iVar1;
char local_9 [36];
int local_42;
uint local_6;
uint local_f;
local_f = rand();
local_6 = rand();
local_42 = local_6 + local_f;
printf(">>> %d + %d = ",(ulong)local_f,(ulong)local_6);
fflush(stdout);
fgets(local_9,100,stdin);
iVar1 = atoi(local_9);
if (local_42 == iVar1) {
puts("That's ok");
}
else {
puts("Try it again");
}
return 0;
}
I notice a fgets function that makes me suppose I can do the buffer overflow just before the sum. I also see that the local_9 variable is composed of 36 characters, so I suppose that the payload must begin with 36 characters.
I began to write the following snippet with the pwntools Python library :
import pwn
offset = 36
payload = b'A'*offset + b'[.....]'
c = pwn.remote("URL",Port)
c.sendline(payload)
c.interactive()
The thing is I know I have to write something after the b'A'*offset but I don't really see what to add.. My difficulty is to join that sum of random numbers to the payload.
Is it possible to do it ?
Any ideas would be very appreciated, thanks
The ending of your payload changes depending on some parameters. If you have a "get_flag" function, you just have to put its address. If not, you should try to inject some code somewhere and jump to it. If the executable is NX (pages can't be both writeable and executable) you have to ROP.
Be aware that the offset is something more than 36. It is at least 36 + sizeof(addr($rbp)). You can find out how much by using cyclic: just run cyclic 100 and feed its output to the executable as input. Then see in dmesg the crash $rip and use the command cyclic -l 0x.... It will return a number n. The payload you need will be payload = b"a"*n+b"..."
Thanks for your answer ! Should I use gdb to run the cyclic and dmesg commands ?
Without gdb, you should run them in bash. cyclic should be installed since you have pwntools. dmesg is linux crash log.
Of course, it's possible, pwn is the swiss army knife for CTFs.
# string = c.recv()
# assuming the string you receive is this
string = b">>> 451389913 +<PHONE_NUMBER> ="
# receive expression as bytes
# convert it into utf8 string for convenience
# split() splits the string at whitespaces and store them as array elements
expression = string.decode("utf8").split()
# covert string to int for to get logical sum instead of literal
solution = int(expression[1])+int(expression[3])
c.sendline(str(solution).encode())
However, in a more challenging scene, where the server might ask you multiple answers with division or difference you'd have to change operators as needed.
And I'd rather use operator lib than write a bunch of ifs.
import operator
ops = {
'+' : operator.add,
'-' : operator.sub,
'*' : operator.mul,
'/' : operator.truediv
}
string = b">>> 451389913 +<PHONE_NUMBER> ="
expression = string.decode("utf8").split()
solution = ops[expression[2]](int(expression[1]), int(expression[3]))
c.sendline(str(solution).encode())
| common-pile/stackexchange_filtered |
Placing data into specific bits
I am trying to learn SPARC and trying to create an array of size 4,000 bytes. Inside of this array I need to calculate an offset to place values in the correct location in that array. I think I know how to size the array, (just use .skip?) and I know how to calculate my offset, but can anyone tell me how to place the values into the correct byte?
Thanks everyone.
EDIT: I originally said bits, meant to say bytes.
I tried mov but I know that isn't right. I wasn't really sure how to do it.
Use read-modify-write and the proper bitwise operations (AND to clear a bit, OR to set a bit). If memory is not an issue, you could of course use one byte for each bit too.
Update: sample code illustrating how to clear a bit in the array. Setting a bit is similar, except instead of using andn it would use or.
! clear bit index %o0 in "array"
clrbit:
mov %o0, %o1
srl %o0, 3, %o0 ! byte offset
and %o1, 7, %o1 ! bit offset
set array, %o2 ! array base
add %o2, %o0, %o0 ! byte address
set 1, %o3 ! bit mask
sll %o3, %o1, %o1 ! 1 << bit offset
ldub [%o0], %o3 ! load byte
andn %o3, %o1, %o3 ! mask off bit to clear
stb %o3, [%o0] ! write back
retl
nop
Oh, I see the question has been updated to bytes instead of bits. Well, that's easier. Assuming the index is in %o0 and the data to write is in %o1:
set array, %o2 ! array base address
add %o2, %o0, %o2 ! add byte offset
stb %o1, [%o2] ! write byte
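The bit-clearing routine above can be mirrored in Python to make the offset math explicit; the byte/bit split below corresponds to the srl/and pair in the assembly (this is an illustration, not SPARC code):

```python
def clear_bit(arr, index):
    """Clear bit `index` of a packed bit array (read-modify-write)."""
    byte = index >> 3                  # srl %o0, 3 -> byte offset
    bit = index & 7                    # and %o1, 7 -> bit offset within that byte
    arr[byte] &= ~(1 << bit) & 0xFF    # ldub / andn / stb

def set_bit(arr, index):
    """Same addressing; OR the mask in instead of masking it off."""
    arr[index >> 3] |= 1 << (index & 7)
```

Storing a whole byte is then simply arr[i] = value, matching the stb snippet above.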
Suppose I wanted to clear data in byte 5... how would I go about this? and 5,"\n",5 gives an error...
Your help is much appreciated
| common-pile/stackexchange_filtered |
Where can I find high-resolution, historic weather data for urban areas (like ERA5 but higher spatial resolution)?
Pretty much what the title says. For my work I need historic (at least starting 2010), atmospheric weather data (mostly temperature at 2m, wind velocity and direction, humidity respectively dewpoint, nothing fancy) in hourly frequency and ERA5 delivers that globally and conveniently via free API access.
But I'd like to concentrate on areas with high population density (basically large cities) and I'm a bit afraid that the ~30km grid of ERA5 is too coarse to capture that accurately, especially for cities with a very high population density in a smaller area, as is often the case for old European cities.
Does anyone know a data source that can offer this weather data on a higher resolution grid (10km, 5km, ...?) but with otherwise similar features as the ERA5 data (variables, frequency, access, ...)?
Degrees of Freedom: I'm fine if the coverage would be restricted to urban centers, I'm not interested in rural areas. Equally, if such data would only be available in parts of the world (e.g. only Europe or only North America), that would be acceptable if not ideal.
Restrictions: However, I'm not interested in just a collection of 'local' weather stations because I need to avoid any bias from any uncorrected measurements below blending height. I do not have the capability to do such corrections myself and due to the nature of my work I need a 'synoptic' base for my data. For example wind velocity at 10m should be provided in a similar fashion as by ERA5: with a roughness factor of grass, since I'd like to introduce my own surface parameters instead. (Ref: https://confluence.ecmwf.int/display/FUG/9.3+Surface+Wind)
Possible Workaround: I'm not very well versed in ERA5 data (not a meteorologist by training), so there might be a way to extract this data from the model by accessing sub-grid information in some way?
Currently I'm using code like this to grab my data:
...
cdsclientrequest.retrieve(
'reanalysis-era5-single-levels',
{
'product_type': 'reanalysis',
'format': 'grib',
'variable': climatevars,
'area': bbox.bbox,
'year': year,
'month': month,
'day': times.day,
'time': times.time,
},
filepath)
...
I hope this makes sense, if not please ask! I'm thankful for any tips!
I think the ERA5-Land dataset suits your purpose. See https://www.ecmwf.int/en/era5-land. As the website says, 'The data is a replay of the land component of the ERA5 climate reanalysis with a finer spatial resolution: ~9km grid spacing.'
Right what I was looking for :) Note to anyone using this: the retrieval via API is much slower with era5-land than with era5.
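Adapting the question's cdsapi call to ERA5-Land would then look roughly like this sketch. Everything here is illustrative: 'reanalysis-era5-land' is the CDS dataset identifier, but the variable list and area are example values, and ERA5-Land omits the 'product_type' key used by the single-levels product — check the CDS download form for the exact request shape your account accepts.

```python
# Hypothetical ERA5-Land request (~9 km grid), shaped like the question's snippet.
request = {
    'format': 'grib',
    'variable': ['2m_temperature', '2m_dewpoint_temperature',
                 '10m_u_component_of_wind', '10m_v_component_of_wind'],
    'area': [52.7, 13.1, 52.3, 13.8],   # N, W, S, E -- example box over a city
    'year': '2015',
    'month': '01',
    'day': ['01', '02'],
    'time': ['00:00', '06:00', '12:00', '18:00'],
}

# cdsclientrequest.retrieve('reanalysis-era5-land', request, filepath)
```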
| common-pile/stackexchange_filtered |
How to create a redirect url from a form
A user has a listing. For example, user 1 has listing 2. The current URL will look something like this:
http://localhost:3000/listings/2
I want to let users customize their own backlink for their listings like this:
http://localhost:3000/uniquelistingname
Ideally, I would like to have the listing URL stay like the unique one, but a redirect will do.
I found the friendly URL gem, but I'm not sure if it fully fits my needs. It seems to allow the URL to be a param as opposed to creating a fully customized one, which needs to be unique. What suggestions do you have on how I should go about this?
One option is to create separate model for it e.g named UserListingLink that looks like:
class UserListingLink < ApplicationRecord
belongs_to :user
belongs_to :listing
validates :link, uniqueness: true
end
and just allow users edit it. Then you can redirect your users in routes (put it at the very bottom so other routes like localhost:3000/posts keep working):
routes.rb:
get '/:user_link', to: 'user_listings#show'
and just redirect or show whatever you want in that controller:
class UserListingsController < ApplicationController
def show
listing = UserListingLink.find_by(link: params[:user_link]).listing
redirect_to listing_path(listing)
end
end
So, with this... I would need to create a separate page for the user to create a redirect for the listing? How would the form for this look, and would I need to nest routes?
Yes, if you want it to be editable by users. It can be just default Rails form without any route nesting, it all depends on your use-case.
Forgive me, as I am fairly new to Rails, but is this what I should be doing... creating the controller and models, then adding "user_link" to the listings schema? And then I can simply use the listing form to create the link?
| common-pile/stackexchange_filtered |
Something between "satisfactory" and "good"
I'm looking for an adjective meaning something between "satisfactory" and "good".
For example, let's say we can rate something (restaurant, homework, etc.) and give it 1 to 5 stars, but we can also rate it as 3.5 stars. Which adjective can be used to describe 3.5 stars, i.e. better than satisfactory/average but not good enough to be just good?
I'd say it was not half bad.
You could say it was decent.
How about "above average"?
Other than "not bad" or "pretty good", there are a number of synonyms that mean somewhat less than "good", but something more than "OK":
favorable, positive, satisfying, nice, pleasing, agreeable, commendable, gratifying, decent, fitting
The actual degree of "goodness" can vary with context and intonation.
She thought the meal was rather nice, as the soup was delightful, and the entree sufficiently gratifying; however some of the other courses were merely agreeable.
Note that some rankings (mostly on either extreme of the spectrum) are entirely subjective. I've been in meetings where two developers vehemently disagreed whether "amazing" was better than "wonderful" or vice versa.
Perhaps we can try decent here.
This is defined as: fairly good, or conforming to the recognized standard of propriety, good taste or modesty
In context:
- "So what do you think about the lobster bisque here?"
- "Hmm, not too shabby, it's actually decent."
The problem with this suggestion is that things get a little muddled with your use of the word pretty. I'd have trouble discerning the difference between your review of the bisque ("Hmm, not too shabby, it's actually pretty decent") and one that said, "Hmm, not too shabby, it's actually pretty good." It's tricky to pinpoint or quantify the strength of expressions like pretty good, quite nice, and halfway decent.
Good point, perhaps the use of pretty is unnecessary here. Edited to reflect this change, thank you!
| common-pile/stackexchange_filtered |
Dictionary: How do I minimize space in a hash table and space for unused char array elements?
I am writing a program that reads a text document of 200,000 dictionary words. I will compare it with another document of 2,000,000 words to count the lines that are not in the dictionary. I will only be storing alphabetical characters a-z (26 characters), thus I only need 5 bits to represent each one. The maximum length of a word is 29 characters (30 including the null character). I will also need to consider later on the case where the dictionary only contains words that are less than 7 characters.
The conditions for this system are that it has its own custom allocator that I must use. So whenever I use the "new" (or any other sort of heap memory allocation) to dynamically allocate memory, it will use up AT LEAST 32 bytes of memory each time the new keyword is used. To save memory, I would need to initialize larger sizes of char arrays.
I am also not allowed to read the dictionary words in the file directly. They will be provided to me in an array called dictionaryWords and will be destroyed after I am done constructing my hash table. So pointing to the words given to me is not plausible.
The program must run in less than 1 second.
I.E.
Dictionary:
abc
lkas
ddjjsa
dada
Word Doc:
abc
lkas
weee
dada
jajaja
Wrong Numbers List: 3, 5
It will put numbers 3 and 5 into an array for the line numbers in Word Doc that are not in the dictionary.
I am keeping the 200,000 dictionary words in a Hash Table, so lookup from the word document to the hash table is O(n) for n number of words. The problem is that my hash table needs to rehash with a load factor of 0.5, so it must rehash and double its size whenever it is half full. This leaves me with a wasted 200,000 empty hash entries that take up memory. Since the maximum word length is 30, and a char takes 1 byte, it will cost 30 bytes per word in my hash entry. I am wasting 30 characters * 1 byte * 200,000 empty entries = 6000000 bytes / 6MB of memory space in the heap while the program is running. I want to minimize the space usage as much as possible, but be able to run in O(n) time.
This is my hash entry for my hash table. You can see here, for a table size of 200,000 words, I will need 400,000 of these entries to keep a load factor of 0.5
struct HashElement
{
char element[30];
HashElement(char * word)
{
memset(element, '\0', 30); //Set all bits to NULL for collision checking
if (word != NULL)
{
strcpy(element, word);
}
}
};
If I represented each character with only 5 bits, I would be able to save 3/8's of my wasted space.
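As a sketch of what that 5-bit packing could look like (illustrative Python, not the final C++): store the letter count in a leading byte so unpacking never confuses padding zeros with the letter 'a'.

```python
def pack5(word):
    """Pack an a-z word into 5 bits per letter; first byte holds the length."""
    bits = 0
    for ch in word:
        bits = (bits << 5) | (ord(ch) - ord('a'))
    nbytes = (5 * len(word) + 7) // 8
    return bytes([len(word)]) + bits.to_bytes(nbytes, 'big')

def unpack5(packed):
    """Inverse of pack5; the stored length disambiguates any zero padding."""
    n = packed[0]
    bits = int.from_bytes(packed[1:], 'big')
    letters = []
    for _ in range(n):
        letters.append(chr(ord('a') + (bits & 31)))
        bits >>= 5
    return ''.join(reversed(letters))

# A 29-letter word drops from 30 bytes to 1 + ceil(145/8) = 20 bytes.
assert unpack5(pack5("dictionary")) == "dictionary"
assert len(pack5("a" * 29)) == 20
```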
I have considered the pointer approach for my Hash Entries:
struct HashElement
{
char * element;
HashElement(char * word)
{
element = NULL;
if (e != NULL)
{
element = word;
}
}
};
But using the new keyword will use up AT LEAST 32 bytes on this system. So each time I initialize the hash entry's element, it will cost 32 bytes no matter what the word length is. If I wanted to restrict this problem to only 7 characters per line in the word document, I would be in trouble, because I would only need 8 bytes for each word yet be using 32 bytes.
I was thinking bits* my bad.
Something is very wrong with your understanding of hash tables. When you resize the hash table, you aren't creating more entries in it, just more buckets that can hold zero or more entries.
Since the smallest addressable data unit is 8 bits, I would suggest your only option is to recode the data into 5-bit chunks and then reorganize it into 8-bit chunks. If the strings are short you won't find a huge space saving.
Could you please re-check your question? It seems to me that you are confusing “characters”, “words” and “lines”. The math for 46875 bytes equals 45MB seems also off, plus, it's most certainly heap-allocated memory and not stack space. Those errors are benign enough to still allow the post to get your basic point across but it would be easier to understand what you're asking if the question were a bit cleaner.
6000000 bytes / 6GB. Off by a thousand. Giga is a thousand million; the number you compute is six million, or six MB. Not worth worrying about these days. (And where does it say you need to keep your LF under 0.5?)
Note: in C, one could allocate the entire hashtable as one piece. (even in global and/or static memory, since the size is known in advance). The strings could be allocated as one consecutive string space (including the NULs) , too. So the fundamental problem here seems to be the tight coupling of C++'s constructors and the memory allocator.
You don't need to rehash, since you know the size (200K) in advance, so just allocate a fixed-size table. This will cost you about 8MB of memory.
You're working on a hard problem when there are lots of easy problems you need to solve first. Solving all the easy problems will likely be sufficient, so you don't need to work on the hard problem of compacting bits into fewer bytes.
Why allocate 30 bytes just because you might need up to that many?
Why allocate elements for empty hash buckets?
This seems to be good advice. After those low-hanging fruits are harvested, it seems unlikely that the – very moderate, by today's standards – remaining memory consumption of the program would cause any actual problem. I would of course even start by simply using std::unordered_map<std::string> and see how well it does. Maybe there is no need to optimize this at all.
@5gon12eder Exactly. And on the off chance that doesn't do it, just changing std::string to a compressed string class would get the last step fairly easily.
I edited the main post. Each dynamic allocation will take at the very least 32 bytes. So it costs 32 bytes minimum each time the new keyword is used. As for the empty hash buckets, wouldn't each hash bucket still cost memory if I used a pointer? Since a pointer costs 8 bytes per bucket on a 64 bit system.
And as for std::string, it will still cost 32 bytes per word of any length less than 32 because std::string uses the "new" keyword and resizes when needed in its implementation.
@TommySaechao It depends what allocator you use. You may need a custom allocator, but save the hard problems until you've solved the easy ones. Yes, each hash bucket costs memory, but if it's 8 bytes, then 400,000 buckets is 3MB. You're focused on the wrong problems.
You could also memory map the text document and use pointers into it. But I wouldn't bother because I think if you do the simple stuff, it will be more than enough. Your problem is quite simple and doesn't require an exotic solution.
Edited main post again* I must use the allocator given to me. Also pointing to the words in the file is not an option, since they will be destroyed after constructing my hash table.
@TommySaechao It sound like you have a large number of bizarre, arbitrary constraints. Trying to explain them all will make your question so specific that it will turn into "do my complex job for me".
Here is a simple string compressor class that takes a string of characters between 'a' and 'z' inclusive, compresses each 8-bit representation to 5 bits, and splits the resulting binary back into 7-bit representations. Since each character is still represented by a unique number, the hash of the word should still be as unique as expected:
class StringCompressor
{
public:
static string Compress( string );
static string ToBinary( long long input , int length );
static int ToInt( string input );
static string Decompress( string );
};
string StringCompressor::Compress( string input )
{
stringstream ss;
for ( char c : input )
{
string temp = ToBinary( ( c - 'a' ) , 5 );
ss << temp;
}
ss << string( ( 7 - ( ss.str().length() % 7 ) ) % 7 , '0' ); // pad to a multiple of 7 bits (the extra % 7 avoids appending 7 zeros when already aligned)
string temp = ss.str();
ss.str( "" );
for ( int i = 0; i < temp.length(); i += 7 )
{
string temp2 = temp.substr( i , 7 );
ss << (char)ToInt( temp2 );
}
return ss.str();
}
string StringCompressor::Decompress( string input )
{
stringstream ss;
for ( char c : input )
{
string temp = ToBinary( c , 7 );
ss << temp;
}
string temp = ss.str().substr( 0 , ss.str().length() - ( ss.str().length() % 5 ) );
ss.str( "" );
for ( int i = 0; i < temp.length(); i += 5 )
{
ss << (char)( ( ToInt( temp.substr( i , 5 ) ) ) + 'a' );
}
return ss.str();
}
string StringCompressor::ToBinary( long long input , int length )
{
string output( length , '0' );
for ( int i = length - 1; i >= 0; i-- )
{
long long test = pow( 2.0 , i );
if ( input >= test )
{
output[( length - 1 ) - i] = '1';
input -= test;
}
}
return output;
}
//Take a string representation of a binary number and return the base10 representation of it.
//There's no validation of the string
int StringCompressor::ToInt( string input )
{
int length = input.length();
int output = 0;
double temp = 0;
for ( int i = 0; i < length; i++ )
{
temp = ( input[( length - 1 ) - i] - '0' );
output += pow( 2.0 , i ) * temp;
}
return output;
}
| common-pile/stackexchange_filtered |
Oracle AQ/JMS - Why is the queue being purged on application shutdown?
I have an application that queues and dequeues messages from Oracle AQ using the JMS interface. When the application is running, items get queued and dequeued and I can see queued items in the queue table. However, once the application shuts down, the queue table is cleared and the application cannot access the previously queued items. Any idea what might cause that behavior?
The Oracle AQ is created using this code:
BEGIN
dbms_aqadm.create_queue_table(
queue_table => 'schema.my_queuetable',
sort_list =>'priority,enq_time',
comment => 'Queue table to hold my data',
multiple_consumers => FALSE, -- This is necessary so that a message is only processed by a single consumer
queue_payload_type => 'SYS.AQ$_JMS_OBJECT_MESSAGE',
compatible => '10.0.0',
storage_clause => 'TABLESPACE LGQUEUE_IRM01');
END;
/
BEGIN
dbms_aqadm.create_queue (
queue_name => 'schema.my_queue',
queue_table => 'schema.my_queuetable');
END;
/
BEGIN
dbms_aqadm.start_queue(queue_name=>'schema.my_queue');
END;
/
I also have a Java class for connecting to the queue, queueing items and processing dequeued items like this:
public class MyOperationsQueueImpl implements MyOperationsQueue {
private static final Log LOGGER = LogFactory.getLog(MyOperationsQueueImpl.class);
private final QueueConnection queueConnection;
private final QueueSession producerQueueSession;
private final QueueSession consumerQueueSession;
private final String queueName;
private final QueueSender queueSender;
private final QueueReceiver queueReceiver;
private MyOperationsQueue.MyOperationEventReceiver eventReceiver;
public MyOperationsQueueImpl(DBUtils dbUtils, String queueName) throws MyException {
this.eventReceiver = null;
this.queueName = queueName;
try {
DataSource ds = dbUtils.getDataSource();
QueueConnectionFactory connectionFactory = AQjmsFactory.getQueueConnectionFactory(ds);
this.queueConnection = connectionFactory.createQueueConnection();
// We create separate producer and consumer sessions because that is what is recommended by the docs
// See: https://docs.oracle.com/javaee/6/api/javax/jms/Session.html
this.producerQueueSession = this.queueConnection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
this.consumerQueueSession = this.queueConnection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
this.queueSender = this.producerQueueSession.createSender(this.producerQueueSession.createQueue(this.queueName));
this.queueReceiver = this.consumerQueueSession.createReceiver(this.consumerQueueSession.createQueue(this.queueName));
this.queueConnection.start();
} catch (JMSException| NamingException exception) {
throw new MyOperationException("Failed to create MyOperationsQueue", exception);
}
}
@Override
protected void finalize() throws Throwable {
this.queueReceiver.close();
this.queueSender.close();
this.consumerQueueSession.close();
this.producerQueueSession.close();
this.queueConnection.close();
super.finalize();
}
@Override
public void submitMyOperation(MyOperationParameters myParameters) throws MyOperationException {
try {
ObjectMessage message = this.producerQueueSession.createObjectMessage(myParameters);
this.queueSender.send(message);
synchronized (this) {
if(this.eventReceiver != null) {
this.eventReceiver.onOperationSubmitted(message.getJMSMessageID(), myParameters);
}
}
} catch (JMSException exc) {
throw new MyOperationException("Failed to submit my operation", exc);
}
}
@Override
public void setMyOperationEventReceiver(MyOperationEventReceiver operationReceiver) throws MyOperationException {
LOGGER.debug("Setting my operation event receiver");
synchronized (this) {
if(this.eventReceiver != null) {
throw new IllegalStateException("Cannot set an operation event receiver if it is already set");
}
this.eventReceiver = operationReceiver;
try {
this.queueReceiver.setMessageListener(message -> {
LOGGER.debug("New message received from queue receiver");
try {
ObjectMessage objectMessage = (ObjectMessage) message;
eventReceiver.onOperationReady(message.getJMSMessageID(), (MyOperationParameters) objectMessage.getObject());
} catch (Exception exception) {
try {
eventReceiver.onOperationRetrievalFailed(message.getJMSMessageID(), exception);
} catch (JMSException innerException) {
LOGGER.error("Failed to get message ID for JMS Message: "+message, innerException);
}
}
});
} catch (JMSException exc) {
throw new MyOperationException("Failed to set My message listener", exc);
}
}
}
}
Oracle persistent connections maxing out server from asyncronous PHP scripts
I have written a simple OCI wrapper class in PHP which uses persistent connections (oci_pconnect). The class destructor calls oci_close.
This class is used for all my AJAX PHP scripts and so is called a lot. However, despite using persistent connections and oci_close not removing these from the cache (as per my understanding), the number of open connections to the database is maxing out, causing the system to fail. I was expecting the number of open connections to be just one for the whole app!
Am I doing something obviously wrong?
Skeleton code:
class Oracle {
private $connection;
private $connected;
function __construct($connectionString, $username, $password) {
if (!($this->connection = @oci_pconnect($username, $password, $connectionString))) {
echo 'Cannot connect to <EMAIL_ADDRESS>';
$this->connected = false;
} else {
$this->connected = true;
}
}
function __destruct() {
if ($this->connected) {
oci_close($this->connection);
}
}
}
I experienced the same problem with an inherited system. Dropping oci_pconnect in favour of oci_connect (non-persistent) solved the problem, so I never found out why connections weren't being re-used.
Whilst this would solve the problem, it makes the system quite noticeably slower.
Are all your connections using the same username, password, and connection string?
@BobJarvis Yep, they all connect to the same schema on the same SID
My only suggestion is to echo the username and connection string after a successful connection so you can eyeball it. Other than that I'm short on ideas.
One other thing - add an echo in __destruct so you can eyeball-verify that oci_close is being called.
Do NOT close the connection if you want it to be persistent.
From the documentation: "The oci_close() function does not close the underlying database connections created with oci_pconnect()."... Does oci_close therefore not work as documented?
That's probably outdated. oci_close() does close persistent connections since PHP 5.3.
@Xophmeister Depending on your setting and PHP version, oci_close may be able to close oci_pconnect(): PHP Doc
I'm using PHP 5.2.17, but oci8.persistent_timeout = -1, so presumably then oci_close is closing the connections... Can anyone confirm?
Can't confirm, but two things: 1) the behavior of your system seems to indicate this is not the case - or at least indicates that the underlying database connection is not being closed, and 2) it's simple enough to test - comment out the oci_close call and see what happens.
According to the doc page, "Setting this option to -1 means that idle persistent connections will be retained until the PHP process terminates or the connection is explicitly closed with oci_close()". So if this is set to -1 it would seem you'd need to call oci_close to make the connection available for reuse.
On a second thought ... do you have HTTP/1.1 and keep-alive turned on?
Yes: HTTP/1.1 and keep-alive. (With regards to experimenting: I don't have access to the DB server's logs, so can't directly test how code changes affect the system.)
Crop area different than Selected Area in iOS?
Here is a link on GitHub, https://github.com/spennyf/cropVid/tree/master, to try it out yourself and see what I am talking about; it would take 1 minute to test. Thanks!
I am taking a video with a square to show what part of vid will be cropped. Like this:
Right now I am doing this on a piece of paper with 4 lines in the square, and half a line difference on top and bottom. And then I crop the video using code I will post, but when I display the video I see this (ignore the background and green circle):
As you can see, there are more than four lines: I set it to crop a certain part, but more is included, even though I am using the same rectangle that is displayed in the camera and the same rectangle to crop.
So my question is why is the cropping not the same size?
Here is how I do crop and display:
//this is the square on the camera
UIView *view = [[UIView alloc] initWithFrame:CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height-80)];
UIImageView *image = [[UIImageView alloc] init];
image.layer.borderColor=[[UIColor whiteColor] CGColor];
image.frame = CGRectMake(self.view.frame.size.width/2 - 58 , 100 , 116, 116);
CALayer *imageLayer = image.layer;
[imageLayer setBorderWidth:1];
[view addSubview:image];
[picker setCameraOverlayView:view];
//this is crop rect
CGRect rect = CGRectMake(self.view.frame.size.width/2 - 58, 100, 116, 116);
[self applyCropToVideoWithAsset:assest AtRect:rect OnTimeRange:CMTimeRangeMake(kCMTimeZero, CMTimeMakeWithSeconds(assest.duration.value, 1))
ExportToUrl:exportUrl ExistingExportSession:exporter WithCompletion:^(BOOL success, NSError *error, NSURL *videoUrl) {
//here is player
AVPlayer *player = [AVPlayer playerWithURL:videoUrl];
AVPlayerLayer *layer = [AVPlayerLayer playerLayerWithPlayer:player];
layer.frame = CGRectMake(self.view.frame.size.width/2 - 58, 100, 116, 116);
}];
And here is code that does the crop:
- (UIImageOrientation)getVideoOrientationFromAsset:(AVAsset *)asset
{
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
CGSize size = [videoTrack naturalSize];
CGAffineTransform txf = [videoTrack preferredTransform];
if (size.width == txf.tx && size.height == txf.ty)
return UIImageOrientationLeft; //return UIInterfaceOrientationLandscapeLeft;
else if (txf.tx == 0 && txf.ty == 0)
return UIImageOrientationRight; //return UIInterfaceOrientationLandscapeRight;
else if (txf.tx == 0 && txf.ty == size.width)
return UIImageOrientationDown; //return UIInterfaceOrientationPortraitUpsideDown;
else
return UIImageOrientationUp; //return UIInterfaceOrientationPortrait;
}
And here is rest of the cropping code:
- (AVAssetExportSession*)applyCropToVideoWithAsset:(AVAsset*)asset AtRect:(CGRect)cropRect OnTimeRange:(CMTimeRange)cropTimeRange ExportToUrl:(NSURL*)outputUrl ExistingExportSession:(AVAssetExportSession*)exporter WithCompletion:(void(^)(BOOL success, NSError* error, NSURL* videoUrl))completion
{
// NSLog(@"CALLED");
//create an avassetrack with our asset
AVAssetTrack *clipVideoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
//create a video composition and preset some settings
AVMutableVideoComposition* videoComposition = [AVMutableVideoComposition videoComposition];
videoComposition.frameDuration = CMTimeMake(1, 30);
CGFloat cropOffX = cropRect.origin.x;
CGFloat cropOffY = cropRect.origin.y;
CGFloat cropWidth = cropRect.size.width;
CGFloat cropHeight = cropRect.size.height;
// NSLog(@"width: %f - height: %f - x: %f - y: %f", cropWidth, cropHeight, cropOffX, cropOffY);
videoComposition.renderSize = CGSizeMake(cropWidth, cropHeight);
//create a video instruction
AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = cropTimeRange;
AVMutableVideoCompositionLayerInstruction* transformer = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:clipVideoTrack];
UIImageOrientation videoOrientation = [self getVideoOrientationFromAsset:asset];
CGAffineTransform t1 = CGAffineTransformIdentity;
CGAffineTransform t2 = CGAffineTransformIdentity;
switch (videoOrientation) {
case UIImageOrientationUp:
t1 = CGAffineTransformMakeTranslation(clipVideoTrack.naturalSize.height - cropOffX, 0 - cropOffY );
t2 = CGAffineTransformRotate(t1, M_PI_2 );
break;
case UIImageOrientationDown:
t1 = CGAffineTransformMakeTranslation(0 - cropOffX, clipVideoTrack.naturalSize.width - cropOffY ); // not fixed width is the real height in upside down
t2 = CGAffineTransformRotate(t1, - M_PI_2 );
break;
case UIImageOrientationRight:
t1 = CGAffineTransformMakeTranslation(0 - cropOffX, 0 - cropOffY );
t2 = CGAffineTransformRotate(t1, 0 );
break;
case UIImageOrientationLeft:
t1 = CGAffineTransformMakeTranslation(clipVideoTrack.naturalSize.width - cropOffX, clipVideoTrack.naturalSize.height - cropOffY );
t2 = CGAffineTransformRotate(t1, M_PI );
break;
default:
NSLog(@"no supported orientation has been found in this video");
break;
}
CGAffineTransform finalTransform = t2;
[transformer setTransform:finalTransform atTime:kCMTimeZero];
//add the transformer layer instructions, then add to video composition
instruction.layerInstructions = [NSArray arrayWithObject:transformer];
videoComposition.instructions = [NSArray arrayWithObject: instruction];
//Remove any prevouis videos at that path
[[NSFileManager defaultManager] removeItemAtURL:outputUrl error:nil];
if (!exporter){
exporter = [[AVAssetExportSession alloc] initWithAsset:asset presetName:AVAssetExportPresetHighestQuality] ;
}
// assign all instructions for the video processing (in this case the transformation for cropping the video)
exporter.videoComposition = videoComposition;
exporter.outputFileType = AVFileTypeQuickTimeMovie;
if (outputUrl){
exporter.outputURL = outputUrl;
[exporter exportAsynchronouslyWithCompletionHandler:^{
switch ([exporter status]) {
case AVAssetExportSessionStatusFailed:
NSLog(@"crop Export failed: %@", [[exporter error] localizedDescription]);
if (completion){
dispatch_async(dispatch_get_main_queue(), ^{
completion(NO,[exporter error],nil);
});
return;
}
break;
case AVAssetExportSessionStatusCancelled:
NSLog(@"crop Export canceled");
if (completion){
dispatch_async(dispatch_get_main_queue(), ^{
completion(NO,nil,nil);
});
return;
}
break;
default:
break;
}
if (completion){
dispatch_async(dispatch_get_main_queue(), ^{
completion(YES,nil,outputUrl);
});
}
}];
}
return exporter;
}
So my question is: why is the video area not the same as the crop/camera area, when I have used exactly the same coordinates and size of square?
Just to be sure: once the cropped video is produced (so in the completion block) it should be saved on the iPhone's disk. Please check that file directly, I mean access the file (connecting the iPhone to the Mac and using a tool like iExplorer or iFunBox), then copy it to the Mac and open it with the default QuickTime Player. In this way you'll be sure that the resulting cropped video is exactly what you see in that square. Also, be sure that the crop area uses the proper coordinates relative to the referred view, for both the x and y axes.
@LucaIaco Okay, I used iExplorer, put the video on my Mac and played it with QuickTime, and the cropped area is still not correct. I have looked at the coordinates again and again and I am sure they are right. I am going to make a GitHub project and post the link, so you could download it, run it and see for yourself, if you wouldn't mind. Right now I am taking video of a green square with just the square in the cropped part, but then I see white when it's cropped. I would really appreciate it if you looked at the project.
Here is correct link https://github.com/spennyf/cropVid
@LucaIaco were you able to try it for yourself?
i'll try it as soon as possible ;)
Maybe Check This Previous Question.
It looks like it might be similar to what you are experiencing. A user on that question suggested cropping this way:
CGImageRef imageRef = CGImageCreateWithImageInRect([originalImage CGImage], cropRect);
UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
I hope that this helps or at least gives you a start in the right direction.
This answer is completely irrelevant. The question asks about cropping a video, not an image.
Sorry about that, I was up pretty late and definitely misread this one! Thanks for the heads up.
Haha, np, happens to all of us.
How to handle D3 drag events when mousedown and mouseup happen on different elements?
I'm experiencing problems with the d3 drag behavior. Variations of the issue I'm having have been answered before, but I couldn't find an answer to my specific problem.
Here's a fiddle which illustrates the problem I'm going to describe.
What I want to do is to have a click handler on a draggable element where the click handler shouldn't be executed on dragend. I know that inside the click handler I can use d3.event.defaultPrevented which should be set to true if the element was dragged. The problem arises when the mouseup event happens on another element than the mousedown event. This happens when the movement of the dragged element is slower than the mouse cursor. If the mouse is released and the dragged element isn't under the mouse yet d3.event.defaultPrevented is set to false and the click handler doesn't get called. This makes it impossible to find out whether the click event was fired after a drag or not.
In my example the circle flashes green if the click handler executes and d3.event.defaultPrevented was set to true. Also, in the click handler, propagation is stopped, preventing the event from bubbling up to the svg click handler. If the click handler of the circle doesn't execute and the event bubbles to the svg click handler, the circle flashes blue if d3.event.defaultPrevented was set to true, otherwise it flashes red.
What I want to achieve is to get the circle to flash green or blue no matter where the circle is on mouse up in order to be able to know whether the click event happened after a drag or not. Is this even possible or is this a limitation by the nature of javascript/browsers? If so is there a workaround? Or do I just have to disable the 'slowing down' of the circle when it is dragged?
I found a very similar question on SO but there wasn't really a useful answer.
Any help appreciated!
EDIT
Looks like the idea to stop the element from slowing down during dragging solves the problem. But I would still be interested if this is possible using the event information available.
Here's the code of the fiddle:
var nodes = [{}];
var svg = d3.select('body')
.append('svg')
.attr({
width: 500,
height: 500
})
.on('click', function(){
var color = d3.event.defaultPrevented ? 'blue' : 'red';
flashNode(color);
});
var force = d3.layout.force()
.size([500, 500])
.nodes(nodes)
.friction(.2)
.on('tick', forceTickHandler);
var nodeElements = svg
.selectAll('circle');
nodeElements = nodeElements
.data(force.nodes())
.enter()
.append('circle')
.attr({
cx: 10,
cy: 10,
r: 10
})
.on('click', function(){
d3.event.stopPropagation();
var color = d3.event.defaultPrevented ? 'green' : 'orange';
flashNode(color);
})
.call(force.drag);
function forceTickHandler(e){
nodes.forEach(function(node) {
var k = e.alpha * 1.4;
node.x += (250 - node.x) * k;
node.y += (250 - node.y) * k;
});
nodeElements
.attr('cx', function(d, i){
return d.x;
})
.attr('cy', function(d, i){
return d.y;
});
};
function flashNode(color){
nodeElements
.attr('fill', color)
.transition()
.duration(1000)
.attr('fill', 'black');
}
force.start();
The issue seems to be coming from the code in your forceTickHandler that updates the node positions:
nodes.forEach(function(node) {
var k = e.alpha * 1.4;
node.x += (250 - node.x) * k;
node.y += (250 - node.y) * k;
});
When this is commented out, the position of the node does not lag the mouse pointer. I don't really understand what you want to happen with the code above. The "typical" way of doing this, is similar to the sticky force layout example at http://bl.ocks.org/mbostock/3750558
Update: Here's a way that might get you close to what you are after: https://jsfiddle.net/ktbe7yh4/3/
I've created a new drag handler from force.drag and then updated what happens on dragend, it seems to achieve the desired effect.
The code changes are the creation of the drag handler:
var drag = force.drag()
.on("dragend", function(d) {
flashNode('green');
});
And then updating the creation of the nodes to use the new handler:
nodeElements = nodeElements
.data(force.nodes())
.enter()
.append('circle')
.attr({
cx: 10,
cy: 10,
r: 10
})
.on('click', function(){
d3.event.stopPropagation();
var color = d3.event.defaultPrevented ? 'green' : 'orange';
flashNode(color);
})
.call(drag);
The dragend in the drag handler gets called no matter what, but it still suffers from a similar problem that you describe, but you can deal with it a little better in the handler. To see what I mean here, try changing:
flashNode('green');
to:
flashNode(d3.event.defaultPrevented ? 'green' : 'orange');
and you'll see that if you release the mouse when the pointer is directly pointing at the circle, it'll flash green. If the circle is lagging the pointer and you release the mouse before the circle has settled under the cursor, then it'll flash orange. Having said that, the data element in the dragend handler always seems to be set to the circle that was dragged to begin with, whether the mouse button was released whilst pointing at the circle or not.
Thanks for your answer! I understand that the problem originates in the tick handler and the lag the recalculation of position causes. What I want to achieve is to get the click handler to fire even when the mouse cursor isn't on the element anymore when the dragend event happens. But now I that I write it like that it sounds kind of impossible and silly. I guess I'll just have to skip the recalculation of position when the element is dragged.
My idea was to have something like this but with additional interactions on the canvas and the element.
I have updated my answer with a possible way of achieving the outcome that you're after. It's a bit of a workaround, but it may let you keep the node moving the way you had it, with the flash behaviour you're after.
How does Git keep track of branch state despite being unable to connect to my git server
My company uses a VPN to limit access to the corporate network. Our Git server sits on this corporate network, and as such requires the VPN for access.
Recently, when I checked out master locally I got the following output:
Your branch is behind 'origin/master' by 33 commits, and can be fast-forwarded.
However, when attempting to pull master, I got the following, as I was not connected to the VPN:
squishman@squishman-Parallels-Virtual-Platform:~/SkyATF$ git pull origin master
ssh: Could not resolve hostname [REDACTED]: Name or service not known
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
My question is, how can Git know how far behind my local master is, if it cannot connect to the git server?
My question is, how can Git know how far behind my local master is, if it cannot connect to the git server?
It doesn't.
It knows how far you were behind, the last time your Git got an update from the other Git. A successful git fetch updates things and you'll get an answer that is merely seconds out of date, instead of minutes.
The count you see is the result of comparing the branch's commit (as found via git rev-parse HEAD) against the branch's upstream's commit (git rev-parse HEAD@{upstream}). Running git rev-list --count --left-right HEAD...HEAD@{u} will get you the numbers for ahead and behind respectively. (This three-dot syntax and the --count --left-right are a bit tricky to explain, although a combination of the gitrevisions documentation and the git rev-list documentation will get you there.)
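The comparison itself never touches the network: both refs and the commit graph they point into are stored locally, and the remote-tracking ref (origin/master) is only as fresh as the last fetch. Conceptually, the ahead/behind numbers are a set difference over commit ancestries; a toy Python sketch (real Git walks a DAG of commits, not flat collections — this only illustrates where the two numbers come from):

```python
def ahead_behind(local_commits, upstream_commits):
    """Count commits only on the local branch vs. only on the upstream.

    Both arguments are collections of commit ids as Git has them stored
    locally; the upstream side stands for the remote-tracking ref
    (origin/master), last updated by the most recent fetch.
    """
    local, upstream = set(local_commits), set(upstream_commits)
    ahead = len(local - upstream)    # commits you have that upstream lacks
    behind = len(upstream - local)   # commits upstream has that you lack
    return ahead, behind
```

With a local history of {a, b, c} and an origin/master of {a, b, d, e}, this reports (1, 2) — and the "behind" number can only change when `git fetch` updates the remote-tracking ref.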
This is a fundamental concept in git. Almost all git commands are local; only push and fetch (and pull, which is just fetch + merge/rebase, plus some expert-level commands) talk to a remote. @DVukajlo, make sure you fully understand it. Talk to your colleagues/team leader/mentor, do some git tutorials.
With git remote -v you can get the list of remotes (if you have more than one).
With git branch -av you get a detailed list of branches and the related remote, so keep in mind that a branch is indirectly associated with a remote URI.
Argument 2 passed to App\Http\Controllers\HomeController::productDetail() must be an instance of App\Product, string given
I have this route
Route::get('catalog/{category}/{product}', 'HomeController@productDetail')->name('product.index2');
And controller
public function productDetail(categories $categories, product $product)
{
$products = product::where('active', 1)->get();
if($product->categories != $categories){
abort(404);
}
return view('products', compact('product', 'products'));
}
The error
Argument 2 passed to App\Http\Controllers\HomeController::productDetail() must be an instance of App\Product, string given
Front
<ul class="accordion-menu">
@foreach ($categories as $item)
<li>
<div class="dropdownlink">{{$item->name}} <img src="{{ asset('build/img/d1.svg') }}" alt="Банковские терминалы"></div>
<ul class="submenuItems">
@foreach($item->children as $subcategory)
<li><a href="{{route('category.index2', $subcategory)}}">{{ $subcategory->name }}</a></li>
@endforeach
</ul>
</li>
@endforeach
</ul>
kernel.php
<?php
namespace App\Http;
use Illuminate\Foundation\Http\Kernel as HttpKernel;
class Kernel extends HttpKernel
{
/**
* The application's global HTTP middleware stack.
*
* These middleware are run during every request to your application.
*
* @var array
*/
protected $middleware = [
\App\Http\Middleware\CheckForMaintenanceMode::class,
\Illuminate\Foundation\Http\Middleware\ValidatePostSize::class,
\App\Http\Middleware\TrimStrings::class,
\Illuminate\Foundation\Http\Middleware\ConvertEmptyStringsToNull::class,
\App\Http\Middleware\TrustProxies::class,
];
/**
* The application's route middleware groups.
*
* @var array
*/
protected $middlewareGroups = [
'web' => [
\App\Http\Middleware\EncryptCookies::class,
\Illuminate\Cookie\Middleware\AddQueuedCookiesToResponse::class,
\Illuminate\Session\Middleware\StartSession::class,
// \Illuminate\Session\Middleware\AuthenticateSession::class,
\Illuminate\View\Middleware\ShareErrorsFromSession::class,
\App\Http\Middleware\VerifyCsrfToken::class,
\Illuminate\Routing\Middleware\SubstituteBindings::class,
],
'api' => [
'throttle:60,1',
'bindings',
],
];
/**
* The application's route middleware.
*
* These middleware may be assigned to groups or used individually.
*
* @var array
*/
protected $routeMiddleware = [
'auth' => \App\Http\Middleware\Authenticate::class,
'auth.basic' => \Illuminate\Auth\Middleware\AuthenticateWithBasicAuth::class,
'bindings' => \Illuminate\Routing\Middleware\SubstituteBindings::class,
'cache.headers' => \Illuminate\Http\Middleware\SetCacheHeaders::class,
'can' => \Illuminate\Auth\Middleware\Authorize::class,
'guest' => \App\Http\Middleware\RedirectIfAuthenticated::class,
'signed' => \Illuminate\Routing\Middleware\ValidateSignature::class,
'throttle' => \Illuminate\Routing\Middleware\ThrottleRequests::class,
'verified' => \Illuminate\Auth\Middleware\EnsureEmailIsVerified::class,
];
/**
* The priority-sorted list of middleware.
*
* This forces non-global middleware to always be in the given order.
*
* @var array
*/
protected $middlewarePriority = [
\Illuminate\Session\Middleware\StartSession::class,
\Illuminate\View\Middleware\ShareErrorsFromSession::class,
\App\Http\Middleware\Authenticate::class,
\Illuminate\Session\Middleware\AuthenticateSession::class,
\Illuminate\Routing\Middleware\SubstituteBindings::class,
\Illuminate\Auth\Middleware\Authorize::class,
];
}
Can you share your code where you called this route ?
You are using Model Bindings. Could you show us your app/Http/Kernel.php file just to make sure you have the correct middlewares set up?
Please share the link that you use to call the product.index2 route. you shared the one that calls the route category.index2.
Your route and your controller method expect 2 parameters: one is a category object and the other is a product object. But when you call the route, you pass only one parameter.
Route::get('catalog/{category}/{product}', 'HomeController@productDetail')->name('product.index2');
You need to pass your product object as well, like below:
<li><a href="{{route('category.index2', ['category'=> $subcategory, 'product'=>$product ])}}">{{ $subcategory->name }}</a></li>
Group records using lookup object reference fields in fflib
I have a use case to group accounts by their DNB Company Record's (a lookup on Account) Global_Ultimate_DUNS_Number__c, and to check the accounts tied via Global DUNS: if they are present in more than one country and the employee count is > 10000, set the Segmentation to "Global"; otherwise set it to "Enterprise".
Here's what I tried:
IAccounts accountsWithGlobalDuns = Accounts.newInstance(accountsWithGlobalDuns);
Set<Id> dnbIds = globalDUNSAccounts.getDNBIds();
// gives the dnbrecord id to global duns where dnbid is the id tied to the global ultimate account
Map<Id, String> gdnbIdToGduns = DnBCompanyRecords.newInstance(
DnBCompanyRecordsSelector.newInstance().selectById(dnbIds)
).getDNBIdByGDUNSWithDUNSSameAsGDUNS();
//gives the dnbrecordid to its global duns association
Map<Id, String> dnbIdToGduns = DnBCompanyRecords.newInstance(
DnBCompanyRecordsSelector.newInstance().selectById(dnbIds)
).getDNBIdByGDUNS();
//filters only the globalultimateaccounts
IAccounts globalUtimateAccounts = accountsWithGlobalDuns.selectByDNBId(gdnbIdToGduns.keySet());
Map<String, Decimal> gdunsByCountryCount = accountsWithGlobalDuns.getCountryCountByGDUNS(dnbIdToGduns);
IAccounts globalUtimateAccountsInMoreThanOneCountry = globalUtimateAccounts.selectByDnbIdAndGDUNSMoreThanCountryCount(
1, dnbIdToGduns, gdunsByCountryCount
).selectByEmployeesGreaterThan(10000).setSegmentation('Global');
Accounts:
selectByDnbIdAndGDUNSMoreThanCountryCount(Decimal countryCount, Map<Id, String> idToGDUNS, Map<String, Decimal> gdundsToCount) {
    for (Account account : getAccounts()) {
        if (gdundsToCount.get(idToGDUNS.get(account.Dnb_Company_Record__c)) > countryCount) {
            // add
        }
    }
    return new Accounts(<list>);
}
I feel like the above is very complex. Is there a proper way to do this in fflib? Also, can we directly use a lookup field's reference in a domain class?
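For reference, stripped of the fflib/selector plumbing, the business rule itself reduces to a small grouping pass. This sketch uses hypothetical plain-dict records with 'gduns', 'country' and 'employees' keys rather than SObjects, and applies the employee threshold per account (the question leaves it ambiguous whether it applies to the global ultimate only):

```python
from collections import defaultdict

def segment(accounts):
    """Apply the stated rule: accounts tied via Global DUNS that span more
    than one country and have employee count > 10000 are 'Global';
    everything else is 'Enterprise'.
    """
    # first pass: collect the distinct countries seen per Global DUNS
    countries_by_gduns = defaultdict(set)
    for a in accounts:
        countries_by_gduns[a['gduns']].add(a['country'])

    # second pass: classify each account against its group's country spread
    for a in accounts:
        multi_country = len(countries_by_gduns[a['gduns']]) > 1
        a['segmentation'] = ('Global'
                             if multi_country and a['employees'] > 10000
                             else 'Enterprise')
    return accounts
```

Two passes over the records replace the chain of selector/domain method calls; in Apex the same shape would be a `Map<String, Set<String>>` built in one loop and consulted in a second.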
In general, the selector should just do the query, in your case, perhaps with an Aggregate query, then have a method that takes the selector results and refines it if the Aggregate query can't be used to deliver a final result. Your domain class issue should be posed as a separate question
Highcharts - colorByPoint for serie.scatter.marker
In Highcharts I would like to get a series of points colored in a sequential way:
marker 1: green,
marker 2: blue,
marker 3: red,
marker 4: green,
marker 5: blue,
marker 6: red,
marker 7: green,
marker 8: blue,
marker 9: red,
etc.
This is exactly what is achieved for columns with chart.options.plotOptions.column.colorByPoint = true
How to get the same thing for Scatter?
You can use colorByPoint in scatter series as well. To have only three colors in your scatter series you need to update the colors array in Highcharts object:
$('#container').highcharts({
chart: {
type: 'scatter',
zoomType: 'xy'
},
colors: ['red', 'green', 'blue'],
series: [{
colorByPoint: true,
data: [
[161.2, 51.6],
[167.5, 59.0],
[159.5, 49.2],
[157.0, 63.0],
[155.8, 53.6],
[170.0, 59.0],
[159.1, 47.6],
[166.0, 69.8],
[176.2, 66.8],
[160.2, 75.2],
[172.5, 55.2],
[170.9, 54.2],
[172.9, 62.5],
[153.4, 42.0],
[160.0, 50.0]
]
}]
});
Here you can see an example how it can work: http://jsfiddle.net/22419j0j/
Yes, thanks! And it surprised me! (The parameter is not in the Highcharts API reference...)
But I forgot to mention that I use Highstock. Does it make a difference?
You're right, it works. Thank you very much. I have yet to identify what the problem was in my own testing. But it works now.
Teamcity - deploying to virtual application "ERROR_USER_UNAUTHORIZED" error
I have a website that also has a virtual application (separate project) at the root.
I am trying to deploy to the virtual application using Teamcity. However, I keep receiving the following error:
"More Information: Connected to the remote computer ("xxxk") using the Web Management Service, but could not authorize. Make sure that you are using the correct user name and password, that the site you are connecting to exists, and that the credentials represent a user who has permissions to access the site. Learn more at: http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_USER_UNAUTHORIZED."
I know the credentials are correct, the site does exist and the users have all the correct credentials. I have tried everything on the "http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_USER_UNAUTHORIZED" link.
I get the feeling that it's just not possible to deploy to a virtual application. Has anyone managed to successfully deploy to a virtual application before?
Thanks in advance.
Adam
How are you trying to deploy the virtual application? What type of build runner are you using?
@tspauld I have 3 build steps within teamcity - 1) MSBuild, 2) Recycle App pool, 3) Deploy package to server. The deploy build step uses the command line runner. Any help greatly appreciated.
Did you ever find an answer to this? Having the same issue
| common-pile/stackexchange_filtered |
Python tuple assignment and checking in conditional statements
So I stumbled into a particular behaviour of tuples in python that I was wondering if there is a particular reason for it happening.
While we are perfectly capable of assigning a tuple to a variable without
explicitly enclosing it in parentheses:
>>> foo_bar_tuple = "foo","bar"
>>>
we are not able to print, or check in a conditional if statement, the variable containing
the tuple in the previous fashion (without explicitly typing the parentheses):
>>> print foo_bar_tuple == "foo","bar"
False bar
>>> if foo_bar_tuple == "foo","bar": pass
SyntaxError: invalid syntax
>>>
>>> print foo_bar_tuple == ("foo","bar")
True
>>>
>>> if foo_bar_tuple == ("foo","bar"): pass
>>>
Does anyone know why?
Thanks in advance, and although I didn't find any similar topic, please inform me if you think it is a possible duplicate.
Cheers,
Alex
Essentially, commas used in assignments are what actually create tuples, not the parentheses. However, the == operator is a function that takes only a single argument, and when you write values separated by commas, only the first value is taken as that argument rather than the tuple 'foo','bar'. Wrapping in parens forces the tuple construction to happen before the comparison is evaluated, so it behaves as expected.
To put it another way, if you think of foo_bar_tuple == 'foo','bar' as actually being foo_bar_tuple.__eq__('foo', 'bar'), you can immediately see why you need to wrap in parens to make it work
@aruisdante: Your statement that foo_bar_tuple == 'foo', 'bar' is equivalent to foo_bar_tuple.__eq__('foo', 'bar') is incorrect. The comparison is happening with just the 'foo' string (foo_bar_tuple.__eq__('foo'), which is False) and 'bar' is left as a separate expression.
It's because the expressions separated by commas are evaluated before the whole comma-separated tuple (which is an "expression list" in the terminology of the Python grammar). So when you do foo_bar_tuple=="foo", "bar", that is interpreted as (foo_bar_tuple=="foo"), "bar". This behavior is described in the documentation.
You can see this if you just write such an expression by itself:
>>> 1, 2 == 1, 2 # interpreted as "1, (2==1), 2"
(1, False, 2)
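Applied to the original variable, the same precedence rule can be checked directly (a quick sketch):

```python
fbt = ("foo", "bar")

# The comma binds less tightly than ==, so this expression is the
# 2-tuple ((fbt == "foo"), "bar"), not a tuple comparison.
print((fbt == "foo", "bar"))   # (False, 'bar')

# Parenthesizing the right-hand side gives a real tuple comparison.
print(fbt == ("foo", "bar"))   # True
```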
The SyntaxError for the unparenthesized tuple is because an unparenthesized tuple is not an "atom" in the Python grammar, which means it's not valid as the sole content of an if condition. (You can verify this for yourself by tracing around the grammar.)
Right! Okay thanks for that mate! I will accept as soon as it will let me :) cheers
Consider the example if 1 == 1,2: which should cause a SyntaxError; following the full grammar:
if 1 == 1,2:
Using the if_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite], we get to shift the if keyword and start parsing 1 == 1,2:
For the test rule, only first production matches:
test: or_test ['if' or_test 'else' test] | lambdef
Then we get:
or_test: and_test ('or' and_test)*
And step down into and_test:
and_test: not_test ('and' not_test)*
Here we just step into not_test at the moment:
not_test: 'not' not_test | comparison
Notice, our input is 1 == 1,2:, thus the first production doesn't match and we check the other one: (1)
comparison: expr (comp_op expr)*
Continuing stepping down (we take only the first non-terminal, as the zero-or-more star requires a terminal we don't have at all in our input):
expr: xor_expr ('|' xor_expr)*
xor_expr: and_expr ('^' and_expr)*
and_expr: shift_expr ('&' shift_expr)*
shift_expr: arith_expr (('<<'|'>>') arith_expr)*
arith_expr: term (('+'|'-') term)*
term: factor (('*'|'/'|'%'|'//') factor)*
factor: ('+'|'-'|'~') factor | power
Now we use the power production:
power: atom trailer* ['**' factor]
atom: ('(' [yield_expr|testlist_comp] ')' |
'[' [testlist_comp] ']' |
'{' [dictorsetmaker] '}' |
NAME | NUMBER | STRING+ | '...' | 'None' | 'True' | 'False')
And shift NUMBER (1 in our input) and reduce. Now we are back at (1) with input ==1,2: to parse. == matches comp_op:
comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not'
So we shift it and reduce, leaving us with input 1,2: (the current parsing output is NUMBER comp_op; we need to match expr now). We repeat the process for the right-hand side, going straight to the atom nonterminal and selecting the NUMBER production. Shift and reduce.
Since , does not match any comp_op we reduce the test non-terminal and receive 'if' NUMBER comp_op NUMBER. We need to match else, elif or : now, but we have , so we fail with SyntaxError.
I think the operator precedence table summarizes this nicely:
You'll see that comparisons come before expressions, which are actually dead last.
in, not in, is, is not, <, <=, >, >=, <>, !=, ==    Comparisons, including membership tests and identity tests
...
(expressions...), [expressions...],                 Binding or tuple display, list display,
{key: value...}, `expressions...`                   dictionary display, string conversion
| common-pile/stackexchange_filtered |
How do you remove an li with specific text?
Thanks for taking the time to read this.
I need to remove the list item that contains "Log In". The problem is that once logged in the list items change and I can't remove them by the order they are displayed in the list. I also cannot just delete the html or give any of the list items additional Ids or classes. I can add css and javascript though that does interact with the page.
Is there a way to remove only the list item that contains "Log In" within the li and anchor tags?
<ul class="nav navbar-nav" id="navLinks">
<li class="list-group-item">
<a href="/">Home</a>
</li>
<li class="list-group-item">
<a href="/postings/search">Search</a>
</li>
<li class="list-group-item">
<a href="/user/new">Create Account</a>
</li>
<li class="list-group-item">
<a href="/login">Log In</a>
</li>
<li class="list-group-item">
<a href="/help">Help</a>
</li>
</ul>
I assume you want to show the login element if the user is not logged in, and hide it if they are. Since you have control over the CSS, add a helper class .hide that sets display: none;.
Then, when a user is logged in, run a conditional in JS to check the textContent of each list item's first child (children[0]), which will be the a tag. If the text matches and the userLoggedIn status is true, add the class. I also passed let userLoggedIn = true; as a parameter to the function, allowing you to control whether the class is added depending on whether the user is logged in or not.
You can query the list-group items using document.querySelectorAll('#navLinks .list-group-item'); then iterate over the NodeList and use a conditional to check the first child's textContent with item.children[0].textContent === 'Log In'; if this returns true and the user is logged in, add a CSS class to hide the element.
const listItems = document.querySelectorAll('#navLinks .list-group-item')
const listItems2 = document.querySelectorAll('#navLinks2 .list-group-item')
function hideItem(el, bool) {
el.forEach(item =>
item.children[0].textContent === 'Log In' && bool === true ?
item.classList.add('hide') :
item.classList.remove('hide')
)
}
let userLoggedIn = true
let userLoggedIn2 = false
hideItem(listItems, userLoggedIn)
hideItem(listItems2, userLoggedIn2)
.hide {
display: none;
}
<h2>user is logged in</h2>
<ul class="nav navbar-nav" id="navLinks">
<li class="list-group-item">
<a href="/">Home</a>
</li>
<li class="list-group-item">
<a href="/postings/search">Search</a>
</li>
<li class="list-group-item">
<a href="/user/new">Create Account</a>
</li>
<li class="list-group-item">
<a href="/login">Log In</a>
</li>
<li class="list-group-item">
<a href="/help">Help</a>
</li>
</ul>
<h2>user is NOT logged in</h2>
<ul class="nav navbar-nav" id="navLinks2">
<li class="list-group-item">
<a href="/">Home</a>
</li>
<li class="list-group-item">
<a href="/postings/search">Search</a>
</li>
<li class="list-group-item">
<a href="/user/new">Create Account</a>
</li>
<li class="list-group-item">
<a href="/login">Log In</a>
</li>
<li class="list-group-item">
<a href="/help">Help</a>
</li>
</ul>
Dale, thank you! Your first answer prior to your edit actually worked perfectly. I have small sections where I can add styles and scripts. However, I am clueless on how to manipulate items outside of a defined id or given class. I know how to manipulate list items by order, but when the user logs in, more list items appear. Hiding the Log In is perfect because in the custom navigation I have access to, I am able to provide two different login links: one for prospective users, and another for current users. I've had people get frustrated and create new accounts. Problem is eliminated. I have much to learn!
Glad to help out, and if you're not continuously learning then, IMHO, you're not working ;) Happy coding!
If the Log In item is always the fourth, you can do it by adding the following JS:
document.querySelectorAll('.list-group-item')[3].remove()
That does not require an interaction with the user and you can add it in any function you want.
| common-pile/stackexchange_filtered |
PHP page not working on Bluemix, but works fine on my local webserver
I'm trying to run this code
include 'safe/DAL.php';
$feedback = getAllCups();
if ($feedback != "Error.") {
for ($x = 0; $x < count($feedback); $x++) {
$tmp_cup = $feedback[$x];
echo '<a href="#" class="list-group-item">';
echo '<h4 class="list-group-item-heading">';
echo $tmp_cup->getCupName() . ', ' . $tmp_cup->getYear();
echo '</h4>';
echo '<p class="list-group-item-text">Vinnare: Inte avslutad ännu</p>';
echo'</a>';
}
} else {
echo '<a href="#" class="list-group-item">';
echo'<h4 class="list-group-item-heading">Inga cuper hittades</h4>';
echo'</a>';
}
on my Bluemix app (it works fine on my local webserver when I'm debugging).
It shows nothing, and stops any other code after it from being shown.
I've tried to enable error reporting like this
error_reporting(E_ALL);
but this, too, does not show anything.
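Note that error_reporting alone only controls which errors are reported, not whether they are printed to the page; many hosted environments ship with display_errors turned off. A sketch of a fuller debug setup, placed at the very top of the script (and removed again before going to production):

```php
<?php
// For debugging only: report everything AND display it.
ini_set('display_errors', '1');
ini_set('display_startup_errors', '1');
error_reporting(E_ALL);
```

Keep in mind that a parse error in this same file will still give a blank page, since the script never starts executing; in that case only the server logs will help.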
Why does it work on my local webserver, but not on bluemix?
Check your server's error logs.
Are you sure the right file is on the server? Have you tried restarting the server? Do you have some kind of opcode cache?
I have seriously no clue about how to debug it on my server. @SecondRikudo
Likely, the answer to your problem lies in some error being generated by Bluemix or your code. In a production system, you don't want to output stack traces and other system errors to potential clients, so Bluemix generally forces errors to be logged only to log files.
From your command line, log in to Bluemix using the cf tool, then run this command to get the most recent logs from your app: cf logs <app_name> --recent.
The result should show you which file and line is causing the app crash (which may in fact be something within DAL.php).
I've only just begun using Bluemix. Where can I get hold of the cf tool and how do I use it? And why does it crash on Bluemix but not on other web servers?
You can obtain the CF CLI tool here: https://github.com/cloudfoundry/cli#downloads.
It's a command-line tool, so after you install it, you use it from the Command Prompt or Terminal on your computer. First, run cf login, which will ask you for the CF API to contact. In the case of Bluemix, this is https://api.ng.bluemix.net. Next, enter your email and password. After successful login, you can run the command to get your logs.
Thanks, I'll check it out tomorrow.
As to why your app works locally but not on Bluemix, it could be related to any number of things. Likely, it's some environmental thing you're not taking into account. If you're connecting to a database, you need to have that bound and then use the VCAP_SERVICES environment information to access it. Additionally, filesystem access is limited in Bluemix, so you usually can't write to disk directly. The logs will hopefully tell you what the exact issue is.
Has this been helpful? Any success?
Sorry Matt I've been busy. I will however need to address this issue sooner or later and then I'll try this approach first of all and let you know.
I've included the logs from the CLI tool, feel free to have a look. I can not find any critical errors. Sorry again for the late response.
Did you visit the page that renders as blank today before pulling the logs?
Yes,I wrote cf logs myApp, then I navigated to where the problem arises.
Are you connecting to a database? MySQL perhaps? Can you show the code in DAL.php? I've occasionally seen DB related code that fails without throwing errors.
Yes, I'm connecting to a MySQL database in DAL.php. it seems like that's the problem, but I don't know how to solve it since the connection is working everywhere else except on Bluemix, and as you say it throws no errors.
Please show the basic code from DAL.php. You could also troubleshoot it by inserting echo statements at various places in your code and when they stop showing up, you've found the issue.
Sorry Matt, but I've decided to abandon bluemix and use a different webserver due to lack of time to solve this issue. I'm grateful for your persistence in this matter tho, so thanks for trying.
| common-pile/stackexchange_filtered |
Using keyboard.read_key to quit a program on a condition
I'm trying to add functionality to my first project, a number guessing game. I want to allow the user to either enter a number to guess, or press escape to exit the program.
I've contained an if/else conditional within a try block, at the point where the user provides input. I feel I have a semantic error that I can't quite place: the quit condition works upon first entering the while loop, but not in any further iterations.
I suspect there are other ways I can make this code more concise, and creating some functions is on my short-term to-do list.
Any feedback appreciated,
Cheers.
import random as rand
import keyboard

generatedNumber = rand.randint(1,10)
while True:
try:
# prompt the user for their input
print("Please enter a number between 1 and 10.")
# read the users keystroke and if it is equal to 'esc'
if keyboard.read_key(suppress=True) == 'esc':
#close the program
print("The program will now close.")
quit()
else: # otherwise
#read their guessed number
userInput = int(input())
except ValueError:
print("Your input was invalid.")
continue
if userInput < 1 or userInput > 10:
print("Your guess should be a number between 1 and 10.")
continue
if userInput > generatedNumber:
print("Your guess was too high.")
continue
if userInput < generatedNumber:
print("Your guess was too low.")
continue
if userInput == generatedNumber:
print("Congratulations! Your guess was correct.")
break
print("The game is now over.")
exit()
The problem you have is a conflict between the read_key function and input: if ESC is not entered, another character is still expected, and this then affects the input call. Although there are complicated low-level ways to deal with this, it would be much simpler just to accept, say, Q to quit and pick this up after input, before the conversion to int.
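A sketch of that suggestion: let the user type 'q' to quit instead of mixing keyboard.read_key with input(). The read parameter (defaulting to input) is not part of the original code; it is only there so the loop can be exercised without a real console:

```python
import random

def guessing_game(read=input, target=None):
    """Number-guessing loop that quits on 'q' instead of reading
    raw keystrokes with keyboard.read_key alongside input()."""
    if target is None:
        target = random.randint(1, 10)
    while True:
        raw = read("Please enter a number between 1 and 10 (q to quit): ")
        if raw.strip().lower() == "q":
            print("The program will now close.")
            return "quit"
        try:
            guess = int(raw)
        except ValueError:
            print("Your input was invalid.")
            continue
        if not 1 <= guess <= 10:
            print("Your guess should be a number between 1 and 10.")
        elif guess > target:
            print("Your guess was too high.")
        elif guess < target:
            print("Your guess was too low.")
        else:
            print("Congratulations! Your guess was correct.")
            return "correct"
```

Checking for the sentinel before int() keeps the quit path out of the ValueError handler, which is what tripped up the original read_key approach.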
| common-pile/stackexchange_filtered |
Position of rendered GeoTiffs are inconsistent
I'm using React with OpenLayers to render a GeoTIFF overlay on top of a map (Mapbox satellite).
The problem I'm facing is that sometimes these GeoTIFF files are rendered a little off from their original location.
Here are some screen shots
image 1
image 2
As we can see here the images are a little to the top right.
This doesn't happen on some GeoTiff files.
Here's the code that I'm using
const getViewConfig = map => (viewConfig) => {
const code = viewConfig.projection?.getCode();
return fromEPSGCode(code).then(() => {
const projection = code;
const extent = transformExtent(
viewConfig.extent,
viewConfig.projection,
projection
);
const width = getWidth(extent);
const height = getHeight(extent);
const center = getCenter(extent);
const size = map.getSize();
return {
projection,
center,
resolution: Math.max(width / size[0], height / size[1]),
};
});
};
function init() {
const key = import.meta.env.VITE_MAPBOX_KEY;
const url = `https://api.mapbox.com/styles/v1/xxxx/xxxx/tiles/256/{z}/{x}/{y}@2x?access_token=${key}`;
const baseSource = new XYZ({
url,
maxZoom: 22,
});
const baseLayer = new Tile({
source: baseSource,
});
const source = new GeoTIFF({ sources: [{ url: props.src }] });
const layer = new TileLayer({ source });
setLayer(layer);
const customMap = new Map({
controls: [],
target: mapRef.current,
layers: [baseLayer, layer],
});
customMap.setView(source.getView().then(getViewConfig(customMap)))
setMap(customMap);
customMap.on("loadstart", function () {});
customMap.on("loadend", function () {
setShowLoader(false);
});
}
I tried some post-rendering code to move the GeoTIFF down by half of its height and left by its full width, which kind of works, but it causes other GeoTIFFs that render correctly to become incorrect. So I don't think that's a good solution.
I'd like to know why this is happening and how to fix it. Is the issue with the tif files or with the code? If it's in the tif file, how can I test it?
Google "geotiff viewer online" for some sites where you can test a file. e.g. https://app.geotiff.io/load or you could use a Mapbox account https://docs.mapbox.com/help/troubleshooting/uploads/#tiff-uploads
Hi, Mike I've tested it with Qgis and there it's rendering at the proper location
Hi @Mike, I'm using proj4 for transforming the projection code to coordinates. Do you think the problem might be in this library?
There might be a problem with proj4, or the projection definition obtained from epsg.io, if it is an unusual projection.
| common-pile/stackexchange_filtered |
datepicker doesn't work in ie7
I have implemented a jQuery UI datepicker which works perfectly in FF but not in IE.
See here: link (try to click on the "Geboortedatum" field).
What's going wrong?
here is your problem:
//set the rules for the field names
rules: {
firstname: {
required: true,
minlength: 2
},
surname: {
required: true,
minlength: 2
},
email:{
required: true,
email: true
},
password:{
required: true
}, <----- remove this comma !!!
},
Remove that comma above. Also, if you put that block of code into http://jslint.com you will find the same thing. IE does not like trailing commas like that in hashes, as others have pointed out.
+1, spotted the offending comma, nice one!
Line 76 - 78 in Ladosa.js:
password:{
required: true,
},
should be
password:{
required: true
},
At least that's what my IE is giving errors about.
Hope this helps you in the right direction.
EDIT (line 78, also in Ladosa.js)
rules: { //begin rules tag
...
password:{
required: true
}, // <--- remove this comma also!
}, //end rules tag
messages: {
name: "Please enter your name",
email: "Please enter a valid email address"
},
Be sure to check if all opening tags have closing tags, and when removing code in a function please be sure to remove everything...
Also, if you INDENT your code it is easier to spot the mistake.
how are you able to see line 79, my view source only shows to line 69 for some reason
I edited the answer: it's line 76-78 in Ladosa.js.
No... my question is about viewing the source code: when I open it in Notepad, the file only goes up to line 69 and I don't understand why.
| common-pile/stackexchange_filtered |
Is there any way to create index in Pig Script?
I have a data file which has no id number (index). Can one create an index for each entry using a UDF or any built-in function in Pig? For example:
data = load 'myfile.txt' using PigStorage(',') AS ( speed:float, location:chararray);
A = foreach data generate index as (Id:int), speed, location;
I am having a problem loading data from Pig into HBase, because HBase reads speed as the row-key value and there are many duplicate speed values in my file. I want to set an index as the row-key value and store it in the HBase table. Do you have any suggestions for this? Thank you.
Fundamentally, adding an increasing count like an ID is really hard to do in distributed computing. The problem is each split needs to know how many numbers were before it in order to keep counting. So, I don't suggest that approach.
You can write a UDF that generates a Java UUID or you can use RANDOM to just generate a random number. For example:
data = load 'myfile.txt' using PigStorage(',') AS ( speed:float, location:chararray);
A = foreach data generate RANDOM() as id, speed, location;
Some unsolicited advice: you need to think about why you are using HBase for a second. The whole point of using a key-value store is so that you can look up things by keys. If you are just jamming it into HBase with an arbitrary ID, how are you going to look it up? If you are just planning on doing full table scans, you should probably just be using HDFS.
What kinds of questions are you going to ask of your data? If you are doing by location, you could make your location be part of the key and have the event be in the row.
I assure you that you are just using it wrong or don't understand how the data model works. It is neither garbage nor incomplete. You should read more about it to figure out what it can do for you, and what you need to do in order for it to work correctly. For example, in 2010, Facebook was using HBase to store 135 billion messages a month. I'm sure it can handle your sensor data. http://highscalability.com/blog/2010/11/16/facebooks-new-real-time-messaging-system-hbase-to-store-135.html
Pig version 0.14 supports the built-in function UniqueID. It returns a unique id string for each record in the form "taskindex-sequence".
A = foreach data generate UniqueID() as id, speed, location;
http://pig.apache.org/docs/r0.14.0/func.html#uniqueid
I had no idea this was a thing. Thanks for the update!
This was simply solved by using the following python UDF:
import random
from datetime import datetime as dt
def currTime():
    x = dt.now()
    return x.microsecond + int(random.random()*100) + x.second*1000000
At every step it generates a unique number, and I used this number as an index. There is still a chance of the same index appearing in different entries, but it is very small (less than 0.01%) and depends on the computer's processing time for each unit step.
You are absolutely going to get duplicate IDs using this. x.microsecond has 1,000,000 values and x.second has 60 values (the second*1000000 term plus microsecond just counts microseconds within the minute), and the random offset of up to 100 barely expands that space, giving roughly 60 million distinct values. By the birthday problem you will have a 99% chance of a duplicate ID after only about 23,500 ID generations.
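The collision figures quoted in this thread follow from the standard birthday-problem approximation; a quick sketch, with the size of the ID space left as a parameter:

```python
import math

def collision_probability(n, space_size):
    """Approximate probability of at least one duplicate after n draws
    from space_size equally likely IDs (birthday-problem estimate)."""
    return 1.0 - math.exp(-n * (n - 1) / (2.0 * space_size))
```

Plugging in n = 7_434 with a 6-million-ID space, or n = 23_500 with a 60-million one, gives roughly a 99% collision chance either way.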
| common-pile/stackexchange_filtered |
Using a batch file to open Task Manager
Basically I want to use a batch file to open Task Manager.
All I need it to do is open Task Manager.
Easy, just type in TaskMgr in Notepad then save as a batch file.
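Concretely, the whole batch file is just a line or two; save it as, say, open-taskmgr.bat (the filename is arbitrary):

```batch
@echo off
start taskmgr
```

Using start launches Task Manager without keeping the console window waiting for it to exit.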
Easy, just type in TaskMgr.exe in Notepad then save as a batch file.
Thank you for posting an answer - but maybe it would help to provide some clarification, in this specific case. The question already has an accepted answer. Your answer is almost identical to that answer. Do you need to include the .exe suffix? If so, then why? Or, are there any specific advantages to your solution - and, if so, what are they?
Don't forget to take the [tour] and read [ask]. And welcome to Stack Overflow!
| common-pile/stackexchange_filtered |
Java Spring Boot : How to find all rows with multiple Ids like findAllByAge(ArrayList<Long> ageList)?
I am having a table/entity User with fields like userId, name, age etc.
I know with Spring boot, in Repository, we can search for rows like
User findByUserId(Long userId)
Which may return User with the passed argument user id.
But I am looking for an option where I can pass a list as the argument, something like below.
List<User> findAllByAge(ArrayList<Long> ageList)
Which may return all the users whose age is in the passed ageList. Let me know if this is possible this way or if I have to use a native query. I know I can do this with native queries also, but if it's possible this way then it would be great.
I often forget how the query method syntax works, perhaps this well help some:
https://docs.spring.io/spring-data/jpa/docs/current/reference/html/#jpa.query-methods
@Gavin the link is really helpful.
Anytime. I didn't know the answer, but had an idea where to look. Looks like @YCF_L's answer below is what you need, or a very good starting place. If you are looking to query an age range, there is a keyword for that too :)
You have to use In at the end of the name of your method:
List<User> findAllByAgeIn(ArrayList<Long> ageList)
^^
For more details refer to this: Supported keywords inside method names
| common-pile/stackexchange_filtered |
Stripping a string of characters in PHP
Possible Duplicate:
stripping out all characters from a string, leaving numbers
I have something produced by an API that looks like: 34 Canadian Dollars. I'm trying to get rid of the alphabetical characters and keep only the numeric characters. How can I do that?
Given the fact that you're working with currencies and that the number may contain a comma or decimal point, you should use this instead:
preg_match('/([0-9\.,]+)/', $input, $matches);
// Output the amount.
echo $matches[1];
$result = preg_replace('/[^0-9\.,]/', '', $input);
HTH.
Edited to accept commas and periods.
This doesn't take into account decimal points and commas. See my solution below. He's trying to use the Google Calculator API to convert currencies which will result in decimal points.
That's easily fixed and IMO still much nicer than using preg_match. :)
You're right except that should the string contain a dot somewhere else (i.e. "34.31 Canadian Dollars."), your preg_replace would return "34.31." while my example above would return "34.31".
If it is an integer you are trying to collect you can use (int) $input
Well, I don't know your purpose, but if the numbers are always at the beginning of the string, you can use casting.
$int = (int)"34 Canadian Dollars";
OR
$string = "34 Canadian Dollars";
$number = (int)$string
Hope that helps. The preg_replace in the other answer will also do the job.
preg_match would be perfect for this.
preg_match("/^([0-9]+)/", $inputStr, $results);
echo $results[1];
Also, this online regex tester is a great place to test out other regex patterns.
preg_replace would probably work better, as per @jupaju's answer.
I completely disagree. If there are decimals, preg_replace won't work. Even by adding a decimal point to the list "34.12 Canadian Dollars." would become "34.12." With that said, this preg_match doesn't take decimal points or commas into consideration.
| common-pile/stackexchange_filtered |
how to change the border of a CupertinoTextField when it is focused in Flutter
I am trying to develop a form using Flutter and I need to change the border of my CupertinoTextField when the user focuses on it.
What have you tried so far? Please [edit] your question to include a [mcve] of the code you want to change, including any attempts you've made to solve your problem yourself. This helps the community to give you a more useful and relevant answer.
You can copy, paste and run the full code below.
You can use onFocusChange of Focus (wrapped in a FocusScope) to check focus and change the BoxDecoration to what you need.
code snippet
FocusScope(
child: Focus(
onFocusChange: (focus) {
if (focus) {
setState(() {
boxSetting = boxHasFocus;
});
} else {
setState(() {
boxSetting = defaultBoxSetting;
});
}
},
child: CupertinoTextField(
decoration: boxSetting, controller: _textController)))
working demo
full code
import 'package:flutter/cupertino.dart';
import 'package:flutter/material.dart';
void main() {
runApp(MyApp());
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
primarySwatch: Colors.blue,
visualDensity: VisualDensity.adaptivePlatformDensity,
),
home: MyHomePage(title: 'Flutter Demo Home Page'),
);
}
}
class MyHomePage extends StatefulWidget {
MyHomePage({Key key, this.title}) : super(key: key);
final String title;
@override
_MyHomePageState createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
TextEditingController _textController;
BoxDecoration boxSetting;
BoxDecoration defaultBoxSetting = CupertinoTextField().decoration;
BoxDecoration boxHasFocus = BoxDecoration(
border: Border.all(color: Color(0xFFFFFF00), width: 0.5),
color: Color(0xFF9E9E9E),
shape: BoxShape.rectangle,
borderRadius: BorderRadius.circular((20.0)),
);
@override
void initState() {
_textController = TextEditingController(text: '');
boxSetting = defaultBoxSetting;
super.initState();
}
@override
Widget build(BuildContext context) {
return Scaffold(
backgroundColor: Colors.black,
appBar: AppBar(
title: Text(widget.title),
),
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
FocusScope(
child: Focus(
onFocusChange: (focus) {
if (focus) {
setState(() {
boxSetting = boxHasFocus;
});
} else {
setState(() {
boxSetting = defaultBoxSetting;
});
}
},
child: CupertinoTextField(
decoration: boxSetting, controller: _textController))),
],
),
),
);
}
}
| common-pile/stackexchange_filtered |
EXC_BAD_ACCESS error.. textfield resignFirstResponder
I am a newbie to this Xcode stuff.
Details of my error:
1) I started with utility application in Xcode
2) In my MainView.xib I placed a text field and linked it to code the easy way (just dragging into my MainViewController.h file, which declared it and even set both property and synthesize for it).
3) Now when I run the program, I see my text field. I press in it, the keyboard comes up, and after I type just 1 or 2 letters it hits an error in UIApplicationMain saying EXC_BAD_ACCESS. I searched for quite some time, and here on Stack Overflow I found a solution: change correction on the text field to No, which solves it. This is not my main problem, but I want to confirm whether this workaround is okay to keep using, since sometime in the future I will want correction enabled.
4) My main problem is that I want the keyboard to be dismissed when the user presses the return key.
I used this method, and as you can see I tried several variations of it, to check different approaches.
-(BOOL) textFieldShouldReturn:(UITextField *)textField {
// if([textField becomeFirstResponder]) {
NSLog(@"IN FUNCTION");
[textField resignFirstResponder];
return YES;
}
The program was run with both lines commented and uncommented, but still, every time it hits the end of the function it takes me to the same error in UIApplicationMain: EXC_BAD_ACCESS.
Also, I did enable zombies, but I am not sure if I did it correctly, so let me know if any of you guys want info on that.
I checked the code on different version of Xcode.. one was normal and other one was GM.
The console shows the log of IN FUNCTION.
Thanks
You would want to return NO from that method because, the way you have it now, you are removing focus (and the cursor) from the textField, but then indicating that you DO want to process the return button. But since you have resigned first responder, there is no one to process the return button being pressed.
Try this:
-(BOOL) textFieldShouldReturn:(UITextField *)textField {
[textField resignFirstResponder];
return NO;
}
From UITextFieldDelegate Protocol docs:
textFieldShouldReturn:
Asks the delegate if the text field should process the pressing of the return button.
If the above doesn't work, try this:
Set NSZombieEnabled, MallocStackLogging, and Guard Malloc in the debugger. Then, when your app crashes, type this in the gdb console:
(gdb) info malloc-history 0x543216
Replace 0x543216 with the address of the object that caused the crash, and you will get a much more useful stack trace and it should help you pinpoint the exact line in your code that is causing the problem.
See this article (how-to-debug-exc_bad_access-on-iphone) for more detailed instructions.
If that also doesn't work, try this:
You might also want to try turning off "Auto-Correction" in the simulator keyboard settings. There is a bug in one of the versions of the simulator that can cause a crash when you try to enter text in a text field.
Thanks, that did it! I returned NO.
| common-pile/stackexchange_filtered |
What if the Buddha wasn't a prince?
If the Buddha hadn't been a prince, would his outlook on life have been different? If he had been a farmer looking after his land and feeding his family, for example, would he have simply left his responsibilities behind to attain Nirvana?
If he was the father or husband of a family that needed him, would he have left them?
The purpose of the question is to understand that the Buddha left a family that had all the comforts in life. He didn't need to be concerned with their welfare since they would be provided for. How does one relate this to the lives of those who do not have these safety nets?
Regardless of the family situation, there's no issue in leaving if the intention is to genuinely strive for enlightenment.
To the first few questions, the answer lies with the parami/paramitas. These are the perfections which someone who has made a vow to become a Buddha must develop to perfection over countless aeons. So a Buddha is someone who has trained for aeons until he is ready to re-discover the dhamma and teach it. His last birth will therefore always be of a high social class.
"How does one relate this lives of those who do not have these safety nets?"
Develop the ten paramis, and situations in life will change such that one may have the opportunity.
In the Pāli canon's Buddhavaṃsa, the Ten Perfections (dasa pāramiyo) are (original terms in Pāli):
Dāna pāramī : generosity, giving of oneself
Sīla pāramī : virtue, morality, proper conduct
Nekkhamma pāramī : renunciation
Paññā pāramī : transcendental wisdom, insight
Viriya (also spelled vīriya) pāramī : energy, diligence, vigour, effort
Khanti pāramī : patience, tolerance, forbearance, acceptance, endurance
Sacca pāramī : truthfulness, honesty
Adhiṭṭhāna (adhitthana) pāramī : determination, resolution
Mettā pāramī : loving-kindness
Upekkhā (also spelled upekhā) pāramī : equanimity, serenity
A Buddha will always be of noble birth and high social status, but not all Buddhas in the past were princes. That said, the rest of the hypothetical speculation will not hold.
| common-pile/stackexchange_filtered |
Find all dependencies in a Java class
I'm trying to get all dependencies in a Java class, including classes used for generics parameterization and local variable types. So far the best framework I've found is Apache BCEL. Using it I can easily find all fields, method arguments and local variables from the bytecode: basically everything except generics and local variable types. For example, from the line List<Point> points = new ArrayList<Point>(); I can only find one dependency, ArrayList, using BCEL's JavaClass.getConstantPool() method. It can't detect either the List interface or the Point class. I also tried Tattletale and CDA, unfortunately without success (the same results). Examining imports is not enough: I also need dependencies from the same package, and I can't accept wildcards. I would be grateful for any help.
Are you trying to do this at runtime from bytecode? Compile-time?
I want to find all static dependencies (not runtime). Reading bytecode is probably the best idea for that. But I can try to parse source code as well (it just doesn't seem to be the best idea and I have not found any solution for that).
Not all the source dependencies need end up in the class file (generic types within a method, inlined constants). Just look at class files in a decent text editor, or on UNIX with strings.
I've just checked, and the bytecode does contain everything I need: the type of every local variable and every type used to parameterize generic classes/interfaces. I just don't know how to read that data from code using BCEL. Thanks for the comments.
I think the best approach (if in runtime) is Reflection
@user2511414 Reflection is not enough. Using reflection allows me to inspect fields, method signatures, interfaces and so on. It doesn't show me local variables inside methods. I needed every dependency, including those used only inside method bodies. Anyway, my problem is already solved. If you are interested, look at my answer. Cheers.
I've finally found a solution. The ASM Bytecode Framework is the right tool to use. Using the official tutorial and the right example, it's quite easy to get all the needed dependencies. In the examples there is already a visitor class, DependencyVisitor, which does what I want. To get the right formatting I had to change only one method in the DependencyVisitor example code, so that it adds full class names instead of packages only:
private String getGroupKey(String name)
{
    // Comment out this block to get full class names instead of packages only
    /*
    int n = name.lastIndexOf('/');
    if (n > -1)
    {
        name = name.substring(0, n);
    }
    */
    // Replace the resource separator char with the package separator char
    packages.add(name.replace("/", "."));
    //packages.add(name);
    return name;
}
Looking at the DependencyVisitor code you can easily understand what it does and modify it to your needs. Running it on my example class gives me nice, useful output:
[java.util.ArrayList, java.lang.Object, java.util.List, java.awt.Point, goobar.test.asmhello.TestClass, java.lang.String, java.lang.Integer, java.awt.Graphics, goobar.test.asmhello.TestClass2]. It contains every class and interface I've used and every type used for generics parameterization.
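To make concrete why the question's raw constant-pool approach (BCEL's getConstantPool()) sees only some dependencies, here is a rough stdlib-only sketch, no BCEL or ASM required, that parses a class file's constant pool and lists its CONSTANT_Class entries. The class name ConstantPoolClasses is made up for illustration. Note that generic type arguments and local variable types are stored in Signature and LocalVariableTypeTable attributes rather than as CONSTANT_Class entries, which is exactly why a full visitor like ASM's DependencyVisitor is needed to collect them all.

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Set;
import java.util.TreeSet;

// Stdlib-only sketch: scan a class file's constant pool and collect the
// names referenced by CONSTANT_Class entries (tag 7).
public class ConstantPoolClasses {

    public static Set<String> classNames(InputStream in) throws IOException {
        DataInputStream d = new DataInputStream(in);
        if (d.readInt() != 0xCAFEBABE) {
            throw new IOException("not a class file");
        }
        d.readUnsignedShort(); // minor version
        d.readUnsignedShort(); // major version
        int count = d.readUnsignedShort(); // constant_pool_count
        String[] utf8 = new String[count]; // Utf8 entries by index
        int[] classRefs = new int[count];  // name_index of each Class entry
        int nClasses = 0;
        for (int i = 1; i < count; i++) {
            int tag = d.readUnsignedByte();
            switch (tag) {
                case 1:  utf8[i] = d.readUTF(); break;                     // Utf8 (u2 len + modified UTF-8)
                case 7:  classRefs[nClasses++] = d.readUnsignedShort(); break; // Class
                case 8: case 16: case 19: case 20:
                    d.readUnsignedShort(); break;                          // String, MethodType, Module, Package
                case 15: d.readUnsignedByte(); d.readUnsignedShort(); break; // MethodHandle
                case 3: case 4: case 9: case 10: case 11: case 12: case 17: case 18:
                    d.readInt(); break;                                    // 4-byte entries (refs, NameAndType, int, float, dynamic)
                case 5: case 6: d.readLong(); i++; break;                  // long/double take two pool slots
                default: throw new IOException("unknown constant pool tag " + tag);
            }
        }
        Set<String> names = new TreeSet<>();
        for (int j = 0; j < nClasses; j++) {
            names.add(utf8[classRefs[j]].replace('/', '.'));
        }
        return names;
    }

    public static void main(String[] args) throws IOException {
        // Resources ending in ".class" inside java.base can always be read,
        // even under the module system, so String's own bytecode works as input.
        try (InputStream in = String.class.getResourceAsStream("String.class")) {
            classNames(in).forEach(System.out::println);
        }
    }
}
```

Running main lists the classes that java.lang.String's own class file references through its constant pool; what it will never show are the types that only appear in generic signatures or local variable tables, matching the limitation described in the question.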
Perfect, I haven't had this problem before; I usually identify dependencies by annotations, but this is cool. Thanks!
The example code has moved here.
The example code is now on GitHub.
| common-pile/stackexchange_filtered |
basic questions about SharePoint
I am a newbie to SharePoint. Can anyone explain when to use what?
Team Site
Blank Site
Group WorkSite
Document Workspace
thanks
How we roll at my company:
A blank site is a good place to start out clean and empty. Use when no other site provides any clear benefit.
A team site is a blank site that ships with a calendar, task list, document library, and a few other goodies. It is good when you would plan on setting up those lists anyway.
We don't really use group work sites.
A document workspace is useful when collaborating on a single document, such as a contract or a proposal, if the intent is to have a heavy process for the document, including meetings, calendar events, tasks, and so on. They tend to be sub-sites in our environment.
| common-pile/stackexchange_filtered |
Rewrite HTTP to HTTPS Django
I currently have a Django web server where I need to have users access through HTTPS. I went through this link Django HTTP to HTTPS and understand I need to implement a rewrite rule but am unsure about which file to modify.
Here is the code snippet in nginx:
server {
listen 80;
rewrite ^(.*) https://$host$1 permanent;
}
Although, I am confused as to where this would be placed. Any help would be greatly appreciated!
The reason for all of this is that I am implementing an iOS e-commerce application that uses Stripe Connect, and I need to supply an HTTPS redirect URI, not HTTP.
EDIT
So I am trying to go through the tutorial http://uwsgi-docs.readthedocs.org/en/latest/tutorials/Django_and_nginx.html but am getting stuck when adding nginx.
Installing is no problem; it's mainly the part where I'm linking uWSGI and nginx together.
The step where I implement mysite_nginx.conf is confusing to me. I already added uwsgi_params to my Django project root directory, so that step is fine. But I feel my problem is linking nginx to the mysite_nginx.conf file. I linked the file into the /usr/local/etc/nginx/sites-enabled/ directory, but once I run the server with uwsgi --socket :8001 --wsgi-file test.py and point the browser at port 8000, test.py isn't being rendered; the typical nginx welcome page is rendered instead. I have set up nginx to serve on port 8000 as well.
So I feel I've followed the tutorial very closely but am getting stuck on the part connecting nginx, uWSGI, and test.py.
Any help would be awesome! Also, I am using macOS, not Linux.
Do you want to do this in nginx or in Django? In Django, use a middleware; in nginx, use that snippet. But Django should sit behind nginx or some other server anyway, which is probably the better place for the redirect.
That rewrite belongs in nginx, but you'll also need to set up other stuff too. Here's a good guide on how to do this: http://uwsgi-docs.readthedocs.org/en/latest/tutorials/Django_and_nginx.html
So I decided to go with the good old Apache solution and was able to get it working. Thanks, guys, for your input! I greatly appreciate it.
| common-pile/stackexchange_filtered |
Use touchable/virtual keyboard with react-select
I'm deploying a React application that will run on a panel PC, i.e. without a keyboard.
In other parts of the code I use react-touch-screen-keyboard (https://github.com/xtrinch/react-touch-screen-keyboard) to show a virtual keyboard when an input field is pressed.
Now I need to use it within react-select (https://github.com/JedWatson/react-select), but I don't know if that is possible or how to do it.
Has anyone already done something similar? Or do you have an alternative to react-select that works with a touch-screen keyboard?
Thanks
Paolo
I have the same problem. Did you ever find a solution? I've tried triggering keyboard events, but no luck so far.
Not yet. I will try different alternatives in the coming days and will update you if I have any news.
I got it working with https://github.com/srm985/mok-project. I made a React component wrapper and changed the source code so I could use it as a controlled component, with a callback for keypresses.
| common-pile/stackexchange_filtered |
Phalcon PHP - flash messages not showing
I'm having trouble showing flash messages with Phalcon PHP. Here is how I register the service:
use Phalcon\Flash\Direct as Flash;
$di->set('flash', function () {
return new Flash(array(
'error' => 'alert alert-danger',
'success' => 'alert alert-success',
'notice' => 'alert alert-info',
'warning' => 'alert alert-warning'
));
});
In my controller I add the flash message like this
$this->flash->success('The carrier was successfully activated');
In my view I try to show like this (volt):
{{ flash.output() }}
My layout has the {{ content() }} tag, and I have tried to apply what is discussed in this post, but it doesn't work anyway.
Can you see what I'm missing here? Thanks for any help!
You are using the wrong flash service. Instead of
use Phalcon\Flash\Direct as Flash;
Use
use Phalcon\Flash\Session as Flash;
The documentation says:
Flash\Direct will directly output the messages passed to the flash service.
Flash\Session will temporarily store the messages in the session, so they can be printed in the next request.
| common-pile/stackexchange_filtered |