Wood chair: Cutting out back post vs (domino) tenons
I would like to build several outdoor teak chairs, similar to this one https://www.williams-sonoma.com/products/larnaca-outdoor-dining-side-chair/?pkey=s~larnaca~70&sbkey=williams-sonoma-home
What I am wondering about though is, if I have to cut out the rear leg/back post for the angle or if I could use a (domino) tenon. A tenon would not only be easier but also save a lot of wood. All chair-building plans I found so far seem to cut out the rear leg rather than use a tenon joint between the back post and rear leg. Would a tenon joint not be strong enough?
Do you have drawings (electronic or simple sketches on paper) you could share to indicate what you mean by "cut out the rear leg/back post" vs using a "(domino) tenon"? A mortise & tenon joint would be very strong for this situation, but I'm not sure I'd trust a loose tenon (domino) for a structural joint that's going to see a lot of shear forces as the chair is dragged around.
@FreeMan I'm sure he means milling the back leg/post from a single piece rather than cutting the leg and post separately and then joining them. The latter would waste less because the two parts are straight, but the strength of the joint could be a problem.
Thanks, @Caleb, I certainly failed to gain that understanding from the original question.
Hi, welcome to StackExchange. If you make a chair with this shape of leg from two pieces I would not trust the joint to any of the typical or common Domino lengths.
Hi, can you cut a very long bridle joint?
For a two-piece leg I was all set to recommend alternatives to typical/usual Dominos1 for reinforcing the joint so as to make them a more viable option, until I realised that the area immediately adjacent to the joint needs to accept the horizontal tenon/dowels/screws that would join the chair rails to those legs.
Any such fixing either A) directly interferes with a Domino (or other floating tenon) regardless of length, ditto dowels, that lies vertically in the wood there or, B) undermines the wood to both sides compromising strength. My gut feeling is there's simply not enough thickness here — even assuming it's strong, straight-grained wood you have bought — to get a guaranteed result2. Sure, joints like this are seen in modern furniture but that's no recommendation; numerous times I've seen furniture repairs where joints featuring just such intersecting or crossing/adjacent joinery have failed, apparently spectacularly judging by the level of splintering around the joint.
With the potential for waste and the fact that you want to use teak I can fully understand the desire not to have to make the rear legs by cutting out of the solid (and that's even without the inflated prices we're seeing today). However, I think the above makes making these legs from two parts really untenable, at least if you want to ensure a long and safe working life.
If you can't swallow the cost of teak in the required dimensions to make the legs from the solid I have two suggestions.
Laminate
Obviously there's still a certain amount of waste to live with here, but you start with thinner stock which won't have the same $$$ attached to it.
Choose a different hardwood
I think perhaps the best option here is to choose to make the chairs from another hardwood (and not just because it will be more modestly priced).
There are actually numerous domestic hardwoods you could pick from that have superior exterior durability. That could probably do with being stressed: superior durability to teak3.
Even white oak potentially offers a very viable alternative, and while it might be more expensive currently than one would prefer it won't be in the *gulp* territory of teak.
If oak doesn't take your fancy see table from the FPL in this previous Answer.
1 Which are too short to rely on.
2 Which isn't something you can guarantee you'll be able to get going into the project. Very often with tropicals, unless you're paying a super premium or can carefully select in person from a large standing stock at the supplier (best of luck with that!), you're getting stuff that is not ideal.... and might even have been discounted by a previous generation of woodworkers.
3 Because you won't be buying old-growth teak.
What I am wondering about though is, if I have to cut out the rear leg/back post for the angle or if I could use a (domino) tenon.
Those back joints take a lot of stress! Not only do they carry a lot of the weight of the person sitting in the chair, they also have to resist racking in two directions and the force of someone leaning on the chair's back. It's even worse if the person tilts the chair back on its back legs.
There are chair designs where the back leg and the posts holding the chair back are separate pieces, but in those cases the back usually has other support, like vertical spindles or the arms of the chair. Even with those extra supports, the joint between the posts and (typically) the seat often fails.
Chairmaking and Design is a great book on chair design by Jeff Miller, and one of the lessons I took from it is that there are lots of different possible designs for chairs, and also lots of problems that only become apparent when you build a prototype. In that spirit, if you think you can do something differently or better but aren't sure that it'll work, then by all means build one and see how it goes. In this case, you don't really have to build the whole chair; you could just cut out one solid leg and also build a version joined with tenons or floating tenons and test them. Keep in mind that there are already two mortise and tenon joints where the seat rails connect to the back leg, so fitting another one in the same area without losing too much strength could prove difficult.
If you're not interested in doing some preliminary testing, I'd stick with using a single solid piece. You can of course reduce waste by nesting several legs together. Paying extra for a wide board could make sense because you'll reduce waste.
This security update for SfBS 2015 is build 6.0.9319.842, up from July’s 6.0.9319.634. Despite the strange jump of over TWO HUNDRED build numbers it updated only two components on my Standard Edition Front-End and delivers only one fix.
- Kb 5022533 Response Group Service Crashes in Skype for Business Server 2015
Nothing noted. No cmdlets have been added to the SfB module in this update.
There’s a new pre-req’s section – see below. Otherwise, nothing noted.
The KB calls out new pre-req’s:
To apply this update, you must have the following installed:
- Microsoft .NET Framework 4.7.2 or a later version (4.8 is supported), either Offline Installer or Web Installer
- Windows PowerShell 5.1
- The current version of the PowerShellGet module (requires the ability to download directly from PSGallery)
Here’s how to check these:
.NET Framework 4.7.2 or later:
This quick registry command will query your .NET framework version:
(Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full").Release -ge 461808
This will report True if you’re running the 4.7.2 GA release or later.
$PSVersionTable is your friend here. Here’s mine:
PS C:\Users\greig> $PSVersionTable.PSVersion

Major  Minor  Build  Revision
-----  -----  -----  --------
4      0      -1     -1

PS C:\Users\greig>
Oh dear. My old Server 2012R2 machine is still only running PowerShell v4. We’ll need to update that to 5.1. Fortunately that’s not too hard.
Download Windows Management Framework 5.1 and choose the version appropriate to your OS. I’ve selected the version for Windows 2012R2 in this screen-grab:
You’ll be prompted for confirmation, and then to accept the software licence before it will install.
Ahh, that’s better:
PS C:\Users\greig> $PSVersionTable.PSVersion

Major  Minor  Build  Revision
-----  -----  -----  --------
5      1      14409  1005

PS C:\Users\greig>
Installing the latest version of PowerShellGet
Having just upgraded to PowerShell 5.1, I found it still doesn't include NuGet, another pre-pre-req. Here's the process:
- Enable TLS 1.2 for the current session (so the gallery downloads work):
[Net.ServicePointManager]::SecurityProtocol = [Net.ServicePointManager]::SecurityProtocol -bor [Net.SecurityProtocolType]::Tls12
- Install NuGet:
Install-PackageProvider -Name NuGet -Force
- Install PowerShellGet:
Install-Module PowerShellGet -AllowClobber -Force
- Close PowerShell and launch a fresh window.
- Register the PowerShell Gallery as a trusted repository:
Set-PSRepository -Name PSGallery -InstallationPolicy Trusted
Here’s the “before” view of it going on to my Lab’s Standard Edition Front-End:
The installer didn’t prompt me to reboot, but I always like to give it one for good measure.
- .Net Framework: https://learn.microsoft.com/en-us/dotnet/framework/migration-guide/how-to-determine-which-versions-are-installed
- Installing the latest version of PowerShellGet
17th December 2022: This is the initial release.
Differences Between Buck-Converter and Linear Current Booster
Thinking through a design that interfaces a solar panel to a 12V diaphragm pump, and considering the interfacing options. I've come across the term "Linear Current Booster", and wanted to explore this idea further.
While the concept of "MPPT" controllers is fairly well documented for solar panel to battery systems (and ICs are marketed for this purpose, like the fairly fancy Linear LT8490), I'm not seeing as much documentation around the Linear Current Booster.
I understand the concept of a Linear Current Booster as a DC-DC converter that:
Reduces a solar panel voltage to a lower output voltage, and increases output current,
Ensures that the minimum output voltage stays above some threshold (how does it determine this?) so that the pump does not run at 0V, which would effectively mean running at 0 RPM and would burn out the windings in time,
Ensures that the maximum output voltage is limited, to ensure that the pump does not overspeed.
The advantage (as I understand it) is to be able to operate in lower light conditions; when the panel could not provide enough current to the pump to give it enough torque to start moving if it was directly connected.
Some examples of commercially available Linear Current Boosters:
EAST LCB
SolarConduit LCB
Clean Energy Brands LCB
So, does anyone here have any insights on the difference between a DC-DC buck converter and an LCB? I think that all of the functions of an LCB can be accomplished with a buck converter that implements an output undervoltage/overvoltage lockout. Is my understanding correct?
EDIT -
It looks like the author of this buck converter design indicates that "Another application for this circuit is as a "linear current booster" (LCB). Common LCB uses include running motors and other devices directly from a solar panel with no battery. In the LCB mode of operation, it is possible to use the circuit to change a low current solar panel input to a higher current (but lower voltage) output."
A buck converter, MPPT, and linear current booster are basically the same. The only difference is where you get the feedback from. For a power-point solar controller, the input voltage is monitored. Below the power-point voltage, the converter is faked out into thinking the output voltage is too high and shuts down. So the converter has to stay below the max output voltage and above the minimum panel voltage. Typical MPPTs hunt to find the best power point. Since the power point only changes with temperature, you can track the panel's temperature and be pretty close. If you just set a fixed power-point voltage, you do just about as well.
Linear current boosters act in the same way. They generally have a fixed power point and often don't care about the output voltage. The motor is often used as the inductor in the buck conversion, with a diode in parallel. I use the same process to heat water with PV efficiently.
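The power-point "hunt" described above is easy to sketch. Below is a toy perturb-and-observe loop run against an assumed simplified single-diode panel curve; all the panel numbers are invented for illustration, and this is a behavioural sketch, not any controller's firmware:

```python
import math

# Toy panel curve; ISC, VOC and the shape constant VT are made-up
# illustration values, not a real panel's datasheet numbers.
ISC, VOC, VT = 5.0, 21.0, 1.5

def panel_current(v):
    """Current available from the panel at operating voltage v."""
    return max(0.0, ISC * (1.0 - math.exp((v - VOC) / VT)))

def perturb_and_observe(v0=10.0, step=0.05, iters=500):
    """Classic MPPT hunt: nudge the operating voltage and keep going in
    whichever direction last increased panel power."""
    v, direction = v0, +1
    last_p = v0 * panel_current(v0)
    for _ in range(iters):
        v += direction * step
        p = v * panel_current(v)
        if p < last_p:
            direction = -direction  # power fell, so reverse the perturbation
        last_p = p
    return v, last_p
```

A fixed power-point controller of the kind described simply replaces the hunt with a constant voltage setpoint chosen near the expected maximum; on this toy curve the loop ends up oscillating in a small window around the maximum-power voltage.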
using the motor as the inductor is an interesting concept; but in this case, what would you be using as the feedback? The current through the motor-as-buck-converter inductor, or the voltage at the 'output' of the motor at the capacitor?
I think an LCB is sort of an adaptive buck converter, possibly a converter designed for constant current output. I have not been able to find a good schematic for comparison, however.
I have not either, unfortunately. I wonder, though -- given that typically an MPPT has the luxury of a dump load (batteries) to push all of that power into, if the linear current booster acts in such a way that it does not operate the solar panel at the maximum power point.
What I mean is, with a pump, I believe that you may have additional constraints (e.g. maximum current into the pump, maximum voltage across the pump) that may prevent MPPT operation.
There is a relationship between parcellation scale and performance
One of the key goals of this work was describing the relationship between spatial scale (i.e., number of parcels) and predictive performance. That some relationship exists at all is abundantly clear from a range of results presented. The interesting piece is how this relationship appears to follow a power-law scaling, though the details of this scaling can vary with choice of pipeline and parcellation strategy. We identified scaling across roughly 3 orders of magnitude (~10-4000) with coefficients between ¼ and ⅓ (exact coefficient dependent on specific setup). We further tested how stable this relationship was when compared in an intra-pipeline fashion, finding that the general pattern was preserved. That said, the SVM-based results varied significantly from the other two pipelines, with a steeper estimated scaling coefficient and a larger estimated region of scaling, which may be a result of the front-end feature selection portion of the SVM pipeline.
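A scaling relationship of this form is typically estimated by ordinary least squares in log-log space. The sketch below fits score = a * size^b to hypothetical (size, score) pairs; the numbers are invented for illustration and are not values from this work:

```python
import math

def fit_power_law(sizes, scores):
    """Least-squares fit of score = a * size**b, done as a linear
    regression of log(score) on log(size)."""
    lx = [math.log(x) for x in sizes]
    ly = [math.log(y) for y in scores]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    # slope of the log-log regression line is the scaling exponent b
    b = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
    a = math.exp(my - b * mx)  # intercept back-transformed to a
    return a, b

# Hypothetical scores roughly following size**(1/3), illustration only
sizes = [10, 30, 100, 300, 1000, 4000]
scores = [0.11, 0.16, 0.23, 0.33, 0.50, 0.79]
```

On a log-log plot such a fit appears as a straight line, which is how a scaling region (and where it ends) can be identified.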
We also explored the influence of choice of ML pipeline. For parcellations larger than size ~100 the SVM-based pipeline outperformed all other pipelines, and for those smaller than ~100 the Elastic-Net pipeline did. The LGBM tree-based pipeline was not competitive at any size, an observation in line with recent work based on sMRI ML on UK Biobank participants (Schulz 2020). We also investigated one potential explanation for why the SVM outperforms the other pipelines, namely: is the front-end univariate feature selection, a piece only added to the SVM-based pipeline, responsible for the improved scaling? In short, we found that this was not the case.
While perhaps interesting conceptually, treating choice of parcellation as a nested hyper-parameter in practice yielded lackluster results, especially when compared with the ensemble-based methods. We observed that this approach fell closely in line with expected random parcellation performance at the same size. In contrast, we observed a significant performance improvement from the multiple-parcellation ensemble-based strategies when compared to the single-parcellation results. Notably, the ensembles over random parcellations continue to exhibit scaling beyond the ~4000 range where scaling was estimated to have ended in analyses with single parcellations. These results establish the merit of constructing ensembles across multiple parcellations to achieve maximal predictive performance. Specifically, we found no significant differences in predictive performance between the voting and stacking ensemble approaches tested. We did observe significant differences between ensembles of random parcellations of the same size versus ensembles of parcellations of multiple sizes, in this case finding that the fixed-size parcellations on average performed better. Therefore, to balance predictive performance against computational demands, we recommend that, of the ensemble methods tested, fixed-size parcellations with a voting ensemble be used in future work.
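Of the ensemble methods compared, the recommended voting approach is conceptually the simplest: average the per-subject predictions of models trained on different parcellations. A minimal sketch (the data below are invented for illustration, not results from this work):

```python
def voting_ensemble(predictions_per_model):
    """Average per-subject predictions across models, where each model
    was fit on a different (e.g. random, fixed-size) parcellation."""
    n_models = len(predictions_per_model)
    return [sum(preds) / n_models for preds in zip(*predictions_per_model)]

# e.g. three models' predictions for four subjects (made-up numbers)
preds = [
    [0.2, 0.8, 0.5, 0.1],
    [0.3, 0.7, 0.6, 0.2],
    [0.1, 0.9, 0.4, 0.3],
]
```

Stacking would instead feed the per-model predictions into a second-level learner; the averaging above needs no extra fitting, which is part of why it is the cheaper recommendation.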
Notably, it is not necessarily true that parcellations with a higher number of parcels will always perform better. For example, comparing randomly generated parcellations against existing literature-based parcellations revealed consistently better performance for the existing parcellations. This could suggest that, on average, the existing parcellations map better onto meaningful neuro-anatomy relative to random parcellations of similar size. Different phenotypes of interest also vary in how much they follow the observed scaling relationship. We also found an interesting increase in the spread of mean ranks as the number of parcels grew, where not only did the mean rank increase but the interquartile range at each size increased as well. This behavior is likely a result of the distributed and complex brain-based nature of the phenotypes studied, where different targets may have different optimal resolutions. That said, the pattern in the average case remains clear and, we argue, is still meaningful despite recognizing that variation exists across possible phenotypes.
using DslTools.Tests.StockObjects;
using DslTools.Tests.StockObjects.Verifiers;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using System.Linq;
using static DslTools.Tests.StockObjects.Stock.Tokens;
namespace DslTools.Tests
{
[TestClass]
public class LexerBaseTests
{
[TestMethod]
public void ReturnsStringOfTokens()
{
var tokens = Stock.Lexers.GetOneTwoThree().Tokenize("ThreeOneThree");
Stock.Verify.CreateTokenStreamExpectation<Numbers>()
.Expect(Numbers.Three)
.Expect(Numbers.One)
.Expect(Numbers.Three)
.Verify(tokens);
}
[TestMethod]
public void LexError()
{
var tokens = Stock.Lexers.GetOneTwoThree().Tokenize("I don't know what this is");
Assert.IsFalse(tokens.Any(), "No tokens should be returned from this string");
}
[TestMethod]
public void TokenStreamCanBeRewritten()
{
const string code = "TwoOneTwoThreeTwo";
var lexer = Stock.Lexers.GetOneTwoThree();
Assert.AreEqual(5, lexer.Tokenize(code).Count(), "Sanity Check: Test results depend on there being 5 tokens initially");
lexer.RemoveTwos = true;
Stock.Verify.CreateTokenStreamExpectation<Numbers>(Options.IgnorePositions, "Something wrong in Rewrite: ")
.Expect(Numbers.One)
.Expect(Numbers.Three)
.Verify(lexer.Tokenize(code));
}
}
}
o a base class, which provides methods to convert marked-up text into runnable code,
o an extension which enhances security by running the code in a Safe compartment,
o an extension which feeds output through an HTML-escaping filter, and
o an extension which records internal profiling data for benchmarking purposes.
A naive implementation of this might use a subclass for each behaviour, and look like the following:
The well-known problem with this implementation appears when you want to combine several features:
This is the dreaded diamond inheritance problem: if Base provides a compile() method, which Filter and Safe each override to perform additional actions before or after calling SUPER::compile(), how can we ensure they are all called in the correct sequence?
The standard software engineering solution is to replace the use of inheritance with decomposition into several different classes of objects, which then cooperate through decoration and delegation; for example, using separate classes for a resolver, a lexer, a parser, a compiler, and an output channel. (See HTML::Mason for an example of this design.)
Indeed, composition is an underutilized design technique, and there are many times when inheritance is not the best tool to use. But of course, in Perl there's more than one way to solve this problem, one of which is facilitated by this distribution.
We can rearrange our class hierarchy to avoid diamond inheritance by using a base and a collection of mixin classes, which don't directly inherit from the base class:
However, in this arrangement our mixin classes can't call SUPER methods at all! Instead, another redispatch mechanism is needed, one that is able to back-track through the inheritance tree and explore other branches. (See NEXT for such an implementation.)
The order in which mixins are stacked is significant, so the caller does need to have some understanding of how their behaviors interact. For example, you'd typically want to ensure that the Benchmarking mixin was the first in the chain, so that it could time everything later in the sequence.
The Class::MixinFactory distribution provides several elements to facilitate this kind of dynamic mixin architecture. The top-level package is just a facade that loads the other necessary classes and provides a few import options for compile-time convenience. (See Class::MixinFactory.)
To generate an object with some combination of mixins, you first use a mixin factory to generate a mixed class. If a class with that combination of classes has already been created, it is reused. You can add a factory method to your base class, create a separate factory object, or inherit to produce a factory class. (See Class::MixinFactory::Factory.)
To allow mixin classes to redispatch to subsequent classes, all mixed classes also inherit from a class which provides a NEXT() method. (If you would prefer, your mixin class can alternately use the AUTOLOAD solution provided by the NEXT::ACTUAL module from CPAN, or any other equivalent re-dispatch mechanism.) (See Class::MixinFactory::NEXT.)
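As an aside for readers outside Perl, the factory-plus-redispatch idea can be sketched in Python, where run-time class creation with type() plays the factory's role and the MRO-driven super() stands in for the NEXT() method. This is only an illustrative analogue, not part of this distribution:

```python
_MIXED = {}  # cache of already-built combinations, like the factory's reuse

def mixin_factory(base, *mixins):
    """Build (or reuse) a class combining `mixins` ahead of `base`."""
    key = (base, *mixins)
    if key not in _MIXED:
        name = "".join(c.__name__ for c in mixins) + base.__name__
        # Python's MRO provides the NEXT()-style redispatch automatically.
        _MIXED[key] = type(name, (*mixins, base), {})
    return _MIXED[key]

class Base:
    def compile(self):
        return ["base"]

class Filter:           # post-processes whatever later stages produce
    def compile(self):
        return super().compile() + ["filter"]

class Safe:             # wraps the later stages, like a Safe compartment
    def compile(self):
        return ["safe"] + super().compile()
```

Calling mixin_factory(Base, Filter, Safe) yields one composed class per combination, and each mixin's super().compile() chains through to the next class in the stack and finally to Base.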
There are a number of other modules on CPAN that also support mixins, method importing, or run-time multiple inheritance, while others don't use mixins but are addressing a similar area of concern.
o The mixin, Class::Mixin, and Spiffy modules support mixin classes but don't have a configurable factory object or support run-time mixin selection.
o The Class::Mix and Class::Mutator modules provide run-time class generation with multiple inheritance, but don't provide a configurable factory object or a redispatch technique.
o The Class::Factory module has a factory interface, but doesn't support multiple inheritance.
o The NEXT module provides a backtracking equivalent to SUPER similar to the NEXT method included here, but uses AUTOLOAD rather than a universal method.
o The Class::Delegate and other modules support decoration to address this problem via decomposition.
o The Class::Role, Class::Roles and Class::Trait modules support composing shared behaviors into your class.
This is version 0.92.
This module is new and relatively untested.
Please report any problems you encounter to the author at the below address.
This module should work with any version of Perl 5, without platform dependencies or additional modules beyond the core distribution.
perl -MCPAN -e install Class::MixinFactory
After downloading the distribution, follow the normal procedure to unpack and install it, using the commands shown below or their local equivalents on your system:
tar xzf Class-MixinFactory-*.tar.gz
cd Class-MixinFactory-*
perl Makefile.PL
make test && sudo make install
If you have questions or feedback about this module, please feel free to contact the author at the below address. Although there is no formal support program, I do attempt to answer email promptly.
I would be particularly interested in any suggestions towards improving the documentation, correcting any Perl-version or platform dependencies, as well as general feedback and suggested additions.
Bug reports that contain a failing test case are greatly appreciated, and suggested patches will be promptly considered for inclusion in future releases.
If you've found this module useful or have feedback about your experience with it, consider sharing your opinion with other Perl users by posting your comment to CPAN's ratings system (http://cpanratings.perl.org/rate/?distribution=Class-MixinFactory).
For more general discussion, you may wish to post a message on PerlMonks (http://perlmonks.org/?node=Seekers%20of%20Perl%20Wisdom) or on the comp.lang.perl.misc newsgroup (http://groups.google.com/groups?group=comp.lang.perl.misc).
Developed by Matthew Simon Cavalletto at Evolution Softworks. You may contact the author directly at evo#cpan.org or simonm#cavalletto.org, replacing # with @.
Custom development and technical consulting are available at www.evolutionsoftworks.com. More free Perl software is available at www.evoscript.org.
My sincere thanks to the Perl Monks community for their feedback on earlier versions of this commentary.
Copyright 2004 Matthew Simon Cavalletto.
You may use, modify, and distribute this software under the same terms as Perl.
perl v5.20.3                 MIXINFACTORY::README(3)                 2004-11-19
Closeness with spouse/family among the Trimurti
I am asking this question purely on a pictorial stance.
Lord Brahma (And Mother Saraswati):
Picture of both of them closest that I got is this:
Now to Lord Vishnu:
Here is a relatively closer painting of the Lord and Mother Lakshmi (close in the sense that the above one shows Mother in a state of service, borne out of love, and is their standard depiction, while the below one shows Mother more explicitly as ardhangini).
Now Lord Shiva:
Although a good number of pictures depict Lord Shiva in a lone, meditative pose when I search "Lord Shiva", the number of paintings that show a close bond between Lord Shiva and Mother Parvathi is significantly higher in that search itself, as compared to the number of pictures showing Mother Lakshmi at Lord Vishnu's feet (let alone as ardhangini) when I search "Lord Vishnu", and to those showing Mother Saraswati and Lord Brahma together/close, which are almost nil when I search "Lord Brahma".
Also, among the Trimurti, only Lord Shiva has a complete family depicted along with Himself.
Is there any theological meaning that can be deduced from this/reason that explains this?
'Shiva' if you remove 'E' kaara from Shiva it becomes Shava.. thus it is 'Ekara' which transforms Shava to Shiva.. this is the relation of Shiva-Shakti (Ardhanarishwara rupa, Uma dehardha dharinam, Vaag artha Viva samprukta / word-meaning relation). Ganesha is gatekeeper of Shakti (in Kundalini sadhanas Ganapati is invoked) formed from body of Shakti and latter again given life by Shiva.. Kartikeya is Guha (the hidden), Shanmukha (with six faces) and DevaSenapati (commander of Daivic forces).. it is very difficult to get Shanmukha rahasya as he is Guha.. all these have very deep yogic meanings
@Tezz,I don’t understand what you’re trying to say.
I think these are all the expressions of the artist. Lord Ram, a Vishnu avatar, is known for the amount of love and respect he had for Goddess Sita, so much so that he took the ekpatni vrat. Moreover, while addressing Lord Vishnu, the name of Goddess Lakshmi always comes first - Lakshmi Narayan, Sita Ram etc. Lord Balaji has Goddess Lakshmi depicted in his heart.
Regarding the family, Sri Suktam clearly mentions the children of Lord Vishnu and Goddess Lakshmi.
@curiousseeker Never knew about the Sri Suktam part. My point was not to question the love of Mahavishnu for Mother Lakshmi (or for that matter even Brahmaji), but to know if there is any higher reason behind the "expressions of the artist". Something like "Wealth is at the command of Lord Vishnu" (denoted by the Mother at His feet), though I don't know if that is an implied meaning of the said painting. Likewise I was expecting some implied meaning for Lord Shiva and Brahmaji.
@Amethyst- Ok, I understand your question. I don't know that reason. But another good example that comes to mind is how Goddess Lakshmi sits on Lord Narasimha's lap. This is seen in every photo and murti.
@curiousseeker Yes, I've seen that a couple of times... Things are interesting. By the way, what is the name of Mother Lakshmi in that avatar, as wife of Narasimha?
@Amethyst- As per Sripada Srivallabha Charitamrutam, her name is Chenchu Lakshmi.
https://sripada-srivallabha.org/_media/en/sripada_srivallabha_charitamrutam.pdf
@curiousseeker Thanks for the info.
@Tezz “Ganesha is gatekeeper of Shakti (in Kundalini sadhanas Ganapati is invoked) formed from body of Shakti and latter again given life by Shiva.. Kartikeya is Guha (the hidden), Shanmukha (with six faces) and DevaSenapati (commander of Daivic forces).. it is very difficult to get Shanmukha rahasya as he is Guha.. all these have very deep yogic meanings” <—— This is wonderful, thx for typing this comment
[SciPy-user] feed-forward neural network for python
mwojc at p.lodz.pl
Mon Dec 11 16:54:36 CST 2006
On Wed, 06 Dec 2006 20:16:05 -0000, <scipy-user-request at scipy.org> wrote:
> I find that the rprop algorithm is the best for real-world-noisy data
> (besides GAs), and this could be an easy add. I have a python
> implementation that uses scipy to run rprop, there is definite room
> for improvement, I'm mainly using scipy.where() and wrote the function
> directly from the original rprop paper (i can send you this code if
> you'd like it). However, it may be faster to use fortran here as
> well. The original c version or rprop is here
Yes, adding rprop would be quite easy, and it can be done in fortran for speed.
I think it can appear in the next release. If you like, just send me your
implementation of rprop in python, I'll see if it can be useful...
Generally, in future, I planned to have all fortran parts also in python.
This is for all these poor people without numpy and scipy. How do you like this idea?
Another solution would be to try to include ffnet in the scipy distribution.
There is actually no support for ANNs in scipy. I saw scipy.sandbox.ann,
but these, in contrast to ffnet, don't even calculate gradients for training.
Besides, ffnet has pikaia genetic optimizer bindings (which could be added
to scipy.optimize) and many improvements, like automatic normalization of
data, which makes using ANNs much easier...
> I also think that it would be great to have two versions of this. One
> for general usage and one for altivec and sse 64bit ibm and intel
> chips, which at this stage of the project would be easy to modify
> gcc 4.0.3 does some auto optimization with the right flags and I've
> found you can increase the speed by factors of 10 or more in many
> cases. What type of platforms do you have access to? The CHUD tools
> on OS X are very effective and might be useful
> (http://developer.apple.com/tools/performance/) to find optimizations
> relevant to all platforms.
Generally I was never interested in optimizing my code for
platforms other than x86. I have access to the sourceforge compile farm, which
offers several architectures (but I don't suspect there are altivec sorts of
machines there). I, personally, have no time to investigate special coding for altivec/sse.
If you are interested in this and you think it could be really useful, I may consider
creating a branch of ffnet for this purpose.
And just to remind, ffnet is at https://sourceforge.net/projects/ffnet
Your Pocket PC might have shipped with a music player already installed, but PocketMusic makes a case for itself as your mobile jukebox of choice. By default, the program's soothing blue interface puts all of the main controls near the center of the screen, with additional icons for accessing options; activating the EQ and X-Bass features; and shuffling tracks. The program also features a "skin" that makes PocketMusic look almost identical to Winamp, but we had to fool around for a bit to find our way back to the default layout.
Creating a playlist is a simple matter of clicking the appropriate icon and browsing your handheld for songs. The adjustable 10-band EQ comes with a healthy helping of presets, and you can easily enable or disable a crossfading feature. The built-in alarm clock is certainly a plus, as is support for codecs beyond MP3, including OGG and WMA. The 15-day trial is a bit shorter than we'd like but should give most mobile users enough time to fully evaluate the program. Pocket PC owners who routinely use their handheld for entertainment purposes should test this one out sooner or later.
PocketMusic plays MP3 files with all known bit rates along with possibility to recognize some not fully ISO compatible MP3 files, so you can enjoy your favorite music no matter how it's encoded. PocketMusic has a multilingual user interface - English, German, French, and Spanish.
This program has it all. You can even get add-ons for some of the newer file formats out there. I spent several hours online searching for the right music app for my new 6700 and settled on this one after trying it for free for 7 days. It has very clear playback and an extremely intuitive interface, and for those used to Winamp it has a skin that looks just like Winamp. As a matter of fact, you can even mod it with most Winamp 2.0 skins available on the 'net.
I honestly have nothing negative that I've discovered about it yet.
At first it worked extremely well, but if you choose not to upgrade at the end of the period it stops playing OGG and WMA. Considering my entire collection is in OGG, it became useless. Otherwise, it's great. It even lets you turn off the screen to save power while still using the programmable buttons to interact, which lets my HP5695 play for almost 6 hours.
"Locks up my iPAQ 2215 (conflict with sleep mode override)"
March 14, 2004
Version: PocketMusic 2.52
Both the bundle and freeware versions lock up my iPAQ 2215. Here's what happens: I set Pocket Music to override my iPAQ's sleep mode. I load some tracks into the program and start playing. At the time my iPAQ is scheduled to go into sleep mode (on external power, this is a maximum of 30 minutes after the last keystroke) the music stops. I can't wake the screen up, and if I use the power button, all I get is a white screen...I have to perform a hard reset to get the iPAQ to work.
This is a serious problem that really shouldn't exist in either the pay or free versions of this program.
"Good FREE product - What an improvement on MediaPlayer!"
February 28, 2004
Version: PocketMusic 2.52
Overall I find this application good. Made GREAT by configuring 'Key Mapping Options'. I have made it more useable by setting a hardware key for landscape mode so I can toggle between standard screen or large screen buttons when using it in the car. Try It!
Overall, the software was extremely good. After using Windows Media Player for two years, this software was exhilarating. The interface could be better; many of the buttons are small and can be difficult to press without the use of a stylus.
|
OPCFW_CODE
|
About the Client
The Client used Team Foundation Server (TFS) 2010 as an SDLC tool to maintain their code base. The Client was struggling with being on the antiquated TFS 2010, as it was hindering their ability to efficiently deliver quality software in a timely manner. The Client lacked an expert resource to effectively manage their project and work activities using TFS 2010. They also wanted to benefit from the latest version of TFS, i.e. TFS 2013, such as:
- Agile Portfolio Management
- Utilize your choice of source control providers
- Cloud-Based Load Testing
- Team Rooms for collaboration
- Web Test Case Management
- Heads up Display
The client approached Silicus for the migration. Silicus consultants evaluated the technical & functional dependencies in the code base, and came up with a migration plan designed to minimize the impact on business activities.
Silicus provided the client consultation on TFS 2013 features and their usage - code base maintenance and work management using custom reports, TFS queries and work progress. The next step was to migrate the client's TFS 2010 environment into TFS 2013 using SQL Server Data Migration and the TFS Upgrade Wizard.
Post migration, customization of Task Templates, Role & Access Control, Query Management and Task Tracking Rules was done, with the required SharePoint interfaces also customized. Silicus helped the client implement a Continuous Integration plan. The team also generated reports using SQL Server Reporting Services (SSRS).
Silicus also migrated the client’s existing WSS (Windows SharePoint Services 3.0) implementation to SharePoint 2013 using Metalogix and configured TFS project SharePoint sites as per client’s need. The entire migration process was completed within the scheduled application maintenance hours with minimal system downtime.
TFS Upgrade Wizard, TFS 2013, SharePoint 2013
SQL Server 2012
Windows Server 2012
SQL Server Reporting Services(SSRS)
Effective Project Tracking
The client had a multitenant application, so it was very difficult to track work progress on different tenant applications. TFS 2013's centralized project tracking features helped the client reduce the cost of project tracking, with clear visibility of project progress for senior management as well.
Reduced Manual Efforts
The Team Foundation Server (TFS) alerts and customized query features helped the client significantly reduce manual effort to track work and resolve dependency. Continuous Integration (CI) helped reduce manual effort and deployment errors on the development and production environments as the entire software package is built and deployed (to a CI environment) and then tested.
Customized SharePoint interfaces helped individual team members visualize work progress and issues reported in their respective tenant projects. TFS 2013 allows employees to collaborate on their work and share updates with PMs and team members, thus eliminating the need for status update meetings and emails.
|
OPCFW_CODE
|
# -*- coding: utf-8 -*-
'''
QDataWidgetMapper widget example
'''
from PyQt5.QtGui import *
from PyQt5.QtCore import *
from PyQt5.QtWidgets import *
import sys


class Form(QMainWindow):
    __slots__ = ("model", "typeModel", "nameEdit", "addressEdit",
                 "typeComboBox", "mapper", "previousButton", "nextButton")

    def __init__(self, parent=None):
        super().__init__(parent)
        centerWidget = QWidget()
        layout = QGridLayout()
        # Build the model, the mapper and the widgets to map
        self.setupModel()
        self.mapper = QDataWidgetMapper()
        self.mapper.setModel(self.model)
        nameLabel = QLabel("Name")
        self.nameEdit = QLineEdit()
        nameLabel.setBuddy(self.nameEdit)
        addressLabel = QLabel("Address")
        self.addressEdit = QTextEdit()
        addressLabel.setBuddy(self.addressEdit)
        typeLabel = QLabel("Type")
        self.typeComboBox = QComboBox()
        self.typeComboBox.setModel(self.typeModel)
        self.nextButton = QPushButton("Next")
        self.nextButton.clicked.connect(self.mapper.toNext)
        self.previousButton = QPushButton("Previous")
        self.previousButton.clicked.connect(self.mapper.toPrevious)
        # Track the mapper's record position (not the combo box's own
        # index) so the Previous/Next buttons are enabled correctly.
        self.mapper.currentIndexChanged.connect(self.updateButton)
        self.mapper.addMapping(self.nameEdit, 0)
        self.mapper.addMapping(self.addressEdit, 1)
        # Map the combo box's currentIndex property: column 2 stores a
        # row index into typeModel ("0", "1", "2"), not display text.
        self.mapper.addMapping(self.typeComboBox, 2, b"currentIndex")
        layout.addWidget(nameLabel, 0, 0, 1, 1)
        layout.addWidget(self.nameEdit, 0, 1, 1, 1)
        layout.addWidget(self.previousButton, 0, 2, 1, 1)
        layout.addWidget(addressLabel, 1, 0, 1, 1)
        layout.addWidget(self.addressEdit, 1, 1, 2, 1)
        layout.addWidget(self.nextButton, 1, 2, 1, 1)
        layout.addWidget(typeLabel, 3, 0, 1, 1)
        layout.addWidget(self.typeComboBox, 3, 1, 1, 1)
        centerWidget.setLayout(layout)
        self.setCentralWidget(centerWidget)
        self.resize(640, 480)
        self.setWindowTitle("PyQt5-QDataWidgetMapper")
        self.mapper.toFirst()

    def setupModel(self):
        items = ["Home", "Work", "Other"]
        self.typeModel = QStringListModel(items)
        self.model = QStandardItemModel(5, 3)
        names = ["Alice", "Bob", "Carol", "Donald", "Emma"]
        address = ["<qt>123 Main Street <br/>Market Town</qt>",
                   "<qt>PO Box 32 <br/>Mail Handling Service<br/>Service City</qt>",
                   "<qt>The Lighthouse<br/>Big City</qt>",
                   "<qt>47338 Park Avenue<br/>Base Camp<br/>Big Mountain</qt>",
                   "<qt>Research Station<br/>Base Camp<br/>Big Mountain</qt>"]
        types = ["0", "1", "2", "0", "2"]
        for row in range(5):
            name = QStandardItem(names[row])
            self.model.setItem(row, 0, name)
            addr = QStandardItem(address[row])
            self.model.setItem(row, 1, addr)
            itemType = QStandardItem(types[row])
            self.model.setItem(row, 2, itemType)

    def updateButton(self, index):
        self.previousButton.setEnabled(index > 0)
        self.nextButton.setEnabled(index < self.model.rowCount() - 1)


if __name__ == '__main__':
    app = QApplication(sys.argv)
    ex = Form()
    ex.show()
    sys.exit(app.exec_())
|
STACK_EDU
|
How do I get the first instance of an inner join to do a "select into" in SQL Server?
This is my SQL Statement thus far
INSERT INTO RegisteredDonors (firstname,
lastname,
organization,
OrganizationType,
OrganizationGroup,
OrganizationField1,
OrganizationField2,
OrganizationOther,
Website,
PrimaryEmail,
AltEmail,
PrimaryPhone,
PrimaryPhoneName,
AltPhone,
AltPhoneName,
FaxNumber,
BillingFirstName,
BillingLastName,
BillingCompany,
BillingAddressLine1,
BillingAddressLine2,
BillingCity,
BillingPostalCode,
BillingState,
BillingCountry,
ShippingFirstName,
ShippingLastName,
ShippingCompany,
ShippingAddressLine1,
ShippingAddressLine2,
ShippingCity,
ShippingPostalCode,
ShippingState,
ShippingCountry,
DateAdded,
DonorStatusCode,
qbId,
CreatedBy)
SELECT firstname,
lastname,
organization,
organization_type,
organization_group,
organization_field_1,
organization_field_2,
organization_other,
organization_website,
email,
email_alt,
telephone,
telephone_name,
telephone_alt,
telephone_alt_name,
fax,
payment_firstname,
payment_lastname,
payment_company,
payment_address_1,
payment_address_2,
payment_city,
payment_postcode,
payment_zone,
payment_country,
shipping_firstname,
shipping_lastname,
shipping_company,
shipping_address_1,
shipping_address_2,
shipping_city,
shipping_postcode,
shipping_zone,
shipping_country,
GETDATE() as CreateDate,
1 as DonorStatusCode,
qb.Id,
'Me' as CreatedBy
FROM
cart_order co
INNER JOIN
qb_customers qb ON (co.organization + ' (' + co.payment_zone + ')') = qb.FullyQualifiedName
ORDER BY
co.order_id DESC
How do I get the inner join to return the first row of results from cart_order? I am using SQL Server (Azure) and I can't figure this out. I looked at this StackOverflow answer and I am not sure if it applies correctly here.
SQL Left Join first match only
Thanks!
Please don't show a gazillion irrelevant columns. Using only 3 or 4 relevant columns makes your question exactly the same, and MUCH more readable.
Point Noted :-)
You have to use a ranking function.
;with co as (
select *, rank() over(partition by organization + ' (' + payment_zone + ')' order by organization + ' (' + payment_zone + ')') rnk
from cart_order
)
INSERT INTO RegisteredDonors (firstname,
lastname,
organization,
OrganizationType,
OrganizationGroup,
OrganizationField1,
OrganizationField2,
OrganizationOther,
Website,
PrimaryEmail,
AltEmail,
PrimaryPhone,
PrimaryPhoneName,
AltPhone,
AltPhoneName,
FaxNumber,
BillingFirstName,
BillingLastName,
BillingCompany,
BillingAddressLine1,
BillingAddressLine2,
BillingCity,
BillingPostalCode,
BillingState,
BillingCountry,
ShippingFirstName,
ShippingLastName,
ShippingCompany,
ShippingAddressLine1,
ShippingAddressLine2,
ShippingCity,
ShippingPostalCode,
ShippingState,
ShippingCountry,
DateAdded,
DonorStatusCode,
qbId,
CreatedBy)
SELECT firstname,
lastname,
organization,
organization_type,
organization_group,
organization_field_1,
organization_field_2,
organization_other,
organization_website,
email,
email_alt,
telephone,
telephone_name,
telephone_alt,
telephone_alt_name,
fax,
payment_firstname,
payment_lastname,
payment_company,
payment_address_1,
payment_address_2,
payment_city,
payment_postcode,
payment_zone,
payment_country,
shipping_firstname,
shipping_lastname,
shipping_company,
shipping_address_1,
shipping_address_2,
shipping_city,
shipping_postcode,
shipping_zone,
shipping_country,
GETDATE() as CreateDate,
1 as DonorStatusCode,
qb.Id,
'Me' as CreatedBy
FROM co
inner join qb_customers qb on (co.organization + ' (' + co.payment_zone + ')') = qb.FullyQualifiedName
where co.rnk = 1
order by co.order_id desc
I'm not 100% sure of rank() over(partition by organization + ' (' + co.payment_zone + ')' order by organization + ' (' + co.payment_zone + ')') rnk. If the result is not the expected one, run only the CTE part to check that rnk = 1 occurs once per customer.
Ok. I think this is close. I am still getting the dupes with organization + ' (' + payment_zone + ')'.
Ok, replace rank() with ROW_NUMBER().
https://www.red-gate.com/simple-talk/sql/learn-sql-server/working-with-window-functions-in-sql-server/
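As the comments above suggest, ROW_NUMBER() (unlike RANK(), which gives tied rows the same value) guarantees exactly one row per group. A minimal sketch of the pattern, using SQLite through Python's sqlite3 module with shortened, hypothetical table and column names rather than the full schema from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE cart_order (order_id INTEGER, organization TEXT, payment_zone TEXT)")
cur.executemany("INSERT INTO cart_order VALUES (?, ?, ?)", [
    (1, "Acme", "VIC"),
    (2, "Acme", "VIC"),    # same organization/zone key as order 1
    (3, "Globex", "NSW"),
])
# ROW_NUMBER() numbers rows uniquely within each partition, so rn = 1
# keeps exactly one row per organization/zone pair; RANK() would assign
# tied rows the same value and keep both duplicates.
rows = cur.execute("""
    WITH co AS (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY organization || ' (' || payment_zone || ')'
                   ORDER BY order_id DESC
               ) AS rn
        FROM cart_order
    )
    SELECT order_id, organization FROM co WHERE rn = 1 ORDER BY order_id
""").fetchall()
print(rows)  # [(2, 'Acme'), (3, 'Globex')]
```

Ordering the window by order_id DESC keeps the latest order per customer, mirroring the ORDER BY in the question.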
You can use top (1):
select top (1) . . .
from cart_order co inner join
qb_customers qb
on (co.organization + ' (' + co.payment_zone + ')') = qb.FullyQualifiedName
order by co.order_id desc
I am not sure about this answer. The inner join returns multiple results based on the concatenation "inner join qb_customers qb on (co.organization + ' (' + co.payment_zone + ')') = qb.FullyQualifiedName". If I do TOP(1), that will just return one row for the entire query, not one row per result of that join.
@MrTouya . . . The question is: "How do I get the inner join to return the first row of results from cart_order?". This returns one row of results that match the from conditions ordered by the ordering clause. I can't think of any other interpretation of your question.
The target table is RegisteredDonors. I think he wants 1 row per customer; top (1) is only one row total, not one row per customer. This is my interpretation of the question.
@DanielBlais . . . I am not denying that might be intention. However, I see nothing at all in the question that supports that interpretation.
|
STACK_EXCHANGE
|
Our team was discussing the advances in wearable technology, and how it helps us stay connected while making daily life easier in aspects like health, navigation, and safety. Eventually, we began to wonder how wearable technology could play these same roles in the lives of different user groups. We considered the design specifications and changes for groups that cannot use the sense of sight: primarily people with visual impairment, or people in occupations like mining/mountaineering that can at times involve pitch darkness.
What it does
Step (1) Environmental Assessment
The Telus DevKit board takes sensory data from the user's environment (e.g. temperature, pressure, humidity, latitude/longitude and location) and sends it over LTE to the Azure database via Azure IoT Hub. This step provides authorities with invaluable information about the environment at the time of an accident or emergency (e.g. mining disasters, missing persons, etc.).
Step (2) Obstruction Detection
The Raspberry Pi board is connected to 3 ultrasonic sensors, through which it detects any obstructions within a 4 metre range of the user's head. This data is wirelessly communicated to an online server via WiFi, which can then provide input to the Arduino system. The system is also equipped with a help button that sends an emergency text message to an emergency contact with a single press by the user.
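The core of the obstruction-detection step can be sketched in Python. This is an illustrative sketch only, assuming HC-SR04-style sensors that report distance via the width of an echo pulse; the function names are ours, not from the project:

```python
SPEED_OF_SOUND_CM_PER_S = 34300  # in air at roughly 20 °C

def echo_to_distance_cm(pulse_seconds):
    # HC-SR04-style sensors report distance as the width of the echo
    # pulse; sound travels out and back, so halve the round trip.
    return pulse_seconds * SPEED_OF_SOUND_CM_PER_S / 2

def detect_obstructions(pulse_readings, max_range_cm=400):
    # One reading per sensor (e.g. left, centre, right); flag anything
    # inside the 4 m envelope described above.
    return [echo_to_distance_cm(p) <= max_range_cm for p in pulse_readings]

# A 0.01 s echo corresponds to ~171.5 cm (an obstruction), while a
# 0.03 s echo (~514.5 cm) is outside the 4 m range.
print(detect_obstructions([0.01, 0.03]))  # [True, False]
```

The per-sensor booleans are what would then be translated into the custom vibration patterns described in Step 3.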
Step (3) Haptic Direction
The Arduino uses input communicated wirelessly from the Ultrasonic sensors to set custom vibrations that can communicate any obstructions, along with their range and direction relative to the user. This component allows users to act upon all the information gathered from our system to seamlessly move around without any hand-held aids.
How we built it
Challenges we ran into
- Setting up the Python Wireless Connection for Data collection
- Drawing out (and accordingly building) sound circuitry to ensure all UltraSonic Sensors provide accurate data
- Connecting Telus Devkit to Azure
- Stably building the housing components in wearable form for the systems
- Interfacing Raspberry Pi with Arduino
Accomplishments that we're proud of
We did not have actuators or vibrators available, so we really had to understand the inner workings of stepper motors and rewire them (as well as adapt the code accordingly) to have them work as vibrators using input from the Raspberry Pi.
Setting up the environments for all the systems-on-a-chip was a time-consuming and tedious job with much debugging. Getting through that took many hours, but we managed to successfully interface and integrate all components of our tech
The data output by hardware was a bit challenging to work with, especially in the case of the UltraSonic Sensors. However, with much testing and documentation review, we did eventually achieve proper obstacle detection using these sensors.
What we learned
Setting up multi-component IoT Systems, and the importance of considering different user groups and their specific design specs when prototyping.
What's next for VisionSense
- making the wearable technology smaller and easier to wear
- using computer vision with an embedded camera that communicates (using audio) any pre-determined tags to the user if they appear before the user
|
OPCFW_CODE
|
/**
 * @description LTE EARFCN Calculator
 * @author Taesu Hyeon
 * @license MIT
 */
const table = require("../lteBandTable.json");

// TODO: Do you want to slice if the end of value ".0"? Do not?
// const regex = /.0$/;
// earfcn.match(regex) ? earfcn.substr(0, earfcn.length - 2) : earfcn;

const LTE = {
  lteBandTable: table,

  /**
   * Calculate the EARFCN for a frequency within a given band.
   * @param {number} inputBand LTE band.
   * @param {number} freq LTE frequency (MHz).
   * @returns {string|boolean} EARFCN fixed to one decimal place, or false for an invalid frequency.
   */
  freqToEarfcnByBand: (inputBand, freq) => {
    for (let element of table) {
      let earfcn;
      if (inputBand == element.band) {
        if (freq >= element.FDL_Low && freq <= element.FDL_High) {
          earfcn = (freq - element.FDL_Low) * 10 + element.NDL_Min;
          earfcn = earfcn > element.NDL_Max ? earfcn - 1 : earfcn;
        } else if (freq >= element.FUL_Low && freq <= element.FUL_High) {
          earfcn = (freq - element.FUL_Low) * 10 + element.NUL_Min;
          earfcn = earfcn > element.NUL_Max ? earfcn - 1 : earfcn;
        } else {
          console.error("[Error] Wrong frequency value.");
          return false;
        }
        return earfcn.toFixed(1);
      }
    }
  },

  /**
   * Calculate the frequency for an EARFCN.
   * @param {number} earfcn LTE EARFCN value.
   * @returns {string|boolean} Frequency in MHz fixed to one decimal place, or false for an invalid EARFCN.
   */
  earfcnToFreq: earfcn => {
    for (let element of table) {
      let freq;
      // Check FDD uplink EARFCN range
      if (earfcn >= 18000 && earfcn < 36000) {
        if (earfcn >= element.NUL_Min && earfcn <= element.NUL_Max) {
          freq = element.FUL_Low + 0.1 * (earfcn - element.NUL_Min);
        }
      }
      // Check FDD downlink EARFCN range
      else if (earfcn < 36000 || (earfcn > 65535 && earfcn <= 98303)) {
        if (earfcn >= element.NDL_Min && earfcn <= element.NDL_Max) {
          freq = element.FDL_Low + 0.1 * (earfcn - element.NDL_Min);
        }
      }
      // Check TDD EARFCN range
      else if (earfcn >= 36000 && earfcn <= 65535) {
        if (earfcn >= element.NDL_Min && earfcn <= element.NDL_Max) {
          freq = element.FDL_Low + 0.1 * (earfcn - element.NDL_Min);
        }
      }
      if (freq) {
        return freq.toFixed(1);
      }
    }
    console.error("[Error] Wrong earfcn value.");
    return false;
  }
};

// module.exports = LTE;
export default LTE;
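For reference, both conversions implement the standard 3GPP TS 36.101 mapping F = F_low + 0.1 · (N − N_offs). A small Python sketch of the downlink direction, using Band 1's published values (F_DL_low = 2110 MHz, downlink offset 0) as an illustration:

```python
def earfcn_to_freq_mhz(earfcn, f_dl_low, n_offs_dl):
    # 3GPP TS 36.101: F_DL = F_DL_low + 0.1 * (N_DL - N_Offs-DL)
    return f_dl_low + 0.1 * (earfcn - n_offs_dl)

def freq_to_earfcn(freq_mhz, f_dl_low, n_offs_dl):
    # Inverse of the mapping above; the channel raster is 100 kHz.
    return round((freq_mhz - f_dl_low) * 10 + n_offs_dl)

# Band 1 downlink: F_DL_low = 2110 MHz, N_Offs-DL = 0,
# so EARFCN 300 corresponds to roughly 2140 MHz.
print(earfcn_to_freq_mhz(300, 2110, 0))
print(freq_to_earfcn(2140.0, 2110, 0))
```

The per-band lows and offsets are exactly what the lteBandTable.json lookup in the code supplies.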
|
STACK_EDU
|
When testing against the FrankieOne Sandbox you may need to run some common scenarios. These are described below using Test Personas from our Test Data.
An easy initial test is confirming that a match works. To do this please make sure you enter the information exactly as it appears within the test data spreadsheet. Depending on the Recipe you're using, the amount of information that needs to be provided will vary (for example, if 2 matches required then enter 2 different document details). See Entity Profiles/Recipes Available to Test With for details on what checks are done with each available profile/recipe.
The easiest way to simulate a failed match is to choose an individual from our Test Data, enter the information as shown and then vary one or more of the values (e.g. the First Name is listed as Bobby but you instead enter Bobble, which will cause the first name to not match). What you choose to change will depend on the Recipe, as the attributes you've varied will need to be used by the checks to have an effect on the outcome (e.g. if you vary just the documentation details, and the chosen recipe doesn't involve ID checks, it may still complete the match successfully).
⚠️ Avoid changing Family Name
When attempting to execute a partial check failure, avoid varying the Family Name as this may cause other unintended consequences. This is due to this field forming part of the primary key of the Test Persona.
While our Test Data is a static list of defined attributes, you are also able to simulate outcomes using your own information.
To simulate a scenario where your entered information always passes, all you need to do is set the Middle Name to passall, while the remaining information can be whatever you need it to be. However, please ensure the entered data meets the minimum required for the Recipe. If you would like to provide successful check results on only one data source (as opposed to two), you can set the Middle Name to
Sometimes you may be limited in setting name information to explicitly force a match (for example when the name has to match another integrated test system). So to support that, you have two alternatives to consider:
- You can set the Street Name to onepass, which will force successful checks to occur
- You can include 37125 in the Document Number for the first Identity Document you've provided in your check. This value can appear anywhere in the document number (e.g. XXX37125 or MM37125PP).
If you enter information which doesn't match any of the available Test Personas, it will not match (see above Simple Fail Test).
Here are some other common scenarios that you may want to simulate:
If you want to simulate a scenario where the test will only pass the document checks (and fail the remaining), you can set the Other Name to FailNonID, and then enter the data as per our Test Data spreadsheet.
Don't forget the details...
You will need to ensure that the rest of the details normally verified with the ID (such as name, DoB, and the ID details) are correct for this to work as anticipated.
If you want to simulate a credit header failure from Equifax or Experian, please set the Street Name or Long Form to include Experian (e.g. 77 Experian Street Melbourne, VIC 3000, Australia).
From the Test Data spreadsheet use a Test Persona that has a PEP, Sanctions and/or Fraud specified (see the last column in the Test Data). When this individual is used for matching, a pre-canned match will be done and made available. Please ensure the Recipe being used is doing a PEP check.
The potential match used for PEP, Fraud and/or Sanctions is static and does not vary between tests. If you have other external test services within your Recipe structure, you can use a row where the individual is marked as SKIP - this will bypass the PEP/Sanctions check in our Test Data and use the next external test service for this check.
|
OPCFW_CODE
|
Change PhpMyAdmin Cookie Path
PhpMyAdmin sets the cookie path to the path PMA is installed in.
So if it is installed in /pma/ it will set the cookie path to /pma/.
However, I have logout code which needs to access the session to be able to log the user out of PMA along with other system sessions, but it is located in /.
This of course does not work since the browser will not allow the cookie to be accessed outside of the cookie path.
Setting the default cookie path in php.ini does not work since PMA specifies its own path when setting the cookie which will override the php.ini setting.
Is there a way I can change the default cookie path set by PMA to / by setting a config option or possibly extending a PMA class? I've also read about a thing called monkey patching using runkit; can it do that? If so, how?
Editing PMA core files is not an option as I want the change to stick between upgrades. If this can be coded into config.inc.php it would stick between upgrades.
Some things I have tried via config.inc.php :
Setting my own cookie with the same name and session ID - does not work, as the cookie is overwritten by the one PMA sets later in the PHP processing.
Changing the value of $_SERVER['PHP_SELF'], which is one of the values used in determining the path - does not work, and I'm not sure why.
I tried to extend the PMA_Config class, particularly the getCookiePath() method - did not work, as I don't know how to instantiate it so it overrides PMA_Config's method.
I would consider runkit a bit over the top for that. Why don't you just hack the place where the string is originating from and so just patch the application to fulfill your needs if it does not provide the configuration option you're looking for? Add it as a configuration option and file a feature request with your patch then.
Also it looks like you're looking for SingleSignOn which is supported by PMA. Look in the docs what the existing options are: http://wiki.phpmyadmin.net/pma/Auth_types
whoa, didn't think people would respond so soon. I'm actually in the midst of setting up SSO already =o)
@hakre Got SSO to work, I had spent a week trying to figure out how to get by this and all I needed was to use different authtype. Since you did respond with a correct solution, would you want to put it as an answer with an explanation on how to set it up?
I have a better suggestion: Answer your own question. That's totally valid and I bet you can write a better answer than I am able to. I was just giving some pointers, that was all ;). And yes, it's totally accepted on this site that you answer your own questions. Good to read you solved it now ;)
I ended up using the Signon authtype for PMA
My PMA config then became:
$cfg['Servers'][$i]['auth_type'] = 'signon';
$cfg['Servers'][$i]['SignonSession'] = 'YOUR_SESSION_ID'; // You can use ini_get('session_name') if you prefer
$cfg['Servers'][$i]['SignonURL'] = 'FULL_URL'; // URL it will redirect unauthenticated users
$cfg['Servers'][$i]['LogoutURL'] = 'FULL_URL'; // URL where user will be redirected after logout
In my SSO script, I added these lines when logging into the rest of the system so it will simultaneously login to PMA as well.
$_SESSION['PMA_single_signon_user'] = 'MY_PMA_USERNAME';
$_SESSION['PMA_single_signon_password'] = 'MY_PMA_PASSWORD';
Now I don't require having to access PMA cookie as it uses the SID cookie set by main system. To logout of main system and PMA I do a regular cookie logout:
$_SESSION = array();
setcookie(session_name(), '', time() - 3600);
session_destroy();
|
STACK_EXCHANGE
|
Free Software -
Command line utilities
Written by Thomas
Tuesday, 10 February 2009 18:34
Page 4 of 5
/? = Displays this help screen and exits with code 1.
Displays a help screen with all supported parameters
/0 = Include files with 0 length.
Since files with a file size of 0 don't have any contents, all files with a size of zero are automatically reported as duplicate files. Use this option with great care!
/B = Show file sizes to test.
Displays messages with file sizes the software is currently comparing. FindDuplicates starts with the larger files first. This is the "best guess" to estimate the remainder of the process because the software does not show a progress bar.
/DD = Delete all duplicates.
Deletes duplicate files. The software removes the duplicate with the longer path/file name. If both path/file names have the same length the second one is removed. Use with care!
(1) 21883418 octets, folder1\file1.dat
(2) 21883418 octets, folder2\file2.dat
This would remove the file file2.dat in folder2.
/NR = Don't move files to the recycle bin.
By default, FindDuplicates moves files to the recycle bin instead of deleting them immediately. The recycle bin is handled by Windows automatically; if it cannot hold more files, the oldest files in it are removed permanently. Use with care!
/D = List all directory names, not just the top one.
Displays a list with all folder names to walk through.
/E = Only list files with 0 length.
Displays a list with all files that have a file size of 0. This parameter can be used to identify files FindDuplicates would remove because their file size is 0.
/H = Include hidden files.
Includes hidden files in the comparison and deletion process.
/L = List all read files.
Displays a list with all read files before the actual comparison process starts.
/O = Show the sorted file list.
Before FindDuplicates starts comparing files it sorts them by file sizes. Larger files are put at the top and smaller files at the end. This command line parameter displays the sorted list before the actual comparison process starts.
/S = Check subdirectories.
All subfolders are included in the comparison/deletion process.
/T = List files to test.
Displays a complete list with all files to compare.
/V = Verbose ('Comparing files...' etc).
Displays a message for each file comparison.
/Q = Quiet mode (overrides any other output parameter).
Make the software quiet.
/Y = Answer 'Do you want to delete...' questions quietly with 'Y'.
Useful with /DD and /NR. When /Y is provided FindDuplicates does not ask for confirmations when deleting files or when moving them to the recycle bin. Use with care!
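The comparison strategy the options above describe (sort by size, compare larger files first, optionally include empty files) can be sketched as follows. This is an illustrative Python sketch of the technique, not FindDuplicates' actual implementation:

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(root, include_empty=False):
    """Group files by size first (cheap), then by content hash
    (expensive), mirroring the size-first order described above."""
    by_size = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            size = path.stat().st_size
            if size == 0 and not include_empty:
                continue  # like the default behaviour without /0
            by_size[size].append(path)
    duplicates = []
    # Larger files first, matching the tool's "best guess" ordering.
    for size in sorted(by_size, reverse=True):
        group = by_size[size]
        if len(group) < 2:
            continue  # a unique size cannot have duplicates
        by_hash = defaultdict(list)
        for path in group:
            by_hash[hashlib.sha256(path.read_bytes()).hexdigest()].append(path)
        duplicates.extend(g for g in by_hash.values() if len(g) > 1)
    return duplicates
```

Only same-size files ever have their contents compared, which is why the size list shown by /B is a useful progress estimate.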
Last Updated on Monday, 28 June 2010 09:16
|
OPCFW_CODE
|
Novice Programming Environments Assessments
We are seeking additional assessments of either the listed environments or other similar environments. Assessments need not be published.
1. Moskal, B., D. Lurie, and S. Cooper, 2004: Evaluating the effectiveness of a new instructional approach. SIGCSE '04: Proceedings of the 35th SIGCSE technical symposium on Computer science education.
2. Van Haaster, K., and D. Hagan: Teaching and Learning with BlueJ: an Evaluation of a Pedagogical Tool. Information Science and Information Technology Education Joint Conference, Rockhampton, QLD.
3. Ragonis, N., and M. Ben-Ari, 2005: On understanding the statics and dynamics of object-oriented programs. SIGCSE '05: Proceedings of the 36th SIGCSE technical symposium on Computer science education, St. Louis, Missouri, USA, ACM Press, 226-230.
4. Levy, R. B., M. Ben-Ari, and P. A. Uronen, 2003: The Jeliot 2000 program animation system. Comput. Educ., 40, 1-15.
5. Sanders, D. and Dorn, B., “Classroom Experience With Jeroo”, Journal of Computing Sciences in Colleges, Vol. 18, No 4., April 2003, pp. 308-316.
6. Forte, A., and M. Guzdial, 2004: Computers for Communication, Not Calculation: Media as a Motivation and Context for Learning. HICSS '04: Proceedings of the 37th Annual
7. Hendrix, T. D., J. H. Cross, S. Maghsoodloo, and K. H.Chang. The effectiveness of control structure diagrams in source code comprehension activities. IEEE Transactions on Software Engineering. Vol. 28. No. 5. May 2002, 463-477.
8. Cross, J. H., T. D. Hendrix, Mathias, K. S. and Larry A. Barowski. Software visualization and measurement in software engineering education: an experience report. Proceedings of Frontiers in Education 1999. November 10-13, 1999. Vol. 2. 12B105-12B110.
9. Cross, J. H., T. Dean Hendrix, and Larry A. Barowski: Debugging [in] CS1. Proceedings of 2001 ACM
10. Borge, R., Fjuk, A. and A.-K. Groven. Using Karel J collaboratively to facilitate object-oriented learning. IEEE International Conference on Advanced Learning Technologies. August 30 – September 1, 2004. 580-584.
11. Fagin, B., and L. Merkle, 2003: Measuring the effectiveness of robots in teaching computer science. SIGCSE '03: Proceedings of the 34th SIGCSE technical symposium on Computer science education, Reno, Nevada, USA, ACM Press, 307-311.
12. Williams, A. B. The qualitative impact of using Lego Mindstorms robots to teach computer engineering. IEEE Transactions on Education. Vol. 46. Issue 1. Feb. 2003. 206.
López, J., N. Myller, and E. Sutinen, 2004: Sorting out sorting through concretization with robotics. AVI '04: Proceedings of the working conference on advanced visual interfaces.
15. Wang, E. Teaching freshmen design, creativity, and programming with Legos and Labview. Proceedings of Frontiers in Education Conference 2001. October 10-13, 2001. Vol. 3. F3G11 – F3G15.
Dr. Kris D. Powers Paul Gross
Dept. of Computer Science Tufts University 161 College Ave. Medford, MA 02155
|
OPCFW_CODE
|
In today’s rapidly advancing technological landscape, the fields of cognitive computing and machine learning have gained significant prominence. As businesses strive to harness the power of artificial intelligence (AI) to gain a competitive edge, Synapse Network’s SYN platform emerges as a leader in enhancing cognitive computing and machine learning capabilities. In this article, we will delve into the innovative features and benefits of SYN, exploring how it empowers organizations to unlock the full potential of AI for their business needs.
Understanding Cognitive Computing
Cognitive computing encompasses the capability of computers to replicate human thinking processes, empowering them to analyze extensive datasets, comprehend natural language, and make well-informed choices. Through the integration of diverse technologies such as machine learning, natural language processing, and data mining, cognitive computing systems strive to enhance human intelligence and improve decision-making.
The Rise of Machine Learning
Machine learning, which falls under the umbrella of artificial intelligence (AI), is dedicated to empowering computers to learn from data and enhance their performance without relying on explicit programming. It entails the creation of algorithms and statistical models that enable machines to recognize patterns, make predictions, and dynamically adjust to new information. The impact of machine learning is reshaping various industries by enabling automation, facilitating predictive analytics, and delivering personalized experiences.
SYN: Revolutionizing Cognitive Computing and Machine Learning
SYN by Synapse Network is a game-changing platform that elevates cognitive computing and machine learning to unprecedented heights. With its advanced technology stack and cutting-edge algorithms, SYN delivers exceptional performance and efficiency. It boasts enhanced data processing capabilities, adaptive learning techniques, and seamless scalability, enabling it to handle vast amounts of data and complex tasks with remarkable accuracy and speed. SYN prioritizes explainability and interpretability, providing transparent insights into its predictions, while also offering versatility and customization to suit specific industry needs. Its seamless integration with existing infrastructure ensures smooth adoption, empowering businesses and researchers to unlock new insights and achieve transformative outcomes in today’s data-intensive world.
Seamless Data Integration and Preprocessing
SYN provides a robust data integration framework, allowing organizations to effortlessly ingest and preprocess large volumes of structured and unstructured data from diverse sources. Its intelligent data pipeline automates data cleansing, normalization, and transformation tasks, ensuring data accuracy and consistency.
Scalable and High-Performance Computing
SYN harnesses the power of distributed computing and cloud infrastructure, enabling organizations to process massive datasets at lightning-fast speeds. Its scalable architecture ensures optimal utilization of computing resources, allowing businesses to handle complex AI workloads efficiently.
Advanced Machine Learning Algorithms
SYN offers a comprehensive suite of state-of-the-art machine learning algorithms, ranging from supervised and unsupervised learning to reinforcement learning. These algorithms can be seamlessly integrated into existing workflows, empowering organizations to extract valuable insights, make accurate predictions, and automate decision-making processes.
Natural Language Processing and Understanding
With SYN’s natural language processing (NLP) capabilities, businesses can unlock the potential hidden within textual data. SYN’s NLP algorithms enable sentiment analysis, named entity recognition, topic modeling, and language translation, among other functionalities. This empowers organizations to derive meaningful insights from unstructured text and optimize customer interactions.
Explainable AI and Model Interpretability
Transparency and interpretability are crucial aspects of deploying AI systems in real-world scenarios. SYN incorporates explainable AI techniques, allowing organizations to understand how decisions are made by the underlying models. This enhances trust and facilitates compliance with regulatory requirements.
Continuous Learning and Adaptability
SYN facilitates continuous learning by enabling models to adapt and evolve. Through reinforcement learning techniques, models can learn from feedback and improve their performance based on changing data patterns. This adaptability ensures that AI systems remain up-to-date and relevant in dynamic business environments.
As the demand for cognitive computing and machine learning capabilities continues to soar, SYN by Synapse Network stands at the forefront of innovation. By seamlessly integrating advanced algorithms, scalable computing infrastructure, and natural language processing, SYN empowers organizations to unlock the true potential of AI. With SYN, businesses can make data-driven decisions, enhance customer experiences, and gain a competitive edge in their respective industries. To stay ahead in the digital era, embracing SYN’s transformative capabilities is the key. Explore the possibilities and revolutionize your cognitive computing and machine learning journey with SYN by Synapse Network.
|
OPCFW_CODE
|
Thanks a lot!
what if i wanted to .pop() randomly…?
The .pop() method supports indexing: when no index is given, the last element is popped; when an index is given, the element at that index is popped.
Knowing the length of a list, we can generate a random number (an index) that is within range.
from random import randrange
We use randrange because it generates integers that fall within the range of valid indices for our list.
names = ['Sasha', 'Eric', 'Maria', 'Jason', 'Alice', 'Mathew', 'Emma', 'Logan', 'Naomi', 'Parker']
n = len(names)
x = randrange(n)
print(names[x])
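Since .pop() accepts an index, you can also remove the random element directly rather than just printing it; random.choice covers the case where nothing should be removed. A small sketch:

```python
from random import randrange, choice

names = ['Sasha', 'Eric', 'Maria', 'Jason', 'Alice']

# Pop (remove and return) the element at a random valid index.
picked = names.pop(randrange(len(names)))
print(picked)       # one of the original names
print(len(names))   # the list is now one element shorter

# If you only need a random element without removing it, choice() is simpler.
print(choice(names))
```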
thanks a lot
Sorry, but this doesn’t make sense.
Link is irrelevant.
a += b will call __iadd__ and mutate a in place, whereas
a = a + b will create a new object and assign it to a.
This doesn't explain why only the 1st character is assigned to the students_in_poetry list.
That isn’t happening in that screenshot.
What are the first letters? What is printed? Doesn’t match.
>>> a = [2, 5, 9]
>>> a += 18, 25
>>> a
[2, 5, 9, 18, 25]
The augmented value is a sequence. Likewise below,
>>> a = [2, 5, 9, 18, 25]
>>> a += 'Python'
>>> a
[2, 5, 9, 18, 25, 'P', 'y', 't', 'h', 'o', 'n']
>>> a += ['Python']
>>> a
[2, 5, 9, 18, 25, 'P', 'y', 't', 'h', 'o', 'n', 'Python']
This is a great example!
But I still struggle to understand.
- string - it iterates
- list - treated as an atomic entity
I can memorise it like this, unless there is a better explanation.
+= iterates regardless. It doesn’t care in the slightest what the input is.
>>> a = [2, 5, 9, 18, 25]
>>> a += 'Python'
>>> a
[2, 5, 9, 18, 25, 'P', 'y', 't', 'h', 'o', 'n']
>>> a += ['Python', 'Java']
>>> a
[2, 5, 9, 18, 25, 'P', 'y', 't', 'h', 'o', 'n', 'Python', 'Java']
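Another way to remember it: a += b behaves like a.extend(b), which iterates whatever it is given, while a.append(b) adds b as a single element. A quick sketch:

```python
a = [1, 2]
a += 'hi'          # += extends with each element of the iterable
assert a == [1, 2, 'h', 'i']

b = [1, 2]
b.extend('hi')     # list.extend behaves the same way
assert b == [1, 2, 'h', 'i']

c = [1, 2]
c.append('hi')     # append adds the whole object as one element
assert c == [1, 2, 'hi']
```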
Thanks a lot for your help and support!
|
OPCFW_CODE
|
licensing - license and protect php script - Stack Overflow
This class can be used to generate license-style keys to control the distribution and functionality of PHP applications. It generates license strings that can bind PHP applications to specific domains or specific servers, limit execution to a given time period, or enforce restrictions placed by a home server within the license key. WP Licensing System does not require the Auth PHP class, so you can deploy the keys on any platform or scripting language. With the addition of a match string as a second level of security, the end user is required to supply a valid key along with the corresponding match key. Licensing System Lightbox functions as a 'modular' system, in which individual templates and features can be unlocked to provide the best, and most cost-effective, combination for the user's requirements. This is where licensing comes in: a licence unlocks one or more features of Lightbox.
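As a rough illustration of the domain-binding idea (a hedged sketch in Python, not PADL's actual API; the names `SECRET`, `make_key`, and `verify_key` are invented here), a key can be an HMAC over the domain and expiry date, so it only verifies for the licensed domain:

```python
import hmac
import hashlib

# Assumption: this secret is kept private on the vendor's server.
SECRET = b'server-side-secret'

def make_key(domain: str, expires: str) -> str:
    """Derive a license key bound to a domain and an expiry date."""
    payload = f'{domain}|{expires}'.encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify_key(domain: str, expires: str, key: str) -> bool:
    """Recompute the key server-side and compare in constant time."""
    return hmac.compare_digest(make_key(domain, expires), key)

key = make_key('example.com', '2025-02-28')
print(verify_key('example.com', '2025-02-28', key))  # valid for the bound domain
print(verify_key('evil.com', '2025-02-28', key))     # fails for any other domain
```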
PADL (PHP Application Distribution License) System
|
OPCFW_CODE
|
When many IT pros first think of Windows 2000 user administration, they think of domains, network accounts, and share permissions. Yet there’s another side to user administration that’s equally important: local workstation administration. Without adequate workstation administration, end users can wreak havoc on a Windows machine. Here are two administration tips to help keep your workstations running smoothly. Learn how to keep user profile changes after logging off and how to prevent end users from creating user accounts.
Retain changes to user profiles after they’ve logged off
A user’s desktop and working environment are stored in the user’s profile in Windows 2000. When users log off, the changes they make to their profiles are usually saved in order for their profiles to be updated the next time they log on.
If you run across a situation where the profile changes are being lost, the culprit could be that the user’s account isn’t a member of the local Users group. When users log on from accounts that don’t belong to the Users group, Windows 2000 treats them as guests and therefore doesn’t save the user’s profile changes. This doesn’t apply if the user logs on with an account that is a member of the Administrators group. In this case, the changes will be retained.
The solution to this problem is simple: add the user’s account to the local Users group. To make the change, go through the Local Users And Groups snap-in in the Computer Management console. After making the change, log on as the user, change a profile setting, log off, and then log back on to verify that the problem is resolved.
Prevent users from creating user accounts
Windows 2000 Professional offers two methods for creating Local User accounts. You can use either the Users And Passwords object in the Control Panel or the Local Users And Groups snap-in in the Computer Management console. Both allow you to create and modify accounts, but there are differences between the two.
One primary difference is in the access that users have to these tools. Members of the Users group who open the Users And Passwords object in the Control Panel are prompted to specify credentials for an account in the Administrators group. Depending on your system's configuration, however, members of the Users group can open the Local Users And Groups snap-in through the Computer Management console, create new accounts, and set the passwords for those accounts without being prompted for an Administrator account. However, members of the Users group cannot make an account a member of the Administrators group or change the passwords of other accounts.
You can prevent members of the Users group from creating accounts through the Local Users And Groups snap-in by following these steps:
- Log on as an Administrator.
- Open the Local Users And Groups snap-in from the Computer Management console.
- Click the Groups folder and double-click Power Users in the right pane to open the group’s properties.
- Click NT AUTHORITY\INTERACTIVE, select Remove, and then click OK.
When this built-in group no longer appears in the Power Users properties, members of the Users group will no longer be able to create accounts.
|
OPCFW_CODE
|
M: Coding the Matrix: Linear Algebra Through Computer Science Applications - _6cj7
http://codingthematrix.com/
R: cr0sh
If you want to cover the basics of LA (vector and matrix manipulation,
mainly), and want to have some practical application of that knowledge - there
are two main areas which can be easily explored at home:
1\. 3D graphics programming
2\. Machine learning (particularly neural networks)
For the first, don't just start playing with OpenGL or Direct3D - while you
need to know the math on those, you won't get your feet as wet. What you want
to do is start from the bottom and build up (essentially building a software
3D engine). While you won't be generally dealing with large matrices or
vectors (4x4 mainly), it will be more than plenty to teach the bare ropes.
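As a taste of that bottom-up approach, here is a minimal Python sketch of the two core operations a software 3D engine performs: multiplying a 4x4 matrix by a homogeneous point, and the perspective divide. (The function names and the screen-distance convention are illustrative choices, not taken from any particular engine.)

```python
import math

def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major nested lists) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def rot_y(angle):
    """4x4 rotation matrix about the Y axis."""
    c, s = math.cos(angle), math.sin(angle)
    return [[ c, 0, s, 0],
            [ 0, 1, 0, 0],
            [-s, 0, c, 0],
            [ 0, 0, 0, 1]]

def project(v, d=2.0):
    """Perspective divide: project onto a screen plane at distance d."""
    x, y, z, _ = v
    return (d * x / z, d * y / z)

# A 90-degree turn about Y sends (1, 0, 0) to (0, 0, -1).
p = mat_vec(rot_y(math.pi / 2), [1.0, 0.0, 0.0, 1.0])

# A point twice as far away as the screen plane lands at half scale.
screen = project([1.0, 1.0, 4.0, 1.0])
print(screen)
```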
Machine learning - and neural networks - are where you start to deal with much
larger matrices, as they hold the mathematical representation of the nodes
which make up the graph that is the network. Now you have to shift gears and
think about how to parallelize things, on a much (potentially) larger scale
(even here, though, you can start out small - a simple NN to learn the XOR
function is very small, but contains everything needed to move on to larger
networks once you understand the basics).
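To make the XOR example concrete, here is a minimal hand-weighted network sketched in Python; the weights are chosen by hand rather than learned, so it only illustrates the structure, not the training:

```python
def step(v):
    """Threshold activation: fire if the weighted input is positive."""
    return 1 if v > 0 else 0

def xor_net(x1, x2):
    # Hidden layer: one unit computes OR, the other AND.
    h_or  = step(x1 + x2 - 0.5)
    h_and = step(x1 + x2 - 1.5)
    # Output unit: OR and not AND, which is exactly XOR.
    return step(h_or - h_and - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, '->', xor_net(a, b))
```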
Again - these two practical applications only touch the surface of LA, but are
both fun applications of these basics to perhaps motivate you to learn more.
Even if you don't take it to the next level though, what you gain from these
experiments might prove invaluable in the future.
Personally, I think they should emphasize these two applications in lower
grades when they start to teach this stuff; I know when I was in high school
(too many years ago to contemplate), the only thing that kept me interested in
both my geometry and linear algebra sections was the fact that I was playing
around with 3D wireframe graphics on my 8-bit microcomputer at home, and
needed to understand the stuff!
/ok, maybe I outed my age somewhat...lol
R: DarkTree
> What you want to do is start from the bottom and build up (essentially
> building a software 3D engine)
How do you suggest starting this?
R: krat0sprakhar
It's more or less a rite of passage to share these Youtube videos whenever the
topic of Linear Algebra comes up:
[https://www.youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2x...](https://www.youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab)
R: pixelperfect
That series motivated me to learn Linear Algebra when I watched it 8 months
ago. After watching those, I started this course:
[https://www.youtube.com/channel/UCr22xikWUK2yUW4YxOKXclQ/pla...](https://www.youtube.com/channel/UCr22xikWUK2yUW4YxOKXclQ/playlists)
In my opinion, the latter is one of the best math courses available on
YouTube, and definitely deserves more views.
R: randcraw
I've watched about 1/3 of Strang's lecture videos and several of Klein's (as
well as bought both books).
Klein emphasizes practical computer science applications of LA (like principal
components and hands-on coding tasks), whereas Strang emphasizes LA in terms
of calculus and vector calculus.
I think both courses are outstanding. I suspect CS students will appreciate
Klein's content and examples more, though Strang lectures are so good you
won't find much to complain about. I have heard that some math purists object
to Strang's emphasis as lacking fundamental rigor and overemphasizing
intuition. But this criticism probably applies to both courses. I think both
approach LA in terms of its utility toward CS (Klein) or engineering (Strang)
problems.
R: onuralp
Disclaimer: I have an engineering background.
I think this is a fair characterization of the two approaches.
I am currently taking a class by Strang co-taught with Alan Edelman
(MIT/Julia) and Raj Rao (Michigan) that has a strong emphasis on applications
and hands-on coding tasks (using Julia).[0] I am also making my way through
CtM (thoroughly enjoying) and hope that they will release the video lectures
soon as I think the lectures and CtM complement each other quite nicely.
[0] Matrix Methods In Data Analysis, Signal Processing, And Machine Learning -
[https://stellar.mit.edu/S/course/18/sp17/18.065/](https://stellar.mit.edu/S/course/18/sp17/18.065/)
R: DarkTree
I just bought Gilbert Strang's Linear Algebra so that I can read it along with
watching his MIT lectures. I'm wondering how that will compare to this
book/course.
Has anyone here already taken a similar path and what did you think?
My main interests are in graphics programming, so I'm hoping to apply what I
learn from the course to that.
If anyone else has any recommendations on other areas of math, courses, or
books in general for learning CG, that would be much appreciated!
R: plmno
Suggestions
1\. Introduction to the Mathematics of Computer Graphics by Nathan Carter:
[http://www.maa.org/press/ebooks/introduction-to-the-mathematics-of-computer-graphics](http://www.maa.org/press/ebooks/introduction-to-the-mathematics-of-computer-graphics)
2\. When Life is Linear: From Computer Graphics to Bracketology by Tim
Chartier:
[http://www.maa.org/press/books/when-life-is-linear-from-computer-graphics-to-bracketology](http://www.maa.org/press/books/when-life-is-linear-from-computer-graphics-to-bracketology)
R: DarkTree
I'll check them out, thanks!
R: carlosgg
Videos for his course at Brown (they start at the bottom):
[https://cs.brown.edu/video/channels/coding-matrix-fall-2014/?page=1](https://cs.brown.edu/video/channels/coding-matrix-fall-2014/?page=1)
R: nafizh
The author's coursera course is no longer available sadly.
R: rectang
True, although the lectures from the Brown University version of the course
from 2014 are available here:
[https://cs.brown.edu/video/channels/coding-matrix-fall-2014/](https://cs.brown.edu/video/channels/coding-matrix-fall-2014/)
They're listed in reverse order; start with "Course Introduction--Sept. 3,
2014".
|
HACKER_NEWS
|
Recursive resolution ignores current scope .Use() instance
The problems is represented by the following example code: https://dotnetfiddle.net/Nq2ImP
The problems persists even if the factory-based .Use() overload is used, no difference.
Platform: .NET Core 3.1.5
DryIoc version: 4.2.1
Seems like a problem with the cache, and the presence of the AsyncExecutionFlow context makes it more obscure. Anyway, thanks for reporting. I will look.
Using static context makes no difference, so it's probably related to resolution cache: https://dotnetfiddle.net/Cc4tUu
Can I help you with anything? Currently this bug is blocking one of our feature release. I don't want to rush you, just an approx. time would be fine about when will you able to investigate it.
Thanks!
Will try to look Today
Here is the working test - check asResolutionCall setup and check the results for explanation https://github.com/dadhi/DryIoc/blob/master/docs/DryIoc.Docs/RulesAndDefaultConventions.md#injecting-dependency-asresolutioncall
But the problem still was there even with asResolutionCall - the good news is that fix is ready. I will release v4.2.2 shortly.
using NUnit.Framework;
namespace DryIoc.IssuesTests
{
[TestFixture]
public class GHIssue288_Recursive_resolution_ignores_current_scope_Use_instance
{
[Test]
public void Test()
{
var c = new Container(scopeContext: new AsyncExecutionFlowScopeContext());
c.Register<Child>(Made.Of(() => ChildFactory()), Reuse.Scoped,
// NOTE: this is important to make the dependency 'dynamic' and be replaceable even if parent expression is cached
// see https://github.com/dadhi/DryIoc/blob/master/docs/DryIoc.Docs/RulesAndDefaultConventions.md#injecting-dependency-asresolutioncall
Setup.With(asResolutionCall: true));
c.Register<Parent>(Reuse.Scoped);
// instance is resolved in outer scope
using (var outer = c.OpenScope())
{
var outerParentInstance = c.Resolve<Parent>();
Assert.IsNotNull(outerParentInstance);
var outerChildInstance = c.Resolve<Child>();
Assert.AreSame(outerParentInstance.Child, outerChildInstance);
using (var _ = c.OpenScope())
{
// inject instance
var inner = c.Resolve<IResolverContext>();
inner.Use(outerChildInstance);
// direct type resolution works
var innerChildInstance = c.Resolve<Child>();
Assert.AreSame(outerChildInstance, innerChildInstance);
// recursive type resolution skips the instance and calls the factory even though the instance is placed to the current scope
var innerParentInstance = c.Resolve<Parent>();
Assert.AreSame(innerChildInstance, innerParentInstance.Child);
}
}
}
public class Child
{
public int X { get; set; }
}
public class Parent
{
public readonly Child Child;
public Parent(Child child) => Child = child;
}
public static Child ChildFactory() => new Child();
}
}
What are the performance characteristics of using asResolutionCall? There are similar use cases in the context of resolving the current user instance for every request, so this is really hot code that will be called very frequently.
If you use Use to supply the service for Resolve then it is fast.
Otherwise it is slightly slower, but it will still use caches along the way.
@detoxhby Longer explanation:
Basically there is no other way to patch the Parent expression, and the "patching" is slow by definition.
Using asResolutionCall will keep the cached Parent expression intact at the cost of adding an extra Resolve call, which is still backed by the resolution cache.
@detoxhby Actually I will think about other possibilities, like replacing the service instance directly in Scope - but it will make things more complex and make more general cases without Use slower (it should be proved though). Also it is much more work. So I will release v4.2.2 as-is and postpone other things for the later.
I am open for ideas, help, and the new issues for the topic.
@dadhi thanks for the detailed explanation!
Thinking about the idea of having Use() do a full resolution and then replace the already-in-place instance in the scope's cache. That way the factory wouldn't need any special treatment, and the only custom logic needed would be the internal ability to replace the direct instance as an "atomic" operation (or simply to skip the call to the factory). I'm not sure whether a two-phase swap could remain lock-free, but a higher-level abstraction that lets the main resolver logic carry an optional instance sounds better.
The memory usage caused by the extra stack parameter (which would be null 99% of the time) may not be worth it; as you said, this needs a benchmark.
|
GITHUB_ARCHIVE
|
Redstone circuits in spawn chunks in Minecraft SMP and no players online
I tried to search this question, but it seems to be rather specific.
The problem I have is the following:
We have a small SMP server in Minecraft and I set up an iron golem farm
according to
(only redstone in the hopper sorter system) and an automatic wheat farm according to
(minecart running beneath the farm and a redstone circuit for the unloading as well as a hopper sorting system). Both farms work very well as long as a player is online.
Of course, they stop working roughly 60 seconds after the last player has logged out. So I read a bit about that topic and came up with the idea of sending an item through a nether portal every 45s via a piston/hopper clock plus a dropper. This kinda seems to work, since my iron golem farm is drowning itself in iron. However, the wheat farm seems to stop working when no one is online, as do any minecart/redstone builds in the spawn chunks. Is there any mechanic at work here that I overlooked?
To start things off, let me clarify something from the comments on the question. Things fixed in place in the world (blocks, placed items, and even flowing water or lava) are not considered entities.
Per the minecraft wiki
Entities encompass all dynamic, moving objects throughout the
minecraft world.
Here is a list of entities in minecraft:
Types of Entities
The reason your wheat farm is not working is that crops require at least one player to be within their chunk update radius.
Basically, your farm system works, but because the crops never grow, the villagers can't replant anything.
Your iron golem farm is working because it does not require any player to be in its chunk update radius.
Events processed in spawn chunks
Some extra information on crop growth requirements:
Crop growth requirements
Something to note about spawn chunks:
Spawn chunks will not be unloaded from memory, regardless of how far away a player moves, so long as there is at least one player in the
overworld. All overworld chunks (including the spawn chunks) are
unloaded 60 seconds after the last player in the overworld enters the
Nether or the End.
In your case, I believe that the chunks do not get unloaded because you are sending an item through the nether portal (items on the ground are considered entities).
Unfortunately, spawn chunks are only loaded if there is at least one player on the server. You could keep an unused account permanently in /gamemode 3 on the server near the chunks if you want; that way they would never unload. If you have an AFK kick on your server, you could build a small AFK machine (see this video) and have an unused player account permanently in /gamemode 1 constantly cycling this machine near the farms. This will also keep the chunks loaded.
Yes, this is what I thought. However, the fact that the iron farm keeps on working even when I know for sure that no one was online for at least a few days speaks somewhat against that... The same goes for the automatic chicken farm, which also keeps producing when no one is online. There has to be more to that story.
@Prof.Farnsworth The spawn chunks don't truly "unload" when no players are online. What happens is that entities and tile entities will no longer be sent an update tick, but everything beyond that will still work. Villages will still spawn iron golems, for example, but those golems will not move due to not receiving an update tick. If you have a chunk loader in place that is capable of loading enough chunks to tick entities, that would explain why that works.
@Skylinerw Well, I used this concept from here https://www.youtube.com/watch?v=dkxRvYKwNZU . But then I am still confused. If the golems, and obviously the chickens as well, do receive an update tick as entities, why don't the minecarts and the villager in the farm receive one? (Everything is inside the spawn chunk area.)
|
STACK_EXCHANGE
|
I have been using Edge as my primary browser since Windows 10 launched and testing my web application with it.
There is a particularly annoying issue with filtered dropdown lists: when you click in the text box to filter, Edge brings up its own autocomplete list of things that you have previously typed into text boxes. The problem is that this sits on top of the tree control and you cannot see it, let alone use it.
Pushing escape to get rid of the autocomplete closes the dropdown tree also.
Take a look at the attached screenshot that I took when attempting to access your demo site
Are you aware of this issue and do you have a fix in the works?
7 Answers, 1 is accepted
I am afraid I was not able to reproduce the described behavior in the demo you linked. At my end the demo looks and works as in the other browsers, including the ESC key closing the DropDownTree. Could you please specify step-by-step the actions a user has to take in order for the issue to be replicated? If it is more convenient for you, you can record a short video (with Jing for instance) that demonstrates the issue.
I have narrowed it down to the "save form entries" option hidden in the advanced settings of Edge.
If this is disabled it works without issue. Further to this if I clear the "form data" it also goes back to working.
Note that originally over time I built up a collection of saved values and have found not all forms cause the issue.
If you go to the following URL
and enter test, test, test and click login, which will of course fail.
Now test should appear on
as explained when you open the dropdown list and click in the filter box.
Thank you for the additional information provided.
We were able to reproduce the behavior following the steps you described. It seems this is an Edge browser behavior related to the saving option you have found, as it was reproduced not only with our DropDownTree control but with a simple <input> element as well. Since the DropDownTree's input area renders as an input element, it is affected as well.
good to hear that you have been able to replicate the issue.
I understand that this is a browser issue and not a dropdowntree issue, however was wondering if you had come up with a workaround or fix for the issue. I am not sure if there is anything that we can do to the input field so that Edge excludes it from this functionality.
Unfortunately telling users to either not use Edge or change that setting is not ideal and we have just started using the dropdowntree control quite heavily in our web application.
As a workaround until the browser behavior with the setting in question is fixed/changed by MS, you can set the "autocomplete" attribute of the form tag of your page as shown below:
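The code snippet from the original reply did not survive, but based on the description, a minimal sketch of the workaround looks like the following. The "off" value is the standard way to opt a form out of autofill; the form id and the placeholder content are hypothetical.

```html
<!-- Disabling autocomplete at the form level also covers the <input>
     element that the DropDownTree renders for its filter box. -->
<form id="mainForm" autocomplete="off">
    <!-- page content, including the DropDownTree, goes here -->
</form>
```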
I can confirm that this workaround does work and resolves my issue.
I am glad setting the attribute solves the issue at least temporarily, until an official solution becomes available.
|
OPCFW_CODE
|
Description
Collaboration and the free flow of information are critical to CERN's success in achieving its scientific goals, which is why CERN is spearheading the Open Access movement with partners across the scientific community, the publishing industry and libraries worldwide. The CERN Open Access Section is looking for a talented and inspired Web-Services Developer to connect people and science through our Open Access initiatives. If you are interested in combining your skills in web-based development with your interest in world-class digital libraries, this is the opportunity for you. Take Part!
Open Access to scientific results accelerates science and is a core mission of CERN Scientific Information Service. The Service manages the Sponsoring Consortium for Publishing in Particle Physics (SCOAP3): a partnership of libraries and funding agencies worldwide which, with scientific publishers, aims to convert key journals in the field of High-Energy Physics to Open Access, for everyone to read and freely re-use scientific results. The Service also supports CERN authors to make their publications Open Access, either in direct contact with publishers, or through a network of linked repositories such as CDS (cds.cern.ch), INSPIRE (inspirehep.net), and arXiv (arXiv.org).
You will join:
- The General Infrastructure Services Department, which provides and supports the infrastructure and technical and administrative services of the Organisation.
- The Scientific Information Service Group, which manages CERN's library as well as historical and scientific archives.
- The Open Access Section, which supports and implements CERN Open Access vision.
Web-Services Developer in the General Infrastructure Services Department (GS), Scientific Information Service Group (SIS), Open Access Section (OA).
As a Web-Services Developer and Digital-Libraries Operator in the Open Access Section, you will contribute to the implementation and operation of web-based services in support of the CERN scientific community to help authors fully record, leverage and disseminate their Open Access publications. In particular you will:
- Implement and operate an Open Access repository to ingest articles published under the SCOAP3 initiative, based on CERN Invenio digital library technology (invenio-software.org); and ensure interoperability with other platforms by SCOAP3 partners;
- Build services to link Open Access publications and metadata across relevant web platforms, according to the license under which such material is released;
- Integrate, throughout CERN Open Access services, relevant external information, products and services such as: CrossRef and DataCite DOIs, ORCID identifiers, FundRef information, Creative Commons licenses.
Bachelor in the field of computer science, software engineering, or equivalent.*
*Please note that preference will be given to candidates with the above-mentioned qualifications: In principle consideration will not be given to applications from people with higher qualifications.
Experience and competencies
The experience required for this post is:
- Demonstrated experience in the maintenance, development or operation of advanced information systems.
- Experience within a LAMP environment (Linux, Apache, MySQL, Python) and version control systems, preferably in open source projects.
- Familiarity with common formats and protocols in digital libraries such as MARC21, OAI-PMH, RDF, XML,XSLT as well as a basic understanding of scholarly communication (authors, publishers, manuscripts, persistent identifiers, Open Access, licenses, repositories) would be an advantage.
- Programming/Software development.
- Requirements definition and management.
- Database/Repository design.
- Working in Teams: Building and maintaining constructive and effective work relationships.
- Achieving Results: Driving work / projects along and seeing them through to their conclusion.
- Demonstrating Accountability: Taking responsibility for own actions and decisions.
- Communicating Effectively: Delivering presentations in a structured and clear way; adjusting style and content to the audience; responding calmly and confidently to questions.
- Spoken and written English or French: ability to draw up technical texts in one of the two languages, and a willingness to learn the other language.
Employing a diverse and international workforce is a CERN core value and central to our success. We welcome applications from all Member States irrespective of gender, age, disability, sexual orientation, race, religion or personal situation.
This vacancy will be filled as soon as possible, and applications should normally reach us no later than 02/09/2013.
By applying here, you allow CERN to consider your application for any position it considers relevant with respect to your profile. Please ensure you update your profile regularly with any relevant information and that you inform the recruitment service if you wish your file to be removed from the database.
Note on Employment Conditions
We offer a limited-duration contract for a period of 5 years. Limited-duration contracts shall terminate by default on their date of expiry.
Subject to certain conditions, holders of limited-duration contracts may be granted an indefinite contract.
How to apply
Published: Friday, July 19, 2013 16:18 UTC
Last updated: Tuesday, February 28, 2017 23:45 UTC
|
OPCFW_CODE
|
Thanks Frank J. Lagattuta!
There is a real problem with coloring a CListCtrl when using Geometry in a dialog:
NM_CUSTOMDRAW doesn't work cleanly.
To get CDDS_ITEMPREPAINT you should use Frank J. Lagattuta's OnNotify function!
Need help on how to use the Geometry templates in Composite
ATL controls used in MFC applications. For example, a Composite ATL control has a couple of edit controls and combo boxes, and only the edit controls need to resize.
I wanted to be able to allow the user to save the resized
window dimensions, but unfortunately I could not initialize
the window to the new size without experiencing some
amusing problems with the controls. That is, the controls
would be one size, while the window would be the original
size of the resource.
So, I modified and exposed the GeometryInitialize()
function to give me a way around the problem.
The new GeometryInitialize() function now looks like this:
// pExt is the new size of the parent rect
// mExt is the initial size of the parent rect

// Start from the control's initial rect...
int x = m_rsctl.m_iRect.left;
int y = m_rsctl.m_iRect.top;
int cx = m_rsctl.m_iRect.Width();
int cy = m_rsctl.m_iRect.Height();

// ...then compute the new coordinates of the control rect
if (m_constraint.m_options & CConstraint::eRatioX)
    x = pExt.cx * m_rsctl.m_iRect.left / mExt.cx;
if (m_constraint.m_options & CConstraint::eRatioCX)
    cx = pExt.cx * m_rsctl.m_iRect.Width() / mExt.cx;
if (m_constraint.m_options & CConstraint::eRatioY)
    y = pExt.cy * m_rsctl.m_iRect.top / mExt.cy;
if (m_constraint.m_options & CConstraint::eRatioCY)
    cy = pExt.cy * m_rsctl.m_iRect.Height() / mExt.cy;
if (m_constraint.m_options & CConstraint::eCenterX)
    x += (pExt.cx - mExt.cx) / 2;
if (m_constraint.m_options & CConstraint::eCenterY)
    y += (pExt.cy - mExt.cy) / 2;
if (m_constraint.m_options & CConstraint::eResizeCX)
    cx += (int)((pExt.cx - mExt.cx) * m_constraint.m_cxratio);
if (m_constraint.m_options & CConstraint::eResizeCY)
    cy += (int)((pExt.cy - mExt.cy) * m_constraint.m_cyratio);
if (m_constraint.m_options & CConstraint::eMoveX)
    x += (int)((pExt.cx - mExt.cx) * m_constraint.m_xratio);
if (m_constraint.m_options & CConstraint::eMoveY)
    y += (int)((pExt.cy - mExt.cy) * m_constraint.m_yratio);

// Never shrink below the initial size, or move left/above the initial position
if (x < m_rsctl.m_iRect.left)
    x = m_rsctl.m_iRect.left;
if (y < m_rsctl.m_iRect.top)
    y = m_rsctl.m_iRect.top;
if (cx < m_rsctl.m_iRect.Width())
    cx = m_rsctl.m_iRect.Width();
if (cy < m_rsctl.m_iRect.Height())
    cy = m_rsctl.m_iRect.Height();
Great code! I use it with a CFormView derived class of
mine - repositions/resizes controls, works great!
However, I ran into a problem. In my FormView I have
list ctrls and tree ctrls that allow for in-place editing
for subitems using dropdown list ctrl, spin ctrls, etc.
I have implemented this as described in other sections of
Basically, the list/tree ctrls use message reflection to
handle the BeginLabelEdit and EndLabelEdit notifications
and in their handling create an in-place ctrl of the proper
type, give it focus, and let the user edit. When the
in-place edit ctrl loses focus, it destroys itself and
sends an EndLabelEdit message back to the list/tree ctrl.
In geometry.h there is an override of the CWnd::OnNotify
function that looks like this:
|
OPCFW_CODE
|
As with anyone who copies large files, our lab has been having some issues with files
being corrupt after copying. To help deal with this I wrote this short tutorial to get
people started on using the
MD5 command line tool to solve this problem. This tutorial
was written for MacOSX, but you can do this on Linux using the
md5sum tool instead. This
tutorial was also written with the tool webdav in mind, as this is what our lab currently
uses to backup files, but you can use the MD5 hash to compare any files anywhere locally
This tutorial lives on Github here, please send any edits to improve it as pull requests to that repo.
How to check if your files copied correctly to webdav using the MD5 hash
Sometimes when files are copied from your local computer to webdav, mistakes are made and the copied files are corrupt/damaged and not usable. This can be hard to tell at a quick glance, especially if we have transferred a large number of files and left to have a coffee while this happened. What we can do after the file transfer to check that the files copied correctly is to compare the MD5 hash of the two files (the original and the copied file) and check that they are the same.
What is an MD5 hash? The MD5 hash for a file is a calculated 128-bit value for that file which can act like a fingerprint. Each different file should have a different MD5 hash, unless the files are identical (i.e., copies of each other). Like a human fingerprint, the probability of getting two identical MD5 hashes from two different files is extremely low. So we can use this feature for comparing files and checking their integrity after copying.
In this quick tutorial you will learn:
How to copy files from your local machine to webdav via the command line
How to get the MD5 hash from the original and copied files so that you can check if they were copied correctly
Copying files to webdav via the command line
We’ll start with the example of copying a single file. First, connect to webdav as you usually do.
Open a new Bash shell (e.g., Terminal on your Mac or Linux machine, or Git Bash on your Windows machine) and use the cp command to copy the file. A simple way to do this is to type cp in the command line, then a space, and then drag the file from Finder or Explorer into the terminal/Git Bash window. Next drag in the webdav folder that you want to copy it to from Finder or Explorer. Press enter to run the command. The command you run should look something like this:
cp /Users/tiffanytimbers/Desktop/Afzelius1976.pdf /Volumes/leroux-lab-files/Tiffany_Timbers/pdfs
If you want to copy multiple files, you can drag multiple files into the command line from your local machine and then the folder on webdav that you want to copy them to. For example:
cp /Users/tiffanytimbers/Desktop/Afzelius1976.pdf /Users/tiffanytimbers/Desktop/Bacon_1998.pdf /Volumes/leroux-lab-files/Tiffany_Timbers/pdfs
If you don't want to drag everything, but want to copy everything from one folder on your local machine to a folder on webdav with the same name, you would type cp with the -a flag (archive mode, which copies the folder recursively and preserves file attributes) and then drag in the folder you want to copy from your local machine, then drag in the folder you want to copy to. If that destination folder doesn't exist yet, you have to add its name to the end of the path.
Below we copy a folder called 2007_pdfs from our local machine to webdav. We want to put this folder into the folder named pdfs, but inside pdfs there is no folder named 2007_pdfs. So we have to drag in the pdfs folder from webdav and then add /2007_pdfs to the end of the filepath:
cp -a /Users/tiffanytimbers/Desktop/2007_pdfs/ /Volumes/leroux-lab-files/Tiffany_Timbers/pdfs/2007_pdfs
Checking the MD5 hash from the original and copied file via the command line
Now that the files are on webdav we can test whether they copied correctly. To do this we use the md5 command to look at the MD5 hash. To get the MD5 hash of the first .pdf file we copied we type:
md5 /Users/tiffanytimbers/Desktop/Afzelius1976.pdf
and we get the following as output:
MD5 (/Users/tiffanytimbers/Desktop/Afzelius1976.pdf) = 5dc399fc6f64b608e4995f77db41995e
5dc399fc6f64b608e4995f77db41995e is the MD5 hash for
Afzelius1976.pdf and is unique to
that file. Let’s now look at the MD5 hash for the copy of
Afzelius1976.pdf on webdav:
md5 /Volumes/leroux-lab-files/Tiffany_Timbers/pdfs/Afzelius1976.pdf
we get the following as output:
MD5 (/Volumes/leroux-lab-files/Tiffany_Timbers/pdfs/Afzelius1976.pdf) = 5dc399fc6f64b608e4995f77db41995e
We can see that the two MD5 hashes are identical, and so we know that the file copied correctly.
To get the MD5 hash for many files in a directory we can use a wildcard to match all files as follows:
md5 /Users/tiffanytimbers/Desktop/2007_pdfs/*
we get the following as output:
MD5 (/Users/tiffanytimbers/Desktop/2007_pdfs/Sieburth et al 2007.pdf) = 45e55165d940196274b1e8d11ad078d3
MD5 (/Users/tiffanytimbers/Desktop/2007_pdfs/Sossin 2007.pdf) = 96948bdc62e7d0d723976b5f13b7998a
MD5 (/Users/tiffanytimbers/Desktop/2007_pdfs/Speese et al 2007.pdf) = 3c9585eb5756733504d5f9eab251502e
Now you know how to copy files to webdav via the command line as well as how to check if the files copied correctly.
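If you do this often, the comparison can be scripted. A minimal sketch follows; the file names are stand-ins for your local file and its webdav copy (the script creates demo files so it runs on its own), and it uses md5sum as found on Linux (on macOS, substitute md5 -q).

```shell
# Sketch: compare the MD5 hash of an original file and its copy.
# Replace the two paths below with your local file and its webdav copy.
original="Afzelius1976.pdf"
copy="Afzelius1976_copy.pdf"

# Self-contained demo: create an original and copy it.
printf 'demo contents\n' > "$original"
cp "$original" "$copy"

# md5sum prints "<hash>  <path>"; awk keeps just the hash.
hash_original=$(md5sum "$original" | awk '{print $1}')
hash_copy=$(md5sum "$copy" | awk '{print $1}')

if [ "$hash_original" = "$hash_copy" ]; then
    echo "OK: hashes match, the copy looks good"
else
    echo "WARNING: hashes differ, the copy may be corrupt"
fi
```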
|
OPCFW_CODE
|
Annu. Rev. Astron. Astrophys. 1984. 22:
Copyright © 1984 by . All rights reserved
Even though they contain only a small fraction of the galaxies in space, rich clusters stand out distinctly against the fabric of galaxies that make up the visible Universe. Because of their high surface densities and large number of very luminous galaxies, clusters can be identified out to distances comparable to the present horizon of the Universe, making them important tools in the study of cosmology (Hubble 1936, Zwicky 1938). Because of this legacy, research on clusters of galaxies has traditionally centered on the measurements of standard candles (e.g. the luminosity of the brightest cluster member, or the ``knee'' in the cluster luminosity function) or standard metrics (radii of cluster galaxies or the cluster distribution as a whole). The review article by Bahcall (1977) reflects a goal of researchers at that time: to provide a morphological description of the sizes, shapes, and galaxy content of clusters, and whenever possible, the applicability of these parameters to cosmological investigations.
In the 1970s a new course of research on clusters of galaxies began to emerge with the resurgence of interest in galaxy evolution. It was recognized that certain clusters contain ``supergalaxies'' unlike any objects seen in the general field. How had these objects evolved in the unique environment of clusters? The general population of galaxies in clusters is highly skewed toward elliptical and S0 galaxies, a population quite unlike the spiral-dominated field where most galaxies are actively forming new stars. What could this obvious difference tell us about the influences of environment both in the formation and evolution of the different morphological types? It was discovered that many rich clusters contain a pervasive, hot intergalactic gas. Might interactions between this gas and galaxies be strong enough to alter galaxy properties such as their own gas fraction and rates of star formation? Clusters, it was realized, are laboratories for the study of galaxy evolution and may become as useful as star clusters are in the study of stellar evolution.
This review is restricted to recent work on clusters and groups of galaxies that is relevant to the study of galaxy evolution. Such subjects as cluster morphology, structure (e.g. spatial distribution of galaxies, core radii), luminosity functions, mass-to-light ratios, catalogs, and dynamics are discussed only in connection with how galaxies may form and change in response to the cluster environment. Also beyond the scope of this review is the rapidly growing area of research that uses large-scale clustering properties to model the evolution of the early Universe (e.g. the distribution of clusters in space, the multiplicity function for clusters, the evolution of clusters and superclusters from primordial fluctuations). Two other areas have grown so rapidly since the Bahcall review that they warrant their own chapters: X-ray emission from clusters (Forman & Jones 1982) and the HI content of cluster galaxies (Haynes et al. 1984). These reviews are themselves highly relevant to the topics discussed here.
The following discussion is divided into four sections: the laboratory - a brief introduction to clusters of galaxies; mergers, tidal stripping, and accretion - the evolution of cD galaxies; the development of the morphological types; and the populations of clusters as a function of cosmological look-back time.
|
OPCFW_CODE
|
Thanks for the interest SneakySly,
Now.....this is most definitely a work in progress. I came up with the idea when designing a RT trader campaign. I really wanted the idea of a massive background dynasty to be churning away behind the main play, and kind of have a simple "sim empire" at the end of each session. Getting my hands on the BC book immediately had me modifying my system.
The way it works is that players will use acquisition tests to bolster their resources on each world. Each world will offer unique units, as well as a core denoted with "all" in origin, that can be transferred to different systems if the Heretics have a void ship, or some other means of transport. In addition, the plot can reward the player with Unique Crusader Units. In the example workbook, it seems as though our heretics have rallied the Xurunt at the Scarred Steppe, possibly by bringing down waves of savage allied Ork Freebooters and Berin Mercenary Greenskins (tough and not as gaudy as the freebooters). In addition, Heavy Aether Craft from Q'Sal are supporting the efforts from the skies, raking the enemy with arcane shells and bolts of energy.
So....who cares? The whole purpose is to make opposing Infamy tests against rival factions in each planetary territory. Each unit has stats which combine with random numbers to generate an Infamy bonus for each opposing warlord. After each "Conquest" after-game phase, each unit will subtract a random number (first column in attrition) from, AND add a random number (second column) to, its status percent, to a max of 100%. For a unit to contribute its Infamy bonus it must roll equal to or under its status on a d100.
If the difference in attrition values is equal to the unit's Break (BRK) value, the unit immediately leaves your service for any number of reasons. The first four columns represent the scale of acquisition. For each step up the scale, the unit's starting status increases by 25%.
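For anyone who wants to poke at the tuning, here is a small sketch of the status/attrition mechanic in code. The die sizes and sample numbers are my own illustrative assumptions; only the rules themselves (subtract one attrition roll, add the other, cap at 100%, leave on the Break value, contribute on a d100 roll at or under status) come from the description above.

```python
import random

def conquest_phase(status, loss_die, gain_die, brk, rng):
    """Apply one after-game Conquest phase; returns the new status percent,
    or None if the unit leaves your service."""
    loss = rng.randint(1, loss_die)   # first attrition column
    gain = rng.randint(1, gain_die)   # second attrition column
    if loss - gain == brk:            # difference equals Break: unit leaves
        return None
    return min(100, status - loss + gain)

def contributes_infamy(status, rng):
    """The unit adds its Infamy bonus on a d100 roll equal to or under its status."""
    return rng.randint(1, 100) <= status

# Illustrative run: a unit starting at 75% status over three Conquest phases.
rng = random.Random(7)                # seeded so the sketch is reproducible
status = 75
for battle in range(3):
    status = conquest_phase(status, loss_die=10, gain_die=10, brk=9, rng=rng)
    if status is None:
        print("unit has left your service")
        break
    print(f"status {status}%, contributes Infamy: {contributes_infamy(status, rng)}")
```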
The second workbook is kind of my workbench, where I have written down random ideas for core, planetary and unique units. I have yet to set the stats up, but my group will eventually have a warband spread out across many territories in a dozen systems. There are, however, going to be NPC warbands with similar forces that will compete for resources. Lastly, acquisition of resources will be tied to certain systems. So in rolling for that large amount of flak armor, it is up to the player to decide which battlefield to send it to, or move one of their void ships to transfer it.
Where the acquisition roll is made will affect how difficult a resource is to acquire. So for example, a player can acquire Barons of Forge Polix at the Hallows, but must figure out how to get them to their battlegrounds on the Flaming Tomb. Remember too that this is on a grand scale. I'm sure a group of Barons of Forge Polix could find their own way across the Vortex, but their entry encompasses dark archeotech labs, legions of serfs, equipment, mobile laboratoriums, etc.
The mechanics still need some major tuning, so any input would be great. Also, the warband represented in the Scarred Steppe example would already be of considerable Infamy. A starting warband may have just a handful of core units on only a few battlefields.
|
OPCFW_CODE
|
Can I preload multiple video resources to load in the same HTML5 <video> element?
I'm working out details on a web application which involves the sequential loading of a long series of (very short) video clips, one after the other, with occasional input from the user establishing new directions for which video clips to load.
I would like to be able to have the browser preload the video clips five at a time. However, the way that we currently have the site working is by means of a single video element which is having its src attribute continually updated through JavaScript.
Is there a straightforward way I can get the browser to preload multiple video clips even though I am ultimately loading them all (one at a time) into the same video element?
Very well phrased, very clear. Welcome to Stack Overflow!
Thanks! I wasn't sure if the question was too vague or general.
i am trying to do something similar think i might be close. this api really helps see the inner workings and how to use them http://www.w3.org/TR/html5/video.html#video post back when i am done
Were you able to accomplish what you wanted? I'm trying to do something similar and I wonder if you can force browsers to cache video in a predictable way. Would appreciate any experience you share...
You can preload images in browsers by creating an <img> tag in JavaScript, and setting its src attribute. Although it’s not required by any spec, all browsers then download the image and cache it (assuming their caches haven’t been disabled).
I’ve no idea if that works with the <video> element in HTML5, but it might do. Could you give it a go?
I have tried something similar, but my problem is that the page is designed to only have one video element, whose src gets changed each time a clip reaches its end. I can create multiple video tags with JavaScript and use them to preload a series of clips, but when the script to change the src attribute of the main video element gets called, the page does not use the clips it has preloaded for the other video elements, even though the src value is the same; rather, it seems to fetch the clips again. So, it just ends up being redundant.
I should explain that the video element in question is basically the entire page. I guess I need a way to tell the browser to grab a set of files and cache them until it is time to load them into the video element one at a time. And I need it to happen in JavaScript because the needed files change based on user input.
@Sergio1132: ah, so the browser doesn’t seem to be caching the video files. In that case, I think you’ll have to change your approach — create additional <video> elements, and then swap them in, instead of using just one <video> element.
Thanks, I ended up doing something along these lines. I'm using multiple hidden <video> elements with a <canvas> element. The frames get drawn to the <canvas> and when the clip is over the script simply changes which <video> element is getting drawn to the canvas.
@Sergio1132: ah, good stuff. Out of interest, what’s the advantage of having a <canvas> element involved as opposed to just showing and hiding the <video> elements via CSS?
@Paul D. Waite: Well, for switching between the videos I don't think there is an advantage, but the site will have a minimal text interface that is laid over the video and I thought having the whole thing be a <canvas> element might make it possible to do some interesting things. Right now we are just using css positioning to lay the interface over the video, but I was thinking that if we draw the interface onto the canvas along with the video then it opens up the possibility of having the interface subtly interact with the video underneath it, using various <canvas> methods.
@Paul D. Waite: I'm still testing to see how it performs, and if there is an apparent performance issue with the <canvas> intermediary then we will probably just show and hide the <video> elements via CSS, as you say.
@Sergio1132: ah yes I see, neato. Well worth some experimenting.
I was curious if you happened to finish the project you were working on? I am experimenting with an html5 video player with a playlist and was wondering if you ran into any big problems. Were there performance issues with loading/playing multiple video files? Were you able to run this on an iPad? Did you have a fallback for IE?
My approach was going to be: have a single <canvas> element with multiple hidden <video> elements and use the canvas drawImage() function to display and crossfade between videos.
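As a sketch of the multiple-hidden-elements idea discussed in this thread, here is one way to track which clips should currently be preloaded. The function and variable names are made up for illustration; in the browser, each URL returned by preloadList() would be assigned to a hidden <video> element's src, and advance() would be called from the current clip's ended handler.

```javascript
// Rotating preload queue: keeps `lookahead` upcoming clip URLs "hot".
// Pure logic only; wiring it to hidden <video> elements is left to the page.
function createPreloadQueue(urls, lookahead) {
  let next = 0;
  return {
    // URLs that should currently be preloaded into hidden <video> elements
    preloadList() {
      return urls.slice(next, next + lookahead);
    },
    // Call when the current clip ends; returns the URL to play now, or null
    advance() {
      return next < urls.length ? urls[next++] : null;
    },
  };
}

const queue = createPreloadQueue(["a.mp4", "b.mp4", "c.mp4", "d.mp4"], 2);
console.log(queue.preloadList()); // the next two clips to keep preloaded
queue.advance();                  // a clip finished; the window slides forward
console.log(queue.preloadList());
```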
|
STACK_EXCHANGE
|
Technically, a data relationship exists when data from one table is bound to another table through the use of specific fields and values within those fields.
Data relationships can be compared to families (we’re talking real people, not data stuff). We can have a parent (or grandparent) with a connection to their children, or indirectly to their children’s children. They are bound together by their family bonds (whatever form they take—there is still a connection). That’s one of the reasons we frequently use terms like parent, children and even siblings when discussing relationships between tables.
How do tables get ‘bonded’ together?
It works with one table having a primary key field, and another table having a foreign key field. The primary key field is typically the unique identifier for the table it lives on. The foreign key field lives on the other table. It contains the value of the primary key field for the record that is to be associated. Tables should have a primary key–it’s how each row in the table is uniquely identified.
Let’s say we want to identify each of our customers’ super favorite desserts. This simple example is one way to meet this requirement:
One table is the list of desserts, where each is identified by a unique Id (primary key). In the second table is a list of customers, also with their own primary key unique Ids. Notice that the Customer table also has a column called “Favorite Dessert”–the foreign key–containing the corresponding Dessert Id identifying the customer’s favorite dessert, establishing a data relationship.
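The dessert example can be sketched in plain Python dicts (the record values here are invented for illustration): the customer record stores the dessert's primary key as a foreign key, and "following the relationship" is just a lookup.

```python
# Dessert table: primary key -> record
desserts = {
    "D1": {"name": "Cheesecake"},
    "D2": {"name": "Baklava"},
}

# Customer table: each record carries a foreign key into `desserts`
customers = {
    "C1": {"name": "Jane Smith", "favorite_dessert": "D2"},
    "C2": {"name": "Mitesh Patel", "favorite_dessert": "D1"},
}

def favorite_dessert_name(customer_id):
    """Follow the foreign key from a customer to the dessert it points at."""
    fk = customers[customer_id]["favorite_dessert"]
    return desserts[fk]["name"]

print(favorite_dessert_name("C1"))  # Baklava
```

Note the direction: the customer record can find its dessert, but a dessert record contains nothing pointing back at its customers.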
If we think about how these tables relate—it implies that relationships have a ‘direction’. The foreign key ‘knows’ which record on the primary key’s table it relates to, because it contains the value of the primary key. In the above example, the table that is being related to, the “Dessert table”, has no way to know what is related to it. It contains no information regarding what foreign key fields have its primary key values: it’s oblivious (we’ll get back to this in a moment).
The benefits of relating two different tables in this manner are tremendous. Consider a more realistic business requirement: tracking all orders for each customer. If we couldn’t leverage more than one table we could end up with something like this:
| Customer First Name | Customer Last Name | Customer Street Address | Customer City | Customer Territory | Customer Postal Code | Order | Order Date | Order Status | Order Amount |
|---|---|---|---|---|---|---|---|---|---|
| Jane | Smith | 100 Main | Mainville | TX | 01112 | 183 | 3/18/2020 | Back order | $56.23 |
Above, the customer information is stored multiple times. That creates inefficiencies and can lead to confusion and data inconsistency (when the customer wants to update their info, where does that update get applied?).
With two tables, the above instead can look like this:
| Customer Id (Primary Key) | Customer First Name | Customer Last Name | Customer Street Address | Customer City | Customer Territory | Customer Postal Code |
|---|---|---|---|---|---|---|
| C8876 | Mitesh | Patel | 6621 Grove Street | Grange | IA | 43112 |

| Order Number | Customer Id (Foreign Key) | Order Date | Order Status | Order Amount |
|---|---|---|---|---|
Through this simplified example, we can see the benefit of relating tables together (note that splitting data into too many tables can sometimes lead to other problems, which is outside the scope of this discussion; maybe for another time).
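A rough sketch of the split (illustrative data): customer info lives in one place, orders reference it by foreign key, and the flat view can always be recovered by joining.

```python
# Customer table: one record per customer
customers = {"C8876": {"first": "Mitesh", "last": "Patel", "city": "Grange"}}

# Order table: each order carries the customer's primary key as a foreign key
orders = [
    {"order_number": 183, "customer_id": "C8876", "amount": 56.23},
    {"order_number": 184, "customer_id": "C8876", "amount": 12.00},
]

# Updating customer info touches exactly one record...
customers["C8876"]["city"] = "Des Moines"

# ...and joining on the foreign key recovers the flat single-table view.
flat = [{**customers[o["customer_id"]], **o} for o in orders]
```

Every joined row now reflects the updated city, with no risk of the inconsistency the single-table layout invites.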
On the Salesforce platform, the Setup UI for creating fields streamlines the process of working with primary and foreign keys. Whenever we create a field on a Salesforce object (they aren't referred to as tables on the platform—this will be explored shortly) that references another object, that new field is a foreign key. The platform assumes that the primary key is the standard Id field on the related object for our newly created relationship field. This behavior applies to Salesforce's master-detail, lookup and hierarchical relationship fields (these and other types of relationships will be explored in the following posts).
These relationships allow 'traversing' between objects, letting us build complex formula fields and automations that can reference an object multiple times removed, as long as there is a chain of relationships that can be followed.
When we design a solution and are thinking about what kind of relationship field should be considered, it's usually a good idea to identify how the data will relate to each other. There are three ways:
- One-to-one
- One-to-many
- Many-to-many
It can get difficult keeping track of objects and relationships—especially as things grow. To help maintain sanity there are a couple of different ways we can visualize these relationships. Probably the most popular visualization tool is the entity relationship diagram (ERD) leveraging crow’s foot notation (there’s a ton of on-line resources that explore ERDs and crow’s foot notation—it would be time well spent!).
Salesforce offers ERDs describing the key relationships for standard objects here (https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/data_model.htm). Of course, we would be remiss in not calling out Salesforce’s Schema Builder that automatically provides a visualization of objects in a particular instance and how they relate.
Let's circle back to the idea, mentioned earlier, that the primary key is 'oblivious' to what is related to it. Salesforce improves upon this in two ways:
- Related lists
- Unique child relationship names
Let's dive into these.
Related lists are a huge benefit: think about it. Just by creating a relationship we automatically get a UI component we can add (or not add) to the screen that will display all the related records. On top of that, each child relationship requires a unique name (see https://developer.salesforce.com/docs/atlas.en-us.soql_sosl.meta/soql_sosl/sforce_api_calls_soql_relationships_and_custom_objects.htm to nerd out).
Together, this gives us some real flexibility. Consider the use case where we have a custom asset object that needs to relate to two different Accounts (through two different relationship fields)—where one identifies the manufacturer and the other the distributor. This translates into having two different related lists on the Account, one for each relationship.
Side note: Why are they called objects and not tables in Salesforce? For fear of oversimplification, it's thanks to the paradigm of object-oriented programming. Object-oriented programming revolves around the concept of an object, which is made up of two parts:
- Properties: the data the object holds
- Methods: the behavior that operates on that data
When we think about Salesforce objects it’s evident that they each appear as a table (they have fields!). What isn’t as obvious is how the platform automatically provides us with this extra juicy goodness: sharing, security, list views, related lists, reporting, etc. These are automatically available whenever we create a new object. To close the loop: the table is the property and all that juicy goodness is the methods; hence we refer to Salesforce tables as objects.
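As a toy illustration of that "properties plus methods" framing (this is ordinary Python, not Salesforce code — the class and fields are invented):

```python
class Customer:
    # The "table-like" part: fields holding data
    def __init__(self, first_name, last_name):
        self.first_name = first_name
        self.last_name = last_name

    # The "extra juicy goodness" part: behavior that travels with the data
    def full_name(self):
        return f"{self.first_name} {self.last_name}"

jane = Customer("Jane", "Smith")
print(jane.full_name())  # Jane Smith
```

A bare table only gives you the first half; a Salesforce object bundles the second half in for free.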
In Part 2 of the Exploring Salesforce Data Relationships series, "Relationship Workhorses", we're going to build on what we just covered and explore Salesforce's workhorse relationships: lookup, master-detail and hierarchical. It's gonna get interesting!
View the entire 5-part Salesforce Data Relationships series here.
|
OPCFW_CODE
|
#include <vector>
#include "drforest.h"
namespace drforest {
FeatureSampler::FeatureSampler(uint num_total_features, uint seed) :
num_total_features_(num_total_features),
udist_(0, num_total_features_ - 1) {
random_state_.seed(seed);
}
arma::uvec FeatureSampler::draw(uint num_features) {
// if we request more than the total number of features
// just return all of the features.
if (num_features > num_total_features_) {
return arma::linspace<arma::uvec>(
0, num_total_features_ - 1, num_total_features_);
}
arma::uvec feature_indices(num_features);
// bitset to track which features we have already drawn
std::vector<bool> bit_set(num_total_features_);
uint num_selected_features = 0;
while (num_selected_features < num_features) {
const uint feature_index = udist_(random_state_);
if (!bit_set.at(feature_index)) {
feature_indices(num_selected_features) = feature_index;
// set bits
bit_set.at(feature_index) = true;
num_selected_features++;
}
}
return feature_indices;
}
} // namespace drforest
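As a standalone sketch of the same rejection-sampling technique used in `draw()` above (standard library only; the function name and types here are illustrative and not part of drforest):

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Draw `k` distinct indices uniformly from [0, n) by rejection sampling,
// mirroring the bitset-based loop in FeatureSampler::draw.
std::vector<std::size_t> sample_without_replacement(std::size_t n,
                                                    std::size_t k,
                                                    unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_int_distribution<std::size_t> udist(0, n - 1);
    std::vector<bool> drawn(n, false);  // bitset of already-drawn indices
    std::vector<std::size_t> out;
    out.reserve(k);
    while (out.size() < k) {
        const std::size_t idx = udist(rng);
        if (!drawn[idx]) {  // keep first occurrence, reject duplicates
            drawn[idx] = true;
            out.push_back(idx);
        }
    }
    return out;
}
```

Rejection sampling is cheap when `k` is small relative to `n` (the common case when sampling a feature subset per tree), which is presumably why the original loop takes this form.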
|
STACK_EDU
|
HR managers spend about 3 seconds looking at each CV. In that short time, they decide whether to give the candidate a chance. And the bigger the company, the less time they spend on each resume. Creating a personalized application can help you stand out from the crowd. A customized videogame or an interactive website can really get you your dream job.
1. PERSONAL CONNECTIONS
The easiest way to get your CV considered for the position is to be referred by someone in the company. If you don't have existing connections, try to create some. You can reach out to the decision makers on LinkedIn or on Twitter. Invite them for a coffee and learn more about the company and its culture. This will help you adapt your application and ask for a referral later on. It's always better to get your application boosted by an internal referral than to just apply on the job posting page.
2. A CUSTOM VIDEOGAME
If you're interested in working in the gaming industry, showcase your skills by developing a videogame. Host it on a creative web domain and send the link to the company. Marius Fietzek developed this adventure game to ask for an internship. He got the job. This method can be applied to many different industries. It's always easier to get noticed if you show the company what you're able to do. And it's a starting point for your first interview with them.
3. DATA VISUALIZATION
Many companies are looking for data engineers and data scientists, but the competition between candidates is getting tougher and tougher. You need to be a little creative to stand out. For example, you can target a specific company and create a visualization of its public data. Imagine Twitter is looking to hire: you could analyze the past year's conversations about the company on the platform. Then you can simply email them the link, or try to make it go viral. In the process you'll learn a lot about the company itself, and you can use that information during the interview.
4. COMPETITIONS
Competitions are a great way to get noticed. You can choose to compete online or offline, and you will be able to include the results in your CV and in your portfolio. Head-hunters often look for their next hire on sector-specific websites; for software engineers, for example, they can browse candidates on StackOverflow, GitHub, and ProductHunt. Specific competitions, like Kaggle, are a great way to showcase your skills to potential employers.
5. TOTALLY “OUT-OF-THE-BOX" IDEAS
Sometimes, you need to go the extra mile to really catch attention. This is what Adam Pacitti did when he was looking for a job. He spent his last $500 on a giant billboard that simply said: "I spent my last $500 on this billboard. Please give me a job. EmployAdam.com". It was very direct, and he uploaded a funny video CV to his website. Another option is to send something to the HR manager or the team lead in order to be noticed. Some people have printed their CVs on chocolate bars, cereal boxes, cups, and whatever else might be related to the job. Try different things, as long as they relate to the job you're trying to get.
6. SIGN UP TO INDUSTRY-SPECIFIC MARKETPLACES
This option requires much less effort than the others, but it's quite effective as well. You can use industry-specific marketplaces, like 100offer, to get in touch more easily with the companies you're interested in. Usually, they help match your skills with the companies' requirements. You'll get only highly qualified interviews for the kind of job you want, and they'll help you through the process.
These are just some ideas on how to get noticed by your dream company. Be creative, but remember to keep it relevant. And deliver something to highlight the skills you think the company might appreciate. If they don't hire you in the end, maybe you'll be lucky enough to receive a call from a similar company you didn't even consider. Be open to all the possibilities!
|
OPCFW_CODE
|
<?php
use Wambo\Catalog\Mapper\ContentMapper;
/**
* Class ContentMapperTest tests the Wambo\Catalog\Mapper\ContentMapper class.
*/
class ContentMapperTest extends PHPUnit_Framework_TestCase
{
/**
* If the given content data is valid a Content model with the given content data should be
* returned
*
* @test
* @dataProvider getValidContentData
*
* @param array $contentData Product content data
*/
public function getContent_ValidContentDataGiven_ContentWithSummaryAndDescriptionIsReturned($contentData)
{
// arrange
$productMapper = new ContentMapper();
// act
$content = $productMapper->getContent($contentData);
// assert
$this->assertNotEmpty($content->getSummaryText(), "The summary of the content model should not be empty");
$this->assertNotEmpty($content->getProductDescription(),
"The description of the content model should not be empty");
}
/**
* If the given content data is valid a Content model with the given content data should be
* returned
*
* @test
* @dataProvider getContentDataWithMissingAttributes
* @expectedException Wambo\Catalog\Exception\ContentException
* @expectedExceptionMessageRegExp /The field '.+' is missing in the given content data/
*
* @param array $contentData Product content data
*/
public function getContent_FieldsMissing_ContentExceptionIsThrown($contentData)
{
// arrange
$productMapper = new ContentMapper();
// act
$productMapper->getContent($contentData);
}
/**
* If some of the given attributes are invalid a ContentException should be thrown
*
* @test
* @dataProvider getContentWithInvalidAttributes
* @expectedException Wambo\Catalog\Exception\ContentException
* @expectedExceptionMessageRegExp /Failed to create a content model from the given data/
*
* @param array $contentData Product content data
*/
public function getContent_InvalidAttributes_ContentExceptionIsThrown($contentData)
{
// arrange
$productMapper = new ContentMapper();
// act
$productMapper->getContent($contentData);
}
/**
* Get a list of valid content data for testing
*
* @return array
*/
public static function getValidContentData()
{
        // Each top-level element is one data set; each data set wraps a
        // single $contentData argument for one test invocation.
        return array(
            [
                [
                    "title" => "Product Title",
                    "summary" => "A product summary",
                    "description" => "A detailed product description",
                ]
            ],
            [
                [
                    "title" => "Product Title",
                    "summary" => "ABCdefghijklmnopqrstuvwxyzöüä.ABCdefghijklmnopqrstuvwxyzöüä.ABCdefghijklmnopqrstuvwxyzöüä.ABCdefghijklmnopqrstuvwxyzöüä.",
                    "description" => "A detailed product description ...",
                ]
            ]
        );
/**
* Get a list of content data with invalid attributes for testing
*
* @return array
*/
public static function getContentWithInvalidAttributes()
{
        // Each invalid attribute combination is its own data set so every
        // case triggers a separate test invocation.
        return array(
            // title empty or too short
            [
                [
                    "title" => "",
                    "summary" => "Product summary"
                ]
            ],
            // title too long
            [
                [
                    "title" => "ABCdefghijklmnopqrstuvwxyzöüä.ABCdefghijklmnopqrstuvwxyzöüä.ABCdefghijklmnopqrstuvwxyzöüä.",
                    "summary" => "Product summary"
                ]
            ],
            // summary empty
            [
                [
                    "summary" => "",
                ]
            ],
            // summary too short
            [
                [
                    "summary" => "A",
                ]
            ],
            // summary too long
            [
                [
                    "summary" => "ABCdefghijklmnopqrstuvwxyzöüä.ABCdefghijklmnopqrstuvwxyzöüä.ABCdefghijklmnopqrstuvwxyzöüä.ABCdefghijklmnopqrstuvwxyzöüä.ABCdefghijklmnopqrstuvwxyzöüä.",
                ]
            ]
        );
}
/**
* Get a list of content data object with missing attributes for testing
*
* @return array
*/
public static function getContentDataWithMissingAttributes()
{
        // Each missing-attribute variant is its own data set so every
        // case triggers a separate test invocation.
        return array(
            // title: wrong casing
            [
                [
                    "Title" => "Product title",
                    "summary" => "A product summary",
                    "description" => "A detailed product description",
                ]
            ],
            // title: missing
            [
                [
                    "summary" => "A product summary",
                    "description" => "A detailed product description",
                ]
            ],
            // summary: wrong casing
            [
                [
                    "title" => "Product title",
                    "SUMMARY" => "A product summary",
                    "description" => "A detailed product description",
                ]
            ],
            // summary: missing
            [
                [
                    "title" => "Product title",
                    "description" => "A detailed product description",
                ]
            ]
        );
}
}
|
STACK_EDU
|
The official Microsoft Store application is all very well, but it often pushes the same old titles and it's not trivial to find new releases, especially for Windows 10 Mobile. Which is why a fresh pair of eyes, courtesy of 9Zen Store, is appreciated. Not least because there's a filter just for UWP (Windows 10) applications.
From the Store entry for 9Zen Store (and yes, this is getting a little 'meta'):
Cannot find any new or awesome apps on the windows store? 9Zen Store helps you find awesome, free, discounted apps for you windows 10 device.
The app which eliminates the major drawbacks of the windows store.
Browse apps and see their reviews/ratings from over 60 countries.
Search apps with an advanced search features for apps not shown by windows store search.
Get app details like update date, release date of windows phone 8.1 apps (Pro feature).
Like an app but want to download later. Add it to the download later list which you can easily check out later.
Discover apps which are recommended by us.
See free/discounted apps for the day.
Copy the store link easily to share with anyone.
It all works as advertised, though not all details are surfaced for each app and in most cases you'll need to tap through to the 'real' Store, install, and then tap back out again. But it's all worth it for the new applications that I've discovered in the last 24 hours. Of which more on AAWP in the coming months.
Here's 9Zen Store in action:
New and Rising is the best place to start and seems to offer stuff that's genuinely new and which I hadn't seen before in the Microsoft Store's presentation; (right) diving into an application's entry. There are reviews and some feature data below, but in most cases you will need to 'Open in Store' and then back out again afterwards...
There's also a 'Download later' function, essentially keeping track of all the items you find interesting and handing them in a list to you later on, when convenient to do the actual installations; (right) the hamburger menu of this new UWP app - note the deals and Windows 10 filters!
The Settings are comprehensive, allowing customisation of the Store listings by country/market, and (right) fiddling with theme, colours, and so on.
Finally, here are the two big new filters in action - UWP apps only and (right) apps with deals running at the moment.
You can grab 9Zen Store here - it's a free download and highly recommended. The extra navigation into and out of the actual Store is well worth it for the new content that you'll unearth.
|
OPCFW_CODE
|
Top 7 Machine Learning Frameworks for Data Science Projects in 2023
In the vast landscape of machine learning libraries, selecting the right framework can be a challenging task. It is crucial to choose a framework that aligns with your project requirements and keeps you at the forefront of innovation. Whether you are a beginner or an experienced practitioner, the right framework can significantly impact your journey in building intelligent systems. In this article, we will explore the top-rated machine learning frameworks for data science projects in 2023.
TensorFlow: Powerhouse of Possibilities
TensorFlow is an open-source library that has gained prominence in the field of machine learning. Developed by Google, TensorFlow offers a comprehensive ecosystem for building and deploying machine learning models. It is known for its robust support for deep learning algorithms and its flexibility to work across various platforms and hardware. TensorFlow’s wide adoption and a vast community ensure ample resources and pre-trained models to accelerate your projects.
PyTorch: Unleashing Creativity
PyTorch, backed by Facebook’s AI Research lab, is another heavyweight in the machine learning framework domain. It stands out with its dynamic computational graph, allowing for greater flexibility and rapid prototyping. PyTorch’s intuitive interface and Pythonic syntax make it popular among researchers and hobbyists. It provides powerful tools for building and training neural networks, making it an excellent choice for deep learning enthusiasts.
Scikit-learn: Simplicity and Versatility
Scikit-learn is a beginner-friendly machine learning framework that covers a wide range of algorithms and tasks. Built on top of NumPy, SciPy, and Matplotlib, Scikit-learn offers a user-friendly interface for classification, regression, clustering, and dimensionality reduction. Its rich functionalities, thorough documentation, and extensive examples make it a fantastic choice for beginners and those starting their machine learning journey.
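To give a feel for that user-friendly interface, here is a minimal classification sketch (assumes scikit-learn is installed, e.g. via `pip install scikit-learn`; the dataset and model choice are just for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

# Load a built-in toy dataset and hold out a test split
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit and evaluate a classifier in two lines
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```

Swapping in a different algorithm is usually just a matter of changing the estimator class, since they all share the same `fit`/`predict`/`score` interface.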
Keras: The Gateway to Deep Learning
Keras, often referred to as the “user-friendly” deep learning library, provides a high-level interface for building and training neural networks. Originally developed as a standalone library running on top of backends such as Theano and TensorFlow, Keras has since become an integral part of the TensorFlow ecosystem. Its simplicity and versatility make it an excellent choice for beginners and those primarily focused on deep learning applications.
MXNet: Scalability and Efficiency
MXNet, an Apache Software Foundation project, offers a highly scalable and distributed deep learning framework. It is suitable for projects that require training models on massive datasets. MXNet’s hybrid front end allows seamless switching between imperative and symbolic programming, providing flexibility without sacrificing performance. With support for multiple programming languages and powerful visualization tools, MXNet empowers users to build and deploy models at scale.
Caffe: Speed and Simplicity
Caffe stands out for its speed and efficiency, making it one of the fastest frameworks available. It excels in scenarios where real-time inference is crucial, such as object detection and image classification tasks. While Caffe’s primary focus is on computer vision, its ease of use and straightforward configuration make it appealing for quick prototyping or deploying pre-trained models. Caffe’s expressive architecture and strong community support contribute to its popularity among researchers and industry practitioners.
Theano: Paving the Way for Deep Learning
Although Theano has been around for some time, its impact on shaping the deep learning landscape should not be underestimated. Theano allows efficient definition, optimization, and evaluation of mathematical expressions, making it an excellent choice for deep learning research and development. While its low-level nature might pose a challenge for beginners, Theano’s performance optimization capabilities and support for symbolic computation provide a solid foundation for building complex neural networks.
The field of machine learning continues to evolve, and with it, the frameworks that power it. Each framework brings its unique strengths, catering to different needs and preferences. Whether you prioritize flexibility, ease of use, scalability, or speed, there is a framework tailored to meet your requirements. It is essential to explore and experiment with these top frameworks to embark on your machine learning journey. The possibilities are endless, and your creativity and innovation are waiting to shape the future!
Before you begin, consider taking an online data science course to become a successful data analyst in 2023. To help you in this journey, we have OdinSchool’s job-oriented Online Data Science Course that will help you grow as an eminent data scientist in the coming years.
|
OPCFW_CODE
|
Training NLU on remote node prevents the model from being run anywhere else
Rasa NLU version: 12.3
Operating system (windows, osx, ...): MacOS, Linux
Content of model configuration file:
language: "en"
pipeline:
- name: "nlp_spacy"
model: "en"
- name: "tokenizer_spacy"
- name: "intent_featurizer_count_vectors"
- name: "intent_classifier_tensorflow_embedding"
intent_tokenization_flag: false
- name: "ner_crf"
BILOU_flag: true
features:
- ["low", "title", "upper", "pos", "pos2", "word3", "digit"]
- ["low", "upper", "bias", "pos", "pos2", "title", "word3", "digit"]
- ["low", "title","upper", "pos", "pos2", "word3", "digit"]
max_iterations: 50
L1_c: 1
L2_c: 1e-4
Issue: I train my nlu on a remote gpu box, and it appears that the path to the install of rasa is recorded and referenced someplace in the pickled model files. Trying to run rasa_nlu.evaluate gives the following:
/Users/jgutta/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
2018-05-25 11:28:27 INFO rasa_nlu.components - Added 'nlp_spacy' to component cache. Key 'nlp_spacy-en'.
2018-05-25 11:28:28.195082: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-05-25 11:28:28 INFO tensorflow - Restoring parameters from models/default/model_20180523-185216/intent_classifier_tensorflow_embedding.ckpt
2018-05-25 11:28:28 INFO rasa_nlu.training_data.loading - Training data format of training.json is rasa_nlu
2018-05-25 11:28:30 INFO rasa_nlu.training_data.training_data - Training data stats:
(I needed to remove this part, but the data stats print correctly. The error below directly follows this print)
XXX lineno: 105, opcode: 0
Traceback (most recent call last):
File "/Users/jgutta/anaconda3/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/Users/jgutta/anaconda3/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/Users/jgutta/anaconda3/lib/python3.6/site-packages/rasa_nlu/evaluate.py", line 718, in <module>
run_evaluation(cmdline_args.data, cmdline_args.model)
File "/Users/jgutta/anaconda3/lib/python3.6/site-packages/rasa_nlu/evaluate.py", line 508, in run_evaluation
test_data)
File "/Users/jgutta/anaconda3/lib/python3.6/site-packages/rasa_nlu/evaluate.py", line 418, in get_entity_predictions
res = interpreter.parse(e.text, only_output_properties=False)
File "/Users/jgutta/anaconda3/lib/python3.6/site-packages/rasa_nlu/model.py", line 340, in parse
component.process(message, **self.context)
File "/Users/jgutta/anaconda3/lib/python3.6/site-packages/rasa_nlu/featurizers/count_vectors_featurizer.py", line 149, in process
bag = self.vect.transform([self._lemmatize(message)]).toarray()
File "/Users/jgutta/anaconda3/lib/python3.6/site-packages/sklearn/feature_extraction/text.py", line 923, in transform
_, X = self._count_vocab(raw_documents, fixed_vocab=True)
File "/Users/jgutta/anaconda3/lib/python3.6/site-packages/sklearn/feature_extraction/text.py", line 792, in _count_vocab
for feature in analyze(doc):
File "/Users/jgutta/anaconda3/lib/python3.6/site-packages/sklearn/feature_extraction/text.py", line 266, in <lambda>
tokenize(preprocess(self.decode(doc))), stop_words)
File "/opt/mapr/tools/python/anaconda3/lib/python3.5/site-packages/rasa_nlu/featurizers/count_vectors_featurizer.py", line 105, in <lambda>
SystemError: unknown opcode
The last path mentioned is the path to my python install on the other machine, whereas the others are on my local. This error did not occur in past versions.
Any help in getting past this error would be very appreciated.
Could it be that you have different Python versions on your local machine and the GPU box?
I don't think so, as even with the right version the pathing would be the same, but I haven't tested it. I will test that out and update on Monday with the results.
Update: I have trained a model on the remote machine with an updated python version and it appears as though that resolved the issue. The NLU Evaluate works, and I will return to this thread to update if this runs with core and everything.
Alright, I'll close this for now then. Let us know if you need anything else
I am facing the same issue
Sorry for the delay in updating, the versioning was the issue. Correcting that allowed the model to be run on my local machine. Specifically, I ran:
conda install python==3.6.5
pip install --upgrade rasa_core
pip install --upgrade rasa_nlu
pip install --upgrade tensorflow-gpu
in that order. That managed to resolve my issues.
|
GITHUB_ARCHIVE
|
Please review the guidelines mentioned throughout this document, and when ready, you can open a PR against the
main branch. Typically from there, someone from the Tanssi Developer Relations team will review the PR and request any changes as needed. Then the team will merge your PR into a local branch and make sure that all of the formatting changes look good on the local site. Once everything is all set and done the changes will be published to the live site.
Thank you for your contributions, they are greatly appreciated 💜.
Previewing Changes in VS Code
Unfortunately, there is not currently a way to preview the changes locally. However, if you're using Visual Studio Code, you can preview the changes you're making to `.md` files before committing them. To learn how, please check out the Markdown and Visual Studio Code guide from the Visual Studio docs site.
In the root directory and every subdirectory, in addition to the content directories and pages, you'll find the following files:
- `.pages` - defines the structure of the documentation site
- `index.md` - represents the landing pages you see throughout the docs site
Below is an example of a `.pages` file.
Some important things to note:
- The `title` field at the top of the page represents the display name for the subdirectory
- The `index.md` page should always be the first item in the list
- Files follow the convention of `'Display Name': 'file-name.md'`
- Subdirectories are listed by their directory name in the source code
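Putting those conventions together, a hypothetical `.pages` file might look like the following (the display names and file names here are invented for illustration; the exact schema comes from the site's navigation plugin):

```yaml
title: Builders
nav:
  - index.md
  - 'Get Started': 'get-started.md'
  - 'Deploy': 'deploy.md'
  # subdirectories are listed by their directory name
  - toolkit
```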
Below is an example of an `index.md` file's front matter:
title: Build your ContainerChain
description: Install the software to compile your ContainerChain, configure your genesis state and core functionalities, test locally and get ready to deploy through Tanssi.
Some important things to note:
- The `title` represents the `<title>` tag and is used for SEO purposes
- The `description` represents the meta-description and is also used for SEO purposes
- The `template` defines the template to be used. It should always be `main.html` for new content pages and `subsection-index-page.html` when adding a new section
- The `<div>` is populated with links to any pages or subdirectories automatically by a script at runtime that builds the landing pages
When adding a new content page, you should have the following components:
- `title` - represents the `<title>` tag and is used for SEO purposes (not displayed on the published site)
- `description` - represents the meta-description and is also used for SEO purposes (not displayed on the published site)
- Page title - a title to be displayed at the top of the page
- `## Introduction` section - 2-3 paragraphs to serve as an introduction. This should be long-lived, meaning it will not require changes in the future
Optionally, you should also consider including the following sections:
- `## Checking Prerequisites` section - if the guide requires the user to have certain developer tools installed, for example Docker or MetaMask, they should be listed here
- `## Getting Started` section - if this is a 3rd-party integration, include links that point to the most important parts of your project's documentation site to help users get started with your project
```md
title: Title for SEO purposes
description: Description for SEO purposes.

# Page Title

Write 2-3 paragraphs to serve as the introduction here.
```
Images and Icons
Images are stored in the `images` subdirectory. They are organized to mirror the structure of the docs site. So, for example, if you are creating a new page for the builders section and need to add images, those would go under the corresponding path in `images`.
All landing pages require a logo or relevant icon. You can use the `_index-page-icon-template.svg` in the root of the `images` subdirectory to create your own at the correct size. Images should follow these specs:
- Have a transparent background
- Be 256x256px in size
- Be added in
- When exporting the SVG to a PNG, please use a DPI of
- The colors should be:
- Light mode: #262626
- Dark mode: #e9e9e9
These images are stored in the
To add an image to your page, you should have alt text and use the following syntax:

Snippets can be used to manage reusable lines of code or text. There is a subdirectory for text snippets, which get translated for the Chinese version of the documentation site. On the other hand, the code snippets should only contain code and therefore do not get translated.
To link to a snippet, you can use the following syntax in the Markdown file:
Code snippets can be written in Markdown or in the programming language itself, for example, .py for Python.
Search Engine Optimization (SEO)
Here are some resources to help you create good titles and descriptions for SEO:
In general, titles should be between 50 and 60 characters and descriptions should be between 110 and 160 characters.
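As a convenience, these length ranges can be sanity-checked before committing a page. The helper below is just an illustrative sketch, not part of the repo's tooling:

```python
def check_seo(title: str, description: str) -> list[str]:
    """Return warnings for title/description lengths outside the recommended SEO ranges."""
    warnings = []
    if not 50 <= len(title) <= 60:
        warnings.append(f"title is {len(title)} chars (recommended 50-60)")
    if not 110 <= len(description) <= 160:
        warnings.append(f"description is {len(description)} chars (recommended 110-160)")
    return warnings
```

Run it against a page's front matter values and fix anything it flags before opening a PR.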
| Created: September 8, 2023
|
OPCFW_CODE
|
Connecting to Lubuntu desktop via vnc
I have installed the Lubuntu desktop on an Ubuntu 20.04 LTS server. The desktop works fine when I log on to it locally: I can see SDDM and I can choose between various sessions for the desktop (Lubuntu, LXDE, LXQt).
However, I set up a VNC server and I configured (as per the instructions I found online) the ~/.vnc/xstartup file to look like this:
#!/bin/sh
xsetroot -solid grey
export XKL_XMODMAP_DISABLE=1
openbox &
/usr/bin/lxsession -s Lubuntu &
When I connect, though, all I can see is a grey screen. Any ideas? Thanks a lot!
You mention 20.04 & Lubuntu which is the LXQt desktop; but also LXDE (which isn't a Lubuntu desktop). Your provided xstartup script also mentioned LXDE (which isn't Lubuntu), so are you trying to use Lubuntu or LXDE? (Lubuntu hasn't used LXDE since 18.04)
Are you open to solutions that use VNC alternatives like RDP or x2go?
I installed Lubuntu desktop via tasksel. This seems to have enabled several options to start a session which include Lubuntu, LXQt and LXDE. What I am trying to do is start the LXQt desktop after connecting via VNC. The best advice to do this I could find online was the startup script I pasted above.
Thanks for that detail; I'll have a look (test on a jammy system, as LXDE is not affiliated with Lubuntu since 18.04 & thus should not have been included.. LXDE is now direct from upstream without any Lubuntu patching/configs). As LXQt is WM agnostic (no WM features are included); Lubuntu has used openbox as WM and thus it's an option as a session (ie. openbox on its own, was back in LXDE days & is still now), but you may have described a packaging bug I'll aim to explore. lxsession is not a Lubuntu program! https://phab.lubuntu.me/w/bugs/
I QA-test installed a Ubuntu jammy system on a box; then apt install lubuntu-desktop^ and no LXDE packages are installed - the result is Ubuntu Desktop (GNOME) & Lubuntu (LXQT & openbox options) only. I then repeated the test but the second time I used sudo tasksel install lubuntu-desktop and again no LXDE packages were installed. In both I get only "Lubuntu, LXQt Desktop, Openbox, Ubuntu & Ubuntu on Xorg" options for login. I used jammy as I believe it'll have the same results; and if it doesn't - we can correct issues in jammy easier being currently in development
Thanks for the investigation and the information. However, what command should I be using to start the Lubuntu desktop in the VNC server's xstartup file? For example, the Xfce desktop has a startxfce command; I don't see any startlubuntu command I can use.
Refer https://askubuntu.com/questions/1366168/what-command-will-launch-lubuntu-desktop-20-04
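Following the direction of that linked answer, an xstartup along these lines should launch the LXQt session instead of bare openbox + lxsession (this is a sketch: startlxqt is the session launcher shipped by the lxqt-session package; verify it exists on your system, and try lxqt-session directly if it does not):

```sh
#!/bin/sh
# Launch the LXQt (Lubuntu) session for this VNC display.
xsetroot -solid grey
export XKL_XMODMAP_DISABLE=1
exec dbus-launch startlxqt
```

Restart the VNC server after editing ~/.vnc/xstartup so the new script takes effect.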
|
STACK_EXCHANGE
|
Almost a year ago Microsoft announced Azure Active Directory Domain Services in preview (and it is still in preview). Before the release of this service, you needed to deploy domain controllers in Azure, or have a site-to-site VPN (from on-premises to Azure) to on-premises domain controllers, in order to use cloud services relying on Active Directory and related services. With the release of Azure AD Domain Services you now have a cloud-based identity solution which allows you to manage users and groups without deploying domain controllers.
When you enable this service, two domain controllers are automatically set up in your environment for high availability, but you don't have to pay for these two domain controllers (VMs); you pay for the service per hour (see this site for the pricing). As you cannot log in directly to these domain controllers, you don't have to worry about managing them like you have to do with on-premises domain controllers or domain controllers you deploy yourself in Azure; Microsoft does it for you. It is really Active Directory as a service.
When the two domain controllers are up and running, you can manage Active Directory by joining a Windows Server virtual machine hosted on Azure to the domain you set up AD Domain Services for. Just add the required management features to the server and you are able to manage the environment via Active Directory Users and Computers or Group Policy Management. All your users and groups available in Azure AD/Office 365 can from now on also be found in ADUC.
Note that managing the AD and policies is very basic at this moment. You are not a Domain Admin and you have limited rights on the AD. You are allowed to manage the existing Group Policies but, unfortunately, not to create your own GPOs at this time.
You are now able to deploy VMs (Windows or Linux) running an application which relies on Active Directory to the cloud without deploying domain controllers there. Access control can be handled by Azure AD Domain Services. You don't have to use different user accounts from another cloud provider which hosts your application; you just use the same user accounts already present in Azure/Office 365. Another example is running an FTP server on IIS deployed on an Azure VM: set up the required user rights on the FTP folder based on AD groups/users, and your users only have to remember one set of credentials.
Some functionalities Azure AD Domain Services provides:
- Join servers to a domain (Windows and Linux)
- Use (basic) Group Policies
- Create custom organizational units (OUs)
- Use Kerberos/ NTLM
- Support for secure LDAP
- Administer DNS on the managed domain
For now Azure AD Domain Services is still in preview and some functionalities, like managing Group Policies, are very basic, but I expect the functionality to be extended in the future.
For further information and pricing you can visit this website of Microsoft.
|
OPCFW_CODE
|
Manual Reference Pages - GATTRIBUTES (5)
gattributes - standard attribute conventions in BRL-CAD databases
Starting with BRL-CAD database format version 5, BRL-CAD supports the storage of arbitrary attribute/value data on any database object. Prior to the development of this generic ability, BRL-CAD supported a set of standard attributes used to hold key information about specific properties. These conventions predate not only the version 5 database format but BRL-CAD itself. The purpose of this man page is to identify the standard attributes, document situations where two or more attributes are used to identify the same property, and identify where appropriate what constitutes meaningful values for an attribute.
The earliest ancestor of BRL-CAD was the MAGIC code, written by the Mathematical Applications Group, Inc in 1967 to implement geometric modeling ideas developed by BRL and AMSAA from 1958-1967. That system, which was entirely non-graphical, was followed some years later by the GIFT program - Geometric Information From Targets. LIBRT, appearing around 1983, became the successor to GIFT. Today it is the core library of the BRL-CAD solid modeling package. (For more background see http://ftp.arl.army.mil/~mike/papers/96thebook/). LIBRT inherited both geometric primitives and attributes from GIFT - this heritage is occasionally apparent in attribute names.
Note that with the introduction of support for arbitrary attribute/value pairs, it becomes much simpler for both developers and users to formulate new standards of their own for storage of metadata in attributes. Attributes should only be documented in this file if one or more core BRL-CAD tools has implemented one or more features that require specific attributes to be present. Conventions for attribute names that do not involve tool support should be documented in appropriate user-level documentation. In documenting attributes, one additional distinction is made. Attributes that exist only as post-v5 attribute/value pairs are referred to as non-core attributes; older attributes (which may also be represented in core data structures) are referred to as core attributes.
The following table lists all known core BRL-CAD attributes and aliases. Any code setting or reading the value of one of these attributes must handle all aliases, to ensure all functions asking for the value in question get a consistent answer.
Table 1. Core Attributes

| Property | Typical values |
| --- | --- |
| Region Flag | Yes, R or 1 |
| Region Identifier Number | -1, 0, and positive integers |
| Material Identifier Number | 0 and positive integers |
| Air Flag | 0 or 1 |
| Line Of Sight Thickness Equivalence | |
| Color (Red Green Blue) | |
| Inherit Flag | Yes or 1 |
Given the importance of these attributes, it is appropriate to briefly outline the meaning and purpose of each of them:
Region Flag:
The Region Flag identifies a particular geometric combination as being a solid material - in other words, any geometry below this combination in the tree can overlap without the overlap being regarded as a non-physical description, since it is the combination of all descriptions in the region object that defines the physical volume in space.
Region ID Flag:
The Region ID Flag identifies a particular region with a unique number. This allows multiple region objects to be regarded as being the same type of region, without requiring that they be included in the same combination object.
Material ID Number:
The Material ID Number corresponds to an entry in a DENSITIES table, usually contained in a text file. This table associates numbers with material names and density information, used by analytical programs such as rtweight.
Air Flag:
The Air Flag alerts the raytracer that the region in question is modeling air, which is handled by specialized rules in LIBRT.
Shader:
LIBRT can utilize a variety of shaders when rendering - this attribute holds a text string which corresponds to the name of the shader to be used.
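As a quick illustration, attributes like these can be set and inspected from mged with its attr command (a sketch: the object name here is hypothetical, and the exact subcommands and output format should be verified against your BRL-CAD version):

```
mged> attr set my_region.r region_id 1001
mged> attr set my_region.r material_id 17
mged> attr show my_region.r
```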
Reports of bugs or problems should be submitted via electronic mail to <email@example.com>, or via the "cadbug.sh" script.
|BRL-CAD ||STANDARD ATTRIBUTES (5) ||04/04/2016 |
Visit the GSP FreeBSD Man Page Interface.
Output converted with manServer 1.07.
|
OPCFW_CODE
|
Instituto Tecnológico Superior de Jerez
Ingeniería en Sistemas Computacionales, 6º
Mr. Brown, English IV
Installation manual
Salvador Acevedo Caldera, 10070028
February 27, 2013
How To Install Windows 7

1. Requirements:
- 1 GHz or faster processor, 32-bit or 64-bit
- 1 GB of RAM minimum, 2 GB for 64-bit
- 16 GB of free space on the hard drive (20 GB for 64-bit)
- Graphics card with DirectX 9 support

2. Data backup. Before installing Windows 7, I recommend you make a backup, because once the hard drive information is deleted, you can never recover those files.

3. Required elements. You need the following items to install the new operating system from Microsoft:
- The Windows 7 installation DVD
- An original Windows 7 license

Install The Operating System

1. Insert the Windows 7 DVD.
2. When you see a text like "Press any key to boot from CD or DVD", press any key.
3. Next you will see the Windows 7 splash screen, which asks for the language and other options. Select Spanish and then set the time and keyboard.
4. Click on "Install Now".
5. Read the Microsoft license agreement, accept the license terms, and click Next.
6. On the next screen you will have two options: Upgrade and Custom installation. Click Custom.
7. Select any partition that is available and click Install. If you have formatted the hard drive, you need to click on "Format" to leave that partition without files.
8. Once you have formatted the partition where you want to install Windows 7, click Next.
9. Once file copying finishes, enter a user name and, if you want, a computer name. Click Next.
10. The next screen asks for a password, which is optional and obviously personal. You can leave these fields blank if you want. Click Next.
11. In this window enter the Windows 7 serial number, then click Next.
12. On the next screen select "Use recommended settings".
13. Set your time zone depending on where you are.
14. In this step you choose the location of the computer: Home (home network), Work (work network), or cyber cafe/other (public network).
15. Finally, you are done with the installation of Windows.

Help:
http://techtastico.com/post/como-instalar-windows-7/
http://www.solotecnologia.net/2010/03/03/como-instalar-windows-7/
|
OPCFW_CODE
|
using System;
using System.ComponentModel;
namespace PowerBallSimulator.Model
{
public abstract class PowerBall : INotifyPropertyChanged
{
// Establish a random seed for each ball
protected Random random = RandomNumber.GetRandomSeed();
protected int upperBound;
private int? _Number;
public int? Number
{
get
{
return _Number;
}
set
{
_Number = value;
OnPropertyChanged("Number");
}
}
public void Clear()
{
Number = null;
}
public void AssignNumber()
{
Number = random.Next(1, upperBound);
}
public event PropertyChangedEventHandler PropertyChanged;
private void OnPropertyChanged(string p)
{
PropertyChangedEventHandler handler = PropertyChanged;
if (handler != null)
{
handler(this, new PropertyChangedEventArgs(p));
}
}
}
public class WhiteBall : PowerBall
{
public WhiteBall()
{
upperBound = 70;
}
}
public class RedBall : PowerBall
{
public RedBall()
{
upperBound = 27;
}
}
class RandomNumber
{
public static Random GetRandomSeed()
{
Random random = new Random(Guid.NewGuid().GetHashCode());
return random;
}
}
}
|
STACK_EDU
|
Save folding state to a temporary file so that the folding can be rebuilt when the file is loaded again
I am using scintilla to load .c files and want to save the states of the folded lines.
I can do this successfully with markers using line.MarkerGet(); and line.MarkerAddSet().
What are the best folding values to save this way ?
I also found that the folding state needs a delay between loading the TextArea.Text and calling TextArea.Lines[1].ToggleFold();
If this is called straight away, it does not work. This fails:
TextArea.Text = ReadFromFile;
TextArea.Lines[1].ToggleFold();
but this, with a delay, works:
TextArea.Text = ReadFromFile;
timer1.Enabled = true;
timer1()
{
TextArea.Lines[1].ToggleFold();
}
Thanks for your time.
For a quick test, this code seems to do the trick:
private string FoldingState { get; set; }
private void mnuTest1_Click(object sender, EventArgs e)
{
FoldingState = string.Join(";", scintilla.Lines.Where(f => !f.Expanded).Select(f => f.Index).ToArray());
}
private void mnuTest2_Click(object sender, EventArgs e)
{
foreach (var index in FoldingState.Split(';').Select(int.Parse))
{
scintilla.Lines[index].ToggleFold();
}
}
BTW, the idea is good, perhaps this should be included in the package 👍
Peter, thanks for your info, and I can get this working.
I did have to have a scintilla.Update() in between loading scintilla.Text and loading the folding state though, which causes an annoying flicker when the window is first displayed.
What you have provided below is quite new to me, what is the best place to learn this type of operation ?
FoldingState = string.Join(";", scintilla.Lines.Where(f => !f.Expanded).Select(f => f.Index).ToArray());
I see in this example it is saving the state only for lines that are folded. Can it be used to save both a line index and MarkerGet(), for example?
Hi,
The syntax is LINQ with Lambda expressions:
The f => f... here is an instance of a single line in Scintilla control:
The code:
private void mnuTest1_Click(object sender, EventArgs e)
{
FoldingState = string.Join(";", scintilla.Lines
.Select(f => f.Index + "|" + scintilla.Markers[f.Index].Symbol + "|" + f.Expanded).ToArray());
}
The resulting "save" string would in this case be like this from a small Json file with folding:
0|Circle|True;1|Circle|True;2|Circle|True;3|Circle|True;4|Circle|True;5|Circle|True;6|Circle|True;7|Circle|True;8|Circle|True;9|Circle|True;10|Circle|True;11|Circle|True;12|Circle|False;13|Circle|True;14|Circle|True;15|Circle|True;16|Circle|False;17|Circle|True;18|Circle|True;19|Circle|True;20|Circle|True;21|Circle|True;22|Circle|True;23|Circle|True;24|Circle|True;25|BoxPlusConnected|True;26|BoxMinusConnected|True;27|TCorner|True;28|LCorner|True;29|VLine|True;30|BoxPlus|True;31|BoxMinus|True
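Restoring from that combined "index|marker|expanded" string is the mirror image of the save. A sketch (RestoreFoldingState is a hypothetical helper; Expanded and ToggleFold are the same ScintillaNET members used above):

```csharp
// Re-apply a saved folding state produced by the join above.
private void RestoreFoldingState(string state)
{
    foreach (var record in state.Split(';'))
    {
        var parts = record.Split('|');
        int index = int.Parse(parts[0]);
        bool expanded = bool.Parse(parts[2]);
        // ToggleFold flips the current state, so only toggle lines whose
        // saved state differs from the current one.
        if (scintilla.Lines[index].Expanded != expanded)
            scintilla.Lines[index].ToggleFold();
    }
}
```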
C# Corner seems to have some samples of how to learn LINQ.
For the flickering part I would need a small sample to try to fix it - no promises though 🙂
|
GITHUB_ARCHIVE
|
Activity Tracker
This is not a new application but one I have just stumbled across, and one that looks quite useful, especially for Windows users. It is available as an iGoogle App, a NetVibes App or a Vista Sidebar Widget. A lot of iGoogle apps lose their appeal after they've started to clutter up your homepage and haven't been touched in days, weeks or even months. The Activity Tracker is an exception to this rule, however, and offers a staggering level of organizational potential through a simple yet detailed interface.
The add-on allows users to manage their work to the exact minute by creating a virtual to-do list combined with a punch-card system. Activity Tracker functions by monitoring the amount of time spent on a given task, recording the duration through a timer which is stopped and started by the user. It allows a list of upcoming jobs, actions and plans to be customized (and displayed in the drop-down menu for easy reference), then times out the given period. The program is exceptional for a variety of purposes and has been created to be simple yet effective enough that it can be used for anything from casual time management to professional work.
Anyone who finds themselves having a hard time staying on task or remembering how they divided up the work day can benefit from making use of the iGoogle Activity Tracker. It's a fantastic tool to have on hand for users as wide-ranging as students who need to manage their homework or study schedule to consultants who track the minutes of their work to submit for later payment.
There are a lot of alternatives to this Activity Tracker and I will cover some more of them in upcoming reviews. If you have any experience using this or a similar application, I would love to hear from you, so please comment below.
|
OPCFW_CODE
|
Windows Developer Team
Deliverables: Project charter (or a high-level project specification)
During the first phase, you plan out and finalize the concept of your future Windows application. Alone or together with a business analyst, you list the major business needs or point out the app's unique value, and define its target audience. If your Windows app is a product, you should also collect market data and analyze key competitors.
To complete an initial project specification, proceed to outline the high-level functional scope of the Windows app with planned user roles and permission matrix as well as select KPIs for project success estimation.
2. Project planning
Deliverables: Project management plan
By assessing the Windows app's functional scope from the business perspective, you define what software design and development efforts are required to bring the application concept to life. These efforts should be roughly estimated in both time and budget and then assigned to specific team roles. The possible need for additional talent (often meaning cooperation with a vendor) usually transpires during this phase, too.
By the end of this stage, you have a high-level project management plan, which needs to be approved by all stakeholders before the project moves on to the next phase.
3. Windows app design and development planning
Deliverables: Software requirements specification, final development schedule, responsibility assignment plan, Windows app evolution plan, quality management plan, risk management plan, change management plan.
The information on the app’s concept from the initial specification is significantly elaborated during this phase and used for creating one of the key planning-stage deliverables with both functional and non-functional requirements – a software requirements specification (SRS). This document should:
Confirm the general info on the app from the project charter (purpose, distribution, user roles, etc.).
Offer several practical use cases for different user roles.
List application features and user-app interaction scenarios.
Show precise software architecture diagrams.
Define the technology stack (development platform, programming language, APIs, etc.).
Apart from the SRS, you should get a final app development schedule, a list of project team members with assigned responsibilities, and your Windows app’s evolution roadmap. Make sure there are also documented plans that describe quality assurance methods as well as methods of handling risks and possible changes related to both the software requirements and development schedule.
4. UX and UI design
Deliverables: UX prototype, GUI mockups, interface control document.
UX designers draw on the information from the SRS and the user research they perform to create wireframes for the key screens of the future Windows app – and then use prototyping tools to unite the screens into a single interactive experience. The created prototype should pass thorough usability testing since this is when UX issues critical for the app’s success are the cheapest to eliminate.
UI designers continue the work on the prototype that was fully adjusted after usability testing and replace the rough sketches with high-fidelity graphics. When done, they share the GUI mockups, UI elements and interface control specification with the development team.
5. Development and QA
Deliverables: Windows app, app’s source code, Windows Installer, installation/administration guides.
To ensure quality results and the efficiency of all software delivery and QA efforts, ScienceSoft recommends setting up a CI/CD pipeline at the start of your project. All coding and testing of your Windows app should go on in parallel, with a sufficient amount of automation. With such an approach, quality, tested code is delivered faster.
Once the app’s code is ready, the team works on creating a Windows Installer and – if required – installation and system administration manuals.
6. Launch and evolution
In case your Windows app is an internal company solution, its deployment and integration into your infrastructure are parts of the CI/CD pipeline. If your app is a commercial product, you need a separate step – a release to public.
As an option, you may have a custom promotional website. After your customers download and install the app, they will be notified about your updates and new releases via the app’s built-in messages.
|
OPCFW_CODE
|
Art of BI: Assigning Oracle BI Applications Tablespaces to Indexes
Author: Christian Screen | | December 8, 2016
In OBIA 11g, if you don't spread the storage of the OBIA DW data objects across multiple RDBMS tablespaces, your DBA may come knocking at your door asking why the single tablespace they created for the OBIA system is consuming a large amount of disk and I/O. You may also notice poor performance in your OBIA full and incremental load times. Either way, completing the steps in this article is the best practice for helping to mitigate both of the above scenarios.
Implementing Oracle BI Applications 11g is a challenge for many organizations. Datavail’s OBIA 11g training course and our professional services help enterprises make sure they are optimizing their implementation in Oracle Analytics.
This post is a snippet from one of our OBIA training exercises, where our classroom training helps Oracle customers implement and use OBIA with best practices. To extend the ODI 11g ETL's storage of index files across multiple tablespaces, one must perform a configuration step after the initial installation and configuration of the OBIA system. Be sure to read our recommendations for carefully implementing this change in your system at the end of this post before making this change in a new implementation.
To get started:
- Open the ODI 11g Studio Client to access the OBIA Project and then in the Designer tab expand the Models accordion section.
- Expand the Oracle BI Applications model folder and open the Oracle BI Applications model to reveal the model definition, etc.
- Click on the Flexfields option and you’ll see some pre-developed fields for the respective tablespace allocations.
By default, the Default checkbox is checked, which means that index objects (indexes, bitmap indexes, etc.) and stage objects (staging tables, etc.) will automatically be loaded into the default tablespace, which is the Data Warehouse (DW) tablespace.
For obvious reasons, you'll want to spread out the ETL target data integration, so uncheck the checkbox under the Default column for each desired flexfield and enter, in the Value column, the name of the tablespace you created when you ran the OBIA RCU to create the appropriate database schemas and tablespaces. If you've forgotten the names of the tablespaces created, you can use Oracle SQL Developer's DBA interface to view all tablespaces for your database, as seen in the above image.
Save your work in the ODI Studio client after you've made all of your changes. Then you can consider running your data loads, but please read our recommendations and considerations below prior to making or solidifying your changes.
Our recommendation regarding this option is to:
- Conduct this change only after you have configured OBIA for at least one OBIA module and have run a successful Full Load to completion without errors (having completed also an incremental load without errors in addition to a full load would be optimal).
- Execute a full load prior to making this change to capture a benchmark elapsed time for how long it takes to run a Full Load to the DW using a fairly reflective subset of data. For example, execute it for one or two years of data instead of the full ERP/CRM history if you have lots of years of data.
- Consider making one flexfield change at a time so that you can incrementally understand and document the impact as this will be important to explain to your team and DBA. Start with the OBI Stage Tablespace since staging object issues, if any, are typically encountered earlier in the ETL process and are usually easier to resolve.
|
OPCFW_CODE
|
package build
import (
"fmt"
"strings"
"github.com/Masterminds/semver"
"github.com/pkg/errors"
"gopkg.in/yaml.v3"
)
// Version returns the build version (vX.Y.Z) set at build time. Returns ""
// if this build was not a tagged version.
func Version() string {
if Tag == "" {
return ""
}
if _, err := semver.NewVersion(Tag); err != nil {
return ""
}
return Tag
}
// DevBuild is "true" if this is a development build.
var DevBuild string
// Tag holds the git tag of the build.
var Tag string
// Branch holds the git branch of the build.
var Branch string
// BuildNum holds the build number (123).
var BuildNum string
// Time holds the build time.
var Time string
// SHA holds the build Git commit SHA1.
var SHA string
const shaLen = 7
func VersionString() string {
return Current().String()
}
type Build struct {
DevBuild string `json:"dev,omitempty" yaml:"dev,omitempty"`
Version string `json:"version,omitempty" yaml:"version,omitempty"`
SHA string `json:"sha,omitempty" yaml:"sha,omitempty"`
BuildNum string `json:"build,omitempty" yaml:"build,omitempty"`
Tag string `json:"tag,omitempty" yaml:"tag,omitempty"`
Branch string `json:"branch,omitempty" yaml:"branch,omitempty"`
Time string `json:"time,omitempty" yaml:"time,omitempty"`
// These are used internally for the build resolver.
prod bool `json:"-" yaml:"-"`
dev bool `json:"-" yaml:"-"`
}
type Builds []*Build
func (b Builds) Len() int {
return len(b)
}
func (b Builds) Less(i, j int) bool {
bi := b[i].Semver()
bj := b[j].Semver()
if bi == nil && bj == nil {
return false
}
if bi == nil {
return true
}
if bj == nil {
return false
}
return bi.LessThan(bj)
}
func (b Builds) Swap(i, j int) {
b[i], b[j] = b[j], b[i]
}
func NewVersionBuild(vers string) *Build {
if !strings.HasPrefix(vers, "v") {
vers = "v" + vers
}
return &Build{Version: vers}
}
func Parse(vstr string) (*Build, error) {
version, err := semver.NewVersion(vstr)
if err == nil {
return &Build{Version: "v" + version.String()}, nil
}
var b Build
err = yaml.Unmarshal([]byte(vstr), &b)
if err != nil {
return nil, errors.Wrap(err, "parse build")
}
return &b, nil
}
func NewProdBuild() *Build {
return &Build{prod: true}
}
func NewDevBuild() *Build {
return &Build{dev: true}
}
func Current() *Build {
return &Build{
Version: Version(),
Tag: Tag,
SHA: SHA,
BuildNum: BuildNum,
Branch: Branch,
Time: Time,
}
}
func (b *Build) Semver() *semver.Version {
vers, err := semver.NewVersion(b.Version)
if err != nil {
return nil
}
return vers
}
func (b *Build) Empty() bool {
return b.Version == "" && b.Tag == "" && b.SHA == "" && b.BuildNum == "" && b.Branch == "" && b.DevBuild == ""
}
var ErrEmptyBuild = errors.New("empty build")
type VersionComponent struct {
Key string
Value string
}
// PreferedPathComponent returns the path component (x/y/z) that best
// identifies this version, with the following precedence:
// - version
// - sha
// - build
// - tag
// - branch
// E.g. If version == "v1.2.3", then the path components are "version/v1.2.3".
func (b *Build) PreferedPathComponent() (components string, err error) {
comps, err := b.PathComponents(true)
if err != nil {
return "", err
}
return comps[0], nil
}
// PathComponents returns all paths components (x/y/z) for this version:
// E.g. "version/v1.2.3", "sha/abcdefg", "build/123", "tag/v1.2.3",
// "branch/master".
func (b *Build) PathComponents(generic bool) (components []string, err error) {
comps := []string{}
if generic {
if b.prod {
comps = append(comps, "prod")
}
if b.dev {
comps = append(comps, "dev")
}
}
if b.Version != "" {
comps = append(comps, "version/"+b.Version)
}
if !generic {
if b.prod {
comps = append(comps, "prod")
}
if b.dev {
comps = append(comps, "dev")
}
}
if len(comps) == 0 {
return nil, ErrEmptyBuild
}
return comps, nil
}
func (b *Build) String() string {
if b.Empty() {
return "No version information found\n"
}
var str strings.Builder
if b.Version != "" {
fmt.Fprintf(&str, "Version: %s\n", b.Version)
}
if b.Tag != "" && b.Version == "" {
fmt.Fprintf(&str, "Tag: %s\n", b.Tag)
}
if b.SHA != "" {
c := b.SHA
if len(c) > shaLen {
c = c[:shaLen]
}
fmt.Fprintf(&str, "SHA: %s\n", c)
}
if b.BuildNum != "" {
fmt.Fprintf(&str, "Build: %s\n", b.BuildNum)
}
if b.Branch != "" {
fmt.Fprintf(&str, "Branch: %s\n", b.Branch)
}
if b.Time != "" {
fmt.Fprintf(&str, "Time: %s\n", b.Time)
}
return str.String()
}
|
STACK_EDU
|
Welcome to this comprehensive guide to the margins command in Stata. As a powerful statistical software package, Stata provides researchers and data analysts with a wide range of tools for analyzing and interpreting data. The margins command is one such tool that plays a critical role in the estimation and interpretation of marginal effects. In this article, we will explore the various features and applications of the margins command so that you can use it to its full potential in your scientific analyses.
What is the margins command?
The margins command in Stata is used to estimate and interpret the marginal effects of covariates on the outcome variable in a statistical model. It provides a flexible framework for calculating and presenting the average or marginal effects of independent variables on the dependent variable, even in complex statistical models. The margins command is particularly useful when dealing with nonlinear models, interactions, and complex survey designs.
By default, the margins command calculates average marginal effects (AMEs) of the independent variables. An AME is the expected change in the outcome for a one-unit change in an independent variable, averaged over the observations in the sample. The margins command can also estimate other quantities, such as predicted margins, marginal means, and marginal effects evaluated with the covariates held at their means or at other specified values.
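To make the idea concrete, here is a small Python sketch (not Stata output; the coefficients and sample values are made up) that computes an AME for a logistic model by averaging the derivative of the predicted probability over a sample:

```python
import math

# Hypothetical logistic model: P(y=1) = logistic(b0 + b1*x).
# The AME of x averages the derivative dP/dx = b1 * p * (1 - p)
# over the observations in the sample.
b0, b1 = -1.0, 0.8                 # assumed coefficients
xs = [0.0, 0.5, 1.0, 1.5, 2.0]     # assumed sample values of x

def p(x):
    """Predicted probability under the logistic model."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))

ame = sum(b1 * p(x) * (1.0 - p(x)) for x in xs) / len(xs)
print(round(ame, 4))
```

This is exactly the quantity Stata reports as the AME after a logit fit, computed analytically rather than via the delta method.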
Estimating marginal effects with the margins command
Using the margins command in Stata is a straightforward process. After fitting a statistical model using a command such as “regress” or “logit,” you can simply type “margins” followed by various options to specify the type of marginal effects you want and the variables of interest. Let’s look at an example to illustrate this:
regress dependent_var independent_var1 independent_var2 control_var1 control_var2
margins, dydx(independent_var1) at((mean) control_var1) post
In the example above, we used the “regress” command to estimate a regression model with “dependent_var” as the outcome variable and “independent_var1”, “independent_var2”, “control_var1” and “control_var2” as predictors. The “margins” command then estimates the marginal effect of “independent_var1” while holding “control_var1” at its mean. The “dydx()” option requests the average marginal effect (AME), i.e. the derivative of the expected outcome with respect to “independent_var1”. The “post” option posts the margins results as estimation results so that postestimation commands (such as test or lincom) can use them.
The margins command also provides additional options to customize the estimation and presentation of marginal effects. These options include specifying subpopulations, interaction effects, robust standard errors, and plotting graphs to visualize the marginal effects. By using these options, you can tailor the analysis to your specific research questions and data.
Interpreting marginal effects with the margins command
Interpreting the results of the margins command requires a careful understanding of the underlying statistical model and the specific options chosen. The estimated marginal effects are usually reported along with their standard errors, confidence intervals, and significance levels. These statistics help to assess the precision and statistical significance of the estimated effects.
When interpreting the marginal effects, it is important to consider the size and nature of the dependent and independent variables. For continuous independent variables, the marginal effects represent the change in the expected outcome associated with a one-unit change in the independent variable. For categorical variables, marginal effects compare expected outcomes across categories, typically using a reference category as a baseline.
It is also important to interpret the marginal effects in the context of the specific model and assumptions made. Non-linear models, interactions, and complex survey designs can add complexity to the interpretation. Therefore, it is recommended to consult statistical textbooks, academic resources, or seek expert advice when dealing with such scenarios.
Advanced Applications of the Margins Command
While we have covered the basics of using the margins command, it is worth highlighting some advanced applications that can further enhance your analysis in Stata. Here are some notable applications:
1. Interaction effects: The margins command allows you to estimate and interpret the marginal effects of interaction terms. By specifying the interaction terms in the margins command, you can assess how the effects of one variable on the outcome differ across different levels of another variable.
2. Nonlinear models: The margins command is not limited to linear models. It can be used with several nonlinear models, such as generalized linear models (e.g., logistic regression), survival models, and multinomial models. This flexibility allows marginal effects to be estimated and interpreted in a wide variety of statistical settings.
3. Robust standard errors: margins computes delta-method standard errors based on the variance estimates of the fitted model. If the model itself is estimated with the “vce(robust)” option (for example, regress y x, vce(robust)), the resulting marginal-effect standard errors account for heteroscedasticity and provide more reliable inference when the assumption of homoscedasticity is violated.
4. Subpopulation analysis: The margins command allows you to estimate and compare marginal effects for different subpopulations within your data. By specifying the “at()” option with different values for relevant variables, you can obtain marginal effects specific to particular groups or levels of interest.
5. Visualization: Stata provides several graphical options for visualizing the estimated marginal effects using the “marginsplot” command. This command generates graphs that show the marginal effects with confidence intervals, allowing for a visual representation of the results.
These advanced applications of the margins command allow researchers to perform sophisticated analyses and gain deeper insight into their data. However, it is important to carefully consider the assumptions and limitations of the chosen statistical model, as these can affect the validity and interpretation of the estimated marginal effects.
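For point 1 above, the role of an interaction term can be sketched numerically in Python (hypothetical coefficients, not Stata output): in a linear model with an interaction, the marginal effect of one variable varies with the level of the other.

```python
# Hypothetical linear model with an interaction:
#   E[y] = b0 + b1*x1 + b2*x2 + b3*x1*x2
# The marginal effect of x1 then depends on x2:
#   dE[y]/dx1 = b1 + b3*x2
b0, b1, b2, b3 = 1.0, 0.5, -0.3, 0.2   # assumed coefficients

for x2 in (0, 1, 2):
    effect = b1 + b3 * x2
    print(f"dE[y]/dx1 at x2={x2}: {effect:.2f}")
```

This is the same calculation Stata performs when you combine dydx() with at() over several values of the interacting variable.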
Stata’s margins command is a powerful tool for estimating and interpreting marginal effects in statistical models. It provides researchers and data analysts with a flexible framework for assessing the average or differential effects of independent variables on the outcome variable. By understanding how to use and interpret the margins command, you can improve your scientific analyses and gain valuable insights from your data.
Remember to consult the relevant literature, statistical textbooks, or seek expert advice when using the margins command in complex scenarios. With practice and a solid understanding of the underlying statistical models, you can realize the full potential of the margins command to conduct rigorous and insightful scientific research in Stata.
What does the margins command do in Stata?
The margins command in Stata is used to calculate and display the marginal effects of independent variables on a dependent variable after estimating a statistical model. It allows you to estimate the average change in the outcome variable for a one-unit change in the independent variable, while holding all other variables constant.
How do you use the margins command in Stata?
To use the margins command in Stata, you first need to estimate a statistical model using a command such as logit. Once the model is estimated, you can run margins to calculate the marginal effects. You can also specify additional options to customize the output, such as the values of the independent variables at which to calculate the marginal effects.
What are some common options used with the margins command in Stata?
Some common options used with the margins command in Stata include:
at(): Specifies the values of the independent variables at which to calculate the marginal effects.
dydx(): Calculates marginal effects as derivatives of the response with respect to the specified variables (discrete changes for factor variables).
contrast(): Calculates the marginal effects for specific contrasts of the independent variables.
expression(): Allows you to specify a mathematical expression to calculate the marginal effects.
post: Posts the results of the margins command as estimation results for further analysis or manipulation.
What types of models can the margins command be used with in Stata?
The margins command in Stata can be used with a wide range of statistical models, including linear regression models (regress), logistic regression models (logit), probit models (probit), Poisson regression models (poisson), and many others. It can also be used with models that include interaction terms or other complex specifications.
What does it mean when the margins command reports “marginal effects at the means”?
When the margins command in Stata reports “marginal effects at the means,” it means the marginal effects are evaluated with the independent variables set to their mean values. This is useful when you want to estimate the effect of an independent variable for an “average” observation, with all other variables held at their means.
|
OPCFW_CODE
|
So last week during my internship, one of the tasks assigned to me was to follow some random profiles on Instagram through the company’s page. You might wonder why anyone would do that; I thought the same, but when I tried, it actually worked. Instagram’s algorithm suggests your account to people if you are active and follow people on a daily basis. So I started following some profiles at random, and we even got a few people following us back, but that wasn’t enough. I was also very, very bored following accounts like a robot, doing a repetitive task.
The making & fun part :
I thought to myself: why not make a bot that does this work for me faster and more easily? I had some programming knowledge thanks to my Engineering in CS (hi to all my Engineering peeps). I used Python with the Selenium WebDriver to build my bot, and it actually worked. You just mention your username and password, then leave the rest to the bot; it automatically searches for profiles and follows them. You can also set the number of profiles you want to follow; there is no limit! I had trouble initially with some random pop-ups on the Instagram page that my bot wasn’t prepared for, and I also followed way too many people on the very first day, which got me a ban that prevented me from following anyone for the next 24 hours, so modify the script to avoid this :). At last, with some modifications, the script was completely ready and all set to rock Instagram.
After a week of running this bot, it was following 700 profiles and had 184 followers, which in fact is pretty great for an account that started with no followers, no posts, or anything else in its favor. The more people you follow, the faster the number of people following you grows. The bot works very well and you can try it with your account too, but beware: following too many profiles can get you banned from Instagram. There is also no limit to the functionality you can add to this bot; you could make a bot that likes profiles and even comments on random or targeted posts through the tags.
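One way to avoid the 24-hour ban mentioned above is to cap and throttle the follow loop. A sketch (the daily cap and pause lengths are guesses, not Instagram's documented limits):

```python
import random
import time

MAX_FOLLOWS_PER_DAY = 100  # assumed safe cap; Instagram's real limits are undocumented

def follow_all(profile_urls, follow_one, pause=None):
    """Follow at most MAX_FOLLOWS_PER_DAY profiles, pausing between actions."""
    if pause is None:
        # Human-ish random delay between follows
        pause = lambda: time.sleep(random.uniform(20, 60))
    followed = 0
    for url in profile_urls:
        if followed >= MAX_FOLLOWS_PER_DAY:
            break
        follow_one(url)  # caller supplies the selenium click logic
        followed += 1
        pause()
    return followed
```

Passing the pause in as a callable also makes the loop easy to test without actually sleeping.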
The program is written in python and even a little knowledge of it is enough to understand the code, I used the selenium web driver to implement this program, here is the code if you want to try it for yourself. I will be giving a GitHub link to the code very soon, stay connected ;)
Where is the Code?
Below is the Python script that makes all these possible.
from selenium import webdriver
import time

# Log in to Instagram
driver = webdriver.Firefox()
driver.get("https://www.instagram.com")
time.sleep(2)
login = driver.find_element_by_xpath("//a[@href='/accounts/login/?source=auth_switcher']")
login.click()
time.sleep(5)
usern = driver.find_element_by_xpath("//input[@name='username']")
usern.send_keys("botinsta21")
passw = driver.find_element_by_xpath("//input[@name='password']")
passw.send_keys("123BOT")
time.sleep(4)
enter = driver.find_element_by_class_name("_0mzm-.sqdOP.L3NKy")
enter.click()
time.sleep(4)

# Open the Explore page, dismissing the notifications pop-up if it gets in the way
followpa = driver.find_element_by_class_name("glyphsSpriteCompass__outline__24__grey_9.u-__7")
popup = driver.find_element_by_class_name("aOOlW.HoLwm ")
try:
    followpa.click()
except Exception:
    popup.click()
    followpa.click()
time.sleep(2)

# Page through the suggestions carousel
nextpg = driver.find_element_by_class_name("Kf8kP.coreSpritePagingChevron")
for _ in range(5):
    nextpg.click()

# Collect the suggested profile links and follow each one
followbut = driver.find_elements_by_class_name("_2dbep.qNELH.kIKUG")
idd = [elem.get_attribute("href") for elem in followbut]
for i in idd:
    driver.get(i)
    followacc = driver.find_element_by_class_name("_5f5mN.jIbKX._6VtSN.yZn4P")
    followacc.click()
    time.sleep(3)
time.sleep(5)
|
OPCFW_CODE
|
// | / |
// ' / __| _` | __| _ \ __|
// . \ | ( | | ( |\__ `
// _|\_\_| \__,_|\__|\___/ ____/
// Multi-Physics
//
// License: BSD License
// Kratos default license: kratos/license.txt
//
// Main authors: Vicente Mataix Ferrandiz
//
// System includes
#include <algorithm>
#include <iostream>
#include <cctype>
#include <sstream>
// External includes
// Project includes
#include "utilities/string_utilities.h"
namespace Kratos::StringUtilities {
std::string ConvertCamelCaseToSnakeCase(const std::string& rString)
{
if (rString.empty()) return rString; // Guard: rString[0] below requires a non-empty string
std::string str(1, tolower(rString[0]));
// First place underscores between contiguous lower and upper case letters.
// For example, `_LowerCamelCase` becomes `_Lower_Camel_Case`.
for (auto it = rString.begin() + 1; it != rString.end(); ++it) {
if (isupper(*it) && *(it-1) != '_' && islower(*(it-1))) {
str += "_";
}
str += *it;
}
// Then convert it to lower case.
std::transform(str.begin(), str.end(), str.begin(), ::tolower);
return str;
}
/***********************************************************************************/
/***********************************************************************************/
std::string ConvertSnakeCaseToCamelCase(const std::string& rString)
{
std::string output;
if (!rString.empty()) {
output.reserve(rString.size());
bool upper_switch = rString[0] == '_' ? false : true;
for (auto character : rString) {
KRATOS_ERROR_IF(!(std::isalnum(character) || character == '_') || std::isupper(character))
<< "Invalid character '" << character
<<"' in snake case string '" << rString << '\'';
if (character == '_') {
KRATOS_ERROR_IF(upper_switch)
<< "Repeated underscores in snake case string '" << rString << '\'';
upper_switch = true;
} else { // character != '_'
// At this point, the character must be in [a-z0-9]
if (upper_switch) {
output.push_back(std::toupper(character));
upper_switch = false;
} else { // !upper_switch
output.push_back(character);
} // else (upper_switch)
} // else (character == '_')
} // for character in rString
} // if rString
return output;
}
/***********************************************************************************/
/***********************************************************************************/
std::string ErasePartialString(
const std::string& rMainString,
const std::string& rToErase
)
{
// Value to return
std::string sub_string = rMainString;
// Search for the substring in string
std::size_t pos = sub_string.find(rToErase);
if (pos != std::string::npos) {
// If found then erase it from string
sub_string.erase(pos, rToErase.length());
}
return sub_string;
}
/***********************************************************************************/
/***********************************************************************************/
bool ContainsPartialString(
const std::string& rMainString,
const std::string& rToCheck
)
{
// Return true if the substring occurs in the main string
return rMainString.find(rToCheck) != std::string::npos;
}
/***********************************************************************************/
/***********************************************************************************/
std::string RemoveWhiteSpaces(const std::string& rString)
{
// Value to return
std::string output;
for(char c : rString) {
if(!std::isspace(c)) {
output += c ;
}
}
return output;
}
/***********************************************************************************/
/***********************************************************************************/
std::vector<std::string> SplitStringByDelimiter(
const std::string& rString,
const char Delimiter
)
{
std::istringstream ss(rString);
std::string token;
std::vector<std::string> splitted_string;
while(std::getline(ss, token, Delimiter)) {
splitted_string.push_back(token);
}
return splitted_string;
}
/***********************************************************************************/
/***********************************************************************************/
std::string ReplaceAllSubstrings(
const std::string& rInputString,
const std::string& rStringToBeReplaced,
const std::string& rStringToReplace
)
{
std::string output_string(rInputString);
std::size_t start_pos = 0;
while((start_pos = output_string.find(rStringToBeReplaced, start_pos)) != std::string::npos) {
output_string.replace(start_pos, rStringToBeReplaced.length(), rStringToReplace);
start_pos += rStringToReplace.length(); // Handles case where 'to' is a substring of 'from'
}
return output_string;
}
} // namespace Kratos::StringUtilities
|
STACK_EDU
|
This is not a trivial matter, but like any other, it can be resolved with perseverance. We have persevered, with the following pleasant results.
KDE Window Rules are stored in ~/.config/kwinrulesrc.
The (not very complete) available documentation for KDE activities and window rules:
describes the various options for parameter settings but does not connect those (alphabetical) choices to the numeric choices in kwinrulesrc.
kwinrulesrc parameter blocks
This file comprises a block of text for each window that is opened in the course of a Linux KDE session.
Each block contains two primary fields ([n] and Description) and then a number of parameter and rule pair fields:
Sequence number field [n]
The [n] field is a sequence number determining the order in which the window rule for a particular window appears in the Window Rule manager application.
The Description field value is an arbitrary string. We choose to name them canonically as [Application] [Activity], so an alphabetical sort groups the rules by application.
A complete list of all possible fields is available; not all are present in each rule block.
The parameter pair fields comprise the parameter name and the parameter rule. Each may be either the default, explicit, or absent altogether.
- Default rules are indicated in the canonical form name[$d]
- Explicit rules are indicated in the canonical form name=value
- Absent rules appear inactive and greyed out in the Window Rules manager.
We must ensure that the rule exists in the block if we wish to change its value. There may be a clever way to use sed or awk to insert a missing rule between two existing rules, but we haven’t taken the time to figure that out. It’s simply more convenient at present to edit kwinrulesrc in the Window Rules editor to select (and thereby activate) the desired parameter(s), after which the file can be edited with automation.
Numerical rule values
We have experimented extensively and come up with what appears to be the correct list, but there is more experimentation needed and insufficient time at present to do it. The list appears to be:
1 Do not affect
3 Apply initially
5 Apply now
6 Force temporarily
We would like to be able to specify a set of parameters to be applied initially when a window opens. In particular, we wish to specify
- The activity or activities in which it appears
- Its initial size and position
The default settings for the various parameters seem to be stored in code, not in a specific file, so we cannot simply edit such a file.
What we can do is:
- Determine the alphanumeric values for the parameters of interest
- Edit kwinrulesrc globally to embed those values.
- Restart the window manager kwin with a new kwinrulesrc.
Determine the alphanumeric values for the parameters of interest
- Activity values
Activities are specified by a complicated hexadecimal number determined by your system. See Shortcuts
Our values are (YMMV):
- Positional values
Our values are (YMMV):
desktop=1
desktoprule=4            # Force
ignoregeometry=True
ignoregeometryrule=4     # Force
placement=Centered
placementrule=4          # Force
position=179,111
positionrule=5           # Apply now
size=1250,700
sizerule=3               # Apply initially
strictgeometry=True
strictgeometryrule=4     # Force
titlematch=2
Editing the rules
- We cannot edit kwinrulesrc directly because kwin detects any changes and overwrites them with the values in memory.
- We copy kwinrulesrc to kwinrulesrc.lcd to make the edits and then transfer the changes when we are done.
- We cannot use a text editor to change a parameter rule because the parameter value and rule start with the same string.
- We use some clever sed code to first change all default [$d] strings to an explicit string, then another bit of sed code to change any explicit value to the desired value:
We use sed in a script kwin_edit to convert default rules to explicit rules:
sed -i 's/parameter\[\$d\]/parameter=1/g' ~/.config/kwinrulesrc.lcd
changes parameter[$d] to parameter=1 globally (/g) throughout the file, and
sed -i 's/parameterrule\[\$d\]/parameterrule=1/g' ~/.config/kwinrulesrc.lcd
changes parameterrule[$d] to parameterrule=1 globally (/g) throughout the file.
We then use sed to globally change the now-explicit value to the desired value:
sed -i 's/parameter=.*/parameter=value/g' ~/.config/kwinrulesrc.lcd
sed -i 's/parameterrule=.*/parameterrule=value/g' ~/.config/kwinrulesrc.lcd
We repeat this for each parameter and rule.
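If the sed quoting around [$d] gets fiddly, the same two-step rewrite can be sketched in Python (a hypothetical helper, not part of KDE):

```python
import re

def set_rule(text, name, value):
    """Set parameter `name` to `value` in kwinrulesrc-style text.

    Step 1: turn any default marker `name[$d]` into an explicit `name=1`.
    Step 2: overwrite whatever explicit value is present with `value`.
    """
    text = text.replace(name + "[$d]", name + "=1")
    return re.sub(r"(?m)^" + re.escape(name) + r"=.*$",
                  name + "=" + value, text)

# Toy fragment standing in for kwinrulesrc.lcd
sample = "position[$d]\npositionrule=2\n"
sample = set_rule(sample, "position", "179,111")
sample = set_rule(sample, "positionrule", "5")
print(sample)
```

The anchored regex means "position" will not accidentally clobber "positionrule", which is the same property the sed patterns rely on.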
You must then revisit
and set all its values to default [$d]. It will set the entire screen and task bar off if you do not make this correction.
- We use msort to sort alphabetically by Description
- We use Renumber to renumber the blocks sequentially
Restart the window manager kwin with a new kwinrulesrc
The kwin_replace script:
- Stops the current version of kwin with the kill -9 switch
- Copies kwinrulesrc.lcd to kwinrulesrc
- Restarts kwin with the --replace switch
Now the Window Rules appear alphabetically sorted by Description in the Windows Rule manager and all windows begin Centered on the screen (at position 179,111, the default) with a size of 1250,700 in the specified activity.
Easy when you know how.
|
OPCFW_CODE
|
Mud snakes can lay over 100 eggs at a single time!
Mud Snake Scientific Classification
- Scientific name: Farancia abacura
Mud Snake Facts
“Rainbow Colored Snake of the South”
If you look in a drainage ditch or a slow stream in the South and you see a little snake lying in a perfect circle at the bottom, it may be a mud snake. If you pick it up and it has a black and red belly and tries to poke you with its tail, it’s almost certainly a mud snake. Don’t worry, it’s not venomous and it won’t even try to bite you. Take a look at it, put it back in its stream and watch it swim away.
Mud Snake Amazing Facts
Here are four amazing facts about mud snakes.
- Mud snakes are born with tails that come to a sharp point, which is rare for a snake. They are sharp enough to stab the mouth of a predator that’s trying to eat them.
- Eastern mud snakes are endemic to the southeastern United States.
- The snake is semi-aquatic, and its favorite meal is the giant salamander.
- They’re sometimes called hoop snakes because people believe the mud snake bites the end of its tail, forms a hoop and rolls downhill. This is a myth.
Where To Find Mud Snakes
Mud snakes are found in the swamps of southern states such as Mississippi, Louisiana, Virginia, Kentucky, Georgia, Florida, Tennessee and the Carolinas. They prefer ponds or lakes with muddy bottoms, which gives them their common name. They’re also found at river edges, in sluggish streams, bays, irrigation ditches and salt marshes. As a semi-aquatic snake, the mud snake spends most of its time in the water and only leaves to hibernate or breed. The snake will also look for a new body of water when the one it lives in dries up. It can thrive in both fresh and brackish water.
Mud Snake Scientific Name
The scientific name of the mud snake is Farancia abacura. No one seems to know what Farancia means, but the genus features moderate sized and innocuous snakes that seem to come in a rainbow of colors. On the other hand, the epithet abacura is from the Latin word “abacus.” The abacus is a device that allows you to count by moving colored beads. This calls to mind the red-bellied snake’s beautifully colored scales. It has two subspecies:
- Farancia abacura abacura
- Farancia abacura reinwardtii
The Different Types of Mud Snake
There are two subspecies of mud snakes. The first, F. a. abacura, is the eastern mud snake and the nominate subspecies. The second, F. a. reinwardtii, is the western mud snake. The only real difference between the eastern and western mud snake is that the western mud snake is found more along the Gulf of Mexico and can be found as far north as Illinois. Western mud snakes also seem to prefer habitats that are a bit more stagnant and muddy than the habitats of the nominate subspecies.
Mud Snake Population & Conservation Status
The mud snake is not a rare snake, though it’s a rare thing to see one. It is fairly common in its range and its population is stable.
According to the IUCN Redlist, the mud snake is of least concern. The one problem they may have is habitat fragmentation, for it may require the snake to cross a road to get from one place to the other. This puts them at risk of being run over.
How To Identify Mud Snakes: Appearance and Description
Mud snakes are not very large snakes, ranging from a little over a foot to about 4 feet in length. Their dorsal scales are smooth and glossy black, and they have red and black bellies. The red comes up the snake's sides to form vibrant reddish bars. Females tend to be more robust than males, but males have longer tails. The tails end in a spine that's especially sharp when the snake is a baby. They have flat heads, small tongues and tiny eyes, adaptations that allow them to more easily move through the muddy or sandy bottoms of bodies of water.
Mud Snake Venom: How Dangerous Are They?
Mud snakes are nonvenomous and harmless to people.
Mud Snake Behavior and Humans
Mud snakes can be both nocturnal and diurnal, and snakes that live in the southern part of the range often hunt during the day. They don’t like to be out in storms and will hide in burrows or under water plants to avoid them. They spend much of their time in the water or in mud, but they leave in order to lay eggs. Snakes in the colder regions must also find a hole in the ground near their body of water in which to brumate, and they don’t emerge until March.
Mud snakes are generally solitary until the mating season. Males and females release pheromones from their skin, and if they are handled, they can release a bad-smelling liquid from the cloaca.
The price of being small and nonvenomous is to have to contend with a variety of predators, including mammals such as raccoons, larger snakes such as cottonmouths, alligators and wading birds. The mud snake tries to protect itself by curling up and displaying the bright colors of its belly. This might remind a would-be predator of the venomous, rainbow colored coral snake and cause them to back off.
The harmless, red-bellied mud snake is sometimes kept as a pet. The price is about $90 per snake plus the price of an enclosure, food, and other necessities. Other than this, it doesn’t have much impact on humans. It’s nonvenomous, but as it’s semi-aquatic it doesn’t eat vermin such as rats or mice.
Mud snakes reproduce from March to April in Florida and from July to September everywhere else. Females only lay one clutch of eggs a year, but they can lay between six and as many as 111 eggs. The average is about 27. Before mating, the male uses pheromones to attract her. Competing males engage in combat to see who can pin the other’s head down. The winner gets to mate.
After the female lays her eggs she coils her body over them and remains until they hatch. She only leaves to hunt or molt. Biologists don’t believe this is protective behavior because when the female is approached she won’t make a threat display or try to bite. She will simply slink away.
Baby snakes hatch after about 56 days and are independent immediately. They are ready to breed when they are about two and a half years old and can live for as long as 19 years.
- Banded water snakes also live near water.
- Dekay’s brown snakes are nonvenomous snakes that live in much of the United States.
- Rim Rock Crowned snakes are small harmless snakes.
Mud Snake FAQs (Frequently Asked Questions)
Are mud snakes venomous?
Mud snakes are nonvenomous.
How do mud snakes hunt?
Mud snakes seem to chase their prey and will use the hardened top of their tail to probe in the mud for it. Once it grabs hold of prey its large back teeth help hold it steady while the snake swallows it. This can take a while, as a grown amphiuma, or conger eel can be almost as long as the snake itself.
Are mud snakes aggressive?
Mud snakes aren’t aggressive, but they may poke you with the sharp tip of their tail or evacuate their bowels if they don’t want to be handled. The tails of baby snakes are especially pointy.
Where do mud snakes live?
Mud snakes live in swampy or marshy areas in the southeastern United States.
What do mud snakes eat?
What is a mud snake?
A mud snake is a beautiful, rather small, red-bellied snake that’s found in the southern United States.
Is the mud snake poisonous?
Mud snakes are not poisonous. Some people believe that they can sting with their tails because their tails are sharp, and they use them as probes. But mud snakes do not have stingers.
Are mud snakes docile?
Mud snakes appear to be quite docile, and some people keep them as pets.
Do mud snakes have teeth?
Mud snakes do indeed have teeth. In fact, the teeth at the back of their mouth are bigger than those at the front. This allows them to bite and hold on to slippery prey such as salamanders, amphiumas, sirens, and fish.
Are mud snakes in Florida?
There are mud snakes in Florida and many other southeastern states.
Hi, can anyone please tell me how long it will take to learn Unix? I have 0% knowledge of Unix. I need to learn UNIX efficiently. Thanks in advance.
Years. You should read:
The UNIX Programming Environment by Kernighan and Pike
Advanced UNIX Programming by Rochkind
The Design of the UNIX Operating System by Bach
If you need a system then *BSD is probably what you should be looking for as the BSDs are arguably the most UNIX-like systems available.
A better option may be GNU/Linux which is less UNIX-like but has a wider range of support than the BSDs.
If you understand DOS commands, Unix commands will take only a few days to learn, since DOS borrowed heavily from UNIX.
Modern Unix systems also use a GUI that looks much like any other GUI on servers.
The placement of files follows a different hierarchy than a Windows system uses. Although not hard to learn, things are in different places.
With a system in front of you, a couple of days to a week will make you very familiar with the system. Reading a book only, at least a week.
Now keep in mind there are many flavors of Unix, all tooting their own horns and having benefits, but all operating on the same basic principles. Red Hat, other Linux distributions, and the various Unix descendants are very good, just a little different from each other.
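To give a feel for the DOS-to-Unix jump mentioned above, here is a small, runnable sketch pairing a few common DOS commands with their rough Unix equivalents (the file names are just examples):

```shell
# Rough DOS-to-Unix equivalents: dir -> ls, copy -> cp, del -> rm,
# type -> cat, ren -> mv. The commands below exercise each one.
workdir=$(mktemp -d)
echo "hello" > "$workdir/a.txt"       # like: echo hello > a.txt
cp "$workdir/a.txt" "$workdir/b.txt"  # like: copy a.txt b.txt
cat "$workdir/b.txt"                  # like: type b.txt   (prints "hello")
rm "$workdir/a.txt"                   # like: del a.txt
ls "$workdir"                         # like: dir          (prints "b.txt")
```

The mapping is not one-to-one (options and wildcard behavior differ), but the basic file-management vocabulary carries over almost directly.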
We aren’t you, so we can’t give you a timeline and all that. The question about learning Unix is actually like all those questions about learning Linux, meaning Ubuntu, Linux Mint and so forth. Which Unix, and which part of the operating system? In fact, when I learned Linux in the nineties (because my school couldn’t afford UNIX) my understanding of what an operating system is changed. Linux began as a free kernel to run UNIX programs. It still is, substantially, though much of the development has shifted to Linux itself; Solaris, a UNIX, runs a LOT of Linux programs, as does FreeBSD. There are differences under the hood, but the common user doesn’t notice them.
The BASICS of the file system and command line will take anything from a few hours to a few days. RIGHT above that, though, are a number of archiving and redirection tools which, once you understand them, make the command line so powerful that even today there are people who can usually get along without a windowing desktop. The flip side, of course, is that they can be murder. And even with self-study you will be starting on them without a break.
The other issue you have to get used to is choice: Linux and UNIX are modular operating systems. Distributions really differ in which desktop comes with X-Windows. The default window manager that ships with X-Windows is twm. It dates back to the eighties and you really don’t want to try it (though even on Ubuntu it’s certainly possible). Most Linux and UNIX distributions let users choose their own desktops. Even Ubuntu will allow you to switch them straightforwardly, without reinstalling.
So it can be a short time, and it can be years. It depends on what you are trying to do with it.
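The redirection and piping tools mentioned above are worth a concrete taste; this short, runnable example uses only standard POSIX utilities (the file names are invented):

```shell
# A taste of the redirection and piping tools, using only POSIX utilities.
tmp=$(mktemp -d)

# > redirects stdout to a file; >> appends to it.
printf 'pear\napple\nbanana\n' > "$tmp/fruit.txt"
printf 'cherry\n' >> "$tmp/fruit.txt"

# | pipes one command's output into the next: sort the file, count lines.
count=$(sort "$tmp/fruit.txt" | wc -l | tr -d ' ')
echo "$count lines in the file"       # 4 lines

# < redirects a file into a command's stdin.
sort < "$tmp/fruit.txt" | head -n 1   # prints "apple"
```

Chaining small tools like this is exactly what makes some people productive on the command line without ever opening a graphical desktop.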
Ok, I’m at it again with Part 2 of this blog series about using Sencha Touch, more specifically Sencha Architect to create a mobile application. In this part of the series I will be covering the following topics:
- Defining the Goals
- Defining the Requirements
- Defining our Application Structure
- Detailing the functionality of each Screen and how I plan to achieve it
Defining the Functionality
Ok, the first thing we are going to need to do is define what we want and the functionality that the application is going to have. In order to do that though, we need to know what our goals and our requirements are. I touched on this in my previous blog post so I will just give you a brief outline since the goals and requirements are not that complex.
- Presentation piece for BlueBolt to demo our capabilities in the mobile space
- Demonstrate the capabilities of Sencha Touch platform
- Deploy the BlueBolt content in the mobile space
- Leverage PackFlash Constellation and DotNetNuke as a content management back-end tool for a mobile application
- Use Sencha Touch to integrate with DotNetNuke and PackFlash News
- Present our Services
- Present our Case Studies
- Present our Blog content
Now that we have a clear and simple definition of our goals and requirements, we can determine our site structure and functionality. Again, we are just trying to keep this simple, so here is an outline of the site structure:
1. Loading Screen
2. Home Screen
a. Latest Published Items*
b. Navigation List*
c. Latest Tweet*
3. Services
a. List of services*
4. Case Studies
a. List of Case Studies*
5. Blogs
a. List of Blog Entries*
6. About
a. BlueBolt Process
7. Detail Screen
a. Content for the associated piece that was used to direct you to the screen.
*denotes page functionality
Generally what we do is use a spreadsheet to create a document that lays out all the pages of the site, the site map, each page’s functionality and each piece of content that is applicable to that page. When we are done with that we have a nice inventory list of everything that needs to be accounted for on the site.
Ok, so let’s go through each screen and briefly talk about what will be entailed in creating it. From here on out, though, instead of using the term screen, I will be referring to screens as Views. Sencha is based on MVC (Model-View-Controller) architecture, and using the term View will help you understand how everything is related when we get into more technical discussions.
The loading view is pretty simple, it’s just going to be an image that shows up on the device while it is loading up our application.
The home screen is going to be broken up into 3 or 4 pieces of functionality, I haven’t quite decided yet. We have been kicking around the idea of adding something to the home view that would allow our existing clients to easily submit bug tickets to our bug tracking system but that is still up in the air.
What the home view will have however is a navigation list that will link to all of the other views in the app. This list will be created as a list class and it will be connected to a static data store that will have the navigation data hard-coded into it. We will get into the details of how to do all of this later on in the series.
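As a rough preview of that idea, here is a plain-object sketch of a list backed by a hard-coded store. Real Sencha Touch code would use Ext.define/Ext.create; the view names in the store are assumptions based on the outline above:

```javascript
// Plain-object sketch of the navigation list and its hard-coded store.
// Real Sencha Touch code would use Ext.define/Ext.create; the view names
// here are assumptions, not the final app's identifiers.
const navStore = {
  data: [
    { title: "Services", view: "services" },
    { title: "Case Studies", view: "casestudies" },
    { title: "Blogs", view: "blogs" },
    { title: "About", view: "about" }
  ]
};

const navList = {
  xtype: "list",        // Sencha's list component
  itemTpl: "{title}",   // template rendered for each store record
  store: navStore
};

console.log(navList.store.data.length); // 4
```

The point is simply that the list's rows come from a data store and a per-record template, which is the pattern we will build on later in the series.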
The home view will also have a link to the 5 most recent pieces of content that we have added to the site. Essentially this will link to our blog view and will pass a variable to it that will be used to filter our blog content. We will also have a tie-in to our Twitter feed that will show the last 3 tweets that have gone out. We can achieve this in 2 ways: we can embed a Twitter widget into the html of the home view, or we can connect to the Twitter API and pull the information we want.
The services view essentially is another list...but lists are so boring and we are trying to make this a “show piece” or a “kitchen sink” type of application, so I’m not going to use a list the way you may think. Instead what I am going to use is a block of icons, each icon representing one of our services. Creating the icons should be pretty easy, they already exist on our regular site so creating this view is just a matter of using a few container classes nested inside one another. Each one of the icons will “link” to the detail page. When the detail page is loaded, the content will be pulled from the data store I set up for it and rendered onto the screen.
Case Studies View
The Case Studies view is also essentially another list but like I said before, lists are boring, so we are going to use the carousel class to present this information. Users will be able to swipe from one case study to the next. When the user clicks onto a case study, they will be directed to the detail view and there will be an ajax call that loads up the appropriate information.
The Blogs view: while lists are pretty boring, this view will be kind of text-heavy, so I am going to go ahead and just use a list class, though I will be doing some customization to the list class to achieve the desired result.
The detail view will be used for all three of our content types, Services, Case Studies and Blog posts. The detail view will consist of nested container classes. The important thing with the detail view is going to be the model that is used to define the data. The model then can be rendered onto the screen using what Sencha calls the ItemTpl or the html template for that view.
The about view will essentially look like the detail view and will be a template for any other static content that I may want to add to the application.
Aside from the views, the application will also have a navigation bar at the top and a contact button that will pull up a contact modal panel. This functionality will be universal to all views inside the application and will be driven by a NavigationView class, which will serve as the root class that all content is pushed into. The beauty of the NavigationView is that it handles your navigation history and manages your back button for you, so you don’t have to do anything special in the code; it’s built into the class. Take a look at the outline of the application below:
If you are interested in seeing the wireframes in action, you can find them here, http://prototypes.blueboltsolutions.com/YN8BJE/.
This wraps up this week’s blog post. In my next post, I will present the designs I have come up with and start to dig into the real technical stuff, going more in depth into the MVC architecture the site will use and talking a bit more about the classes I will be using.
Exporting internal library dependency to outer scope with browserify
I'm using browserify for requiring bluebird for cross-browser compliant usage of promises in my WebApi Wrapper for Dynamics CRM.
It works great so far, however I don't like that I can't do Promise.all for example on my returned call results in IE, without including bluebird again in my top page.
For this reason I'd like my browserify bundle to expose Promise to the global scope. Of course it would work to just do global.Promise = require("bluebird"), but that feels kind of dirty.
Using a standalone bundle, I managed to make my standalone expose both my client and Promise as properties. However, names get longer this way, and I would like to just be able to use Promise (without my standalone wrapper) everywhere.
What do you think about that? Is that possible or should I not do that?
What I do right now is define my client as IIFE, and inside it do the following:
module.exports = {
Client: WebApiClient,
Promise: Promise
};
and browserifying like browserify src/js/WebApiClient.js -d --standalone XrmWebApi -o Publish/WebApiClient.js.
This way I'm currently able to use XrmWebApi.Client and XrmWebApi.Promise, but I would very much like to get rid of the XrmWebApi for calling the promises.
Thanks for your help.
Kind Regards,
Florian
Create a new file for your global definitions and then add that as an entry point in your bundle step.
src/js/globals.js:
var bluebirdPromise = require("bluebird");
global.Promise = global.Promise || bluebirdPromise;
browserify src/js/globals.js src/js/WebApiClient.js -d -o Publish/WebApiClient.js
You will probably want to remove references to bluebird in your file src/js/WebApiClient.js and its dependencies and just use Promise.
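The conditional assignment in globals.js can be exercised in isolation; in this sketch a plain object stands in for require("bluebird") so the snippet runs without the dependency installed:

```javascript
// Stand-in for require("bluebird"); the real module exports a Promise class.
const bluebirdStandIn = { isStandIn: true };

// In a browserify bundle, `global` maps to the window object; in Node,
// globalThis is the equivalent.
const g = typeof window !== "undefined" ? window : globalThis;

// Install the polyfill only when the host has no native Promise.
g.Promise = g.Promise || bluebirdStandIn;

// Modern runtimes ship a native Promise, so the stand-in is not installed.
console.log(g.Promise === bluebirdStandIn); // false on runtimes with native Promise
```

This is why the pattern is safe to bundle everywhere: on browsers that already have Promise it is a no-op, and only older ones (like IE) get the polyfill.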
Do you know whether it is a better practice to leave my client definition as IIFE, or would it be better to expose it as module.exports?
I'm not sure what you are trying to achieve so I will guess. The browser is not going to understand your module.exports. I had a look at your last commits on the project and I am guessing you are wondering about the IIFE with respect to the globals.js that you have added. It seems unnecessary. You should see the output that browserify gives you just from browserify src/js/globals.js. It will show in your terminal and you will see that the code is already wrapped for you in an IIFE.
I would comment the line which requires bluebird because it's only going to make the output unreadable. You'll see the browserify surrounding code is minified. An unsquashed version is over at https://github.com/substack/browser-pack/blob/01d3989/prelude.js if you want to study it with a little more ease...
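Stripped of that module registry, the wrapper boils down to the classic IIFE module pattern. Here is a minimal hand-written equivalent; the names `XrmWebApi` and `retrieve` are assumptions for illustration, not the project's real API:

```javascript
// Hand-rolled IIFE exposing a namespace, roughly what --standalone produces
// (browserify's actual prelude also contains a require/module cache).
var XrmWebApi = (function () {
  // Private to the closure; only what is returned becomes public.
  var WebApiClient = { retrieve: function () { return "stub result"; } };

  return {
    Client: WebApiClient,
    Promise: Promise // re-export so callers can use XrmWebApi.Promise
  };
})();

console.log(XrmWebApi.Client.retrieve()); // "stub result"
```

Everything declared inside the function stays private; only the returned object is attached to the outer scope, which is the whole point of the pattern.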
Thanks for your answer. Yes, in fact I was trying to find the clearest way of exposing my modules. I guess it's ok as it is right now, since the IIFE in WebApiClient.js attaches my module to window or reuses an existing one. I checked the latest release, the bluebird require in WebApiClient does not seem to make it more unreadable? I wanted to add it so I can be sure that promise is always bluebird, even if the user has overwritten promise with something different.
By unreadable I just meant the output from browserify bundling your code would be noisy if you did a require in there so it would not be clear what browserify itself was adding in there. I tried to illustrate it better with this gist: https://gist.github.com/casr/6cb5de4aaf0a1f632e482cbbdab3e12f
Thanks for your clarification, I understand what you meant.
This glossary defines common terms that you might see as you're working with Kelp, or throughout our Documentation and Help Center.
An app is a web application created in Kelp. The app runs in a browser and provides interactive features to its users.
A component is a building block in Kelp. You create apps using components and connecting them with wires. Broadly, Kelp has two types of components: plain components and widgets. Widgets are visual components that show information to users. In the component library, however, components are kept in further categories, for example, utils, connectors, data processing components, and many others.
Composite components consist of core components or other composite components. You create composite components by grouping a piece of the app schema into a single composite component. Composite components help create different abstraction levels in Kelp.
The data gateway is a secure backend service between your app and external data sources. All API calls from Kelp apps go through the data gateway. The data gateway is multipurpose: Kelp uses it for monitoring, security, load balancing, caching, protocol translation, and so on. As far as security is concerned, the data gateway helps protect, manage, and share user credentials with external secure data sources. The data gateway doesn't store or log any app data. All the app data passing through the data gateway is end-to-end encrypted while in transit.
A data source connection (or a connection for short) is a configuration of settings that Kelp apps use to connect to external data sources. Depending on the data source, a connection contains various settings. For example, connections to secure APIs include necessary authentication information. Connections are reusable: you can use them inside one app or across multiple apps. Kelp has two types of connections: preconfigured and custom, which users add themselves. A list of preconfigured connections covers popular APIs (Google API, HubSpot API, Dropbox API, and more).
A data stream is a sequence of ongoing events ordered in time. A component that receives the data stream processes each piece at a time. Data streams usually pass events to the List widget or to another component.
A data transformation is a transformation of the data that flows between components as events. Each component has a specific data structure it expects to receive. If an in-port of a component receives data whose schema differs from the expected one, the component won’t operate as it’s supposed to. In this case, you need to apply a data transformation to the data the component receives. Kelp supports various JSON transformation languages, for example, KelpQL, JMESPath, JSPath, and others. To see which data schemas the in-ports of a component expect, open the component library and search for the component.
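To make the idea concrete, here is what such a transformation conceptually does, sketched in plain JavaScript rather than KelpQL or JMESPath; the event shapes below are invented for illustration:

```javascript
// Invented event shapes for illustration: an upstream component emits
// { items: [...] }, while a downstream List widget expects an array of
// { label, value } records.
const incomingEvent = {
  items: [
    { name: "CPU", load: 0.42 },
    { name: "RAM", load: 0.73 }
  ]
};

// The transformation maps the incoming schema onto the expected one.
const transformed = incomingEvent.items.map(item => ({
  label: item.name,
  value: item.load
}));

console.log(transformed.length); // 2
```

In Kelp the same reshaping would be expressed in one of the supported JSON transformation languages and attached to the wire or in-port, but the principle is identical: rename and restructure fields until the schema matches what the component expects.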
Events are containers that hold and transfer the response data of HTTP requests through wires. Kelp has two event types: data events and signals. When we mention "events" in the Kelp documentation, we mean data events in most cases. Data events can hold and transfer HTTP requests, constants, changes on a variable, ingested messages, and many other things.
The event listener is a debugging tool available in Developer and Preview modes. As an interface element, the event listener is a panel, which you can open. The event listener records data events and signals passing through the selected wires. You can then view the recorded data in chronological order.
Component grouping is useful if you want to do the following:
- Declutter the app schema. If you have a large number of components, the app schema becomes messy. To make it neat, you can hide multiple components in a group.
- Create a composite component. To create a composite component, you first need to group the constituent components. You decide which components to group. For example, you can group components that have the same objective or those related to a particular piece of the app logic.
Components use in-ports to receive events and out-ports to emit events. Each component has one or more in- and out-ports. To trigger a component, you don't need to connect all of its ports. The exact ports you need to connect depend on the component type and the desired functionality. Kelp has three port types: standard, queryable, and hybrid. To see which ports a component has, open the component library and search for the component.
In Preview mode, you can see how an app looks to users.
Queryable ports (or q-ports for short) are bidirectional ports. Unlike standard unidirectional ports, q-ports are pink in the Kelp interface. A q-port is, in fact, two standard ports joined together, where one port can only receive events while the other can only emit them. Q-ports can be connected with q-wires only. A real-life analogy of a q-port is a double socket.
Note: Apart from q-ports, Kelp also has hybrid ports, which can accept both standard and q-wires.
Queryable wires (or q-wires for short) are bidirectional wires that connect q-ports and hybrid ports. Unlike standard wires, q-wires are pink in the Kelp interface. A q-wire is, in fact, a bundle of two standard wires, where one wire is a forward data stream while the other is a backward data stream. You can split a q-wire into two standard wires, and vice versa: bundle two standard wires into a single q-wire. To do so, use the QPort Client and QPort Server components respectively.
The schema is the logic layer of an app, which you create in Developer mode. The schema is the combination of components, groups of components, and wires that you put on the canvas and connect. As a data structure, the whole schema or any part of it is a JSON object. You can copy and paste a part of the schema on the canvas of other apps.
A signal is the second event type: an event that controls the data flow rather than carrying data. Signals can do the following:
- Mark the beginning and end of a batch of events.
- Instruct widgets to display a progress bar.
- Reset buffer on ports, and so on.
Kelp has several types of built-in signals. Some components, such as the HTTP component, emit signals by default. In some cases, to control the data flow of your app schema, you may need to inject signals yourself.
A widget is a visual component you add to your app schema to design your app interface layout. You add widgets in Design mode. Widgets can show information to users as charts, graphs, icons, tabs, lists, cards, and so on.
As a concept, a wire is a data stream. As an interface element, a wire is a thin line that connects components. You define how data flows in your app by connecting components with wires. Kelp has two wire types: standard and queryable.
Why is Minecraft lagging?
I bought a new graphics card and power supply for my computer so that I could run games better. I got it and it works great: I can play games like Skyrim on full graphics with absolutely no lag. However, when I went back to play Minecraft on full-screen with upped settings, it still lags! Why is this? My computer should be able to run it with no problem.
My graphics card is a SAPPHIRE HD 7770 GHz Edition 1GB GDDR5
Minecraft can be CPU bound, and given your GPU, it probably is. What processor are you running?
Are you running vanilla Minecraft, or are you running with mods?
Processor: AMD Athlon II X4 635.
Running pure vanilla.
First off, you should ensure you are running the appropriate Java version. If you are a 64 bit machine then make sure you have the latest 64 bit version. If you are 32 bit then get the latest 32 bit version.
That being said, it's quite possible that you need to allocate Java more memory. Vanilla minecraft should run no problem on the default 1 GB of memory but if you install mods, that quickly becomes too little. If you're on Linux (and probably works on Mac OS too) then simply put the flag '-Xmx2G' into the java command you start minecraft with. This particular flag puts 2 Gigabytes (hence the 2G), if you wanted 3 Gigabytes you would use '-Xmx3G'. If you are on windows, then this guide should help you. If you are running vanilla minecraft and still get low FPS after giving it more memory it would be helpful for you to post your system specs (CPU, memory speed, memory amount, etc) as well as anything else you've tried already.
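For reference, the actual launch line would look like `java -Xmx2G -jar minecraft.jar` (the jar path is a placeholder). The snippet below just shows how the flag is assembled, so it runs even without Minecraft installed:

```shell
# -Xmx sets the JVM's maximum heap; -Xms sets the initial heap.
# The jar path below is a placeholder for wherever your minecraft.jar lives.
heap_gb=2
flag="-Xmx${heap_gb}G"
echo "java $flag -Xms1G -jar minecraft.jar"   # prints the command you would run
```

Change `heap_gb` to 3 to get `-Xmx3G`, and so on; the same flags work for a Minecraft server jar.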
Mojang says Minecraft requires 2GB of memory (https://help.mojang.com/customer/portal/articles/325948-minecraft-system-requirements)
I accidentally came across the answer myself, and now that I know what it is, it fits in perfectly with what everyone was saying.
It's pretty simple: I had my Minecraft (Java) server up and running while I tried playing Minecraft, and I again noticed the lag (I already knew that having the server up made some games run slow, such as Skyrim). I turned it off and whadya know! The lag disappeared! Now the reason I posted this question in the first place was because when I originally experienced the lag, I remember going and checking to make sure the server was off... and I thought it was. Anyway, I either made a mistake and thought the server was off when it wasn't, or whatever the problem was fixed itself.
Thanks for the tips guys!
You need to figure out if Minecraft is lagging or the whole machine is lagging. I had an instance where Minecraft was running slow on a friends computer and everything else was perfectly responsive. Increasing the memory allocation will fix this type of lag.
That said, please try willl459's answer first. If that doesn't get you up to speed, you may need to look at disk I/O: basically, how fast your computer can read and write to the hard drive. Larger worlds in Minecraft can take up a lot of HD space and take longer to load from disk as you move around. You may want to think about upgrading to an SSD. I have seen and heard of many successful Minecraft speed-ups from adding SSDs. They can be a bit pricey, though.
You can also try a program called Razer Game Booster. I was getting between 12-20 FPS, then installed this program and added Minecraft to the game list. When I played the game, I checked the FPS and it was consistently over 60 FPS.
Also, the following site has a lot of useful information, including the Game Booster link.
http://www.planetminecraft.com/blog/the-ultimate-guide-to-reduce-lag/
PhD studies taught how to be a good researcher
D.Sc. (Tech.) Julian Parker combines research and design work in the field of music technology. He is a popular visiting lecturer at Aalto University. Most recently he gave a talk in Otaniemi in March 2017.
D.Sc. (Tech.) Julian Parker, you have graduated from Aalto University in 2013. What have you studied?
I studied for my doctorate in the Acoustics & Signal Processing research group led by Professor Vesa Välimäki. My thesis was about computational modelling of the mechanical springs used in early methods of artificial reverberation. I’m originally from the UK. I did my Bachelor’s at the University of Cambridge studying Natural Sciences and moved to the University of Edinburgh to study for the MSc in Acoustics and Music Technology. After my Master’s studies I started to look for different PhD opportunities in my subject area. I heard about Aalto and Vesa’s group from my supervisor at Edinburgh.
How did you enter your current work and what do you do?
I work for the Native Instruments GmbH in Berlin, where I lead DSP development and research. The company is one of the market leaders in producing software and hardware for music production. I started to work there two weeks after I defended my thesis. My day to day work is quite similar to my activities at Aalto, but industry has different requirements. When making algorithms for customers to use in their music production, sound quality is the most important factor. Achieving this requires a lot of listening.
While working at Native Instruments GmbH you have also time to do research work. What do you research and why?
I have continued doing research in the same areas. I have published on a variety of topics including reverberation, physical modelling of both mechanical and electrical systems, and digital filter design. I think it is a great shame if you cannot contribute to the academic community after graduating, so I continue to research and publish whenever I can.
What are the most important qualities and lessons the doctoral education at Aalto University gave you?
In Aalto I learned how to be a good researcher. I learned how to choose the right idea for a paper, and especially how to present that idea. Of course, I also gained a lot of knowledge from the great experts in my field at Aalto, like Vesa and the late Prof. Matti Karjalainen. I was lucky to do my PhD studies in a great environment with great people.
I think the Aalto merger in 2010 was a great change. I took some courses from the design and arts parts of the university, and I found this extremely valuable in my development. It gave me a larger perspective on my studies.
What kind of advice would you give to students who are thinking about their own career paths?
It is really important to meet the right people, so attending conferences and networking is very important. Try to publish as much as you can, but don't forget other skills. Try to present your work so that anyone can understand it, not just researchers.
Compose spreads for your own book
Create ranges of representation using images
Design a book with your own images
Make informed design choices using image-based research
This course is for serious makers, and for students new to imagemaking. Imagemaking is a fluid and exciting area of graphic design that comes out of practice and process: experimenting fearlessly, showing and sharing ideas, and giving and receiving knowledgeable and constructive input.
For the sake of this online platform, we have applied some structure to our investigations, but for the most part imagemaking is loose and unstructured. If we must adopt a rule in this course, it is only this: you will not become a graphic designer by watching videos alone. In other words, don’t just make stuff in your head. The focus here is on making, and you are expected to devote serious time and intellectual energy to that activity in this course. Specifically, you will:
– experiment with a range of materials and techniques to make images for graphic design
– expand your visual vocabulary both in terms of making and talking about work, in order to discuss your work and work of others
– learn how to make, manipulate and arrange images to create compositions, eventually culminating in the design and production of an image-based book.
The first half of the course is an opportunity to experiment and explore imagemaking in order to expand your visual vocabulary. You will create pieces that are expressive, meditative, or ‘design-y’ to instigate, evoke, experiment, record, explain, or try out a media.
In the second two weeks, we’ll invite the images to deliberately and intentionally carry meaning and communication through relational moves like juxtaposition, composition, and context. We’ll look at developing and expanding the range of approaches for putting things together by composing page spreads with your images. Since nothing exists without context, we look at how to intentionally drive the image’s connotations, meanings, and associations generated through elements of composition and “visual contrasts.” Ultimately, we will take the images that you create and make a book from them.
The results of your assignments (and experiments) may generate something completely unknowable now or in the future—and that’s the goal.
Approx. 4 hours to complete
Week 1: Image-based Research
Welcome! This week we will introduce image-based research and investigate these images in a design context using a range of techniques. You will then choose a subject of your own to research that will become the basis for your experiments and designs in the coming weeks.
4 videos (9 min total), 6 readings, 2 quizzes
Approx. 4 hours to complete
Week 2: Making Images
This week is a making session! You will take the subject you selected last week and think about it in different ways. From simple to complex, hand to digital, realistic to abstract and everything in between, we'll explore and experiment freely. We'll also consider different modes of representation, and try a hand at creating our own ranges of representation with images.
3 videos (8 min total), 7 readings, 2 quizzes
Approx. 2 hours to complete
Week 3: Composition is Relational
In this third week, we will define and investigate core principles of composition, and begin to compose simple spreads for your book.
7 videos (19 min total), 1 reading, 1 quiz
Approx. 4 hours to complete
Week 4: Designing a Book with Your Images
In this final week, we will pull together your work from the previous weeks to make spreads using 10-20 of your favorite images from your assignments and compile them into an 8-page book. I am also sharing a few image-based books with you to give you some ideas on how to structure your own. Note that for the final assignment I am asking you to submit a digital book through Issuu (instructions are included in the assignment). If you choose to create your images by hand, you will still need a computer loaded with InDesign or a similar program to clean up the images and prepare the files for upload to the Coursera platform.
- If you have any questions, join our Discord server, email me directly, or drop a post on the ZDoom forum thread.
- Discord: https://discord.gg/RehCYqS
- Email: firstname.lastname@example.org
- ZDoom thread: https://forum.zdoom.org/viewtopic.php?f=43&t=54843
Have an idea? You can either open an Issue here on GitHub or make a suggestion for it on our Discord server. Bear in mind, issue-based suggestions opened here are much easier to keep track of.
Found a bug? Please specify in the title, via brackets, which version you're using, e.g.:
- [Alpha #] - Indicates the bug is in the 3.0 Alpha # version.
- [Git] - Indicates the bug is present in the GitHub version. Please include a link to the commit you are using if possible.
The following issues will be closed on sight:
- [2.0.3-] - This and earlier versions are no longer supported, as the mod has moved on.
- [Non-OpenGL Renderers] - The old Carmack ZDoom renderer support was dropped in favor of slowly gaining increased multiplayer support and higher quality visuals.
These rules are for contributing directly towards Doom4Doom via pull requests.
- No DECORATE or ACS allowed.
- DECORATE is now deprecated entirely. There's not a shred of DECORATE left except for the damagetypes.
- Anything you can do in ACS can now be done in ZScript. If you need help, ask.
- Detail your commits.
- You must specify all that has changed if you wish to have a higher chance of acceptance.
- Notify us of defects/caveats.
- If you know there are issues or problems, tell us and we'll help you work out what said problems are.
- Rejections may not be final.
- If you're asked to change something, the Pull Request may be left open. This depends on how big and complex the code is.
- Pull requests may be contested with the primary developers at any time, which may delay the process.
- If a PR is ultimately declined for a specific permanent reason, it is considered final and should be dropped.
- Use a program like SourceTree or GitKraken to upload files.
- Avoid using GitHub directly. With the aforementioned programs, you can commit all your files at once versus one at a time.
- You only need one of these.
- SourceTree: https://www.sourcetreeapp.com/
- GitKraken: https://www.gitkraken.com/
- Use proper indentation.
- Tabs, not spaces, should be used when making adjustments. Thus, you should use a programmer's notepad or SLADE to perform the edits.
- Notify us.
- While I do have the email notification system set up, sometimes I miss notifications while I'm away.
GZDoom latest release
See the Changelog inside the Docs tab for the most up-to-date changes.
|
OPCFW_CODE
|
Most users will encounter situations when they will require a remote desktop access client for Linux to control more than just a single application or program. That’s when a Linux Remote Desktop client like AnyDesk, NoMachine, X2Go, Zoho Assist, and some others we’ll discuss below becomes incredibly valuable. Implementing these tools allows users to access and control a remote computer from a plethora of devices. Numerous protocols exist, but many users will be familiar with SSH.
For example, tech support professionals using Linux operating systems might need to remotely direct someone through a software installation or reconfigure settings remotely on a Windows machine.
Please note: It’s essential to make sure all necessary packages are downloaded/updated before creating remote desktop access sessions. On Debian-based systems, running the sudo apt-get update command refreshes the package lists so the required packages can then be downloaded directly from the internet.
Even though Secure Shell doesn’t provide remote desktop connection capabilities, this Linux remote desktop client does allow users to install the software needed to enable secure access to remote operating systems.
To begin, use your Windows machine to select Windows PowerShell (from the Power Menu), and hit enter.
Accept the certificate, then enter the correct username and password. A connection is now established.
The methods described above work for Debian-based systems; note that Fedora is not Debian-based and uses its own package tooling. Such compatibility exists because the OS provides built-in support for remote tools.
Set up the remote access features on the physical machine running Ubuntu first, and installation of any additional tools won’t be necessary.
Remmina remote desktop client for Linux desktops ranks fairly well as far as performance, offering the flexibility to edit connection quality whenever the user wants. The remote client supports an extensive protocol list allowing users to enable remote connections with various systems.
While this client lacks server software of its own, it’s great for connecting with numerous remote servers. Unfortunately, it doesn’t offer some of the additional functionality that competing Linux Mint remote desktop clients provide.
rdesktop is one of the open-source RDP apps (much like TightVNC). It’s a UNIX client used for connecting with Windows Remote Desktop Services. rdesktop can communicate natively with Remote Desktop Protocol to display the user’s Windows desktop.
Not only was rdesktop the first remote access tool to work well as a Kali Linux remote desktop client, it was also the most frequently used client for many years. However, as of November 2019, the project has been in search of new maintainers.
This client offers a graphical login to remote machines via the Microsoft Remote Desktop Protocol. No matter which operating system you are using (including Windows, macOS, iOS, and Android), the Xrdp server tool can accept connections from various clients such as AnyDesk, TeamViewer, FreeRDP, NeutrinoRDP, rdesktop, Zoho Assist, and the Microsoft Remote Desktop Client.
Please note: This client’s session manager is often referred to as “xrdp sesman.” It allows users to manage user sessions via authentication options that provide convenience and control over who can access remote machines.
Using the server requires a simple setup, allows for port forwarding, and makes remote session access convenient and practical thanks to its compatibility with a wide variety of other clients.
NoMachine is a freely available remote desktop client for Linux that uses the NX protocol rather than VNC or XDMCP. It’s secure, fast, and very user-friendly, and makes its way onto this list thanks to its user volume.
This Linux remote desktop client’s popularity is usually attributed to its speed and ease of use, permitting users to access any remote device in a few clicks. What sets the NX protocol apart is its speed over local links, which provides a responsive experience that feels just as if the remote user were sitting at the remote server’s desktop directly.
Offering secure encryption and reliability, it makes a fantastic personal server. Users can access practically all files (video, audio, document, etc.) Users can even share pretty much anything via its remote desktop client.
Please note: LAN (Local Area Network) connections don’t require port forwarding, but it is necessary for WAN (Wide Area Network) connections.
Chrome Remote Desktop
Chrome Remote Desktop is a free tool developed by Google that’s cross-compatible with any operating system/platform because it functions via a Chrome browser. This makes CRD a great remote desktop connection Linux client. Chrome Remote Desktop offers unlimited remote support, online meeting capabilities, and easy file access.
This app is a free, open-source remote desktop tool that also works well as a remote desktop client for Kali. Being cross-compatible with any operating system and offering excellent performance, the tool ships two applications, the Server and the Client, enabling remote access to a remote computer through a GUI (Graphical User Interface).
The functionality of this app is distinctive: each server instance is a standalone virtual desktop, so every independent session is launched with its own configuration. Note that it does not operate like servers that attach directly to the currently running desktop, such as Vino or x11vnc.
As you can see, finding the right remote desktop client for Ubuntu Linux can open up a vast sea of options, and Tiger offers a wide variety of those.
|
OPCFW_CODE
|
This article is a beginner’s guide to an introduction to Java programming language. Here, we are going to study – “what is Java?”. Obviously, we know it is a programming language but how is it different from other languages?
We are going to learn how it works and its applications in the real world. So let’s get started.
What is Java?
Java is an object-oriented, cross-platform, multipurpose programming language. Too many big terms? Don’t worry, we are going to cover them all in the following sections of the article. It is a simple programming language, and writing, compiling, and debugging are easy in Java. It also helps to create modular and reusable code.
History of Java
Java was developed by James Gosling along with Mike Sheridan, and Patrick Naughton from Sun Microsystems in 1991. It has now been acquired by the Oracle Corporation. The first publicly available version of Java was released in 1995 with the purpose of becoming a machine-independent web technology. Since then, it has acquired a prominent place in the world of computer programming.
Features of Java
There are many key features of Java that make it unique in the world of programming languages and different from them in many aspects. They are:
Java is a platform-independent language, which means that it can be run on any operating system, including Windows, Linux, macOS, etc, which is why it is known to follow the principle “Write Once, Run Anywhere”. Since it can be run on any platform, it is known as cross-platform.
Once the source code has been compiled to bytecode, those files can be used on any Java-supported platform and executed without modification, unlike many other languages, which require recompiling to machine code for each platform they run on.
It is an object-oriented programming language that makes it modular and organized. Object-oriented programming is a way of organizing programs as a collection of objects, each of which is an instance of a class. We will learn about object-oriented concepts in greater detail in later articles.
It focuses on objects rather than procedures. Except for the primitive data types, all elements in Java are objects and can be easily extended for better handling. Java is not a purely object-oriented programming language since it also allows primitive data types like integer, float, double, etc, which we will learn in later articles.
It is a very simple and efficient language, which makes it easy to learn and use. It does not have complex features like operator overloading, multiple inheritance, or pointers, which makes it much simpler than many other languages.
It also enables the development of virus-free and tamper-free systems. Since Java has no pointers and does not allow out-of-bound array access, several security flaws like stack corruption and buffer overflows are much harder to exploit. The Bytecode Verifier also inspects fragments of code for illegal instructions that could bypass access rights, which adds to the security.
Java uses a Java Virtual Machine (JVM) as an abstraction and does not access the operating system directly. This makes it highly portable. A Java program can run unmodified on any platform.
Java is a very reliable language: it emphasizes early checking and elimination of errors, both at compile time and at runtime, which makes programs more robust. Features like the absence of raw pointers, automatic memory management, and exception handling all contribute to Java's robustness.
The JVM uses a Just-In-Time (JIT) compiler, which compiles frequently executed bytecode to native machine code while the program runs, making Java fast while keeping it portable.
It has been designed to adapt to evolving environments which make it dynamic in nature. It is able to carry an extensive amount of runtime information that can be used to verify and resolve problems at runtime.
Java can produce many applications from simple command-line programs to complex applets and more.
It supports multi-threading which means that it is possible to write programs that can perform many tasks simultaneously. This is an important feature that enables maximum utilization of the CPU and enables developers to construct interactive applications that run smoothly.
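As a minimal sketch of this idea (the class and thread names here are illustrative, not from any particular codebase), two threads can run the same task concurrently:

```java
public class ThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        // Each Thread executes its Runnable on a separate thread of execution.
        Runnable task = () ->
            System.out.println(Thread.currentThread().getName() + " running");
        Thread t1 = new Thread(task, "worker-1");
        Thread t2 = new Thread(task, "worker-2");
        t1.start(); // start() schedules the thread; calling run() would execute inline
        t2.start();
        t1.join();  // join() blocks until the thread has finished
        t2.join();
        System.out.println("both workers finished");
    }
}
```

Note that the two "running" lines may appear in either order, which is exactly the point: the tasks proceed independently until the `join()` calls synchronize them.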
Automatic Memory Management
Java automatically manages the allocation and deallocation of memory for objects. In simple words, when we create new objects, we do not need to explicitly allocate or release memory for them. The garbage collector automatically reclaims the memory occupied by objects that are no longer reachable while the program runs.
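To make the contrast with manual memory management concrete, here is a tiny sketch (the class name is made up for this example): objects are created with `new`, and once the last reference is dropped they simply become eligible for collection; there is no `free()` or `delete` as in C or C++.

```java
public class GcDemo {
    public static void main(String[] args) {
        // Memory for the array is allocated automatically by `new`.
        int[] data = new int[1_000];
        data[0] = 42;
        System.out.println("first element: " + data[0]);
        // Dropping the last reference makes the array eligible for
        // garbage collection; the JVM reclaims it at its discretion.
        data = null;
        // System.gc() is only a hint to run the collector, never a guarantee.
        System.gc();
        System.out.println("done");
    }
}
```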
Interpreted and compiled language
It is both an interpreted and compiled language since the source code is compiled to the bytecode and the bytecode is interpreted by the JVM. We will learn about it further in the below section.
Distributed
Java is designed so that programs can be distributed across multiple systems connected to each other over the internet.
Open-source and free
It is open-source which means that its source code is freely available on the internet for public access and anyone can see and modify it.
It has huge community support with millions of developers and supporters across the globe.
How does it work?
Before we start with the language itself, let’s get familiar with how a Java code gets compiled and executed in a system. There are a few terminologies which are important to understand the complete working of a Java program in order to get an in-depth knowledge of the language.
The Java Virtual Machine (JVM) is the reason why Java is called platform independent. It is a software implementation of a computer that executes programs like a real machine. After compilation, the bytecode is executed by the JVM, of which a different implementation exists for each operating system. The JVM interprets the bytecode and produces output that is the same across all operating systems. For example, bytecode created on a Linux system can be executed on a Windows system and vice versa, and the output in both cases will be the same. The JVM has the tasks of loading, verifying, and executing code, and providing a runtime environment. This provides the cross-platform feature of the Java language; note that while the bytecode is platform-independent, the JVM itself is platform-dependent.
The Java Runtime Environment (JRE) helps in executing a compiled file. The JRE is a collection of components that together allow the execution of Java applications; it includes the JVM, browser plug-ins, and applet support. With only the JRE on our system, we can run a program but not compile it. For compilation, we need the JDK. If someone only needs to run a program without compiling it, a JRE is sufficient. The JRE is platform-dependent and provides the runtime environment.
The Java Development Kit (JDK) provides an environment to develop, build, and execute a Java program. While the JDK itself is installed per platform, the bytecode it produces is platform-independent, which allows a compiled file to be executed on all operating systems. The JDK contains various tools like JavaDoc, the debugger, the interpreter, etc., which we need for developing and compiling a program. Compilation is the process of converting human-readable source code (a .java file) into machine-understandable code. The compilation of a program is done by the javac compiler, the primary compiler included in the JDK. It takes a program as input and produces bytecode as output; the compiler stores the bytecode in a .class file.
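The whole pipeline described above can be seen with the classic first program (the file and class names are ours): `javac` turns the `.java` source into a `.class` bytecode file, and `java` hands that bytecode to the JVM, which runs it identically on any platform.

```java
// HelloWorld.java
// Compile:  javac HelloWorld.java   -> produces HelloWorld.class (bytecode)
// Run:      java HelloWorld         -> the JVM loads, verifies, and executes it
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, Java!");
    }
}
```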
Applications of Java
Java is also popularly known as the “King of programming languages”. It is being used in multiple fields of the world including our day-to-day activities of playing games or using our android smartphones. The language has become the backbone of millions of applications across multiple platforms including Windows, Mac, Unix, Android-based mobiles, embedded systems and enterprise solutions. The applications of Java can be seen in various domains. A few of them are mentioned below:
- Mobile Applications (especially android apps)
- Web Applications
- Web Servers and Application Servers
- Game Development
- Embedded systems
- Desktop GUI Applications
- Cloud-based Applications
- Enterprise Applications
- Scientific Applications
- Distributed Applications
- Big-Data Technologies
- Artificial Intelligence
This completes our tutorial on the introduction to Java. To summarize, we studied that Java is one of the most popular languages in the market right now. It is a platform-independent, simple, and multi-functional language and is open-source and distributed.
We learned about its history and how it works, right from when it is written to how the output is generated, and the various software included in the process. In short, the JDK contains the javac compiler which converts the source code to bytecode. The bytecode is then interpreted and executed by the JVM to generate the output.
We also learned about the various applications of Java in multiple domains, proving it to be one of the fastest-emerging and most extensively applied programming languages. The current number of Java developers has been estimated at over 9 million worldwide, and the rising demand for programmers with an in-depth understanding of Java is setting forth even more job opportunities in the fast-paced technology industry.
I hope you all have become somewhat familiar with the Java language. In the next article, we will be installing Java and writing our first java program.
|
OPCFW_CODE
|
[seqfan] A correspondent is asking for help with an idea
njasloane at gmail.com
Wed Jan 15 17:07:48 CET 2020
Anyone interested in helping this person? If so, (1) post a note here saying "I'll try," and (2) contact the sender directly. No need to include me in the loop.
This is posted here with his permission.
request for someone - not you! - to look at a possibly new, probably
useful, thingy related to permutations - from
jan <rtm443x at googlemail.com>
Dear Mr. Sloane,
I'll keep this brief. If you're too busy, please tell me to get lost.
I'm a programmer not a mathematician. I was looking at cuckoo filters.
These rely on what seems to be called permutation packing to save space.
If you take two-element combinations of 0 and 1, allocate a number to each combination, and store that number instead of the combination:
(0, 0) -> 0
(0, 1) -> 1
(1, 0) -> 2
(1, 1) -> 3
This saves you nothing, however if you don't care about the order then:
(0, 0) -> 0
(0, 1) -> 1
(1, 0) -> 1 because (0,1) = (1,0)
(1, 1) -> 2
That way you only have three values, which saves you a little. With larger combinations, this starts to save you something useful.
Trivial stuff, I know.
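The correspondent's actual algorithm isn't shown, but the toy example above can be coded directly (class and method names are ours, not from the email): once order is discarded, a tuple of bits is fully described by its count of 1s, so n unordered bits need only n + 1 distinct codes instead of 2^n.

```java
import java.util.Arrays;

public class PackDemo {
    // The order-independent code for a tuple of bits is its population
    // count: (0,0) -> 0, (0,1) = (1,0) -> 1, (1,1) -> 2.
    static int pack(int... bits) {
        return Arrays.stream(bits).sum();
    }

    public static void main(String[] args) {
        System.out.println(pack(0, 0)); // 0
        System.out.println(pack(0, 1)); // 1
        System.out.println(pack(1, 0)); // 1, same code as (0, 1)
        System.out.println(pack(1, 1)); // 2
    }
}
```

This is only the degenerate two-symbol case; ranking unordered tuples over larger alphabets is usually done with the combinatorial number system, which may or may not be what the email's LUT scheme amounts to.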
Cuckoo filters do this kind of thing with a large lookup table (LUT),
which is cache-unfriendly, so I thought I could perhaps eliminate the
LUT. I think I've managed to do so (replacing their 65,536-entry array,
which has a total size of 128 KBytes, with a few LUTs of about 120 bytes
and a little extra code).
I can't prove this, but I've done exhaustive testing on it and it seems to work.
I don't know if I've found something novel (unlikely), but it
definitely looks useful. I've also been unable to find anything on
the web that looks like this (though to be fair I've no idea what to
look for, but I have looked at Lehmer codes, which these aren't).
I've emailed the authors of the cuckoo filter but had no reply. I'm
sure they get enough nutters, I don't blame them.
This algo is exceedingly simple. I would love to know if it's new.
Can you recommend someone with a background in combinatorics who might
take a look? I can explain it in about 5 minutes in a document. I can
probably add a spreadsheet which displays each step to help, though
it's so simple that's probably unnecessary - but happy to anyway.
It is about the shape produced in an n-dimensional cube. The shape is
highly regular and can be summarised in a tiny LUT for each dimension
of the cube - order your items to be packed (these correspond to
co-ordinates within the cube), go through with a subtraction at each
sub-cube, add up at the end and you're done.
When illustrated it's totally obvious.
So, can you suggest anyone? I'd be really grateful even to find out
it's well known. At least I can forget it then.
More information about the SeqFan mailing list
|
OPCFW_CODE
|
/*
Every heap-allocated Object (currently ConsCell, Symbol, RlispString
and RlispFunc) has a GcMark. lisp::Lisp owns a GcMark which signals
the "correct" marking. Whenever the garbage collector runs, it
iterates through the stack and the symbols table and marks each object
accessible with the "correct" marking, iterates through the heap and
deallocs any object with the wrong marking, and then increments
lisp::Lisp.gc_marking.
*/
use types::*;
use lisp;
use std::mem;
pub type GcMark = usize;
pub trait GarbageCollector
: lisp::stack_storage::Stack + lisp::allocate::AllocObject
// This trait is implemented by lisp::Lisp
// its methods amount to a simple mark+sweep garbage collector
{
fn should_gc_run(&self) -> bool;
fn update_gc_threshold(&mut self);
fn current_marking(&self) -> GcMark;
fn inc_gc_mark(&mut self);
fn mark_scope(&mut self);
fn mark_stack(&mut self) {
for obj in self.stack_vec() {
debug!("{} is accessible; marking it", obj);
self.mark(*obj);
}
}
fn mark(&self, obj: Object) {
obj.gc_mark(self.current_marking());
}
fn sweep(&mut self) {
let mut old_objs = mem::replace(self.objects_mut(), Vec::new());
for obj in old_objs.drain(..) {
if obj.should_dealloc(self.current_marking()) {
debug!("{} is not marked, deallocating it", obj);
unsafe { self.dealloc(obj) }
} else {
self.objects_mut().push(obj);
}
}
}
fn gc_pass(&mut self) {
self.mark_stack();
self.mark_scope();
self.sweep();
self.inc_gc_mark();
self.update_gc_threshold();
}
fn gc_maybe_pass(&mut self) {
if self.should_gc_run() {
debug!("running the garbage collector");
self.gc_pass();
}
}
}
impl GarbageCollector for lisp::Lisp {
fn mark_scope(&mut self) {
for namespace in &self.symbols {
unsafe {
(**namespace).gc_mark(self.current_marking());
}
}
}
fn should_gc_run(&self) -> bool {
self.alloced_objects.len() > self.gc_threshold
}
fn update_gc_threshold(&mut self) {
self.gc_threshold = 2 * self.alloced_objects.len();
}
fn current_marking(&self) -> GcMark {
self.current_gc_mark
}
fn inc_gc_mark(&mut self) {
self.current_gc_mark += 1
}
}
pub trait GarbageCollected
// This trait is implemented by all Object subtypes which are heap-allocated
// Note that it does not actually include dealloc();
// that function is owned by Object
{
fn my_marking(&self) -> &GcMark;
fn my_marking_mut(&mut self) -> &mut GcMark;
fn gc_mark_children(&mut self, mark: GcMark);
fn gc_mark(&mut self, mark: GcMark) {
if *(self.my_marking()) != mark {
*(self.my_marking_mut()) = mark;
self.gc_mark_children(mark);
}
}
fn should_dealloc(&self, current_marking: GcMark) -> bool {
*(self.my_marking()) != current_marking
}
}
|
STACK_EDU
|
how to prevent error: [ng:btstrpd] App Already Bootstrapped with this Element 'document'
using angular-meteor v 0.9
trying to get a pre-packaged AngularMeteor-SmartAdmin example app to load properly
Getting error:
Error: [ng:btstrpd] App Already Bootstrapped with this Element 'document'
Is there a way to figure out why and where this error occurs?
Here is my meteor listing:
angular:angular-animate 1.4.0 AngularJS (official) release. For full...
angular:angular-cookies 1.4.0 AngularJS (official) release. For full...
angular:angular-resource 1.4.0 AngularJS (official) release. For full...
angular:angular-route 1.4.0 AngularJS (official) release. For full...
angular:angular-sanitize 1.4.0 AngularJS (official) release. For full...
angularui:angular-ui-router 0.2.15 angular-ui-router (official): Flexibl...
angularui:ui-utils 0.2.4 Angular-ui-utils package for meteor.
autopublish 1.0.3 Publish the entire database to all cli...
cfs:http-methods 0.0.29 Adds HTTP.methods RESTful
gsklee:ngstorage 0.3.0 ngStorage package for Meteor
http 1.1.0 Make HTTP calls to remote servers
insecure 1.0.3 Allow all database writes by default
less 1.0.14 The dynamic stylesheet language
meteor-platform 1.2.2 Include a standard set of Meteor packa...
planettraining:angular-translate 2.7.0 Angular Translate
planettraining:angular-translate-loader-static-files 2.7.0 Angular Translate
planettraining:angular-translate-loader-url 2.7.0 Angular Translate
planettraining:angular-translate-storage-cookie 2.7.0 Angular Translate
planettraining:angular-translate-storage-local 2.7.0 Angular Translate
selchenkov:angular-bootstrap-ui 0.0.1 Meteor
urigo:angular 0.9.0
Sounds like you've got both an ng-app directive in the HTML and app.bootstrap(..). Could that be it?
will look into this now
did a global search for app.bootstrap - nothing. will keep looking
did you find something for your question 4 hours ago?
It's probably angular.bootstrap, search for .bootstrap.
@user1587329 - if you referring to "...is there such thing as SmartAdmin with angular-meteor..." - possibly - https://wrapbootstrap.com/theme/angle-bootstrap-admin-app-angularjs-WB04HF123. this is the one I have just (3 hours ago) purchased, and now am trying to debug, as it was not working out of the box
found a few of these (it is a large example app): here is one: function onReady() {
angular.bootstrap(document, ['angle']);
}; here is another one: var bootstrapFct = angular.bootstrap;
angular.bootstrap = function(element, modules, config) {
// we use slice to make a clean copy
angular.forEach(modules.slice(), function(module) {
addToLoadList(module);
}); Will try and see how to reduce this without further breaking this example app...
You can try to console.log(module) in your angular.forEach and see which one is the last one that logs before the error shows. That will probably be the one that's bootstrapped twice. If it's not, search for all of your ng-app declarations and compare. One will be shown twice.
I think I found this double bootstrap - it is in one of the example app's modules called ocLazyLoad: function init(element) {
if(modulesToLoad.length === 0) {
var elements = [element],
names = ['ng:app', 'ng-app', 'x-ng-app', 'data-ng-app'],
NG_APP_CLASS_REGEXP = /\sng[:-]app(:\s*([\w\d_]+);?)?\s/,
append = function append(elm) {
return (elm && elements.push(elm));
}; That being said, I have no idea how not to use this file, or how to change it to avoid double bootstrapping
Would be hard to help you from over here friend :) Usually it's to remove one of the bootstrapping, maybe in the other declaration you found.
Well, I made a repo : https://github.com/eugene-goldberg/backend_meteor_angular I would be happy (if it is appropriate) to pay anyone, who is willing to make it work
https://medium.com/@zfxuan/the-wonderful-duo-using-meteor-and-angularjs-together-4d603a4651bf, section "Load Order" might be of interest to you. The author has tried to combine angularJS and meteorJS and describes some caveats.
This may help: https://github.com/Urigo/angular-meteor/issues/90
Issue is with file path of index.ng.html in index.html's ng-include
It should be
<div ng-include="'client/index.ng.html'">
</div>
Paths are always absolute, as mentioned in the Angular Meteor tutorial:
It's very important to note - the paths are always absolute, not relative! so if 'index.ng.html' was inside a client folder, you would have to place the whole path from the route app, doesn't matter where you're calling the file from.
like this (e.g. if index.ng.html was in the 'client' folder):
<div ng-include="'client/index.ng.html'"></div>
It's just because you are calling your body or div inappropriately.
Try:
<div ng-app="app1" ng-controller="controller1">
<!-- code or UI here -->
</div>
This lets the second render inside of it:
<div id="app" ng-app="app2" ng-controller="controller2">
<!-- code or UI here -->
</div>
So now you could call:
angular.element(document).ready(function () {
var appID = document.getElementById('app');
angular.bootstrap(appID, ['app2']);
});
in your angular controller to avoid conflicts between the two controllers. This way both controllers remain independent even though they are used on the same page.
|
STACK_EXCHANGE
|
Novel: The Legend of Futian, by Jing Wu Hen
Chapter 2567 – No, You Can’t
On top of the firmament, Wang Xiao’s gaze was stuck there as though frozen. He looked at the man in front of him as if he still couldn’t accept the truth that he was beaten. He had lost to Ye Futian in the Ninth Realm.
They had believed in Wang Xiao. But if that was true, what about Ye Futian?
Boom, boom, boom, boom…
And Wang Xiao was beaten!
Or perhaps, Ye Futian didn’t see him as a true opponent right from the start.
However, at this time, another well-known master talent had appeared. His name had shaken most of the Divine Prefecture before Wang Xiao. He was now the main player in all the lands of the Divine Prefecture, whose name was known to everyone. He had led his team to invade and destroy the Sacred Land of Taichu and slain the Taichu Saint Emperor.
Wang Xiao was a cultivator of the Tribulation Plane. Every strike of his carried the terrifying strength of the divine tribulation of the Great Path. One shock wave was frightening enough, much less 108 thousand waves. In this realm, he was absolutely someone who stood at the top of his kind. In the whole Divine Prefecture, it was difficult to find anyone to rival him in this world.
Below, at the end, a dull but terrifying sound wave swept out, as if there were ripples that had not totally quieted down, but the battle was effectively over.
Ye Futian raised his left hand at almost the exact same second, as if the Buddha had raised his own hand and slammed it forward. Suddenly, the Buddhas in the heavens raised their hands as well, and massive Buddhist handprints slammed out, colliding with the many divine hammers blasting down from the sky. A fervently blazing Light of Buddha continued to spread upward, colliding with the descending shock waves.
How would this strike end?
Boom! Many people felt as if their eardrums had ruptured. An invisible wave of devastation covered the entire Tianyan City. An unknown number of innocent cultivators would have been struck yet again by this wave of immolation. Those closest to the City Lord’s Office were all cultivators of the Renhuang Realm, yet the shock still made them bleed from their orifices, even though the battlefield was high in the heavens, a long distance away from them.
This sound resounded throughout Tianyan City, and those who heard it trembled in their souls.
After a while, when everything finally settled down, those layers of glowing light screens were still there, as remnants of the will of the spear left in the sky, turning into a great divine light, as if it would not easily dissipate.
In this world, who was truly the Unraveled, besides the Great Emperors?
Wang Xiao, the Young City Lord of Tianyan City, as the champion of this Armorer Competition, had enjoyed all the glory of this day. He had the full backing of the City Lord of Tianyan City himself, who supported him and named him the future lord of Tianyan City, so that he could lead the cultivators of the Divine Prefecture to do away with Ye Futian and vanquish Ziwei.
Could Wang Xiao take this strike from him?
Dull sounds came out, and high above the heavens, a halo of destruction swept out into the surroundings; the atmosphere was extremely oppressive.
Surrounded by the limitless Buddhist runes, Buddhas now appeared in the heavens. At this moment, these Buddhas were simultaneously chanting the Sounds of Buddha and uttering the Six Syllables of Truth. Suddenly, a mighty and sacred energy of Buddhism permeated the space, propelling the light screen to expand outward.
“You, the Unraveled, can you take this strike from me!”
The eyes of the many cultivators were freezing cold, and the entire Tianyan City was in complete silence at this moment, even inside the City Lord’s Office.
Certainly, Ye Futian had been purely on the defensive until a moment ago, and Wang Xiao was the one making all the attacks. Nevertheless, despite the astonishing power of the Sky Tremoring Hammer, he was unable to really harm Ye Futian. Such a powerful attacking method only managed to slam him down a few times, demonstrating the terrifying defensive power that Ye Futian had.
The City Lord of Tianyan had a steely look on his face, and the other cultivators from the City Lord's Office wore grim looks on their faces as well.
"One strike!"
However, it was true that he had not launched any attacks yet!
They had believed Wang Xiao was. But if that was true, what about Ye Futian?
The City Lord of Tianyan City was an incredible and astute man himself, and he could not possibly fail to appreciate this; otherwise, he would not have held Wang Xiao in such high esteem and conferred the title of the Unraveled upon him. He even had thoughts of pairing Wang Xiao with the precious child of Donghuang the Great.
Above the firmament, Ye Futian stood there like a God, and the long spear in his hand was no longer pointing at Wang Xiao's phantom but at his neck. If it had advanced another inch, it might have taken Wang Xiao's life.
The attribute filters allow you to define which item features are available in the search results, which features the customers can use to filter the results, and so on.
To configure the attributes, go to AI Search. Create an index or select an existing one from the list. Go to the Settings tab. In the appropriate attribute section, click Define.
This section defines which attributes will be included in the search response. For each item returned, only the attributes defined here will be available in the response, so these are the attributes you can show to the customers.
Searchable attributes are the main component of the index. These attributes are used by the search engine to calculate the item’s relevance to the query phrase provided by the customer.
Attributes can be assigned to three importance levels: high, medium and low. When a word from the phrase matches an attribute, the item’s score is assigned depending on this importance.
Filterable attributes are those that are enabled for use in item filters for including or excluding results.
Including an attribute value narrows down the search results to items with that value. Excluding an attribute value removes items with that value, so the results only contain items with different values of that attribute.
- Products in your catalog have the sale attribute, which can take the true (on sale) or false (not on sale) values. Include sale as a filterable attribute. Now, you can filter the results by the items' sale status.
- Products in your catalog have the last_item attribute, which can take the yes (one item is available) or no (more than one item is in stock) values. Exclude last_item as a filterable attribute. Now, you can filter the items that have one item left out of the search results.
- You want to filter out the products that cost more than $100. Include price as a range-type filterable attribute and, in the settings, filter items where price is equal to or greater than $100.
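The include, exclude, and range behaviour in the examples above can be sketched in a few lines of Python. This is a hypothetical data model and helper written for illustration, not the actual search engine:

```python
# Minimal sketch of include/exclude/range filtering over a catalog.
# The helper and the catalog below are made up for illustration.

def apply_filters(items, include=None, exclude=None, ranges=None):
    """Keep items matching every include value, drop items with excluded
    values, and drop items outside the numeric ranges."""
    result = []
    for item in items:
        if include and any(item.get(k) != v for k, v in include.items()):
            continue
        if exclude and any(item.get(k) == v for k, v in exclude.items()):
            continue
        if ranges and any(not (lo <= item.get(k, 0) <= hi)
                          for k, (lo, hi) in ranges.items()):
            continue
        result.append(item)
    return result

catalog = [
    {"name": "shirt", "sale": "true",  "last_item": "no",  "price": 25},
    {"name": "coat",  "sale": "false", "last_item": "yes", "price": 120},
    {"name": "shoes", "sale": "true",  "last_item": "yes", "price": 80},
]

# Include sale = true, exclude last_item = yes, keep price up to 100.
hits = apply_filters(catalog,
                     include={"sale": "true"},
                     exclude={"last_item": "yes"},
                     ranges={"price": (0, 100)})
print([i["name"] for i in hits])  # only the shirt survives all three filters
```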
Choose attributes that can be used as facets. Facets allow customers to narrow down search results by attributes and their values. For example, you can make the color attribute facetable to let the customers select which colors they're interested in.
The search result also includes information about how many items have a particular value of a facetable attribute.
For a live example, see our use cases. “Challenges”, “Channel”, “Difficulty”, and so on are facetable attributes.
You want to give customers the possibility to filter results by some attributes, for example brand. Add these attributes to the Facetable attributes section. When this is done, the customers will be able to narrow down the results to the desired brand and color of the item, and they will additionally be shown the number of items in each facet in the search response. This information can be displayed along with the search results, and filtering can be applied to the results based on it.
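The facet counts described here are essentially per-value tallies over the matching items. A minimal Python sketch with made-up items (not the real API response format):

```python
# Facet counts are per-value tallies of each facetable attribute.
# The items below are invented for illustration.
from collections import Counter

items = [
    {"brand": "acme", "color": "red"},
    {"brand": "acme", "color": "blue"},
    {"brand": "zeta", "color": "red"},
]

facets = {
    "brand": Counter(i["brand"] for i in items),
    "color": Counter(i["color"] for i in items),
}
print(facets["brand"])  # Counter({'acme': 2, 'zeta': 1})
```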
- The returned facets in the search response contain the top 50 distinct attribute values.
- The sum of distinct attribute values, for filterable and facetable attributes, can’t exceed 10000.
That means that if the X attribute has 5000 different values and the Y attribute has 5000 different values, they cannot be used in filterable attributes at the same time. Remember that 10000 distinct values are shared for filterable AND facetable attributes. Therefore, we do not recommend filtering and/or faceting with attributes that have many distinct values in the item feed.
You can enable sorting the results by the chosen attributes. You can add both range and text attributes. Sorting by range attributes allows you to arrange the returned items from the highest to the lowest value of the specified attribute. Sorting by text attributes allows you to order the results in alphabetical order, ascending or descending.
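The two sort modes can be illustrated with plain Python sorting over hypothetical items (the real engine of course does this server-side):

```python
# Sorting by a range (numeric) attribute vs. a text attribute,
# mirroring the behaviour described above. Items are made up.
items = [
    {"name": "coat",  "price": 120},
    {"name": "shirt", "price": 25},
    {"name": "shoes", "price": 80},
]

# Range attribute: highest to lowest price.
by_price_desc = sorted(items, key=lambda i: i["price"], reverse=True)
# Text attribute: alphabetical, ascending.
by_name_asc = sorted(items, key=lambda i: i["name"])

print([i["name"] for i in by_price_desc])  # ['coat', 'shoes', 'shirt']
print([i["name"] for i in by_name_asc])    # ['coat', 'shirt', 'shoes']
```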
Disabling prefix search
You can disable the prefix search for specific attributes. This will prevent the search engine from trying to autocomplete the phrase for these attributes. Keep in mind that if the search engine does not find any items for the phrase, it will still check for typos.
C# & Unity : Pass reference by value?
I'm new to C# and Unity, and here I am tweaking and creating my first minigame.
Here is the problem:
I've got a little cube, that moves. I've implemented a method that checks the next position before making a move.
The method receives as parameters the current cube position, and the direction:
public bool okToMove(Transform playerCurrentPosition, int directionIndex)
{
    Transform playerNextPosition = playerCurrentPosition;
    playerNextPosition.Translate(toDirection(directionIndex));
    if (playerNextPosition.position.x > 1 ||
        playerNextPosition.position.x < -1 ||
        playerNextPosition.position.y > 1 ||
        playerNextPosition.position.y < -1)
        return false;
    else
        return true;
}
Then, I call the method
public void movePlayer(int directionIndex)
{
    if ( okToMove(gameObject.transform, directionIndex) )
        transform.Translate(toDirection(directionIndex));
}
The problem is that the cube makes 2 moves at once. This is because of
transform.Translate(toDirection(directionIndex));
and
playerNextPosition.Translate(toDirection(directionIndex));
that is called from the okToMove method. Unity (or C#) sees playerNextPosition as the real cube, and not as some kind of temporary copy that only exists inside the method.
So why is my gameObject.transform being passed as a reference and not by value? How can I make it work?
Thanks in advance and sorry for my noobiness.
You are passing a reference to the Transform and then moving it with Translate in "okToMove". The best way is to make a copy of the Vector3; just change your "okToMove" like this:
public bool okToMove(Transform playerCurrentPosition, int directionIndex){
    // Copy the position: Vector3 is a struct, so this is a real copy
    Vector3 playerNextPosition = playerCurrentPosition.position;
    playerNextPosition += toDirection(directionIndex);
    if (playerNextPosition.x > 1 ||
        playerNextPosition.x < -1 ||
        playerNextPosition.y > 1 ||
        playerNextPosition.y < -1)
        return false;
    else
        return true;
}
Transform is a component attached to each GameObject, and it holds values for position, rotation, and scale, so your "playerCurrentPosition" is not a copy of the position but rather a reference to the Transform itself.
This is definitely the better approach. In situations like these I create a member variable so that the same object is reused on each call, rather than creating a new object, in this case the Vector3. Odds are you won't notice a performance impact, but I figure when you're making a game you really want to make every line of code count.
If you copy and destroy a "ghost" object every frame, trust me, you will feel the impact greatly on mobiles, to the point that the fps drops from 30 to unplayable just because of that one method (I speak from experience). If you don't know what object pooling is, I recommend you read about it; the whole point of it is to avoid instantiating and destroying objects.
Create a new GameObject that is a copy of yours original, and use its transform to make your calculations. (This answer is originally from the Unity forums). The official documentation tells me you can use Object.Instantiate to create a clone of a GameObject.
In C#, objects always have their reference passed by value, so simply reassigning won't do it. See this related question.
That is not a really optimized way to do this: you need to instantiate a GameObject and later destroy it (which in Unity is really expensive). We only care about its position, so it's enough to just copy the position into a new Vector3.
@NevenIgnjic in some cases (not this one in particular) you may need the entire transform. Also, I did not say it was the most performant solution, and this solution's performance is already discussed in the provided link: "it has no collider, mesh renderer or anything like that, so it's actually a pretty lightweight and invisible object.".
That makes it much easier since it's lightweight, but it all depends on how often this method will run. Copying its position to a Vector3 is the cheapest solution, but in some cases you really do need the entire transform (to avoid redoing all the physics and math that Unity already did for us).
Objects in C# are passed by reference. If you want to copy an object, implement the ICloneable interface which has the method Clone(). You will need to copy the object yourself and return it from this method.
The reference is passed by value, not by reference. More info here http://stackoverflow.com/questions/8708632/passing-objects-by-reference-or-value-in-c-sharp
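The distinction ("the reference is passed by value") is not C#-specific. The same behaviour can be sketched in Python, where mutating an object through a parameter is visible to the caller, but rebinding the parameter name is not:

```python
# Mutating through a reference is visible to the caller;
# reassigning the local parameter is not. This mirrors C#'s
# default "reference passed by value" semantics for objects.

def mutate(lst):
    lst.append(99)       # mutation through the reference: caller sees this

def reassign(lst):
    lst = [1, 2, 3]      # rebinds only the local name: caller is unaffected

data = []
mutate(data)
reassign(data)
print(data)  # [99]
```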
I have two issues regarding this
- The order of the Groups on the Dashboard is not the same as on the Dashboard Layout tab (should be Graph then Gauge)
- I have one of hotNipi's Windrose gauges and a standard graph on two separate Groups. When I open the Group dropdown the Windrose gauge Group decreases in size
- Clear browser cache, maybe restart ...
- I redefine the question - Can groups in one row have the same height when the page layout is "Grid"? Yes. But you must accept that there will be empty space at the bottom of the groups that don't have the same amount of content. You can of course then override rules for the content in groups, but that's the next challenge.
CSS (if you have multiple pages and they have different layouts, you'll need to target it page-scoped)
Also you can adjust the left-side menu behaviour
These are also documented (with image examples) here: Config: UI Base ui-base | Node-RED Dashboard 2.0
I am toying with changing the default value here to "Appear over content" as collapsing does cause resizing headaches elsewhere too.
There are definitely some issues here I'm still trying to get to the bottom of but can't reproduce consistently. A more trustworthy 'fix' seems to be doing a full re-deploy, rather than a partial one, in the Node-RED "Deploy" options.
Tried that, also restart NR, still the same
Did you do a full page refresh after restarting node-red? Ctrl-F5 probably.
I did indeed.
Edit: I added another node (to a different page) and it is now correct
Regarding the changing size: my problem with the Compass changing size when popping up the Group menu disappeared when I gave it the same number of rows as the adjacent Graph (the graph was 5 x 5 and the Compass was 2 x 2; the Compass is now 2 x 5). It seems that the contents of a group affect how the rows are sized, as the Compass fits the same with either row count. (Probably the svg has an effect.)
Still doesn't explain WHY it changes size but the problem has been resolved - with the Sidebar set to 'default'
No, it's my design decision. And as the code is free to use and change, feel free to adjust it to your needs.
The layout of db2 differs a lot compared to db1. Much more flexibility, a wider playground and so on, but also harder to provide a solid solution that fits everyone's wishes. Also, finding out what it takes to get logical arrangements with different types of things will take experimenting and knowledge to gain and share. It's a new thing. Let's learn and make it better for all of us.
Sorry hotNipi, I wasn't complaining. The only reason I mentioned it is that the svg part of the Compass seems to cause the sizing not to depend on the number of rows occupied, unlike the Graph, which changes size dependent on the number of rows.
It will be interesting to see if other DB2 ui_nodes using svg have the same effect.
The thing to fight is that round things must be round, even if the surrounding area wants to squish them one way or another. It's not easy to be round in a world of boxes. And still be readable.
The Compass doesn't step out of the boundaries determined by the given layout rules. But the height of the rows in a card depends on the content, and that is not limited by the layout rules. Widgets can push it. That's what happens.
This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.
#!/usr/bin/env python3
# Author: Huon Imberger
# Date: 2016-12-08
# Description:
# This script explodes an exported PDF (BulkPrint) from Springboard (http://www.springboard.com.au/)
# into a CSV listing candidate details and individual resume files.
#
# It creates a CSV file containing candidate's title, first name, last name, and email address.
# It also has an ID field, which associates each candidate with their resume. Note this is not the same as
# Springboard's internal candidate ID, which isn't included in the exported data.
#
# Resumes are named [candidate_id].pdf where [candidate_id] corresponds to the 'id' field in the CSV.
#
# Notes:
# - Candidates that are missing their resume (don't have one, or there was an error in the BulkPrint) will
# simply not have an associated resume in OUTPUT_PDF_DIR
# - There seems to be a bug in the BulkPrint causing incorrect ordering of pages when there are two
# candidates with the same name next to each other. The script should handle this fine, since the resume
# always comes after the "Candidate Details" page.
# - All paths are relative to current working directory
#
# PyPDF2: See here for licencing: https://github.com/mstamy2/PyPDF2
#
from PyPDF2 import PdfFileWriter, PdfFileReader
import re, csv, os, sys
INPUT_PDFS = ['6mo_aus_1.pdf',
'6mo_aus_2.pdf',
'6mo_aus_3.pdf',
'6mo_aus_4.pdf',
'6mo_aus_5.pdf',]
OUTPUT_DIR = 'output'
# Any candidates with a title not in this list will be incorrectly parsed
# and end up with a title as their first name
TITLES = ['mr', 'mrs', 'ms', 'miss', 'dr', 'professor']
# Check if all input pdfs exist
for input_pdf in INPUT_PDFS:
    if not os.path.isfile(input_pdf):
        print("At least one input file doesn't exist: {0}".format(input_pdf))
        sys.exit()

# Create output folder structure
try:
    resume_output_dir = "{0}/resumes".format(OUTPUT_DIR)
    os.makedirs(resume_output_dir, exist_ok=True)
except OSError as e:
    print("Could not create output directories: {0}".format(e))
    sys.exit()

# Prepare CSV writer
csvfile = open("{0}/candidates.csv".format(OUTPUT_DIR), 'w', newline='')
fieldnames = ['id', 'title', 'first_name', 'last_name', 'email']
csvwriter = csv.DictWriter(csvfile, fieldnames=fieldnames)
csvwriter.writeheader()

# Stats
resume_count = 0
resume_errors = 0
# Start at 0 so first candidate is 1
candidate_id = 0

# Process each input PDF
for input_pdf in INPUT_PDFS:
    print("Processing file: {0}...".format(input_pdf))
    # Open input PDF as binary
    doc = PdfFileReader(open(input_pdf, "rb"))
    # Loop through outline (list of Destination objects, aka bookmarks)
    for i, dest in enumerate(doc.outlines):
        # Ignore sub-destinations, which are always lists (some resumes will have their own bookmarks)
        if isinstance(dest, list):
            continue
        # Get page details associated with this outline (Destination object)
        page_num = doc.getDestinationPageNumber(dest)
        page = doc.getPage(page_num)
        # Is this page the start of a new candidate?
        if re.match(r'.* Candidate Details', dest.title):
            candidate_id += 1
            # Extract candidate details
            details = page.extractText().split('\n')
            c_fullname = details[0].strip().split()
            print(" ".join(c_fullname))
            if c_fullname[0].strip('.').lower() in TITLES:
                # Title must exist
                c_title = c_fullname[0].strip('.')
                c_firstname = c_fullname[1]
            else:
                # No title
                c_title = ""
                c_firstname = c_fullname[0]
            c_lastname = c_fullname[-1]  # Last element, to ignore first/middle names
            c_email = details[details.index('Email Address:')+1].strip()
            # Write details to CSV
            csvwriter.writerow({'id': candidate_id,
                                'title': c_title,
                                'first_name': c_firstname,
                                'last_name': c_lastname,
                                'email': c_email})
        # Is the page the start of a resume?
        # Note: matches end of string, since resumes that failed for whatever reason
        # will have (Error) or (Pending) or (Embedded) at the end. We want to ignore these pages.
        if re.match(r'.*\.pdf$', dest.title):
            resume_count += 1
            print("  Found resume")
            # Find the next destination that isn't a sub-destination (which are always lists).
            # If there is none (this resume is the last outline entry), the resume
            # runs to the end of the document.
            k = i
            next_dest_pagenum = doc.getNumPages()
            while k + 1 < len(doc.outlines):
                k += 1
                next_dest = doc.outlines[k]
                if not isinstance(next_dest, list):
                    next_dest_pagenum = doc.getDestinationPageNumber(next_dest)
                    break
            # Extract pages between current and next destination, which should be the whole resume
            resume = PdfFileWriter()
            for j in range(page_num, next_dest_pagenum):
                resume.addPage(doc.getPage(j))
            # Create the file
            with open("{0}/resumes/{1}.pdf".format(OUTPUT_DIR, candidate_id), "wb") as file:
                resume.write(file)
        elif re.match(r'.*\.pdf \(.*\)$', dest.title):
            # If this matches, it's a resume with an error
            resume_errors += 1

# Close CSV
csvfile.close()

print("\n---Done!---")
print("Candidates: {0}".format(candidate_id))
print("Resumes: {0}".format(resume_count))
print("Resume errors: {0}".format(resume_errors))
PostgreSQL does not order timestamp column correctly
I have a table in a PostgreSQL database with a column of TIMESTAMP WITHOUT TIME ZONE type. I need to order the records by this column and apparently PostgreSQL has some trouble doing it as both
...ORDER BY time_column
and
...ORDER BY time_column DESC
give me the same order of elements for my 3-element sample of records, which share the same time_column value except for the milliseconds.
It seems that while sorting, it does not consider milliseconds in the value.
I am sure the milliseconds are in fact stored in the database because when I fetch the records, I can see them in my DateTime field.
When I first load all the records and then order them by the time_column in memory, the result is correct.
Am I missing some option to make the ordering behave correctly?
EDIT: I was apparently missing a lot. The problem was not in PostgreSQL, but in NHibernate stripping the milliseconds off the DateTime property.
Could you show us a real working example?
I must admit I'm finding it hard to believe that PostgreSQL can't sort a built-in type that's been there for more than a decade.
Please provide table definition and the insert commands for your 3-element sample that fails to sort. I concur with Richard, as I'm using timestamp without time zone in a database and have no problems with the sort.
It's a foolish notion that PostgreSQL wouldn't be able to sort timestamps correctly.
Run a quick test and rest assured:
CREATE TEMP TABLE t (x timestamp without time zone);
INSERT INTO t VALUES
('2012-03-01 23:34:19.879707')
,('2012-03-01 23:34:19.01386')
,('2012-03-01 23:34:19.738593');
SELECT x FROM t ORDER by x DESC;
SELECT x FROM t ORDER by x;
q.e.d.
Then try to find out, what's really happening in your query. If you can't, post a testcase and you will be helped presto pronto.
Try casting your column to ::timestamp, like this:
SELECT * FROM TABLE
ORDER BY time_column::timestamp
Since the column is already timestamp (alias for timestamp without time zone), the cast does nothing useful.
For those having similar issues it's worth checking the level of precision for the column.
Consider the following table and query:
CREATE TABLE IF NOT EXISTS table_name (
    id UUID PRIMARY KEY,
    create_date TIMESTAMP(0) WITH TIME ZONE NOT NULL
);
and
SELECT * FROM table_name ORDER BY create_date DESC
In unit testing where a bunch of rows are inserted right after each other (within milliseconds), this will give inconsistent results. Changing the column to TIMESTAMP(6) instead of 0 makes it work with a higher level of precision, so rows can all be INSERTed "at once" and sorting will still work.
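The precision point generalises: once the fractional seconds are truncated, whether by a `TIMESTAMP(0)` column or, as in the original question, by the ORM stripping milliseconds, equal sort keys make ascending and descending orders coincide. A small Python sketch of the same effect:

```python
# Demonstrate why truncated timestamps sort "the same" in both directions:
# the keys become equal, so both orders degenerate to the stable input order.
from datetime import datetime

ts = [
    datetime(2012, 3, 1, 23, 34, 19, 879707),
    datetime(2012, 3, 1, 23, 34, 19, 13860),
    datetime(2012, 3, 1, 23, 34, 19, 738593),
]

# Full precision: ascending and descending orders differ, as expected.
asc = sorted(ts)
desc = sorted(ts, reverse=True)
print(asc != desc)  # True

# Truncated to whole seconds (precision 0): all three keys are equal,
# so ascending and descending sorts return the same sequence.
truncated = [t.replace(microsecond=0) for t in ts]
print(sorted(truncated) == sorted(truncated, reverse=True))  # True
```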
[texworks] problem in ubuntu 11.04
st.loeffler at gmail.com
Mon Jul 25 18:55:35 CEST 2011
On 2011-07-25 18:16, Jiří Fišer wrote:
> I found out that I have different versions of Qt libraries in /lib and
so, my wild guess was correct after all.
> jirka at jirka-MS-7469:/lib$ ls libQt*
> libQtCore.so libQtGui.so libQtNetwork.so libQtXml.so
> libQtCore.so.4 libQtGui.so.4 libQtNetwork.so.4
> libQtCore.so.4.6 libQtGui.so.4.6 libQtNetwork.so.4.6
> libQtCore.so.4.6.2 libQtGui.so.4.6.2 libQtNetwork.so.4.6.2
It looks like you have Qt 4.6.2 in /lib ...
> jirka at jirka-MS-7469:~$ ldd /usr/bin/texworks
> libQtDBus.so.4 => /usr/lib/libQtDBus.so.4 (0x00b52000)
> libQtScript.so.4 => /usr/lib/libQtScript.so.4 (0x00110000)
> libQtScriptTools.so.4 => /usr/lib/libQtScriptTools.so.4 (0x00660000)
> libQtXml.so.4 => /lib/libQtXml.so.4 (0x00746000)
> libQtGui.so.4 => /lib/libQtGui.so.4 (0x00f0d000)
> libQtCore.so.4 => /lib/libQtCore.so.4 (0x003b1000)
... and Tw indeed (partly) uses those libraries from /lib. The default
in Ubuntu 11.04 is Qt 4.7.2, however, and that's also what Tw was built
with. So naturally, this causes problems.
Since you didn't know initially that you had two versions installed, I
would suggest to remove the 4.6.2 version from /lib, unless you have any
good reason to keep it. Or at least move it out of the way (e.g., to
/opt, or to your home directory). Alternatively, you could start messing
around with how Linux looks for shared libraries (e.g., search the
internet for "linux shared libraries", or LD_LIBRARY_PATH, or something
like that), but I wouldn't generally recommend that.
URLVoid (a beta version at the moment) is a free service that scans suspicious websites with multiple engines to check if a site is safe to browse. In the URLVoid concept, safety is also determined by the reputation of the website; in fact it uses both AV engines and reputation engines. Here are the engines it uses so far: McAfee SiteAdvisor, McAfee Trusted Source, PcTools Browser Defender, Norton SafeWeb, MyWOT, Threat Log, MalwareDomainList, hpHosts, ZeuS Tracker, Google Diagnostic, PhishTank, Project Honey Pot, ParetoLogic, Spamhaus, URIBL, Malware Patrol, SURBL, SpamCop, TrendMicro Web Reputation, Web Security Guard.
So what does it think of my blog?
SUSPICIOUS … Of course! Nah… really? NoVirusThanks, probably one of the most used services right now, thinks that my blog is totally safe.
But as you can see from the image, it performed 16 checks against URLVoid's 19. So which is the odd check that triggers the alarm on my blog?
The MyWOT reputation engine believes that my site is suspicious, so not really safe. Interesting… so why does it believe that? If you analyze my blog with MyWOT you will find a very interesting thing: marcoramilli.blogspot.com has one of the highest scores in trustworthiness, but it is below average in the following categories:
- Vendor Reliability (53)
- Privacy (54)
- Child Safety (42)
Now, I cannot accept point (1), since I am not a vendor and you won't buy anything from this blog (53 points? who gave me those points?). Point (2), what does privacy mean here? That I write about privacy? That I steal your privacy? That I want privacy? Anyway, I got 54 points. And Child Safety, this one makes me laugh :D. So what do 42 points mean? Let's say that pornography scores 100 (even if pornography has nothing to do with safety, at most with "bad education"… but anyway, they use this practical categorization). I got 42, so my site is close enough to being half a porn blog, right? Which means that half the posts are on pornography… hmm… But it is one of the best in trustworthiness. Well, that's cool! I didn't know I wrote such spicy posts 😀 😀 .
Anyway, pay attention when evaluating your URL with these toys, and before buying this kind of service keep in mind that viruses and malware, once discovered, are pretty easy to detect, while the reputation of a website is still a great and huge research topic, probably still far from a good and usable product.
Embedding *.ttc fonts using PDFBox
PDFBox provides mechanisms to embed various types of fonts. For example, it provides PDTrueTypeFont.loadTTF(...), which can accept a TrueType (*.ttf) file.
The TrueType Collection format (*.ttc) supports multiple fonts per file, in an extension of the TrueType format.
Attempting to load a *.ttc file with PDTrueTypeFont.loadTTF() results in an IOException being thrown.
How does one embed one or all of the fonts in a *.ttc file into a PDF document, using PDFBox?
The PDF specification doesn't allow for TrueType collections as embedded fonts. You'll need to pull out a single TTF-format stream from the *.ttc and embed that.
As it stands (and AFAIK) PDFBox doesn't support this by itself; I used Google's 'sfntly' package.
Quick and dirty solution:
FontFactory factory = FontFactory.getInstance();
Font[] fonts = factory.loadFonts( ... ); // pulls every TTF out of the TTC
ArrayList<PDTrueTypeFont> pdf_fonts = new ArrayList<PDTrueTypeFont>();
for( Font f : fonts ){
    // sfntly writes each font to a TTF stream
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    factory.serializeFont(f, out);
    // PDFBox reads the stream and embeds the font
    ByteArrayInputStream ttf_font_stream = new ByteArrayInputStream(out.toByteArray());
    pdf_fonts.add(PDTrueTypeFont.loadTTF(document, ttf_font_stream));
}
Font and FontFactory are in com.google.typography.sfntly
Java code not guaranteed to be 100% correct; been working in Clojure a lot lately....
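For reference, the TTC container header itself is simple. Here is a hypothetical Python sketch (field layout per the TrueType/OpenType spec: a 'ttcf' tag, a version, a font count, then one 32-bit offset per font, all big-endian) that reads the font count and per-font offsets. It is run against a hand-built header rather than a real file:

```python
# Parse the header of a TrueType Collection ('ttcf') to find where the
# individual fonts start. Demonstrated on a synthetic header, not a real TTC.
import struct

def parse_ttc_header(data):
    """Return the list of byte offsets of the fonts inside the collection."""
    tag, major, minor, num_fonts = struct.unpack_from(">4sHHI", data, 0)
    if tag != b"ttcf":
        raise ValueError("not a TTC file")
    # num_fonts 32-bit offsets follow immediately after the 12-byte header
    return list(struct.unpack_from(">%dI" % num_fonts, data, 12))

# Hand-built header: version 1.0, two fonts at offsets 0x20 and 0x400.
header = struct.pack(">4sHH3I", b"ttcf", 1, 0, 2, 0x20, 0x400)
print(parse_ttc_header(header))  # [32, 1024]
```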
Thanks! That looks good. Can I get the font names from the *.ttc file with sfntly? And do you happen to known if getting the names of the sfntly Fonts require parsing the rest of the font?
You can get the name out with sfntly; it's not exactly "simple" though. You need to getTable(Tag.name) to get an instance of NameTable. You can access names from there with .getNameEntry, but you have to pass in appropriate constants. That will give you access to a slew of "names" for the font. You're probably safest by iterating through the NameTable and pulling the first entry with a FontFamilyName or PreferredFamily as its nameId.
... and unfortunately, yes; when you build the sfntly Fonts it parses the entire sfnt wrapper, however you can make it parse less if you need to by using FontFactory.loadFontsForBuilding (which gives a list of Font.Builder objects; you can then strip these down to just parse the name table. Then you call .build() on the builder objects to produce a Font object you can poke around in.
For the name I found example code in SFLint.java, which uses the constants NameId.FontFamilyName and NameId.FullFontName. I'll look at PreferredFamily as well. And thank you for the suggestion of FontFactory.loadFontsForBuilding(). Your excellent answers deserve more votes.
This seems to fail for *.ttc at least on the Mac. PDTrueTypeFont.loadTTF() throws an NPE after some code that sets a variable iff it finds a table whose platform ID is CMAPTable.PLATFORM_WINDOWS. Looks like it might be a bug; I'm going to report it.
btw pdfbox 2.0.20 features loading of ttc files in a provided example EmbeddedMultipleFonts.java
polars v0.14.2
Hi! This is the friendly automated conda-forge-linting service.
I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.
I still don't really understand how I can ensure we use a specific rust nightly version. Do you know @timkpaine?
@ritchie46 it is already using the nightly version, but "nightly version" is manually propagated to conda forge. So if there are new features in a newer rust nightly than we have, the version of rust nightly needs to be updated on the rust feedstock like this https://github.com/conda-forge/rust-feedstock/pull/87
conversation with @xhochy here https://github.com/conda-forge/staged-recipes/pull/18906#issuecomment-1126994556
My desire to publish rust-nightly as its own package so it automatically ticks up versions: https://github.com/conda-forge/staged-recipes/pull/18906#issuecomment-1127929814
@jakirkham and @isuruf disagreed and said "just ask rust to bump the version" here: https://github.com/conda-forge/staged-recipes/pull/18906#issuecomment-1128207577
(clearly the "just ask the rust maintainers to bump the nightly version" is not the most sustainable or scalable decision, but I am not a conda forge admin)
last rust nightly is 20220529 https://github.com/conda-forge/rust-feedstock/tree/dev/recipe
Right.. Am I correct that we can only use the nightly version set in that feedstock? I can imagine that there will be repos that need a specific compiler other than the one provided.
For instance, the latest polars only compiles with rustc 1.65.0-nightly, because the nightly SIMD API has changed recently. Or can feedstocks be versioned, so that any version in the history is valid?
Right.. Am I correct that we can only use the nightly version set in that feedstock? I can imagine that there will be repos that need a specific compiler other than the one provided.
For instance, the latest polars only compiles with rustc 1.65.0-nightly, because the nightly SIMD API has changed recently. Or can feedstocks be versioned, so that any version in the history is valid?
you can use any published version through time, so long as the version has been published. let me get the list
but again, because @isuruf and @jakirkham want this to be a manual process, only manually added versions of rust nightly will be present, rather than for a normal feedstock where new versions are continuously propagated by bot
but again, because @isuruf and @jakirkham want this to be a manual process, only manually added versions of rust nightly will be present, rather than for a normal feedstock where new versions are continuously propagated by bot
I understand. Thanks!
I will also note:
https://github.com/conda-forge/staged-recipes/pull/18906#issuecomment-1126994710
and
https://github.com/conda-forge/staged-recipes/pull/18906#issuecomment-1127929814
Would it be crazy to publish rust-nightly directly on conda-forge? Unlike other pre-release versions, it seems many libraries treat rust-nightly as a production equivalent. And I expect many more Python packages will have rust nightly dependencies; repeating this exercise isn't going to scale well.
Any thoughts @isuruf / @jakirkham / @xhochy ?
Polars is extremely fast moving; without automated rebuilding of rust-dev on every nightly release, I don't think it will be sustainable.
That's easy to fix. Add code to the bot to send PRs to update once every week.
> Would it be crazy to publish rust-nightly directly on conda-forge? Unlike other pre-release versions, it seems many libraries treat rust-nightly as a production equivalent. And I expect many more python packages will have rust nightly dependencies; repeating this exercise isn't going to scale well.
What does that achieve? It's a build time only requirement and end users don't have to worry about that.
> That's easy to fix. Add code to the bot to send PRs to update once every week.
Sounds like a plan, can you point to some docs on this?
> What does that achieve? It's a build time only requirement and end users don't have to worry about that.
Package maintainers get a relatively convoluted set of steps in order to enable something that rust makes very easy via native tools, rather than saying "use rust-nightly instead of rust". The latter is much closer to e.g. rustup behavior.
Here's an example https://github.com/regro/cf-scripts/blob/master/conda_forge_tick/update_sources.py#L207
> Package maintainers get a relatively convoluted set of steps in order to enable something that rust makes very easy via native tools, rather than saying "use rust-nightly instead of rust".
What's so convoluted about adding just
channel_sources:
  - conda-forge/label/rust_dev,conda-forge
> What's so convoluted about adding just
> channel_sources:
>   - conda-forge/label/rust_dev,conda-forge
I mean, for one, the fact that it's called "dev" and not "nightly", and then also the fact that it is not at all "nightly" and just some random version picked by the last person who had a problem when they expected "nightly" to mean "nightly" 😆
> I mean, for one, the fact that it's called "dev" and not "nightly", and then also the fact that it is not at all "nightly" and just some random version picked by the last person who had a problem when they expected "nightly" to mean "nightly"
This was definitely my confusion. Especially since nightly can come with bugs, you might need to pin a very specific version. In rustup we can set the release date, e.g. nightly-2022-08-16.
There can be collisions with a version.
For instance nightly-2022-08-16 might compile where nightly-2022-08-15 would not. Both would be versioned 1.65.0.
> I mean, for one, the fact that it's called "dev" and not "nightly", and then also the fact that it is not at all "nightly" and just some random version picked by the last person who had a problem when they expected "nightly" to mean "nightly" 😆
LOL. That's exactly why it's called dev and not nightly.
> For instance nightly-2022-08-16 might compile where nightly-2022-08-15 would not. Both would be versioned 1.65.0.
You can just add rust <IP_ADDRESS>2.08.16 in your recipe/meta.yaml to use that specific version
> LOL. That's exactly why it's called dev and not nightly.
but "rust dev" is not a thing for rust developers, whereas "rust nightly" is. Also, when approached by people confused by something, dismissing their confusion is generally not the best way to solve the problem. I think if we set up the bot to update on some regular interval (doesn't have to be daily) and add examples and clarification about "nightly" vs "dev" to the conda docs, then this can be much more understandable.
To be productive: I think if we set up the bot to update on some regular interval (doesn't have to be daily) and add examples and clarification about "nightly" vs "dev" to the conda docs, then this can be much more understandable.
Sure. We are already swamped as is. PRs are welcome.
> but "rust dev" is not a thing for rust developers, whereas "rust nightly" is. Also, when approached by people confused by something, dismissing their confusion is generally not the best way to solve the problem.
Well, you suggested nightly as a better name and then weren't happy that the new name wasn't good, whereas the old name fixes your concern.
> but "rust dev" is not a thing for rust developers
dev is a standard for conda-forge and going out of the standard is not good. (Your suggestion is to not go out of the standard for rust). So it's a matter of which standard to use when there are two conflicting standards.
> dev is a standard for conda-forge and going out of the standard is not good. (Your suggestion is to not go out of the standard for rust.) So it's a matter of which standard to use when there are two conflicting standards.
I'm happy to keep dev; we should just tweak the compilers section of the docs. This plus a non-nightly update cadence (nightly is too much for the bot) and it should be fine.
> Sure. We are already swamped as is. PRs are welcome.
Happy to help. I just wanted to make sure ahead of time that it's ok, and I needed some help on bot control.
If we add other label naming, the CFEP would also need to be updated
closing, to do in https://github.com/conda-forge/polars-feedstock/pull/40
Debtags and Blends tasks pages
enrico at enricozini.org
Wed Jan 28 11:17:33 UTC 2009
On Wed, Jan 28, 2009 at 10:12:28AM +0100, Andreas Tille wrote:
> would in turn let us Blends builders profit from DebTaggers work
> and may be we finally reach a state which enables us to trust
> completely in the debtags to generate our metapackages and tasks
It all sounds interesting.
I'm not sure you want to completely trust debtags for that, at least not
until something changes in the workflow of tags: at the moment the QA
process for tags is not strict enough to prevent small flaws from
creeping into even the reviewed tag set. In the future I would like to
be able to "assign" tags to people or teams, so that either they are
the only ones who can change them, or so that they do QA on their tags
before they are accepted, or a mix of both. At that point, provided
your tags are assigned to a team, you should be able to trust debtags
for your task.
Also, it may be that you don't need all the packages with a given tag:
for example, you may have two packages doing the same thing, but your
Blend may want one and not the other.
> Provided that you like my suggestions, it would be really helpful
> if you could provide me with the following information:
> 1. Is there a simple Python interface like
> GetDebTags(<list of packages>)
> Dictionary of DebTags, key are the list elements above
> or something like this which would be useful to add the
> Debtags to the package information as suggested above
Sure there is:
apt-get install python-debian
from debian_bundle import debtags
db = debtags.DB()
tags = db.tagsOfPackages(["pkg1", "pkg2", "pkg3"])
> 2. A Python interface that provides any package which
> has a certain DebTag set:
> List of packages which has a certain tag (may be
> also a list of tags) set.
Same as before, but:
packages = db.packagesOfTags(["tag1", "tag2", "tag3"])
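If you just want to see the shape of the data these two calls return, here is a stand-in using plain dictionaries; the package names, tags, and helper names below are invented for illustration, and the real lookups come from the debtags module shown above:

```python
# Plain-dict stand-in for the two debtags lookups above. The package names
# and tags are hypothetical; real data comes from python-debian's debtags
# module reading the system tag database.

# Forward index: package -> set of tags.
pkg_tags = {
    "pkg1": {"role::program", "interface::commandline"},
    "pkg2": {"role::program", "interface::x11"},
}

def tags_of_packages(packages):
    """Union of the tags of all the given packages."""
    tags = set()
    for pkg in packages:
        tags |= pkg_tags.get(pkg, set())
    return tags

def packages_of_tags(tags):
    """Packages that carry *all* of the given tags."""
    wanted = set(tags)
    return {pkg for pkg, t in pkg_tags.items() if wanted <= t}

print(sorted(tags_of_packages(["pkg1", "pkg2"])))
print(sorted(packages_of_tags(["role::program"])))
```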
> If these interfaces exist I might be able to realise my suggestion
> above in the near future (I will spend this weekend in Trondheim at
> the Debian Edu developer gathering and will try to work on things
> like this).
You can find some documentation about the debtags module in
There are also other ways, for example to cross debtags data with a full
Xapian text index of package data, and for that you may want to look
into the package apt-xapian-index and its examples.
If you want to download a cross-arch and cross-distro dataset, you may
want to look at the video of my oncoming DDE presentation at FOSDEM.
In the meantime, python-debian should be enough to get you started.
GPG key: 1024D/797EBFAB 2000-12-05 Enrico Zini <enrico at debian.org>
The Java JSON API called Jackson consists of one core JAR file (project) and two other JAR files that use the core JAR file. The three JAR files (projects) in the Jackson JSON API are:
- Jackson Core
- Jackson Annotations
- Jackson Databind
These projects depend on each other in that sequence too: Jackson Annotations uses the Jackson Core features, and Jackson Databind uses Jackson Annotations.
In order to "install" Jackson in your Java application you need to add these JAR files to the classpath of your application. There are several ways to do so. I will cover two here.
Adding the JAR Files to Your Classpath
In order to use any of the Jackson projects (JAR files) you must add the corresponding JAR file to the classpath of your application. You can do so in several different ways.
Before you even start adding the Jackson JAR files to your classpath, you need to decide what version of the Jackson JSON API to use. To do so, go to the Jackson website and see what versions are available. If you are not already using a specific version of Jackson in your project, use the latest stable version. Write the version number down. You may need it later.
You can find a list of Jackson releases here:
Jackson Maven Dependencies
If your project uses Maven to build (and resolve + download dependencies), you need to add Jackson as a dependency to your project's POM file. Here is how Jackson as Maven dependencies look:
    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-core</artifactId>
        <version>2.9.6</version>
    </dependency>
    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-annotations</artifactId>
        <version>2.9.6</version>
    </dependency>
    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-databind</artifactId>
        <version>2.9.6</version>
    </dependency>
Note that jackson-databind has "transitive dependencies" on jackson-core and jackson-annotations. This means that if you plan to use the jackson-databind project, you just need to include that as a dependency in your Maven POM file. It will then transitively include the other two projects as dependencies. Regardless, I like to add my dependencies explicitly so I can see what I am using (but that's just me).
Adding the JAR Files Directly to the Classpath
Another option is to add the Jackson JAR files directly to the classpath of your application. To do so you must first download the JAR files. You can download the finished JAR files via the central Maven repository. Go to:
Search for the project you want to use (e.g. jackson-databind). Once you find the right project, make sure you click the "all versions" link next to the version number in the search result. That will show you a list of the previous versions of that project. Click around until you find a link directly to the JAR file for that project and that version. Then download the JAR file.
Once the JAR files are downloaded, add them to the classpath of your project. You can do so within your IDE, and / or within the scripts (bash or cmd) you use to run your application.
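As a concrete sketch of the classpath step, assuming (hypothetically) that the three JARs were downloaded into a lib/ directory, the launch command could be assembled like this; adjust the file names and versions to what you actually downloaded, and use ';' as the path separator on Windows:

```shell
# Hypothetical file layout -- substitute your real paths and versions.
CP="lib/jackson-core-2.9.6.jar:lib/jackson-annotations-2.9.6.jar:lib/jackson-databind-2.9.6.jar"

# Print the command rather than run it, since the JARs may not exist yet
# on this machine; com.example.MyApp is a placeholder main class.
echo "java -cp \"$CP:.\" com.example.MyApp"
```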
'use strict';
var Notify = require('osg/notify');
var MACROUTILS = require('osg/Utils');
var Node = require('osgShader/node/Node');
// Abstract class
// base operator contains helper for the constructor
// it helps to do that:
// arg0 = output
// arg1 = input0 or [ inputs ]
// arg2 = input1
// ...
var BaseOperator = function() {
Node.call(this);
};
BaseOperator.prototype = Node.prototype;
// Add support this syntax:
// new Add( output, input0, input1, ... )
// new Add( output, [ inputs ] )
// glsl code output = input0 + input1 +...
var Add = function() {
BaseOperator.call(this);
};
MACROUTILS.createPrototypeObject(
Add,
MACROUTILS.objectInherit(BaseOperator.prototype, {
type: 'Add',
operator: '+',
_getFirstVariableCast: function() {
var variable = this._inputs[0].getVariable();
var inType = this._inputs[0].getType();
var outType = this._outputs.getType();
if (outType === inType) return variable;
// upcast float
if (inType === 'float') return outType + '(' + variable + ')';
// downcast vector
if (outType === 'vec3') return variable + '.rgb';
if (outType === 'vec2') return variable + '.rg';
if (outType === 'float') return variable + '.r';
Notify.error('Mismatch type : ' + outType + ' with ' + inType + ', ' + variable);
return variable;
},
computeShader: function() {
// force inputs type to be all the same from the output
var outputType = this._outputs.getType();
var addType = '';
if (outputType === 'vec4') addType = '.rgba';
else if (outputType === 'vec3') addType = '.rgb';
else if (outputType === 'vec2') addType = '.rg';
var firstVariable = this._getFirstVariableCast();
var str = this._outputs.getVariable() + ' = ' + firstVariable;
for (var i = 1, l = this._inputs.length; i < l; i++) {
var input = this._inputs[i];
str += this.operator + input.getVariable();
var inType = input.getType();
if (inType !== 'float' && inType !== outputType) {
str += addType;
}
}
str += ';';
return str;
}
}),
'osgShader',
'Add'
);
// Mult works like Add
// glsl code output = input0 * input1 * ...
var Mult = function() {
Add.call(this);
};
MACROUTILS.createPrototypeObject(
Mult,
MACROUTILS.objectInherit(Add.prototype, {
type: 'Mult',
operator: '*'
}),
'osgShader',
'Mult'
);
// basic assignment alias: output = input
// glsl code output = input0
var SetFromNode = function() {
Add.call(this);
};
MACROUTILS.createPrototypeObject(
SetFromNode,
MACROUTILS.objectInherit(Add.prototype, {
type: 'SetFromNode'
}),
'osgShader',
'SetFromNode'
);
// Mult Matrix * vector4
// making the cast vector4(input.xyz, 0)
// if needed
// glsl code output = matrix * vector4(vec.xyz, 0)
var MatrixMultDirection = function() {
Add.call(this);
this._overwriteW = true; // if set to false, we copy the input alpha in the output alpha
this._forceComplement = true;
this._inverseOp = false;
};
MACROUTILS.createPrototypeObject(
MatrixMultDirection,
MACROUTILS.objectInherit(Add.prototype, {
type: 'MatrixMultDirection',
operator: '*',
validInputs: ['vec', 'matrix'],
validOutputs: ['vec'],
complement: '0.',
setInverse: function(bool) {
this._inverseOp = bool;
return this;
},
setForceComplement: function(bool) {
this._forceComplement = bool;
return this;
},
setOverwriteW: function(bool) {
this._overwriteW = bool;
return this;
},
computeShader: function() {
// force inputs type to be all the same from the output
// and handle vector complement
var vecIn = this._inputs.vec.getVariable();
var matrix = this._inputs.matrix.getVariable();
var vecOut = this._outputs.vec.getVariable();
var inputType = this._inputs.vec.getType();
var outputType = this._outputs.vec.getType();
var matrixType = this._inputs.matrix.getType();
var strOut = vecOut;
if (matrixType === 'mat4') {
strOut += ' = ';
if (outputType !== 'vec4') strOut += outputType + '(';
var strCasted = vecIn;
if (this._forceComplement || inputType !== 'vec4')
strCasted = 'vec4(' + vecIn + '.xyz, ' + this.complement + ')';
strOut += this._inverseOp
? strCasted + this.operator + matrix
: matrix + this.operator + strCasted;
if (outputType !== 'vec4') strOut += ')';
strOut += ';';
if (!this._overwriteW && inputType === 'vec4')
strOut += '\n' + vecOut + '.a = ' + vecIn + '.a;';
} else {
if (outputType === 'vec4') strOut += '.xyz';
strOut += ' = ';
var strvec3 = vecIn + '.xyz';
strOut +=
(this._inverseOp
? strvec3 + this.operator + matrix
: matrix + this.operator + strvec3) + ';';
if (!this._overwriteW && outputType === 'vec4' && inputType === 'vec4')
strOut += '\n' + vecOut + '.a = ' + vecIn + '.a;';
}
return strOut;
}
}),
'osgShader',
'MatrixMultDirection'
);
// override only for complement.
// glsl code output = matrix * vector4(vec.xyz, 1)
var MatrixMultPosition = function() {
MatrixMultDirection.call(this);
this._forceComplement = false;
};
MACROUTILS.createPrototypeObject(
MatrixMultPosition,
MACROUTILS.objectInherit(MatrixMultDirection.prototype, {
type: 'MatrixMultPosition',
complement: '1.'
}),
'osgShader',
'MatrixMultPosition'
);
var Blend = function() {
BaseOperator.apply(this);
this._mode = 'MIX';
};
MACROUTILS.createPrototypeObject(
Blend,
MACROUTILS.objectInherit(BaseOperator.prototype, {
type: 'Blend',
mode: function(mode) {
this._mode = mode;
return this;
},
computeShader: function() {
return this[this._mode === undefined ? 'MIX' : this._mode]();
},
ADD: function() {
return (
this._outputs.getVariable() +
' = ' +
this._inputs[0].getVariable() +
' + (' +
this._inputs[1].getVariable() +
' * ' +
this._inputs[2].getVariable() +
');'
);
},
MIX: function() {
// result = val0*(1.0-t) + t*val1
return (
this._outputs.getVariable() +
' = mix(' +
this._inputs[0].getVariable() +
', ' +
this._inputs[1].getVariable() +
', ' +
this._inputs[2].getVariable() +
');'
);
},
MULTIPLY: function() {
return (
this._outputs.getVariable() +
' = ' +
this._inputs[0].getVariable() +
' * mix( ' +
this._inputs[0].getType() +
'(1.0), ' +
this._inputs[1].getVariable() +
', ' +
this._inputs[2].getVariable() +
');'
);
}
})
);
// For all you custom needs.
//
// call Code() with variable input/output replace
// indexed by the '%'
// getNode( 'InlineCode' ).code( '%out = %input;' ).inputs( {
// input: this.getOrCreateConstant( 'float', 'unitFloat' ).setValue( '1.0' )
// } ).outputs( {
// out: this.getNode( 'glPointSize' )
// }
//
var InlineCode = function() {
Node.call(this);
};
MACROUTILS.createPrototypeObject(
InlineCode,
MACROUTILS.objectInherit(Node.prototype, {
type: 'InlineCode',
code: function(txt) {
this._text = txt;
return this;
},
computeShader: function() {
// merge inputs and outputs dict to search in both
var replaceVariables = MACROUTILS.objectMix({}, this._inputs);
replaceVariables = MACROUTILS.objectMix(replaceVariables, this._outputs);
// find all %string
var r = new RegExp('%[A-Za-z0-9_]+', 'gm');
var text = this._text;
var result = this._text.match(r);
var done = new Set(); // keep trace of replaced string
for (var i = 0; i < result.length; i++) {
var str = result[i].substr(1);
if (!done.has(str)) {
if (!replaceVariables[str]) {
Notify.error('error with inline code\n' + this._text);
Notify.error('input ' + str + ' not provided for ' + result[i]);
}
var reg = new RegExp(result[i].toString(), 'gm');
text = text.replace(reg, replaceVariables[str].getVariable());
done.add(str);
}
}
return text;
}
}),
'osgShader',
'InlineCode'
);
module.exports = {
BaseOperator: BaseOperator,
Mult: Mult,
MatrixMultPosition: MatrixMultPosition,
MatrixMultDirection: MatrixMultDirection,
Add: Add,
Blend: Blend,
InlineCode: InlineCode,
SetFromNode: SetFromNode
};
Read Special Characters in Socket Server
I am creating a Client/Server using the Java Networking API. My client will send special unicode characters to the server before and after the message: \uc001B before the message and \uc001C after it. After the message has been sent successfully, the client will send \r to the server; by receiving this, the server can identify that the message sending is done. But my problem here is how I can check in the server whether the message from the client contains \r.
DataOutputStream outToServer = new DataOutputStream( clientSocket.getOutputStream());
BufferedReader inFromServer = new BufferedReader(new InputStreamReader(
clientSocket.getInputStream()));
outToServer.writeBytes("\uc001B");
outToServer.flush();
outToServer.writeBytes(message.toString());
outToServer.writeBytes("\uc001C");
outToServer.flush();
outToServer.writeBytes("\r");
outToServer.flush();
And here is my server Code to read messages from the client
in = new BufferedReader(new InputStreamReader(m_clientSocket.getInputStream()));
out = new PrintWriter(new OutputStreamWriter( m_clientSocket.getOutputStream()));
String receivingMessage = "";
while (m_bRunThread) {
String clientCommand = in.readLine().toString();
receivingMessage += clientCommand;
System.out.println("Client Says :" + clientCommand);
if (in.equals("\r")) {
System.out.print("Message Receiving from Client Done : "+ m_clientID);
m_bRunThread = false;
}
}
Thanks
Why are you calling readLine().toString()? It already is a String, unless it is null, which you aren't checking for. This code will NPE at end of stream.
You are using readLine(). It removes the newline, whatever it was: it understands all of them. Ergo you cannot possibly tell what the newline character was. Also you cannot possibly care. Every line you read was terminated by a newline character. But you are on fairly dangerous ground using STX and ETX in association with a Reader. You seem to have a protocol definition problem: you are sending STX/ETX and also expecting newlines. Why?
Hi Thanks, I got it now I am using read() and can check for the Carriage Return value.
@Ramo I still don't know why you need it, or why you're even sending it. You already have STX/ETX, that's enough. You don't need the belt and the braces.
Hi, actually it's the HL7 messaging standard: whenever there is a request from the client, the server should check whether the full message from the client was received. To check that, the client needs to pass a special character after the message. So to meet the HL7 standards we are using Carriage Return as the indicator for end of message.
@Pramo Does HL7 really specify both STX/ETX and another message terminator? Hard to believe.
Yes it is! They have given certain rules to follow to pass the message from client to server.
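Following up on the read()-based approach mentioned above, here is a minimal sketch of reading until the carriage return terminator. The class name and sample message are invented for illustration, and the full HL7/MLLP framing bytes before and after the message are deliberately not handled:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.Reader;

public class Hl7FrameReader {

    // Read characters until the carriage return that (per the discussion
    // above) marks end-of-message, and return everything before it.
    // The leading/trailing framing characters are not handled here --
    // this is only a sketch of the read()-based approach.
    static String readUntilCR(Reader in) throws IOException {
        StringBuilder sb = new StringBuilder();
        int c;
        while ((c = in.read()) != -1) {
            if (c == '\r') {
                break; // end-of-message indicator reached
            }
            sb.append((char) c);
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical HL7-ish payload terminated by a carriage return.
        Reader in = new InputStreamReader(
                new ByteArrayInputStream("MSH|^~\\&|LAB|HOSP\rignored".getBytes()));
        System.out.println(readUntilCR(in)); // prints: MSH|^~\&|LAB|HOSP
    }
}
```

Unlike readLine(), this never strips the terminator silently, so the server can tell exactly when the \r arrived.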
In 2001, I wanted to try my hand at writing a Meade Autostar Guided Tour for my LX90 telescope. I chose color-contrast
double stars, because Albireo (beta Cygni) was the kind of celestial object that visitors to our public star parties can enjoy
under light-polluted skies. I underestimated the difficulty in finding sources of these objects, as many references
contained a few, but I could find no substantial list. I began compiling a list of every referenced object of this type
that I could find. As the list grew, I decided to expand it to an Excel file with more info than would be appropriate
in a Guided Tour file.
One idea that I pursued was to find a quantitative way to compare and rank these objects. I found that B-V Color
Index data is available for many of these stars. Since this provides a number that represents the color of the star,
I reasoned that the difference between the B-V Indexes of the two stars would give a measure of the color contrast.
Doubles with a large difference in B-V Indexes should present the best visual color contrast. As I am colorblind,
I prefer these quantitative measures to subjective observations. I also listed color descriptions from as many sources
as possible, showing how much our perceptions of the colors vary from one observer to another.
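The ranking idea above can be sketched in a few lines of Python. The B-V values below are rough, illustrative figures (approximately right for Albireo's golden-and-blue pair, and invented for the comparison pair), not catalog-grade data:

```python
# Rank color-contrast doubles by the difference of their B-V color indexes.
# B-V values are approximate/illustrative, not catalog-grade data.
doubles = {
    "Albireo (beta Cygni)": (1.1, -0.1),  # golden giant + blue companion
    "hypothetical pair": (0.6, 0.4),      # low-contrast example
}

def contrast(pair):
    """Absolute B-V differential: larger means stronger color contrast."""
    a, b = pair
    return abs(a - b)

ranked = sorted(doubles, key=lambda name: contrast(doubles[name]), reverse=True)
for name in ranked:
    print(name, round(contrast(doubles[name]), 2))
```

Sorting a full list by this differential reproduces the second sort used in the spreadsheet.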
The Guided Tour (MTF) file is sorted by RA and constellations. The PDF version contains the Excel spreadsheet that
I put together. It is sorted by RA and constellations, and then the same data is sorted by B-V color index differential.
I like to print the first 3 pages, then feed the pages back into the printer to print the other sort on the back. In
this way, I can flip the paper over to select which sort I want.
These files are offered free to the astronomy community as my contribution to all the great tour files I have found that
continue to add value to my Meade LX90 telescope. I hope to eventually update the list with a few other objects that
I have learned about since my last revision. I also hope to find color index data for more of the doubles when I can
work on this.
If these files seem familiar, it may be because these files have been available in other locations since I created them.
The tour file is in the files section of the LX90 Yahoo group, Weasner's ETX site, and some other locations. The PDF
file has been available on the DVAA's site, http://dvaa.org. They will remain in those locations as long as the
owners of the sites want. As I now have my own site, I will make them available here also.
A lot of people write to me, after reading The Joel Test, to say, "You should have a 13th thing on here: Unit Testing, 100% unit tests of all your code." At that point my interest is already piqued. Unit Testing 100% of your code is a terrible goal and I'm wondering where Joel is going to go with the entry. Overall I like the entry (which is really a transcribed discussion), but two things in it left me feeling uneasy.
- Joel doesn't come out and say it, but I got the impression he's ready to throw the baby out with the bath water. Unit testing 100% of your code is a terrible goal, but that doesn't mean unit testing is a bad idea. Unit testing is very helpful, when done in a way that provides a positive return on investment (ROI).
- Jeff hits it dead on when he says:
...what matters is what you deliver to the customer... Unfortunately, I think he's missing one reality: often, teams don't know what will make them more effective at delivering.
A Painful Path
Say you read Unit Testing Tips: Write Maintainable Unit Tests That Will Save You Time And Tears and decide that Roy has shown you the light. You're going to write all your tests with Roy's suggestions in mind. You get the entire team to read Roy's article and everyone adopts the patterns.
All's well until you start accidentally breaking tests that someone else wrote and you can't figure out why. It turns out that some object created in the setup method is causing unexpected failures after your 'minor' change created an unexpected side-effect. So, now you've been burned by setup and you remember the blog entry by Jim Newkirk where he discussed Why you should not use SetUp and TearDown in NUnit. Shit.
You do more research on setup and stumble upon Inline Setup. You can entirely relate and go on a mission to switch all the tests to xUnit.net, since xUnit.net removes the concept of setup entirely.
Everything looks good initially, but then a few constructors start needing more dependencies. Every test creates its own instance of an object; you moved the object creation out of the setup and into each individual test. So now every test that creates that object needs to be updated. It becomes painful every time you add an argument to a constructor. Shit. Again.
The Source of Your Pain
The problem is, you never asked yourself why. Why are you writing tests in the first place? Each testing practice you've chosen, what value is it providing you?
Your intentions were good. You want to write better software, so you followed some reasonable advice. But, now your life sucks. Your tests aren't providing a positive ROI, and if you keep going down this path you'll inevitably conclude that testing is stupid and it should be abandoned.
Unfortunately, you can't write better software by blindly following dogma of 'industry experts'.
First of all, I'm not even sure we have any industry experts on developer testing. Rarely do I find consistently valuable advice about testing. Relevance, who employs some of the best developers in the world, used to put 100% code coverage in their contracts. Today, that's gone, and you can find Stu discussing How To Fail With 100% Code Coverage. ObjectMother, which was once praised as brilliant, has now been widely replaced by Test Data Builders. I've definitely written my fair share of stupid ideas. And, the examples go on and on.
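To make the ObjectMother-to-builder shift concrete, here is a minimal Test Data Builder sketch in Java; the "order" domain, field names, and defaults are all invented purely to illustrate the pattern:

```java
// A minimal Test Data Builder sketch. The order/customer domain, field
// names, and defaults are hypothetical, invented for illustration only.
public class OrderBuilder {
    private String customer = "default customer";
    private int quantity = 1;

    public OrderBuilder withCustomer(String customer) {
        this.customer = customer;
        return this;
    }

    public OrderBuilder withQuantity(int quantity) {
        this.quantity = quantity;
        return this;
    }

    // Stand-in for constructing a real Order object. When the constructor
    // grows a new argument, only this builder changes -- not every test.
    public String build() {
        return customer + " x" + quantity;
    }

    public static void main(String[] args) {
        // A test states only what it cares about; defaults cover the rest.
        System.out.println(new OrderBuilder().withQuantity(3).build());
    }
}
```

The design point is that tests stay decoupled from constructor signatures: adding a dependency later means updating the builder's defaults once, which is exactly the pain the inline-creation approach above runs into.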
We're still figuring this stuff out. All of us.
There may not be experts on developer testing, but there are good ideas around specific contexts. Recognizing that there are smart people with contextually valuable ideas about testing is very liberating. Suddenly you don't need to look for the testing silver-bullet, instead you have various patterns available (some conflicting) that may or may not provide you value based on your working context.
Life would be a lot easier if someone could direct you to the patterns that will work best for you; unfortunately we're not at that level of maturity. It's true that if you pick patterns that don't work well for your context, you definitely won't see positive ROI from testing in the short term. But you will have gained experience that you can use in the future to be more effective.
It's helpful to remember that there aren't testing silver-bullets; that way you won't get led down the wrong path when you see someone recommending 100% code coverage or other drastic and often dogmatic approaches to developer testing.
Today's testing patterns are like beta software. The patterns have been tested internally, but are rarely proven in the wild. As such, the patterns will sometimes work given the right context, and other times they will shit the bed.
I focus pretty heavily on testing and I've definitely seen my fair share of test pain. I once joined a team that spent 75% of their time writing tests and 25% of their time delivering features. No member of the team was happy with the situation, but the business demanded massively unmaintainable Fit tests.
Of course, we didn't start out spending 75% of our time writing Fit tests. As the project grew in size, so did the effort needed to maintain the Fit tests. That kind of problem creeps up on a team. You start by spending 30% of your time writing tests, but before you know it, the tests are an unmaintainable mess. This is where I think Jeff's comments, with regard to writing tests that enable delivery, fall a bit short. Early on, Fit provided positive ROI. However, eventually, Fit's ROI turned negative. Unfortunately, by then the business demanded a Fit test for every feature delivered. We dug ourselves a hole we couldn't get out of.
The problem wasn't the tool. It was how the process relied on the Fit tests. The developers were required to write and maintain their functional tests using Fit, simply because Fit provided a pretty, business readable output. We should have simply created a nice looking output for our NUnit tests instead. Using Fit hurt, because we were doing it wrong.
The current lack of maturity around developer testing makes it hard to make the right choice when picking testing tools and practices. However, the only way to improve is to keep innovating and maturing the current solutions.
If It Hurts, You're Doing It Wrong
Doing it right is hard. The first step is understanding why you use the patterns you've chosen. I've written before about the importance of context. I can explain, in detail, my reasons for every pattern I use while testing. I've found that having motivating factors for each testing pattern choice is critical for ensuring that testing doesn't hurt.
Being pragmatic about testing patterns also helps. Sometimes your favorite testing pattern won't fit your current project. You'll have to let it go and move on. For example, on my current Java project each test method has a descriptive name. I maintain that (like methods and classes) some tests are descriptive enough that a name is superfluous, but since JUnit doesn't allow me to create anonymous test methods I take the path of least resistance. I could write my own Java testing framework and convince the team to use it, but it would probably hurt. The most productive way to test Java applications is with JUnit, and if I did anything else, I'd be doing it wrong.
I can think of countless examples of people doing it wrong and dismissing the value of a contextually effective testing pattern. The biggest example is fragile mocking. If your mocks are constantly, unexpectedly failing, you're doing something wrong. It's likely that your tests suffer from High Implementation Specification. Your tests might be improved by replacing some mocks with stubs. Or, it's possible that your domain model could be written in a superior way that allowed more state based testing. There's no single right answer, because your context determines the best choice.
Another common pain point in testing is duplicate code. People go to great lengths to remove duplication, often at the expense of readability. Setup methods, contexts, and helper methods are all band-aids for larger problems. The result of these band-aids is tests that are painful to maintain. However, there are other options. In the sensationally named entry Duplicate Code in Your Tests I list 3 techniques that I've found to be vastly superior to setup, contexts and helper methods. If those techniques work for you, that's great. If they don't, don't just shove your trash in setup and call it a day. Look for your own testing innovations that the rest of us may benefit from.
If something hurts, don't look for a solution that hurts slightly less, find something that is a joy to work with. And, share it with the rest of us.
Tests Should Make You More Effective
What characterizes something as 'effective' can vary widely based on your context.
Some software must be correct or people die. This software obviously requires thorough testing. Other software systems are large and need to evolve at a fairly rapid pace. Delivering at a rapid pace while adding features almost always requires a fairly comprehensive test suite, to ensure that regression bugs don't slip in.
Conversely, some software is internal and not mission critical. In that case, unhandled exceptions aren't really a big deal and testing is clearly not as high a priority. Other systems are small and rewritten on a fairly consistent basis, thus spending time on thorough testing is likely a waste. If a system is small, short lived, or less-important, a few high level tests are probably all you'll really need.
All of the example environments and each other type of environment share one common trait: You should always look at your context and see what kind of tests and what level of testing will make you more effective.
Tests Are Tools
The tests are really nothing more than a means to an end. You don't need tests for the sake of having tests, you need malleable software, bullet-proof software, internal software, or some other type of software. Testing is simply another tool that you can use to decrease the amount of time it takes to get your job done.
Testing can help you-
- Protect against regression
- Achieve sign-off
- Increase customer interaction
- Document the system
- Refactor confidently
- Ensure the system works correctly
When asking how and what you should test, start by thinking about what the goal of your project is. Once you understand your goal, select the tests that will help you achieve your goal. Different goals will definitely warrant using different testing patterns. If you start using a specific testing pattern and it hurts, you're probably using a pattern you don't need, or you've implemented the pattern incorrectly. Remember, we're all still figuring this out, so there aren't really patterns that are right; just patterns that are right in a given context.
[Thanks to Jack Bolles, Nat Pryce, Mike Mason, Dan Bodart, Carlos Villela, Martin Fowler, and Darren Hobbs for feedback on this entry]
|
OPCFW_CODE
|
Sending and Receiving Data over STM32 USB
The serial port is the most common communication channel between a microcontroller and a computer. However, the different voltage levels and the disappearance of the RS-232 port from modern computers led to the use of USB-TTL converter chips like the CH340, CP2102, etc.
Adding another chip would add cost in both part count and PCB space. Thankfully, the STM32F4 microcontroller provides a neat solution.
What is CDC?
Communications Device Class (CDC) is one of many device classes specified by the USB protocol. Its primary use is for “computer networking devices akin to a network card, providing an interface for transmitting Ethernet or ATM frames onto some physical media. It is also used for modems, ISDN, fax machines, and telephony applications for performing regular voice calls.”
With CDC, a USB device acts like a normal serial port device. You can then send and receive messages serially from/to the computer. Since it’s not a “real” COM port, CDC devices are also called Virtual COM Ports.
Setting Up STM32 USB CDC
For this tutorial, I will be using the STM32F4 Black Board. This board doesn't come with a USB-TTL converter or ST-Link, unlike the STM32 Discovery line. It does come with a miniUSB port, whose D- and D+ pins are wired to pins PA11 and PA12.
The application will control the onboard LEDs by sending characters through CDC. For reference, the onboard LEDs are at PA6 and PA7.
When an “x” is received, D2 lights up. When a “y” is received, D2 turns off. Alternatively, when an “a” is received, D3 lights up and when a “b” is received, D3 turns off.
It’s time to run STM32CubeMX! First up, we set the GPIO pins for the LEDs.
Make both PA6 and PA7 output pins:
To rename them, just go to System Core > GPIO and assign user labels:
Next, we make use of the 8 MHz crystal on the board. Set the PH0 and PH1 pins as RCC pins.
Go to System Core > RCC and select Crystal/Ceramic Resonator as High Speed Clock source.
At this point, you can go to the Clock Configuration tab and change the input frequency to 8 MHz, HSE.
Of course, we still need to enable USB. Change pins PA11 and PA12 to their USB_OTG alternate functions.
Then go to Connectivity > USB_OTG_FS and select Device_Only mode.
Next, go to Middleware > USB_DEVICE:
On the dropdown on the right window, select Communication Device Class:
Finally, go over to Project Configuration, name your project, select a location to save it, and specify the toolchain (MDK-ARM if you’re using Keil). You may also need to change the heap size to 0x800 if you’re encountering problems whenever you’re plugging in the STM32F4 board on your computer.
Click “Generate Code” and open Keil.
Receiving Data from PC to STM32
In main.c, declare a variable that will serve as our buffer for user input. I placed mine between the Private Variables section:
Just before the while(1) loop in int main, turn off the LEDs:
Inside the while(1) loop, we insert the part where we check the serial input and control the LEDs accordingly:
We’re not done yet! Open the file usbd_cdc_if.c. This is where all the implementing functions for CDC are found.
Declare the same variable we used as buffer in main.c:
The keyword extern means it's the same variable as in main.c; this gives user_inp global scope across the two files.
Locate the function CDC_Receive_FS() and add the following lines.
This function is a callback that is triggered automatically whenever a message is received through CDC. That message is saved to the memory pointed to by Buf, and its length is given by Len. All we have to do is copy that message to user_inp using memcpy. Before that, we must make sure that user_inp does not retain its previous value, which is what the first memset() does. The second memset() clears the Buf variable once we have transferred its contents to user_inp.
That’s about it for the code. Build and flash it to the board. As soon as you connect the board to the computer via USB, it will appear as a COM port in Device Manager:
Using a terminal application like Putty, we can now send messages to the STM32F4 Black Board from the computer.
Here's the output of this application:
Sending Data From STM32 to PC
If there’s a function for receiving data, there’s also one for transmitting:
This function accepts two parameters: the data to be sent, and its length.
You can set those parameters like this:
Also, you need to declare the function prototype with the extern keyword:
Finally, you can send messages. Here’s an example that sends a message every second:
I hope you found this tutorial on STM32 CDC USB useful. Happy coding!
|
OPCFW_CODE
|
Flipped Residual Connection
Hello,
Thank you for your amazing work!
From the paper, the last line in the Vim Algorithm is as follows:
$T_l$ : (B, M, D) ← $Linear^T$ ( $y_{forward}$ + $y_{backward}$ ) + $T_{l−1}$
From the code, the backward process is the same as the forward, just with the input sequence flipped. However, the input token sequence (residual) is added to the forward+backward output; should the flipped token sequence be added as a residual as well, such as:
$T_l$ : (B, M, D) ← $Linear^T$ ( $y_{forward}$ + $y_{backward}$ ) + $T_{l−1}$ + $T_{l−1 flipped}$
Edit:
I also noticed each "v2" Mamba block contains out_a and out_b, which is both forward and backward, but in the for loop here, we process two Mamba blocks at the same time, each with its own out_a and out_b, but the input for the second Mamba block is flipped, which is quite confusing. Does that mean the flipped input for the second Mamba block is not related to the Mamba block itself and is more of a training mechanism? Meaning, if the for loop processes one layer at a time, wouldn't a Mamba block do a forward and backward SSM pass?
Thank you!
I also noticed each "v2" Mamba block contains out_a and out_b, which is both forward and backward, but in the for loop here, we process two Mamba blocks at the same time, each with its own out_a and out_b, but the input for the second Mamba block is flipped, which is quite confusing. Does that mean the flipped input for the second Mamba block is not related to the Mamba block itself and is more of a training mechanism? Meaning, if the for loop processes one layer at a time, wouldn't a Mamba block do a forward and backward SSM pass?
From my understanding, the addition of an extra reverse scan in Vision Mamba is a compromise by the author, since an image is not a naturally temporal signal. The term "flipped input for the second Mamba Block" refers to this reverse scanning process, while reusing the same block code.
What's truly confusing is the uncertainty surrounding the input and output of the Vision Mamba block. The descriptions in the code seem to contradict those in the paper. The paper mentions two linear transformation layers in the front part of the block (corresponding to lines 3 and 4 of Algorithm 1), but there is no corresponding implementation in the code. If you understand this clearly, perhaps we can discuss it further.
Lastly, the implementation of fused_add_and_norm by the author also adds complexity to understanding the code.
I also noticed each "v2" Mamba block contains out_a and out_b, which is both forward and backward, but in the for loop here, we process two Mamba blocks at the same time, each with its own out_a and out_b, but the input for the second Mamba block is flipped, which is quite confusing. Does that mean the flipped input for the second Mamba block is not related to the Mamba block itself and is more of a training mechanism? Meaning, if the for loop processes one layer at a time, wouldn't a Mamba block do a forward and backward SSM pass?
I found the same problem, kick me if you find sth. else
@jsrdcht if_bidirectional is set as False and flip_img_sequences_ratio is set as -1, thus, the input is processed normally as forward and backward "v2" here. Hope that helps!
Thanks a lot! I think you are right. However, there is still one issue about the depth of the model.
Why is the default depth 24 for the small/tiny model if it's not for bidirection? I mean, considering that ViT/small sets it to 12.
@jsrdcht The model itself is bidirectional, and can be found here. The if_bidirectional parameter being False just ensures that we do not use this loop. So with if_bidirectional being False and flip_img_sequences_ratio as -1, the input is fed here directly, which goes to v2 (the first link in this comment). The small and tiny do not differ in depth, but differ in the hidden state dimension (3.4. Architecture Details in the paper).
@AliYoussef97 Yeah, I agree with you.
The small and tiny do not differ in depth, but differ in the hidden state dimension (3.4. Architecture Details in the paper).
That's not my point. Based on what you just mentioned, each Vision Mamba block implements both forward and backward modules internally, so can we simply compare each Mamba block to a Transformer block?
In the configuration of vision transformer small, the depth is 12 (possibly even lower), while for vision mamba it is set to 24. Assuming if_bidirectional is True, 24 can be interpreted as having two types of blocks. How should we understand this default parameter of 24 if if_bidirectional is False?
@jsrdcht The linked repo is not the official implementation, thus I am not quite sure why the depth is 12, but by default, the paper states the difference is in the Hidden state dimension. if_bidirectional is related to the ablation study in the paper, and should be False from my understanding.
|
GITHUB_ARCHIVE
|
This manual gives a walkthrough on the CreateView application:
CreateView composes an SDfile that contains both structures and calculation results, using the input SDfile of GenerateMD and a table containing the ordinal numbers of compounds in the SDfile and other data to be viewed. Such a table can be created, for example, by Compr or Jarp. The generated SDfiles can be displayed by the MarvinView application or another SDF viewer.
CreateView can be used as a command line application in the following way:
Prepare the usage of the crview script or batch file as described in Preparing the Usage of JChem Batch Files and Shell Scripts.
You can also call the CreateView Java class directly:
Under Win32 / Java 2 (assuming that JChem is installed in c:\jchem):
java -cp "c:\jchem\lib\jchem.jar;%CLASSPATH%" chemaxon.clustering.CreateView [<options>]
Under Unix / Java 2 (assuming that JChem is installed in /usr/local/jchem):
java -cp "/usr/local/jchem/lib/jchem.jar:$CLASSPATH" \
     chemaxon.clustering.CreateView [<options>]
-h  --help                          this help message
-s  --input-sdf <file>              input SDfile
-t  --input-table <file>            input table (id values and other data)
-o  --output-sdf <file>             output SDfile
-i  --id-name <col>[:<count>]       name of columns storing the id's (indexes).
                                    <count> is the occurrence of the column.
                                    default: the id is the line number.
-d  --data-names <col1>:<col2>...   name of columns to include in the SDfile
-c  --condition "<col><OP><cond>"   condition checked. OP may be: =,<,>,<=,>=
Two input files have to be specified:
SDfile containing structures ( --input-sdf option).
A table containing the ordinal number of compounds from the SDfile and other data to be viewed ( --input-table option).
The structure of the input table should be the following:
headers of columns in column set 1 (1 row)
column set 1 (multiple rows)
headers of columns in column set 2 (1 row)
column set 2 (multiple rows)
The :<count> part of the --id-name option determines which occurrence of the column should be used.
The --data-names <col1>:<col2>... option chooses the columns of the column set, which should be included in the SDfile as data fields. Column names must be separated by a colon.
Using the --condition option, you can restrict which compounds are included in the output. For example, if --condition clid=2 is set, then only those compounds will be included in the SDfile for which the value of the clid column is 2.
|
OPCFW_CODE
|
What is configure terminal command in Cisco?
Cisco Commands Cheat Sheet
|Basic Configuration Commands||
|---|---|
|configure terminal|Logs you into configuration mode|
|interface fastethernet/number|Enters interface configuration mode for the specified Fast Ethernet interface|
|reload|An exec mode command that reboots a Cisco switch or router|
What does the configure terminal command do?
The configure terminal command enters global configuration mode. In this mode, you can create system-wide resources for various system services, configure global behaviors, and enter specialized configuration modes.
How do I enter Cisco config mode?
To enter global configuration mode, enter the configure command. Global configuration From privileged EXEC mode, enter the configure command. To exit to privileged EXEC mode, enter the exit or end command, or press Ctrl-Z. To enter interface configuration mode, enter the interface configuration command.
What is CLI in router configuration?
Most Cisco devices (including routers and switches) use a CLI (Command Line Interface) to configure the network device. The CLI is an interface, based on text. You type in configuration commands and use show commands to get the output from the router or switch.
What is CLI in networking?
A command-line interface (CLI) is a text-based user interface (UI) used to run programs, manage computer files and interact with the computer. Command-line interfaces are also called command-line user interfaces, console user interfaces and character user interfaces.
How do I access Cisco command line?
Access router command line interface using Windows laptop
- Under Category, choose Session.
- Under Connection type, choose Serial.
- In the Serial line, enter the COM port on your laptop that is connected to the console port on your router, using the console cable.
- Click Open.
Why should you use the CLI?
To summarize, the main advantages of a command-line interface include: If you know the commands, a CLI can be a lot faster and efficient than any other type of interface. It can also handle repetitive tasks easily. A CLI requires less memory to use in comparison to other interfaces.
What is a configurable port?
Configurable ports are initially defined during STA installation, but can be changed using the Port Change Utility. The utility automatically verifies that the new ports are not already in use on the network and updates all appropriate processes on the STA server to use the new ports.
Do I need to configure a switch?
Most switches used by small businesses and home offices typically require no configuration — they’re “plug and play.” However, it’s important to remember that plug and play doesn’t always work. In addition, these switches certainly don’t offer any troubleshooting, logging, security, or manageability.
What is Cisco CLI?
The Cisco IOS command-line interface (CLI) is the primary user interface used for configuring, monitoring, and maintaining Cisco devices. This user interface allows you to directly and simply execute Cisco IOS commands, whether using a router console or terminal, or using remote access methods.
Is terminal A CLI?
Terminal, or the command line interface (CLI), is considered by many to be the Holy Grail of computer management. At one time the CLI was the only way to accomplish anything on a computer; then, the CLI gave way to the graphical user interface (GUI) as the popularity of PCs increased.
How do we access the CLI?
Generally, you can access the CLI through a direct connection to the console port, or remotely using Telnet or SSH command. The simplest way to enter the CLI interface is to build a direct serial connection to the switch’s console port, which is demonstrated below.
How to configure interface in Linux terminal?
For example, to configure an interface, use the config terminal command. Once you are in configuration mode, enter the interface command. When you are in the interface submode, you can query the available commands.
How do I enter global configuration mode in Linux terminal?
In the preceding example, pressing the Enter (or Return) key is equivalent to typing in the word "terminal." Enter the configure terminal command to enter global configuration mode: Router# configure terminal Enter configuration commands, one per line. End with CNTL/Z. Router (config)#
How do I set windows terminal as the default terminal application?
Open Windows Terminal and go to the Settings UI window. Select Startup and choose “Windows Terminal” as the Default terminal application setting. This feature is only available in Windows Terminal Preview.
How does the CLI recognize a specific command in Linux?
The CLI will recognize a command once you have entered enough characters to make the command unique. For example, if you enter conf in privileged EXEC mode, the CLI will be able to associate your entry with the configure command, because only the configure command begins with conf.
|
OPCFW_CODE
|
Web application (API and Front-end) - routes design
I suppose this type of topic always exists, but I'd like to have specific opinions for my case.
For the past month or two I've been thinking about making a listing web application for my daily life (shopping, dues, etc.)
I started out by defining my object model like this (very simple design model):
Models image
So, I decided to create a NodeJS API for the back-end and Angular 7 for the front-end. It's not a technical problem for me to develop the application and the API, but my problem is with the design, and particularly with the route design.
My first suggestion for routes API is :
User :
/users
/users/:id
List :
/lists
/lists/:id
Element :
/elements
/elements/:id
Technically it's OK, but I'm not sure it's good practice.
As User contains List and List contains Element, wouldn't it be better to have routes like this:
/users/:id
/users/:id/list
/users/:id/list/:id
/users/:id/list/:id/element
/users/:id/list/:id/element/:id
Thanks for your answers or suggestions!
PS : If you have any web sites / video / topics ... to suggests, do not hesitate.
I'd say you got it right in the first place; the second approach is messy, as you can get huge routes and you're sending a lot of unnecessary data. Why do you need the user ID to get an element? An element is an entity by itself, and it will probably grow; you may need to get related elements, filter them... It's better to just have /elements.
What you can do is find simple relations, like:
/users/:id/lists
/lists/:id/elements
I'd recommend reading building apis you won't hate :)
Thank you, I like this approach! I understand your reasoning and it's pretty good, so I'll try it that way.
Firstly, you are on the right path in defining routes in Angular; at the same time, you should use the lazy-loading concept of routing.
I would recommend the Pluralsight course by Deborah Kurata. I'm not trying to promote or advertise anything, but for your current situation that course would be the right guidance. It would provide you all the necessary things that you need to build enterprise-ready apps.
Alternatively, Core UI Angular provides some good designs which are already implemented with Angular routes; lazy loading and other Angular routing features are implemented, and all you need to do is understand it.
Hope this helps.
Thank you, I think I'll watch the video tutorials; that looks pretty good. And Core UI Angular is a very good tool, but my application is very simple and the UI won't evolve much. But why not for another app!
Principle
as short as possible
easy to read
user-friendly input when the user enters the URL
Examples
User list
/users
User detail
/user/:id
Add user
/user/new
User's functional page
/user/:id/tel
|
STACK_EXCHANGE
|
Selenium WebDriver with Java - login authentication failing; passing correct credentials
I am trying hands-on for login page
Passing valid credentials using automation code, still system is giving error:
Login Failed.
Language: Selenium webdriver with Java on Eclipse.
I have tried standard way:
WebElement Login = driver.findElement(By.className("flex-signup"));
Login.click();
WebElement EmailAdd = driver.findElement(By.id("emailAddress"));
WebElement EmailAdd = Driver.findElement(By.id("****"));
EmailAdd.sendKeys("************");
WebElement Passwd = driver.findElement(By.id("****"));
Passwd.sendKeys("*******");
I also used an alternate way, JavascriptExecutor, but this is also not working on the application.
JavascriptExecutor jse = (JavascriptExecutor)driver;
jse.executeScript("arguments[0].value='*************';" ,****);
jse.executeScript("document.getElementById('****').value='**********';");
WebElement BLogin = driver.findElement(By.className("ladda-label"));
jse.executeScript("arguments[0].click();", BLogin);
The application is in JavaScript, so I used JavascriptExecutor as well, but got the same output.
What is the reason for "Login Failed."? Where are the error logs?
Try to add explicit wait in your code i.e. enter the text in the field only when it is visible. Give a try to following code:
WebDriverWait wait=new WebDriverWait(driver, 20);
WebElement Login = driver.findElement(By.className("flex-signup"));
Login.click();
//WebElement EmailAdd = driver.findElement(By.id("emailAddress"));
//WebElement EmailAdd = Driver.findElement(By.id("****"));
WebElement EmailAdd=wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("****")));
EmailAdd.sendKeys("************");
WebElement Passwd = wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("****")));
Passwd.sendKeys("*******");
and then click on Login button
Try for different browsers like Firefox, chrome and opera.
WebDriver firefox=new FirefoxDriver() //for firefox
WebDriver chrome=new ChromeDriver();//for chrome.
Try and see.
I'm running on Chrome, and I already tried the Gecko and IE drivers.
I don't think it is related to the browser here, because in Chrome the code runs to the end; only on clicking Login, the result is: login failed.
Correct me if I am wrong.
I am new to coding world.
What happens after Login.click is performed?
If the login form requires some time to render (even 30 milliseconds may be important), then you should put in a wait! Otherwise the driver will instantly try to send the credentials (even if the login form isn't ready for input yet), so some of the first characters may be lost.
P.S. i know i could make some major grammar mistakes, sorry for that.
Thank you, Daniil. Yeah, I used the explicit wait, but that is also not helping. Anyway, thank you. I am no longer able to access the website link, so I can't try any other methods.
I think you're forgetting a step in the login process.
Below is the way I think most login processes look.
Find ID Box and send Keys
Find PW Box and send Keys
Find Login Button and Click / send "Enter Key" with PW send Keys.
Yours looks like this.
Click Login(?)
Find WebElement EmailAdd
Find WebElement EmailAdd (x2) (Using different driver call)
Send Keys to EmailAdd
Find WebElement Passwd
Send Keys to Passwd
Thank you for the reply.
The steps are correct. I have to first click on the Login option and then enter the credentials; that's why the first line of code is for Login. Yeah, true, I was using two different driver calls for the email web element; I removed one now, but even after rectifying that the problem is not solved. Anyway, thank you.
|
STACK_EXCHANGE
|
For guidance on the Application Document Wizard in previous releases, see:
- DCO 6.2 and 6.3 - Using the Application Document Wizard
- DCO 6.1 - Using the Application Document Wizard
This article describes how to use fields in Step 1 of the wizard when an Application Document template is selected. By completing the fields in Step 1 of the Application Document Wizard, you can control which application details are included in your generated document.
Before you attempt to follow the guidance in this article, you need to set up your system to create documents. Follow the Getting Started instructions outlined in Using the Application Document Wizard.
For guidance on completing other steps in the Application Document Wizard, refer to the Related content section of this article.
Use the fields in the Options section to define the basic structure and presentation of your document.
- Select a document type from the Detail level list that best matches the purpose of your Application Document:
- High Level Overview – Presents a business summary of case types, flows, specifications, and requirements.
- Full Application Documentation – Adds to the High Level Overview content by including additional chapters with detailed rule information.
- Property References – Limits output to only property rules referenced by case types and Entity Relationship Diagrams.
- Refine the document presentation based on your target audience:
- Select a style from the Section headings will display list. A Short Description is purpose-driven, while a Rule Name is implementation-specific.
- Check the Suppress class and ruleset references box to hide technical details about rules. This keeps content focused for business users.
- Check the Create links to rule forms within document box to include icons that open rules directly in the application. This is a great way to engage developers and facilitate interactive reviews.
Chapter and section settings
Use the buttons to build a set of chapters and sections. Each button launches a modal with a list of items you can include in the generated document. You can make changes by checking or clearing the check box next to each item.
- The default settings in each modal are determined by the option you select in the Detail level list.
- The list of items in the Case Types and Supporting Specification Types section is restricted by the selected ruleset names in the Rulesets section.
- When you check the Flows box, all flows referenced by a case type (excluding pyStartCase) are added to the document.
- Specifications attachments are automatically included in the document when you check the Specifications box and each attachment uses the Include in doc? option.
Previewing your document
Refresh the Document Table of Contents section to preview your table of contents. You can try out different combinations of options to see how they impact the document structure without generating a physical Word document.
This section is read-only and must be manually refreshed. Click the expand icon next to a chapter to see the contained sections.
Once you verify all of your selected options, click Step 2 to move on to the next step in the wizard.
|
OPCFW_CODE
|
Novel. I think by James P Hogan. Has humans studying flying alien wildlife. These animals are passing on learned behaviour in their genes
I read this book about 20-25 years ago. It was a long story with lots of other things I have forgotten.
The part that really stuck in my mind involved human scientists studying flying wildlife on an alien planet. These animals could pass on learned behaviour to their offspring via their genes. The next generation could do things their parents had been taught, or had worked out how to do; an example might be hunting techniques. There was no teaching of the young by their parents.
I read this book many years before I watched a BBC documentary that introduced me to epigenetics. I believe it was written before there was any such theory.
Maybe this is a recurring theme in Hogan's works
I remember that a similar feature was also discussed in the book Entoverse (the fourth installment of the Giants series).
Unluckily my memory is a bit fuzzy on it, but at some point, two characters are talking about some of the new discoveries made possible by their contacts with the Ganymeans and talk about an alien creature (which, if I recall correctly, was also a flying creature) that can pass through the genes all of the notions it learnt in life.
I also remember one of the characters commenting negatively on this, along the lines that if mankind had such a feature, it would fall hopelessly into totalitarianism.
Of course, in that book the creature was only talked about (maybe one of the characters had said he was studying one of them in his laboratory); it never appeared "live".
Yes it is Entoverse. Thank you very much. The passage I remembered was at the beginning of chapter 9. I don't have a copy in a form I can easily copy and paste it from or I would add a quote to your answer. Many thanks.
You might be thinking of Hogan's Code of the Lifemaker, which features humans finding a society of robots that has evolved on Titan after the crash of an alien spaceship; the creatures that have evolved include medieval-level humanoids and various "animals" and "plants" that are made of metal.
Here's the prologue to that novel which elaborates on that evolution
Although the robots' local memories were becoming larger than those
contained in their earlier ancestors, the operating programs were
growing in size and complexity too, with the result that an acceptor
still didn't possess enough free space to hold an entire "How to Make
a Fred" subfile. The donor's half, therefore, could be accommodated
only by overwriting some of the code already residing in the acceptor.
How this was accomplished depended on the responses of the programs
carried inside the various robot types.
In some cases the incoming code from the donor was allowed to
overwrite entire program modules inside the acceptor, with the total
loss to the acceptor of the functions which those modules controlled.
This was usually fatal, and no descendants came into being to repeat
such mistakes. The successful alternative was to create space by
trimming nonessential code from many modules, which tended to leave
the acceptor robot with some degradation in performance—usually
manifesting itself as a reduction in agility, dexterity, and defensive
abilities—but at least still functioning. The sacrifice was only
temporary since the acceptor robot would be reprogramed with
replacement modules when it delivered its genetic package at the
factory.
But in return for these complications and superficial penalties came
the immense benefit that the subfiles presented at the factories were
complete ones—suitable for dispatch to the Schedulers without delay
and the attendant risk of being deleted by overworked Supervisors. The
new method thus solved the reliability problem that had plagued the
formerly universal "asexual" mode of reproduction.
Because of the way that code is passed around, evolution here may have some epigenetic features
Thank you. I definitely did read Code of the Lifemaker at about the same time. It is possible this is what I remember, but I think it was a different book involving actual biological flying alien animals.
In sequels, the creators of the robots were revealed to be an avian species.
Another possibility is Legacy by Greg Bear, which takes place on a planet called "Lamarckia" - a planet where a very small number of very large organisms each control large portions of the planet, creating apparently independent flying, crawling, swimming (etc.) units to serve the purposes of the dispersed organisms. If any of the units experiences something that suggests a better way of doing things, later versions of that unit will use that better way. Each of the "ecoi" has flying units, which bite human visitors to sample our genetic information.
Review of Legacy here:
A newly discovered planet, Lamarckia, is home to marvelous life-forms
called "ecoi"; each single "ecos" is composed of bizarre plant/animal
hybrids known as "scions" analogous to single cells, all of them
managed, so it is hypothesized, by a "seed-mistress." But then radical
fanatic Jaime Carr Lenk opens an illegal Way gate and disappears onto
Lamarckia with 4000 followers. The Hexamon dispatches agent Olmy to
investigate. Arriving 37 years later, thanks to the uncertainties of
Way transit, Olmy discovers a brutal war in progress between Lenk
loyalists and rebellious Brionists. In order to better inspect the
ecoi, Olmy joins the crew of an oceangoing research vessel. They come
upon a dead ecos, confirm the seed-mistress hypothesis, and discover
the skeletal remains of quasihuman scions.
I will check this one out, thanks again.
|
STACK_EXCHANGE
|
#!/vendor/bin/sh
# $1 = trigger cause (from kernel->wlbtd)
# $2 = 16-bit hex reason code
dir="`cat /sys/module/scsc_log_collection/parameters/collection_target_directory`"
max_logs="`getprop vendor.wlbtd.tar_files_per_trigger`"
wlbtd_version="`getprop vendor.wlbtd.version`"
DATE_TAG="`date +%Y_%m_%d__%H_%M_%S`"
moredumpdir=/data/vendor/log/wifi
memdump_file=/sys/wifi/memdump
base_dir=`cat /sys/module/scsc_mx/parameters/base_dir`
fw_var=`cat /sys/module/scsc_mx/parameters/firmware_variant`
fw_suffix=`cat /sys/module/scsc_mx/parameters/firmware_hw_ver`
xml_dir=$base_dir/$fw_var$fw_suffix/debug/hardware/moredump
log_strings=$base_dir/$fw_var$fw_suffix/debug/common/log-strings.bin
take_moredump()
{
memdump_file_val=1
if [ -f ${memdump_file} ]; then
memdump_file_val=`cat ${memdump_file}`
echo "${memdump_file} : ${memdump_file_val}" >> ${status_file} 2>&1
fi
if [[ ${memdump_file_val} != "0" ]]; then
# Only keep one .cmm, old ones are in tar
rm -f ${moredumpdir}/*.cmm
start=`date +%s`
moredump.bin ${moredumpdir}/moredump_${DATE_TAG}.cmm -xml_path ${xml_dir} -log_strings ${log_strings} 2>/dev/null >/dev/null
script_status=$?
chmod 755 ${moredumpdir}/moredump_${DATE_TAG}.cmm
end=`date +%s`
echo "moredump generated in ${moredumpdir} in $((end-start)) seconds" >> ${status_file} 2>&1
cp -a ${moredumpdir}/moredump_${DATE_TAG}.cmm ${logdir}
echo "copied ${moredumpdir}/moredump_${DATE_TAG}.cmm to ${logdir}" >> ${status_file} 2>&1
sync
fi
}
logcat_dmesg_logs()
{
SAMLOG=/sys/kernel/debug/scsc/ring0/samlog
MXDECO=/vendor/bin/mxdecoder
# dump dmesg
dmesg > ${logdir}/dmesg_${DATE_TAG}.log 2>&1
echo "generated ${logdir}/dmesg_${DATE_TAG}.log" >> ${status_file} 2>&1
# dump logring
if [ -f ${SAMLOG} ]; then
cat /proc/driver/mxman_info/mx_release > ${logdir}/mx.dump_${DATE_TAG}.log 2>&1
if [ ! -e ${MXDECO} ]
then
echo "No mxdecoder found...dumping RAW logring." >> ${logdir}/mx.dump_${DATE_TAG}.log
cat ${SAMLOG} >> ${logdir}/mx.dump_${DATE_TAG}.log 2>&1
else
cat ${SAMLOG} | $MXDECO >> ${logdir}/mx.dump_${DATE_TAG}.log 2>&1
fi
echo "generated ${logdir}/mx.dump_${DATE_TAG}.log" >> ${status_file} 2>&1
fi
}
remove_old_tar_files()
{
# if vendor.wlbtd.tar_files_per_trigger property is not set, hardcode value 5
# otherwise we will not delete any old files and keep filling the storage with tar files
if [ -z "${max_logs}" ]; then
max_logs=5
fi
cd ${dir}
if [ ${max_logs} -eq 0 ]; then
# only keep the last
count="`ls -tr ${tarext} | wc -l`"
while [ ${count} -gt 1 ]
do
oldest="`ls -tr ${tarext} | head -n 1`"
echo "removed ${oldest}" >> ${status_file} 2>&1
rm -f ${oldest} > /dev/null 2>&1
count="`ls -tr ${tarext} | wc -l`"
done
else
# remove old tar files of each type
list_of_triggers="scsc_log_dumpstate_wlbt_off \
scsc_log_user scsc_log_fw \
scsc_log_dumpstate scsc_log_host_wlan \
scsc_log_host_bt scsc_log_host_common \
scsc_log_fw_panic"
for i in ${list_of_triggers}
do
count="`ls ${i}${tarext} | wc -l`"
while [ ${count} -gt ${max_logs} ]
do
oldest="`ls ${i}${tarext} | head -n 1`"
echo "removed ${oldest}" >> ${status_file} 2>&1
rm -f ${oldest} > /dev/null 2>&1
count="`ls ${i}${tarext} | wc -l`"
done
done
fi
}
#----------------------------------------------------------------------------------------
trigger=$1
code=$2
# create .tmp hidden dir
tarfile=${trigger}_${DATE_TAG}_${code}
logdir=${dir}/.tmp-${tarfile}/${tarfile}
# wlbt-off handling
mx_status="`cat /proc/driver/mxman_ctrl0/mx_status 2>/dev/null`"
if [ "x${trigger}" == "xscsc_log_dumpstate" ] && [ ! -f /proc/driver/mxman_ctrl0/mx_status ]; then
tarfile=${trigger}_"wlbt_off"_${DATE_TAG}_${code}
logdir=${dir}/.tmp-${tarfile}/${tarfile}
fi
# make sure the dir exists
mkdir -p ${logdir}
# create status file first so subsequent steps can log to it
status_file=${logdir}/status_${DATE_TAG}.log
touch ${status_file}
echo "created ${status_file}" > ${status_file} 2>&1
echo "created ${logdir}" >> ${status_file} 2>&1
cd ${dir}
# exit if no .sbl file was found
if [ ! -f .tmp-${trigger}.sbl ]; then
echo ".tmp-${trigger}.sbl not found. exiting." >> ${status_file} 2>&1
log -t "WLBTD" -p e ".tmp-${trigger}.sbl not found. exiting."
exit 0
fi
# move .sbl file into the log directory
mv .tmp-${trigger}.sbl ${logdir}/${trigger}_${DATE_TAG}_${code}.sbl 2>&1
echo "moved .tmp-${trigger}.sbl to ${logdir}/${trigger}_${DATE_TAG}_${code}.sbl" >> ${status_file} 2>&1
cd ${logdir}
echo "working dir: `pwd`" >> ${status_file} 2>&1
logcat_dmesg_logs
# take moredump in case of scsc_log_fw_panic
if [ "x${trigger}" == "xscsc_log_fw_panic" ]; then
if [ -f /vendor/bin/moredump.bin ]; then
take_moredump
chmod 0664 ${logdir}/dmesg_${DATE_TAG}.log
chmod 0664 ${logdir}/mx.dump_${DATE_TAG}.log
# Log files in log output directory, also in tarball
# cp -a ${logdir}/dmesg_${DATE_TAG}.log ${moredumpdir}
# cp -a ${logdir}/mx.dump_${DATE_TAG}.log ${moredumpdir}
else
echo "/vendor/bin/moredump.bin not found. No moredump generated." >> ${status_file} 2>&1
log -t "WLBTD" -p e "/vendor/bin/moredump.bin not found. No moredump generated."
fi
fi
# copy log-strings.bin
cp ${log_strings} ${logdir} 2>&1
echo "copied ${log_strings} ${logdir}" >> ${status_file} 2>&1
echo "getprop vendor.wlbtd.tar_files_per_trigger : ${max_logs}" >> ${status_file} 2>&1
echo "getprop vendor.wlbtd.version : ${wlbtd_version}" >> ${status_file} 2>&1
echo "ro.build.date : `getprop ro.build.date`" >> ${status_file} 2>&1
echo "ro.build.fingerprint : `getprop ro.build.fingerprint`" >> ${status_file} 2>&1
script_status=0
if [ -f /vendor/bin/gunzip ]; then
cd .. # very important to change to correct directory
tar -czf ./${tarfile}.tar.gz ${tarfile} > /dev/null 2>&1
chmod 0664 ./${tarfile}.tar.gz
mv ${tarfile}.tar.gz ${dir}
log -t "WLBTD" "${dir}/${tarfile}.tar.gz generated"
# tar file pattern used by remove_old_tar_files
tarext="*.tar.gz"
else
cd .. # very important to change to correct directory
tar -cf ./${tarfile}.tar ${tarfile} > /dev/null 2>&1
chmod 0664 ./${tarfile}.tar
mv ${tarfile}.tar ${dir}
log -t "WLBTD" "${dir}/${tarfile}.tar generated"
# tar file pattern used by remove_old_tar_files
tarext="*.tar"
fi
sync
# clean-up
rm -rf ${dir}/.tmp-${tarfile} >/dev/null 2>&1
remove_old_tar_files
sync
exit ${script_status}
|
STACK_EDU
|
English idiom equivalent to the German Gretchenfrage "wie hast du's mit...?"
In German, a Gretchenfrage is:
Compound of Gretchen (diminutive of the given name Margarete) and "Frage" "question". In reference to Goethe's "Faust" (published 1808), where the character of Gretchen asks the protagonist, who is secretly in league with the devil, ''wie hast du's mit der Religion?'' "what is your take on religion?".
[...]
Any question going directly to the core of an issue.
A crucial question that usually has a difficult or unpleasant answer.
So you could say, for example: "Wie hast/hältst du's mit dem Kapitalismus?" "What's your take on capitalism?" (Often used when referring to a somewhat controversial topic.)
Is there a similar idiom in English?
What's wrong with "What's your take"? It's perfectly idiomatic.
@HotLicks yes, but "what's your take" doesn't imply that the question is honing in on the core of a difficult/unpleasant/controversial issue.
'hone in on the core': there is an idiom for that too. What's your take is very apropos.
Well yes, but I would want to use it as the title of an article about a somewhat controversial topic. "Wie hast du's mit X?" Currently, the best I could come up with was "Let's talk about X", but I was wondering whether there is really no analog to the Gretchenfrage.
are we to function as 'proofreaders'?
It's unclear what you want. There are various versions of "cut to the core", "get to the heart of the matter", et al.
I want to know whether there is an English idiom, equivalent to the German “wie hast du's mit…?” – including its implied subtleties. If there is no such thing, that's okay, too.
One my pet hates is people using hone in when they mean home in. New Yorker
@MichaelHarvey A missile homes in (not hones in) on a target. Hone means "to sharpen." Usage guides recognize hone in on as an acceptable alternative to home in on! To each their cup o' tea!
Would the downvoter care to explain? (hopefully not because of the honing it!)
Some usage guides permit 'hone in', some don't. I am allowed to consider it an error.
Where do you stand on ___ may be close, in English. How do existing translations handle it? "Take" is probably a little too informal.
Do you want an equivalent for Gretchenfrage or for “wie hast du's mit…?”, or would either work for you? Is the sentence about capitalism the context you're trying to fit?
This is a very famous expression and I'd wager a lot that there is no direct English expression filling all the slots it fills in German:
a seemingly naive person (a girl) asking a seemingly harmless, everyday question that nevertheless cuts to the heart of a very important issue and will expose vital information about the person asked, and is therefore unwelcome, difficult to answer truthfully, or likely to elicit an answer that might not fly with the asker…
It is not only about the information itself but also about the interactive dynamic between asker and asked.
The closest expression that comes to my mind that ticks most boxes for:
Gewissensfrage – Gretchenfrage – heikle Frage (von deren Beantwortung alles abhängt)
entscheidende Frage – Frage, an der alles hängt – Gretchenfrage – Hauptfrage – Kardinalfrage – Kernfrage – Schlüsselfrage – wichtig(st)e Frage
(from OpenThesaurus)
(Translations: Question of conscience – crucial question – tricky question (on whose answer everything depends)
decisive question - question on which everything depends – crucial question – main question – cardinal question – core question – key question – (most) important question)
would be:
this is a litmus test.
The 'idiom' in German exists solely because of Goethe's usage in Faust; 'Wie hältst du's mit [..]' would otherwise seem awkward and quaint. It's more of a quotation evoking the literary original.
So what you need is an English phrase that can be altered to contain a question, and that still resonates with some well-known work of literature or other medium.
Some proposals (I noted a clue to the source in parentheses, but if you don't know them they are unfit anyway):
To [X] or not to [X], that is the question... (Shakespeare)
Do you take [X] to be your lawful [Y]? (Wedding vows)
Won't somebody please think of the [X]? (Simpsons)
Don't ask what [X] can do for you, ask what you can do for [X]. (Kennedy)
You have to ask yourself one question: Do I feel [X]? - Well do you, punk? (Dirty Harry)
You want [x]? You can't handle [X]! (A few good men)
Mirror mirror on the wall, who's the [X] of them all? (Grimm)
Et tu [X]? (Caesar)
The German usage of Gretchenfrage may preserve something of its original weight in Goethe’s Faust, say, concerning some fundamental question of human existence or society, or it may have become so semantically lightened that it may designate any question a writer wishes to describe as difficult or important:
Gretchenfrage: Lassen sich effiziente Marketingkampagnen tatsächlich planen?
Vital question: Can efficient marketing campaigns actually be planned? — Harald Reil, “Erfolgskontrolle”, GENIOS WirtschaftsWissen Nr. 04, 10.04.2013.
While it might be amusing to substitute this question for Goethe’s original, this quite mercantile version does point out the difficulty of your question. An English equivalent needs to determine which of the many aspects of the German expression are at play in a given context. For instance, I chose vital to suggest the importance the writer wishes to convey, even though the question here is merely a rhetorical one.
If a Gretchenfrage is one that penetrates to the heart of the matter, then it can be called a crucial question:
Yet the crucial question to which this general conception gives rise is the epistemic one raised by Berkeley and Kant: if all that is ever given in experience is ideas, and reality is not an idea or a relation among ideas, how can we ever know whether our ideas correspond with reality? — G. Soffer, Husserl and the Question of Relativism, 2012, 60.
Here is the crucial question. From a practical point of view, society must have some way of protecting itself and its members against abuses committed in the name of the free exercise of religion. And it is the function of the state to provide this protection. — John Courtney Murray, J. Leon Hooper, Religious Liberty: Catholic Struggles with Pluralism, 1993, 151.
Or perhaps pivotal:
Yet we might also say that some ancient Greek philosophers already asked the pivotal question. In The Laws, Plato seeks to demonstrate the existence of the gods with the argument from design: “Why, to begin with, think of the earth, and sun, and planets, and everything! …” — William Sims Bainbridge, Virtual Sociocultural Convergence, 2016, 233.
In informal contexts, this expression and its inflated permutations are quite common:
For a lifetime, my father, when presented with a deep question, probably from a child's mind, would respond, “well, that's a 64 dollar question”. — Hugh Mann, Spiritfarmer Aftermath, 2013.
Like the German expression, the $64 question preserves a cultural reference, though hardly a belletristic one: this amount was the highest prize awarded on a popular radio quiz show in the 1940s; its revival on modern television raised the amount to $64,000, and virtually any large amount of money can substitute.
Since there is no single German Gretchenfrage beyond Goethe’s original, there can be no single English equivalent. Any adjective of importance, urgency, complexity, or incisiveness can modify question, depending on which quality a writer has in mind.
There is a very similar idiom in English: (someone's) take on (something)
Someone's perspective, opinion, or idea(s) about something.
As in:
Mr. Huxley, what's your take on the recent announcement from the White
House? My take on the problem is that we need to devote more of our
resources to expanding our marketing campaign.
TFD
Changed my answer... if it's no help I shall delete it and leave this for others.
I think the OP answered their own question and you have simply documented it. No need to delete, in my opinion - just needs a green tick.
According to James Main Dixon, English speakers are often fond of asking,
What's the verdict? (Yes or no?)
What is your verdict on this matter? (This or that?)
I would like to hear your verdict regarding capitalism. (Do you find it agreeable, or not? And on what do you base your judgment?)
From Etymonline:
...alteration of Middle English verdit (c. 1300), "a jury's decision in a case," from Anglo-French verdit (Old French voirdit) "sworn testimony, affidavit; judgment, written record of a verdict," literally "a true saying or report," from ver, veir "true" (from PIE root *were-o- "true, trustworthy") + dit, past participle of dire "to say" (from PIE root *deik- "to show," also "pronounce solemnly"). Spelling influenced by Medieval Latin verdictum "a verdict."
|
STACK_EXCHANGE
|
Configuration profile enforcement
Configuration profiles are the primary way that an MDM solution delivers and manages policies and restrictions on managed devices. If organizations need to configure a large number of devices—or to provide lots of custom email settings, network settings, or certificates to a large number of devices—configuration profiles are a safe and secure way to do it.
A configuration profile is an XML file (ending in .mobileconfig) that consists of payloads that load settings and authorization information onto Apple devices. Configuration profiles automate the configuration of settings, accounts, restrictions, and credentials. These files can be created by an MDM solution or Apple Configurator for Mac, or they can be created manually. Before organizations send a configuration profile to an Apple device, they must enroll the device in the MDM solution using an enrollment profile.
An enrollment profile is a configuration profile with an MDM payload that enrolls the device in the MDM solution specified for that device. This allows the MDM solution to send commands and configuration profiles to the device and to query certain aspects of the device. When a user removes an enrollment profile, all configuration profiles, their settings, and managed apps based on that enrollment profile are removed with it. There can be only one enrollment profile on a device at a time.
Configuration profile settings
A configuration profile contains a number of settings in specific payloads that can be specified, including (but not limited to):
Passcode and password policies
Restrictions on device features (for example, disabling the camera)
Network and VPN settings
Microsoft Exchange settings
LDAP directory service settings
CalDAV calendar service settings
Credentials and keys
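Because a configuration profile is just an Apple property list, the payload structure above can be illustrated programmatically. The following is only a minimal sketch, not an official template: the identifiers, display names, and the specific passcode keys (`forcePIN`, `minLength`) are illustrative assumptions, and real profiles would normally be produced by an MDM solution or Apple Configurator. It uses Python's standard `plistlib` to build a profile containing a single passcode-policy payload:

```python
import plistlib
import uuid


def make_passcode_profile() -> bytes:
    """Build a minimal .mobileconfig plist with one passcode-policy payload.

    All identifiers and names below are hypothetical examples.
    """
    passcode_payload = {
        # Payload type string for a passcode/password policy payload
        "PayloadType": "com.apple.mobiledevice.passwordpolicy",
        "PayloadVersion": 1,
        "PayloadIdentifier": "com.example.profile.passcode",
        "PayloadUUID": str(uuid.uuid4()).upper(),
        "PayloadDisplayName": "Passcode Policy",
        # Example settings: require a passcode of at least 6 characters
        "forcePIN": True,
        "minLength": 6,
    }
    profile = {
        # The top-level wrapper payload is always of type "Configuration"
        "PayloadType": "Configuration",
        "PayloadVersion": 1,
        "PayloadIdentifier": "com.example.profile",
        "PayloadUUID": str(uuid.uuid4()).upper(),
        "PayloadDisplayName": "Example Device Policy",
        # One dictionary per payload carried by the profile
        "PayloadContent": [passcode_payload],
    }
    return plistlib.dumps(profile)


if __name__ == "__main__":
    # Writing the bytes to a file with a .mobileconfig extension yields
    # an installable (unsigned) profile.
    print(make_passcode_profile().decode())
```

The nesting mirrors the description above: one outer "Configuration" dictionary, with each setting category (passcode policy, VPN, Exchange, and so on) appearing as its own dictionary inside `PayloadContent`.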
Profile signing and encryption
Configuration profiles can be signed, to validate their origin, and encrypted, to help ensure their integrity and protect their contents. Configuration profiles for iOS and iPadOS are encrypted using the Cryptographic Message Syntax (CMS) specified in RFC 5652, supporting 3DES and AES128.
Users can install configuration profiles directly on their devices using Apple Configurator for Mac, or they can be downloaded using Safari, sent attached to a mail message, transferred using AirDrop or the Files app in iOS and iPadOS, or sent over the air using a mobile device management (MDM) solution. When a user sets up a device in Apple School Manager or Apple Business Manager, the device downloads and installs a profile for MDM enrollment. For information on how to remove profiles, see Intro to mobile device management in Apple Platform Deployment.
Note: On supervised devices, configuration profiles can also be locked to a device. This is designed to prevent their removal or to allow removal only with a passcode. Because many organizations own their iOS and iPadOS devices, configuration profiles that bind a device to an MDM solution can be removed—but doing so also removes all managed configuration information, data, and apps.
|
OPCFW_CODE
|
Imagine you take your car to the mechanic because it's having some engine problems. The mechanic gives you two options.
Option A: Thorough fix, $300, the fix won't be done for two days, but it shouldn't require rework.
Option B: Quick & dirty fix, $100, the fix will be done later today, but it will fail and you'll have to bring the car back in a few months to do Option A anyway.
Which one is the right choice? The easy answer is "of course, you do option A". But what if you don't have $300 right now and you can only afford $100? What if you have to have the car back tonight for some reason, and can't afford to be without the car tomorrow? What if you do have $300, but you have other things to spend money on that you deem more important?
You make your car repair decision based on the best information you have. The mechanic gives you the options and the information to make the decision, but lets you make the decision, because you're the customer.
Similarly, it's not your job to decide your company's business priorities. If management decides, for example, that getting the project done by a given date is the priority, even if it means cutting corners that they'll pay for later, that's OK. You might not agree with it, but that's OK because it's not your decision to make. Your job is to present options that management can choose from, and let them make the decision.
If you're not comfortable doing the work that they're asking you to do, then find another job where you can do the work you want to do.
However, no matter where you go to work, you're going to run into situations where the people paying the bills and paying your salary are going to make choices that you don't like. Maybe they decide that the project X you're working on isn't as important to the company as project Y, and they scrap project X and move you to project Y.
On the plus side, if your boss decides that project Y is more important, but that proves to be a bad decision, it's not your fault. Not overreaching your level of responsibility can be very freeing.
You're going to run into this over and over again in your career, no matter what your job is. I suggest getting used to it, or you're likely to be pretty unhappy in the long term.
P.S. You say "The only thing they care about is Business." That's right. That's their job. Nobody is paying you out of the goodness of their heart. They're paying you to do a job that makes money or saves time for the business. If the work you do doesn't do that, then there's no need for you.
|
OPCFW_CODE
|
using ConferenceApp.Models;
using System;
using System.Collections.Generic;
using System.Text;
namespace ConferenceApp.Data
{
public static class MockDbContext
{
public static IList<Conference> GetAllConferences()
{
return new List<Conference>()
{
new Conference(){ Name="Conference O1", Location ="Bulgaria", Date = new DateTime(2020,2,29) },
new Conference(){ Name="Conference O2", Location ="US", Date = new DateTime(2020,2,29) },
new Conference(){ Name="Conference O3", Location ="Romania", Date = new DateTime(2020,2,29) },
new Conference(){ Name="Conference O4", Location ="Bulgaria", Date = new DateTime(2020,3,29) },
new Conference(){ Name="Conference O5", Location ="Bulgaria", Date = new DateTime(2020,4,29) },
new Conference(){ Name="Conference O6", Location ="US", Date = new DateTime(2020,4,29) },
new Conference(){ Name="Conference O7", Location ="Bulgaria", Date = new DateTime(2020,5,29) },
new Conference(){ Name="Conference O8", Location ="Bulgaria", Date = new DateTime(2020,6,29) },
new Conference(){ Name="Conference O9", Location ="US", Date = new DateTime(2020,3,29) },
new Conference(){ Name="Conference OA", Location ="Bulgaria", Date = new DateTime(2020,4,29) },
new Conference(){ Name="Conference OB", Location ="Bulgaria", Date = new DateTime(2020,5,29) },
new Conference(){ Name="Conference OC", Location ="US", Date = new DateTime(2020,6,29) },
new Conference(){ Name="Conference OD", Location ="England", Date = new DateTime(2020,6,29) },
new Conference(){ Name="Conference OE", Location ="Bulgaria", Date = new DateTime(2020,5,29) },
new Conference(){ Name="Conference OF", Location ="England", Date = new DateTime(2020,7,29) },
new Conference(){ Name="Conference OG", Location ="Romania", Date = new DateTime(2020,7,29) },
new Conference(){ Name="Conference OH", Location ="Bulgaria", Date = new DateTime(2020,1,29) },
};
}
public static IList<Speaker> GetAllSpeakers()
{
var speakers = new List<Speaker>();
for (int i = 0; i < 25; i++)
{
speakers.Add(new Speaker() { FirstName = "First Name" + i, LastName = "Last Name" + i, Image = "placeholder.png" });
}
return speakers;
}
}
}
|
STACK_EDU
|
Enable unprecedented levels of automation and agility with cloud computing solutions.
All servers and systems
Choose a total, all-in-one HPE ProLiant server solution
Hewlett Packard Enterprise and Microsoft bring together the world’s leading server hardware and software innovations, applications, resources, and support.
HPE sells Microsoft OEM licenses in a number of ways. To ensure our customers purchase the right HPE ProLiant server with the right Microsoft® Windows Server® editions, we developed the HPE OEM Microsoft Reseller Option Kit (ROK) that delivers, through our reseller partners, value pricing, flexible choices, one-stop ordering, and single source support to help you be more productive, achieve more with your investment, and keep your business running smoothly.
Hewlett Packard Enterprise resellers can preinstall the Windows Server operating system you choose to deliver a solution that is entirely customized to your needs and immediately ready to use.
HPE ROK is the very same Windows Server operating system you would get directly from Microsoft. However, through our OEM agreement with Microsoft, we are able to customize and optimize Windows Server specifically for ProLiant servers and it is BIOS-locked to ensure it can only be installed on ProLiant servers.
HPE ROK Base Licenses may only be purchased with a new HPE server. However, Windows Server Standard Additional Licenses, for more VMs, or Windows Server Client Access Licenses (CALs) can be purchased separately.
To learn more about Windows Server 2016 licensing, visit www.hpe.com/servers/ws2016.
Find direct links to the Windows Server 2016 data sheet, FAQ and more in the resources section at the bottom of this page.
Our customers need tools that reduce the complexity of getting servers deployed quickly and inexpensively. Hewlett Packard Enterprise Intelligent Provisioning (HPE IP) provides a server-provisioning tool that eliminates media and time-consuming effort, as all required drivers and software tools are pre-loaded onto your ProLiant server. No configuration CDs to use, no drivers to find, no guesswork. HPE IP recognizes when system software is out of date and automatically downloads the latest update directly from the Internet, eliminating server configuration steps and deploying servers faster.
Contact your reseller partner for advice on the optimal HPE OEM Microsoft Reseller Option Kit (ROK) edition to fit your new HPE ProLiant server purchase and business needs.
Client Access Licenses (CALs) can be purchased at any time, either included with your new server purchase or at a later date as a stand-alone option.
With Software Assurance (SA) you receive access to new versions of licensed software released during the term of your SA agreement to deploy at your own pace. New Version Rights simplify the procurement process and shorten the business process cycle, allowing you to reduce the costs associated with acquiring new version releases and immediately take advantage of the latest technology. More information on Microsoft Software Assurance may be found at the Microsoft licensing website at www.microsoft.com/sa.
See “Do I need Microsoft Software Assurance (SA)?”
|
OPCFW_CODE
|
//
// ContainerTests.swift
//
//
// Created by Ivan Lisovyi on 16.03.20.
//
import XCTest
import Coil
final class ContainerTests: XCTestCase {
func testResolveRegisteredService() {
// Given
let container = Container()
.register(Dependency { _ in SimpleService() as SimpleProtocol })
// When
let resolved = container.resolve(SimpleProtocol.self)
// Then
XCTAssertNotNil(resolved)
}
func testResolveMultipleRegisteredServices() {
// Given
let container = Container()
.register(Dependency { _ in SimpleService() as SimpleProtocol })
.register(Dependency { _ in AnotherSimpleService() })
// When
let first = container.resolve(SimpleProtocol.self)
let second = container.resolve(AnotherSimpleService.self)
// Then
XCTAssertNotNil(first)
XCTAssertNotNil(second)
}
func testResolveWithParentContainer() {
// Given
let parent = Container()
.register(Dependency { _ in SimpleService() as SimpleProtocol })
let container = Container(parent)
.register(Dependency { _ in AnotherSimpleService() })
// When
let first = container.resolve(SimpleProtocol.self)
let second = container.resolve(AnotherSimpleService.self)
// Then
XCTAssertNotNil(first)
XCTAssertNotNil(second)
}
func testResolveWithContainerScope() {
// Given
let container = Container()
.register(Dependency { _ in SimpleService() as SimpleProtocol })
// When
let first = container.resolve(SimpleProtocol.self)
let second = container.resolve(SimpleProtocol.self)
// Then
XCTAssertNotNil(first)
XCTAssertNotNil(second)
XCTAssertTrue(first === second)
}
func testResolveWithTransientScope() {
// Given
let container = Container()
.register(Dependency(scope: .transient) { _ in SimpleService() as SimpleProtocol })
// When
let first = container.resolve(SimpleProtocol.self)
let second = container.resolve(SimpleProtocol.self)
// Then
XCTAssertNotNil(first)
XCTAssertNotNil(second)
XCTAssertTrue(first !== second)
}
func testResolveWithNonRegisteredService() {
// Given
let container = Container()
// When
let resolved = container.resolve(SimpleProtocol.self)
// Then
XCTAssertNil(resolved)
}
func testCombineContainers() {
// Given
final class DifferentService {
let value = "DifferentValue"
}
let first = Container()
.register(Dependency { _ in SimpleService() as SimpleProtocol })
let second = Container()
.register(Dependency(scope: .transient) { _ in AnotherSimpleService() })
let third = Container()
.register(Dependency { _ in DifferentService() })
let container = Container.combine(first, second, third)
// When
let firstResolved = container.resolve(SimpleProtocol.self)
let secondResolved = container.resolve(AnotherSimpleService.self)
let thirdResolved = container.resolve(DifferentService.self)
// Then
XCTAssertNotNil(firstResolved)
XCTAssertNotNil(secondResolved)
XCTAssertNotNil(thirdResolved)
}
func testCombineContainersWithServiceOfTheSameType() {
// Given
final class FirstService {
let value: Int
init(value: Int) {
self.value = value
}
}
let expectedValue = 2
let first = Container()
.register(Dependency { _ in FirstService(value: 1) })
let second = Container()
.register(Dependency { _ in FirstService(value: 2) })
let container = Container.combine(first, second)
// When
let resolved = container.resolve(FirstService.self)
// Then
XCTAssertNotNil(resolved)
XCTAssertEqual(resolved?.value, expectedValue)
}
func testResolveDependencyInAnotherDependencyRegisterClosure() {
// Given
final class DependentService {
let value: String
init(value: String) {
self.value = value
}
}
let simpleService = SimpleService()
let container = Container()
.register(Dependency { _ in simpleService })
.register(Dependency { (resolver) -> DependentService in
let simpleService = resolver.resolve(SimpleService.self)
guard let value = simpleService?.value else {
fatalError("SimpleService value shall be available")
}
return DependentService(value: value)
})
// When
let resolved = container.resolve(DependentService.self)
// Then
XCTAssertNotNil(resolved)
XCTAssertEqual(resolved?.value, simpleService.value)
}
static var allTests = [
("testResolveRegisteredService", testResolveRegisteredService),
("testResolveMultipleRegisteredServices", testResolveMultipleRegisteredServices),
("testResolveWithParentContainer", testResolveWithParentContainer),
("testResolveWithContainerScope", testResolveWithContainerScope),
("testResolveWithTransientScope", testResolveWithTransientScope),
("testResolveWithNonRegisteredService", testResolveWithNonRegisteredService),
("testCombineContainers", testCombineContainers),
("testCombineContainersWithServiceOfTheSameType", testCombineContainersWithServiceOfTheSameType),
("testResolveDependencyInAnotherDependencyRegisterClosure", testResolveDependencyInAnotherDependencyRegisterClosure)
]
}
private protocol SimpleProtocol: AnyObject {
var value: String { get }
}
private final class SimpleService: SimpleProtocol {
let value = "Value"
}
private final class AnotherSimpleService {
let value = "AnotherValue"
}
|
STACK_EDU
|