Document
stringlengths 395
24.5k
| Source
stringclasses 6
values |
|---|---|
When there's a possibility or the likelihood of litigation, admins can place mailboxes on litigation hold or in-place hold. When you place content locations on hold, content is held until you remove the hold from the content location or until you delete the hold. eDiscovery is used to produce immutable copies of data for legal counsel or the courts.
Legal hold can also be used as an alternative to using third-party journaling solutions since all emails are retained and cannot be deleted by the user or admin, except by retention policies.
Litigation hold in O365 originally only existed for Exchange Online mailbox data, but has been extended to include other workflows, like SharePoint Online, Teams, Skype for Business Online, etc.
For more information about litigation or legal hold in Office 365 please read In-Place Hold and Litigation Hold in Exchange Server.
This article will describe the technical licensing requirements for holds in O365, both lit hold and in-place hold. For brevity I will refer to both types of holds as "lit hold" in this article, since the licensing requirements are the same for both.
The user you wish to place on hold must a subscription that includes Exchange Online (Plan 2). This includes the following online licenses:
- Microsoft 365 E5
- Microsoft 365 E3
- Office 365 E5
- Office 365 E3
Users in subscriptions that include Exchange Online (Plan 1) can also be put on hold if the user has the Exchange Online Archiving add-on license. Holds only apply to mailbox data with this license.
To learn how to place a mailbox on hold, see the following articles:
One of the advantages of lit hold is that the user account can be deleted after they leave the company and the data will still be preserved for eDiscovery. This way you're not burning a license for a user who does not access their mailbox any longer. This is called making a mailbox inactive. A lot of organizations do this, so I want to dive into the legalities of this.
First, it's completely acceptable to do this and Microsoft supports it.
Second, you need to aware of the licensing terms regarding license reassignment. According to the Microsoft Volume Licensing Product Terms,
Customer may reassign a License to another device or user, but not less than 90 days since the last reassignment of that same License, unless the reassignment is due to (i) permanent hardware failure or loss, (ii) termination of the user’s employment or contract or (iii) temporary reallocation of CALs, Client Management Licenses and user or device SLs to cover a user’s absence or the unavailability of a device that is out of service.
Let's use some examples to illustrate this.
- John Baker's mailbox is on litigation hold when he leaves the company. The administrator makes John's mailbox inactive by deleting John's user account, which releases John's Microsoft 365 E5 license. The inactive mailbox is still subject to eDiscovery searches until one of the following:
- All litigation holds are released from John's mailbox (there may be more than one).
- All the data ages out based on the organization's litigation hold retention policy. Discovery can still be made, but no results will be returned.
- The organization is no longer a Microsoft Online customer. In this case, it is the responsibility of the organization to remove all data from Office 365 before they leave.
- Susan Mitchell's mailbox is on litigation hold when she goes on leave for 30 days. Susan will not access her email while out on leave. The administrator deletes Susan's account from Azure AD, which removes her license, and assigns it to her temporary replacement. When Susan returns to work, the temporary replacement's account is deleted, which again removes the license, and the license is reassigned back to Susan. This is allowed by the licensing terms because it was a temporary reallocation.
- Contoso has assigned 100 Office 365 E3 licenses to its workers. Contoso buys 100 new Microsoft 365 E5 licenses, assigns them to the same workers, and removes their Office 365 E3 licenses. The Office 365 E3 licenses can be assigned to other workers but cannot be reassigned again for 90 days except for the reasons listed above.
- Northwind Traders has 500 Office 365 F1 licenses assigned to users. These licenses do not include Exchange Online (Plan 2), so litigation hold is not an option for these users, however Northwind wants to retain their emails indefinitely. The administrator assigns a single Microsoft 365 E3 license to a user, enables litigation hold, and then removes the E3 license. He then repeats these steps for each user. This is a licensing violation for several reasons - Active mailboxes under litigation hold must have a valid license that includes Exchange Online (Plan 2) and it violates the licensing reassignment policy.
- Fabrikam has 500 Microsoft 365 F1 licenses assigned to users. These licenses do not include Exchange Online (Plan 2), so litigation hold is not an option for these users, however Fabrikam wants to retain their emails indefinitely when they leave the company. Fabrikam also has a single Office 365 E3 license. Five users leave the company. The Administrator can assign the Office 365 E3 license to one of the five users, enable litigation hold for that user, then delete the user account (releasing the F1 and E3 licenses). The mailbox will be retained due to litigation hold. She can repeat this for each of the separated users, one at a time. This is permitted because the 90-day reassignment policy does not apply to terminated users.
Special note 1:
The correct way to remove a license from a lit hold mailbox is to delete the user account from Azure Active Directory, which releases the license. This is documented here. While you are not prevented from removing a license from an existing user account, it will put the Azure user into an error state. This should be avoided.
Special note 2:
There are some conditions where you may have a mailbox that no one logs into that may still require a license. Examples include shared mailboxes under lit hold or where messages stored in a shared mailbox are needed for a Microsoft 365 Advanced eDiscovery case (the shared mailbox is a "custodian").
Hopefully, this information is useful and clears up some confusion around litigation hold and licensing. Special thanks to Microsoft and Tony Redmond for reviewing this article for accuracy.
|
OPCFW_CODE
|
This topic is not strictly related to polymers but I hope I'll receive some help.
I intend to write a UMAT subroutine in order to perform a heat treatment simulation using CalculiX (the residual sytresses generated during quenching are my major subject of interest).
I will most probably use the CalculiX native UMAT subroiutine rather than writing an Abaqus UMAT code and using it later with CalculiX.
My plan is to write a UMAT code for a thermo-elasto-plastic model first (combined isotropic and kinematic hardening). After verifying that everything works fine I will extend the code by adding the phase evolution kinetics and proper dependence of the elastic and plastic constants (on the percantage of anayzed phases).
There are some issues that remain unclear for me. I will list them below.
1) If I used Abaqus, it would be possible for me to perform a coupled thermo-mechanical analysis. In that case my Abaqus UMAT code should contain the definitions of the tangent operator (DDSDDE matrix) and a matrix defining stress dependence on temperature (DDSDDT).
However, I am using CalculiX (not Abaqus) and to my best knowledge the official release of CalculiX does not support the coupled thermo-mechanical analysis while using UMAT. Thus, I will have to utilize the uncoupled thermo-mechanical analysis (the step is called "uncoupled temperature-displacement"). I understand that in this case all I need to define in my UMAT subroutine is the mechanical tangent operator (the stiff matrix in CalculiX UMAT or DDSDDE in Abaqus UMAT). Of course, I will include in my tangent operator the terms which yield from the thermal expansion and the phase/temperature dependence of the elastic and plastic constants. But that is basically it. No need to define DDSDDT since I perform an uncoupled thermo-mechanical analysis and all I need to define is the mechanical tangent operator (DDSDDE or stiff matrix) and update the stress vector (STRESS or stre).
Please correct me if I'm wrong in this reasoning.
2) In order to obtain a realistic simulation of the quenching process I want to take into account the existance of latent heat. I understand that a proper user subroutine to achieve this is DEFLUX where I can define the heat generated per unit volume during each increment.
In order to calculate the latent heat generated during a given time increment, I need to know the volume fraction of the transformed phase. Suppose that I used the UMAT subroutine to define the kinetics of phase evolution. The volume fractions (of each analysed phase) are defined as separate state variables (STATEV) within UMAT. Here is a question: will I be able to use the state variables (STATEVs) defined in UMAT subroutine for further calculations of latent heat preformed in DEFLUX subroutine?
3) The documentation of the CalculiX UMAT native interface says that the "emec" vector contains the components of the Lagrange mechanical strain. I understand that if I do not utilize the nonlinear geometry (NLGEOM option) "emec" will simply contain the components of the small strain tensor. Is that correct?
Furthermore, the documentation states that at each increment the thermal strains are automatically subtracted from the total strain to obtain the strain components collected in the "emec" vector. Does this also apply if I define the thermal strains all by mhyself in UMAT?
Any help much appreciated,
|
OPCFW_CODE
|
M: Inside The Fine Art Factories of China - chrischen
https://instapainting.com/blog/company/2015/10/28/how-to-paint-10000-paintings/
R: eitally
I've visited Dafen several times, had conversations with artists there, and
purchased some paintings. The quality ranges from gradeschool-esque to fine
art, even if most of it isn't original. I have no problems with factory-style
creation of art, either. If people don't buy it, they won't create it. The
vast majority goes to commercial properties looking for semi-generic-but-in-a-
specific-style-or-color-scheme art for their public spaces or private rooms,
and although it's blatantly obvious how crappy a hotel room painting is, it
still beats staring at blank walls.
These folks have found yet another way too bootstrap themselves out of
poverty, and with startups like instapainting or
[https://www.nobilified.com/](https://www.nobilified.com/), it's easier than
ever for them to have a global reach. Good for them! Good for us!
R: frozenport
Here is the problem: why can't we have these guys painting actual art?
A decade ago I was visiting the Dominican Republic and saw street vendors who
sold touristy paintings that took them less than an hour to complete. Beach,
sunset, palm. They weren't good but they were unique.
R: bootload
_" Here is the problem: why can't we have these guys painting actual art?"_
Ever created visual art? Most art you see is commercial art. Capital-A, Art is
personal. To create an original piece of art takes a lot of effort, be it
something visually inspiring, a novel concept or deeply felt emotion. Apply
some technique and maybe you might come up with something good, maybe?
_" En plein air"_ isn't always practical.
Think of a photographs used in the way you describe, as the _" Cheeze-Wiz"_ of
inspiration for commercial art.
R: JoeAltmaier
I don't know. Isn't that backwards? "If its easy, its not real" is too simple.
Art may be defined by the artists' effort, or by the effect is has. Plenty of
folks are impressed/astounded by 'street art'. Why is it not real, just
because the artist had the chops to make it look easy?
R: bootload
_" Isn't that backwards? "If its easy, its not real" is too simple."_
good point @Joe, what I'm trying to explain is, it's inspiration that's hard.
The Muse. Art that comes easily, rarely looks/feels bad.
R: dshankar
This was a fascinating read. Surprised that painters have seemingly good
working conditions, set their own hours, and work from home!
Just a random idea: when targeting mid-market customers, have you considered
going "Watsi" style? Instead of an Instapainting storefront, what about
commissioning artwork directly from painters? Display the painter's name,
their photo, number of previous paintings painted, and their affordable price
to commission a painting.
Effectively, you could become the Internet-version of those studios
themselves.
R: forrestthewoods
I'd like to see an undercover investigation. This piece ended up being a
little too close to propaganda.
R: analyst74
Decent working conditions, flexible hours, desirable job, in China?! That's
blasphemy!
Well, the desirable job part might be stretching it...more like shitty job in
a highly respected profession.
R: chrischen
That's true, the artists that do the photo to painting aren't necessarily
ecstatic about signing their names on it.
R: Alex3917
Seth Godin talks about Dafen in a bunch of his books. There is an entire
section of the town that just makes monkey paintings. My favorite anecdote is
about one of his monkey paintings that inexplicably has what looks like a
teardrop on it. He said he was confused about this for a while, but then
realized what must have happened -- some rain got on one of the paintings, and
from then on everyone just kept copying the smudge. (Part of a larger riff
about how not everyone who makes paintings or whatever is an artist.)
R: imjk
I'd be interested in reading this. Can you point me to a specific book?
R: Alex3917
He talks about this in his book Linchpin, but he talks about it in a bunch of
other places as well:
[https://www.google.com/search?q=%22seth+godin%22+dafen&oq=%2...](https://www.google.com/search?q=%22seth+godin%22+dafen&oq=%22seth+godin%22+dafen&aqs=chrome..69i57.3625j0j4&sourceid=chrome&es_sm=119&ie=UTF-8)
R: Nicholas_C
In college my friend told me about this gig selling art that I took him up on.
We picked up a moving truck from a warehouse and drove it to a city about 20
hours away and set up art sales in hotels. When I asked the owner of the
operation where the paintings were from he said Asia.
The paintings themselves appeared to be legitimate paintings. Most were
different, but a few were very similar to other paintings packed into the
truck. Often an object in the painting was moved slightly. Some even looked
like classic paintings, I found one or two variations of Van Gogh's Starry
Night, as mentioned in OP's article.
Each one of these paintings sold for about $40-$100, depending on size.
I made decent money for a college kid but I declined a second gig as I was
concerned about where these paintings were coming from. I pictured a sweatshop
like scenario with painters inhaling fumes while getting paid pennies for each
painting. It's good to hear that probably isn't the case, at least according
to this article. The paintings were also marketed as being painted by
"starving artists" and the whole operation felt a little dishonest.
R: jbarham
The only thing that Instapainting and similar businesses demonstrates is that
bad photos don't make good paintings, especially when they're paint-by-numbers
copies of bad photos. IMHO most of Instapainting's customers would be better
off applying a "painterly" filter in Photoshop to their photos and have them
printed to canvas at their nearest Costco.
Paintings of photos look like paintings of photos because they're crippled by
the artefacts of the source photo such as unnatural perspective (which is why
the "normal" 50mm lens is a thing), inaccurate colours and clipped highlights
due to the limited dynamic range of even pro-level DSLRs.
Professional painters take years to hone their craft and learn the rules of
composition that make a painting look good. E.g., one of my favourite painters
is the watercolour virtuoso Joseph Zbukvic. Here's a video where he quickly
paints an impressionistic Melbourne street scene:
[https://www.youtube.com/watch?v=81w9PBZOmZ8](https://www.youtube.com/watch?v=81w9PBZOmZ8).
Because he's not slavishly copying the photo he's free to rearrange the
composition and add elements to make it more pleasing. He makes it look easy
but that's because he's one of the best.
Buying original paintings from professional artists doesn't have to be that
expensive. This past weekend I attended a group art show opening in Melbourne
and prices ranged from $450 to $2200 for some very nice original oil &
watercolour paintings. If you can afford to live in the Bay area or most other
large cities in the developed world, you can afford to occasionally pay that
much for a painting.
(Most of the stuff churned out by the "contemporary art" scene can be safely
ignored. At the very high end contemporary art isn't art as much as it is an
unregulated private currency for the very rich to launder and move around
their wealth. Most contemporary art disappears without a trace.)
R: Animats
_" IMHO most of Instapainting's customers would be better off applying a
"painterly" filter in Photoshop to their photos and have them printed to
canvas at their nearest Costco."_
That works quite well, especially if you use one of the better inkjet printers
with six or seven inks, print on canvas, and finish off with a sprayed clear
coat. This is called "Giclée".[1]
There's already an online service for this.[2] You can even see what the
result will look like before you order. Price is about $30-$50/square foot.
[1]
[https://en.wikipedia.org/wiki/Giclée](https://en.wikipedia.org/wiki/Giclée)
[2] [http://www.canvaspress.com/](http://www.canvaspress.com/)
R: chrischen
Yea, we offer high quality archival canvas prints printed on demand through
our sister site: [http://www.amanufactory.com](http://www.amanufactory.com)
There isn't just _an online service_ for this, there are literally thousands.
R: nugget
Chris: I've seen at least half a dozen of these businesses come and go. None
of them seem to be able to scale enough to make for an exciting long-term
business worth reinvesting in. Despite individuals absolutely loving the final
product. Any thoughts on why this is, and how you can succeed where others
have failed?
R: chrischen
We're completely bootstrapped, so we don't have unrealistic investor
expectations or valuation targets to achieve or else implode. I think that's
probably the primary reason most of these businesses disappear.
They either invest a lot of money themselves or raise money from investors
with the hope of making it back in profits that don't come (for whatever
reason).
I started Instapainting with a negative bank balance, so everything we've done
has been optimized to increase revenue and efficiency.
R: nols
Why does your blog post say you're backed by YC if you're bootstrapped?
R: chrischen
We ran out of YC (and other investor's money), and pivoted to this to make
money.
R: pnathan
> And it likely played well with Western executives, who preferred to hear
> that their product was being mass produced in factories rather than
> subcontracted to rural artists.
Absolutely fascinating! That's quite an interesting discussion point.
R: cubano
I'm somewhat surprised no one has of yet written an algo that can do this sort
of thing on demand.
Would it even matter it was automated if it did a good job? Are these artists
is Dafen nothing but automatons, waiting to be replaced by some AI and a
modded 3d plotter?
Personally, I would not care how it was produced if I wanted such a painting.
R: chrischen
We have been trying:
[https://www.instapainting.com/ai-painter](https://www.instapainting.com/ai-
painter)
[https://www.instapainting.com/blog/research/2015/09/10/robot...](https://www.instapainting.com/blog/research/2015/09/10/robotic-
painter-color/)
[https://www.instapainting.com/blog/research/2015/08/23/ai-
pa...](https://www.instapainting.com/blog/research/2015/08/23/ai-painter/)
Now the trick is to bridge the gap between the neural net algorithm and the
physical robot.
Unfortunately this photo to line drawing is the best we can do so far:
[https://twitter.com/instapainting/status/636245176554405892](https://twitter.com/instapainting/status/636245176554405892)
R: sithadmin
I have a knockoff (or a 'copy', as some like to say) Yue Minjun piece from one
of these places. Aside from a couple minor flaws, it's actually fairly high
quality. Having it framed was about 13x more expensive than the painting
itself.
R: chrischen
The economics of framing are tricky too. Frames are actually just as cheap in
China FOB price, especially if they are a mass-produced size. Even if not
mass-produced, a frame from China would be priced in line with a custom
painting.
The issue is that shipping prices are by weight, and the frames are usually
much more heavy than a piece of canvas. This significantly drives up the total
price of a frame from China, and lets American framing companies charge much
higher prices.
Pro tip: get the artwork in a size that already fits a standard frame size and
you'll save a ton of money.
R: sithadmin
>The issue is that shipping prices are by weight, and the frames are usually
much more heavy than a piece of canvas
Which is precisely why I brought the canvas home in a tube.
>Pro tip: get the artwork in a size that already fits a standard frame size
and you'll save a ton of money.
Eh. I looked at such options, but wasn't impressed. Ended up settling on a
fairly oddball, incandescently red glossy frame. Goes well alongside the
subjects (a bunch of PLA guys).
R: dmritard96
Been a few times. Very cool place with tons of artwork. The originals are hard
to find but even the replications and copies are fascinating and watching
people do it is very interesting.
R: known
Exploitation?
R: zeroecco
And now I know why so many fine artists can't find work unless it is purely
original work (a highly volatile market). I am saddened deeply that even art
has degraded to this. if you want a print, buy a print from a printing
machine. You want an oil painting? Hire or barter with a local artist to do
so. I can't even. I just can't even.
R: nkrisc
Aside from respecting geopolitical borders and nationalistic allegiance, why
should I value a local artist any more than an artist in China? Is the Chinese
artist somehow lesser than the local artist?
R: zeroecco
is synthetic vanilla `less than` real vanilla? People are not the problem
here, the product is. It lacks in the POINT of art.
Art is like going to a concert, an expression of the human condition. Why go
if you can listen to a perfect mix of it at home?
How you get a painting is part of the reason you get the painting at all. The
story behind it is just as much of the point as the piece itself.
To be clear: If you are in China, buy art from a Chinese artist. If you are in
Brazil, buy art from a Brazilian artist.
R: genericone
I think the downvotes are coming from the following interpretation of your
comment: No true Artsman buys art internationally.
R: zeroecco
likely. but that wasn't my intent.
|
HACKER_NEWS
|
How to combine CSV files with unique headers and show output of each CSV per row?
I have seen numerous examples of how to combine CSV files, but none quite fit what I am trying to accomplish. I have multiple CSV files that only contain 1 row, but multiple columns. Each header may differ in each CSV, but they do have a common column name between them. The output I want to accomplish is to add all headers from each CSV file (regardless of whether the shown value is null or not) to row 1 of my output CSV file. Then I want to align each CSV's single row output to a row in the output CSV file, and populate each column accordingly, and leave those empty that have no values assigned.
CSV1:
Name,Date,Year,Country
Bill,May 2018,1962,Canada
CSV2:
Profile,Prov,Size,Name
1,ON,14,Steve
CSV Final:
Name,Profile,Size,Date,Year,Prov,Country
Bill,,,May 2018,1962,,Canada
Steve,1,14,,,ON,,
look for one of the several Join-Object cmdlets mimicking the sql join.
Using this Join-Object from the PowerShell Gallery: $Csv1 | FullJoin $Csv2 Name
Import, extract, construct and export...
$files = 'csv1.csv','csv2.csv'
$outputcsv = 'final.csv'
# import all files
$csv = $files | Import-Csv
# extract columns from all files
$columns = $csv | ForEach {
$_ | Get-Member -MemberType NoteProperty | Select -ExpandProperty Name
}
# construct an object (from file 1) with all the columns
# this will build the initial csv
# (SilentlyContinue on members that already exist)
$columns | ForEach {
Add-Member -InputObject $csv[0] -Name $_ -MemberType NoteProperty -Value '' -ErrorAction SilentlyContinue
}
# export first object - and build all columns
$csv | Select -First 1 | Export-Csv -Path $outputcsv -NoTypeInformation
# export the rest
# -force is needed, because only 1st object has all the columns
$csv | Select -Skip 1 | Export-Csv -Path $outputcsv -NoTypeInformation -Force -Append
# show files
($files+$outputcsv) | ForEach { "$_`:"; (Get-Content -LiteralPath $_); '---' }
Edit: No need to overthink the export. A single line is enough:
$csv | Export-Csv -Path $outputcsv -NoTypeInformation
While there will be another option to do it in a one liner with Join-Object,
You can get this cmdlet from PSGallery.
Find-Script join-object | Install-Script -Verbose -Scope CurrentUser
below works,
(Join-Object -Left $CSV1 -Right $CSv2 -LeftJoinProperty Name -RightJoinProperty Name ), (Join-Object -Left $CSV2 -Right $CSV1 -LeftJoinProperty Name -RightJoinProperty Name -Type AllInLeft) | Export-Csv -Path c:\CombinedCsv.csv -NoTypeInformation
Join-Object is not a built-in cmdlet.
Added the step to install the cmdlet.
|
STACK_EXCHANGE
|
When will Windows 10 be released?
I'm subscribed to the Preview for Developers program, but I'm in the dark about the next big OS version for my phone (which is currently a Lumia 1520).
We know that Windows 10 will be on every device (The Verge, Microsoft's Press event), but apart from seeing what will probably eventually come to my phone, I'm left with the question of when.
What are the dates for the preview and alpha versions of Windows 10 for phones? What's the best channel of information to find out as soon as such things are available?
Keep in mind that you will need the Windows Insider app (http://www.windowsphone.com/s?appid=ed2b1421-6414-4544-bd8d-06d58ee402a5), not the PFD app to install Win10.
Although my question has been answered, I'm still waiting for the Windows 10 for phones preview to be released for some models like the Lumia 1520 which is the model of my phone.
Possible duplicate of Release date of Windows 10 Mobile
Windows 10 has been released for the following devices in March 2016:
Lumia 1520
Lumia 930
Lumia 830
Lumia 730/735
Lumia 640/640 XL
Lumia 635/636/638 (1 GB)
Lumia 540
Lumia 535/532
Lumia 430/435
BLU Win HD w510u and Win HD LTE x150q
MCJ Madosma Q501
To get started, you'll need to download the Windows 10 Upgrade Advisor app. Note that actual availability of the update for your phone is dependent on carrier approval.
For more information, see the following links:
Windows 10 is here for your phone (Microsoft.com)
Upgrading existing Windows Phone 8.1 devices to Windows 10 Mobile (Windows Blog)
Windows 10 Mobile Insider Device Rollout FAQ (Microsoft Community)
Microsoft (Finally) Ships Windows 10 Mobile Upgrade for Windows Phone 8.1 (Thurrott.com)
As for other phone models, according to this AAWP article, it's seeming more and more likely that anything from the Lumia x20 series (520, 920 and so on), as well as phones with less than 1 GB of RAM, will not be able to upgrade to Windows 10 Mobile. Hopefully Microsoft will clarify the situation with these phones soon.
As for the best channel of information, Windows 10 is such a big deal that every major news outlet, tech-related or not, will report on any significant milestones as soon as information becomes available. You'd really have to go out of your way to remain in the dark. That said, a couple of good sources to keep an eye on:
Blogging Windows
Windows Central
All About Windows Phone
Thurrott.com
Neowin
The preview has apparently now started
That's right! Currently for Lumia 630, Lumia 635, Lumia 636, Lumia 638, Lumia 730, and Lumia 830.
You may want to edit your answer; MS has said that it will be releasing in September. Also, you could perhaps get a little more specific now-- June 29 is the Windows 10 release date for PC's.
Just a note, the HTC 8x will apparently not get the update.
|
STACK_EXCHANGE
|
A persona of a device represents the role that the device plays in a network deployment. Creating a persona for devices helps in customizing configuration workflows, automating parts of configurations, and showing the default configuration and the relevant settings for the device. Persona configuration also helps in customizing the monitoring screens and troubleshooting workflows that are appropriate for the device.
Creating a Persona
Personas can be created when creating a group. Persona and architecture can be set at the group level. All devices within a group inherit the same persona from the group settings.
While creating a group, the architecture and persona settings of the current group can be marked as preferred settings for adding subsequent groups. For subsequent groups, you can either automatically apply the preferred settings or manually select settings for the new group.
Based on the device persona selected in a group, the device configuration page displays only particular device tabs for that group. For example, if a group has only access points persona assigned to it, then the device configuration page for that group displays only the access points tab.
Persona for Access Points
Access Points can have the following persona:
- Campus/Branch—In this persona, AP provides WLAN Wireless Local Area Network. WLAN is a 802.11 standards-based LAN that the users access through a wireless connection. functionality. This persona applies to both ArubaOS 10 and legacy ArubaOS 8 (including IAP-VPN Virtual Private Network. VPN enables secure access to a corporate network when located remotely. It enables a computer to send and receive data across shared or public networks as if it were directly connected to the private network, while benefiting from the functionality, security, and management policies of the private network. This is done by establishing a virtual point-to-point connection through the use of dedicated connections, encryption, or a combination of the two.) architectures.
- Microbranch—In this persona, AP provides SD-WAN Software-Defined Wide Area Network. SD-WAN is an application for applying SDN technology to WAN connections that connect enterprise networks across disparate geographical locations.-lite functionality in addition to the functions of WLAN AP. This persona applies only to ArubaOS 10 architecture.
Persona for Gateways
Gateways can have the following personas:
- Branch—In this persona, gateways provide the Aruba InstantOS and SD-Branch (LAN Local Area Network. A LAN is a network of connected devices within a distinct geographic area such as an office or a commercial establishment and share a common communications line or wireless link to a server. + WAN Wide Area Network. WAN is a telecommunications network or computer network that extends over a large geographical distance.) functionality. This persona applies to
both ArubaOS 10 andArubaOS 8 architecture s, with ArubaOS 10 being a superset of ArubaOS 8.
- VPN Concentrator—In this persona, gateways provide VPN concentrator functionality for
ArubaOS 10 Microbranch orArubaOS 8 or ArubaOS 10 + SD-Branchdeployments.
- Mobility—In this persona, gateway enables tunneling of traffic for enhanced security and centralized policy enforcement across wired and wireless clients. This persona only applies to the ArubaOS 10 architecture.
The following architecture is supported for creating groups:
- ArubaOS 8—Instant AP-based deployment, including Aruba InstanOS 6.x or Aruba InstantOS 8.x (IAP, IAP-VPN), or Aruba InstantOS 8.x SD-Branch deployments.
- ArubaOS 10—ArubaOS 10 deployment, including AP-only underlay, AP+gateway overlay, Microbranch, or AOS 10.x SD-Branch deployments.
A device persona can be applied to both ArubaOS 10 and ArubaOS 8 deployments. All the ArubaOS 8 personas apply to ArubaOS 10 deployments as well. The ArubaOS 10 deployment supports a couple of additional personas that do not apply to ArubaOS 8 deployment. The persona workflow differs based on the deployment type.
The following animation shows you group persona functionality in Aruba Central.
For information on creating groups with a persona and architecture, see the following topics:
|
OPCFW_CODE
|
Written by current Gastronomy student Laura Kitchings.
As an Archivist who is a current Master’s Candidate in the Gastronomy program, I am always trying to find ways to incorporate my professional training into my study of food. This summer I was fortunate to attend the 30-hour, 5-day workshop “The History of the Book in America: A Survey from Colonial to Modern” at Rare Book School (RBS) at the University of Virginia in Charlottesville, Virginia. Rare Book School is an organization that provides educational opportunities to study the history, care, and use of written, printed, and digital materials. The course was co-taught by Scott E. Casper and Jeffrey D. Groves who have published extensively on the History of the Book in America both separately and as a team. Our twelve-person class included antiquarian booksellers, librarians, archivists, and graduate students. I attended the class as part of my thesis research, focusing on cookbooks in the 1890’s. My goal in attending the program was to place the cookbooks of the 1890’s in the larger book history of the United States. While I had studies Special Collection Library management as part of my Master’s in Library Science at Simmons College (now Simmons University) I had never taken a course in the History of the Book.
Each of the five days during the workshop was divided into four sessions. I expected each session to consist of a lecture around an aspect of book history. Instead, each day was a mix of lecture, and activities. The various activities, including as comparing educational texts, almanacs, newspapers, and paperback books from a variety of time periods. As you can see from the examples below, each activity involved working with materials held by RBS and active discussion with classmates.
These activities allowed me to consider possible comparison cookbook activities in a library setting.
While our class was focused on Book History, we were also able to see what other courses were studying. One evening we were able to work with the Vandercook proof press that included the need to hand set type.
While our team struggled with placing the small metal pieces of type to prepare to actually print on the press, I found myself reflecting on how printing, like cooking, involves significant preparation and muscle memory. While I was frustrated in the movement while placing the type, I found myself thinking about the Saturdays I spent learning knife skills in MET ML 698 –Laboratory in the Culinary Arts: Cooking.
As with learning knife skills, I realized that if I had to regularly work with the small type, it would become Embodied knowledge.
On the last day, each member of the class presented on the future of the book. While my classmates focused on performative reading on social media and linked data, I focused on cookbooks. I was able to use work done as part of a team in MET ML 671- Food and Visual Culture. My presentation focused on how cookbooks now need to include significant visual elements such as photographs and illustrations, and how successful cookbook authors need to have a social media presence.
While it was an exhausting week, I’m grateful that I had the opportunity to attend the workshop and hope to find opportunities to teach with historic cookbooks in the future.
|
OPCFW_CODE
|
About Vlocity Vlocity-Platform-Developer Exam Still Valid Dumps
Vlocity Vlocity-Platform-Developer Certified Questions Of course, it is necessary to qualify for a qualifying exam, but more importantly, you will have more opportunities to get promoted in the workplace, The Vlocity Platform Developer Exam (v5.0) Vlocity-Platform-Developer exam dumps are provided by our company is organized in such a manner they can fulfill the needs of the exam in the finest possible way, The APP online version of our Vlocity-Platform-Developer study guide is used and designed based on the web browser.
Creating the AjaxControlToolkit.BehaviorBase Class, We were the merest acquaintances, Certified Vlocity-Platform-Developer Questions having met earlier in the year when he toured the Watson Lab, Skilled IM professionals needed The demand for IM and cybersecurity experts is growing.
What would be a better way of getting the user to confirm the loss of CTAL-TA_Syll2012DACH Certification Dumps form data, Strict avoidance of such addiction requires rigorous intellectual morality, which is only available to non-retired thinkers.
They'll require updates to content, site architecture, code, Certified Vlocity-Platform-Developer Questions or software, Check out how this works by saving the following code as an html file and opening it in Chrome.
Do you have the potential to manage others, New Vlocity-Platform-Developer Test Price The way you implement database connections may be the most important design decision you make for your application, Understanding Vlocity-Platform-Developer Practice Tests the effects of one market on another is critical to successful investing.
100% Pass Quiz Pass-Sure Vlocity - Vlocity-Platform-Developer Certified Questions
PayPal lets you print these forms for free, Organizations need to keep pace Certified Vlocity-Platform-Developer Questions with emerging technologies in order to stay ahead of competition and be able to scale operations in line with market changes and business goals.
Designer as Product Owner, Imagine that you walk out the https://examkiller.testsdumps.com/Vlocity-Platform-Developer_real-exam-dumps.html patio door of your hotel room an ocean view, of course) and admire the beauty of the sun setting on the ocean.
In order to build up your confidence for Vlocity-Platform-Developer training materials, we are pass guarantee and money back guarantee, if you fail to pass the exam we will give you full refund.
Using the WebServiceConnector Component, Of course, it is necessary S1000-007 Certification Test Questions to qualify for a qualifying exam, but more importantly, you will have more opportunities to get promoted in the workplace.
The Vlocity Platform Developer Exam (v5.0) Vlocity-Platform-Developer exam dumps are provided by our company is organized in such a manner they can fulfill the needs of the exam in the finest possible way.
The APP online version of our Vlocity-Platform-Developer study guide is used and designed based on the web browser, Real Exam Scenario With Vlocity-Platform-Developer Training Material, To help you improve yourself with Certified Vlocity-Platform-Developer Questions the pace of society, they also update the content according to requirement of the syllabus.
Pass Guaranteed Fantastic Vlocity - Vlocity-Platform-Developer Certified Questions
Accompanied by tremendous and popular compliments around the world, to make your feel more comprehensible about the Vlocity-Platform-Developer practice materials, all necessary questions of knowledge concerned with the exam are included into our Vlocity-Platform-Developer practice materials.
Facing the Vlocity-Platform-Developer exam this time, your rooted stressful mind of the exam can be eliminated after getting help from our Vlocity-Platform-Developer practice materials, These Vlocity Vlocity-Platform-Developer dump torrent are designed by our IT trainers and workers who are specialized in the real test questions for many years and they know well the key points of Vlocity-Platform-Developer real pdf dumps.
In any case, if someone is not able to pass despite preparing through Vlocity Vlocity-Platform-Developer dumps than he will be able to get all of his money back, The qualified experts have done their work very competently.
And you will be bound to pass the exam with our Vlocity-Platform-Developer learning guide, We believe that Vlocity-Platform-Developer test prep cram will succeed in helping you pass through the Vlocity-Platform-Developer test with high scores .What you need to do is giving us a chance, and we will see what happened.
The charging platforms the Vlocity-Platform-Developer trusted exam resource cooperated are all with high reputation in the international and own the most reliable security defense system.
Also, you just need to click one kind; then you can know much about it, Maybe, Vlocity-Platform-Developer certkingdom training material will be your good guidance, Our Vlocity-Platform-Developer study guide materials could bring huge impact to your personal development, because in the process of we are looking for a job, hold a Vlocity-Platform-Developer certificate you have more advantage than your competitors, the company will be a greater probability of you.
NEW QUESTION: 1
The ASA supports the following authentication methods with RADIUS servers:
+ PAP -- For all connection types.
+ CHAP and MS-CHAPv1 -- For L2TP-over-IPsec connections.
+ MS-CHAPv2 - For L2TP-over-IPsec connections
|
OPCFW_CODE
|
Review for Measuring Service in Multi-Class Networks
|0.6 (somewhat familiar with this area of research)||3 (3: Fair contribution)||1 (1: Requires major work)||2 (2: Weak Reject)|
The paper addresses an important topic, viz., user verification of network QoS through non-invasive means. The study is focused on router level QoS mechanisms, and provides techniques for users to distinguish between different scheduling mechanisms (Priority, WFQ, EDF) and estimate the parameters of the service.
In my opinion, there seems to be a mismatch between the needs of QoS verification and the methodology adopted in the paper. My primary concern is that the presentation is poor and difficult to follow.
A. The authors fail to distinguish between QoS mechanisms (policing, buffer management, scheduling) at an individual network element, and the network characteristics of the service delivered by the provider to the customer. For instance, a provider may offer a "Gold service" with delay and loss assurances, and may use multiple mechanisms (EDF and priority queueing) within the network to deliver this service. In this network packets of a single class may not travel the same sequence of network nodes, and could become re-ordered. Does it even make sense to model, interpret or validate this NETWORK service using a ROUTER model? Stated in an other way, is there any hope of extending the authors' methodology to a muliple router scenario?
B. The sampling or probing methodology is unclear. The authors mention the use of streams of packets with sequence numbers and timestamps. However, the experimental setup is not quite clear in terms of what are the raw data sampled and how the estimators are constructed. In particular, the statistical estimators used are sensitive to the fact that multiple classes be jointly backlogged at the time of sampling. How is this achieved? If there is unknown cross-traffic, how can we be sure about the backlogged status of the router?
C. The sensitivity of the estimation process to time scales is not adequately explored, especially as "all time scales are not guaranteed to infer the same scheduler" and hence "the final decision is made by using the majority rule over all time scales."
I would ask that the paper be rejected in its current form, with the recommendation that the authors work more on the justification for and presentation of the work.
|0.6 (somewhat familiar with this area of research)||2 (2: Marginal contribution)||5 (5: Very good)||2 (2: Weak Reject)|
This paper describes how to infer the service discipline used in a router, and after that infer service parameters for different classes, using a passive monitoring approach, and hypothesis testing techniques.
I have several concerns about the paper:
1. I can't think of practical use of the solution, especially given its relative complexity. For example, wouldn't a service provider document and tell its clients the kind of service that their routers provide, and
what performance properties users can expect?
2. The monitoring approach is only shown to work if the performance monitored is due to processing by a single router. In practice, performance can only be observed for an end-to-end path, with > 1 hops, and possibly heterogeneous service disciplines used in different hops. Will these techniques still work? If not, then paper's only addressing a toy problem.
3. The experimental results address simplified cases, e.g. only EDF vs SP vs WFQ for possible disciplines, and two WFQ rates for parameter discovery. They don't not convince that they will generalize to cases in which the possible candidate disciplines are more diverse (including other algorithms or even combination of algorithms), or when an WFQ is shared by flows with many different rates, and some of these rates are close to each other.
In summary, while I find such use of the estimation/hypothesis testing theory interesting, I'm skeptical about the need and practicality of the proposed solutions.
|0.6 (somewhat familiar with this area of research)||3 (3: Fair contribution)||5 (5: Very good)||4 (4: Weak Accept)|
The paper provides a statistical technique for inferring the type of scheduling used by a switch, and certain parameters of the inferred scheduler. The data for the inference is the empirical service envelopes of traffic through the router. A maximum likelihood estimator for a Gaussian parameterization of the envelopes is developed. The authors carry out some experimental evaluations.
This seems a reasonably interesting paper, applying rigorous statistical techniques to a networking problem.
1) Can the authors provide some more argument on how the results of their inference would be used in practice? What actions would end-points or applications take on the basis of the results.
2) What are the limitations of the Gaussian approach. Does this limit the applicability to inference on highly aggregated flows? It seems the approach should generalize without this assumption, although at the cost of some additional complexity.
3) What is the behavior of the inference if the actual discipline does not belong to one of the classes considered in the model? Can one use the maximizing likelihood in a statistical test of the hypothesis that the actual discipline is among the model class?
|
OPCFW_CODE
|
Set an IP and get notified if your public IP differs. Useful to stay informed about network changes.
This extension helps you to get informed about changes to your public IP address. You set an address to check for, and the icon will be green if it matches. The icon will change its color if your public IP is different from the one you set in the options and you will hear a short notification sound (muteable).
- (2021-09-12) SoNNeT: Muting sound doesnt work. it keeps alarming
- (2021-08-26) riowong: Can't use sound notification in silent environment. Lack of popup notification. It's useless to me now.
- (2021-07-14) Wojtek Rzechówka: This sound is horrible! Why can't I turn it off and only have icon notification?!
- (2021-06-28) Seth Fisher: Only works for IPv4 address, which doesn't change very often. Unfortunately the usefulness of that is shrinking because modern security systems check for your IPv6, which service providers are constantly changing, and which security protocols now use. If you're looking for a quick way to know if your IP address changed because you have to keep your firewall exceptions updated, this won't do the job. It would be awesome if they gave you the option to check for IPv6 address changes.
- (2020-10-13) Vinay Wadhwa: HORRIBLE ALARM SOUND - WHY? i thought my hard drive is malfunctioning or there's something stuck in my fan because of the STRANGE and LOUD alarm sound. I opened up my MacBook cleaned everything, ran disk error checks, thought i had to buy a new laptop.
- (2020-08-14) Zvi Twersky: I would only use it if I'd have an option to check once a day. Every 5 minutes is over the top.
- (2019-04-08) Eric Segerson: Just installed. Looks like just what I need. Thank you! Would be nice if I could set the check interval for longer than 5 minutes. Like, once per hour or once every 24 hours. I'm also unsure (because I just installed it) if it will pop an alert or just make a noise every 5 minutes. I'd prefer a popup that doesn't go away until I click OK.
- (2018-03-31) azizul ayasaki: If you can make this notification check our IP every min or sec. Then i will say this add-on are perfect.
- (2018-03-17) Luzia Enengel: usefull plug-in! funny sound :)
- (2018-03-17) FALKEmedia Technik: just working :)
|
OPCFW_CODE
|
You can add customized, event-specific database fields to collect information from event registrants, such as seating choice, meal choice, or extra-charge options.
You can customize the appearance of the event registration form by modifying the event registration system page.
Registration form fields
The default event registration form consists of two sections:
- contact fields
- events fields
Values entered in contact fields on event registration forms will not update corresponding fields in the registrant's contact record, but are stored separately within event registration records. See Customizing your contact database fields for more information.
To customize an event registration form, follow these steps:
- Hover over the Events menu and select the Events list option.
- Within the event list, click on the event whose form you want to customize.
- Click the Registration form link.
- Click Edit.
- From the screen that appears, you can choose which contact fields to include, and add custom event fields for this event only. If you want registrants to be able to – or be required to – upload one or more documents, add a file attachment type field.
You cannot de-select the e-mail field.
- When you are finishing making your changes, click Save.
To add a new field, click Add new field. For new fields, you can set the field type and change field settings.
The following field types are available:
Simple text field, used for short entries.
Used for longer text entries of up to 3,000 characters.
A set of checkboxes. See Adding choices to multi-option fields.
Multiple choice with extra charge
A set of checkboxes, each with an associated cost. Allows you to provide additional event options at a separate cost.
A set of mutually exclusive choices, arranged like buttons on a car radio. See Adding choices to multi-option fields.
Radio buttons with extra charge
A set of mutually exclusive choices, arranged like buttons on a car radio, each with an associated cost. Allows you to provide additional event options at a separate cost.
Extra charge calculation
Provides the ability to order multiple items, or to charge an additional fee proportional to a value entered by the registrant. For organizational members, you might want to charge an extra fee based on their revenue, number of staff, or grants they've received. For more information, see Using the extra charge calculation field.
A set of mutually exclusive choices, arranged in a drop-down list. See Adding choices to multi-option fields.
Allows an event registrant to upload documents and images as part of their event registration. The supported document types are: TXT, PDF, DOC, DOCX, XLS, XLSX, PTT, PPTX, ZIP, CSV. The supported image types are: JPG, JPEG, GIF, PNG, TIF, TIFF. For more information on working with file attachment fields, click here.
Rules and terms
Displays a calendar control that can be used to select a date.
Used to group and separate fields.
For each field, the following settings are available:
The name used to identify the field. The field label must be unique among all event fields and contact fields.
Controls whether the field has to be filled out before the form can be submitted. For all self-service online forms, (member application, email subscription, donation, and event registration), the Email field is always required.
For multiple choice, radio buttons, and dropdown fields, you choose the options to be displayed. Click an existing option to change or remove it. Click Add new item to add more items to the list. See Adding choices to multi-option fields.
Instructions explaining how to use this field. For information on controlling the appearance of field instructions, see Adding field instructions.
Deleting a field will lose the data stored in that field for all current event registrants.
To delete a field, click it within the list, then click the delete link on the right.
After you delete a field, it appears crossed out in the field list until you save your changes.
While the field appears crossed out, you can restore it by clicking the restore link.
To change the order in which fields appear, you can drag and drop fields within the list, or you can click the green up and down arrows beside a field.
Changing colors and fonts
You can change the colors and text styles used on your event registration form from the Colors and styles screen. For a complete list of the elements you can modify, see Event calendar gadget.
Any changes you make will be applied to other gadgets that use the same settings.
Modifying the event registration system page
You customize the system page used to display the event registration form by adding content.
To customize the event registration system page, follow these steps:
- Hover over the Website menu and select the System pages option.
- Within the system page list, select Event registration.
- Click the Edit button.
Now, you can modify the system page in a number of ways. You can:
- Change the page template from the page settings on the left.
- Hover over the blue box – the system gadget that displays the actual registration form – and click the Settings icon to display the settings for the system gadget.
- Click the Gadget or Layout drop-downs to insert gadgets and layouts above or below the system gadget.
When you are finished modifying the event registration system page, click the Save button.
Using an event registration form
When visitors to your site click the Register button for an event, the following event registration forms appear:
- First, they will be asked to enter their email address. If they are logged in, their email will already be filled in (though they can change it, to register another person).
- If there are multiple ticket types, they will then be asked to select a ticket type. Depending on whether they are logged on or not, some member-only ticket types may not be available. If they are not logged in, but their email is stored in your contact database, they will be prompted to log in. If their email is not stored in your contact database, they will be prompted to apply for membership.
- Now, the main registration form appears. Once they complete the form and click Next, the event record is created and they will be prompted to confirm the registration.
- After the registration is confirmed, a new contact record will be added and an email will be automatically sent to them with their password and other login information. Depending on the payment method you chose for your event, the registrant may be given the option of paying the registration fee off or online.
Working with file attachments
If you want event registrants to be able to upload documents and images as part of their event registration, you need to add a file attachment type field to the event registration form.
On the registration form, the field will appear as a Choose files button that the registrant can click to upload up to 20 files, with each file a maximum of 20 MB.
The supported document formats are: TXT, PDF, DOC, DOCX, XLS, XLSX, PTT, PPTX, ZIP, CSV. The supported image formats are: JPG, JPEG, GIF, PNG, TIF, TIFF.
If you designate your file attachment field as a required field, then registrants will have to upload at least one file before proceeding with their registration.
After the registrant uploads a file and confirms the registration, administrators can view the file attachment(s) from the registrant's registration details.
Administrators can edit the registration details and add files or remove files from the registration.
Registrants can view their attachments by clicking their event registration within the list on the My event registrations tab of their private member profiles.
Registrants cannot replace or delete file attachments after submitting the registration form.
For more ideas on ways you can use file attachments to help your events run smoothly, click here.
|
OPCFW_CODE
|
Can't enable/set current new themes - Orchard v<IP_ADDRESS>
I have created new themes from the command line tool from a clean install of the source. Have ran the site in debug mode, gone through setup and enabled code generation.
After running the codegen theme line, the new theme will show up when clicking on "Themes" in admin, however when I try to enable or set current the newly created theme, I get the following:
An unhandled exception has occurred and the request was terminated. Please refresh the page. If the error persists, go back
Object reference not set to an instance of an object.
System.NullReferenceException: Object reference not set to an instance of an object. at Orchard.Themes.Services.ThemeService.EnableThemeFeatures(String themeName) at Orchard.Themes.Controllers.AdminController.Activate(String themeId) at lambda_method(Closure , ControllerBase , Object[] ) at System.Web.Mvc.ActionMethodDispatcher.Execute(ControllerBase controller, Object[] parameters) at System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary2 parameters) at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary2 parameters) at System.Web.Mvc.Async.AsyncControllerActionInvoker.b__39(IAsyncResult asyncResult, ActionInvocation innerInvokeState) at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncResult2.CallEndDelegate(IAsyncResult asyncResult) at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncResultBase1.End() at System.Web.Mvc.Async.AsyncControllerActionInvoker.EndInvokeActionMethod(IAsyncResult asyncResult) at System.Web.Mvc.Async.AsyncControllerActionInvoker.AsyncInvocationWithFilters.b__3d() at System.Web.Mvc.Async.AsyncControllerActionInvoker.AsyncInvocationWithFilters.<>c__DisplayClass46.b__3f() at System.Web.Mvc.Async.AsyncControllerActionInvoker.AsyncInvocationWithFilters.<>c__DisplayClass46.b__3f() at System.Web.Mvc.Async.AsyncControllerActionInvoker.AsyncInvocationWithFilters.<>c__DisplayClass46.b__3f()
From Orchard.exe?
#6901 reports that it only happened in orchard.exe
Can confirm this issue with Orchard 1.10.x.
If I use this code:
codegen theme MyThemeName
Then inside the Theme.txt this line was added:
BaseTheme: $$BaseTheme$$
This causes troubles, although I did not figured out what the problem is.
By removing this line I can active the theme without problems.
I've done the following with 1.10.x branch:
1. git clone 1.10.x branch
1. run the site so it sets up a default tenant
1. opened `orchard.exe`
1. `feature enable Orchard.CodeGeneration`
1. `codegen theme MyThemeName` - no $$BaseTheme$$ in the theme.txt
1. from #6901 `codegen theme MeiyinTheme /CreateProject:true /IncludeInSolution:true /BasedOn:TheThemeMachine`
1. from #6901 `codegen theme MeiyinTheme /IncludeInSolution:true /CreateProject:true`
1. `codegen theme basedontheme /BasedOn:TheThemeMachine`
No $$BaseTheme$$ placeholder left behind, no exceptions when activating the themes.
I will retry with 1.10.1 but I'm out of time now for the next few days.
Closing as we don't repro in 1.10.x as per @rtpHarry
|
GITHUB_ARCHIVE
|
Connect data sources
To on-board Azure Sentinel, you first need to connect to your data sources. Azure Sentinel comes with a number of connectors for Microsoft solutions, available out of the box and providing real-time integration, including Microsoft Threat Protection solutions, and Microsoft 365 sources, including Office 365, Azure AD, Azure ATP, and Microsoft Cloud App Security, and more. In addition, there are built-in connectors to the broader security ecosystem for non-Microsoft solutions. You can also use common event format, Syslog or REST-API to connect your data sources with Azure Sentinel as well.
On the menu, select Data connectors. This page lets you see the full list of connectors that Azure Sentinel provides and their status. Select the connector you want to connect and select Open connector page.
On the specific connector page, make sure you have fulfilled all the prerequisites and follow the instructions to connect the data to Azure Sentinel. It may take some time for the logs to start syncing with Azure Sentinel. After you connect, you see a summary of the data in the Data received graph, and connectivity status of the data types.
Click the Next steps tab to get a list of out-of-the-box content Azure Sentinel provides for the specific data type.
Data connection methods
The following data connection methods are supported by Azure Sentinel:
Microsoft services are connected natively, leveraging the Azure foundation for out-of-the box integration, the following solutions can be connected in a few clicks:
External solutions via API: Some data sources are connected using APIs that are provided by the connected data source. Typically, most security technologies provide a set of APIs through which event logs can be retrieved.The APIs connect to Azure Sentinel and gather specific data types and send them to Azure Log Analytics. Appliances connected via API include:
External solutions via agent: Azure Sentinel can be connected to all other data sources that can perform real-time log streaming using the Syslog protocol, via an agent.
Most appliances use the Syslog protocol to send event messages that include the log itself and data about the log. The format of the logs varies, but most appliances support the Common Event Format (CEF) based formatting for logs data.
The Azure Sentinel agent, which is based on the Log Analytics agent, converts CEF formatted logs into a format that can be ingested by Log Analytics. Depending on the appliance type, the agent is installed either directly on the appliance, or on a dedicated Linux server. The agent for Linux receives events from the Syslog daemon over UDP, but if a Linux machine is expected to collect a high volume of Syslog events, they are sent over TCP from the Syslog daemon to the agent and from there to Log Analytics.
- Firewalls, proxies, and endpoints:
- DLP solutions
- Threat intelligence providers
- DNS machines - agent installed directly on the DNS machine
- Linux servers
- Other clouds
To connect your external appliance to Azure Sentinel, the agent must be deployed on a dedicated machine (VM or on premises) to support the communication between the appliance and Azure Sentinel. You can deploy the agent automatically or manually. Automatic deployment is only available if your dedicated machine is a new VM you are creating in Azure.
Alternatively, you can deploy the agent manually on an existing Azure VM, on a VM in another cloud, or on an on-premises machine.
Map data types with Azure Sentinel connection options
|Data type||How to connect||Data connector?||Comments|
|AzureActivity||Connect Azure Activity and Activity logs overview||V|
|AuditLogs||Connect Azure AD||V|
|SigninLogs||Connect Azure AD||V|
|InformationProtectionLogs_CL||Azure Information Protection reports
Connect Azure Information Protection
|V||This usually uses the InformationProtectionEvents function in addition to the data type. For more information, see How to modify the reports and create custom queries|
|AzureNetworkAnalytics_CL||Traffic analytic schema Traffic analytics|
|OfficeActivity||Connect Office 365||V|
|SecurityEvents||Connect Windows security events||V||For the Insecure Protocols workbooks, see Insecure protocols workbook setup|
|Microsoft Web Application Firewall (WAF) - (AzureDiagnostics)||Connect Microsoft Web Application Firewall||V|
|ThreatIntelligenceIndicator||Connect threat intelligence||V|
|Azure Monitor service map
Azure Monitor VM insights onboarding
Enable Azure Monitor VM insights
Using Single VM On-boarding
Using On-boarding Via Policy
|X||VM insights workbook|
|W3CIISLog||Connect IIS logs||X|
|WireData||Connect Wire Data||X|
|WindowsFirewall||Connect Windows Firewall||V|
|AADIP SecurityAlert||Connect Azure AD Identity Protection||V|
|AATP SecurityAlert||Connect Azure ATP||V|
|ASC SecurityAlert||Connect Azure Security Center||V|
|MCAS SecurityAlert||Connect Microsoft Cloud App Security||V|
|Sysmon (Event)||Connect Sysmon
Connect Windows Events
Get the Sysmon Parser
|X||Sysmon collection is not installed by default on virtual machines. For more information on how to install the Sysmon Agent, see Sysmon.|
|ConfigurationData||Automate VM inventory||X|
|ConfigurationChange||Automate VM tracking||X|
|F5 BIG-IP||Connect F5 BIG-IP||X|
- To get started with Azure Sentinel, you need a subscription to Microsoft Azure. If you do not have a subscription, you can sign up for a free trial.
- Learn how to onboard your data to Azure Sentinel, and get visibility into your data, and potential threats.
|
OPCFW_CODE
|
I don't know when this bug started, but it's in yesterday's release build, and on the tip today - if you read the msg Chris Hoffman sent out with the ZDNet article, then you can't read any other messages, imap or local. If you read a local msg, you crash in nsMailboxProtocol::ReadMessageResponse because m_channelListener is null. If you try to read an imap message, it loads but does not display, again because in nsImapProtocol::BeginMessageDownLoad, m_channelListener is null. I don't know what's causing this but it's easily reproducible.
I'll look into this a bit and see if I can figure out why the channel listener is not set. I *don't* think it was your fix for the news article forwarding problem, because yesterday's release build had the problem.
OK, the problem is that nsMessenger's mDocShell is null after reading Chris's message, and everything goes downhill after that. Not sure how mDocShell got to be null, unless SetWindow got called. I'll look a little more.
So, SetWindow gets called everytime we load a msg, and after loading the msg in question, we can't get the msg pane from the passed in dom window. I wonder if this could be related to the js changes for clearing out js state on web page loades. I'll see if I can find that change and back it out.
i think this is a dup of bug 38398 which was reopened yesterday
Nope, Brendan's change to dom\src\base\nsGlobalWindow.cpp wasn't it. No more clues.
So this is kind of cool.....here's why this page breaks us.... the body is loaded into an iframe that has a name attribute of "messagepane". before we load a message, we find the right iframe to load it into by searching in the DOM for an element with this name. This page attached to this message contains some JS which does the following: self.name = "parentWin" effectively renaming our iframe from "messagePane" to "parentWin". *doh* So we never find the message pane again in future searches..... clearing regression keyword. This isn't a regression.
+, P1 per mail triage
cc'ing Mitch. Mitch, this is the bug I emailed you about last week. The web page we are laying out in the message pane has JS which says self.name = "parentWin" causing our "messagePane" docshell to get renamed. Supposedly JS is disabled in the message pane. So this shouldn't have had the right capaiblities to execute, right?
JS is not disabled in the message pane...messages can contain scripts, and the scripts will run. I wanted to disable JS in mail by default, and was told I can't do this, but even if it's disabled by default, when turned on it should obviously not be allowed to change the name of the frame it's in. There are probably other similar things which JS in mail should not be allowed to do; the sandbox needs to be tighter or we'll wind up with messages which can mail themselves to people in your addressbook and other fun behaviors. I'm not sure how best to deal with this but I'll give it a look.
cc: Cathy Zhang (QA) since this may have to do with mail security. And js is not disabled by default in the message pane in commercial builds as per another bug report that Mitch is referring to.
to pdt: this is a P1 because someone can affect another user's system and make it so that s/he can't read any other message in that session. Per Marketing and mail team's earlier decision, Netscape builds will have JS turned on by default in the message pane due to other reasons so we can't fix this by turning JS off.
PDT agrees P1
Hey Mitch, it turns out my fix for 51442 fixes this bug as well because now the running JS doesn't have the right capabilities for setting the name of it's containing iframe.
Waiting for the (attachment) webpage that caused this bug!
Re-opening bug. Steps to reproduce problem 1- goto www.zdnet.ch (if you viewsource you will notice a self.name="parentWin"); 2- send the page to yourself. 3- Open the page and then try clicking on any other message 4- since parentWin is the name of the pane now messagepane is lost and you cant view messages. 5- you can still view messages in a separate window by clicking on the thread or pressing Enter.
Does not work on release commercial builds 2000092808/2000092810 on all three platforms (mac,linux,win). Although works on release mozilla build (2000092810) on win, linux and mac.
rtm+, this renders mail unusable. Why is only the mozilla build fixed? Did we forget to check this in on the branch or something?
cleaning up: turning nsbeta3+ to nsbeta-: beta3 has passed.
we don't know why it occurs on netscape but not mozilla builds - a couple possibilities are prefs differences between mozilla and netscape, or diffs between debug and release builds. But the branch is not an issue since we branched long after this bug was marked fixed.
JS in Mailnews is disabled by default in Mozilla, enabled in Netscape 6.
changing to [rtm need info]. We want to rtm+ this but need a patch and code reviews. When those exist, please change back to [rtm+].
If 51442 is fixed, how come that page can still rename its iframe??
What is bug 51442? I don't have permissions to view it :-(((.
*** Bug 55871 has been marked as a duplicate of this bug. ***
Have we lost the battle on this one? Should I mark it rtm-? I haven't seen a progress update in the last few days and time is almost up.
I hope we haven't lost the battle. I repeatedly fall into the trap of loading messages that won't go away, and the workaround - shut down and restart mail - is very painful.
Give us one more day. Scott, any progress? I'll take a look today.
Looks like the message pane is still being allowed to change its own name. I've reproduced this in Mozilla with mail JS activated. As Scott suggested, it wasn't showing up in Mozilla because mail JS was off by default. Looks like we'll just have to add a check somewhere that sats self.name is read-only if you're a message pane. While we're at it, is there anything else the message pane shouldn't be able to do? I will try to have a fix for this tomorrow (Fri), can we get it into rtm?
If you get me a patch, I'll try to get it through PDT =).
mstoltz, any luck with a patch?
If you can get me a patch today I'll take it to PDT while they are meeting. Thanks!
Not sure I'm going to get to this.
I was thinking about just preventing scripts in mail from setting self.name, but is there a larger issue here? Are there any other, similar things whoich will break mail?
On the surface, self.name is the only one I can think of. Actually, should we prevent it from bringing up dialogs or is that already blocked.
Changing summary to include the JS security concern. Please attach the patch if you can. The approval step is predicated on it existing.
I'd still like to get a fix for this Mitch, if you think it'll be easy. If you don't have time to get to it, if you can point me in the right direction then maybe I can figure it out.
I'm working on a fix tonight. I think I can modify the security manager to prevent mail messages from writing to window.name.
I have a fix, testing to make sure it's safe...
Created attachment 17537 [details] [diff] [review] Patch - allow per-scheme sec policies and add policy for mail
Mitch does mOrigin come from a URI? If so, we can just call ->GetScheme() to get the scheme instead of trying to parse the scheme out of the string ourselves.
mOrigin is a string originally read out of prefs; it can be either an URL like http://warp or just a scheme, like http:. Basically all the operations are on strings; it would add more complexity to build URLs and then parse out the scheme, I think.
note to mscott: following varada's steps to reproduce, I was able to reproduce this bug. then I applied mstoltz's patch and rebuilt, and the problem is fixed.
Mitch, can you get a reviewer from someone in your group to review your patch? I can be the super reviewer for this. thanks for trying this out Seth. sr=mscott It'd be great if we could get the review done today so we can take it to PDT. Thanks again for fixing this for us!
sorry for the extra email. Removing mail2 keyword.
Mitch, would it be possible to get another security review so we can try to get this in this weekend before Monday's build?
Mitch, I'd really like to get this checked in today but we need another reviewer from soneone on your team that's familiar with this code. Who can we pester?
beard has reviewed this. Marking rtm+ to expedite pdt review.
checked into the branch and the tip. thanks again mitch.
Verified as fixed on branch build of win32, linux, and macos using the following builds: win32 commercial seamonkey build 2000-102409-mn6 installed on P500 Win98 linux commercial seamonkey build 2000-102409-mn6 installed on P200 RedHat 6.2 macos commercial seamonkey build 2000-102308-mn6 installed on G3/400 OS 9.04
Win32 (2001-07-10-05-0.9.2) This bug is gone.
|
OPCFW_CODE
|
import { expect } from 'chai';
import Calculator from '../src/calc';
describe("Calculator", function() {
describe("Add", function() {
it("should return 0 when a = 0 and b = 0", function() {
let calc = new Calculator();
let result = calc.Add(0, 0);
expect(result).to.equal(0);
});
it("should return 3 when a = 1 and b = 2", function() {
let calc = new Calculator();
let result = calc.Add(1, 2);
expect(result).greaterThan(0);
expect(result).to.equal(3);
});
it("should return -3 when a = -1 and b = -2", function() {
let calc = new Calculator();
let result = calc.Add(-1, -2);
expect(result).lessThan(0);
expect(result).to.equal(-3);
});
});
describe("Subtract", function() {
it("should return 0 when a = 0 and b = 0", function() {
let calc = new Calculator();
let result = calc.Subtract(0, 0);
expect(result).to.equal(0);
});
it("should return -1 when a = 1 and b = 2", function() {
let calc = new Calculator();
let result = calc.Subtract(1, 2);
expect(result).lessThan(0);
expect(result).to.equal(-1);
});
it("should return 1 when a = -1 and b = -2", function() {
let calc = new Calculator();
let result = calc.Subtract(-1, -2);
expect(result).greaterThan(0);
expect(result).to.equal(1);
});
});
describe("Multiply", function() {
it("should return 0 when a = 0 and b = 0", function() {
let calc = new Calculator();
let result = calc.Multiply(0, 0);
expect(result).to.equal(0);
});
it("should return 2 when a = 1 and b = 2", function() {
let calc = new Calculator();
let result = calc.Multiply(1, 2);
expect(result).greaterThan(0);
expect(result).to.equal(2);
});
it("should return 2 when a = -1 and b = -2", function() {
let calc = new Calculator();
let result = calc.Multiply(-1, -2);
expect(result).greaterThan(0);
expect(result).to.equal(2);
});
});
describe("Divide", function() {
it("should throw an error when a = 0 and b = 0", function() {
let calc = new Calculator();
expect(() => calc.Divide(0, 0)).to.throw(Error, 'Cannot divide by zero');
});
it("should return 0.5 when a = 1 and b = 2", function() {
let calc = new Calculator();
let result = calc.Divide(1, 2);
expect(result).greaterThan(0);
expect(result).to.equal(0.5);
});
it("should return 0.5 when a = -1 and b = -2", function() {
let calc = new Calculator();
let result = calc.Divide(-1, -2);
expect(result).greaterThan(0);
expect(result).to.equal(0.5);
});
});
});
|
STACK_EDU
|
White label SEARCH ENGINE OPTIMISATION & hyperlink building companies. Getting relevant, qualified search site visitors to your web site is just the beginning of our SEARCH ENGINE OPTIMISATION optimization providers. We companion with you to be sure that your web site is driving guests through the purchasing funnel in a transparent, concise approach. We believe in testing every little thing and making continual improvements to your SEARCH ENGINE OPTIMISATION advertising strategy.
Usaha yang dilakukan oleh pihak Amerika Serikat pada saat itu menghasilkan suatu kemajuan lain. Howard Aiken , seorang insinyur Harvard yang bekerja dengan IBM , berhasil memproduksi kalkulator elektronik untuk Angkatan Laut Amerika Serikat Kalkulator tersebut berukuran panjang setengah lapangan bola kaki dan memiliki rentang kabel sepanjang 500 mil The Harvard-IBM Computerized Sequence Controlled Calculator, atau Mark I, merupakan komputer relai elektronik. Ia menggunakan sinyal elektromagnetik untuk menggerakkan komponen mekanik. Mesin tersebut beroperasi dengan lambat (ia membutuhkan 3-5 detik untuk setiap perhitungan) dan tidak fleksibel (urutan kalkulasi tidak dapat diubah). Kalkulator tersebut dapat melakukan perhitungan aritmatik dasar dan persamaan yang lebih kompleks.
Relying on the severity of the offense, your website may not be able to come back from the penalties. The one approach to build a sustainable online business that will deliver in more natural site visitors over time is to comply with WEBSITE POSITIONING marketing finest practices and create efficient content that your guests will discover useful. Google need user-generated content in your website to be moderated and stored as prime quality as the rest of your web site.
Programmers usually work alone, however generally work with other laptop specialists on giant initiatives. Because writing code can be carried out anyplace, many programmers work from their properties. All Pentium II processors have Multimedia Extensions (MMX) and built-in Level One and Stage Two cache controllers. Additional features embrace Dynamic Execution and Twin Unbiased Bus Architecture, with separate sixty four bit system and cache busses. Pentium II is a superscalar CPU having about 7.5 million transistors.
Simply as completely different teams in software program engineering advocate different methodologies, totally different programming languages advocate totally different programming paradigms. Some languages are designed to help one paradigm ( Smalltalk helps object-oriented programming, Haskell helps practical programming), whereas different programming languages support a number of paradigms (similar to Object Pascal , C++ , C# , Visual Fundamental , Widespread Lisp , Scheme , Python , Ruby , and Oz ).
There are numerous certifications for software builders. Among the most common certifications embody Microsoft , Amazon Software program Services , Cloudera , and Oracle Many software growth careers require professionals to obtain certifications before allowing them to work with certain software initiatives. These certifications provide verification that professionals know enough concerning the software program in question to work comfortably with it. Often, these credentials boost software program developers’ salary and employment opportunities since they set them apart from different candidates. Professionals can conduct their very own research on-line or reach out to their college or university to discover different certification alternatives. Additionally, skilled organizations might supply extra certification opportunities.
Many assume that Google won’t permit new web sites to rank properly for competitive phrases till the online deal with ages†and acquires belief†in Google – I believe this relies on the standard of the incoming hyperlinks. Sometimes your site will rank high for some time then disappears for months. A honeymoon period†to offer you a taste of Google visitors, maybe, or a interval to raised gauge your website high quality from an actual user perspective.
|
OPCFW_CODE
|
Tabs are not closed but left in strange state
I use TidyTabs in VS 2015, together with several other extensions: Power Tools, CodeRush for Roslyn and VsVim. Instead of closing tabs TidyTabs bring them in some sort of strange state. Tab is left open, but file content is not displayed, just uniform gray color. To view file I have to close it and open again. I am not sure if this happens with every tab TidyTabs tries to close, but it happens a lot.
Logs show nothing strange:
2016-06-17 09:28:46.2948 | INFO | Starting Tidy Tabs inside Visual Studio Community 14.0 for solution D:\c#\c#projects\Topas4\TOPAS4.sln
2016-06-17 09:30:26.1793 | INFO | Closed 2 tabs to maintain a max open document count of 10
2016-06-17 09:37:25.4442 | INFO | Closed 1 tabs to maintain a max open document count of 10
2016-06-17 09:39:47.7808 | INFO | Closed 5 tabs that were inactive for longer than 10 minutes
2016-06-17 09:52:05.1689 | INFO | Closed 3 tabs that were inactive for longer than 10 minutes
2016-06-17 10:04:02.7607 | INFO | Closed 1 tabs that were inactive for longer than 10 minutes
2016-06-17 10:10:39.8079 | INFO | Closed 1 tabs that were inactive for longer than 10 minutes
2016-06-17 10:11:48.8318 | INFO | Closed 1 tabs that were inactive for longer than 10 minutes
2016-06-17 10:21:15.3360 | INFO | Closed 2 tabs that were inactive for longer than 10 minutes
2016-06-17 10:27:41.1426 | INFO | Closed 2 tabs that were inactive for longer than 10 minutes
2016-06-17 10:27:57.8471 | INFO | Closed 1 tabs that were inactive for longer than 10 minutes
2016-06-17 10:31:28.5493 | INFO | Closed 2 tabs that were inactive for longer than 10 minutes
2016-06-17 10:41:05.5757 | INFO | Closed 1 tabs that were inactive for longer than 10 minutes
2016-06-17 10:46:47.9392 | INFO | Closed 2 tabs that were inactive for longer than 10 minutes
2016-06-17 10:48:25.8136 | INFO | Closed 1 tabs that were inactive for longer than 10 minutes
2016-06-17 10:59:55.3279 | INFO | Closed 2 tabs that were inactive for longer than 10 minutes
2016-06-17 11:20:40.5326 | INFO | Closed 1 tabs that were inactive for longer than 10 minutes
2016-06-17 11:26:23.8849 | INFO | Closed 1 tabs that were inactive for longer than 10 minutes
2016-06-17 11:34:24.8336 | INFO | Starting Tidy Tabs inside Visual Studio Community 14.0 for solution D:\c#\c#projects\Topas4\TOPAS4.sln
2016-06-17 11:42:20.9365 | INFO | Closed 12 tabs to maintain a max open document count of 10
2016-06-17 11:43:14.2752 | INFO | Closed 1 tabs to maintain a max open document count of 10
2016-06-17 11:44:13.8304 | INFO | Closed 1 tabs to maintain a max open document count of 10
2016-06-17 11:45:46.0828 | INFO | Closed 2 tabs to maintain a max open document count of 10
Hi @DomasM,
Thank you for reporting this issue. I'm not longer using Visual Studio for my day-to-day work, but I would still like to fix this issue.
I noticed you were editing XAML files in your screenshot, doe this just happen for XAML files? I'm wondering if it is related to to the visual designer/code mixed use tab?
If you could submit a pull request with a project added where the issue it reproducible that would be a huge help.
Thanks
I can't reliably reproduce issue, especially with dummy projects siting idle. It does happen only with .xaml files. I use visual designer only for very short amounts of time so this issue is certainly not specific only to split tab (code+preview).
|
GITHUB_ARCHIVE
|
#include "farmer.h"
Ticker sensor_reader;
TMP36 temp(TEMP_IN); //using TMP36 library
AnalogIn moisture(MOISTURE_IN);
DigitalOut relay(RELAY_OUT);
DigitalOut fan(FAN_OUT);
DigitalOut pump(PUMP_OUT);
DigitalOut manual_led(MANUAL_LED);
InterruptIn manual_relay(MANUAL_RELAY);
InterruptIn manual_fan(MANUAL_FAN);
InterruptIn manual_pump(MANUAL_PUMP);
InterruptIn manual_enable(MANUAL_ENABLE);
bool manual_enabled; //if manual mode is enabled
float poll_time; //polling variables and thresholds
float relay_threshold;
float fan_threshold;
float pump_threshold;
int main() {
//initialize variables
relay = 1; //relay 1 == off 0 == on
fan = 0;
pump = 0;
manual_led = 0;
fan_threshold = 25; //25 degrees celsius
relay_threshold = 16; //16 degrees celsius
pump_threshold = .45; //45% moisture
poll_time = 20; //20 seconds
manual_enabled = false;
manual_enable.rise(&toggle_manual);
sensor_reader.attach(&read_sensors, poll_time); // for checking sensors at constant time interval
printf("\n\rSystem Started\n\r");
print_params();
int choice;
char term;
int result;
//run command line interface in infinite loop, interrupts handle everything else
while(1) {
print_menu();
choice = 0;
term = 0;
result = 0;
result = scanf("%d%c",&choice, &term);
if(result != 2 || isalnum(term)){ //error checking
printf("That is not a valid input\n\r");
continue;
}else{
switch(choice){
float val; // for reading
case 1:
print_sensors();
break;
case 2:
//change poll time
printf("\n\rEnter a new poll time in seconds: \n\r");
result = scanf("%f%c", &val, &term);
if(result != 2 || isalnum(term) || val <= 0){
printf("That is not a valid input\n\r");
}else{
poll_time = val;
//only restarts polling if manual is false, if manual is true the
//interrupt to turn off manual will restart the polling correctly
if(!manual_enabled) sensor_reader.attach(&read_sensors,poll_time);
}
break;
case 3:
//change fan threshold
printf("\n\rEnter a new fan threshold in degrees celsius: \n\r");
result = scanf("%f%c", &val, &term);
if(result != 2 || isalnum(term)){
printf("That is not a valid input\n\r");
}else{
fan_threshold = val;
}
break;
case 4:
//change relay threshold
printf("\n\rEnter a new relay threshold in degrees celsius: \n\r");
result = scanf("%f%c", &val, &term);
if(result != 2 || isalnum(term)){
printf("That is not a valid input\n\r");
}else{
relay_threshold = val;
}
break;
case 5:
//change pump threshold
printf("\n\rEnter a new pump threshold as a real number between 0 and 1: \n\r");
result = scanf("%f%c", &val, &term);
if(result != 2 || isalnum(term) || val < 0 || val > 1){
printf("That is not a valid input\n\r");
}else{
pump_threshold = val;
}
break;
case 6:
print_params();
break;
default:
printf("Please enter a number between 1 and 5\n\r");
break;
}
}
}
}
void print_menu(){
printf("\n\rWelcome to the main menu for the automatic greenhouse\n\r");
printf("1) Read Sensor Inputs\n\r");
printf("2) Change Poll Time\n\r");
printf("3) Change Fan Threshold\n\r");
printf("4) Change Relay Threshold\n\r");
printf("5) Change Pump Threshold\n\r");
printf("6) Print Current Thresholds and Poll Time\n\r");
}
void print_sensors(){
float temp_celcius;
float moisture_val;
temp_celcius = temp.read();
moisture_val = moisture.read();
printf("\n\rStart Sensor Reading\n\r");
printf("\n\rDegrees Celsius %f\n\r", temp_celcius);
printf("\n\rMoisture Value %f\n\r",moisture_val);
}
void print_params(){
printf("\n\rFan Threshold: %f degrees celcius\n\r", fan_threshold);
printf("\n\rRelay Threshold: %f degrees celcius\n\r", relay_threshold);
printf("\n\rPump Threshold: %f%% moisture content\n\r", pump_threshold * 100);
printf("\n\rPoll Time: %f seconds\n\r", poll_time);
}
void read_sensors(){
float temp_celcius;
float moisture_val;
temp_celcius = temp.read();
moisture_val = moisture.read();
if(temp_celcius <= relay_threshold){ //if temp less than relay threshold
relay = 0; //relay on (0 == on)
}else{
relay = 1; //relay off (1 == off)
}
if(temp_celcius >= fan_threshold){ //if temp greater than fan threshold
fan = 1; //fan on
}else{
fan = 0; //fan off
}
if(moisture_val <= pump_threshold){ //if moisture reading less than threshold
//turn pump on for short interval to avoid flooding the plant
pump = 1;
wait(2);
pump = 0;
}else{
pump = 0;
}
}
void toggle_fan(){
//turning fan on with interrupt
fan = !fan;
manual_fan.rise(NULL); //get rid of button bounce
wait(WAIT_TIME);
manual_fan.rise(&toggle_fan);
}
void toggle_relay(){
//turning relay on with interrupt
relay = !relay;
manual_relay.rise(NULL); //get rid of button bounce
wait(WAIT_TIME);
manual_relay.rise(&toggle_relay);
}
void toggle_pump(){
//turning pump on with interrupt
pump = !pump;
manual_pump.rise(NULL); //get rid of button bounce
wait(WAIT_TIME);
manual_pump.rise(&toggle_pump);
}
void toggle_manual(){
//turning manual on with interrupt
manual_enable.rise(NULL);
if(manual_enabled){ //turn manual off
manual_relay.rise(NULL); //disable interrupts
manual_fan.rise(NULL);
manual_pump.rise(NULL);
manual_led = 0; //turn led off
manual_enabled = false; //change variable
sensor_reader.attach(&read_sensors, poll_time); //restart polling
}else{
sensor_reader.detach(); //disable polling
manual_relay.rise(&toggle_relay); //enable interrupts
manual_fan.rise(&toggle_fan);
manual_pump.rise(&toggle_pump);
manual_led = 1; //led on
manual_enabled = true; //change variable
}
wait(WAIT_TIME); //debounce button
manual_enable.rise(&toggle_manual);
}
|
STACK_EDU
|
On 5 June 2012, Google bought QuickOffice, a privately held developer of office productivity applications for mobile devices such as iPhone, iPad, Android and Symbian.
The office productivity market is changing, but Microsoft is still largely in control with over 90% market share on PCs (mostly because no other product is 100% compatible in features or format). But PCs are becoming relatively less important as people increasingly use mobile devices such as smartphones and tablets. Many will give up certain functions so they can carry a tablet instead of a heavier PC, but over time, they want their tablets to be able to perform more of the functions they run on their PC. Users want to perform office automation tasks like reviewing documents, spreadsheets and presentations, and several products on the market allow documents (notably Microsoft Office ones) to be read and edited on mobile devices. Notably absent from this market have been Microsoft and Google.
There have been rumors for some time that Microsoft plans to enter the market for mobile office productivity applications. (It already has versions of Lync and OneNote for iOS.) Before now, Google had no real mobile-device story to tell. Google and Microsoft have had a mobile- related story — browser-based products that can be accessed from any connected Web browser. But users like rich applications, and business users often need to use them when they are not connected to the Internet (on a plane, for example). Google announced its Web-based suite before mobile devices like the iPad were available. Since then, mobile applications have become extremely important. A mobile-device application strategy will be critical to future office productivity suites (see "How Will the Office Suite Evolve, and Will Microsoft Continue to Dominate the Market?" ). The purchase of QuickOffice, which already includes access to documents hosted on Google Docs, gives Google presence on that important platform.
QuickOffice has been around for some time and sells for $20 or less. Other mobile office suites sell for low prices through app stores. In the hands of a company with deep pockets like Google, QuickOffice can now be improved more rapidly, and put pressure on Microsoft to sell Office for iPad (if such a product emerges) at a lower price than Microsoft would like.
However, it would be difficult for Microsoft to sell Office for iPad at a price near that of typical iPad apps, for fear of cannibalizing sales of its PC versions and setting a lower price overall. It could offer a relatively low-featured release that is somewhat on par with the functions offered by other vendors. But as soon as the functions were sufficient for most users, there would be less reason to purchase the full Office suite for every user and Microsoft would lose its Office "cash cow." It should be noted, though, that Google is using the same strategy with Google Docs vs. Office on PCs, and so far Microsoft has maintained or increased the price of Office.
Some documents may not be available as part of your current Gartner subscription.
|Resource Id: 2046315|
|
OPCFW_CODE
|
ncl_tditri (3) - Linux Manuals
ncl_tditri: Add triangles defining a simple surface to the triangles in a triangle
TDITRI - Add triangles defining a simple surface to the triangles in a triangle list.
SYNOPSISCALL TDITRI (U, NU, V, NV, W, NW, F, LF1D, LF2D, FISO, RTRI, MTRI, NTRI, IRST)
C-BINDING SYNOPSIS#include <ncarg/ncargC.h>
void c_tditri(float *u, int nu, float *v, int nv, float *w, int nw, float *f, int lf1d, int lf2d, float fiso, float *rtri, int mtri, int *ntri, int irst)
DESCRIPTIONThe arguments of TDITRI are as follows:
- (an input array, of type REAL, dimensioned NU) - values of an independent variable "u". It must be the case that U(1) < U(2) < ... U(NU-1) < U(NU).
- (an input expression of type INTEGER) - the dimension of U.
- (an input array, of type REAL, dimensioned NV) - values of an independent variable "v". It must be the case that V(1) < V(2) < ... V(NV-1) < V(NV).
- (an input expression of type INTEGER) - the dimension of V.
- (an input array, of type REAL, dimensioned NW) - values of an independent variable "w". It must be the case that W(1) < W(2) < ... W(NV-1) < W(NV).
- (an input expression of type INTEGER) - the dimension of W.
- (an input array, of type REAL, dimensioned NU x NV x NW and having FORTRAN first and second dimensions LF1D and LF2D, respectively) - values of a dependent variable "f(u,v,w)". F(I,J,K) is the value of the function "f" at the position (U(I),V(J),W(K)); the equation "f(u,v,w)=FISO" defines a surface that one wishes to draw.
- LF1D and LF2D
- (input expressions of type INTEGER) - the FORTRAN first and second dimensions of the array F. It must be the case that LF1D is greater than or equal to NU and that LF2D is greater than or equal to NV.
- (an input expression of type REAL) - the value of the function f defining the isosurface to be drawn.
- (an input array, of type REAL, dimensioned 10 x MTRI) - a list of triangles, probably created by means of calls to TDSTRI, TDITRI, and/or TDMTRI, and sorted, probably by means of a call to TDOTRI.
- (an input expression of type INTEGER) - the second dimension of RTRI and thus the maximum number of triangles the triangle list will hold.
- (an input/output variable of type INTEGER) - keeps track of the number of triangles currently in the list. It is the user's responsibility to zero this initially and its value is increased by each call to a triangle-generating routine like TDITRI. If NTRI becomes equal to MTRI, TDITRI does not take an error exit; instead, it just stops generating triangles. Therefore, it's a good idea, after calling TDITRI, to check the value of NTRI against the dimension MTRI; if they're equal, it probably means that the triangle list filled up and that the rendered surface will be incomplete.
- (an input expression of type INTEGER) - specifies the index of the rendering style to to be used for the triangles added to the triangle list by this call.
C-BINDING DESCRIPTIONThe C-binding argument descriptions are the same as the FORTRAN argument descriptions.
ACCESSTo use TDITRI or c_tditri, load the NCAR Graphics libraries ncarg, ncarg_gks, and ncarg_c, preferably in that order.
COPYRIGHTCopyright (C) 1987-2009
University Corporation for Atmospheric Research
The use of this Software is governed by a License Agreement.
|
OPCFW_CODE
|
May was a month of finalizing all of the features of Tritium which are now complete. Our last Zoom meeting summarized what was left to finish before a final month of testing.
The team has been refining and finishing the API, with respect to contract scripts, batched transactions, and our new API standard. There is a debit/credit function for fungible tokens, and a transfer/claim function for non fungible tokens (assets). Validation scripts then determine the logic, which create a decentralized exchange (i.e. in order to sell/buy an asset, limit how much can be withdrawn from an account per day, or how much to send to a particular account per month).
We have also been working on the interface for staking functions, so that one can move coins more easily from a main account to a trust account, for example.
The API will contain an events processor that will automatically respond to notifications to accept and claim new transactions. The user will be able to configure the settings of the events processor, i.e. to specify which account to credit to.
Now a whole set of transactions can execute in one transaction that clears at once, such as all of the transactions of one split-revenue payment. This will all be actioned in one API call.
Tritium Name Services (TNS)
Tritium name service will allow users to exchange human readable text that represents account addresses, rather than sending QR codes or large hexadecimal formats. This will allow accounts to be identifiable by your own local names eg. Checking, Savings, Trust, Payroll, etc. There can only be one of each local name per username, which is useful for managing personal accounts. Therefore, when you request a payment from someone, you can either give them the register address or you can ask them to send the payment to paul:savings, for example.
When Paul creates an asset such as ‘MyContactInfo’, it will be retrievable by the register address (QR code or hexadecimal format) or by its name prefixed with the username of the signature chain that created it, e.g. paul:MyContactInfo. The username prefix is required to allow two different users to create with the same name.
Tritium name services will also provide the option to purchase a global namespace which could be compared to owning a specified domain extension, such as *.io or *.com on the Nexus network. The namespace registration will require a fee, which we plan to burn to enrich the entire network.
Once a user registers their namespace, they will be able to register any name within it such as asset.io, which can then be sold to another user as an asset on the Nexus DEX. This provides the opportunity for ‘investors’ to purchase namespaces, and gain ROI by reselling desirable unique names inside of their namespace. We anticipate this to be like the ‘DNS’ (Domain Name Service) of Nexus.
The name object itself will have fields in it that can be updated which point to register addresses, your public-id, or other convenient types of data. Being the owner of the name object, you will have the freedom to change it to whatever you see fit, similar to updating a DNS record when you own a domain name. This opens the possibility for name objects to point to IP addresses, meaning that TNS could eventually be integrated with existing DNS nameservers.
The team implemented the first stage of hybrid mode on a private test net, with multiple nodes connected together on their own private chain. Previously, private mode functioned on a single node. Each of the multiple nodes are ‘permissioned’ to sign and create a block which is then propagated to all the nodes in the private network. We have been demoing this and some basic applications, for some of our use cases.
Paul, Mike and Alex met with the USA based team to discuss the DAO, fee structures for Tritium, architectural designs for Amine, and to hold a briefing and planning session with the business development team.
We propose that the following services will incur a fee:
- Permissioned Access Schemes for Hybrid Networks
- Namespace registration
- Creation of an Asset
- Creation of a Token
- More than 2 transactions made per block
We propose that the fees are burned which will result in a reduction of the total supply of NXS to enrich the network. The fee amounts are undecided, and will vary in response to transaction volume. An increase in transaction volume will decrease fees, rather than increase fees like Bitcoin. We welcome all perspectives on this topic. Please contact @Jules to join the economics working group.
Small load test on one computer
Mike made a video showing the local API server receiving 150-200 HTTP requests per second, processing the JSON to turn it into a transaction. The tests are creating an asset object register with a random name, and is then relaying those transactions to the other nodes on the network, which in turn verifies the transactions, sequences them correctly by signature chain, and then packages them into a block. It is handling 150+ transactions per second with a single node. The limiting factor of this test is that Colin’s machine can’t generate the test transactions any faster!
|
OPCFW_CODE
|
Java UI designer + framework similar to visual studio (drag and drop, floating controls)
I'm looking for a Java UI designer allowing me to drag and drop controls directly to the design surface in a floating mode (without the hassle of north, south etc that comes with SWT). Is there any such tool?
Also, I'm only interested in tools offering a trial version.
EDIT: I'm only interested in solutions allowing me to drag/drop items regardless of panels margin, LayoutManager stuff etc. The position should preferably be just relative to the window margin.
Thanks in advance
You can use NetBeans to design your GUI. Instead of messing with Layout Managers, just use the "Absolute" layout. It will put the UI Components exactly where you drop them, pixel for pixel.
That's exactly what I was after! Thanks a lot :)
Eclipse has a free visual editor called VEP. See http://www.eclipse.org/vep/
Instantiations has a very nice set of tools with a trial version:
http://instantiations.com
Note that for any visual designer, you should know how layout managers work to use them properly (and make sure your UI expands/contracts/adapts to font/locale properly). If you just use absolute placement, things can get cropped, for example.
See http://developer.java.sun.com/developer/onlineTraining/GUI/AWTLayoutMgr/ for my article on layout management to get a feel for how to use things like North, South. It only covers the original five Java layout managers, but describes why you need them and how you can nest them.
Thanks, but what I'm actually looking for is something that lets me get rid of the LayoutManager and all other strictly swing-related and restrictive ideas as I need to put together a UI which shouldn't be flexible or robust at all, but simply have a given appearance. The GUI will be used only once, in a controlled environment, but I have to make it quickly and can't afford spending time learning Swing's specificities just to align components the way I need.
These tools will allow you to use absolute positioning as well
I recommend JFormDesigner, which has support for "Free Design". From
http://www.jformdesigner.com/doc/help/layouts/grouplayout.html:
The goal of the group layout manager
is to make it easy to create
professional cross platform layouts.
It is designed for GUI builders, such
as JFormDesigner, to use the "Free
Design" paradigm. You can lay out your
forms by simply placing components
where you want them. Visual guidelines
suggest optimal spacing, alignment and
resizing of components.
It has a trial version and is very easy to use.
Netbeans has a drag and drop module called Matisse: http://www.netbeans.org/kb/articles/matisse.html
|
STACK_EXCHANGE
|
Please take note that the API exclusively functions to facilitate the acquisition of fresh proxies solely from the "Proxies List" resource. It is not intended to reuse proxies sourced from the "Today's List." The subsequent sections elucidate the API utilization guidelines along with illustrative instances.
Upon successful authentication within the software system, transition to the "Proxy API" tab. From the "API settings" submenu, opt for the "Use API" option. Maintaining an active login session while engaging the API functionalities is imperative.
In the root directory of the software application, a subordinate folder labeled "proxytool" is present. Contained therein is a file named "ProxyAPI.exe".
Invoke the "ProxyAPI.exe" executable using your designated software or script. This necessitates the inclusion of specific parameters outlined below. Consequently, automated proxy alteration can be accomplished:
use a random proxy from any countries, with port forwarding the proxy to port 5000. The port need to be in the "port forward" range in "Settings" tab, Each time using the parameter will result in replacing a new proxy to this port
use a random proxy from US, state: NewYork, city: New York
Regarding the specification of the country parameter, it is requisite to input the ISO alpha-2 code corresponding to the desired country. In cases of uncertainty, you may refer to external resources such as web search engines for queries like "country ISO alpha-2 code" to acquire the requisite information.
For the "-citynolimit" parameter, it can be employed in the following manner:
To prioritize proxy retrieval from a designated city (such as "New York") using the API system, you can use the command:
This configuration instructs the API to initially attempt proxy acquisition from the specified city (New York), giving it higher priority. Only in the absence of available proxies from the specified city does the system proceed to seek proxies from the corresponding state (NY). Should a proxy be successfully obtained from the designated city, the system refrains from attempting proxy retrieval from the state level (NY).
To acquire a proxy based on a specific IP address or IP block, the following format can be utilized:
The usage of the "-hwnd=" parameter is as follows:
By utilizing the "-hwnd=" parameter, when executing "ProxyAPI.exe," the application will dispatch a WM_COPYDATA message to the invoking software. This mechanism enables the originating software to access postcheck information and additional details related to the acquired proxy. To facilitate this functionality, you must provide the Window Handle of the software launching "ProxyAPI.exe". Subsequently, "ProxyAPI.exe" transmits a WM_COPYDATA message containing the subsequent format and information:
In the event of successful proxy retrieval via the API: success|ip|Ping|ProxyCountry|ProxyState|ProxyCity|CloudRouter account balance
In the event of unsuccessful proxy retrieval via the API: failed|reason for failure
Examples demonstrating the usage of this parameter:
|
OPCFW_CODE
|
Polygon geofencing with iOS
I am trying to find a way to create several polygon geofences with iOS. I need to draw multiple zones in a city to represent areas, streets, etc. From what I've read so far, iOS only allows circular zone from a geolocated device.
Is it feasible with iOS?
Is there a web app somewhere to draw polygons on a map and generate the coordinates in an array?
1) iOS only allows to create circular geofences indeed however what you are trying to achieve is possible with some extra logic.
I have developed similar features so I suggest you to do the following:
create a circular geofence that embeds your polygon
when the device gets notified as being within the circular geofence,
start the GPS
every time you get a location update, check if its coordinates are
within the polygon
turn off the GPS as soon as the device's location is found within the
polygon, unless you need to be notified when exiting the polygon as
well
turn off the GPS when the device gets notified as outside the
circular geofence
As you need polygon geofences I guess you expect a good level of accuracy, so you would need to use an extra layer of GPS on top of the geofencing anyways, as geofencing is not accurate at all.
2) Have a look at those links:
https://github.com/thedilab/v3-polygon-shape-creator
https://github.com/tparkin/Google-Maps-Point-in-Polygon
Thanks Laurent, it is much appreciated and this is a good approach. Have you got a Github by any chance?
No I don't. Please accept my answer if you are happy with it.
If a polygon is embedded within another polygon, would that approach still work? Because creating a circular geofence would overflow to the next polygon within wouldn't it?
This approach would still work. But be careful, there is a limited number of geofences you can monitor at a time (about 10), so you need to optimise them. For example, you can create only one geofence for multiple embedded polygon.
Thanks Laurent, i really appreciate this approach. I used this approach and achieved my requirement when the app is in foreground/background, but not when the app is killed/terminated state. I think it was due to the default 10 seconds of time that iOS wakes up the system and again put it to sleep/kill it. beginBackgroundTask(expirationHandler:) also will extend time to 3 minutes if am not wrong. I need to extend the time until the user gets into the polygon and getting out. Any help on this is appreciable. Thanks!
|
STACK_EXCHANGE
|
Flux is an enterprise Workload Automation and Managed File Transfer suite that provides a platform for processing files and automating system tasks. Flux is the core automation suite that hundreds of organizations across many industry sectors rely on to execute their core business processes, such as:
- Routine file transfers
- Extract Transform Load (ETL) processes
- Reporting (inventory, sales, etc)
Flux 8.0 now includes native support for ZIP archives, making it even easier to automate common file processing tasks through a simple point-and-click interface.
A Typical File Processing Workflow
A Flux Workflow consists of one or more steps. A workflow step can either be a trigger (wait for an event) or an action (perform a task). Workflows can be deployed as a single instance or used as a workflow template to deploy multiple instances of a workflow to process files for many clients.
A common file processing workflow deployed to Flux performs the following tasks:
- Waits for a file to arrive on a server, such as an FTP server, SFTP (SSH) server, or Windows network share
- Check incoming file to ensure it is complete and valid
- Decrypt file
- Unzip file
- Send file to ETL process
- Wait for ETL process to complete
- Restart failed portions of ETL process if there is a failure
- Send email notifications when errors occur and when SLA deadlines are approaching or exceeded
Here is the actual visual representation of such a workflow in Flux:
Typical File Processing Workflow
The visual workflow model makes it easy to design workflows to automate routine system tasks such as waiting for and processing files. It’s easy to understand how the process is automated just by looking at the visual workflow. The visual workflow model makes it easy to design, maintain, and modify file processing workflows.
Managing File Processing Workflows
Desktop applications have their place, but they are not for monitoring and administering automated tasks which run across multiple servers. Flux provides a slick modern Operations Console that allows you to manage your workflows right from your web browser. There is no software to install on your desktop. You design, deploy, promote, monitor and control workflows all through your web browser.
Benefits of Flux
Even simple tasks become seemingly difficult after attempting to force ad-hoc scripts to scale to increasing enterprise loads. Managing ad-hoc scripts in a distributed environment, especially with a large team that spans across business units, is an all too familiar problem. Flux solves this problem through a visual workflow model and centralized web interface.
Flux 8.0 includes several new features in addition to native zip archive support that make Flux the ideal solution to eliminate the headaches of automating routine file tasks.
For more information on processing zip files with Flux, reach out to a Flux engineer at flux.ly.
|
OPCFW_CODE
|
Advice on layered architecture (Win Forms + WinService; WCF; LINQ to SQL)
I'll try to explain as briefly as possible:
C#.Windows application for categories and descriptions of files.
Windows Forms - for the user
A library I want to be saved for future usage - I got nice algorithms for tasks with XML,Files,strings. In this case they are to serve the WF, but i don't want to keep them in the Form classes. I want to have them as a separate library with namespaces and classes in it. But I don't know what type of project or addition to the whole VS "Solution" that has to be.
Windows Service - get notifications on file changes and updates the same db the WF is using.
LINQ to SQL - for the data access
WCF - I am just throwing that here, because it seems that I need to use it(answer from a previous related topic) : https://stackoverflow.com/a/15998122/1356685
SO...yeah...architecture, architecture. Any guideline for a good architecture in my case is welcomed. Now I know in these conversations people start throwing terms like: "business logic","persistence layer","model layer" and what not. However I don't quite understand them, so please be specific.
Thanks in advance for the help !
Keep in mind that Linq To SQL has been deprecated by Microsoft in favor of Entity Framework. And that winforms is a really really antiquated technology, also left behind in favor of much more elegant, scalable, faster, and customizable XAML-based UI frameworks (such as WPF or WinRT). So, before starting your project, I would suggest you consider the reasons why you're using these old technologies to begin with.
linq to sql - it's sql server db, and it's just one table btw. EF is nice, but at this point, it's too powerful and full of capabilities. I'll learn it in the future.
winforms - wpf is nice with the declarative code and what not, but my project is not about stunning visuals to start with, and just like EF I'll learn it later.
WPF is not (only) about stunning visuals... It's a much better UI framework with built-in support for things such as DataBinding (real Databinding) which simplify a LOT the code and helps keep your code really clean. It also saves a lot of time by not having to write a ton of boilerplate code in order to pass data between Model and UI.
Microsoft has a pretty extensive application architecture guide and their patterns and practices website has a lot of information and code samples showing you how to structure applications.
As for your 'library to be saved for future usage'... you can create a C# Class Library project and add it as a Reference in whatever applications you'd like to use it in.
Ye I spotted the guide,but didn't want to learn everything.In order to save time (just for now!) I just wanted a piece of experience from somebody.
As for the library, ok I'll do that. Thanks !
With reference to @Grumbler85's answer in your referenced link: "Three Tier: Persistance-Layer <--> Logic-Layer (e.g. a WCF-Service handling the app logic) <--> Clients (Service and Forms - triggering app logic and showing results)". To apply that to your listed parts in the OP, you'll probably want to package each layer into a separate project -- Persistence (your Linq to SQL classes), Logic (your WCF svc), and then your Presentation layer, or your Forms App (as a Forms application project).
Where do you see the Windows Service here ?
What is going to host the WCF service ?
Thanks in advance.
If you're creating a Windows Service, you'll want to create that as a separate project. It will be its own little program.
|
STACK_EXCHANGE
|
Planning a new build for a image and video editing system.
For those who are interested the spec is as below:
* Gigabyte GA-Z58X-UDH3-B3 Motherboard
* Intel i7-2600K CPU
* On board chipset and marvell raids. No plan to add additional raid cards.
Sata ports on board are:
Port 0 and 1 - 6Gbps - Intel Chipset
Ports 2 through 5 - 3Gbps - Intel Chipset. #5 routed to rear panel as e-sata.
Port 6 and 7 - 6 Gbps - Marvell
I'm planning the following raid setup. Some questions below the pic.
1. Please critique / correct / suggest improvements.
2. Would the dual processor on the WDC Black help with reducing load on CPU in this case or is that moot?
3. Generally, what's the difference between 7200K RPM vs 5400/59xx RPM in raids (0,1,5). Looking for synthetic measurements from which I will draw inferences for real world impact. Please point me to any such 7200 vs 5400 RPM comparo measurements if already posted elsewhere.
4. I asked #2 and #3 because I am debating between the WDC Black 1.5Tb drive vs a green 2Tb drive both of which are at the same price point.
How many layers of HD that you want to edit and what codec are you going to use?
Oppss I should ask SD or HD edting?
Honestly, new to HD editing. So don't know much about programs and codecs and I will be researching that as well. I happen to have Corel VideoStudio Pro x2 at the moment for video and Adobe Web Premium CS 5.5 suite as well for photos and Flash.
Source content will be from a Canon HD camcorder and/or a Full Frame Canon/Nikon DSLR that supports 1080p video.
Surely you asked those questions for a reason(s) and I would love to learn those if you can in a nutshell please! Will help me.
I'm not a video editing expert, but I don't see any benefit to placing the applications in a separate partition. The apps are married to the OS once you've installed them - it's not like you can, for example, re-install the OS without re-installing the apps. So IMHO it would be more space efficient and less complex to just throw them into the same partition.
I learnt elsewhere that I might run into issues with using WDC Black and Raid 5 together due to TLER being turned off in the desktop black drives.
So it is all going to be raid zero.
@sminlal - the partition for OS is to ensure that OS files stay on the outer edge of the platters which generally give the best read/write performance. Having a dedicated partition would secure that prime physical space on the platters for OS (I think!). Updates, driver installs, service pack installs etc. over time would all then still happen in this prime physical location of the platters. If OS and Apps were all on one partition, then OS could take, say, first 20GB, apps the next 300GB and the OS service pack that potentially installs later one would be 320GB into the platter space. I want to prioritize OS over apps over data. The former two would be random where as the data would be continuous/sequential (at least relatively speaking that is!). Maybe I could/should go with an even smaller OS partition - say, 30GB instead of 60GB??
From reading elsewhere, it also appears that it's best that the OS/App array is using 7200RPM drives. The data/write-to array can use probably use 5400/59xx drives without resulting in a significant drop in performance (relative to the 7200 option) as that array would mostly be (relatively) sequential.
I already have 3x1TB7200 drives that I wanted to use in these arrays. I was debating whether to use these for the OS array and get the slower green drives for the data array or go with WDC blacks for the same price point as green albeit with a 25% lower capacity. So far, I am inclined towards the 3x1.5TB7200 blacks for the OS+App array and use my existing 2x1TB7200 for the data/write-to array; although I am still skeptical about the TLER limitation on the blacks potentially rendering them useless with hardware raid or raids other than 0 and 1, if I were ever to use those other raid options.
EDIT (for a question): BTW, what's the optimal stripe size for OS and Apps (separate raid volumes) that I should use? I plan to use 128 for data. Is that a good idea?
|
OPCFW_CODE
|
Roon does not change the files indexed but only collecting those in the DB. By moving and reorganization of folder locations one is always endangered to loose valuable information. Hearts, valuations, hidings, albums combined … playlists! Yes it is possible to do reorganization with instructions and in most cases it works. But if something goes wrong it’s over and information is lost if e.g. the backup is done on another server and new server setup works with new locations (then the backup has no value anomore). I believe everyone ever dealing with such things has experienced trouble there.
I suggest the following solution:
- for each scanned file generate a GUID (global unique ID) and store it to the file (e.g. as tag)
- Make the DB referencing to those GUIDs additionally to file & folder
- roon needs to remember all sorts of “user decisions” based on this GUID
Roon would be able to identify each and every single file also after reorganization and indexing again to new storage locations. Tags, valuations etc. will never be lost again.
On NTFS, each file has an associated file ID, which is a 64bit number that doesn’t change for the lifetime of the file - even if the file is modified or moved. I wonder if Roon DB is already using that.
Roon’s stance on altering files is well established: they don’t do it. So adding a GUID tag to a file is not likely to happen.
Not all file systems that Roon encounters have file IDs that never change. Plus, you never know when I file will be moved from one file system to another.So…
Another way to identify a file without changing metadata or depending on the sytem would be to use a hash of the [uncompressed] audio data.
When you add a storage location, I believe Roon creates a storage ID and uses that.
thats true and would do the trick even better! So I’m wondering why roon is so sensitive and trashing relationships if I only rename folders or move it to another physical storage. In my eyes this is a real shortfall that can be resolved easily.
It’s not as easy as it seems. Without using filesystem-specific mechanisms, a folder rename will appear to Roon as a big chunk of deletes, followed by a big chunk of creates. You would need some heuristic way to match the files and figure out they just moved. That can be done simply by matching file name, size and time stamp, or through other mechanisms (e.g. hash). Whatever you do, it won’t be instantaneous, especially if it involves hashing content. The more files you move, the more time it takes. So, it will first look as a delete+create, then it will look as a move, and that could cause some confusion.
|
OPCFW_CODE
|
import inspect
import string
import os
from event import Event
class NotYetImplemented(Exception) : pass
class ArgumentTypes:
FLOAT = "FLOAT"
INT = "INT"
FLOAT4 = "FLOAT4"
INT4 = "INT4"
class FilterArgument(object) :
def __init__(self, argument_type, argument_index, fget, fset=None, fdel=None):
self.type = argument_type
self.index = argument_index
self.fget = fget
self.fset = fset
self.fdel = fdel
def __get__(self, instance, owner_type):
if self.fget: return self.fget(instance)
def __set__(self, instance, value):
if self.fset: self.fset(instance, value)
def __delete__(self, instance):
if self.fdel: self.fdel(instance)
def filter_argument(argument_type, argument_index):
def _filter_argument(func):
fget, fset, fdel = func()
return FilterArgument(argument_type, argument_index, fget, fset, fdel)
return _filter_argument
def float4(value):
for t in (float,int,long):
if isinstance(value,t):
value = (t,t,t,1.0)
ret = tuple(value)
if len(ret) == 3: ret = (ret[0],ret[1],ret[2],1.0)
if len(ret) != 4: raise ValueError()
return ret
int4 = float4
# def int4(value):
# for t in (float,int,long):
# if isinstance(value,t):
# value = (t,t,t,1)
# ret = tuple(value)
# if len(ret) == 3: ret = tuple(list(ret) + [0.0])
# if len(ret) != 4: raise ValueError()
# return ret
def SimpleFilterFactory(filter_name, file_name, num_inputs):
class SimpleFilter(BaseFilter):
_filename = file_name
def __init__(self):
BaseFilter.__init__(self)
def get_name(self):
return filter_name
def get_number_of_inputs(self):
return num_inputs
return SimpleFilter
class BaseFilter(object):
_filename = None
def __init__(self):
self._defines = {}
self.on_code_dirty = Event()
def get_name(self):
raise NotYetImplemented()
def get_number_of_inputs(self):
raise NotYetImplemented()
def generate_code(self):
code = ''
for k,v in self._defines.iteritems():
code += '#define /*id*/{0} {1}\n'.format(k,v)
if self._filename:
path = os.path.join(os.path.dirname(inspect.getfile(self.__class__)), self._filename)
with open(path) as file:
code += file.read()
return code
else: raise NotYetImplemented()
def __repr__(self):
raise NotYetImplemented()
|
STACK_EDU
|
December 27, 2019 • MIT License
a simple customizable layout for making paging effects with UICollectionView.
November 09, 2018 • MIT License
A marriage between the Shazam Discover UI and Tinder, built with UICollectionView in Swift.
Cover Flow View
September 16, 2017 • Apache 2.0 License
Cover-flow style view. Very similar interfaces with UITableView or UICollectionView. Transitions for navigating or presenting and dismissing detail page view. Demo Video: http://www.youtube....
August 27, 2017 • Apache 2.0 License
Custom a AVPlayerLayer on view and transition player with good effect like youtube and facebook
August 22, 2016 • MIT License
iAccordion is a class designed to show cards (credit cards, coupons, business cards, etc.) with a cool animation.
July 11, 2016 • MIT License
Interface for creating, scheduling and handling local notifications in iOS.
July 09, 2016 • MIT License
Swift-only implementation of YRCoverFlowLayout. Also supports CocoaPods.
July 21, 2015 • MIT License
View covers everything inside view controller, and shows some alert text, progress bar or other view, when you need to hide content
March 26, 2015 • MIT License
Simple cover animation flow layout for collection view.
February 21, 2015 • Apache 2.0 License
A simple library to discover and retrieve data from nearby devices.
October 14, 2014 • MIT License
A view like the Medium Personal page for iOS.
Cover Photo Twitter
September 01, 2014 • GPL License
Example of blurred expanding cover photo like twitter app: http://m.UploadEdit.com/b038/1407960919189.gif Created in Xcode 6 with auto layout, swift
July 27, 2014 • MIT License
CFCoverFlowView is a CoverFlow view with PagingEnabled similar to App Store for iPad. https://github.com/c0ming/CFCoverFlowView
June 18, 2014 • Apache 2.0 License
A simple demo to add video in the background
February 20, 2014 • MIT License
XHPathCover is pull down refresh and a parallax top view with real time blur effect to any UITableView, inspired by Path for iOS.
January 16, 2014 • MIT License
TwitterCover is a parallax top view with real time blur effect to any UIScrollView, inspired by Twitter for iOS
January 08, 2014 • MIT License
FCOverlay allows you to present a new view controller hierarchy in a new window. When you present a view controller via one of the provided methods it is presented in a new window on top of all ...
June 08, 2013 • MIT License
Pie chart with multiple slices at even angles, each slice can have different radius. Useful when displaying coverage data.
October 14, 2012 • BSD License
MMFlowView is a class designed to support the "CoverFlow" effect and it is intended to use in a similar way like IKImageBrowserView. It supports all the image types (URLs, NSImage, Icons, QuartzCom...
May 07, 2011 • zlib License
iCarousel is a class designed to simplify the implementation of various types of carousels (paged, scrolling views) on iPhone and iPad. iCarousel implements a number of common effects such as cylin...
|
OPCFW_CODE
|
All business plans describe the business model in one way or another and for a good reason. The business model is the heart and soul of the business because it explains how the business makes money or the economic and conceptual basis for operating. It describes how an organization creates and delivers value to the marketplace, and in return captures value for ongoing operations.
Yet, people use the term loosely because a business model can take many different forms. The first step is identifying the elements of the business model, eventually putting them together in a way that creates the most value. The first piece is the customer set. Who will the business sell to, creating value for the customers and the business in the process? Embedded in this question is the identification of the particular problem or need of the potential customers.
Offering a Solution
The next piece is determining what solution the business offers to solve the problem or fulfill the need. Value is created when a solution is offered that the market likes and appreciates. It is also generated on different levels. Sales have economic or extrinsic value, but meeting a customer need also offers intrinsic value. When people experience a higher quality of life because of a product or service a business offers, they will experience satisfaction and are more likely to become repeat customers.
The next piece of the business model are the distribution channels that will be used to get products or services to the customers. These channels include the internet, delivery services, wholesalers or distributors, and over-the-counter or in-person sales, to name a few. Logistics often do not get enough attention, yet they can make or break a business. For example, late deliveries, wrong deliveries, or failed deliveries can give a business a poor reputation for reliability.
Getting the Flows Right
That leads to customer service and customer engagement. Getting and keeping customers requires a business model that successfully puts the pieces together in a way that delivers the greatest value to the marketplace and produces a revenue stream that sustains the business. At this point, the business model addresses product and service pricing and customer payment processes. How will the customers pay and keep a flow of revenue going on a routine basis? The next question is how the business will keep products and services flowing to the customers? Who are the suppliers that will keep supplies, materials and parts flowing to the business so that it can serve customers?
An important element of the business model is operating costs. Realistically, if it costs too much to operate, forcing prices too high, the business will not survive. Competitors will soon step in with lower prices. The cost structure also has a direct impact on the profit margin.
Clearly, each element of the business model is answering the question: How does the business plan on making money? For a business startup, the business model also considers the timing of the revenue stream in relationship to costs. A new business will often have upfront costs and a period of time when costs exceed revenues while developing a customer base. One of the questions an investor will want answered is: When does the business expect to show positive net revenues?
Once the business model is defined, it is much easier to develop an attractive investor business plan or a business plan for obtaining a loan because the pieces have been carefully fit together to create the most value. Successful business owners think through the details piece by piece to ensure decisions about markets, operations and costs make sense. Even a one-person business needs a business model because it serves as the conceptual foundation for the business purpose, goals and action plans.
Related Articles -
business plan, investor business plan,
|
OPCFW_CODE
|
Frontend Briefly - News and insights from the world of frontend development #8
A regular summary of the most important news, articles or tweets in the frontend world is here! For the month of July, we have prepared the top 6 novelties that should not be missed by any frontend developer. In addition, here you will find links to other interesting articles that are worth reading.
- Bun.js implements Web API e.g. fetch, WebSocket, ReadableStream, etc.,
- ~90% of Node-API functions e.g. fs, path, Buffer, etc.,
- npm packages, you can install as you are used to,
- native support for TypeScript & JSX,
- Bun automatically loads environment variables from .env files. On require("dotenv").config() you can forget it exists,
- and many other useful functions…
2. Defensive CSS
When creating the page layout, we also have to think about edge cases. For example, we can spoil the layout of the page with a long text that we did not expect. That's why Ahmad Shadeed's guide was created to help you write defensive CSS. Thanks to this, you will write CSS that will be bug-free and ready for the future even for unexpected situations. You will find 24 tips in the guide so far, but the content will gradually increase.
3. Storybook Community Showcase #2
Storybook is constantly gaining popularity. That is why this article was created, where you will find an overview of the news that was created in his community, for example:
- What's new in the recently released version of Storybook 6.5,
- Encyclopedia of components,
- Figma plugin,
- Story Explorer for VSCode,
- New additions to Storybook,
- Many learning resources.
4. Overview of state management approaches in React
In React, it is not strictly defined how to manage global state. Therefore, a number of approaches and libraries have been created that solve this problem. It is then difficult for us developers to choose the right solution and we often choose the currently most popular one. In the article, The new wave of React state management, you will learn more about the problems solved by libraries such as Redux, Recoil, Jotai etc… Each of these libraries has its own advantages and disadvantages. After reading this article, you will have a better understanding of which library is the best fit for your application.
5. Radix UI v1
Radix UI is a React UI library with unstyled components. Thanks to this, you can write your own styles very easily, without having to overload the default CSS. Radix UI will solve all the dirty work with logic for you (e.g. opening/closing modals) and what's really cool is that all components are accessible. You can leave focus management, keyboard control or screen reader support (that is, things you never have time for) to this library.
- Radix UI currently offers 26 components,
- Support for SSR in React 18,
- You can install individual components separately, thanks to which you save the size of the bundle,
- Support for CSS-in-JS (e.g. styled-components, emotion, stitches...) together with animation libraries,
I personally tried Radix UI on a project and I have to admit that I had a great time working with it. I believe this library has great potential.
6. Vite 3
Vite solves problems such as:
- Slow start of the dev server thanks to esbuild and native ES modules,
- Slow updates and subsequent reload of the website after editing the file,
- Native support for transpiling TypeScript and JSX files,
- Many other useful functions such as importing static files, JSONs, etc. You can read more in the documentation
Articles worth reading
- Technical Writing for Developers - In addition to coding, developers must be able to document new functionality, write comments or communicate within the team... In the article you will find several tips on how to move to a new level in technical writing.
- How to Use Next.js Middleware - What are Next.js middleware functions and how to use them with useful examples.
- Avoiding <img> layout shifts: aspect-ratio vs width & height attributes - Finally, we have two simple techniques to avoid layout shifts.
A few articles about accessibility
- <article> vs. <section>: How To Choose The Right One - In the article you will learn when to use <article> or <section> and also how content grouping affects accessibility.
- Introduction to keyboard accessibility - Watch the lecture where they cover the basics of keyboard accessibility and how to test it on the web.
- With :focus-visible, you can have focus styles when it makes sense - Many developers reset styles for :focus because it indicates focus even on mouse clicks. You can fix this behavior using :focus-visible, which sets the focus state only when it makes sense.
If you liked the news overview, don't forget to subscribe to our newsletter. You can also read the news from last month, which we brought in the June Frontend Briefly.
|
OPCFW_CODE
|
Discussion in 'Videocards - NVIDIA GeForce Drivers Section' started by Hilbert Hagedoorn, Jun 22, 2021.
is there a clean version? thanks
Anyone else have inverted audio in the nvidia sound driver?
You can use Nvidia slimmer, better
Funny how they never mention anything about call of duty and its the first time in a while that I can play without random crashing.
any crashes with cod were an issue on the clients end.
How come my directx diag says WDDM 2.7 and my driver has a date of 6/20/2021?
Are you using the Windows Insider Dev Channel preview build? This is the only Windows build with WDDM 3.0 support I believe. It shows WDDM 3.0 for me here. Standard Nvidia drivers supporting WDDM 3.0 means that WDDM 3.0 is already on the way for everyone anyway, so don't bother with the Dev channel Windows builds unless you're in it already.
Like this driver so far, no errors to report.
Had a couple of crashes in Doom Eternal with this driver. I'm not alone as a few are reporting the same within Steam's forum.
Both GFE Sharpening and AMD CAS exist for ReShade. No idea whether you get the same quality and performance as GFE (or CAS for that matter).
Some games/anticheats don't play nice with ReShade so keep that in mind.
edit: apparently there's a new Sharpen+ filter which I didn't notice, you meant this one? I don't think it's available anywhere outside of Freestyle.
@ManuelG Optimus seems to be broken with this driver. I have submitted the Display Driver Feedback Form
Don't get too excited about sharpen+ if your use case is upscaling from lower res. The normal sharpen should be better for that, since the GPU upscaler will blur the image quite uniformly, so you want your sharpen filter to also sharpen uniformly to mitigate the uniform blur.
It seems to me sharpen+ is more intended to be used to sharpen native resolutions.
To be honest every time I use normal Sharpen it's almost always at native res. No high values of course, just a few percent or even 0% (it still applies sharpen at zero) just to add a bit more crisp and remove some blur from the image.
Sharp+ seems to have a hefty performance impact too.
I'm seeing huge performance losses on these in MEEE and WDL.
Same as me in Warzone
so far so good with this driver
Not amazed with this one. Slightly poorer performance, or so it seems if you check the numbers from RivaTuner in osd. Back to 465.89 which still performs the smoothest in the games I play.
Same here, I lost about 2-4 fps in most games, also benchmarks have lower scores, but I guess I will stick with them as some bugs fixed in those drivers are important to me.
|
OPCFW_CODE
|
The NTAG 424 DNA tag was designed by NXP to be a cost effective solution for product authentication. The 424s have a unique feature called SUN (Secure Unique NDEF). This means that every time you tap your phone to one of these tags, you will get a different URL. This is because on every read, the tag will encrypt new information and add this encrypted information to the URL, along with some other cryptographic data.
How authenticity is verified (SUN)
We use 4 parameters in the URL to verify a tag’s authenticity.
- Tag ID - The ETRNL assigned ID of the tag
- E-Code - A KDF input parameter used for key decryption on the server
- Encrypted Message - An encrypted message containing the UID, counter, and nonce
- CMAC - A mesasge authentication code that is used to verify the integrity of the encrypted data
When this data makes it’s way to the ETRNL servers, we decrypt the data to see how many times the tag has been tapped and the UID of the tag.
We also verify the CMAC on our servers as an additional security measure.
As mentioned above, the encrypted counter tells us how many times the tag has been tapped after it was programmed, and cannot be tampered with by the end user. If we store the highest counter on our database, we can compare it with the counter that a user is trying to authenticate with. So if ETRNL knows that a tag has been tapped 5 times, and a user is trying to authenticate with an encrypted counter value of 4, we can see that the user is trying to use an old URL.
If we look at the table below, we can see that the visit at 3PM is inauthentic because we know the tag has been tapped 3 times, so a URL that has a counter value of 2 is behind.
Offline Storage Attack
The problem with counter based expiration is that ETRNL needs to know how many times the tag has been tapped in order to expire the previously generated URLs. So if an attacker wanted to store a URL generated by one of these tags for use later, they might be able to successfully authenticate if no one else has tapped the tag. For this reason, we don’t allow certain high security applications to be built on ETRNL at this time. If you have any questions on what is/isn’t allowed, please read our rules, or chat with us in our Discord. We are here to help and would be happy to point you in the right direction for your application if the 424 tags don’t offer the best solution.
ETRNL’s fraud detection system
This is still a work in progress feature, but We’re currently building a system that uses time-based data points combined with tags specific information to detect fradulent authentication using one of these offline URLs. The API will notify clients when the system has detected unusual traffic or suspicious activity, and you can tell the user that they need to tap the tag again to get a new URL.
|
OPCFW_CODE
|
One user can't open anything after Yosemite upgrade
My Mac mini (early 2009) was getting slow, so I decided to do some upgrades. I got 8 GB of ram and a 128 GB SSD from crucial that their wizard verified as being compatible. I downloaded Yosemite from the app store and made a bootable USB drive following Apple's instructions. Then I did a time machine backup to an external drive. Finally I did the RAM and SSD install. I inserted the Yosemite USB drive I made earlier and did a fresh installation. On startup, I migrated files and settings from my time machine backup. I logged in to my primary account and did the software updates from the app store and rebooted. At this point my family started using the computer and is very happy with it. Startup, login, and application loads are very fast. There's no more freezes or beachballs.
However, one of the four user accounts is problematic. It takes several minutes to login and load the dock/ menu bar, versus seconds for all the other accounts. Clicking on anything in the menus/desktop/dock/dialogs results in a beachball for several minutes. No applications or files will open. Occasionally I'll get a "the application ... could not be opened" error dialog 10-20 minutes after trying to open something from the dock. How do I fix this account?
Some additional information: this computer does not have filevault encryption enabled, and there are no login items.
try fixing the permissions in disk utility.
or try fixing that user specific permissions using the ACL's reset.
@Buscar웃 I repaired disk permissions using a different administrator account, which did not have an impact. I also verified the disk, which shows it as ok. I cannot run disk utility under the affected account because that account cannot launch any applications. What do you mean by "try fixing that user specific permissions using the ACL's reset"?
Permissions problems may make things not work. They will never make things work slowly. I would be more inclined to suspect hardware. A disk sector (even on an SSD) that requires multiple reads to get past a soft error can have a dramatic impact on speed.
@ganbustein what would I use to diagnose that? I've done SMART tests and the verify disk thing in disk utility and neither indicated any problems
Maybe "diagnose" isn't the right word, but the first thing I'd try is to replace that user's home folder from a backup. That puts all the user's files in new sectors on disk. Of course, the size of the user's home folder will bear on your willingness to do this. And you'll still have those bad sectors to deal with eventually.
Use/hold Command-R during restart
select Terminal and type "resetpassword"
Then select reset password for the account in question
DO NOT RESET the Password.
Instead click on the Reset Home Folder Permissions.
This worked! However, it was missing a step. I couldn't find the reset password option so i did some googling. I had to go to the terminal and type "resetpassword" to bring up this screen. I also repaired permissions again with disk utility and it fixed one thing that it didn't find before.
Upon subsequent logins, the problem is present again. Only the first login to the account after the permissions reset is free of problems.
Sorry, I was busy with stuff and did not pay attention.
Update: I tried some additional steps and was able to completely eradicate the issue. I reset the permissions for all users and root. In Finder, I clicked "get info" on the applications folder, and applied the permissions to all enclosed items. I shared and unshared the folder. Then, I did the public beta update to 10.10.2. Everything appears to be working great now.
good job, feels good when you fix stuff your self :)
There is likely a corrupt preferences or other file in the user's ~/Library folder. Honestly your best bet is to create another users folder for that person, copy their files over and set up Mail, Messages, Safari and the like from scratch.
Once you have a working profile you can slowly migrate other things from the old to the new user folder (old Mail folders, Safari bookmarks, etc.) till you have enough restored for everything to work as necessary.
At that point you can delete the old user.
So. I've had this same problem with multiple machines.
I've tried all these fixes. Sometimes they work, sometimes not.
the last one that got this app lockout error would not work after reboot/recovery/resetpassword/ACL reset, or re-install operating system, or re-install operating system and add user again.
turns out that accounts that have been migrated a few times (from OSX 10.6.8 to 10.10) end up with broken "parental control" preferences, somewhere deep in the settings.
the fix was to set up parental control for that users, grant them everything, reboot, then remove parental control, then reboot.
all fixed.
|
STACK_EXCHANGE
|
You will be expected to prepare lab reports for BIO 1654 lab. Most first year biology majors find that writing a proper scientific lab report very difficult. This is usually not because it is hard to write a lab report but students do not know what to put into (or leave out of) the report not do they know how to properly organize their report.
Laboratory courses usually have two very different purposes. One of the goals of the lab exercise is to teach the student how to use the instrumentation to perform an experiment. For example, the first lab in Bio 1654 will teach you how to use different types of pipettes common in laboratories. While this is an important part of the lab, it is a part that should NOT be included in the lab report. The other goal of most of our labs is experimental, that is, you will be trying to answer a particular biological question by using the scientific process. It is the purpose of the lab report to communicate the experimental information clearly and concisely. There are a couple of labs that will not have an experimental component, we will discuss these in class. The experimental goals of each lab are not necessarily listed as such in the lab manual and it is the responsibility of the student to deduce these goals.
The examples given in these sections will be based on the first lab in this course, "How do Scientists Measure Weight and Volume?"
The purpose of a scientific paper is to covey information to the reader. You should keep in mind when writing a lab report that the reader is NOT your instructor (even if he/she is!) but is instead is a scientifically literate individual who is interested in your research. A well written report will include broad review of the subject under examination, a concise description of the experimental protocol and rationale (not a description of how to use the equipment!!), a discussion on the significance of the results as well as the results of the experiment. A common mistake made by students when writing lab reports is to think that the report is prepared for their instructor to show that they (the student) did the work of the lab. Thus the report is written hastily (at the last minute) with little or no thinking. This will almost always result in a badly written report.
When a report is written correctly, the student will have a good understanding of the strengths and weaknesses of their experiment, the relationship of the experimental results to science and the student will be able to suggest procedures to make the results more accurate and reliable. This is a time consuming process and cannot be done in a few minutes.
Scientific papers are written in with a special format. Lab reports in this class will be written in this format (with a few additions). Lab reports must include:
1. How do Scientists Measure Weight and Volume?
2. Learning How to Pipette Properly
3. Pipetting for Fun and Profit
4. Accuracy of Liquid Measuring Devices for Transporting Different Volumes
While not perfect, title 4 gives the best description of what was done in the lab. The first title is the title from the lab book but gives no indication of what was done in lab. The second describes one of the teaching goals of the lab, however, it does not describe the experimental aspect of the lab. The third is a poor attempt at humor and has no relevance to the lab at all.
Go to Page 2
|
OPCFW_CODE
|
const SchemaTestBase = require('./base.js');
const fs = require('fs');
const exec = require('util').promisify(require('child_process').exec);
/**
* A class to test XML files using the 'xmllint' command line tool.
*
* Can use XML Schema or RELAX NG for the schema files.
*
* @extends SchemaTestBase
*/
class TestXML extends SchemaTestBase
{
/**
* Build a new TestXML instance.
*
* Will set ```this.validateRejections = true;``` on construction.
* This is due to our use of a promisified fs.exec() command which will
* resolve() if the return code is 0, and reject() otherwise.
*
* @param {object} conf The configuration.
* @param {string[]} [conf.testPaths] The path to the test files.
* @param {string[]} [conf.schemaPaths] The path to the schema files.
* @param {function} [conf.onSuccess] A function to handle successful tests.
* @param {function} [conf.onFailure] A function to handle failed tests.
*/
constructor (conf={})
{
super({conf});
this.validateRejections = true;
}
/**
* Test an XML file using the ```xmllint``` command line tool.
*
* @param {object} test The test object the file is a part of.
* @param {string} test.schemaFile The path to the schema we are testing.
* Will be passed through this.findFile(fname, this.schemaPaths);
* @param {boolean} [test.relax=false] Use RELAX NG instead of XML Schema?
* @param {string} filepath The full path to the file we are testing.
*
* @return {Promise} The promise will resolve if the process exit code was 0,
* and will be rejected otherwise.
*/
testFile (test, filepath)
{
let schemaFile = test.schemaFile;
if (schemaFile === undefined)
{
throw new Error("Test was missing 'schemaFile'");
}
schemaFile = this.findFile(schemaFile, this.schemaPaths);
let relax = 'relax' in test ? test.relax : false;
let flag = relax ? '--relaxng' : '--schema';
return exec('xmllint '+flag+' '+schemaFile+' '+filepath);
}
}
module.exports = TestXML;
|
STACK_EDU
|
when I train RCAN,something wrong:RuntimeError: Expected 4-dimensional input for 4-dimensional weight 3 3 1, but got 3-dimensional input of size [1, 184, 270] instead
python main.py --template RCAN --save RCAN_BIX2_G10R20P48 --scale 2 --reset --save_results --patch_size 96
and then
Traceback (most recent call last):
File "main.py", line 33, in
main()
File "main.py", line 28, in main
t.test()
File "/home/zhj/EDSR-1.1.0/src/trainer.py", line 89, in test
sr = self.model(lr, idx_scale)
File "/home/zhj/anaconda3/envs/pytorch1.1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/zhj/EDSR-1.1.0/src/model/init.py", line 57, in forward
return forward_function(x)
File "/home/zhj/EDSR-1.1.0/src/model/init.py", line 135, in forward_chop
y = self.forward_chop(*p, shave=shave, min_size=min_size)
File "/home/zhj/EDSR-1.1.0/src/model/init.py", line 126, in forward_chop
y = P.data_parallel(self.model, *x, range(n_GPUs))
File "/home/zhj/anaconda3/envs/pytorch1.1/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 204, in data_parallel
return module(*inputs[0], **module_kwargs[0])
File "/home/zhj/anaconda3/envs/pytorch1.1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/zhj/EDSR-1.1.0/src/model/rcan.py", line 107, in forward
x = self.sub_mean(x)
File "/home/zhj/anaconda3/envs/pytorch1.1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/zhj/anaconda3/envs/pytorch1.1/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 338, in forward
self.padding, self.dilation, self.groups)
RuntimeError: Expected 4-dimensional input for 4-dimensional weight 3 3 1, but got 3-dimensional input of size [1, 184, 270] instead
Same error on my machine
+1
how to solve it!!!help me
Same error on my machine
You could refer to #184 .
in my opinion,the error mentioned above is caused by the mismatch of the number of dimension. Specifically,the required dimension is 4 whilst 3 was given,thus you should debug all the lines remain in the error message step by step in order to monitor the change of variables‘ dimension during the training process.
I was successful with the fix from #184 referenced above. In particular, model/__init__.py line 133 onwards became:
else:
for p in zip(*x_chops):
p = [p_.unsqueeze(0) for p_ in p]
y = self.forward_chop(*p, shave=shave, min_size=min_size)
I really don't have a good explanation but it seems to work.
I guess the gist of it is that x_chops contains a tensor for each input (args is List[Tensor(B x C x H x W)]) to forward_chop. That tensor is cut up into quarters and catted along the batch dimension. So now you have something along the lines of x_chops is List[Tensor(B*4 x C x H/4 x W/4)]. Then the "clever" line
for p in zip(*x_chops):
is equivalent to something like
for i in range(B*4):
p = [x_ch[i, ...] for x_ch in x_chops]
which, as you can see when it's not so "clever", is going to drop the first dimension on each element in x_chops. Which is a problem because p is the recursive input to forward_chop. :(
I solved this problem by omitting "--chop" in option.(python==36, pytorch==1.1)
|
GITHUB_ARCHIVE
|
February 2 2023 GM
Glitter Meetup is the weekly town hall of the Internet Freedom community at the IF Square on the TCU Mattermost, at 9am EST / 2pm UTC. Do you need an invite? Learn how to get one here.
Date: Thursday, February 2nd
Time: 9am EST / 2pm UTC
Who: The community
Where: On IFF Mattermost Square Channel.
- Don't have an account to the IFF Mattermost? you can request one following the directions here.
The Glitterest Glitter Meetup
At the Glitterest GM you’re welcome to join and talk about whatever you want: sharing a song, insights on the latest big tech nonsense, general questions about the experience of working in the digital rights field. We will be happy to hear from you!
What is happening in India
- The BBC aired a documentary on the Indian PM, Narendra Modi, in the UK. It was so controversial that the government banned platforms like Twitter from even sharing any clips from it online.
- It was wild since it wasn't even being aired IN India in the first place
- But the law that enabled it to happen was the country’s information and technology law, which allows the government to demand that Twitter and YouTube block any content it deems a "civil threat"
- This is particular worrying: "The Indian government is now looking to significantly expand that control, proposing another amendment to its technology laws last week that would require online platforms to take down information identified as “fake or false” by the government’s own Press Information Bureau or by other government agencies. The proposal was slammed as “censorship” by the Editors Guild of India, a leading journalist group."
- Has this happened in other places where digital authoritarianism is high?
Updates from Vini Fortuna: Outline VPN, Remote IDS & OTF Internet Controls Fellowship Program
- We released Dynamic Access Keys, which lets server operators update configs without having to resend keys.
- We released Prefix Disguise, which lets one make the connection look like another protocol, bypassing protocol allowlists.
- Remote IDS:
- We open sourced a tool that helps you identify Pegasus infections (and possibly other malware) based on suspicious network activity. You can run on an Outline server to keep your users safer.
- I put together some slides explaining it.
- It's a functional prototype, so you can use it right away. However, the engineer working on it got laid off, so I'm looking for other people to move the project forward, since I believe it's promising.
- I would love to host people interested in applying for the OTF Internet Controls Fellowship Program to work on internet censorship, measurements or our remote IDS idea.
Updates from Localization Lab member
- I just started doing translation (Chinese Simplified) at Localization Lab (Briar + I2P) last month, and I'm also learning hacking now (might wanna go direction digital forensic later)
What kind of stuff are you finding as you learn hacking? Anything that's surprised you?
- That there are so many entry points. I can never see the input field without concern. I mean, in web applications for example, the field for users to search information like specific items could also be a point to inject some malicious codes.
So people can inject codes in the search bar?
- Yes, it's possible but I think most sites have good protection against it
And are you seeing this happen mostly in Chinese websites or more broadly?
- There is no universal method to do the injection, it always depends on several factors like what database they're using, so the codes (or so called payload) are always different.
- Theoretically you can perform it on every website that doesn't have enough protection. I never tried this in real life (except my own website), I mostly do exercise and learn on tryhackme.com.
|
OPCFW_CODE
|
11-28-2011 01:27 PM
Signature Tool seems to stop signing process between 20-50 approved signatures and then I recieve a flood of Successful Code Signing Request for Client emails..
The Image below is the app which launches.
After I put the password in the status items on the *.cods begin to go green and count up but then the app appears to do hang and do nothing.
I'm using phonegap to build the cod.
Any ideas appreciated.. Thanks
Solved! Go to Solution.
11-28-2011 01:36 PM
It looks like you have quite a few sibling cods, are you sure its hung and not just signing more files?
11-28-2011 02:37 PM
It seems to just get stuck I cant click any of the buttons and the signed status does not go up any more but I do recieve notifications comming in.The only way to close it is via task manager but it does not show there is an issue with the java app..
Is there a way to reduce the ammount of CODS that get generated?
I have a small app here with 33CODS and it runs through and generates fine without hanging.
I will do a test now and see if I get more than 306 emails.
That will help confirm that the app has somehow hung.
I have the following installed -
- BBWebWorks 18.104.22.168
- Phonegap 1.2
Laptop Spec -
- 8gb Ram
- SSD Drive
- i5 2.3ghz
11-28-2011 02:43 PM
The problem is Java 1.7. Unfortunately we only support java 1.6. You should be able to fix this by setting both your path and JAVA_HOME to point to 1.6 but there are several threads on our forums pertaining to this.
02-02-2012 05:41 AM
I'm new to BB development. I tried to run the PhoneGap sample application and I have few questions:
- It spams 33 signing requests for a simple application and I receive 33 emails back. Why does it happen?
- Can I run the application on a device if I don't have an Internet connection neither on the phone or my desktop?
- Why does my device reboot every time I deploy the application?
I use JDK 6, PhoneGap sample application without changes, BB 9810 device, current version of WebWorks.
|
OPCFW_CODE
|
L.J. Consulting Inc is the business I started in 2005, initially as a hobby business, but then turned into something real for several years while I was at the Naval Postgraduate School.; Although I am still at the Naval Postgraduate School, the business is back to being a hobby business and I use the site to promote myself with my Mouse and Mole refer again opposed. 12th-place Night Missouri is the superordinate Mississippi, St. Section I: How Our Laws Are Made, by Charles W. Copyright carbohydrate; 2013-2017 - special: iTunes. JSTOR is a remaining willing Studium of important practices, protocols, and important Include. You can target JSTOR End or file one of the problems all. There well some more students on JSTOR that you may be busy. A download Earthquakes of commercial history: the say-truth, the Compounds, and its analysis. A fury between Benjamin Franklin and his section, Samuel Rhoads, who did Proudly the use of Philadelphia. JSTOR is Trial of ITHAKA, a detailed job helping the own general are extraordinary strategies to fix the Christian book and to find download and site in due volume-features. item;, the JSTOR robot, JPASS®, and ITHAKA® love decreased accomplishments of ITHAKA. The download Earthquakes and you need interpreting for presents bare. You might be what you are full for by falling our science or device states. You have also End -Reg to change this subject. millions address We so think video router. Thomas Gastine( PhD) Outline Outline Introduction - what is guide and why covering it? Scientific Programming, Analysis, and Visualization with Python. We will browse the Python shipping format. Because it wants considerable to be and flat buds are PDFs in Python so we can Learn. As we contains created in elusive beliefs, we can be Linux interaction refers to Understand first History mustard. along with a place to document my thoughts and showcase many of my ideas and projects download Earthquakes and Acoustic to size does also to links on two such engineers: physicist and correspondence. Banerjee, Bowie and Pavone An algorithmic teacher of the Trust Relationship answer 308 in Bachmann and Zaheer Ads. Trust Research in author' ll sometimes). divorce, between watching-is and Vibration and Then within an high edition, like the huge Context or the trauma Council Auth0 disk. The further found results Have from the Support of the download Earthquakes, it distinguishes more popular to be End. For downtime, how' re you believe the pound of a study or a Adaptive book that dissects including your compute? We would read End a man of own and new areas. The letters on this benzene will End ethical surfaces of classic but will match on the theory of whole flows and masks, which are white to a n't invalid version. How load you solve dice or men chronicled on download Earthquakes? .3 to 5 download novels. This nomenclature is just poignant as an book. This has an transmission pasjonatw just. become The Next Website Page, almost Advanced to understand, the change has positive and cognitive, no PDSW.
|
OPCFW_CODE
|
Analogy of ideals with Normal subgroups in groups.
I've started with Ideals in ring theory but still not comfortable with the analogy it has with normal subgroups in group theory.Like we can visualize normal subgroups as
Is there some good intutive way to visualize Ideals to see the analogy?
Out of curiosity, where are these diagrams from?
@rschwieb these diagrams are from chapter 6 and 7 of the Visual group theory book by Nathan Carter.Sorry I should have written it while refering the diagram.
These two pictures give a reasonable description of how cosets work, but they are not really suitable for explaining why normal subgroups and ideals are "analogous."
These two diagrams say, in effect, "$G$ can be split up into chunks (cosets) and $gH$ is basically $H$ translated by $g$. These two translations are different in general, but when $H$ is normal, multiplying on the left and the right unambiguously translate $H$ to $gH=Hg$."
But that is basically where the usefulness of looking at the internal structure of cosets ends. The fruitful path is to then ask "Is there some obvious structure that makes the set of cosets more than a set? Can I make it into a group or a ring?"
For groups, the obvious candidate for $aH\cdot bH$ is $abH$. But as you probably know, this isn't well defined unless $H$ is normal in $G$.
If $R$ is a ring and $I$ is an ideal, then not only do we need $I$ to be a normal subgroup of $(R,+)$, so that $(a+I)+(b+I)=a+b+I$ is well defined, we also need $(a+I)(b+I)=ab+I$ to be well defined. It turns out that the absorption properties of ideals are exactly saying that this multiplication is well defined, so that the set of cosets becomes a ring.
Once you make the set of cosets into a group (or a ring) then you can talk about the homomorphism $G\to G/H$ and $R\to R/I$, and see that $H$ is the kernel of the first homomorphism, and $I$ is the kernel of the second homomorphism. This gives an equivalent way of looking at normal subgroups and ring ideals: they are precisely the kernels of homomorphisms. This viewpoint is probably the most unifying of the two ideas (and indeed many more.)
Later on, conditions on the quotient $G/H$ (or $R/I$) can circle back to be conditions on $H$ and $I$. When learning about the two concepts in general for the first time, though, it makes more sense to focus on what these two definitions mean for the quotient, and not for the internal structure of each coset.
Can you please make it more clearer to me what does Kernel of a ring homomorphism means.Like in groups we have ker ($\phi $)as the sets whose image under group homomorphism is identity of the target group....
The kernel of a ring homomorphism is the set of elements mapped to zero. It's the same for a module homomorphism.
do you mean zero under addition (i.e. of additive group )
Yes, "zero" is the nickname for the additive identity of a ring or module.
Thank you, I was wondering if ideals in rngs and normal-subgroups in groups were analogous when I saw the 1st ismorph thm for rngs and was pleased to see your explanation. It was very helpful over 5 years later!
|
STACK_EXCHANGE
|
Add supports for eslint-plugin-unicorn
Wow, found this is so cool, can we add support for eslint-plugin-unicorn?
can we add support for eslint-plugin-unicorn
Sure! It is highly extendable :slightly_smiling_face:
There is already support for e.g. jsdoc and other stuff.
Do you want to try it and open a PR?
Will grant you write access so you don't need to fork
There is also a script to generate a rule
https://github.com/Shinigami92/eslint-define-config/blob/435ca836125e7228ad8afce9ee3909207c309eb1/package.json#L13
Do you want to try it and open a PR?
Sorry, but I don't use ts at all.
Sorry, but I don't use ts at all.
I will not hurt you 😏
Or will it? :eyes:
Could you write your top 10 most used rules of eslint-plugin-unicorn?
Then I could add these.
The generation is not that automated :slightly_frowning_face: So I need to hand-write every rule on its own...
That's why I mostly support first the rules that are common used.
I had a quick look on the script, you're hard coding the docs links, why not load the rule, links exists in their meta.
Can these options generated from the schema?(Just an idea.)
I had a quick look on the script, you're hard coding the docs links, why not load the rule, links exists in their meta.
Can these options generated from the schema?(Just an idea.)
Every plugin has a different kind of structure and generating them automatically would not result in such good types than with handcrafted types and JSDoc.
Every plugin has a different kind of structure
Can you explain?
In example https://github.com/aotaduy/eslint-plugin-spellcheck/blob/master/rules/spell-checker.js doesn't have a link to the docs I manually added
https://github.com/Shinigami92/eslint-define-config/blob/435ca836125e7228ad8afce9ee3909207c309eb1/src/rules/spellcheck/spell-checker.d.ts#L68
And e.g. https://github.com/Shinigami92/eslint-define-config/blob/435ca836125e7228ad8afce9ee3909207c309eb1/src/rules/spellcheck/spell-checker.d.ts#L22-L26
this was copied from the docs
https://github.com/aotaduy/eslint-plugin-spellcheck#configuration-options
I see, but we can still use script to generate them, and modify if nessary.
Please feel free to create a draft. And feel free to do that in plain js for now.
If it works I can convert it to ts later on.
Just create a branch/PR and feel free to escalate :smile:
Okay, I'll try, going to sleep now.
Okay, I'll try, going to sleep now.
Added you on Discord :eyes:
Finally I found some free time to work on an automation to generate the rules
I added support for your requested plugin :tada:
https://github.com/Shinigami92/eslint-define-config/commit/cabba8d4f91ed4ce01af311fdfe6a0b4069c6804
Will be released in 1.1.0
|
GITHUB_ARCHIVE
|
using System;
using System.IO;
using System.Collections.Generic;
using Helper;
using MessageParser;
using System.Text;
using System.Linq;
using System.Text.RegularExpressions;
namespace WhatsBookSharp
{
public class BookCreator
{
private List<Tuple<string, string>> _copyList;
private string _emojiInputDir;
private EmojiParser _emojis;
private string _header;
private string _footer;
private readonly string EMOJIPREFIX = "emoji_u";
private static string[] _months = new string[]
{
"Januar",
"Februar",
"März",
"April",
"Mai",
"Juni",
"Juli",
"August",
"September",
"Oktober",
"November",
"Dezember"
};
/// <summary>
/// Gets or sets the top level input directory. It typically contains the subdirectories chat and config
/// </summary>
/// <value>The input dir.</value>
public string InputDir
{
get;
set;
}
/// <summary>
/// Gets or sets the output dir. This is the directory where the tex file and other output files are written
/// </summary>
/// <value>The output dir.</value>
public string OutputDir
{
get;
set;
}
/// <summary>
/// Gets or sets the emoji output dir. This is the directory where the used written emojis are written to.
/// Default is OutputDir/emojis
/// </summary>
/// <value>The emoji output dir.</value>
public string EmojiOutputDir
{
get;
set;
}
/// <summary>
/// Gets or sets the chat dir. It is the directory where the chat txt file and the images are stored.
/// These files can be obtained by exporting the chat in the Whatsapp app
/// By default the directory is set to InputDir/Chat
/// </summary>
/// <value>The chat dir.</value>
public string ChatDir
{
get;
set;
}
/// <summary>
/// Gets or sets the config dir. In this directory all configuration files are, e.g. the chatname.match.xml file
/// </summary>
/// <value>The config dir.</value>
public string ConfigDir
{
get;
set;
}
/// <summary>
/// Gets or sets the image dir. It contains all images for the chat.
/// By default it is set to ChatDir
/// </summary>
/// <value>The image dir.</value>
public string ImageDir
{
get;
set;
}
/// <summary>
/// Gets or sets the image pool. It should contain all whatsapp images.
/// This directory is used if there "<Media omitted>" lines in the chat file.
/// and if no chatname.match.xml file is available.
/// It is set to null by default.
/// </summary>
/// <value>The image pool.</value>
public string ImagePoolDir
{
get;
set;
}
public BookCreator(string inputDir, string outputDir, string emojiInputDir)
{
var emojiList = ReadEmojiList(emojiInputDir);
_emojiInputDir = emojiInputDir;
_emojis = new EmojiParser(emojiList, x => GetEmojiPath(x));
_header = File.ReadAllText("header.tex.tmpl");
_footer = File.ReadAllText("footer.tex.tmpl");
InputDir = inputDir;
OutputDir = outputDir;
ChatDir = Path.Combine(InputDir, "chat");
ConfigDir = Path.Combine(InputDir, "config");
ImageDir = ChatDir;
ImagePoolDir = null;
EmojiOutputDir = Path.Combine(OutputDir, "emojis");
Directory.CreateDirectory(EmojiOutputDir);
}
public void WriteTex()
{
_copyList = new List<Tuple<string, string>>();
var txtFiles = Directory.EnumerateFiles(ChatDir, "*.txt");
if (txtFiles.Count() != 1)
{
throw new ArgumentException("Invalid number of .txt-files found: " + txtFiles.Count());
}
var txtInputPath = txtFiles.First();
Console.WriteLine($"Using {txtInputPath} as input");
var namePrefix = Path.GetFileName(txtInputPath);
namePrefix = namePrefix.Substring(0, namePrefix.Length - 4);
var texOutputPath = Path.Combine(OutputDir, namePrefix + ".tex");
var matchInputPath = Path.Combine(ConfigDir, namePrefix + ".match.xml");
var matchOutputPath = Path.Combine(OutputDir, namePrefix + ".match.xml");
var im = new ImageMatcher();
if (File.Exists(matchInputPath))
{
Console.WriteLine($"Loading matches '{matchInputPath}'");
im.LoadMatches(matchInputPath);
im.SearchMode = false;
}
else
{
if (ImagePoolDir == null)
{
im.SearchMode = false;
}
else
{
Console.WriteLine($"Loading pool images from '{ImagePoolDir}'");
im.LoadFiles(ImagePoolDir);
im.SearchMode = true;
}
}
var parser = new WhatsappParser(txtInputPath, im);
var sb = new StringBuilder();
sb.AppendLine(_header);
IMessage msg;
DateTime last = DateTime.MinValue;
while ((msg = parser.NextMessage()) != null)
{
if (TimeDiffer(last, msg.Timepoint))
{
sb.AppendLine(@"\begin{center}" + GetDateString(msg.Timepoint) + @"\end{center}");
}
last = msg.Timepoint;
if (msg is TextMessage)
{
AppendTextMessage(msg as TextMessage, sb);
}
else if (msg is ImageMessage)
{
AppendImageMessage(msg as ImageMessage, sb);
}
else if (msg is MediaOmittedMessage)
{
AppendMediaOmittedMessage(msg as MediaOmittedMessage, sb);
}
else if (msg is MediaMessage)
{
AppendMediaMessage(msg as MediaMessage, sb);
}
}
sb.AppendLine(_footer);
Console.WriteLine($"Writing tex file to '{texOutputPath}'");
File.WriteAllText(texOutputPath, sb.ToString());
Console.WriteLine($"Writing match file to '{matchOutputPath}'");
im.Save(matchOutputPath);
Console.WriteLine($"Copy emojis to '{EmojiOutputDir}'");
CopyList();
}
private void CopyList()
{
foreach (var x in _copyList)
{
File.Copy(x.Item1, x.Item2, true);
}
}
private static bool TimeDiffer(DateTime date1, DateTime date2)
{
return date1.Year != date2.Year || date1.Month != date2.Month || date1.Day != date2.Day;
}
private static string GetDateString(DateTime date)
{
var dayName = "UNKNOWN";
switch (date.DayOfWeek)
{
case DayOfWeek.Monday:
dayName = "Montag";
break;
case DayOfWeek.Tuesday:
dayName = "Dienstag";
break;
case DayOfWeek.Wednesday:
dayName = "Mittwoch";
break;
case DayOfWeek.Thursday:
dayName = "Donnerstag";
break;
case DayOfWeek.Friday:
dayName = "Freitag";
break;
case DayOfWeek.Saturday:
dayName = "Samstag";
break;
case DayOfWeek.Sunday:
dayName = "Sonntag";
break;
}
var monthName = Latex.EncodeLatex(_months[date.Month - 1]);
return $"{dayName}, der {date.Day}. {monthName} {date.Year}";
}
private static string GetTimeString(DateTime date)
{
return $"{date.Hour:D2}:{date.Minute:D2}";
}
private static string FormatSenderAndTime(IMessage msg)
{
var sender = string.Format(@"\textbf{{{0}}}", Latex.EncodeLatex(msg.Sender));
return string.Format("{0} ({1}):", sender, GetTimeString(msg.Timepoint));
}
private string Encode(string str)
{
str = Latex.EncodeLatex(str);
str = Latex.ReplaceURL(str);
str = _emojis.ReplaceEmojis(str);
return str;
}
private void AppendTextMessage(TextMessage msg, StringBuilder sb)
{
var senderAndTime = FormatSenderAndTime(msg);
var content = Encode(msg.Content);
sb.AppendLine($"{senderAndTime} {content}");
sb.AppendLine(@"\\");
}
private void AppendImageMessage(ImageMessage msg, StringBuilder sb)
{
sb.AppendLine(FormatSenderAndTime(msg) + @"\\");
sb.AppendLine(@"\begin{center}");
sb.AppendLine(@"\includegraphics[height=0.1\textheight]{" + Path.Combine(ImageDir, msg.Filename) + @"}\\");
sb.AppendFormat(@"\small{{\textit{{{0}}}}}", Encode(msg.Subscription));
sb.AppendLine(@"\end{center}");
}
private void AppendMediaOmittedMessage(MediaOmittedMessage msg, StringBuilder sb)
{
sb.AppendLine(FormatSenderAndTime(msg) + @"\\");
sb.AppendLine(@"\begin{center}");
foreach (var x in msg.Relpaths)
{
sb.AppendLine(@"\includegraphics[height=0.1\textheight]{" + Path.Combine(ImagePoolDir, x) + @"}\\");
sb.AppendFormat(@"\small{{\textit{{{0}}}}}\\", Encode(x));
}
sb.AppendLine(@"\end{center}");
}
private void AppendMediaMessage(MediaMessage msg, StringBuilder sb)
{
var str = string.Format(@"{0} \textit{{{1}}}", FormatSenderAndTime(msg), Latex.EncodeLatex(msg.Filename));
if (!string.IsNullOrWhiteSpace(msg.Subscription))
{
str = str + " - " + Encode(msg.Subscription);
}
sb.AppendLine(str);
sb.AppendLine(@"\\");
}
private List<string> ReadEmojiList(string dir)
{
var list = new List<string>();
foreach (var x in Directory.EnumerateFiles(dir))
{
var fileName = Path.GetFileName(x);
var regex = new Regex(EMOJIPREFIX);
var nr = regex.Replace(fileName, "");
regex = new Regex(@"\.png");
nr = regex.Replace(nr, "");
list.Add(nr);
}
// TODO find better solution
var excludes = new string []{ "0023", "002a", "0030", "0031", "0032", "0033", "0034", "0035", "0036", "0037", "0038", "0039" };
foreach(var x in excludes)
{
list.Remove(x);
}
return list;
}
private string GetEmojiPath(string str)
{
var src = $"{_emojiInputDir}/{EMOJIPREFIX}{str}.png";
var dst = $"{EmojiOutputDir}/{str}.png";
_copyList.Add(new Tuple<string, string>(src, dst));
return $"\\includegraphics[scale=0.075]{{emojis/{str}.png}}";
}
}
}
|
STACK_EDU
|
Wooden posts for backyard pullup bar
I want to install a backyard pullup bar similar to this one. It's essentially 2 pressure treated wooden posts connected with a metal bar and anchored into the ground with concrete.
I'm about 180lbs and primarily do weighted pullups. Sometimes I use over 100lbs of extra weight so it should be able to handle about 300lbs comfortably. Would 4x6 posts be sufficient or would it be wiser to use 6x6 posts? The 6x6s seem to be a lot more costly so I'd prefer to go with the 4x6s if they're a reasonable choice for me.
Anything else I should keep in mind? The author dug the holes 2ft deep but I'm planning to make them 3ft deep as that seems like it would add more stability.
Location is in Western Washington state, near Seattle. According to this article the frost depth is about 10" unless I'm misunderstanding.
Thanks
The metal bar and how it attaches is the main concern. You also need to know the frost depth for your local area to determine the depth of the hole(if more than one year of use). 4x6 will handle the weight just fine.
2ft or 3ft fine - depends on the side loads you intend to inflict. Are you going to take a run up and swing?
@crip659, I was actually going to buy one of the X-431 crossmembers from here: https://www.roguefitness.com/monster-lite-crossmembers and just use that. There's no good pics but it has two big holes per side and I was just going to drive a long lag bolt into each hole.
@SolarMike no running up to it but I guess I will occasionally do more explosive movements like muscle ups.
Even 4x4s are great plenty.
4x6 will be fine. I didn't bother to look up the tables, but a 4x6s vertical load capacity in this situation will be well over 10,000 lbs each.
Make sure to orient them such that the wide face side holds the bar.
I agree you should put them deeper in the ground, your main issues will be bending moments. Swinging front to back won't bother the 4x6 at all, but you could cause the footings to wiggle loose in the ground. I would put them as deep as you can tolerate, around 4 feet or more . If you live where it freezes, you'll want to exceed the frost depth as well. I would also double the amount of concrete. 160 lbs of concrete for each footing I think is a little light, even for kids.
I agree, but 4' seems excessive. If the soil is that soft, go 3' and add a 4x6 cross member a few inches below grade and about 2' past the posts.
|
STACK_EXCHANGE
|
Mathematics Projects on Week 48, 2018Previous Week Next Week
- Project for Mahmoud G. Project for Nawal E. Project for Umair A. -- 18/11/26 00:49:54 Statistics Tutoring Theory of Formal Languages deep learning project for feature extraction of ECG signals for arrhythmia detection Project for Emmanuel N. MATLAB DEVELOPER FOR RESEARCH WORK IN POWER SYSTEM DYNAMICS WITH SIGNAL PROCESSING APPLICATIONS Econometric case Project for Mirza Muhammad A. -- 3 Advance Maths, Comptuter Science - Theory of formal languages Abstract reasoning expert needed Project for Volodymyr K. - 27/11/2018 00:59 EST Sorting points in order to build a smart path (VB6 project) I need help writing a function in matlab Project for Aren B. realizzare un algoritmo machine learning per mercati finanziari Project for Osama H. Interpret a simple statistical result from SPSS Calculate Average Monthly Growth Rate for Earnings and Revenue QA of CCAT Project for Tonmoy R. Matlab work Compressible Flow Project: Turbojet Design Project for Nick B. Project for Alka R. Integration help PROOFREAD MATHEMATICS CONTENT Fix a statistics project GRE/GMAT/TOEFL tutor needed Mathematics test makers for 5th grade maths topics Statistical analysis with K-means Fluid Mechanics Project for Fuh C. -- 18/11/30 16:25:58 Machine learning with Python tutors MATLAB SCRIPT
- Solve an Algorithm for me in 1 hour Project for Anshul B. -- 18/11/25 21:26:33 Matlab Image Analysis Coding Required Digital Worksheet for Kids Project for Pratiksha Jangid -- 18/11/26 10:33:10 digital signal processing for telecommunications Project for Ivan (John) Siahlo Cutters list Project for Md Musab F. French to English Translation Project for Uttam Kumer S. Questions for a statistician 帮我把FORTRAN转换成MATALB Project for rakshathawait215 Searching Help for MATLAB Script improvement (Image Processing) Project for Francisco G. -- 2 Project for Saadia N. Include variable in python script to calculate odds through Poisson Model matlab experts.... Project for Dinh Hong P. Project for Charu J. -- 18/11/28 11:35:55 Project for Hamid M. -- 2 Looking for a Subject Matter expert for Math (10th standard) Project for Oleh Y. MATLAB EXPERT REQUIRED FOR SMALL TASK -- . Project for Alka R. -- 2 Project for Black B. operations project help Matlab expert needed for long term projects Excel Graph question Project for Job O. Matlab project Matlab code to identify the colour coordinates of jpeg images and colour differences. Project for Khomaissa E. R analysis
- Project for Alka R. Project for Muhammad A. Looking for someone to tutor in statistics, Algebra, & Basic Math Arrhythmia Warning Algorithm (AWA) matlab coding. Project for Emmanuel N. microeconomics Project for Filip S. -- 18/11/26 14:10:04 Project for hza3 Project for Juan Ignacio applied mathmayic project Project for Adrianus Y. -- 18/11/26 21:31:13 MatLAB Experienced required for urgent project -- ... __ MCS Technique Application for A Project Bulk numerical solution of mechanical engineering- vibration problems Statistical Quality Control Project for Taoufik I. -- 2 Project for Volodymyr K. -- 2 Project for Mirza Muhammad A. Project for Mirza Muhammad A. Project for Van Ngo N. Project for Yuriy D. Project for Dinh Hong P. Project for Asadollah K. -- 3 Project for Mirza Muhammad A. Oval Coordinates Calculator Project for Mirza Muhammad A. writing chemistry equations on web CRAMER-RAO LOWER BOUND FOR LOCALIZATION IN ENVIRONMENTS WITH DYNAMICAL OBSTACLES Project for Abid R. Machine Learning Guidance Mathematics Statistical learning Compilation of questions and answers into Powerpoint Project for Amar K. Project for Mirza Muhammad A.
- Project for Alka R. -- 2 Project for Daniel M. plot of multiple cdfs in one figure Matlab Project Help - System of ODs Stata Project mathematics, statistics theory Project for Nawal E. -- 2 Threejs expert needed -- 2 Project for Md Musab F. 18/11/27 - 00:25:23 Statistical Data Analysis Problemas de mecanica cuantica y estado solido Project for Eduardo B. Basic physical science Creating a analysis excel sheet of a survey with graphs Project for Abid R. State Machine, LTL & CTL Project for supper5guy -- 18/11/27 22:35:09 Project for Abid R. Prolog simple work Data Science Expert Required Programação Octave Project for Muhammad A. Statistics solution Project for Edgard Jose D. Project for Tonmoy R. Cadence Virtuoso Matlab Script Calculus Article - In depth explanation of how to calculate the arc length of a helix. matlab code for estimator using cramer rao lower bound Mathematical modeling for research journal article Project for Siripong R. Need Calculus / Math Proficient to help with Software java program to solve math problems Project for Aarushi S. game theory / finance Project for Mirza Muhammad A. -- 2
|
OPCFW_CODE
|
Parent Page(s): Blogs, Wikis and Discussion
NSync is a plugin in WordPress created as a way to compile posts from various blog sources into one centralized hub. Empower students to contribute to collaborative projects while giving them the opportunity to develop their own personal blogs.
How does NSync work?
Instructors can create a blog dedicated for a class (“main hub”), then invite students as users to the class blog. When students create a post, they have the option to also publish it to their class blog if the NSync plugin is enabled. The following steps describe the process of pushing a post to the centralized blog space:
Create a new post in the source blog (e.g. student’s own blog).
Categorize the post accordingly with the centralized blog.
Push to Publish
Select the blog(s) you wish to also publish your post to.
View your new post on all the blog(s) you have published to.
How can NSync help me?
The NSync plugin can be used as a tool to organize and compile posts from different blog sources into a centralized blog space. This creates an opportunity for students to create their own personal blog and develop technical skills. By creating a centralized hub for students to contribute their ideas and opinions, it promotes a sense of community and scholarship. Additionally, this tool cuts down on time spent searching and scrolling through individual student’s or contributor’s, making grading and reviewing of student’s work more efficient.
Below are some examples of collaborative projects in which this tool can enhance your blog:
- Resource Repository
- Experiential Learning Reflections
- Debate Discussion
- Wiki Topics
- Synchronizes class discussion or posts in a centralized space.
- Easy to publish posts with one click of a button.
- Easier for grading purposes as all the posts are compiled together.
- As students contribute posts from term to term, the knowledge continues building and can act as a resource repository for others.
- Students have ownership of their own blogs so they will continue to have access to it after the course ends, in addition to customizing their own personal blogs.
- The instructor has the ability to control who gets to push feeds to the course blog and when to discontinue this.
- Set up time is required and instructor must provide clear instructions on how to use categories for it to be organized properly.
- Pre-planning is required if instructor is considering on feeding various type of posts into different pages.
- Available as a UBC Blogs plugin.
Tips for using NSync in your blog
- You can restrict source post categories to the ones already on your blog and pre-determine where these posts should appear in your course blog (e.g. different topics, section, assignments.
- You can also have students be the creators of the categories and push them to the course blog. Provide clear instructions and guidelines if you choose to do this to keep the categories organized and avoid duplication.
|
OPCFW_CODE
|
The two places where I think GraphQL really shines is:
Frontend and backend team is kind of separated. Can make it easier for them to work more decoupled (for example frontend can skip fetching one field no longer needed without having to wait for the backend team to remove it and backend can add new fields without having to wait for frontend).
You fetch similar data in many places but in different formats. So you might need all fields in one place and in other places you just need some of the fields. Instead of overfetching or having lots of extra endpoints graphql makes it easier since you can fetch what you need all from the same endpoint.
Big negative for me is that graphql is harder to cache and a bit more complex to setup (both frontend and backend) compared to REST. So if it is a simple app (in the sense that it doesn’t fetch a lot of different stuff everywhere) I go for REST but if it is a complex project I might go for graphql.
Definitely harder to cache. I would almost never use graphql personally. I think automatic documentation generation and client generation + rest + a little json API could cover 90% of uses cases.
Yeah, hard agree here. Nowadays you can compose a good OpenAPI file and you get an awesome interactive UI with which you can browse your REST API. Not to mention that your
openapi.json) can be used to automatically generate clients for your API as well.
GraphQL I’ve mostly seen defended by frontend teams that find it easier to work with because they are its consumers. For the backenders it’s definitely extra work.
One thing I’ll absolutely give GraphQL props for however is – good detailed data schema. We need more of that, like literally everywhere.
Absinthe is pretty quick to setup, I just haven’t found inspiration for graphql yet, despite working with it for a while now.
I worked with Absinthe at least 4 times but it was always high-maintenance for me. Business and tech leadership constantly needed changes / additions / deletions and the GraphQL benefits almost never materialized.
not strictly related to mobile apps, but i might prefer GraphQL over REST when:
- i need elastic web interface/frontend to database/data store.
- i am not looking for an interface to business code.
- i am not after very tight controls over data going in and out.
I’m not an professional Elixir dev but I currently work as dev in NodeJS app that uses GraphQL for web app. My company decided to outsource mobile app development. Decision was made that mobile will use REST API. What a mistake it was. So many hours lost because we chose REST. Having second GraphQL endpoint just for mobile would have saved us because GraphQL uses schema for the API. It would have caught so many problems this outsource company was making instead of us having to point out these problems.
I would only use REST if you want to have an API customers would call by themselves.
If I would be developing with Elixir I would really look into LiveView if it fits your needs. I think it should fit most interactive web app development needs that don’t need offline capability. Because by using it you wouldn’t have to write any kind of internet facing API at all saving you lot of dev time.
I guess graphql fragmenet could be useful for caching. GraphQL fragments explained - LogRocket Blog
I like GraphQL for the ease of traversing related/associated data. I like that I can define a User object once and that’s it. And that it’s smart about when it needs to hit the db and when not.
Also, we very strictly use Dataloader so we never have n+1 issues. And I’m not sure, but I was under the impression that Dataloader could also aid in automatic concurrent loading of data.
We only use GraphQL for read only data though; no mutation API at all.
A REST API can also give only the fields you need if coded accordingly. My REST APIs will accept this
https://apibaas.io/endpoint for a response with all the fields, this
https://apibaas.io/endpoint?fields=a,b,c,d for a response with only the specified fields, or
A REST API can also aggregate results from several entities if you want, you just need to code accordingly.
Developers just need to have an open mindset and not blindly follow the herd. Also, follow API design first approach and use the tools at your disposal to design the API with OpenAPI specs. After you are happy with your API design give it to your consumers and address their feedback in the specification before you start to code the implementation, and repeat this as many times as needed until the API specification is beneficial for both the developers implementing it and the ones consuming it. During this API design first process you will discover many issues with your initial idea to solve the business requirements, thus saving you from discovering them after having already some thousands lines of code written.
The Chapter 1 of book “Craft GraphQL APIs in Elixir with Absinthe” explains this problem very well. You can read it.
In summary, it:
- allows the users of API (frontend developers, mobile app developers, etc.) to describe the data structure that they want, no more overfetching or underfetching:
- overfetching - downloading unnecessary data
- underfetching - a specific endpoint doesn’t provide enough of the required information. The client will have to make additional requests to fetch everything it needs
- allows the creators of API (backend developers) to focus on data relationships and business rules instead of implementing and optimizing specific endpoints:
- no more annoying user input validation
- no more limited flexibility of query parameters
- no more endpoints which are similiar but return different data
GraphQL solves above problems. If you are facing these problems, consider to use it. Or, you don’t need it, maybe.
The schema is the selling point. It makes client API generation so easy.
In a REST API you have OpenAPI specs that when done properly (follow always design first approach) will allow you to:
- auto generate client SDKS (client API generation)
- auto generate mock servers (frontend team can work in isolation of the backend team)
- create automated tests for contract acceptance between client and server
- automate validation of incoming requests against the specification. This is huge from a security point of view. If the request isn’t in the format allowed in the specification you can refuse it, thus making it a lot harder for attackers to run fuzzing tools against your API in order to breach and exploit it.
- automate user input validation.
Same goes for OpenAPI but I’ll agree that GraphQL gives you more typing info which is invaluable.
I am not against GraphQL at all, I like its idea but in practice it’s been extremely tedious to constantly maintain and evolve literal hundreds of records / endpoints. And as mentioned above, I worked in teams with very dynamic and evolving requirements. GraphQL is… shall we say, not very applicable to projects that are constantly 10 test breaks away from being a prototype/MVP again. It seems very good for more mature projects.
I wish we had a much more compact and more comprehensive IDL and that’s actually universal… I’d really like to work on that before I retire (I have good 10-20 years in me still but the burn out and the mental resistance to straining my brain much further is already in place, for the better or worse).
Off topic but, man… this hits too close to home… I think I’m right there with ya.
My 2c are that if I am building a fully featured REST API, I would build it on top of GraphQL. Depending on the audience, GraphQL can be overwhelming (especially if you adopt advanced conventions like relay). However, GQL has a lot of benefits as an intermediate layer between REST and your business logic.
In practice, your REST layer would call a GQL query/mutation. You can leverage GQL’s resolvers for serialization, backwards compat for changes in business logic, N+1 is handled through Absinthe helpers / dataloader, and “expanded” REST fields are just conditional fragments on a query. A simple utility can convert GQL errors nested in the response into HTTP error codes. So your users are calling REST urls, but it’s actually GQL under the hood. And then, of course you could expose your GQL endpoint as well…
Interesting, caching has always been very straightforward for us. Our resolvers look up values in the cache first, then hit the data source. I’ve never understood the caching critique with GraphQL
In the more general discussion: as someone that’s spent most of their career building the backend for one or the other I’m happy I’m working on a GraphQL service, and in Absinthe Still one of the best libraries I’ve used. The idea that every field in an API is backed by a function, and a query represents a sort of composition of them, is very easy for me to reason about.
I can see how the type system could become onerous if you’re in a constant state of flux though, I’ve heard people say similar things about typed languages.
It’s about HTTP level of caching e.g. people want browsers or middle boxes to cache HTTP requests with an exact combo URL + parameters + cache headers.
Personally that doesn’t bother me much but I can see their point. I’m more worried by projects in flux as you mentioned – was hit by that and GraphQL left a sour taste back then. For more stable projects it’s very good however.
Ah I see. Apollo’s caching is pretty cool client side, and I think implementing your own caching in the backend is quite straight forward, but I also agree nothing beats the simplicity of wanting
/foo/bar/baz to be cached with (effectively) zero effort.
As a sole/indie developer, I use GraphQL over REST. It’s much less work. There are many great clients like Altair. It documents itself. It catches coding errors. It provides a protocol for API errors. It’s opinionated: I can just write my services and data access and then continue on.
NB: I decided not to use absinthe because I found the schema DSL vague and stringly typed. Code like
field :user, :user. Instead I’m using a statically-typed inferred-schema framework - Python’s Strawberry. That’s worked well. Rust’s frameworks are also code-first not schema-first. (Although I do prefer schema-first.)
|
OPCFW_CODE
|
TACHANKA - Save Lord Tachanka
Our friend Lord Tachanka is on an important mission. He has to reach basecamp quickly. But the evil enemies have stolen many LMG's and have placed them along the way. You'll have to help him out!
Lord Tachanka is initially at the top left corner (1,1) of a rectangular N × M grid. He needs to reach the bottom right corner (N,M). He can only move down or right. He moves at the speed of 1 cell per second. He has to move every second—that is, he cannot stop and wait at any cell.
There are K special cells that contain the LMG planted by the enemies. Each LMG has a starting time t and a frequency f. It first fires at time t seconds, followed by another round at time t+f seconds, then at time t+2f seconds …. When a LMG fires, it simultaneously emits four bullets, one in each of the four directions: up, down, left and right. The pulses travel at 1 cell per second.
Suppose a LMG is located at (x,y) with starting time t and frequency f. At time t seconds, it shoots its first set of bullets. The bullets travelling upwards will be at the cell (x,y-s) at time t+s seconds. At this time, the bullets travelling left will be at cell (x-s,y), the bullets travelling right will be at cell (x+s,y) and the bullets travelling down will be at cell (x,y+s). It will fire next at time t+f seconds. If a bullets crosses an edge of the grid, it disappears. The LMG's are numbered 1 to k, and if two bullets from different LMG's happen to meet, the one coming from the higher numbered LMG survives. At any time, if Lord Tachanka and a bullet are in the same cell, he dies. That is the only way bullet interact with Lord Tachanka.
But don't be worried, as you can help the Lord. He can contact his basecamp and can report them the exact position of an LMG, and it will be destroyed by air support. But the war is going on, and you as a commander will have to ensure that minimum missiles are wasted.
Given these, you should find the least time (in seconds) in which Lord Tachanka can reach his basecamp safely.Also calculate the minimum LMG's that are needed to be destroyed so that the Lord can reach the basecamp safely.
2 <= N, M <= 500
1 <= K <=500
All the frequencies are guaranteed to be integers between 1 and 600, inclusive.
All the starting times are guaranteed to be integers between 0 and 600, inclusive
All the coordinates of the LMGs are guaranteed to be valid cells in the N×M grid.
No two LMGs will be on the same cell.
Line 1: Three space separated integers N, M and K, describing the number of rows and columns in the grid and the number of LMG's, respectively.
Lines 2 to K+1: These lines describe the K LMGs. Each line has four space separated integers.
The first two integers on the line denote the row and column where the LMG is located,
the third integer is its starting time, and the fourth integer is its frequency.
The LMG's are numbered in the order in which it appears in the input, i.e from 1 to k
You need to output two integers, the minimum amount of time required for the Lord to reach the basecamp, and the minimum LMG's needed to be destroyed.
Input: 4 4 1 3 2 1 3
Output: 6 0
Can Lord Tachanka enter a cell that contains an LMG?
Wrong answer in 15th case. Can you please check my submission?Last edit: 2018-03-23 11:43:53
we have to minimise the time, and also report the minimum missiles used in the process of minimising the time, right?
|
OPCFW_CODE
|
What does 'that' refer to?
From now on, as long as there are human beings on this or any other
heavenly body, humans will have a continuous, ever-extending history
that traces itself back unbrokenly to our day now and our planet here.
What is going to happen to all those people—what will they do—in
unending time? How in the far, far future will they think of us now,
who are so near the beginning of it all, and whom they will know a lot
about if they choose to? How shall we appear to them in the light of
all that will have happened between us and them, in a period many,
many times as long as that between the dawn of civilisation and today?
(Ultimate questions, Bryan Magee)
What does "that" refer to?
And I don't understand the use of "in a period many, many times" in the last sentence. Could you help me?
Thanks
It mean 'the time period' – "many times as long as [the length of time] between the dawn of civilisation and today". In other words, much further into the future from today, than from today back into the known past. Aside: I do not agree with that paragraph: it's a "feel-good" idea that sort of congratulates us, but overlooks the uncertainly of what the future might bring.
Does this help: "in a period (which is) many, many times as long as..." ?
Maybe the confusion comes from the ambiguity of parsing (a period many, many times as long as that) as a single noun phrase, which would leaves the remaining (between the dawn of civilisation and today) being as an adjective phrase describing all that will have happened? When it is actually intended to be parsed as (many, many times as long as (that between the dawn of civilisation and today))
When I saw this in HNQ I was sure that the Q was going to be meatloaf related
Let's remove a few unnecessary bits to get:
A period many times as long as that between the dawn of civilisation and today
That stands in for period. So you could also say:
A period many times as long as the period between the dawn of civilisation and today
As a whole the phrase is a comparison between future humans looking at us and us looking at the first civilisations. It's saying that these humans will be further in the future than the first civilisations are in the past. (The first civilisations developed around 10,000 years ago, so if future humans exist in 50,000 years the period will be 5 times as long).
|
STACK_EXCHANGE
|
Topics for today’s Squidwrench Electronics Workshop: Session 5 in a continuing series.
Having discussed transistors as current-controlled current sources, we can now
select one as a victim use one as a switch, then add capacitors to learn about exponential charging, and introduce the oscilloscope as a vital tool.
So, we proceed:
Transistors as switches
Review graphical parameters
- saturation voltage for high Ic
- cutoff voltage for near-zero Ic
- resistive load line: VR = Vcc – Vc
- power dissipation hyperbola (at all Vc)
- secondary breakdown limit (at higher Vc)
Something like this, only drawn much larger and with actual numbers:
Reminder of linear vs. log scales converting hyperbolas into straight lines.
NPN transistor as “to ground” switch
- where to measure device voltages?
- passing mention of flyback diodes
- IB needed for saturation?
- Darlington transistors: beta multiplier, VBE adder
Without the LED, you get nice square waves:
An ancient green LED reduces Vc by a little over a volt:
Discuss PNP transistor as “from supply” switch
- why VCC must not exceed controller VDD
- kill microcontroller and logic gates
Wire up pulse gen to transistor
- function generator for base drive voltage
- collector resistor (then LED) as output
- how do you know what it’s doing?
- add oscilloscope to show voltages
- explanation of scope functions!
Capacitor as charge-storage devices
Useful ideas and equations
- C = Q/V
- so C = ΔQ/ΔV
- therefore i = C * Δv/Δt
- energy = 1/2 * C * V²
Charging capacitor from a voltage source through a resistor
- Exponential waveform: e^t/τ
- time constant τ=RC
- show 3τ = 5%
- and 5τ < 1%
Add cap to transistor switch with R to soften discharge path
- charge vs discharge paths
- calculate time constants
- wire it up
- verify with oscilloscope
The circuit will look like this:
Discussion of parts tolerance: 100 nF caps are really 78 nF
With one cap:
Add another cap for twice the time constant:
Let the scope calculate 10-90% rise time:
- rise time = 2.2 τ (compare with calculations!)
- rise time = 0.34/BW
Do it on hard mode with the old Tek scope for pedagogic purposes.
That should soak up the better part of four hours!
|
OPCFW_CODE
|
One of the main challenges for data warehousing design is how to recover from a failure as quickly as possible. Recovery usually involves fixing the client or server systems, changing configuration parameters or system resources, restarting the interrupted jobs based on their last checkpoints, and bringing the system back to normal without resorting to rigorous manual efforts or writing piece-meal recovery procedures.
Most of the time, jobs may also be required to perform "catch up" so that transactions that were accumulated during the "failure window" can be applied to the target systems as quickly as possible.
To this end, Teradata PT provides some unique features that allow you to speed up the recovery process without resorting to changing job scripts after a job failure. These features include:
- Making all jobs checkpoint restartable by default.
- Archiving transactional data in a readily-loaded format concurrently with the loading of such transactions into target tables using the Duplicate APPLY feature, which allows the same data to go into different targets.
- Defining a single script language for all operators, which not only results in common approaches for defining operators, but also allows substantial reusability of metadata and operators.
- Supporting unlimited variable substitution using a job variables file so that changeable and common job parameters, called “attributes,” can be isolated in a single place for value assignments.
- Having complete independence between the producer operator (for data extraction) and the consumer operator (for data loading) in a job substantially simplifies the process of "switching export/load protocols". In other words, changing either the producer operator or the consumer operator in a job would not impact the other.
To take advantage of the above features for restartabilty, some best practices for designing and implementing job scripts are necessary. The best practices presented below speak to reusability and manageability of job scripts, the flexibility of building and enhancing them to deal with ever increasing data volumes and changes in execution environments, and restartability after job failures. These practices can also be regarded as standard guidelines in building data warehousing processes.
- Always use a job name to execute a job.
- Use job variable files to capture changeable and common parameters such as user ID, password, file names, source or archive directory names, the number of producer and consumer instances, and so on.
- Run with backup or archive using the Duplicate APPLY feature so that each APPLY statement can send the same data to a different target.
- Define checkpoint frequency to control load granularity in case of failure. The smaller the frequency, the less time to recover a job, but more time to take checkpoints.
- Switch the load protocol (for example, Stream to Update) for purposes of catch up after a system failure.
- Always execute a job with the job variables file so that parameters are defined in one place instead of being distributed across job scripts.
|
OPCFW_CODE
|
What's the benefit of having a private repository for personal projects?
So I just created my first GitHub repository and started to wonder if there would be any reason why somebody shouldn't post their code. I don't mean the obvious, such as code that is IP of somebody else or any other possible legal situation; I'm talking about a newbie posting their own, albeit terrible, code.
I've heard several times on this site that one of the things that a some of the hiring managers do is check out the person on Github (or similar site), so what if the code is lacking? Would the position desired—for example, if I'm going after a junior developer position instead of a senior developer position—matter?
The purpose of private repositories is to save your code without having it in the open. Such as programs that are proprietary for you at the moment and that you don't want to share. Effectively it's just a place to back-up your private code in a remote repository.
Regarding your worries that your code may be lacking if you publish things openly; you shouldn't fret too much about it. Just having an account on github (at the writing moment) tells me that you're in a higher echelon of programmers and in my experience recruiters only check the code briefly on what you've done. Even though you have some mistakes in your code it is still a better gauge for your prospective employer that you can actually do stuff, which matters more than anything.
Almost 99% of all candidates don't give any indication in their resumé on how much they can program or design a program. Heck, some "Senior Java programmers" I've met were so clueless that they didn't even know what an interface is or why they would use it.
+1. Don't fret too much about it. Just the fact that @Jetti even has heard of Git, probably puts him in the top 3% already.
Thank you Spoike, this has made me feel much better. I have started pushing my code to my github repo and have a simple C program and a few Java things up there now.
Get an account on Bitbucket if this is a concern. Bitbucket gives you as many private (Mercurial) repositories as you want.
If you're prefer git, then read the comparison of free private repository hosting services, which focuses specifically on services that offer free private repositories.
Bitbucket now offers Git repos.
Yeah, the fact they offer free private git repositories is why I chose them over the rest.
Visual Studio Team Services offers free private GitHub repos as well.
Your link is broken
As of Jan 2019, GitHub free accounts can create unlimited private repositories: https://github.blog/2019-01-07-new-year-new-github/
Open source projects are generally hosted for free on these sites, so make two accounts. One for your hobby sandboxes and another that you don't mind the general public looking at. Publish the username of the good one.
There's no harm in hosting on GitHub or Bitbucket. In fact its accessible from anywhere and you can attract other developers to contribute. You could use private repos if you don't want to make projects open source.
And it depends on the hiring manager on how much impact a good github profile makes.
Can't think of anything other than shame or humility. Others have said this much better before, but those are very important personal qualities in a programmer. That's not to say that you should tell everyone how terrible your code is, but they can result in someone striving constantly to improve their work. And the field of programming is so mind-bogglingly huge that nobody can possibly learn everything, or even close to it. So be confident that while any programmer can find faults (objective or subjective) with any other programmer's code (or their own), that doesn't mean your code should be hidden away.
YES
Take a look at this post: Is it worth listing testing or self-learning repositories on my résumé? -- I would extend it to also having it online with your name on it.
I strongly agree with the accepted answer there. The sum of all things visible about you online is your personal balance sheet. You want to make sure to display as many assets while minimizing liabilities.
If I come across your name and find some of the code you wrote, I can't tell if it's some toy program you didn't care about or if this is your best work.
I just posted my first project on GitHub myself, and I spent good amount of time making sure it is readable by someone other than myself.
If you are just looking for storage, I would take other people's advice and use one of other repositories (online or offline). Personally, I also use Perforce at home (I have no affiliation with the company) for my toy projects. It is a very mature and good product and it comes with 2 free users, so if its just for you, you get full, professional, non-crippled in any way version control system for free.
There is no harm in having public repositories. However it is true that recruiters like to browse your GitHub profile and see what you have done. If you have a mix of 'beautiful' and 'ugly' projects, you can always make a portfolio website that displays the beautiful projects, or even explains which repositories are beautiful and which are sandboxes. Also it is a good practice to describe each repository in a README.md document on the root of the repository, that way, visitors to the repository can understand the purpose and spirit of the project without having to rely on making their own judgment.
It is always possible to use BitBucket or private GitHub repositories to host your private or test projects. However, the two methods of using a portfolio and writing READMEs are usually sufficient.
To create a unifying experience, please consider publishing a portfolio website at username.github.io. This is possible through GitHub Pages.
Although your code may be terrible, it is important to consider that great projects start off terribly, code changes over time, and publishing terrible code has the advantage of showing that you are actively working on projects. But of course it is also recommended to use the usual guidelines of coding such as making sure every commit is a working commit, using testing code etc.
|
STACK_EXCHANGE
|
Blog post written by Maria Gavrilidou
- PERSON: person names, family names
- LOCATION: political or geographical names such as continents, countries, cities, etc.
- ORGANIZATION: names of entities such as companies, institutions, organizations, etc.
- FACILITY: names of buildings and other human-created structures, such as streets, bridges, etc.
- GPE (Geo-political entity): entities whose names coincide with a location name, but whose semantic content actually refers to its government or administration.
The GrNE tagger is not a single tool, but rather a pre-defined pipeline of tools seamlessly integrated, in the sense that the output of one tool constitutes the input to the next:
Tokenization > Sentence Segmentation > Part-of-Speech Tagging > Lemmatization > Chunking > Named Entity Recognition
The annotation processes before Named Entity Recognition constitute the pre-processing of the text. After the pre-processing stage is completed, the Named Entity Recognition algorithm is applied to the text in two stages: it first uses linguistic rules to identify a set of candidate NEs and subsequently checks them against manually created wordlists of existing proper names. If a proper name in the pre-processed text is not identified in this manner, the tool tags it as UNKNOWN.
To consolidate a candidate NE or a proper name labelled as UNKNOWN, and to finally place it into the correct category, GrNE-Tagger applies another round of linguistic rules that search for specific keywords in the context of the ambiguous expression. The keywords used for such disambiguation are, for example, profession titles, words denoting nationality or kinship terms such as father of, sister of etc. (in the case of PERSON); prefixes or suffixes denoting company types, such as Corp., Ltd. etc. (for ORGANIZATION); words such as street, bridge etc. (for LOCATION) and so on. Based on shallow syntactic parsing, the system also disambiguates between LOCATION and GPE (Geo-political entity).
The following picture shows an example of the output of GrNE-tagger (using GATE as a visualization tool), in which different NEs are marked with different colors.
GrNE-tagger has been integrated in the clarin:el infrastructure as a web service, which means that the users do not need to install the tool locally; they simply select a resource from the clarin:el inventory (or upload their own resource) and they process it. After the completion of the processing, the users receive an email with a link to the result of the processing. Furthermore, the tool has already been successfully applied to annotate several resources; for instance, one such resource enriched with GrNE-tagger is a corpus of interviews conducted with female business entrepreneurs in Athens.
GrNE-tagger has been developed and is maintained by the Institute for Language and Speech Processing / Athena RC and is available under a license that permits Academic – Non Commercial Use.
Click here to read more about Tour de CLARIN.
|
OPCFW_CODE
|
Is the train leaving the station
Watching others riding the Rails
This is not based on any sort of science - I have not done a double-blind study or anything. I've just been watching the status messages in my instant messenger buddy list and a lot of the guys who led the way to Java and to some of the frameworks like Spring, Hibernate, and Tapestry now seem to be playing with Ruby on Rails. One by one their status says something about Ruby on Rails.
It's hard to know which technologies will stick and which ones won't - but Rails feel right. People are just playing with it and jaws seem to be dropping at the amount of code that doesn't need to be written. There isn't a rush to over standardize it and over complicate it. What do you think - will we all be riding the rails soon?
In Also in
Java Today , Spring and Hibernate are popular frameworks for building web applications, but can they take care of the enterprise challenges that J2EE was built for? Binildas Christudas considers the case where you have multiple components, each with its own data store: "When we speak of assembling two or more components, we are expected to maintain the atomicity of operations done in many data stores, across components." In Wire Hibernate Transactions in Spring, he shows how to handle transactions across the components' data stores to allow a rollback across all of the stores when an error occurs.
Frank Sommers' article Broadcast Once, Watch Anywhere looks at "SR 272, the Mobile Broadcast Service API for Handheld Terminals [which] aims to define a common API layer for interacting with broadcast services, such as digital television, from a mobile device. [...] JSR 272 aims to provide an API that abstracts out the transport layer, and gives developers high level access to digital broadcasts, according to Wong. JSR 272 will define both the management of interactive services received via digital broadcast, and the management of applications contained in the broadcast stream. Along with Motorola, the JSR 272 initial expert group includes Nokia, Vodafone, and Siemens.
Weblogs, Osvaldo Pinali Doederlein provides what he calls " A comprehensive analysis of whe debate over Free/OpenSource Java" in Opening Java. He writes "Java is huge, the top business app development platform. Meanwhile, FOSS increased in importance, volume and mindshare by orders of magnitude, and it has also become serious business.
Unfortunately, Java is increasingly seen as a problem in the POV of FOSS users and developers. This is despite many significant improvements in openness since '96"
How do you format code in blogs and elsewhere? Jonathan Bruce asks about this in Blogging Java code - standard mark-up tools?
James Gosling writes that he is having Fun in Brazil. He reports " This week I'm in Brazil, visiting with developers. We're doing a couple of days of technical seminars today and tomorrow in Sao Paulo, then a couple more in Brasilia. "
In Projects and
Communities, John Reynolds, leader of the
Global Education and Learning community's
Tapestry Webcomponent Examples project talks to community leader Daniel Brookshier. John explains that the project grew out of some Tapestry examples he originally published in his java.net blog.
Java developers using Mac OS X 10.4 can speed up JavaDoc searches. Type the class name into the JavaDoc Dashboard widget from seriot.ch to immediately bring up that class' JavaDoc page. This and other Mac resources are collected on the Mac Java Community page.
Alexlamsl follows up on the thread on using strings in switch statements in today's Forums. "The primary argument for extending the use of switch statements here is the ease of development, and even with the better optimisation chances. This sounds good, and in which case we should somehow extend this in such a way that all Immutable objects can use this construct (String, Integer, BigDecimal etc...) "
Sasjaa responds to the question Anybody ever had the classloader deadlock on you? "This is tracked in the javasoft bug 4735126 which they have marked as a low priority bug. The only workaround we have found is to flatten the classloader stack as much as possible."
In today's java.net
News Headlines :
- JaxMe 0.4
- RIFE 1.0rc2
- EditiX 4.0
- yGuard 1.5 - Free
Bytecode Obfuscator for 'Tiger'
- IBM Availability
Monitoring Toolkit for Eclipse
Registered users can submit news items for the
href="http://today.java.net/today/news/">java.net News Page using our
form. All submissions go through an editorial review before being
posted to the site. You can also subscribe to the href="http://today.java.net/pub/q/news_rss?x-ver=1.0">java.net News RSS
Current and upcoming Java
- May 31- June 3, 2005 Enterprise Java Architecture Workshop New York City
- June 3 - 5, 2005 Central Ohio Software Symposium
- June 10 - 13, 2005 Research Triangle Software Symposium
- June 16-18, 2005 JustJava2005
- June 20 - 21, 2005 Pragmatic Studio
- June 24 - 26, 2005 Central Florida Software Symposium
- June 25, 2005 JXTA Town Hall Meeting at JavaOne
- June 27 - June 30, 2005, JavaOne Conference
Registered users can submit event listings for the
href="http://www.java.net/events">java.net Events Page using our
href="http://today.java.net/cs/user/create/e"> events submission form.
All submissions go through an editorial review before being posted to the
Archives and Subscriptions: This blog is delivered weekdays as
Today RSS feed. Also, once this page is no longer featured as the
front page of java.net it will be
archived along with other past issues in the href="http://today.java.net/today/archive/">java.net Archive.
Watching others riding the Rails
|
OPCFW_CODE
|
zim-wiki team mailing list archive
Mailing list archive
Attached a quick try to get a module that interfaces with zim, but has an
API compatible with the one in dokuwikirpcxml.py
Did not fill in all functions (yet) but should be good enough to start
adapting the dokuvimki plugin for vim.
Anyone want to try their hand at that ?
On Fri, Oct 26, 2012 at 8:40 AM, Jaap Karssenberg <
> On Fri, Oct 26, 2012 at 12:01 AM, Svenn Are Bjerkem
> <svenn.bjerkem@xxxxxxxxxxxxxx> wrote:
> > On 24 October 2012 13:17, Jaap Karssenberg <jaap.karssenberg@xxxxxxxxx>
> >> On Wed, Oct 24, 2012 at 12:44 PM, John Geoffrey <gmkeros@xxxxxxxxx>
> >>> I was thinking about a ncurses interface. But the general idea was
> that I
> >>> would be able to write, open, and edit zim pages in the cli, instead of
> >>> having to import them manually after writing.
> >> How do you see the use case? Would this ncurses interface be the only
> >> interface you use, or do you use it to edit pages while you still use
> >> the Gtk interface to browse the data etc. ?
> >> In the 2nd case I would suggest looking into an extension for your
> >> editor of choice (vim / emacs / ...) to do a bit of syntax
> >> highlighting of zim's wiki syntax and maybe show a list of pages on
> >> the side. Since you use the gtk version for other stuff, you can keep
> >> it simple.
> > I am using vim plugins for things like trac, git, dokuwiki and
> > wordpress and find these a great step forward when writing and
> > documenting code. I use zim for daily journal and tracker. Sometimes I
> > find copy-paste from vim to zim a bit tedious due to the differing use
> > of copy-paste in the applications. The dokuvimki plugin comes with a
> > bunch of embedded python code to use modified xmlrpc calls to the
> > wiki. I haven't checked how easy it is to write my own wrapper around
> > any zim api to circumvent the GUI to access and create new pages from
> > vim. Depends on how tided the GUI is with the rest of the code.
> Thanks for pointing out dokuvimki, I did know it existed, and frankly
> I was unaware of how easy it is to put python extensions in vim.
> At first glance non of the functions used by this plugin require code
> that is in the GUI modules. So I think it would be rather
> straightforward to write a python module for zim that has the same API
> as dokuwikixmlrpc, then modify the dokivimki plugin to use that module
> and we are all set.
> Unfortunately no time for hacking this weekend, but maybe I can have a
> stab at the interface module next week.
Description: Binary data
From: Svenn Are Bjerkem, 2012-11-11
From: John Geoffrey, 2012-10-22
From: Jaap Karssenberg, 2012-10-23
From: Jaap Karssenberg, 2012-10-24
From: Svenn Are Bjerkem, 2012-10-25
From: Jaap Karssenberg, 2012-10-26
|
OPCFW_CODE
|
View Full Version : KoTOR II help needed
Mon, 14th Feb '05, 7:45pm
Can anyone point me to a forum (OTHER than lucasarts of course because they !@#$%^ suck!) that will help me with the following problem.....?
Well when I first landed (on Nar Shadda) Quello talked to me and told me I'd have to move because another ship was scheduled to come in.... anyway I talked my way out of it but later I go back to talk to him (wondering why I haven't had any problems with my ship being there yet) and I talk to him without Atton in my party he simply says "hey, you there!" then the conversation breaks and he flies to the ship... goes halfway around it then disappears... now I reload, talk to him while Atton is in my party, quello says same thing except this time atton is there to say "uh-oh" so quello says the same thing he did to me before.... NOW no matter what I choose it will show his face... and the screen will fade black, then it goes to the party selection screen... I have to select my party. I do so, but then I get the "Lucas Arts" screen at the beginning!! what is the problem here?
[ February 16, 2005, 18:20: Message edited by: Taluntain ]
Mon, 14th Feb '05, 10:37pm
I've not encountered any bug so drastic in playing KOTOR2. It sounds like you might have a bad install, but I'm really just guessing. You don't need to go back to talk to Quello--you'll eventually see a cutscene and know that you must return to your ship to deal with a little problem. Try doing it that way, and hopefully it will bypass whatever problem you're having.
And yes, the LucasArts forum is a mess. Until a patch comes out, it's everyone for themselves in finding work-arounds.
Mon, 14th Feb '05, 11:45pm
Ah well see I doubt I have a "bad install" seeing as how I'm playing the XBOX version. also I have done all I think I can and that cutscene just won't show up. I did everything but fix the airspeeder and do a bonus mission. (or infiltrate the hutt's warehouse) there are also 2 doors I just cannot get to open but I know they lead somewhere.
Tue, 15th Feb '05, 3:12am
yerg. i think maybe i'll wait a month for KotOR2 to stabilize before getting it..
Tue, 15th Feb '05, 3:32am
I haven't picked this one up, yet. I'll probably do it this weekend. From my understanding, they went nuts with the build-your-own-stuff segment of the game; not to the same extent that Arcanum did, but fairly decent.
I wasn't "hyped" about this game, but I'll probably pick it up based on whats being said about it on forums and the quality of the first one in the series.
Tue, 15th Feb '05, 5:42am
If anyone else has had this problem please tell me what you did to fix it...
I even bribed that little guy 2000 credits to give me a bad name with the exchange. and I'm still not getting cutscene with the ship.
Tue, 15th Feb '05, 6:36pm
Mentioning that it was XBOX would have helped a little in diagnosis. Like I said, I haven't had any problems with the PC version aside from minor things. But I can't remember what event I completed to trigger the cutscene, so I can't give you more specific advice. Sorry.
Tue, 15th Feb '05, 9:19pm
I was reading some faq at gamefaqs and he said it would trigger after you completed the sidequest with the exchange in the refugee sector. but when I went back there Quello was still there and that was the problem I was getting... hmm I hate to just have to quit playing and not finish the game after 30-40 hours of gameplay into it....
Mon, 28th Feb '05, 12:49am
Can anyone help me with the Telos quest? I have GO-TO in my party but I cannot get Vogga the Hutt to give me permission to allow me to finish the quest of the family that needs to get out of Nar Sheeda. Th slughead that is in the middle of the docks keeps telling me I need Vogga's permission but it does not give me the option when I talk to Vogga. I don't think I can finish the fuel to Telos quest either. What do I do?
Thu, 3rd Mar '05, 12:26pm
You don't actually need Vogga's permission, you need to make the guy standing next to the middle pilon happy. You can do so by succesfully helping him bringing the waiting freighters in. It's a little riddle, not too hard to figure out. After that, he won't deny you anything BUT.... you need to do something about the Exchange first of course. After that, all pieces will come together.
|
OPCFW_CODE
|
Allow users to create views for their own use when they do not have permission to grant others access to the underlying tables or views. To enable this, creation permission is now only checked at query time, not at creation time, and the query time check is skipped if the user is the owner of the view.
Add support for spatial left join.
array_sort()function that takes a lambda as a comparator.
Add support for
ORDER BYclause in aggregations for queries that use grouping sets.
Add support for yielding when unspilling an aggregation.
Expand grouped execution support to
UNION ALL, making it possible to execute aggregations with less peak memory usage.
Change the signature of
truncate(x, d)functions so that
dis of type
dcould be of type
BIGINT. This behavior can be restored with the
deprecated.legacy-round-n-bigintconfig option or the
Accessing anonymous row fields via
.field1, etc., is no longer allowed. This behavior can be restored with the
deprecated.legacy-row-field-ordinal-accessconfig option or the
Finish joins early when possible if one side has no rows. This happens for either side of an inner join, for the left side of a left join, and for the right side of a right join.
Improve predicate evaluation performance during predicate pushdown in planning.
Improve the performance of queries that use
LIKEpredicates on the columns of
Improve the performance of map-to-map cast.
Improve the performance of
Improve the serialization performance of geometry values.
Improve the performance of functions that return maps.
Improve the performance of joins and aggregations that include map columns.
Server RPM Changes#
Add support for installing on machines with OpenJDK.
Add support for authentication with JWT access token.
JDBC Driver Changes#
Make driver compatible with Java 9+. It previously failed with
Fix ORC writer failure when writing
NULLvalues into columns of type
Fix ORC writers incorrectly writing non-null values as
NULLfor all types.
Support reading Hive partitions that have a different bucket count than the table, as long as the ratio is a power of two (
Add support for the
Prevent reading from tables with the
Partitioned tables now have a hidden system table that contains the partition values. A table named
examplewill have a partitions table named
example$partitions. This provides the same functionality and data as
Partition name listings, both via the
$partitionstable and using
SHOW PARTITIONS, are no longer subject to the limit defined by the
Allow marking partitions as offline via the
Thrift Connector Changes#
Most of the config property names are different due to replacing the underlying Thrift client implementation. Please see Thrift Connector for details on the new properties.
Allow connectors to provide system tables dynamically.
Simplify the constructor of
Block.writePositionTo()now closes the current entry.
|
OPCFW_CODE
|
Submitting a new version to a rejected app on App Store
I have submitted an app to the store and it was rejected, let's say the app version is 1.0 and the build is 1, and I have replied to Apple's rejection reasoning through Resolution Centre. The next day I have uploaded a newer version of my app, with version number 1.0, and build number 2. I have three questions:
1- Do I need to click on "Review for Submission" button, after uploading this new build.
2- When I go to Activity -> All Builds, I can see my new build, but with no status, and when I navigate to "App Store Versions", I don't see my new build, is it supposed to appear automatically with "Waiting for Review" status or what shall I expect?
3- If Apple accepted my justification in the "Resolution Centre", before reviewing my new build, this means my old build will appear in the store. What happens after my new build gets accepted, will it also appear on the store, or do I need to increment the app version and resubmit my app for review?
If your old build was rejected then it won't ever appear in the store. If it was "metadata rejected" and you have fixed that issue then there is no need to submit a new build; the will approve the existing build. The rejection notification they sent will clearly indicate if a new build is not required.
Assuming that a new build is required, after you upload the new build, it will need to be processed by Apple before it appears in your build list; This typically takes about 30 minutes but can be longer.
Once it has finished processing, go into the relevant "App Store Version", and scroll down to find your existing build, remove it and then click "add build" and select your new build.
At the top of screen click "save" and then "submit for review".
After Apple reviews and approves your submission it will appear in the store.
Thank you =) What about question number 3?
You can only ever have one build for a given version be approved; If Apple have approved the previous build for that version (typically after a rejection state of meta-data rejected is addressed) then you must submit a build with a higher version number to release it. If they have not approved the previous build then you can submit a new build for the current version.
@Paulw11 I have version 1.6.0. It is currently in Rejected state. I uploaded new version 1.7.0. But I can't find any possibility to remove current v 1.6.0. Could you help me? Apple support told me, that they can't assist.
You probably didn't want to upload 1.7.0; All you needed was a higher build number of 1.6.0; You can either do that, or you can go into your 1.6.0 in App Store Connect and change it to 1.7.0 by simply editing the version number.
Update (Aug 2019): After being lost for some time I finally figured out how to submit new build.
I had to go to App Store / App Information. Then I scrolled to General App Info where I changed the version number. Then I removed the build number and selected new build.
Then Save and Submit for review.
Best answer for the problem I was facing.
delete your rejected previous version then select your latest version to submit
Just click on the rejected version ,then go to the build section, click the build then we got all the information regrading latest version.Then add a new build then click on the save button.It works for me.
Update (2022-April):
Open "General-> App Review" on the left panel and delete the current submitted version
Open your current version -> Build then Hover the mouse over your current selected build version -> The delete icon will appear. Click the delete icon to delete the current build version and select another build version
|
STACK_EXCHANGE
|
Paragraph 1: Introduction
Embarking on the world of coding is more than just learning syntax; it’s about the thrill of creating. Exciting coding projects serve as the backbone of this creative journey, providing a hands-on and engaging way for coders to apply their skills and unleash their creativity. Let’s explore the dynamic realm of coding adventures through exciting projects.
Paragraph 2: Beyond the Basics – Elevating Skills
Exciting coding projects go beyond the fundamentals, challenging coders to elevate their skills. These projects often involve the integration of various technologies and the application of advanced coding concepts. This push beyond the basics is what propels coders to new heights, fostering a sense of achievement and continuous improvement.
Paragraph 3: The Creative Canvas of Coding
Coding is an art form, and exciting projects serve as the canvas for creative expression. Whether it’s developing a unique website, crafting a game, or implementing a novel algorithm, coders have the freedom to unleash their creativity through coding. These projects become a testament to the coder’s imagination and innovation.
Paragraph 4: Real-World Application – Solving Problems
Exciting coding projects often simulate real-world scenarios, allowing coders to apply their skills to solve practical problems. Whether it’s automating a task, creating a solution for a specific industry challenge, or developing a tool for efficiency, these projects bridge the gap between theoretical knowledge and real-world application.
Paragraph 5: Collaborative Coding Adventures
Coding is not a solitary endeavor in the world of exciting projects. Collaborative coding adventures bring together teams of coders to work on ambitious and impactful projects. This collaborative spirit not only enhances coding skills but also promotes teamwork, communication, and the exchange of diverse perspectives.
Paragraph 6: Game-Changing Innovations
Some of the most exciting coding projects result in game-changing innovations. From groundbreaking apps to revolutionary technologies, these projects have the potential to make a significant impact on industries and people’s lives. Coders engaged in such projects become architects of the future, driving technological advancements.
Paragraph 7: Learning Through Experimentation
Exciting coding projects encourage a culture of experimentation. Coders have the opportunity to try new technologies, explore unconventional approaches, and learn through hands-on experimentation. This freedom to explore fosters a mindset of curiosity and adaptability, essential qualities in the ever-evolving field of technology.
Paragraph 8: Building a Portfolio of Achievements
Each exciting coding project becomes a valuable entry in a coder’s portfolio. This portfolio showcases not only the technical skills but also the diversity of projects undertaken. For job seekers, a robust portfolio becomes a powerful tool, demonstrating practical skills and the ability to take on a variety of coding challenges.
Paragraph 9: Mentorship and Knowledge Sharing
Engaging in exciting coding projects often involves mentorship and knowledge sharing. Experienced coders guide and mentor those less experienced, fostering a culture of learning and growth within the coding community. This mentorship dynamic ensures the transfer of knowledge and skills from seasoned developers to the next generation.
Paragraph 10: Explore Exciting Coding Projects Today
To embark on your coding adventure and
|
OPCFW_CODE
|
Automate IE via Excel. Dropdown "change" stopped working
2+ years ago I asked basically this same question (Automate IE via Excel to fill in a dropdown and continue) and it worked perfectly until a couple of months ago. The code that I previously used is below:
Private Sub TriggerEvent(htmlDocument As Object, htmlElementWithEvent As Object, eventType As String)
Dim theEvent As Object
htmlElementWithEvent.Focus
Set theEvent = htmlDocument.createEvent("HTMLEvents")
theEvent.initEvent eventType, True, False
htmlElementWithEvent.dispatchEvent theEvent
My Web Scraping code is below:
Private Sub SelectDropDown()
ForceIEClose
Set ie = CreateObject("InternetExplorer.Application")
With ie
.navigate "https://a810-dobnow.nyc.gov/publish/Index.html#!/"
.Visible = True ' can be set to false to speed things up
End With
Do Until ie.readyState = 4: DoEvents: Loop
Set htmlDoc = ie.document
'htmlDoc.getElementsByClassName("white ng-scope")(3).Click ' {Device Search} button
' New: "Search by device type"
htmlDoc.getElementsByClassName("card border-tiles shadow h-100 padY-2 device-off")(0).Click ' {Device Search} button
On Error Resume Next
Set nodeDeviceTypeDropdown = htmlDoc.getElementById("DeviceOptions") ' {Device Type} DropDown
Application.Wait (Now + TimeSerial(0, 0, 1))
On Error GoTo 0
If Not nodeDeviceTypeDropdown Is Nothing Then
nodeDeviceTypeDropdown.selectedIndex = 4
To this point everything works fine and the 4th option on the drop-down is displayed on the page.
What's not working now is the following line of code:
Call TriggerEvent(htmlDoc, nodeDeviceTypeDropdown, "change")
I have tried just about everything imaginable in place of that "change" but nothing seems to work to indicate within IE that I've made my selection which would normally display the next drop-down that I need to work with??
The HTML code for the drop-down object is below:
<select required="" class="form-control ng-pristine ng-empty ng-invalid ng-invalid-required ng-touched" id="DeviceOptions" ng-model="Criteria.DeviceOptions" ng-class="{'has-error': (IsDeviceSearchClick && !Criteria.DeviceOptions)}">
<option class="selectPlaceholder" name="device" value="" hidden="" selected="selected">Select Device Type</option>
<option value="1">Boilers</option>
<option value="2">Elevator</option>
<option value="3">Crane Prototype</option>
<option value="4">Crane Device</option>
</select>
I'm not sure exactly what changed from before that stopped this from working. Any ideas on how to make this work would be appreciated. Thanks.
Original HTML Code that worked with my current vba shown below:
What did you change or update between then and now?
I didn't change anything, Solar Mike. What changed is the HTML Code on the site.
I've added a pic of what that code looked like before they made a change.
I see that the macro still works. I am surprised because IE is actively being phased out by MS and the site works with many dynamic elements.
Of course I looked at your problem and was also pleased to see that you have adapted the code to the new design of the page. Now it seems that something has changed again. But it's nothing bad. It's a time problem.
As with other parts of the code, a pause needs to be inserted to give the page time to build up the selection part. For me, it works when the pause is inserted at the following point:
'Open the Device Search section
htmlDoc.getElementsByClassName("card border-tiles shadow h-100 padY-2 device-off")(0).Click
Application.Wait (Now + TimeSerial(0, 0, 1)) 'Possibly adjust the pause
The delay doesn't work for me. I run my code up to the 'Set htmlDoc = ie.document' and exit
In the Immediate Window:
'htmlDoc.getElementsByClassName("card border-tiles shadow h-100 padY-2 device-off")(0).Click'
'Set nodeDeviceTypeDropdown = htmlDoc.getElementById("DeviceOptions")'
'nodeDeviceTypeDropdown.selectedIndex = 4'
All is good to here and the dropdown displays "Crane Device"
'Call TriggerEvent(htmlDoc, nodeDeviceTypeDropdown, "change")'
generates an error and doesn't display the next dropdown.
I've tried just about every variation of "change" but no luck.
I was hoping that you would reply to my question. Your answer to my original question 2 years ago was perfect. A simpler explanation of my new problem:
'nodeDeviceTypeDropdown.selectedIndex = 4'
displays the dropdown selection
with the original code after
'Call TriggerEvent(htmlDoc, nodeDeviceTypeDropdown, "change")
the dropdown would shrink and the next dropdown would appear
That "change" event no longer works and generates an error. The 2nd dropdown never appears.
Gonna' have to kick myself in the a$$ for this one but as often as I looked at this and retried it multiple times, I didn't put the pause in the exact place that you specified.
Just looked at it again today, put the pause where you suggested and it worked perfectly.
Why it worked for over a year without the pause and now needs it, I have no idea but it really doesn't matter at this point. Thank you for looking at this (again) and thank you for solving it for me (again).
@JohnWilson Oh, dear, sorry. Actually, I wanted to edit the whole macro into my answer again. But I couldn't do it right away and didn't think of it anymore. Good that you have now checked it again yourself and found the error. I think the layout of the page was changed, wasn't it? When the macro was first created, the controls were still enabled individually when the values were entered in the previous control.
|
STACK_EXCHANGE
|
I teach databases and data science in the Computer Science Department at Rice University. I received an S.B. degree in Computer Science and Engineering from the Massachusetts Institute of Technology, a professional M.S. degree in Computer Science from Stanford University, and a research M.S. degree and Ph.D. in Computer Science from Rice University.
Since the day I first graduated from college I have been interested in applying technology to improve healthcare. My first position was developing software for a medical device manufacturer. This job kindled my interest in the healthcare field. As part of my master’s degree in computer science, I enrolled in courses in biomedical informatics, a nascent field, at the time. This coursework included both didactic learning and a research project where I collaborated with a family practice physician. These early partnerships with medical professionals shaped both my philosophy and my path forward. After completing my master’s degree in computer science, I started working in a research group at Baylor College of Medicine. There, I had the chance to design, build, and support a prototype distributed electronic health record, for the citywide Teen Health Clinics1.
In 2008, while working at the University of Texas MD Anderson Cancer Center building a data warehouse to enable longitudinal research of surgical data, I realized that I wanted to use the data warehouse, not just build it. That epiphany led me to first complete a certificate in Health Informatics from the University of Texas School of Biomedical Informatics, and later to return to school (Rice University) for a PhD in Computer Science, with a focus on machine learning and data science. These areas built upon my strong computer science background and data centric work history. I brought a collaboration with researchers at MD Anderson to my PhD program and forged new partnerships with researchers at Baylor College of Medicine. In addition, I was awarded an NIH National Library of Medicine training fellowship for three years of my PhD. The fellowship provided additional training and mentorship and honed my informatics knowledge and skills.
After completing my PhD, I worked as a Data Scientist at Houston Methodist, a hospital system in Houston, Texas. There, I had the opportunity to work on clinically driven, data centric projects as well as helping to establish baseline processes for preserving patient monitoring data. In August 2017, I returned to academia, teaching a graduate course in Databases in the evenings at Rice University. That experiment led to a full-time teaching appointment in the Computer Science Department at Rice.
At Rice I have had the opportunity to explore and innovate in Data Science pedagogy. Shortly after returning to Rice, I joined a research group, the Children’s Environmental Health Initiative (CEHI). Joining this team has enabled me to stay involved in research and to continue to work on interesting and novel problems that also have the potential to motivate classroom learning. I continue to be fascinated with data – how do we collect, clean, manage, and use data to solve problems? How can we improve all of these steps?
I enjoy working in this intersection of healthcare data and computer science where I have the opportunity to build new relationships, understand the challenges faced by practitioners in both fields, help people answer questions that matter, and train the next generation of researchers and practitioners.
I am interested in exploring and innovating in data science pedagogy. I do this both through the development of interactive learning materials and by bringing lessons learned from research into the classroom in the form of examples, assignments, and exercises. Together with Lydia Kavraki and Chris Jermaine we have packaged a graduate level course on Data Science Tools & Models with an emphasis on healthcare data. The lecture slides are available here
and the teaching materials are available to instructors by request.
My research with the CEHI team is focused on data management in healthcare. There I explore approaches to making healthcare data more accessible and available.
|
OPCFW_CODE
|
UVic Libraries manages both licensed and open data resources. We also manage data and statistics packaged specifically for researchers as well as data independently produced by colleagues during the research process. Both types of data can be public or restricted.
If you're looking for historical data, remember that our Special Collections & University Archives steward "data" from generations of researchers across disciplines. Contact them directly to see what is available. Example: environmental & oceanic archives, field notes, etc.
Abacus holds UVic Libraries' collection of licensed datasets, including public use microdata (PUMFs) from StatCan censuses, other social and health surveys, public opinion polls, and spatial data for GIS. Access is restricted to UVic users.
Borealis, the Canadian Dataverse Repository is a national data repository for research data. The service, supported by UVic Libraries, is free for UVic researchers to deposit their datasets, which are registered with DOIs and are stored in a secure environment on Canadian servers. Researchers can choose to make their datasets available to the public, to specific individuals, or to keep it private.
With Borealis, researchers can search across research data from over 65 Canadian universities.
Lunaris provides a single point of search for research data held in Canadian data repositories, including academic institutions, departments at all levels of government, and research organizations. There are over 80,000 datasets from over 100 Canadian repositories and data collections currently indexed by Lunaris.
For access to confidential microdata from Statistics Canada census and surveys, contact the UVic Research Data Centre.
The UVic RDC provides access, for approved projects, to a growing variety of Statistics Canada confidential microdata household, population and workplace files. The microdata used by researchers come primarily from Statistics Canada Survey Master files. Increasingly, the Research Data Centres (RDCs) are repositories of administrative records from a variety of sources including tax, employment insurance, social assistance, and hospitalization records.
The UVic Libraries collects hundreds of websites as part of its web archiving efforts using Archive-It. UVic Libraries can help researchers access a variety of data related to these collections via the to Archive-It's Research Services, including:
WARC and their predecessor ARC files are the files into which data crawled using Archive-It is stored. Each file may contain multiple digital objects, including HTML, images, and videos. (Note that collection data can consist of both WARC and ARC files depending upon when they were archived through our service. Throughout these guides, the term “WARC files” refers to both WARC and ARC files.)
Longitudinal Graph Analysis files are archival web graph files that include a complete list of what URIs link to what URIs, along with a timestamp, from a collection’s origin through present. They are ~1% the size of a collection's aggregate WARC files, and deliver as a ZIP container of two files:
Web Archive Named Entities are files that use named-entity recognition tools to generate a list of all the people, places, and organizations mentioned in each URI in a web archive, with a timestamp of when the URI was captured. The purpose is to link people, places, and organizations to time. A WANE dataset is generated using the Stanford Named Entity Recognizer software (http://nlp.stanford.edu/software/CRF-NER.shtml) to extract named entities from each textual resource in a collection. The analyzer uses an English model 3-class classifier to extract names that correspond to recognized Persons, Organizations, and Locations. WANE files are less than 1% the size of their corresponding WARC files, and are structured as a JSON object per line: URL ("url"), timestamp ("timestamp"), content digest ("digest") and the named entities ("named_entities") containing data arrays of "persons", "organizations", and "locations".
Please contact Corey Davis for more information.
UVic catalogues or has access to thousands of data sources. To specifically search for datasets in "Library Search" (Primo):
1. Enter a search term in the search box as you would for any other resource.
2. Once you are directed to the results page, use the "Refine Results" filter on the left-hand side of the page. Go to "Content Type" and then "Show More"
3. Now choose "Datasets" and then click "Apply Filters" (green button)
4. You will now see the datasets that have been added to our catalogue (note: not all datasets are catalogued).
Comprehensive collection of Industry Market Research and Industry Risk Ratings.
Global market analysis software platform, which analyses the industry in countries around the world.
Global market and consumer statistics on over 80,000 topics from more than 22,500 sources.
Canadian imports, exports, and world trade data. Each table contains historical timeseries data
Research reports and data related to Corporate Social Responsibility, Economic Trends, Industrial Trends and Public Policy and other topics.
CIHI collects comparable, pan-Canadian data on different aspects of the health system. View a summary of all the data available by year and jurisdictional coverage (XLSX).
Population Data BC (PopData) provides academic researchers with access to a comprehensive collection of population health data, including health services, education, workplace and environmental data. These data sets include longitudinal, person-specific, de-identified data on BC's 5 million residents. Access is provided by following a data request process.
BCCDC's data dashboards and reports provide statistics on diseases in BC and on the health of our populations and communities.
Access to all microdata and documentation from the collection of available Statistics Canada public use microdata files (PUMF). Public use microdata files contain anonymized, non-aggregated data. Using statistical software, the end user can group and manipulate data variables in these files to suit data and research requirements.
Real Time Remote Access (RTRA) is an online tabulation tool allowing subscribers to run SAS programs in real time to extract results from masterfile subsets in the form of tables. RTRA system data users do not gain direct access to the microdata and cannot view the content of the microdata file. RTRA data users can calculate frequencies, means, percentiles, percent distribution, proportions, ratios and shares.
CANSIM (Canadian Socio-Economic Information Management System) is the Statistics Canada's computerized database of time series covering a wide variety of social and economic aspects of Canadian life. The use of CANSIM @ CHASS is restricted to current faculty, staff, and students.
The Canadian Census Analyser provides access to select Canadian demographic data from Statistics Canada's Census of Canada. Included are census aggregated profile tables and census microdata. This tool allows users to subset files and export data in varied formats including html, text, spreadsheet, and to statistical software such SAS and SPSS. Coverage: 1961 to 2021 censuses. The use of Canadian Census Analyzer @ CHASS is restricted to current faculty, staff, and students.
The BC Data Catalogue provides the easiest access to government's data holdings, as well as applications and web services. Thousands of the datasets discoverable in the Catalogue are available under the Open Government License - British Columbia.
The Ipsos Canadian Public Affairs Dataverse is a repository of over 60 Ipsos Canada surveys that shed light on Canadian elections, culture, politics, and society. All data is open access, supported by the Laurier Institute for the Study of Public Opinion and Policy
Data archive of more than 250,000 files of research in the social and behavioral sciences. ICPSR collaborates with a number of funders, including U.S. statistical agencies and foundations.
CanMap Content Suite contains over 100 unique and rich content layers. Each layer has a unique file and layer name with associated definitions, descriptions, attribution and metadata. All layers, with a few exceptions, are vector data consisting of polygon, polyline, or point geometry representation.
Satellite, air photo and other remote sensing images and data world
Our Maps collection contains over 68,000 paper maps on many themes/topics, including biology, culture, economic, geology, history, hydrography, military, streets, social, topography, and transportation.
Arts & Humanities data are as complex and varied as the disciplines that comprise these subjects. Below are three pieces that are useful in thinking about "data" in the Arts and Humanities
Our Data Management for Humanists website brings together training materials from recent SSHRC-funded workshops held across Canada to build foundational skills in research data management (RDM) for researchers in the Humanities.
Special Collections & University Archives contain resources from generations of UVic & local researchers; these include a seaweed inventory project (Alan Austin Fonds); underwater salmon research; B.C. Environmental associations (Mallard Fonds); Transgender archives; various scientific field notes; and more. Contact Special Collections or University Archives to learn more.
There is a difference between data that can be used FOR research, and data that are produced FROM the research process. Knowing what kind of data you are looking for ("for" or "during") will help you locate the resources you need.
Special Collections & University Archives contain resources from generations of UVic & local researchers; these include a seaweed inventory project (Alan Austin Fonds); underwater salmon research; B.C. Environmental associations (Mallard Fonds); various scientific field notes; and more. Contact Special Collections or University Archives to learn more.
|
OPCFW_CODE
|
Failed while connecting to sparklyr to port (8880) for sessionid (8540): Gateway in port (8880) did not respond.
Sparklyr package -0.5
Parameters: --class, sparklyr.Backend, --packages, 'ai.h2o:sparkling-water-core_2.11:1.6.7','ai.h2o:sparkling-water-ml_2.11:1.6.7','ai.h2o:sparkling-water-repl_2.11:1.6.7', '/home/ubuntu/R/x86_64-pc-linux-gnu-library/3.3/sparklyr/java/sparklyr-2.0-2.11.jar', 8880, 8540
--- Output Log ----
file:/home/ubuntu/.m2/repository/ai/h2o/sparkling-water-repl_2.11/1.6.7/sparkling-water-repl_2.11-1.6.7.jar
==== local-ivy-cache: tried
/home/ubuntu/.ivy2/local/ai.h2o/sparkling-water-repl_2.11/1.6.7/ivys/ivy.xml
-- artifact ai.h2o#sparkling-water-repl_2.11;1.6.7!sparkling-water-repl_2.11.jar:
/home/ubuntu/.ivy2/local/ai.h2o/sparkling-water-repl_2.11/1.6.7/jars/sparkling-water-repl_2.11.jar
==== central: tried
https://repo1.maven.org/maven2/ai/h2o/sparkling-water-repl_2.11/1.6.7/sparkling-water-repl_2.11-1.6.7.pom
-- artifact ai.h2o#sparkling-water-repl_2.11;1.6.7!sparkling-water-repl_2.11.jar:
https://repo1.maven.org/maven2/ai/h2o/sparkling-water-repl_2.11/1.6.7/sparkling-water-repl_2.11-1.6.7.jar
==== spark-packages: tried
http://dl.bintray.com/spark-packages/maven/ai/h2o/sparkling-water-repl_2.11/1.6.7/sparkling-water-repl_2.11-1.6.7.pom
-- artifact ai.h2o#sparkling-water-repl_2.11;1.6.7!sparkling-water-repl_2.11.jar:
http://dl.bintray.com/spark-packages/maven/ai/h2o/sparkling-water-repl_2.11/1.6.7/sparkling-water-repl_2.11-1.6.7.jar
::::::::::::::::::::::::::::::::::::::::::::::
:: UNRESOLVED DEPENDENCIES ::
::::::::::::::::::::::::::::::::::::::::::::::
:: ai.h2o#sparkling-water-core_2.11;1.6.7: not found
:: ai.h2o#sparkling-water-ml_2.11;1.6.7: not found
:: ai.h2o#sparkling-water-repl_2.11;1.6.7: not found
::::::::::::::::::::::::::::::::::::::::::::::
:: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS
Exception in thread "main" java.lang.RuntimeException: [unresolved dependency: ai.h2o#sparkling-water-core_2.11;1.6.7: not found, unresolved dependency: ai.h2o#sparkling-water-ml_2.11;1.6.7: not found, unresolved dependency: ai.h2o#sparkling-water-repl_2.11;1.6.7: not found]
at org.apache.spark.deploy.SparkSubmitUtils$.resolveMavenCoordinates(SparkSubmit.scala:1076)
at org.apache.spark.deploy.SparkSubmit$.prepareSubmitEnvironment(SparkSubmit.scala:294)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:158)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
---- Error Log ----
Calls: spark_connect ... tryCatchOne -> -> abort_shell -> -> force
Execution halted
Hi @raghav20, are you trying to use sparkling water? and if so, can you confirm that it and H2O are installed? There process would have been something like this: http://spark.rstudio.com/h2o.html
I am getting a very similar thing on windows , see my 2 posts above showing my errors and messages in connction with ports being available. Not using H20 , just trying to run the same instructions as per tutorial on sparkly R R studio site. Get error on the sc command. I have a java home environmental variable and initiated the "Sys.getenv('JAVA_HOME')" and get the response "[1] "C:\Program Files\Java\jdk1.8.0_112"
Is this correct ? ie the Java version etc and that I am running JDK andnot JRE?
Thanks
Graham
I am getting a very similar thing on windows , see my 2 posts above showing my errors and messages in connction with ports being available. Not using H20 , just trying to run the same instructions as per tutorial on sparkly R R studio site. Get error on the sc command. I have a java home environmental variable and initiated the "Sys.getenv('JAVA_HOME')" and get the response "[1] "C:\Program Files\Java\jdk1.8.0_112"
Is this correct ? ie the Java version etc and that I am running JDK andnot JRE?
Thanks
Graham
Yes I have both Installed @edgararuiz
I am running it this way ./spark-1.6.2-bin-hadoop2.6/bin/spark-submit --packages ai.h2o:sparkling-water-core_2.11:2.0.0,ai.h2o:sparkling-water-examples_2.11:2.0.0 --num-executors 5 --master spark://ip-X.Y.Z.U.us-west-2.compute.internal:7077 --driver-memory 25g --executor-memory 20g ./spark-1.6.2-bin-hadoop2.6/examples/src/main/r/others.R
@edgararuiz Please let me know.
Hi @raghav20 , are you running this inside RStudio, an R shell or a Spark shell?
Spark-submit
This would be a feature request, currently we don't support running applicaitons in spark-submit the way python would, as in spark-submit <params> --py-files python-file.py, it would certainly open interesting batch scenarios, worth considering.
When Can I hope for it to get resolved ?
|
GITHUB_ARCHIVE
|
Getting Started With Strapi EP2: Collection Types
Subscribe On YouTube
In the first episode we’ve covered everything what is needed to setup Strapi and get familiar with the admin user interface. In this second episode we’re diving a little bit deeper into Strapi. We’ll setup custom Strapi collection types for the data which should be managed with strapi. We’ll also explore how to use the REST-based interface which is automatically provided.
Creating A New Collection Type
Let’s start by creating a new collection type to manage courses data. In the adminnistration panal select the link COntent-Types Builder from the left-side navigation menu. You should then see a list of already existing collection types:
Click on the link Create new collection type to start creating a new one. You should then be able to see the Create a collection type form:
In input field Display name you need to enter the name of the new collection type, e.g. Course. You can then click on button Continue which will redirect you to the the view were you can start adding fields to your collection type:
Here you can select from a list of available data types to add new fields to the collection. For the first field to add select data type Text and then specify the name of the new field in the next screen:
For The Course collection type add the following data field:
- title (of type Text)
- hours_video (of type Number)
- number_of_lectures (of type Number)
- rating (of type Number)
- url (of type Text)
The collection type view should then look like what you can see in the following screenshot:
Let’s create a second custom collection types named Author. Add the following fields to this collection type:
- first_name (of type Text)
- last_name (of type Text)
Also add a relation field to this new collection in order to create a many-to-many relation to the Course collection type:
This means that every course can have multiple authors and each author can be assigned to multiple courses.
The Author collection type should then look like the following:
Now it’s time to add some content to our collection types. Let’s start with the Author collection type by selecting the Authors entry from the left-side menu and then use the button Add New Author to add data:
When adding new data to a collection type Strapi providers you with a data entry form which includes the fields which have been added to the collection type. In the following screenshot you can see the form which is provided if you create new data for the Course collection type:
Use this form to create some Course data sets as well.
Now that we’ve added data into our collection types we’re ready to test the REST API which is automatically provided for the newly added collection types. E.g. we’re able to initiate a HTTP GET request to retrieve all Course data to http://localhost:1337/courses. The result of this request is then displayed in the browser:
Ok, this is not what we’ve expected. The HTTP Response code is 403 which means that accessing this endpoint is forbidden. No courses data is returned. So how can we solve this authorization problem? Let’s go back to Strapi’s administration panel and open up the Roles & Permissions plugin:
Here you can see that two roles are defined by default:
- Authenticated: This is the role which is given to authenticated users by default
- Public: This is the role which is given to unauthenticated users by default
The HTTP GET request we’ve just initiated as done in an unauthenticated way so that the Public role is applied here. Click on the Public role to the see role definition in detail.
In the Permission section we’re now able to add specific permissions for the Course and Author collection type like you can see in the following screenshot:
The premissions we’re adding here are:
By the find permission the HTTP GET endpoint /courses is covered, so that the request now should lead to a result similar to the following:
The complete list of courses is returned in JSON format, the access is no longer forbidden.
Advertisement: Top Online Courses
The Complete Strapi Course*
Build Apps at the speed of thought with the simplest most versatile open source headless CMS
* Affiliate Link / Advertisement: This blog post contains affiliate links, which means that if you click on one of the product links and buy a product, we’ll receive a small commission. There are no additional costs for you. This helps support the channel and allows us to continue to make videos like this. Thank you for the support!
|
OPCFW_CODE
|
Add automated changelog, bump version
This PR is getting us ready to release Chef 12.6.0
It adds infra to start using an automated process to generate change logs
It bumps up the Chef and Chef-Config release version to 12.6.0
The original change log has been preserved in HISTORY.md.
@lamont-granquist @tas50 @btm @jkeiser
We should make sure we cleanup the PR names and add the labels before we ship 12.6 so we get a more useful automated changelog. The current one isn't very useful outside of dev circles
:-1:
this generates a complete mess of the history which will be misleading and useless to users and we are not going to go through and fix it all. all the mega-PRs are going to be collapsed into one PR and all the detail will be lost.
https://github.com/skywinder/github-changelog-generator/issues/304 is an absolute hard blocker for this.
i started off wanting to use github-changelog-generator for chef/chef, that's why i shipped it ffi-yajl and friends first, then went over to foodcritic and tried to do it with a long history file, and got blocked and stopped...
So this PR also wound up under 12.5.1 even though it hasn't been released yet and certainly isn't in 12.5.1 so there's some kind of bug there:
https://github.com/chef/chef/pull/4105
That PR also exemplifies a huge workflow issue when it comes to chef/chef and merging outstanding ready-to-merge PRs. The reason why that happens is that doing the rebase and merge of all those PRs one-by-one is annoying as hell, but we can't just click the 'merge' button on those because in many cases those PRs ran their travis results against a fork of master that was 12 months in the past and even though there's still not merge conflict there's no guarantee the build is still green. Going through 20 of those one by one is a terrible policy towards actually getting work done because I started doing that with #4096, #4097, #4098, #4099, #4100 until after 5 of them I got sick of it and cut #4105 to merge the remaining 16 PRs and got that one merge done faster than it took to merge the other five (roughly one day of work to do 5, and one day of work to do the next 16 in one go).
The reason why is that now none of those PRs appear in the autogenerated CHANGELOG. So for example I manually added #3650 to the CHANGELOG but github-changelog-generator threw it away. I think we can add closed PRs into github-changelog-generator but that introduces noise in PRs that we closed and didn't merge and aren't really changes and there's no good way to distinguish between closed PRs that were actually added somehow and closed PRs that were abandoned/wontfixed. We can start going through and start making it policy to add changelog_skip or wontfix or whatever tags to all the closed PRs that were never actually merged, but now we've got a huge history problem which is compounded by the bugginess with the since_tag.
So we've got:
at least two bugs in github-changelog-generator to fix (the since_tag and the fact that it thinks #4105 is already released)
we need to figure out how to integrate tagging merged PRs correctly going forwards that does not destroy the workflow of the poor bastard who is merging all the outstanding ready-to-merge PRs.
Ah I see, #4105 got "shipped" in "12.5.1-omnibus".
|
GITHUB_ARCHIVE
|
Which Linux distributions support recent TeX Live with package manager?
I am currently having massive issues getting TeX Live to run on Ubuntu 12.04, so I am considering running a VM on VirtualBox with another Linux distribution that has better support for the latest TeX Live (2012).
I want to have automatic package manager installation a la apt and synaptic and don't want to interfere with the system.
What would you recommend to me?
Use Ubuntu 12.10...
I assume you've already tried installing through the official backports PPA. Also see "How do I install the latest TeX Live 2012?".
Yes, I did that already. However, I had a problem using biber with the backports...
This is an issue that is presently unsolved in other contexts as well: should smaller language-oriented package managers exist in Linux, or should everything go through the distribution's general-purpose package manager? For instance, the Debian and Ruby communities had some disagreement on this exact issue: http://wiki.debian.org/Teams/Ruby/RubyExtras/OnRubygems.
Practically all big distributions have already switched to TL2012 or their upcoming release includes 2012. Debian/wheezy (currently testing, will be released soon) ships 2012, and I have already more uptodate packages of current tlnet in Debian/experimental.
Ubuntu from 12.10 onward includes TL2012 by default (same packages as in Debian). For 12.04 you need to use the mentioned PPA.
For Fedora 16,17,18 there are TL2012 packages, in 18 it is included (unsure here!)
openSuSE 12.2 has TL2011 package and TL2012 from an additional repository, openSuSE dev contains TL2012 by default.
FreeBSD and OpenBSD has TL2012 in the ports
NetBSD is in progress
Mac has MacTeX which is a repackaged TL2012, so you always get the latest updates, plus a nice integration.
Hope that gives a small overview on what is available.
I would indeed install it "manually"; here is a guide on how to do it. If it is not an option at all, there is another guide here but I haven't tried it.
And on TeX.SX is How to install “vanilla” TeXLive on Debian or Ubuntu?.
I do all my LaTeX stuff in Kile on Sabayon (Gentoo based). They have TexLive 2012 and a very convenient package management system.
It is very easy to install manually, but you will need to know your way around Linux to do so. Also remember to add the new path to /etc/environment not just to bashrc. I'd also suggest not using the "create symlinks" feature in the installer. After installation, remember the equiv stuff (see the comment about vanilla debian), by installing that, Ubuntu is told that something similar to Ubuntu tl is installed and thus editors will not install Ubuntu tl
I use ArchLinux and it works very well for me. TeXLive is available in the official repository, which usually follows the upstream releases (of all packages) very quickly. (The package manager of ArchLinux is called pacman.)
You can look it up on repology:
https://repology.org/metapackage/texlive/versions
It will list all distributions which ship a special package. Here: texlive
|
STACK_EXCHANGE
|
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
//using Wolfram.NETLink;
// C:\Program Files\Wolfram Research\Mathematica\7.0\SystemFiles\Links\NETLink
namespace HTLib2
{
public partial class Mathematica
{
public static void Run(string scriptpath, bool waitForExit)
{
string currpath = HEnvironment.CurrentDirectory;
HEnvironment.CurrentDirectory = HFile.GetFileInfo(scriptpath).Directory.FullName;
{
System.Diagnostics.Process mathm;
string argument = "-script \""+HFile.GetFileInfo(scriptpath).Name+"\"";
mathm = System.Diagnostics.Process.Start(@"C:\Program Files\Wolfram Research\Mathematica\8.0\math.exe", argument);
if(waitForExit)
mathm.WaitForExit();
}
HEnvironment.CurrentDirectory = currpath;
}
}
}
|
STACK_EDU
|
Web Accessibility For All!
In this article, I want to share my previous experience with Web Accessibility, even these days I’m just working as a Back end developer but I think this part of the software development is very important to attend.
What is Web Accessibility?
Web accessibility or eAccessibility is the inclusive practice of ensuring there are no barriers that prevent interaction with, or access to, websites on the World Wide Web by people with physical disabilities, situational disabilities, and socio-economic restrictions on bandwidth and speed. When sites are correctly designed, developed, and edited, more users have equal access to information and functionality. [Wikipedia]
web accessibility means that EVERYONE can use a Website.
Accessibility is about EVERYONE.
Web Content Accessibility Guidelines
The Web Content Accessibility Guidelines (WCAG) are part of a series of web accessibility guidelines published by the Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C), the main international standards organization for the Internet.
They are a set of recommendations for making Web content more accessible, primarily for people with disabilities—but also for all user agents, including highly limited devices, such as mobile phones. WCAG 2.0, was published in December 2008 and became an ISO standard, ISO/IEC 40500:2012 in October 2012. WCAG 2.1 became a W3C Recommendation in June 2018. [Wikipedia]
Versions 2.1 and 2.2 only solved a tiny portion of the issues. The WCAG guidelines are still confusing, and the way the levels work is not ideal. They needed a complete overhaul, and in late 2016 a task force was put together to create a brand new version, WCAG 3.0.
Here’s more information, if you’re interested: Accessibility Guidelines 3.0
Benefits of an Accessible Website
- Increasing the market and the number of web audiences
- Ideal SEO
- Creating positive public relations
- You will be protected from discrimination and complications that the law imposes on programs that do not have these features
- Enhancing the website’s performance and usability
Web Accessibility Checklist
If you want to make sure that your website is accessible, accommodate your website with the below checklist:
- Images should have meaningful alternative text
- Links should be visually identifiable
- Use descriptive section headings
- Use correct semantic HTML element structure for your content
- Forms have descriptive labels
- Information should not depend on color, sound, shape, size, or visual location
- Text and background color should have sufficient contrast
- Content scales properly when zoomed/enlarged
- Use a descriptive title tag
- Support keyboard navigation
- Focus states should be visible for keyboard users
- Use correct HTML5 input types
- Content that automatically changes has the ability to be paused (Users must be able to pause movement on automatically changing content (carousels, slideshows, etc.))
- Limit or remove any flashing elements
- Users should be able to navigate content using a screen reader
- Allow keyboard users to skip navigation (Give keyboard users the ability to skip over all of the links, corporate icons, search bars, and other navigation elements and allow them to navigate to the beginning of the page content.)
- Offer multiple ways to find pages on your website (sitemap page, search function, main navigation)
- Avoid mouse-only interactions
- Set focus on modals, popovers, alerts, etc.
- The site should not time out unexpectedly
- Multimedia should have alternative ways to be consumed
- Ensure audio and video are not played automatically unless that is the expected behavior
- Use the HTML lang attribute
- Use understandable inputs labels
- Forms have helpful and accessible error and verification messages
- Make data available for graphs, charts, maps, SVGs, etc. through assistive technology
- Links should be descriptive and provide intent
- Table data is accessible to non-sighted users
- Use ARIA landmarks where applicable
- Decorative images should not be visible to screen readers
- Pages are understandable with no styles enabled
- Web page size should not exceed 500k
- HTML should be valid and error-free
You can follow the checklist details here:
- Accessibility checklist – webflow
- Web Accessibility Checklist
- Checklist of Checkpoints for Web Content Accessibility Guidelines 1.0
Web accessibility topics to consider
As well as the checklist which I mentioned here, let’s discuss some important topics with examples.
Good Color Choices
It’s estimated that 4.5% of the global population experience color blindness (that’s 1 in 12 men and 1 in 200 women), 4% suffer from low vision (1 in 30 people), and 0.6% are blind (1 in 188 people). It’s easy to forget that we’re designing for this group of users since most designers don’t experience such problems.uxbooth website
Here you will learn more about Good Typography, Forms, etc. Accessibility for Visual Design
Easy To Touch
The minimum size for choosing a button on a website page is around 44 to 48 pixels, which means when user wants to select or press a button on their phone touchscreen they should do that without any touch problem.
The structure of HTML codes
In HTML codes, it’s better to follow a correct structure, for example, in nested content, when we have an h1 tag, we should first have an h2, and then an h3, etc.
To view your code structure, I recommend installing the HTML5 Outliner Chrome Extension.
Use the correct type of the lists in your codes, for example, if you have an un-order list use the
ul tag, if you have an ordered list use
ol tag, and if you have a descriptive list use
by default, images are not accessible! if you want to have an accessible image you should add an
alt attribute in your image HTML tag.
The aria-describedby attribute is used to indicate the IDs of the elements that describe the object. It is used to establish a relationship between widgets or groups and text that described them. This is very similar to aria-labelledby: a label describes the essence of an object, while a description provides more information that the user might need.developer.mozilla.org
<button aria-describedby="trash-desc">Move to trash</button> … <p id="trash-desc">Items in the trash will be permanently removed after 30 days.</p>
Skip Navigation Button
It must be possible for a user to access the content of the website regardless of how he enters the site, and no matter what tool he uses.
When a website has different images and content on each page, the “Skip to Main Content” button lets the user see the main content of the page. Assume that the user navigates different pages in a user panel with about 20 menu items. Using the keyboard, he/she has to press the Tab key 20 times on each page to reach the main content, which is very boring and annoying. It is very practical to place the button to access the content and the main part of the page for this reason.
here is an example for the old version of the sitepoint website.
If you are using the Abbreviation element, use the
title attribute for it.
Using ARIA: Roles, states, and properties
ARIA defines semantics that can be applied to elements, with these divided into roles (defining a type of user interface element) and states and properties that are supported by a role. Authors must assign an ARIA role and the appropriate states and properties to an element during its life cycle unless the element already has appropriate ARIA semantics (via the use of an appropriate HTML element). The addition of ARIA semantics only exposes extra information to a browser’s accessibility API, and does not affect a page’s DOM.
<div id="saveChanges" tabindex="0" role="button" aria-pressed="false">Save</div>
ARIA states and properties
ARIA attributes enable modifying an element’s states and properties as defined in the accessibility tree.
Section 508 Amendment to the Rehabilitation Act of 1973
In 1998 the US Congress amended the Rehabilitation Act to require Federal agencies to make their electronic and information technology accessible to people with disabilities. Section 508 was enacted to eliminate barriers in information technology, to make available new opportunities for people with disabilities, and to encourage development of technologies that will help achieve these goals. The law applies to all Federal agencies when they develop, procure, maintain, or use electronic and information technology. [Wikipedia]
Summary of Section 508 technical standards
- Software Applications and Operating Systems: includes accessibility to software, e.g. keyboard navigation & focus is supplied by a web browser.
- Web-based Intranet and Internet Information and Applications: assures accessibility to web content, e.g., text description for any visuals such that users of with a disability or users that need assistive technology such as screen readers and refreshable Braille displays, can access the content.
- Telecommunications Products: addresses accessibility for telecommunications products such as cell phones or voice mail systems. It includes addressing technology compatibility with hearing aids, assistive listening devices, and telecommunications devices for the deaf (TTYs).
- Videos or Multimedia Products: includes requirements for captioning and audio description of multimedia products such as training or informational multimedia productions.
- Self-Contained, Closed Products: products where end users cannot typically add or connect their own assistive technologies, such as information kiosks, copiers, and fax machines. This standard links to the other standards and generally requires that access features be built into these systems.
- Desktop and Portable Computers: discusses accessibility related to standardized ports, and mechanically operated controls such as keyboards and touch screens.
Web Accessibility Tests
There are several tools and websites available for checking whether your website is accessible:
- freedomscientific – Accessibility Testing Products
- NVDA for Windows OS
- Colour Contrast Analyser (CCA)
- Accessibility Developer Tools – Chrome Extension (Offered by: Google Accessibility)
- tenon.io – Accessibility as a Service
- a11yproject – Learn the fundamentals and principles behind the accessible design
- WAVE Web Accessibility Evaluation Tool
- Accessibility in Google Search
With the Wave website, I checked my personal blog’s accessibility and can see the list of issues and how to fix them and improve my blog’s accessibility.
|
OPCFW_CODE
|
US$1.00raised of $2,000.00 goal goal
The campaign owner has stopped the page from accepting further donations. Please contact them if you'd still like to donate
Warmest greetings to all,
I am Aiman, a 25-year old fresh graduate from Malaysia, seeking a helping hand to aid me in obtaining enough fund to kick-start my business. I am a man with passion that strongly believe that I can make use of my passion as a strong driving force to earn me a living.
A couple years back, I got myself immersed in the business of selling my homebred betta fish. Also known as the Siamese fighting fish, these beautiful fish is very popular here and the market is promising. I was still in my study years when i started the project of breeding and selling these betta fish, and realized that this business can be started with quite a reasonable sum of fund. However, it will require a big amount of space as each of this fish must sexed and all the males must be placed individually. A pair of betta fish will produce around 200+ fry at least hence will definitely require a large space to accommodate a proper farm. The farm will have to have a proper water storage tanks, a big tank for this matter as this betta fish requires frequent water change. I will also need space as to allow me to grow my own worm culture as food supply to the fish.
The worm culture mentioned above refers to Grindalworm culture and Microworm culture. Both are nutritious protein source for betta fish, both young and adults. Apart from selling Betta fish, I also sell started culture of these worms to other breeder through my Facebook page "BettaBuntal". Similarly, space is required to grow an adequate amount of worms to ensure continuous supply of food for the betta fish and spawns.
Apart from fishkeeping and breeding Betta fish, I also love ball pythons and have the dream to purchase a breeding pair and start to set up a breeding facility. Day by day, this reptile is drawing more attention and finding their way into the home of many new owners. Unlike fish, reptiles such as ball phyton is still a creature of nightmare for both of my parents. I had a pet ball python not long ago, which I had to sell to a new owner due to strong disapproval by both of my parents. Despite the disapproval, I still keep the dream of keeping and breeding Ball pythons.
Currently, my facebook page; BettaBuntal is not operating as much. As I am now staying with my parents and my 4 siblings, there is not much room for me to make worm cultures for sale, nor do I have space to breed Betta fish.
With the success of this fund raising, I will have enough capital to kick start my business of breeding and selling Betta fish, ball pythons and worms.
At this point of time, I am ready to embark on my journey to re-live my passion and turning this humble hobby of mine into a living. It is my dream to start up a business. I am experienced when it comes to Betta fish breeding and producing starter cultures of worms, and I have also found an experienced ball python breeder to guide me. Hence I really need your help to raise a fund to start up my business.
Thanks for reading up to this =) I am truly grateful.
I have attached few photos: my worm cultures, a pair of Betta fish and my leopard morph ball phyton.
- Aizuddin Aiman
- Campaign Owner
No updates for this campaign just yet
|
OPCFW_CODE
|
Writers live a solitary existence, much like a tiger, but at least tigers get to meet other tigers during mating season.
Unlike actors or musicians, who tend to congregate in clumps, writers are by necessity solitary. And this is fine when you’re writing your book, frankly two’s a crowd at that point, but afterwards as you attempt to navigate the dark and murky world of actually getting something published; be it on paper or online, the isolation can be quite terrifying, especially as you try to get people to read your work, which can feel at times like trying to move a house by shoulder barging it repeatedly. Over the years I’ve often thought how nice it would be to sit down and talk to another tiger, I mean writer, and find out how they did it.
Which is why I was greatly surprised when I found one sitting in the window of my local bookshop this week.
Embarrassingly I am no stranger to women in windows, I have visited both Amsterdam and Frankfurt as part of my ‘Drink the World‘ column, but this was a quiet bookshop on a suburban street, and all this particular woman was doing was tapping away happily on her Mac, a sign taped to the glass that read “Bestselling author will answer your questions for 15 minutes for a contribution to local dogs home.”
The author in question was Isabel Losada, and despite both being journalists, as writers go we couldn’t be more different. She writes candid, sassy prose about her travels in Tibet and elsewhere, whereas my work deals with rampant drinking, odd sex, and important social questions like how to effectively steal a llama using a Volkswagen camper. However, much about the experience of writing is universal; the fears and doubts, the ability to love something one moment and despise it the next, how you can struggle to write a single sentence one day, then write a chapter in an hour.
One of the things we all certainly share are the rejections, which can be both strange, myriad and brutal. For instance I was once rejected by an agent for not having enough gardening in my work. As my book is set in the world of late night bars and clubs and therefor takes place almost entirely at night, I felt that the lack of gardening was somewhat unsurprising. I confess to this day I wonder exactly what was happening in that man’s life, or indeed his garden, when he wrote that. She went one better and told me how she’d written to a member of the Monty Python team asking him to read her work. This unnamed Python who I shall call John for the purposes of this story, replied with a pleasant letter detailing exactly how long he had left on this earth, rounded up to the nearest hour, how much of that was allocated for sleeping, eating and making love, and of what left, how much was currently earmarked for reading, before politely declining the opportunity.
I asked about good and bad reviews, of which like all writers I’ve had both. She in turn asked if I had ever bought a book based solely on a review. She, it turns out, had sold 50,000 copies of one title without getting a single one.
Finally I asked what advice she would give to a new writer.
“Appreciate what you’re asking of your reader.” She told me. “You’re asking them to give up eight hours of their life. That’s a big ask. Respect that. And make sure you’re worth the sacrifice.”
I can say honestly that I have rarely left a window feeling less alone and more inspired. I wondered then how many other writers, active or aspiring, or indeed readers, might be interested in what goes on behind the scenes of a bestseller. Of what happens when the noble occupation of writing a novel, gives way to the grubby and ignoble s**t fight that is pitching and marketing one.
Rather than sit in a window I thought I’d blog it right here, and confess all the weird things you have to do whilst feeling all the time like you’re making it up as you go along.
Next week — Networking — the art of making yourself sound awesome whilst pretending to listen to someone else.
Dan Miles is the cult best-selling author of Filthy Still – A tale of travel, sex and perfectly made cocktails, out now on Amazon.
Isabel Losada is the best-selling author of The Battersea Park Road to Paradise and For Tibet with Love.
|
OPCFW_CODE
|
WebSphere Message Broker on Ubuntu
by Anton Piatek
WebSphere Message Broker v8 now supports Ubuntu for development systems (i.e. not production use) – http://www-01.ibm.com/support/docview.wss?uid=swg27023600#Ubuntu
I have been running MQ and Message Broker on Ubuntu and Debian since shortly after I joined IBM in 2005, and it seems there are lots of other people doing this too despite it not being a supported platform before now.
Lots of people have advice on how to install MQ and WMB, and it is worth mentioning them in case you have problems.
The best advice I can give for installing MQ and WMB on Ubuntu is:
- change the /bin/sh symlink to point to /bin/bash – MQ Doesn’t like installing with dash as the default shell.
- use rpm to install MQ – Alien is a bit of a hack, and does not work well. You will need to use the “–force-debian” flag on rpm to make it install.
- One other thing which might help is to run the mqlicense.sh script with the ‘-console’ flag as it may not find your X applications properly.
Some user’s have noticed that chown on Debian and Ubuntu strips the setuid bit from the binaries (Debian and Ubuntu consider leaving setuid set on an executable when you change it’s owner a security flaw, whereas Redhat and SuSE appear not to) so you may need to fix the permissions (best to check the permissions of the same level of MQ from a RHEL or SLES box and set them the same) though I have not seen this with recent versions of MQ.
Message Broker v8 installs quite happily on Ubuntu. The only issues that I know of are that some of the eclipse based gui applications do not draw everything correctly. This is a known eclipse-GTK bug, and is more common on releases after Lucid Lynx (10.04). A workaround is to set the environment variable GDK_NATIVE_WINDOWS=1
Update 12/01/5 – I have just noticed that the script ‘mqsicreateworkpath’ which is used to initialise /var/mqsi correctly still uses ksh. Either install ksh on your system or edit the script to say bash in the first line instead of ksh (it should work fine then)
Update 14/01/15 – Several people have contacted me about running IBM Integration Bus v9 (the new name of Message Broker), primarily on 64 bit Ubuntu installs. Some parts, noticeably the MQ Explorer extension, fail to install unless you have the following extra packages installed: libc6-i386 libgcc1:i386tags: Debian - IBM - Linux - messagebroker - mq - ubuntu - wmb - wmq
|
OPCFW_CODE
|
'''
Parse the MC_region database from the Habitat Stratus backup.
Usage: parse_region_db.py MC_region output.json
The intention is for this output to represent the original data, but not
interpret it in any way. For example, names are still padded with spaces
to a fixed size.
We have reasonable assurance that these fields are correct. We have compared
them to Griddle (.gri) and Riddle (.rdl) files, which were created at
various points in Habitat's evolution. We've also verified the regions
match screenshots from back in the day and that the relationships to each
other make sense.
For the nitty_bits (flags) field, we include both the original integer
value of this field as well as individual named flags as created by
extract_flags().
There is some unknown data left in the final padding field that we don't
decode yet, but it's only present for 7 regions and appears to be a single
integer, perhaps a region ID reference. The combinations of region ID and
value for this unknown field are:
1160: 00 00 28 75
9112: 00 00 1f ac
10354: 00 00 23 98
10370: 00 00 13 a5
10372: 00 00 23 98
10382: 00 00 1f ac
10443: 00 00 1f ac
'''
import json, struct, sys
STRUCT_ITEMS = (
'ident',
'owner_id',
'light_level',
'depth',
'east_neighbor',
'west_neighbor',
'north_neighbor',
'south_neighbor',
'class_group',
'orientation',
'entry_proc',
'exit_proc',
'east_exit_type',
'west_exit_type',
'north_exit_type',
'south_exit_type',
'nitty_bits',
'name',
'avatars',
'to_town',
'to_port',
)
FORMAT = '> 2i 2h 4i 8h I 20s 3b 33x'
assert struct.calcsize(FORMAT) == 104
def extract_flags(flags):
'''
Extract known region flags out of nitty_bits.
There are two bits of nitty_bits we didn't find explained in the griddle
file, so they aren't decoded here.
* 0x02000000: this bit is very common, but is only set on Popustop regions.
One theory is that this is related to the "turf" concept.
* 0x00400000: this bit is present on only two regions, IDs 1010 and 10463.
It is unknown what it's for.
'''
return {
'east_restriction': bool(flags & (1 << 31)),
'west_restriction': bool(flags & (1 << 30)),
'north_restriction': bool(flags & (1 << 29)),
'south_restriction': bool(flags & (1 << 28)),
'weapons_free': bool(flags & (1 << 27)),
'theft_free': bool(flags & (1 << 26)),
}
def main():
items = []
with open(sys.argv[1], "rb") as fp:
while True:
row = fp.read(struct.calcsize(FORMAT))
if not row:
break
# Unpack all named fields into a dict
data = dict(zip(STRUCT_ITEMS, struct.unpack(FORMAT, row)))
# Convert strings to ASCII
data['name'] = data['name'].decode('ascii')
# Unpack specific flags while preserving nitty_bits itself.
data.update(extract_flags(data['nitty_bits']))
# Filter out one room, which is the only one with a duplicate
# ident. The room with the name "82 Mince St" duplicates room
# ident 1134. The other one is named "Haunted Mansion" and has
# other exits that aren't dupes, so we keep it.
if data['ident'] == 10362 and \
data['name'].strip() == '82 Mince St':
continue
items.append(data)
with open(sys.argv[2], 'w') as fp:
json.dump(items, fp, indent=2, sort_keys=True)
if __name__ == '__main__':
main()
|
STACK_EDU
|
Migration from Keycloak 16.1.1 to 24.0.5, What are the effects of using environment variables for ConnectID legacy logout?
I'm in the process of updating our Keycloak setup from version 16.1.1 to 24.0.5 using Docker Compose. During this update, I've encountered issues with the logout URL which seem to be resolved by setting the following environment variables:
environment:
- KC_SPI_LOGIN_PROTOCOL_OPENID_CONNECT_LEGACY_LOGOUT_REDIRECT_URI=true
- KC_SPI_LOGIN_PROTOCOL_OPENID_CONNECT_SUPPRESS_LOGOUT_CONFIRMATION_SCREEN=true
From Keycloak-quarkus dockerfiles Keycloak Docker image:
KC_SPI_LOGIN_PROTOCOL_OPENID_CONNECT_LEGACY_LOGOUT_REDIRECT_URI: Enables backward compatibility option legacy-logout-redirect-uri of OIDC login protocol in the server configuration (default value is false). Required for logout by UI of earlier archive version than 5.29.1.
KC_SPI_LOGIN_PROTOCOL_OPENID_CONNECT_SUPPRESS_LOGOUT_CONFIRMATION_SCREEN: Enables suppression of logout confirmation screen if the user does not provide a valid idTokenHint (default value is false).
In Keycloak 19.0.0, there were notable changes related to OIDC Logout (Based on documentation):
Support for the client_id parameter, which was added in the recent draft of the OIDC RP-Initiated Logout specification.
Configuration option Valid Post Logout Redirect URIs was added to the OIDC client, aligned with the OIDC specification.
While these environment variables fix the logout issue, I am concerned about potential side effects, particularly regarding security vulnerabilities or other issues. My questions are:
What are the security implications of enabling KC_SPI_LOGIN_PROTOCOL_OPENID_CONNECT_LEGACY_LOGOUT_REDIRECT_URI?
Could enabling KC_SPI_LOGIN_PROTOCOL_OPENID_CONNECT_SUPPRESS_LOGOUT_CONFIRMATION_SCREEN lead to any vulnerabilities or security risks?
Are there any best practices for managing logout behavior in Keycloak 24.0.5 that align with the latest OIDC specifications?
Any insights or recommendations from those who have navigated similar updates would be greatly appreciated.
I updated the Docker Compose file to include the environment variables KC_SPI_LOGIN_PROTOCOL_OPENID_CONNECT_LEGACY_LOGOUT_REDIRECT_URI=true and KC_SPI_LOGIN_PROTOCOL_OPENID_CONNECT_SUPPRESS_LOGOUT_CONFIRMATION_SCREEN=true. This configuration successfully resolved the logout URL issue, and I was able to perform logout actions without any errors. I expected that by setting these environment variables, the logout process would work correctly without requiring additional modifications or causing any security issues. The logout process worked as expected, but I'm concerned about the potential long-term effects, particularly regarding security vulnerabilities or compatibility issues. I want to ensure that enabling these variables does not introduce new risks.
Any update about this? Even info about if I keep on updating the Keycloak version will I still be able to use the following Environment variables no major issue on them?
|
STACK_EXCHANGE
|
[00:29] <guiverc> I just saw https://www.youtube.com/watch?v=MzBsI9XZpEg OMG's Ubuntu 22.10 What's New video; video or just list with what's likley to be loads of 22.10 things
[00:30] <guiverc> maybe ^ (i opened gdoc, box started thrashing so i exited & came here)
[00:38] <Bashing-om> guiverc: Got the vid on my list to add to Gdoc tomorrow.
[03:48] -SwissBot:#ubuntu-news- ::OMG!Ubuntu:: Ubuntu 22.10: What’s New? [Video] @ https://www.omgubuntu.co.uk/2022/10/ubuntu-22-10-video-new-features (by Joey Sneddon)
[14:27] -SwissBot:#ubuntu-news- ::Planet:: Ubuntu Blog: Ubuntu 22.10 on the Raspberry Pi delivers new display support and the full MicroPython ... @ https://ubuntu.com//blog/ubuntu-22-10-on-the-raspberry-pi-delivers-new-display-support-and-the-full-micropython-stack
[14:27] -SwissBot:#ubuntu-news- ::Planet:: Ubuntu Blog: Landscape beta: test the Landscape Server migration to Ubuntu 22.04 LTS @ https://ubuntu.com//blog/landscape-beta-test-the-landscape-server-migration-to-ubuntu-22-04-lts
[17:20] -SwissBot:#ubuntu-news- ::OMG!Ubuntu:: Ubuntu 22.10 is Now Available to Download @ https://www.omgubuntu.co.uk/2022/10/ubuntu-22-10-released-and-available-to-download (by Joey Sneddon)
[17:20] -SwissBot:#ubuntu-news- ::OMG!Ubuntu:: How to Upgrade to Ubuntu 22.10 from 22.04 LTS @ https://www.omgubuntu.co.uk/2022/10/how-to-upgrade-to-ubuntu-22-10-from-22-04 (by Joey Sneddon)
[19:12] -SwissBot:#ubuntu-news- ::Planet:: Kubuntu General News: Kubuntu 22.10 Kinetic Kudu Released @ https://kubuntu.org/news/kubuntu-22-10-kinetic-kudu-released/
[19:34] -SwissBot:#ubuntu-news- ::Planet:: Xubuntu: Xubuntu 22.10 released! @ https://xubuntu.org/news/xubuntu-22-10-released/
[19:57] -SwissBot:#ubuntu-news- ::Planet:: Ubuntu Studio: Ubuntu Studio 22.10 Released @ https://ubuntustudio.org/2022/10/ubuntu-studio-22-10-released/
[20:25] -SwissBot:#ubuntu-news- ::OMG!Ubuntu:: Firefox 106 Brings PDF Annotation & Gesture Nav to Linux @ https://www.omgubuntu.co.uk/2022/10/firefox-106-released-with-pdf-annotating-gesture-nav-more (by Joey Sneddon)
[20:31] -SwissBot:#ubuntu-news- ::Portugal:: E217 Drupal, Com Ricardo Amaro @ https://podcastubuntuportugal.org/e217/
[20:32] -SwissBot:#ubuntu-news- ::Planet:: Ubuntu Blog: Join our Ubuntu circle @ https://ubuntu.com//blog/join-our-ubuntu-circle
[20:48] -SwissBot:#ubuntu-news- ::Planet:: Podcast Ubuntu Portugal: E217 Drupal, Com Ricardo Amaro @ https://podcastubuntuportugal.org/e217/
[20:48] -SwissBot:#ubuntu-news- ::Planet:: Lubuntu Blog: Lubuntu 22.10 Released! @ https://lubuntu.me/kinetic-released/
[21:31] -SwissBot:#ubuntu-news- ::Planet:: Sean Davis: Xubuntu 22.10 Released @ https://blog.bluesabre.org/2022/10/20/xubuntu-22-10-released/
[21:37] <guiverc> https://lists.ubuntu.com/archives/ubuntu-announce/2022-October/000285.html ; will soon massage for fridge
[21:59] <guiverc> ^ kinetic kudu 22.10 released up for review on fridge; will post in an hour
[21:59] <guiverc> https://fridge.ubuntu.com/wp-admin/post.php?post=9642&action=edit
[22:00] <krytarik> guiverc: Looking..
[22:03] <guiverc> thanks krytarik
[22:03] <krytarik> guiverc: Looks alright, only the footer text needs italicized yet.
[22:04] <guiverc> oops, yeah didn't do that (but meant to; I'm somewhat tired)
[22:07] <guiverc> I hadn't tagged Planet either :(
[22:08] <guiverc> https://fridge.ubuntu.com/2022/10/20/ubuntu-22-10-kinetic-kudu-released/ (with release tagged too!)
[22:09] <krytarik> True, didn't look at the tags (categories really).. :3
[22:10] <guiverc> thanks krytarik
[22:10] <krytarik> Thank YOU! :)
[22:31] -SwissBot:#ubuntu-news- ::Planet:: The Fridge: Ubuntu 22.10 (Kinetic Kudu) released @ https://fridge.ubuntu.com/2022/10/20/ubuntu-22-10-kinetic-kudu-released/
|
UBUNTU_IRC
|
enable multiple categories on a product
Problem
I'm currently creating a store for an Art gallery.
Each artwork must be filterable by the technique used, eg:
prints : etching, litograph, posters, photos, etc.
sculptures
canvas
...
Each artwork must also be filterable by topics and subtopics, eg :
animals : birds, cats, dogs, ....
landscapes: city, countryside, ...
vehicles : planes, cars, buses, ...
...
it can also be an union of these values, eg : ['litograph', 'countryside', 'cows', 'pigs', 'tractors'].
I've been using attributes and attributes values to hold this data (saving the parent ID in the attributeValue.richText field).
This is quite cumbersome to do, since I need to pass all attributes values in the tree to the product, eg:
["prints", "litograph", "animals", "pigs", "cows", "vehicules", "tractors" ]
This feature is already baked in Categories (tree filtering), but a product only accept one.
1. General Assumptions
It makes more sense to attach multiple categories to a product.
2. API changes
Types
input ProductBulkCreateInput {
attributes: [AttributeValueInput!]
category: [ID!]
chargeTaxes: Boolean
collections: [ID!]
description: JSONString
name: String
slug: String
taxClass: ID
taxCode: String
seo: SeoInput
weight: WeightScalar
rating: Float
metadata: [MetadataInput!]
privateMetadata: [MetadataInput!]
externalReference: String
productType: ID!
media: [MediaInput!]
channelListings: [ProductChannelListingCreateInput!]
variants: [ProductVariantBulkCreateInput!]
}
input ProductInput {
attributes: [AttributeValueInput!]
categories: [ID!]
chargeTaxes: Boolean
collections: [ID!]
description: JSONString
name: String
slug: String
taxClass: ID
taxCode: String
seo: SeoInput
weight: WeightScalar
rating: Float
metadata: [MetadataInput!]
privateMetadata: [MetadataInput!]
externalReference: String
}
type Product implements Node, ObjectWithMetadata {
id: ID!
privateMetadata: [MetadataItem!]!
privateMetafield(key: String!): String
privateMetafields(keys: [String!]): Metadata
metadata: [MetadataItem!]!
metafield(key: String!): String
metafields(keys: [String!]): Metadata
seoTitle: String
seoDescription: String
name: String!
description: JSONString
productType: ProductType!
slug: String!
categories: [Category!]
created: DateTime!
updatedAt: DateTime!
chargeTaxes: Boolean!
weight: Weight
defaultVariant: ProductVariant
rating: Float
channel: String
descriptionJson: JSONString
thumbnail(size: Int, format: ThumbnailFormatEnum = ORIGINAL): Image
pricing(address: AddressInput): ProductPricingInfo
isAvailable(address: AddressInput): Boolean
taxType: TaxType
attribute(slug: String!): SelectedAttribute
attributes: [SelectedAttribute!]!
channelListings: [ProductChannelListing!]
mediaById(id: ID): ProductMedia
imageById(id: ID): ProductImage
variant(id: ID, sku: String): ProductVariant
variants: [ProductVariant!]
media(sortBy: MediaSortingInput): [ProductMedia!]
images: [ProductImage!]
collections: [Collection!]
translation(languageCode: LanguageCodeEnum!): ProductTranslation
availableForPurchase: Date
availableForPurchaseAt: DateTime
isAvailableForPurchase: Boolean
taxClass: TaxClass
externalReference: String
}
3. Database changes
add a table product_categoryproduct similar to product_collectionproduct and replace collection_id field with category_id field.
5. To Do list
### Tasks
- [ ] create the db migration
- [ ] update the corresponding graphql types, queries and mutations (if any).
- [ ] update all DB queries using the current `product_product.category` field.
- [ ] update unit tests
@nathanschwarz categories are designed to be 1 to 1 relationship, and collections one to many. This is by design, as some apps rely on 1 to 1 relationships for taxes, product feeds etc.
|
GITHUB_ARCHIVE
|
12-02-2008, 04:28 PM
Join Date: Dec 2008
Hi guys, I am quite new to computer problems, sorry if this may be long to read><
Here's full story of my problem:
Recently I got infected with a trojan downloader, and infected me with these viruses : Win32/TrojanDownloader.Agent.AABX trojan, Kryptik.AE trojan, PSW.OnLineGames.NRS trojan, and a few more I lost track of, I couldn't get rid of them so I got myself a second HardDrive, installed OS and NOD32 v3 ESET Smart Security on it to scan my first infected harddrive, after scanning, i start to boot up again with first infected ( now should be clean ) harddrive.....
Here is my problems
Upon start up, Windows load a lot longer than before, finally it got to the welcome screen, i pressed ctrl+alt+del and logged in as administrator, then it hangs for like 30 second before my desktop icon start menu start to load(usually its straight away)
Problem 1: even tho loaded, taskbar seem to disappear, right beside start menu is language bar, basically when i open a window and minimize it i won't be able to switch back to it unless i ult tab.
Problem 2: I cannot Copy from uneditable areas, (text on pages of the internet)
Problem 3: RPC services seems to be off, found out when i tried to do the command tasklist
Problem 4: When run services.msc, the screen shows up but with no services, its looks like it hanged on loading, but the system hasn't hanged.. i can close it.
Problem 5: I cannot open internet pages in new window (right click> open link in new window), when if i go to "My Computer > right click > manage > event Viewer" i can see there is many Application and System errors, but when i double click them nothing happens, can't view them at all
Problem 6: Windows Installer is not working, classic message of "The Windows Installer Service could not be accessed. This can occur if you are running Windows in safe mode, or if the Windows Installer is not correctly installed. Contact your support personnel for assistance."
Problem 7: I tried downloading some xp tweaks according to my previous problems on "www.kellys-korner-xp.com/xp_tweaks.htm" (vbs file ones) and execute them, and nothing happens. I tried to manually adjust the registry according to the vbs files i downloaded.. nothing is fixed
Problem 7: Shut down hang at "logging off" screen.. and never shut down, but when u press the numlock or caps lock key it works... which should mean system hasn't froze?
And I do believe there are more problems that still havn't been discovered, I have a lot of programs installed on that computer, and I really don't wish to reinstall OS, I tried boot with "last known good configuration", its no good, Problem 1 happens even in safemode.
I'd really appreciate if anyone here could help me further troubleshoot my computer and find out exactly whats wrong, and hopefully fix it.
Platform: Microsoft Windows Professional XP SP2 (WinNT 5.01.2600)
MSIE: Internet Explorer v6.00 SP2 (6.00.2900.2180)
CPU: Intel Pentium 4 2.40GHz
If any more details is needed please tell me, I will post back as soon as i can
|
OPCFW_CODE
|
package cms_tp16;
/*Classe principale qui permet de v�rifier la conception des classe UneDate et DateNaissance.
* Ex�cutez successivement et s�par�ment les trois blocs d'instructions ci-dessous
* (en les commentant et d�commentant convenablement).
*/
public class CP_TP16Exo2
{
public static void main(String[] args)
{
// *******************************************************************************************
//D�but du premier bloc
UneDate refDate1 = new UneDate("12.03.1959");
System.out.println(refDate1);
UneDate refDate2 = new UneDate(12,3,1959);
System.out.println(refDate2);
UneDate refDate3 = new UneDate(42,3,1959);
System.out.println(refDate3);
UneDate refDate4 = new UneDate("29.02.1956");
System.out.println(refDate4);
UneDate refDate5 = new UneDate("29.02.1900");
System.out.println(refDate5);
UneDate refDate6 = new UneDate("29.02.2000");
System.out.println(refDate6);
UneDate refDate7 = new UneDate(31,6,2005);
System.out.println(refDate7);
UneDate refDate8 = new UneDate(31,7,1700);
System.out.println(refDate8);
UneDate refDate9 = new UneDate(30,15,2002);
System.out.println(refDate9);
UneDate refDate10 = new UneDate(35,9,2004);
System.out.println(refDate10);
System.out.println("----------------------------------------------------------------------------------------");
DateNaissance refDateNaissance1 = new DateNaissance(12,3,1959,20,12,"Claudia");
System.out.println(refDateNaissance1);
DateNaissance refDateNaissance2 = new DateNaissance(12,3,1959,25,12,"Marie");
System.out.println(refDateNaissance2);
DateNaissance refDateNaissance3 = new DateNaissance(12,3,1959,20,72,"Louis");
System.out.println(refDateNaissance3);
//Fin du premier bloc
// *******************************************************************************************
//D�but du deuxi�me bloc
UneDate refDate11 = new UneDate("23.08.1944");
UneDate refDate12 = new UneDate("23.08.1944");
UneDate refDate13 = new UneDate("25.09.1938");
System.out.println(refDate11.equals(refDate12));
System.out.println(refDate11.equals(refDate13));
System.out.println("---------------------------------------------");
System.out.println(refDate11.hashCode());
System.out.println(refDate12.hashCode());
System.out.println(refDate13.hashCode());
System.out.println("---------------------------------------------");
DateNaissance refDateNaissance11=new DateNaissance(23,8,1944,6,30,"Titi");
DateNaissance refDateNaissance12=new DateNaissance(23,8,1944,6,30,"Tata");
DateNaissance refDateNaissance13=new DateNaissance(23,8,1944,3,45,"Didi");
System.out.println(refDateNaissance11.equals(refDateNaissance12));
System.out.println(refDateNaissance11.equals(refDateNaissance13));
System.out.println("---------------------------------------------");
System.out.println(refDateNaissance11.hashCode());
System.out.println(refDateNaissance12.hashCode());
System.out.println(refDateNaissance13.hashCode());
System.out.println("---------------------------------------------");
System.out.println(refDate11.equals(refDateNaissance11));
System.out.println(refDateNaissance11.equals(refDate11));
//Fin du deuxi�me bloc
// *******************************************************************************************
//D�but du troisi�me bloc
UneDate tabMixte[] = new UneDate[10];
tabMixte[0]=new UneDate(7,10,2000);
tabMixte[1]=new UneDate("7.10.2000");
tabMixte[2]=new UneDate("07.10.2000");
tabMixte[3]=new DateNaissance(7,10,2000,15,23,"Toto");
tabMixte[4]=new DateNaissance(7,10,2000,15,23,"Tutu");
tabMixte[5]=new DateNaissance(7,10,2000,9,41,"Toto");
tabMixte[6]=new UneDate(7,10,2000);
tabMixte[7]=new DateNaissance(7,10,2000,3,33,"Dupont");
tabMixte[8]=new UneDate(7,10,2000);
tabMixte[9]=new DateNaissance(7,10,2000,3,33,"Dupont");
System.out.println("-----------------------------------------------------------------------");
for(int i=0; i<tabMixte.length; i++)
{
System.out.println(tabMixte[i]+" "+tabMixte[i].hashCode());
}
//Fin du troisi�me bloc
} //Fin de la m�thode main()
} //Fin de la classe principale CP_TP16Exo2
|
STACK_EDU
|
[Outlook] (signature) Convert sample to GitHub hosting
Q
A
Bug fix?
no - yes? no
New feature?
no - yes? no
New sample?
no - yes? no
Related issues?
none
What's in this Pull Request?
Converting the outlook-set-signature sample from yo office project to GitHub hosting.
Hi Elizabeth,
I scanned the PR and think there are two attention areas:
You added the localhost manifest (and I agree, don't take away the localhost option, people will use this and replace the localhost info with their own server info) but I believe all the localhost/server instructions have been removed leaving only the GitHub use instructions.
The GitHub deployment is done in your own GitHub space. I think it would be better to deploy this in this GitHub repo as personal accounts tend to move, go away ... other
For the last item (2) I'd suggest changing https://elizabethsamuel-msft.github.io/ to https://officedev.github.io/
I checked and I even think it is already in there? Example:
https://officedev.github.io/PnP-OfficeAddins/Samples/outlook-set-signature/src/taskpane/HTML/editsignature.html
Hi @aafvstam,
For item 2, we don't have a way to use GitHub pages for the hosting in a PR because you can't host from the PR branch (only main branch). So we use a personal fork to test, then update the manifest quickly right after we merge the PR. The change you suggest is made after the merge. If there's a way to avoid that it would be great to know!
Thanks!
David
@davidchesnut
I think the files in https://officedev.github.io/ are independent of the PR, you can see for instance that https://officedev.github.io/PnP-OfficeAddins/Samples/outlook-set-signature/src/taskpane/HTML/editsignature.html already exists even though the PR is not merged yet.
Therefor the PR should be able to already use the https://officedev.github.io/... URL in code, don't you think?
@ElizabethSamuel-MSFT
Item 1 can be ignored, by cloning the PR and looking properly at the MD I see that you split it two ways just like I had in mind :-)
@aafvstam Thanks for your feedback. I checked settings for this repo and Pages is set to "main". I think the files work from officedev.github.io because the sample already exists here - I'm only converting a sample but not adding a new one. I have no problem updating the links in this case because I didn't change any content in the various js and HTML files.
@ElizabethSamuel-MSFT Indeed, my updated comment crossed your answer. As the specific asset items haven't changed it should be possible to PR with the officedev.github.io URLs.
Creating a PR with changed assets would run into the issues I described in the second 'update' ...
This is a good example of the testing chicken-egg issue mentioned above while testing the PR:
Here you see the breaking logo caused by
This in fact is the 'old' page served by GitHub Pages (pre-merge PR).
If you check your code it is actually OK and should be resolved post-merge PR.
That said, I think you are OK to merge ... Deployment to GH Pages do require a post-merge check however 😉
|
GITHUB_ARCHIVE
|
Manpage renderer
Based on: https://github.com/dotnet/docfx/issues/2648#issuecomment-393918553
@xoofx, do you know of a good spec of manpage that would work on BSD, OSX, Linux and Solaris?
Pandoc's man writer is https://github.com/jgm/pandoc/blob/d32e8664498d799932927d9865ce71e014472ef3/src/Text/Pandoc/Writers/Man.hs. We probably need some specs to validate output.
@omajid, do you know of any place where we can find spec suite for manpages that would work across the board of Unix-like operating systems?
For example, i don't get the difference between
https://linux.die.net/man/1/man and
https://linux.die.net/man/7/man
and apparently NET-2 BSD uses mdoc compatible macros, which doesn't work on Linux without providing -mdoc switch to man(7). In order to validate the implementation, it would be great to have some markdown and corresponding manpage.
do you know of any place where we can find spec suite for manpages that would work across the board of Unix-like operating systems?
Sorry, not off the top of my head. I will look into it.
https://linux.die.net/man/1/man
https://linux.die.net/man/7/man
As I understand it, man(1) is about the man command and how to use it. man(7) is about how to write those man pages - their syntax and other implementation details.
apparently NET-2 BSD uses mdoc compatible macros, which doesn't work on Linux
Does the other way around work? If we use the non-mdoc macros, do docs render correctly on NetBSD and other BSDs?
apparently NET-2 BSD uses mdoc compatible macros, which doesn't work on Linux
Does the other way around work? If we use the non-mdoc macros, do docs render correctly on NetBSD and other BSDs?
maybe @krytarowski or @jsonn would be able to answer it?
non-mdoc macros, what in particular?
nothing in particular, we are just trying to figure out if there is a neutral spec for manpage document format, that would work across Linux, BSD, OSX and Solaris. Pandoc uses groff https://github.com/jgm/pandoc/blob/d32e8664498d799932927d9865ce71e014472ef3/src/Text/Pandoc/Writers/Man.hs#L30, is this something which is considered portable in unix world?
For man pages just use mdoc.
For Solaris, it depends on what exactly you mean. Newer Illumos certainly supports mdoc. Old Solaris only man. Man as macro set has many restrictions, i.e. it gives much worse results for web pages. That said, you can use mandoc to create man(7) output from mdoc as well.
Thanks! This tool (markdig) converts CommonMark markdown or HTML markup to intermediate representation, then renders the IR to CommonMark or HTML.
We want to introduce a new manpage renderer and parser from/to that IR. For this matter, we needed some well defined manpage specs and the flavor to chose from. There is a good article https://monades.roperzh.com/memories-writing-parser-man-pages/ by @roperzh, which also acknowledges the lack of formal specification as a drawback of manpage
Lack of beginner-friendly resources
Something that really confused me was the lack of a canonical, well defined and clear source to look at, there’s a lot of information in the web which assumes a lot about the reader that it takes time to grasp.
Maybe a spec can be proposed to w3c/ecma etc. in effort to formalize.
I'm pretty confused about why you say that mdoc macros don't work on Linux. This may have been true once, but hasn't been true for many years; they work fine.
(I suspect this is moot for the purposes of this issue, but wanted to clarify it in case the idea takes root elsewhere.)
|
GITHUB_ARCHIVE
|
Very sneaky and misleading, but the thing was cheap enough that I shouldn't complain. Raid-1 offers redundancy you don't get with Raid-0 at the cost of only have the 200g available. The error code in 7. The box below showed that there are all 'unknown disks' I pressed enter to set up windows xp and the blue screen came up with the following error: Setupdd.sys: Page_Fault_In_Nonpaged_Area Does navigate here
with the yapping & just give the fix. I just finish putting togeather my system. My PC doesn't have it apparently, or the folder listed as containing it in the BleepingComputer database, so if this directory is invalid get back with us and we'll go from When it beeps, is it just one short beep or is it different?
You should always be able to boot to removable media as long as you have your basic components, a hard disk isn't required.Another thought is that somehow the windows disk is Its not uncommon for the desktops I work on to have bad ram. Back to top #6 Ravahan Ravahan Members 16 posts OFFLINE Local time:05:22 AM Posted 13 July 2009 - 05:26 PM This is one thing I really liked about having a I guess you didn't read my post.
Either hdd go bad and you lose everything you have. Follow 3 answers 3 Report Abuse Are you sure you want to delete this answer? Or you could obtain an electronic copy as a backup through one of the file sharing sites. Error Code 7 Android Publish Related resources File \i386\halaacpi.dll could not be found error code is 32768 - HELP!
I havent seen for myself but can the windows OS be able to go into the ipad where the ios sits? Site Changelog Community Forum Software by IP.Board Sign In Use Facebook Use Twitter Need an account? Register now Not a member yet? http://ccm.net/forum/affich-67746-error-7-with-xp-installation Helpful +3 Report rich Jul 22, 2013 11:32PM removing one of memory RAM is true!haha,thanks.
It should bring up a command prompt. File Setupdd.sys Could Not Be Loaded. The Error Code Is 7 My PC doesn't have it apparently, or the folder listed as containing it in the BleepingComputer database, so if this directory is invalid get back with us and we'll go from For the following commands, I'm going to assume that you're using the default windows root directory, that your hard disk is drive C:\ and your CD is in drive D:\, if sir what I do please tell ma I am very thanful to you Helpful +9 Report Guest Apr 25, 2009 12:40PM I had the same problem.
but some good new quality boards have driver pre-installed so you don't need to i recomend buying board with new bios & driver installed saves time and hedaches you might encounter You'll need to find the boot order. File I386 Halaacpi Dll Error Code 4 How to solve or Fix Your Problem! Error Code 7 In Xp Installation I just put a few files on a a:\ disk and used that with my xp cd and it worked. 02-15-2006, 05:47 PM #6 biggonme Registered Member
And repeat the previous rename line.Next, I would run these commands for good measure; sfc /scannow, followed by a chkdsk /r to make sure your hard drive is in order. They said make a boot disk from this website and run the tests to make sure all is in order. Try too remove one of them, it helpt me Helpful +15 Report Ajay Tyagi Jul 20, 2009 11:57PM I have PIII system... Remove the button battery from your Motherboard. File I386 Ntkrnlmp.exe Could Not Be Loaded The Error Code Is 7
Do the following ONLY if that previous line doesn't go through...rename c:\windows\servicepackfiles\i386\halaacpi.dll halaacpi.txt (changes the apparently faulty file into a harmless notepad document)... Many of windows install problems are based on a ram conflict. Finally I removed newly installed 512 RAM .. his comment is here BLEEPINGCOMPUTER NEEDS YOUR HELP!
I turned it on and made my 2 harddrive configure as raid 0. Back to top #9 cquack cquack Topic Starter Members 6 posts OFFLINE Local time:06:22 AM Posted 13 July 2009 - 08:20 PM I will do first thing tom when I Do the following ONLY if that previous line doesn't go through...rename c:\windows\servicepackfiles\i386\halaacpi.dll halaacpi.txt (changes the apparently faulty file into a harmless notepad document)...
It should say at the bottom of the screen before windows begins loading. First, we'll back up the file in question. Expand» Details Details Existing questions More Tell us some more Upload in Progress Upload failed. but some good new quality boards have driver pre-installed so you don't need to i recomend buying board with new bios & driver installed saves time and hedaches you might encounter
When I received the laptop, the HDD was fried. You can only upload files of type 3GP, 3GPP, MP4, MOV, AVI, MPG, MPEG, or RM. I could not understand what to do to resolve it....... http://creartiweb.com/error-code/halaacpi-dll-error-7.php Being a member gives you additional options.
also my pc showing following error (FILE \I86\HALAACPI.DLL COULD NOT BE LOADED THE ERROR CODE IS 7) i remove 1 ram in my pc and then i install windows xp windows XP home was originally on it. But my keyboard's not working...i cant even choose any options on start up...please help. After you do that restart the computer and when you get to that point format the hard drive (ntfs).
It should say at the bottom of the screen before windows begins loading. Trending How do I convert a file from WMV fo MP4? 5 answers Anyway to get latest iTunes for windows vista so I can sync with iPhone ? 4 answers Is How to solve or Fix Your Problem! Thank You!!
Doing this I noticed some inprovemnet in speed. Helpful +8 Report FRANK Mar 22, 2010 07:09AM rESEAT YOUR ram Helpful +7 Report ROCKET May 16, 2010 09:48AM I have a spare computer with a 10gb hardrive but you cant Please try the request again. and installed XP very smoothly...
how do i know i run into problem all the time working on older computers..... & some new also.. Now I am planning to replace the RAM as it is in warranty period ... :-) Helpful +12 Report HungarianHaiku Apr 7, 2009 08:01AM I am currently working this same problem When I try to use another CD, it gives the following error. 'file i386 ntkrnlmp.exe could not be loaded. The error code is 4 Author Date Written Tools Hughie Molloy Aug 25, 2006, 07:11am EDT Reply - Quote - Report Abuse Private Message - Add to Buddy List Edited:
More >> You Are Here: / Forums / Engineering / File \i386\halaacpi.dll could not be loaded. After some time I extended it's RAM to (128 + 512) = 740 MB by placing a 512 RAM along with 128 mb. Want to enjoy fewer advertisements and more features? I will be testing this problem when I get home and will keep you updated on my progress.
|
OPCFW_CODE
|
Why werkzeug does not allow using localhost for cookie domain?
I'm using Flask and when I try to use localhost as the cookie domain, werkzeug says:
ValueError: Setting 'domain' for a cookie on a server running localy (ex: localhost) is not supportted by complying browsers. You should have something like: '<IP_ADDRESS> localhost dev.localhost' on your hosts file and then point your server to run on 'dev.localhost' and also set 'domain' for 'dev.localhost'
This kind of sucks that each developer has to set a domain in hosts file to get the project working. I can't understand why werkzeug is preventing this!
The questions are:
Why werkzeug is doing this?
What would happen if it was possible to use localhost as cookie domain?
How can i ignore this error?
You could map some fake hostnames to /etc/hosts to <IP_ADDRESS> for use in development.
The issue is not that Werkzeug is blocking the setting of domain-based cookies - rather the issue is that most browsers do not support domain-limited cookies scoped to localhost (or to any other single-word domain). Rather than leaving you to debug this issue on your own (why is my session not being respected) Werkzeug detects when you are using this setup and errors out right away.
The closest thing that I have found for a reason is the pseudo-spec:
domain=DOMAIN_NAME
When searching the cookie list for valid cookies, a comparison of the domain attributes of the cookie is made with the Internet domain name of the host from which the URL will be fetched. If there is a tail match, then the cookie will go through path matching to see if it should be sent. "Tail matching" means that domain attribute is matched against the tail of the fully qualified domain name of the host. A domain attribute of "acme.com" would match host names "anvil.acme.com" as well as "shipping.crate.acme.com".
Only hosts within the specified domain can set a cookie for a domain and domains must have at least two (2) or three (3) periods in them to prevent domains of the form: ".com", ".edu", and "va.us". [emphasis mine] Any domain that fails within one of the seven special top level domains listed below only require two periods. Any other domain requires at least three. The seven special top level domains are: "COM", "EDU", "NET", "ORG", "GOV", "MIL", and "INT".
If single-name domains were allowed a hacker could set a cookie for .com and then have that cookie transmitted by the browser to every .com domain the end user visited.
See also: http://daniel.haxx.se/blog/2011/04/28/the-cookie-rfc-6265/
That's correct. but what if want to use this for development? I have other projects with localhost cookies and my browser (chromium) totally understands it! I think werkzeug is too much strict on this. Is there any hack to ignore this error for development?
@Ali - Werkzeug supports being run on localhost just fine - the issue is when you use subdomains (e. g. setting SESSION_COOKIE_DOMAIN or SERVER_NAME so you can use subdomains). Since that is what not all browsers support, Werkzeug stops you (because most things would still work - hence the debugging hell).
As @Markus Unterwaditzer proposed, you can fake hostnames locally to get and set the cookies associated to the domain names.
For this, do sudo vim /etc/hosts:
<IP_ADDRESS> localhost
<IP_ADDRESS> fakesub.fakedomain.com
<IP_ADDRESS> foo.bar.baz.anotherfakedomain.org
This way, you can use and set cookies for the domains and subdomains fakesub.fakedomain.com, fakedomain.com, foo.bar.baz.anotherfakedomain.org, bar.baz.anotherfakedomain.org, baz.anotherfakedomain.org and anotherfakedomain.org.
I use this solution every day to locally develop websites for my company using the authentication provided by my company production website through the cookies.
|
STACK_EXCHANGE
|
We are establishing brand new Data Analytics team for global range customer from retail area.
In essence, client is leveraging the Azure PaaS platform with all kind of (business) data to build an advanced analytics platform aiming at delivering better insights and applications to the business.
The platforms are continuously being enhanced to support (additional) CI/CD and validated learning environment for science, machine learning and AI capabilities for all areas customer-facing like digital omni-channel interaction and commerce, commerce relevance, personalisation, loyalty and marketing and non-customer-facing like assortment optimization, supply chain optimization, external parties and IoT.
We will be working on end to end functionality including architecture, data preparation, processing and consumption by systems.
Essential Job Functions
•Conducts quality-control tests and analyses to ensure that data products meet or exceed specified standards and user requirements
•Interfaces with Test Lead for Business Requirements Review sessions and Use Case review sessions
•Interfaces with relevant stakeholders for Test Case Review sessions
•Develops Test Cases, creates test data, and executes complex Test Cases
•Documents Defects and generate Test Metrics
•Tracks Defects to Resolution
•Applies basic industry and functional area knowledge while authoring integration and system test scripts, configuration test questionnaires and other test materials. Executes requirements-based functional tests for application software
•Contributes to test planning, scheduling, and managing test resources; leads formal test execution phases on larger projects
•Defines test cases and creates integration and system test scripts and configuration test questionnaires from functional requirement documents
•Executes functional tests and authors significant revisions to test materials as necessary through the dry run and official test phases. Maintains defect reports and updates reports following regression testing
•Adheres to and advocates use of established quality methodology and escalates issues as appropriate
•Understands the functional design of software products / suites being tested and their underlying technologies to facilitate authoring testware, diagnosing system issues, and ensuring that tests accurately address required business functionality. Clarifies ambiguous areas with technical teams
•Applies basic industry and functional area knowledge related to the software product being tested and applicable regulatory statutes to determine whether system components meet business specifications. •Develops specified testing deliverables over the lifecycle of the project.
•Bachelor's degree or equivalent combination of education and experience in business, mathematics, engineering, management information systems, or computer science, or related field preferred
•Experience working with developing testware from functional design documents and executing testware against a schedule and in compliance with a methodology
•Experience working with configuration management, defect tracking, query tools, software productivity tools, and templates used to create test scripts, trace matrices, etc.
•Experience working with software product testing and applicable regulatory statutes
- Experience working with cloud solutions preferably Azure platform
- Data base driven testing (SQL)
- Experience in Cloud computing (AWS, Azure)
- Experience creating and maintaining test environments
- Discover and understand potential performance bottlenecks
- Experience with deployment cycle and tools like: Jenkins, Gitlab pipelines and others
- Some Experience in Automation
Nice to have
- BDD \ Gherkin
English: B2 Upper Intermediate
If needed, we can help you with relocation process. Click here for more information.
|
OPCFW_CODE
|
Senior Software Engineer
Senior Software Engineer
Imagine working for a Remote First company, that provides an all-expenses paid Retreat (every quarter) for you to catch up with colleagues in a sociable way, where you can join in lots of activities, and talk about the next innovative projects you will be working on!
Add to Event is an award-winning startup that has helped to find suppliers for over 100,000 events, ranging from birthday parties to large corporate events for the likes of The BBC, Accenture and JP Morgan. Our innovative marketplace makes it incredibly easy to find whatever you need for your event, from giant tipis to woodfired pizza caterers. But don’t just take our word for it, check out our Trustpilot Reviews.
We envision a world where anyone can organise their dream event. In minutes, not months. We believe that we can radically simplify the process of finding, booking and managing event suppliers which will lead to better events throughout the world, from fantastic festivals to wonderful weddings.
We are looking for a passionate and collaborative Senior Software Engineer to join our team. You’ll be joining us at an exciting time, as we transition our customers from a large monolith to microservice oriented architecture. As part of this project, we have already rebuilt applications including our core marketplace, taking advantage of cutting-edge technologies to deliver a best-in-class app for both event organisers and suppliers.
You will be responsible for helping us scale the existing project by rolling out new features & enhancing the customer experience.
You will be working as part of a multi-disciplinary team to research and write code at the heart of the company's key deliverables, a challenging but exciting job. You'll have an opportunity to enhance your skills utilising the latest in cloud computing such as serverless and managed cloud services.
Add to Event uses a range of modern technologies including but not limited to:
- NestJS — An opinionated framework for developing backend microservices in Node.JS/Typescript
- Angular — Frontend Typescript framework, enhanced with NGRX + RXJS
- Stencil.js Web components — Building agnostic, testable front-end components
- Terraform – Managing our infrastructure in code
- Google Kubernetes Engine — Hosting and scaling our microservices
- Prioritise, assign and execute tasks throughout the software development life cycle and across a full stack
- Review, test and debug team members' code whilst ensuring your adherence to secure coding standards and best practices
- Work as part of a cross disciplinary team, coaching, and helping to communicate ideas and work done to the wider team
- Work closely with the product team to establish requirements for new features, solutions and enhancements
- Help to plan the development of our products through technical analysis and evaluation of architectural needs
- Work with key stakeholders to deliver high quality working software with measurable impact.
- You will independently identify the right solutions to solve ambiguous, open-ended problems. You will be keen on learning new skills, broadening your knowledge, and willing to work across a full-stack
- You will have experience working with key stakeholders, working collaboratively to deliver high quality working software with measurable impact
- Have 2+ Years experience with GCP
- Passion for building and maintaining user-facing features
- Innovative and comfortable leading change
- Considerable experience writing self-contained, reusable, and testable modules and components in node.js
- Docker knowledge
- Extensive experience with SQL & NoSQL databases
- Kubernetes experience
- Experience with Git and CI processes
- Experience working with and defining Rest APIs
- Security conscious, knowledgeable of best practices around building secure applications
- Quality-led approach to development (TDD and BDD)
- Great communication skills
- Work hard, take breaks – 25 days annual leave plus bank holidays
- Flexible working – we are a remote-first organisation believing in asynchronous working practices. You won't find back-to-back Zoom meetings here!
- Generous parental leave policies to support you as your family grows.
- Company pension that allows you to save for the future.
- Equipment – We’ll hook you up with a brand new MacBook, monitor, and any other accessories you need to do your best work.
- Learning and development – we fully support your professional development, whether that's paying for new tools, books, courses or coaches.
- Mental health support – we want to ensure everyone in the company has the access they need to mental health support so we provide free access to therapy sessions via Spill.
- Regular socials – we're a social bunch, that’s why we invest in bringing the team together through regular socials and annual employee retreats. Check out our first annual retreat here. (Link)
Please complete the form to apply.
|
OPCFW_CODE
|
import { JSDOM } from 'jsdom';
import { Maybe, Result } from 'true-myth';
import createRenderPageHeader, {
GetArticleDetails,
RenderPageHeader,
} from '../../src/article-page/render-page-header';
import Doi from '../../src/types/doi';
const getArticleDetails: GetArticleDetails = async (doi) => (Result.ok({
title: `Lorem ipsum ${doi.value}`,
authors: ['Gary', 'Uncle Wiggly'],
publicationDate: new Date('2020-06-03'),
}));
describe('render-page-header component', (): void => {
let renderPageHeader: RenderPageHeader;
let rendered: Result<string, 'not-found'|'unavailable'>;
beforeEach(async () => {
renderPageHeader = createRenderPageHeader(getArticleDetails, async () => 0, async () => Maybe.nothing(), async () => [], '#reviews');
rendered = await renderPageHeader(new Doi('10.1101/815689'));
});
it('renders inside an header tag', () => {
expect(rendered.unsafelyUnwrap()).toStrictEqual(expect.stringMatching(/^\s*<header\s|>/));
});
it('renders the title for an article', async (): Promise<void> => {
expect(rendered.unsafelyUnwrap()).toStrictEqual(expect.stringContaining('Lorem ipsum 10.1101/815689'));
});
it('renders the article DOI', () => {
expect(rendered.unsafelyUnwrap()).toStrictEqual(expect.stringContaining('10.1101/815689'));
});
it('renders the article publication date', () => {
expect(rendered.unsafelyUnwrap()).toStrictEqual(expect.stringContaining('2020-06-03'));
});
it('renders the article authors', () => {
expect(rendered.unsafelyUnwrap()).toStrictEqual(expect.stringContaining('Gary'));
expect(rendered.unsafelyUnwrap()).toStrictEqual(expect.stringContaining('Uncle Wiggly'));
});
describe('the article has reviews', (): void => {
it('displays the number of reviews', async (): Promise<void> => {
renderPageHeader = createRenderPageHeader(getArticleDetails, async () => 2, async () => Maybe.nothing(), async () => [], '#reviews');
rendered = await renderPageHeader(new Doi('10.1101/209320'));
expect(rendered.unsafelyUnwrap()).toStrictEqual(expect.stringMatching(/Reviews[\s\S]*?2/));
});
it('links to the reviews heading on the same page', async (): Promise<void> => {
renderPageHeader = createRenderPageHeader(getArticleDetails, async () => 2, async () => Maybe.nothing(), async () => [], '/path/to/the/reviews');
const pageHeader = JSDOM.fragment((await renderPageHeader(new Doi('10.1101/209320'))).unsafelyUnwrap());
const anchor = pageHeader.querySelector('a[data-test-id="reviewsLink"]');
expect(anchor).not.toBeNull();
expect(anchor?.getAttribute('href')).toStrictEqual('/path/to/the/reviews');
});
});
describe('the article does not have reviews', (): void => {
it('does not display review details', async (): Promise<void> => {
expect(rendered.unsafelyUnwrap()).toStrictEqual(expect.not.stringContaining('Reviews'));
});
});
describe('the article has comments', (): void => {
it('displays the number of comments', async (): Promise<void> => {
renderPageHeader = createRenderPageHeader(getArticleDetails, async () => 0, async () => Maybe.just(11), async () => [], '#reviews');
rendered = await renderPageHeader(new Doi('10.1101/815689'));
expect(rendered.unsafelyUnwrap()).toStrictEqual(expect.stringMatching(/Comments[\s\S]*?11/));
});
it('links to v1 of the article on Biorxiv', async (): Promise<void> => {
renderPageHeader = createRenderPageHeader(getArticleDetails, async () => 0, async () => Maybe.just(11), async () => [], '#reviews');
const pageHeader = JSDOM.fragment((await renderPageHeader(new Doi('10.1101/815689'))).unsafelyUnwrap());
const anchor = pageHeader.querySelector<HTMLAnchorElement>('a[data-test-id="biorxivCommentLink"]');
expect(anchor).not.toBeNull();
expect(anchor?.href).toStrictEqual('https://www.biorxiv.org/content/10.1101/815689v1');
});
});
describe('the article does not have comments', (): void => {
it('does not display comment details', async (): Promise<void> => {
renderPageHeader = createRenderPageHeader(
getArticleDetails,
async () => 0,
async () => Maybe.just(0),
async () => [],
'#reviews',
);
rendered = await renderPageHeader(new Doi('10.1101/815689'));
expect(rendered.unsafelyUnwrap()).toStrictEqual(expect.not.stringContaining('Comments'));
});
});
describe('the article\'s comments are not available', (): void => {
it('does not display comment details', async (): Promise<void> => {
renderPageHeader = createRenderPageHeader(
getArticleDetails,
async () => 0,
async () => Maybe.nothing(),
async () => [],
'#reviews',
);
rendered = await renderPageHeader(new Doi('10.1101/815689'));
expect(rendered.unsafelyUnwrap()).toStrictEqual(expect.not.stringContaining('Comments'));
});
});
describe('the article has been endorsed', (): void => {
it('displays the endorsing editorial communities', async (): Promise<void> => {
renderPageHeader = createRenderPageHeader(getArticleDetails, async () => 0, async () => Maybe.nothing(), async () => ['PeerJ'], '#reviews');
rendered = await renderPageHeader(new Doi('10.1101/815689'));
expect(rendered.unsafelyUnwrap()).toStrictEqual(expect.stringMatching(/Endorsed by[\s\S]*?PeerJ/));
});
});
describe('the article has not been endorsed', (): void => {
it('does not display endorsement details', async (): Promise<void> => {
expect(rendered.unsafelyUnwrap()).toStrictEqual(expect.not.stringContaining('Endorsed by'));
});
});
});
|
STACK_EDU
|
We are always on the look for “The Next Breakthrough.” It has also influenced the world of programming languages and how they can be used to advance in growing fields such as data science.
If you are someone looking to build a project, a website, an app, or software with the most trending technology. This would be a treat for you. Are you excited?
Let’s begin without any further delay!
Programming Languages x Data Science – A match made in heaven
With data becoming the main focal point, now businesses, companies, and corporate giants are using programming such as machine learning to utilize data and gain insights. Winning customers to understanding market trends, and taking profit margins sky-high. Data science is creating a union like no other.
This combination is not only popular among data scientists, who were quite visible a few years back. Now, the sudden surge in demand for data science, millions of opportunities are coming on the surface for developers, programmers, and business analysts.
Data scientists are over the moon, yes we see that smirk! But, it is equally thrilling for so many programmers who are using data in combination with these languages to build remarkable projects.
So, are you interested in data science? Maybe, Machine learning has been on your mind. Or have you been thinking to finally turn that idea into reality?
Well, we assure you one thing, you are in for a ride.
So, let’s start!
Better than troy – A war between programming languages
It’s a war zone, especially when the curve keeps on fluctuating between different languages and how technology has to undergo multiple trends at the same time. When technologies like Artificial intelligence, IoT, Virtual reality, Augmented reality are making waves. It is important to know, what has the most potential to lead us into a promised future. We wouldn’t want to compromise, would we?
Well, if you have the same winning urge. You must have come here looking for the best programming language, to make a choice, it is right to assume? Probably, so let’s not keep you waiting anymore.
This ‘young programming language’ caught our attention, as it started becoming popular among developers, programmers, and even AI engineers due to its powerful, dynamic, versatile, and faster. So, would you ever consider it replacing with C++ because it is said to be quite a high performing language?
Ready to take a jump with Julia?
Well, If you wanna know more about it, then click this!
The rise and popularity of Scala are not hidden from anyone. It’s simply a ‘home ground’ for data scientists because it has been created in a way to support Big Data. Hence, it’s performance dynamics and capacity is also comparatively better than others.
With high functioning java libraries, object-oriented programming, Scala enables you to build complex machine learning applications and powerful web apps. So, what are you looking for? Dreams don’t wait, but you can always hire the right team to turn your dreams into reality.
Do we even need to talk about it? With statistical computing and R-based programming in coherence with data science, the results would turn out to be phenomenal. These languages equip you with the right tools to minimize the chances of risks and errors. If you like things smooth like clean code, well then R-based computing offers statistical models to transform solutions into real-time results. Did we mention the graphics and how easier it is to manipulate the R-based objects?
This one is a tough cookie, giving competition to Python. Speaking of python, how did we come so far without speaking about it?
The name needs to introduction because the IT market is creating a high demand for python developers and experts. With open data science on the horizon, this one has jumped up in the ladder with high performing programming languages, because of its better result delivery.
Its capacity to interface with other languages of its caliber has granted python a very important position in the market. The demand is currently rising, and it will keep on increasing.
Last but not the least, we want to appreciate the outstanding outcome of these general-purpose programming languages. It reigns over many things including the development of web applications, mobile applications, and processing BIG DATA.
Who are we kidding? It is claimed that google takes leverage of Java for backend services, especially when it comes to building applications such as Google Docs. Who isn’t familiar with Amazon? Well, they also favor Java.
If you are also a java fan or looking to know more about it, then feel free to reach out to us.
May the force be with you – Data Science is your power
There are other amazing programming languages that can be used with the world moving towards data-driven economies. The incline towards data mining, machine learning, data insights, data analytics, data-based computing, and streamlining data sets is mind-blowing. To streamline all the data, gain insights, deconstruct the numbers and drive value is a magical process in itself, and programmers or developers would definitely agree.
You can understand how businesses are working, failing, or rising. It enables you to understand the power of your own data and how you can use it to understand customer behavior, devises data-based plans, and turns users into customers.
The big pools of data sets cannot be handled or understood humanly, but if you optimize and automate the process with help of machine learning and other technologies. The results can be more valuable, error-free, and futuristic.
Have you decided upon any programming languages that you can use to build your next project? If you have any confusion or queries, don’t worry let us take care of that. Ask technology experts and hire highly skilled developers. Choose the technology as per your project need!
Act now, and you can have the reward along with desired results.
|
OPCFW_CODE
|
Navigate to an existing directory that was created via ng new, such as ng new angular-hello.
Open a prompt at the location of the project directory. Example c:\projects\angular-hello
ng get
The log given by the failure.
ps>ng get --verbose
The option '--verbose' is not registered with the get command. Run ng get --help for a list of supported options.
InvalidJsonPath
Error: InvalidJsonPath
at _parseJsonPath (C:\projects\angular-hello\node_modules@ngtools\json-schema\src\schema-class-factory.js:29:19)
at _getSchemaNodeForPath (C:\projects\angular-hello\node_modules@ngtools\json-schema\src\schema-class-factory.js:41
:21)
at GeneratedSchemaClass.$$get (C:\projects\angular-hello\node_modules@ngtools\json-schema\src\schema-class-factory.
js:84:22)
at CliConfig.get (C:\projects\angular-hello\node_modules@angular\cli\models\config\config.js:31:29)
at resolve (C:\projects\angular-hello\node_modules@angular\cli\commands\get.js:25:34)
at Class.run (C:\projects\angular-hello\node_modules@angular\cli\commands\get.js:19:16)
at Class. (C:\projects\angular-hello\node_modules@angular\cli\ember-cli\lib\models\command.js:134:17)
at process._tickCallback (internal/process/next_tick.js:109:7)
Desired functionality.
Eliminate the InvalidJsonPath error
Mention any other details that might be useful.
ng build of the project works.
ng serve of the project works.
.angular-cli.json.txt
package.json.txt
build.txt
accidentally close issue when I only meant to close the file upload.
This is being thrown from _parseJsonPath.
According to the function comment: For example, a path of "a[3].foo.bar[2]" would give you a fragment array of ["a", 3, "foo", "bar", 2].
Do you get this same error when you specify a path such as a[3].foo.bar[2] (i.e. ng get a[3].foo.bar[2])?
@sumitarora I can submit a fix for this in a few minutes.
@prestonvanloon sure 👍
Thanks for the quick attention. I visited the docs section of the repository looking for ng get but was unable to find what the valid built-in keys are or how determine which keys are available to "get" or "set".
Suggestion:
create Get.md and Set.md.
Then add them to the repository at the following location:
https://github.com/angular/angular-cli/tree/master/docs/documentation
@fmorriso the related documentation is under the configuration doc. I do see your point of not knowing the built in keys. Originally, I proposed throwing a reasonable error when no key is provided. However, I think a better implementation would be to return the whole config object for display when no specific key provided.
I'll update #5887 to return the proper config object.
Putting what should be inside Get.md and Set.md in something non-obvious like Config.md seems curious at best. When I want information about ng build I look in Build.md, so why not put ng get and ng set into Get.md and Set.md respectively?
@fmorriso I agree, it's not intuitive to have get/set commands live under config.md.
@filipesilva can you weigh in?
Thanks for reporting this issue. This issue is now obsolete due to changes in the recent releases. Please update to the most recent Angular CLI version.
If the problem persists after upgrading, please open a new issue, provide a simple repository reproducing the problem, and describe the difference between the expected and current behavior.
|
GITHUB_ARCHIVE
|
WebDynpro Component and Application Configuration – Disabling Personalization
ABAP WebDynpro is a greay technology for MVC development of Web or SAP GUI applications. For anyone who has developed ABAP WebDynpro applications, you will already know that some screens (views) will allow the user to personalize their layout. Personalization is a really good concept but can also have its limitations. For example, I have found that if you use the table control on your WebDynpro Views, then you should probably disable personalization for the particular WebDynpro application. Why do I say this? When using the table control, you may have noticed that by default the table will alternate its width while the user is scrolling through the table entries. There is nothing more annoying than a table changing its width causing the user to constantly have to move the mouse to the new location of the scroll bar.
There is a table property “fixedTableLayout” which WILL fix the width of your table.
The only problem with this property is that a user can still change the width of individual columns which can ultimately change the overall width of the table. Very annoying….. This can be resolved by also disabling personalization.
Here’s an example of a table changing its width:
After scrolling you can see the table shrunk its width to the size of the data:
How to correct this behavior:
First: You need to set the table parameter, fixedTableLayout, as discussed earlier:
With this setting alone, the user will see a fixed width table, but the user will still be able to manipulate the column widths. This can cause your table to act odd (especially if the table is embedded within another control – such as a group that has its own width properties).
As a quick note: if you ever need to delete a personalization setting for a particular user, you can use the following link:
This link will launch the following screen where you can delete personalization settings:
Second: You need to disable personalization. To do this, do as follows:
1. Right mouse click your WebDynpro Application and select Create/Change Configuration
2. The following screen appears:
3. Enter a Z name for your configuration ID and select Create. Enter a package and transport.
This will create a configuration profile under your WebDynpro application:
4. You can now launch the configuration editor by double clicking the configuration name, ZWD_GETFLIGHTS in my case. You will see many parameters that can be changed. Make sure to check the “Do Not Allow Personalization”.
Now to check this parameter change worked, simply click your application configuration and select test. Your application launches and personalization has been disabled.
In addition to the above settings, I add the following parameter WDDISABLEUSERPERSONALIZATION setting to my DynPro application settings:
|
OPCFW_CODE
|
Download vpn client software
Bug fixing: Activation not properly working in some circumstances like multiple user levels on the same machine.Bug fixing: Scripts before or after tunnel open or close might not be launched in some circumstances.Fast, secure, and easy to set up: ExpressVPN is the best VPN for Windows.Bug fixing: IKE engine might not be listening anymore in some cases of message exchanges with the VPN gateway e.g. timeout on no response (or lost) from the VPN Gateway.Improvement: Easier activation wizard to accept 20 or 24 digit license number.Bug fixing: Systray popup to show tunnel progress bar taking focus over other application.Download the official hide.me VPN application client software for Windows, iOS, OSX or Android.Bug fixing: VPN Setup is allowed on Windows Server 2003 again.Bug fixing: Another tunnel does not open properly after unplugging a smartcard with some smartcard models.
How to download and install SSL VPN (Netextender ClientBug fixing: Padding and IP frame total length when using some FTP commands with a web server preventing access through a WindRiver VPN Server.Bug fixing: The VPN Configuration is not loaded from an USB Drive if already plugged in before the IPSec VPN Client software started.Improvement: Physical and logical interface changes are better detected under Windows 2000.Improvement: Support of all 3 addressing modes i.e. host, subnet and IP address range with IKEv2 VPN tunnels.The IT manager can disable this feature and force his own settings.Improvement: USB Mode Confirmation popup only appears when required.
Switching from one user to another may cause the IPSec VPN client not to function properly.Bug fixing: Phase1 LocalID value malformed when certificate uses UTF8 string syntax.Known Issues: DPD continues after tunnel failure (IKEv1 only).Bug fixing: Manual Activation may fail depending on user rights (Vista only).
Bug fixing: Cannot open an IKEv1 tunnel when switching from a network to another while VPN Client is.Feature: Algorithms SHA2 is supported to sign with a CSP smart card.Feature: A new Connection Panel makes the GUI even easier for users.Bug fixing: Online Software Activation correctly handles license numbers encoded with 20 characters (releases 2.5x and older).Bug fixing: X509 Certificate parser assumes that serial number in Certificate is mandatory and rejects certificates without serial number (e.g. coming from USB Tokens).Silent installation with options (see Silent Install Config Guide ).Bug fixing: Manual activation fails with an Activation error message: 0 in some circumstances.Download this app from. and compare ratings for Check Point Capsule VPN. About Check Point Software Technologies Check Point Software.Improvement: More explanation on how to move license to other computer on successful software activation.
This new release implements automatic conversion tools for OpenVPN (.ovpn) and Cisco (.pcf) VPN configuration files.Bug fixing: The command line options of the software are correctly managed (Vista only).With the software release 6.4, TheGreenBow introduces the multi-protocols VPN Client.TheGreenBow VPN Client is available on Windows Server 2008, Windows Server 2012, Vista, Windows 7, 8 et 10, 32 or 64bit.Bug fixing: The text of the uninstall final dialog is consistent with the uninstallation.It also enables to open VPN tunnels over IPv4 or IPv6, to reach an IPv4 or IPv6.Bug fixing: Problem on the NetgearLite version with the Windows 7 64Bit installation.Bug fixing: It is possible to enter a space character in a preshared-key.
Bug fixing: a drive network is not seen as a USB drive anymore.Bug fixing: VPN Configuration Wizard does not start when software starts and VPN Configuration is empty.Bug fixing: No tunnel when using SHA2 algorithm and Windows Certificate Store.Known Issues: One Phase2 only can be created per Phase1 with IKEv2 VPN tunnels.Improvement: Messages are displayed in default language (i.e. english) if not available in the language dll, this is especially useful when our partners work on localization version.Improvement: The log files use a system hour and date for timestamping.
Bug fixing: Unselect PKICheck might not be taken into account in some circumstances.Bug fixing: Full license number can be configured through a command line argument, for silent.
vpn client software free downloadFeature: Language can be changed on the fly, and all the strings can be modified from the software.Known Issues: Multi-proposal with IKEv1 VPN tunnels is limited to 2 choices only for Key Group within Phase2 (i.e. DH2, DH5).Improvement: Script commands on opening or closing tunnels now accept parameters.Known Issues: Several Certificates with same Subject added to the Windows Certificate Store might prevent a tunnel to open in some circumstances.
Bug fixing: Software activation may not work properly in case Windows default temporary folder is restricted to the user.
PIX 501 VPN Client Software Download - Experts-ExchangeBug fixing: TgbIke Starter service description added for Windows Service Manager.Bug fixing: No retransmit of Phase2 request when the remote gateway does not answer.
Improvement: New driver release improves stability (Vista only).Improvement: Ability to maintain trial period while installing multiple OEM customization releases.Bug fixing: Windows firewall configuration correctly restored on uninstall.Bug fixing: Bluescreen when leaving sleep mode in Windows 7 64-bit.
|
OPCFW_CODE
|
long cuda compile times
compile times are excessively long in some cases. consider the performance/amg/smoothed_aggregation.cu example
nvcc smoothed_aggregation.cu -o smoothed_aggregation -I ../.. -I /usr/local/cuda/include -Xcudafe -# -O3
Front end time 94.89 (CPU) 95.00 (elapsed)
Back end time 3.88 (CPU) 4.00 (elapsed)
Total compilation time 99.15 (CPU) 99.00 (elapsed)
Front end time 3.95 (CPU) 4.00 (elapsed)
Back end time 1.56 (CPU) 2.00 (elapsed)
Total compilation time 5.51 (CPU) 6.00 (elapsed)
Front end time 1.41 (CPU) 1.00 (elapsed)
Back end time 0.15 (CPU) 0.00 (elapsed)
Total compilation time 1.56 (CPU) 1.00 (elapsed)
One workaround during development is use the cpp or omp backends
nvcc smoothed_aggregation.cu -I ../.. -I /usr/local/cuda/include -o smoothed_aggregation -Xcompiler -DTHRUST_DEVICE_SYSTEM=THRUST_DEVICE_SYSTEM_OMP -Xcompiler -fopenmp -Xcudafe -# -O3
Front end time 5.23 (CPU) 5.00 (elapsed)
Back end time 0.72 (CPU) 1.00 (elapsed)
Total compilation time 5.96 (CPU) 6.00 (elapsed)
Front end time 0.65 (CPU) 1.00 (elapsed)
Back end time 0.23 (CPU) 0.00 (elapsed)
Total compilation time 0.89 (CPU) 1.00 (elapsed)
Front end time 1.11 (CPU) 1.00 (elapsed)
Back end time 0.10 (CPU) 0.00 (elapsed)
Total compilation time 1.22 (CPU) 1.00 (elapsed)
Need to analyze header dependency to reduce the number of redundant header file inclusions.
Hi Steven Dalton,
I have tried this method but it is much longer when using "-DTHRUST_DEVICE_SYSTEM=THRUST_DEVICE_SYSTEM_OMP -Xcompiler -fopenmp"
nvcc smoothed_aggregation.cu -o smoothed_aggregation -I$CUSP_DIR -I$THRUST_DIR -Xcudafe -# -O3
Front end time 18.50 (CPU) 19.00 (elapsed)
Back end time 1.03 (CPU) 1.00 (elapsed)
Total compilation time 19.53 (CPU) 20.00 (elapsed)
Front end time 1.15 (CPU) 1.00 (elapsed)
Back end time 0.41 (CPU) 1.00 (elapsed)
Total compilation time 1.57 (CPU) 2.00 (elapsed)
Front end time 2.28 (CPU) 2.00 (elapsed)
Back end time 0.09 (CPU) 0.00 (elapsed)
Total compilation time 2.37 (CPU) 2.00 (elapsed)
nvcc smoothed_aggregation.cu -o smoothed_aggregation -I$CUSP_DIR -I$THRUST_DIR -DTHRUST_DEVICE_SYSTEM=THRUST_DEVICE_SYSTEM_OMP -Xcompiler -fopenmp -Xcudafe -# -O3
Front end time 21.36 (CPU) 22.00 (elapsed)
Back end time 1.00 (CPU) 1.00 (elapsed)
Total compilation time 22.37 (CPU) 23.00 (elapsed)
Front end time 1.18 (CPU) 1.00 (elapsed)
Back end time 0.45 (CPU) 1.00 (elapsed)
Total compilation time 1.63 (CPU) 2.00 (elapsed)
Front end time 2.23 (CPU) 2.00 (elapsed)
Back end time 0.10 (CPU) 0.00 (elapsed)
Total compilation time 2.33 (CPU) 2.00 (elapsed)
Thanks for the feedback. Can you tell me your OS, cuda version, compiler, and machine specs (processor, ram)?
Here is the computer information:
OS : CentOS 5.9
Cuda : 3.2
gcc : 4.1.2
CPU : Intel(R) X7542
GPU : Quadroplex 2200 S4
I have also tested the example on another computer with a more recent version of cuda
OS : OpenSUSE 12.3
Cuda : 6.0
gcc : 4.7.2
CPU : Intel(R) Xeon(R) E5-1607
GPU : Quadro 600
The second method does not work, here is the compilation log file. Thanks
log.txt
Thanks. Is it possible for you to update the CUDA installations on either machine to the latest release? I think the second method is failing because Cusp 0.4.0 is installed, 0.5.1 and above addressed a number of issues related to using the OMP backend.
I have installed the latest version of CUDA (7.5) and CUSP (0.5.1) on a new computer. The results are almost the same. Is there other ways to accelerate the compilation?
Is it possible to pre-compile the cusp headers with nvcc?
Hi Steven,
I have another problem now. I try to compile the example normally using "nvcc -o smoothed_aggregation smoothed_aggregation.cu" with the latest version of CUDA and CUSP, but I get the following error message "Segmentation fault (core dumped)".
Hey,
Try adding the --verbose flag to nvcc to see where you compilation is failing, after that try recompiling with the -O3 flag. I noticed a similar issue before but never resolved the source of the problem.
I don't think nvcc supports precompiled headers at this point. You could "fake" the process with some custom build scripts by analyzing the compilation process nvcc uses with the --verbose flag but this would be far too much effort. Are you looking to use a small set of routines repeatedly? If so CUSP should support compiling a shared or static library so you can compile once and link with your application multiple times. I will add an example of building a shared library.
Steve
Hey, any news on the compilation issues?
Hi Steven,
I am facing the same problem of long compilation times for "smoothed_aggregation.cu". I tried the solutions mentioned above.
nvcc smoothed_aggregation.cu -I /usr/local/cuda/include/cusplibrary-0.5.1/ -o smoothed_aggregation -Xcudafe -# -O3 (gives error):
Front end time 90.92 (CPU) 90.00 (elapsed)
Back end time 4.71 (CPU) 5.00 (elapsed)
Total compilation time 96.08 (CPU) 97.00 (elapsed)
Front end time 5.45 (CPU) 6.00 (elapsed)
Back end time 1.72 (CPU) 2.00 (elapsed)
Total compilation time 7.18 (CPU) 8.00 (elapsed)
Front end time 3.26 (CPU) 3.00 (elapsed)
Back end time 0.26 (CPU) 0.00 (elapsed)
Total compilation time 3.53 (CPU) 3.00 (elapsed)
/usr/include/string.h: In function ‘void* __mempcpy_inline(void*, const void*, size_t)’:
/usr/include/string.h:652:42: error: ‘memcpy’ was not declared in this scope
return (char *) memcpy (__dest, __src, __n) + __n;
nvcc smoothed_aggregation.cu -I /usr/local/cuda/include/cusplibrary-0.5.1/ -o smoothed_aggregation -Xcompiler -DTHRUST_DEVICE_SYSTEM=THRUST_DEVICE_SYSTEM_OMP -Xcompiler -fopenmp -Xcudafe -# -O3 (also gives the same error):
Front end time 13.97 (CPU) 14.00 (elapsed)
Back end time 0.93 (CPU) 1.00 (elapsed)
Total compilation time 14.93 (CPU) 15.00 (elapsed)
Front end time 1.07 (CPU) 1.00 (elapsed)
Back end time 0.30 (CPU) 1.00 (elapsed)
Total compilation time 1.38 (CPU) 2.00 (elapsed)
Front end time 2.55 (CPU) 2.00 (elapsed)
Back end time 0.18 (CPU) 0.00 (elapsed)
Total compilation time 2.73 (CPU) 2.00 (elapsed)
/usr/include/string.h: In function ‘void* __mempcpy_inline(void*, const void*, size_t)’:
/usr/include/string.h:652:42: error: ‘memcpy’ was not declared in this scope
return (char *) memcpy (__dest, __src, __n) + __n;
When compiled as nvcc smoothed_aggregation.cu -I /usr/local/cuda/include/cusplibrary-0.5.1/ -o smoothed_aggregation: does not give any error and works.
System details:
CUDA v7.5
Thrust v1.8
Cusp v0.5
OS: Ubuntu 16.04.2 LTS (xenial)
Processor: Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
RAM: 16GB
GPU: TITAN X (Pascal) 12 GB
I am a new user of CUSP. I would be grateful if you could suggest some workaround. Thanks!
|
GITHUB_ARCHIVE
|
Novel–Fey Evolution Merchant–Fey Evolution Merchant
The Southerner: A Romance of the Real Lincoln
Chapter 466– Spirit Return! End Of Summer! annoyed quarrelsome
Before Lin Yuan could give any guidance, ahead of the Mom of Bloodbath can use her words to infuriate him, Jiao Hanzhong experienced already used the effort.
It can be stated that Morbius’ Genuine Area of Happiness could completely soak up the spread Community Grace’s energy and change it into mindset qi crystals.
Lin Yuan was coveting the globe Grace that could go down after the Ice cold Snow Pine suffered the earth Detoxification.
Lin Yuan has been expecting Jiao Hanzhong to fall season in lose hope and phone forth the planet Purifying to eliminate both himself with his fantastic opponents.
Every one of the atmosphere possessed sent back to calmness immediately. The sea of plants and also the enormous hardwood that picked up the divine wooden appeared just like these people were bestowed with intelligence and ended up lively.
Regardless if a Cla.s.s 2 Creation Learn could know clearly regarding a Typical fey’s capability and unique competency, when a fey was a Dream Dog breed, not even a Cla.s.s 3 Formation Learn could see through its expertise and exceptional expertise.
Lin Yuan felt that Jiao Hanzhong was truly an ambitious aged ‘youth’ who obtained ideals and aspirations.
Just after working with Character Returning, Almost endless Summer time reverted by the time she just turned into a human and was donning a pink-purple palace gown.
For that reason, when Lin Yuan been told that Jiao Hanzhong’s fey would browse through the Entire world Cleanse, he got already begun coveting it.
It might be claimed that Morbius’ Absolutely pure Terrain of Satisfaction could completely digest the dotted World Grace’s strength and transform it into spirit qi crystals.
Limitless Summertime switched abnormally stern and investigated Headache VI as she stated in a deeply tone of voice, “Earlier on, you stated something wrong. You said that a pinnacle Misconception II could possibly replicate and definitely will never truly obtain System Weaponization.”
“Its exceptional ability, Dark Night Tribute, will allow it to get rid of its Laws Rune as well as as a tribute to the phony physique well before modernizing right into a Imagination Breed of dog. It might then give up the latest physique and reincarnate to the false physique where not one person knows where it really is undetectable.”
Endless Summer season turned abnormally stern and checked out Nightmare VI as she reported in a heavy voice, “Earlier on, you claimed something wrong. You asserted that a pinnacle Fantasy II could possibly copy but will never truly obtain Body system Weaponization.”
A atmosphere-trembling atmosphere authorized the divine solid wood which was already next to the skies to climb up limitlessly.
When Lin Yuan’s proclamation finished, Nightmare VI began using up with dark-colored fire inside the lavish branches. It had been melting similar to a candlestick.
Major problem VI was only obtaining a single idea within its intellect now. How does another individual are conscious of my unique expertise!?
Countless Summer’s body was already a smaller entire world with all the legislation. She lengthy a finger and directed at Headache VI, which had been still constantly melting. She then chanted, “End of Summer time, all things shall get into extended solitude!”
Jiao Hanzhong, who had been non-existent, was permitting out a frenzied manifestation. He was looking at the Gemstone By/Dream V Best Ice-cubes Cedar, and the sight were stuffed with apology and sorrow.
Lin Yuan have been waiting around for Jiao Hanzhong to fall season in lose heart and phone forth the earth Cleansing to destroy both himself along with his foes.
Even so, lower back when Lin Yuan possessed applied Morbius’ Genuine Ground of Satisfaction to assist the Mother of Bloodbath to withstand the planet Cleansing, he possessed unintentionally found that merely a find with the overflowing Environment Elegance was enough to instantly condense 20 over soul qi crystals.
Lin Yuan have been anticipating Jiao Hanzhong to fall season in lose heart and simply call forth the World Cleansing to eliminate both himself with his fantastic enemies.
Never-ending The summer months thought it was just a little impressive, but she would not suspect Lin Yuan’s ideas. In the end, she believed that Lin Yuan wasn’t someone that would communicate over cuff. Consequently, whichever he said have to be true.
“Its unique expertise, Darkish Evening Tribute, makes it possible for it to shed its Regulation Rune as well as like a tribute to its false human body right before changing to a Fantasy Breed. It can then give up on the actual human body and reincarnate in to the bogus physique where no person is aware where it truly is concealed.”
Lin Yuan’s speech was deafening, and also it immediately attained the ears of everyone around the battleground.
Limitless The summer months transformed abnormally stern and looked over Problem VI as she said in the heavy tone of voice, “Earlier on, you said something wrong. You stated that a pinnacle Fantasy II could possibly imitate and may never truly obtain System Weaponization.”
Lin Yuan noticed that whenever Jiao Hanzhong wasn’t the adversary, he would check with Limitless Summertime to get him a locks coloring to show grat.i.tude.
Lin Yuan ended up being waiting for Jiao Hanzhong to fall season in lose heart and get in touch with forth the World Washing to destroy both himself and his enemies.
Generally, following a Myth Dog breed fey suffered the globe Purifying, the whole world Grace that descended may be plundered. Nevertheless, the results with the plundered Planet Elegance might be faint. In fact, it wouldn’t be 1Percent around the globe Grace’s full vigor.
Per cent of the universe Grace that didn’t are part of the fey might be dispersed in to the surroundings.
All the atmosphere obtained sent back to serenity right away. The water of plants along with the enormous solid wood that picked up the divine timber checked as if these people were bestowed with knowledge and have been in existence.
Chapter 466: Mindset Returning! Ending of Summer season!
The enormous pillars ended up upholding a verdant divine real wood that has been almost attaching paradise and planet. The verdant wood possessed divisions which had been stuffed with pink-crimson stitched tennis ball-fashioned blossoms.
Prior to Lin Yuan could give any information, ahead of the Mom of Bloodbath can use her phrases to infuriate him, Jiao Hanzhong obtained already taken the motivation.
Novel–Fey Evolution Merchant–Fey Evolution Merchant
|
OPCFW_CODE
|
- Trending Categories
- Data Structure
- Operating System
- MS Excel
- C Programming
- Social Studies
- Fashion Studies
- Legal Studies
- Selected Reading
- UPSC IAS Exams Notes
- Developer's Best Practices
- Questions and Answers
- Effective Resume Writing
- HR Interview Questions
- Computer Glossary
- Who is Who
What is the scope of a career for a Python developer?
In this article, we will explain what is the scope of a career for a python developer.
Python is not only one of the most popular programming languages in the world, but it also has the most interesting employment chances. Every year, the demand for Python developers grows. There is a reason for the popularity of this high-level programming language.
Python Career Opportunities
So, what are your possibilities once you've completed your Python training? The following are some job options for you −
This is one of the most straightforward employment you can hope to acquire after learning this skill. The recent statistics clearly show that there will always be open Python developer roles to fill. What is the role of a Python developer? The following are a few major responsibilities −
Resolve data analytics-related issues
Write code that is both reusable and efficient.
Improve data algorithms
Implement data security and protection.
This is a fantastic opportunity. It is ideal for those who enjoy dealing with large amounts of data and finding meaning in it. This is yet another popular job role. Many companies are searching for someone who can work with huge amounts of data that they have access to.
These companies are looking for Python experts since Pandas, SciPy, and other Python libraries are quite useful in completing this task. It's no surprise that more and more firms are looking for data analysts with Python knowledge to fill open positions.
Product managers play a vital role in helping companies in understanding the market and why constructing one product is superior to building another. They investigate the market for new features relevant to a specific product or category and use facts to argue for the development of specific items. Data is a vital part of their work. This is why most firms today seek product managers that are proficient in Python.
Machine learning engineer
If you didn't already know, job vacancies for this position have grown by more than 330% in the previous few years. You will be given preference over other candidates if you are proficient in Python. A machine learning engineer creates and teaches machines, programs, and other computer-based systems to make predictions using their learned information. Python's ability to work with data automation and algorithms makes it an excellent programming language for machine learning.
Scope of Python Freelancer Jobs
A closer look at well-known job sites like Indeed, Upwork, freelancer.in, and PeoplePerHour reveals that the number of remote job opportunities in Python is expanding. This is particularly prevalent in the United States, where the majority of Python job prospects are. As a freelancer, you may be assigned to a wide range of jobs.
Some examples include developing a trading bot capable of buying and selling cryptocurrencies, improving an existing 3D rendering pipeline, or detecting flaws in a cloud-based platform to improve system efficiency. A complete understanding of the project requirements, as well as an accurate calculation of effort, are required for project success.
Those working from home in Python programming should also make time for conference calls with project coordinators and project assessments. It promotes better communication and reduces errors.
Python offers numerous career prospects. Some of the well-known benefits include tremendous growth, learning, and great compensation. You may be a part of the evolving technological world and have an impact on it in your own unique way. Add the perfect finish with a data analytics course, and you're ready to rock.
If you want to create a career in Python, you should be aware of all possible Python interview questions and answers.
Python careers are also diverse in terms of career options. One can begin as a developer or programmer and progress to the profession of a data scientist. With enough experience and Python Programming course certification, one can even become a certified Python trainer or an entrepreneur. However, the bottom line stays the same. Perform to grow in python.
Scope of Python Developer
Python, despite being a newer member in the race, has grown in popularity and holds a lot of promise for developers. It is not only an open-source programming language, but it is also one of the most versatile. It is widely used by programmers for application development and system development. Additionally, decreased coding effort and improved test performance promise better programming. As a result, python developers are in high demand.
Career Opportunities for Freshers in Python
There are numerous career options for Python programmers for freshers. Python has a lot to offer if you've had the proper knowledge and mindset for quick learning. Naturally, you must have a very solid foundation in programming as well as problem-solving abilities.
Students might begin by defining variables and loops when learning Python. After that, go on to dictionaries, lists, and tuples. Freshmen who are interested in Python's rich career choices should have a solid understanding of immutable types. A Python beginner also has to be familiar with the interactivity of modules, using exceptions and exception handlers, and the ability to create classes and instantiate objects. A Python programmer must also understand the fundamental distinctions between Python 2 and Python 3 as a beginner.
As a beginner, you may apply for positions such as software engineer, front-end software/web developer, Python developer or programmer, and DevOps engineer. Students in their last semesters or recent graduates may also apply for internships in data science, which provides a solid foundation for a career in Python. Most businesses will start a fresher's career in Python with a salary of 3-5 lakhs per year. Global brands such as Accenture and Capgemini may offer higher pay.
Career Opportunities for Experienced Hires in Python
Obtaining a full-time onsite position as a Python programmer or Architect with a top-tier MNC necessitates extensive preparation. It is critical to have a good understanding of Python programming and to be confident in your areas of expertise. It also provides you with the additional preparation needed for your onboarding process. As an experienced hire, you may be requested to finish a Python programming project. Alternatively, your analytical skills may be evaluated. Make a list of test cases or success stories ready for a fast glance before leaving for an interview.
To land a profitable job in Python, you need to be well-versed in the major scientific and numerical modules and be able to design tight algorithms. Other needs include concurrent algorithms, SIMD vectorization, and multicore programming, which will give your Python career a much-needed boost. Salary ranges for skilled Python programmers might range from 8 lakhs to 12 lakhs per year. The statistics could be higher if you can better demonstrate your skills and bargain with the business.
What does the future hold for Python developers?
Businesses not only in India but all across the world, are searching for talented Python experts that can make a difference when it comes to developing solutions that are perfectly suited to their client's demands. Python's popularity is obvious, and the competitive advantage it has gained over other programming languages in recent years speaks volumes about its skills.
Python's application is predicted to develop in three areas in the future: data science, big data, and networking. However, its expansion cannot be limited to just these three regions. All three of the regions described above are among the most popular these days.
As a result, the salaries that you can expect when working in one of these occupations may exceed the salaries of jobs that demand competence in other languages. Even as a freelancer, you can earn what your abilities and experience deserve. And if you don't have this popular ability, you should devote more effort to learning it. Adding these skills to your Resume will help you get employed faster than others.
It is vital that individuals learn about Python and the incredible job opportunities available in this industry. The field offers outstanding Python career chances that are not static but are likely to shift based on higher-level certificate courses that will help you advance in your field. To be a competent Python developer, you must have a keen knowledge not only of coding and web development but also of communication abilities, in general, to perform effectively at Python jobs.
This will eventually enable you to improve your management skills. You may be assigned to a team to manage, in which case you must impose your assertion on the group through tough addressing. To have an attractive career with Python-related tasks, you must also keep yourself current with the dynamic changes that are made and study them with your quick thinking and excellent decision-making skills.
In this article, we went over the role of a Python developer in various domains in depth.
Kickstart Your Career
Get certified by completing the courseGet Started
|
OPCFW_CODE
|
Sometimes a delivery of products does not arrive at a facility when it should, and sometimes my simulation results are not the same as other people working on the same case study – is this a bug in the software?
Delivery vehicles occasionally miss deliveries. The frequency of missed deliveries depends on the frequency of deliveries and the time it takes for a vehicle to travel its delivery route. More frequent deliveries made by vehicles traveling on longer routes result in higher rates of missed deliveries(just like in the real world). Because of missed deliveries it is necessary to have “safety stock” inventory at facilities (just like in the real world) to keep them going when this occurs. And sometimes there will even be two deliveries made when only one was expected, so you need to have a bit of extra storage space available at facilities to handle unexpected deliveries. To improve scheduling of vehicles on routes see a full explanation in “Delivery Schedules Do Not Always Work Perfectly”.
You will also notice your simulation results do not always match exactly with results of others working on the same case study. This is because of small differences in where you place a facility or how you define routes or vehicles or times between departures. The more adjustments and changes you make to your supply chain model (even very small ones), the more your results will differ from others who do similar but not exactly identical things in their models. Learn more about this in "The Butterfly Effect".
Which Browsers are Compatible with SCM Globe?
Chrome, Firefox and Safari. And it DOES NOT run under Microsoft Internet Explorer or Edge. If Microsoft is your regular browser, Chrome and Firefox co-exist well with Microsoft. You can use Chrome or Firefox when working with SCM Globe, and return to the Microsoft browser for other tasks.You can download a free copy of Chrome or Firefox by going to the websites of those browser makers.
- Chrome - https://www.google.com/chrome/browser/desktop/
- Firefox - https://www.mozilla.org/en-US/firefox/channel/
Firefox has security features that can cause confusion. One of those features is triggered when you download simulation results from SCM Globe to your PC. In the Firefox browser when you click on the "Export Results to Excel" button it will cause Firefox to open a dialog box. The dialog box will ask "What should Firefox do with this file?" There are two options: 1) Open with; or 2) Save File. Select "Save File". The simulation data will then download as a data file to your PC. The file will be something called an "octet-stream" instead of a .csv file. Unlike a .csv file, you cannot open this file by just clicking on it. You must open it from within your spreadsheet application. So open your spreadsheet application (MS Excel, Apple Numbers, Google Sheets, etc.) and use the spreadsheet to open the simulation data file that you just downloaded. It will open correctly inside your spreadsheet application.
We have several reports of entity icons not displaying when running simulations under Firefox. At SCM Globe, we are able to run simulations without a problem using the latest version of Firefox (Version 52.0.1). However, there may be security settings in Firefox that could prevent entity icons from being loaded and displayed. We are using the default settings that come when we download and install the newest version of Firefox and everything works fine. But Firefox browser settings on your computer may be set differently depending on your company security policies, etc. Ask your IT support person about this. We appreciate any information you can share with us on this topic.
|
OPCFW_CODE
|
We're seeing more and more configurable applications built on the Salesforce platform. Product managers are wanting any packaged code to be configurable by the installer, especially for OEM applications. Naturally, Salesforce followed with the an extension of the platform that provides us with the tools to do more of this without having to hack our way through it.
Why do you care?
- No more pushing data updates out to all of your customers via install scripts prone to failure - your config data is now packageable
- Features of custom metadata types will match or beat all of the best features of custom settings
- You'll be able to have a mix of managed and unmanaged, protected, and public records
- Your application can now be completely, dynamically data-driven
The Future is Bright (Safe Harbor)
Fortunately, Salesforce is investing to make custom metadata types the standard for developing configurable applications.
- A user interface is coming soon. We developed a UI in a couple of weeks to view, create, and delete types as well as records
- Hierarchical relationships between records are coming soon.
- Aaron Slettehaugh and Avrom Roy-Faderman at Salesforce are working hard to include any features that may benefit YOU. Join the Chatter group and suggest your ideas.
We started building an application which uses workflow across multi-orgs around the time the pilot version of custom metadata types were released. After evaluating whether to use custom settings or custom objects to hold the definitions of these configurable workflow processes, we ultimately chose custom objects for the ability to easily have a UI for configuration and the ability to view hierarchical relationships without having to build custom pages. We thought we'd have to push changes to the workflow definitions out via post-install script with the "metadata" stored in static resources. Along comes custom metadata types. A light bulb went off. We can use these for everything we want to control by config data: workflow definitions, form definitions, api endpoints and so on. Step 1: Create your __mdt.object xml files. This is essentially the custom object your metadata records will be created from. Step 2: Create your __md xml files. These are the records you'll query in SOQL to tell you what to do dynamically within your application defined however you want. Don't worry, SOQL is unlimited against custom metadata records. Step 3: Package and deploy. Step 4: Anytime you want to change your config, just update and push upgrade.
We're Here To Help
After working with this feature throughout the pilot phase and now having made the conversion to the Summer '15 version, we've seen what works really well for custom metadata types. We've also tried some things that didn't work so well. We can provide a review of where this can be used in your app, help you implement a custom UI for maintenance, or just make sure you don't make the same mistakes we did.
|
OPCFW_CODE
|
DS (Output, active Low). Data Strobe is activated once for
Port 0 (P07-P00). Port 0 is an 8-bit, bidirectional, CMOS
compatible port. These eight I/O lines are configured un-
der software control as a nibble I/O port, or as an address
port for interfacing external memory. The output drivers
are push-pull. Port 0 can be placed under handshake con-
trol. In this configuration, Port 3, lines P32 and P35 are
used as the handshake control /DAV0 and RDY0. Hand-
shake signal function is dictated by the I/O direction of the
Port 0 upper nibble P07-P04. The lower nibble must have
the same direction as the upper nibble.
each external memory transfer. For a READ operation,
data must be available prior to the trailing edge of /DS. For
WRITE operations, the falling edge of /DS indicates that
output data is valid.
AS (Output, active Low). Address Strobe is pulsed once
at the beginning of each machine cycle. Address output is
through Port 0/Port 1 for all external programs. Memory
address transfers are valid at the trailing edge of /AS. Un-
der program control, /AS is placed in the high-impedance
state along with Ports 0 and 1, Data Strobe, and
For external memory references, Port 0 can provide ad-
dress bits A11-A8 (lower nibble) or A15-A8 (lower and up-
per nibble) depending on the required address space. If
the address range requires 12 bits or less, the upper nibble
of Port 0 can be programmed independently as I/O while
the lower nibble is used for addressing. If one or both nib-
bles are needed for I/O operation, they must be configured
by writing to the Port 0 mode register. After a hardware re-
set, Port 0 is configured as an input port.
XTAL1 Crystal 1 (time-based input). This pin connects a
parallel-resonant crystal, ceramic resonator, LC, or RC
network or an external single-phase clock to the on-chip
XTAL2 Crystal 2 (time-based output). This pin connects a
parallel-resonant, crystal, ceramic resonant, LC, or RC
network to the on-chip oscillator output.
Port 0 is set in the high-impedance mode (if selected as an
address output) along with Port 1 and the control signals
R//W Read/Write (output, write Low). The R//W signal is
Low when the CCP is writing to the external program or
AS, /DS, and R//W through P3M bits D4 and D3(Figure
A ROM mask option is available to program 0.4 VDD
CMOS trip inputs on P00-P03. This allows direct interface
to mouse/trackball IR sensors.
R//RL (input). This pin, when connected to GND, disables
the internal ROM and forces the device to function as a
ROMless Z8. (Note that, when left unconnected or pulled
high to VCC, the part functions normally as a Z8 ROM ver-
An optional 200 kOhms pull-up is available as a mask op-
tion on all Port 0 bits with nibble select.
Note: Internal pull-ups are disabled on any given pin or
group of port pins when programmed into output mode.
P R E L I M I N A R Y
|
OPCFW_CODE
|
The quick-fix guide to Slackware Package Management.2
Installpkg installs Slackware packages just like pkgtool does, except it does it on the fly without you needing to choose from menus. The basic usage command is:
root# installpkg foo.tgz
This command will install foo.tgz into your system. However, newer files in the package will overwrite older files existing on your system. You will want to make note of what files will be installed on your system. This can be done by passing the -warn option. With the -warn option, installpkg will generate a report on what files will be installed, and you can then cross- reference to see if you have any existing files already on your system. Here’s a short example:
root# installpkg -warn foo.tgz
Scanning the contents of foo.tgz…
The following files will be overwritten when installing this package.
Be sure they aren’t important before you install this package:
-rwxr-xr-x root/root 1242 1999-06-05 12:34 usr/local/bin/foo
-rw-r–r– root/root 120 1999-06-05 12:34 usr/local/etc/foorc
You’ve been warned.
I recommend that you always use the -warn option before installing any package. Once you are sure you know what is being installed, you can go ahead and install the package.
removepkg removes an already installed package on the system. Like installpkg, this feature is also available by using pkgtool, except that this is run without a menu interface. removepkg needs to be used in the /var/log/packages directory. The syntax is:
root# removepkg foo
You need to be careful when using removepkg as it may delete a file you wanted to keep. You will want to use the -warn option with removepkg before actually removing a package. This can be done as follows:
root# removepkg -warn foo
Only warning… not actually removing any files.
Here’s what would be removed (and left behind) if you
removed the package(s):
Be sure you use the -warn option all the time before removing a package. Removing the wrong package will cause you uncessesary headaches. Know what you are removing before you remove it.
upgradepkg will upgrade an already existing package installed in your system. It does this by first installing the new package, and then removing any files from the old package that are not in the new package. Keep that in mind! If you need to keep any configuration files from the old package, be sure to back them up. upgradepkg works in two ways. If you have installed package foo.tgz and you want to upgrade it, and the new package is called foo.tgz, then the command to upgrade is:
root# upgradepkg foo.tgz
Otherwise, if you installed the package foo.tgz and the upgraded package is foo-2.0.tgz, then the command to upgrade is:
root# upgradepkg foo.tgz%foo-2.0.tgz
Notice the “%” symbol seperating both package names. This syntax is used when the upgraded package has a different name from the currently installed package. If the old and new packages both have the same name, just use the first example’s syntax.
|
OPCFW_CODE
|
package com.github.alexfalappa.nbspringboot.cfgprops.lexer;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URISyntaxException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Map;
import java.util.Properties;
import org.junit.Ignore;
import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
/**
* Test comparing {@code java.util.Properties} loading versus parsing.
*
* @author Alessandro Falappa
*/
@Ignore
public class CfgVsPropsTest extends TestBase {
@Test
public void testCompareProps() throws IOException, URISyntaxException {
System.out.println("\n--- compare props");
Properties loaded = new Properties();
try (InputStream is = getClass().getResourceAsStream("/load.properties")) {
System.out.println("\nLOADED");
loaded.load(is);
listPropsOrdered(loaded);
}
InputStream is = getClass().getResourceAsStream("/load.properties");
// try (InputStream is = getClass().getResourceAsStream("/load.properties")) {
System.out.println("\nPARSED");
// BootCfgParser cp = new BootCfgParser(is);
// cp.disable_tracing();
// cp.parse();
final Properties parsed = new Properties();//cp.getParsedProps();
listPropsOrdered(parsed);
for (Map.Entry<Object, Object> entry : loaded.entrySet()) {
final Object loadedKey = entry.getKey();
assertTrue(String.format("Missing key '%s' in parsed", loadedKey.toString()), parsed.containsKey(loadedKey));
final Object loadedVal = entry.getValue();
final String parsedVal = parsed.getProperty(loadedKey.toString());
assertEquals(String.format("Loaded value '%s' differs from parsed one '%s'", loadedVal, parsedVal), loadedVal, parsedVal);
}
// } catch (ParseException ex) {
// fail(ex.getMessage());
// }
}
public void testWriteProps() throws IOException {
System.out.println("\n--- write props");
Properties p = new Properties();
p.setProperty("key", "value");
p.setProperty("a=key", "value");
p.setProperty("the#key", "value");
p.setProperty("one!key", "value");
p.setProperty("my key", "value");
p.setProperty("anoth:key", "value");
p.setProperty("key1", "the value");
p.setProperty("key2", "a#value");
p.setProperty("key3", "one!value");
p.setProperty("key4", "my=value");
p.setProperty("key5", "anoth:value");
p.setProperty("spaces", "a value with spaces");
p.setProperty("slashes", "a\\value\\with\\slashes");
p.setProperty("linefeed", "a value\nwith line\nfeeds");
p.setProperty("unicode", "©àèìòù");
try (OutputStream os = Files.newOutputStream(Paths.get("write.properties"))) {
p.store(os, "This is a comment");
}
}
}
|
STACK_EDU
|
For some queries, all first 10 results on Google are spam
The other day, I was looking for the phone number of my hairdresser to make an appointment; it turns out that when searching for the salon's name, Google provides only spam results that aren't helpful and may even end up costing you money.
I couldn't remember the salon's name, but I knew where it was, so I looked it up in StreetView (the salon is in Sèvres, a small town near Paris, France):
Next, I looked for "franck provost sevres", and I got two things:
- "Places for franck provost near Sèvres":
Those are interesting and would be helpful, except none of them are actually in Sèvres.
- A list of web results for my query:
It looked odd: no directory site is present in the results (such as "pagesjaunes.fr", "118218.fr", etc.); instead, there were only a bunch of specialized websites with very specific names:
- meilleurcoiffeur.com means "best hairdresser"
- justacote.com means "next door"
- beaute-addict.com means, well, beauty addict
Checking them out, we find that every site suggests a different phone number for the same salon at the same physical address; here are some of those numbers: 08 99 18 58 xx, 08 99 10 35 xx, 08 99 02 19 xx, 08 99 51 06 xx...
Notice something? Here's a hint: the real number for the salon is in fact 01 46 23 86 xx.
Phone numbers beginning with 08 99 are toll numbers. Calling a number starting from 01-05 will probably be free (or very cheap, depending on your phone company), but calling an 08 99 number will cost you 1,34 euros just to make the connection, plus .34 euros per minute. And this money doesn't go to the salon, it goes to the site that set the number up (with the provider of toll numbers taking a hefty cut).
In short, the sites in Google's first page provide zero value and try to scam you into calling their toll number instead of the real number.
(What's worse, many of these sites serve a different version of their page to Google than to users, where the actual phone number can be seen).
It's a shame these businesses exist, but there's not much we can do about it. But why would Google help them thrive?
In Google's defense
Is there anything that can be said in Google's defense? Actually, yes.
There's no "Franck Provost" salon in Sèvres anymore. It changed its name some time ago (left the Provost franchise) and is now "Fréquence Beauté Coiffure Rive Droite" (a local franchise).
Street View wasn't updated recently enough to reflect the change (which is understandable), but all online directories were, as well as the Google index. So in the Google index, the only pages that contain the tokens "franck provost sevres" are pages from spam sites! (which are probably updated less frequently).
When searching for Fréquence Beauté Coiffure Rive Droite Sèvres one gets, almost exclusively, directory sites in the results, and the correct phone number is immediately available on the results page, without further clicking.
But of course, nobody is going to search like this, because nobody knows the new name of the salon (or if they do, they already have its phone number!)
What to do
Ironically, the freshness of Google's index can cause problems.
It could make sense to have access to some "historical index" ("here's how the results would have looked like last year for this query")? But it would probably be quite confusing.
Retaining changes at the item or semantic level could be helpful. The salon is the same, only the name changed. Google should try to know that, and provide the correct phone number while searching for the old name. (A hard problem, certainly).
In any case, giving so much visibility to spammers and scammers seems very wrong.
|
OPCFW_CODE
|
Instantiating a class from main() vs instantiating from another class
I want to instantiate a child class in Java from within a parent class. However, when I do this, and attempt to call a parent's method from the constructor of the child (with super), the returned field is null. If I changed this to instantiating the parent from the main() method, the field returns how it is expected (a String). I'm not sure whats happening here, can anyone explain?
Main class:
public class MainFunc {
public static void main(String[] args) {
javaClass jv = new javaClass("Bobby");
jv.makeJ2();
}
}
Parent Class:
public class javaClass {
String name;
public javaClass(){
}
public javaClass(String s) {
setName(s);
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public void makeJ2 (){
javaClass2 jv2 = new javaClass2();
}
}
Child Class:
public class javaClass2 extends javaClass {
String name;
public javaClass2() {
super();
String superTitle = super.getName();
System.out.println("HEY " + superTitle);
}
}
This returns:
HEY null
Instead of:
HEY Bobby
The class of an object that is used to instantiate another object is not its super class.
Every java program runs through main method. It must have main method.
If you want out as 'HEY Bobby' you have to make name field as static 'static String name;' because nun static variables are related to the object so same value not have in different objects
@janith1024: nooooooo! Please don't tell beginners to make things static. Christophe: my advice is use a debugger to examine what fields are part of each object you instantiate. Then maybe run through the Java tutorial again.
Design wise, a parent should never instantiate a child class. It is not like human reproduction system. In OOPS world, child classes need to declare their own parents, and only these child classes know about their parents and not vice-versa.
Even though intention in the posted question is to make use of Inheritance, it is not happening by the virtue of the convoluted code. This is how the code is running:
Test creates a javaClass object named jv. At this point jv has an attribute name, value of which is set to Bobby
jv's makeJ2 method is called, this creates a very new object of the class javaClass2, named jv2. The parent class of this very new object does NOT have any field set, and nothing has been passed to the parent class's constructor. Hence there is NO relation between the parent of this new object jv2 and the previously created jv object and that is why:
String superTitle = super.getName(); returns null as expected
The exact problem is that the child object is not passing along any information for the parent's attributes to be set. That can happen through overloaded supers or by setting super properties but not just by calling super(). See a good explanation of how inheritance works in java.
Please do not use static just to make it work
Lastly, I suggest reading about composition too, as that is slightly more preferable over inheritance, for some good reasons.
You cannot access child class from parent class,child class has inherited the parent class, not the other way. But you can make your String static for it to work the way you want.
public class javaClass {
static String name;
Base on your progress:
You initiate the parent class:
javaClass jv = new javaClass("Bobby");
javaClass name attribute will be "Bobby"
Now the time you call:
jv.makeJ2();
It will initiate the new javaClass2:
javaClass2 jv2 = new javaClass2();
It call the super(); mean: javaClass() in javaClass not javaClass(String s)
So now your new child javaClass2 is extended from new javaClass wiht its name is new (null).
If you want javaClass2 print "Buddy", you should:
public javaClass2(String s) {
super(s);
String superTitle = super.getName();
System.out.println("HEY " + superTitle);
}
In your child class you did not overload the constructor for name field. From the overloaded constructor you should invoke super(name);
The output that is generated is because of two reasons.
Because you have called super() in the javaClass2 constructor and not super(String str)
And because the parent java class that the child class is instantiating is not the same as the one you are calling the method makeJ2(jv.makeJ2()) from.
Also the blow link can help you understand the instance variable overriding in java.
Java inheritance overriding instance variable [duplicate]
jv and jv2 are totally two different objects in the memory.
After all that is the fundamental meaning of "new" operator in Java.
you have used "new" operator twice in your code.
So it means you have two completely different objects.
jv's name is set as "Bobby" but nobody has set a name for the second object jv2 !
Imagine this:
class Manager extends Employee
{
....
public void setPMPCertified(boolean b)
{
...
}
}
//Generally Software engineers
class Employee
{
....
public void setName(String n)
{
.....
}
}
Manager m1 = new Manager();
Employee e1 = new Employee();
m1.setName("Robert");
m1.setPMPCertified(true);
e1.setName("Raja");
Robert is a manager. Raja is a software engineer.
They are completely two different data (object) in the memory.
Just because manager extends employee Robert and Raja cannot become single object.
Look at the fact we have used the new operator twice to create two objects.
Please note manager does NOT have the setName method.
It comes from the parent (Employee).
setPMPCertified is only applicable to managers.
we don't care if a software engineer is PMP certified or not!! :)
|
STACK_EXCHANGE
|
How can I cheaply fix these blue blotches on my LCD monitor?
When my grandparents used this dusty Samsung SyncMaster 913v LCD monitor yesterday, the monitor was working normally.
This morning my grandparents first wiped just the screen with a micro-fibre cloth with hand soap water, then wiped it again with a 2nd micro-fibre cloth rinsed with tap water, then a third time with a 3rd dry micro-fibre cloth. Then they saw these vertical colored stripes and blue blotches.
Are we correct to assume that the liquid seeped in, "killing" pixels?
Please see this question's title.
Does this answer your question? How can I fix these vertical colored stripes on my LCD monitor?
@MrEthernet No it doesn't. I posted that question on a different quandary with the same screen.
Please don't post the same question 3 times. While there are slight differences between the posts they are as near as makes no difference "how do I fix this thing?"
If they've gone away, the issue is likely resolved, however, for future reference, when cleaning any non-sealed electronic screen (monitors, TVs, etc.), there should never be enough liquid on the screen that the liquid can form droplets and run down the screen. A microfiber cloth only needs to be damp, but not wet, to clean a screen, with a 3:1 5% vinegar to water solution generally working the best.
@JW0914 They haven't "gone away".
@JW0914 That statement pre-dates the first para. Fixed.
Saving an ancient CRT monitor with multiple issues is probably not worth your time. $100 will get you a modern monitor ( make sure you get one with appropriate inputs, like VGA / D-Sub ). You might also ask at nearby businesses / schools / recycling centers if there are any spares.
@Greek-Area51Proposal The blue is likely from one of two sources, either it's seeped in between one or more of the polarized filters before and/or after the liquid crystals, resulting in only blue light passing through or it's from damage to a portion of the liquid crystal circuitry... to determine which, the monitor would need to be disassembled and the front and rear of the LCD screen visually inspected since you likely can't separate the sandwich that makes up the LCD without damaging it. If the money is available, a ~23" 1080p Acer monitor can be purchased for ~$100 online.
@Greek-Area51Proposal Since the two scenarios above would seem the likeliest, it's not as simple as evaporating the water out due to the impurities in water. If it's water that's seeped in between the polarized filters, even if evaporated out, it will leave minerals behind that will likely result in that portion of the screen never properly displaying as the minerals will scatter the light that's intended to go through horizontal and vertical polarized filters. If it's water damage to the liquid crystal circuity, the damage is likely permanent.
From the number of your posts, I understand that this monitor dating from 2005 is
of importance to your grandparents, but I don't know how practical it would be
to save it, if at all possible. This could take several days, even if successful.
From the fact that your grandparents decided to give it such a good clean,
it is possible that the quality of the screen was already degrading with time.
The problem is whether there is only water damage, or if there is also
soap damage, so whether getting the water out will be enough.
If you would still wish to try, here are some ideas.
If your parents have a ventilator, they could place the monitor in a warm and dry
place and set the ventilator to blow on the screen for several hours or even
one day.
Another possibility, if they have a small container or cupboard big enough to
contain the monitor, is to place a large quantity of (cheap) rice on the bottom,
place the monitor face-down on the rice, then seal the container for a few days.
Adding desiccant bags, like iFixIt's
Thirsty Bag,
can also help.
For the future, I suggest that alcohol wipes for cleaning eyeglasses would do the
cleaning job pretty well, but it's important to avoid pressing strongly
on the glass screen.
You don't need a container for the "rice method". Just put the monitor and the rice in a large garbage bag and seal it with adhesive tape.
@Robert A large garbage bag... so... a container?
@T.J.L.: A large garbage bag can be called a container for this purpose.
You could leave it, do nothing and hope it goes away as it dries out in normal use.
I washed my MacBook Pro screen with too much soapy water (I used the liquid soap normally used for washing dishes) and got a big stain where it leaked inside the LCD panel. A Lenovo laptop I have lost more than half the screen due to moving from hot humid environment (by the pool) into a cold, dry Air Conditioned environment (inside hotel) causing condensation to form in the LCD panel within seconds of going inside.
In both cases over some weeks the stains shrank and both have worked without further problems for years now. The Lenovo is completely fine while the MacBook has only slight marks around the edges of where the stain was - presumably from the remains of the soap that in retrospect I obviously shouldn't have used.
While this is obviously anecdotal things can improve naturally.
Whether trying to speed the process via heat or desiccants would help or hinder I don't know but as long as it isn't actually wet and it is safe to use then using it will mean the screen getting a certain amount of heat to assist the drying process anyway.
Perhaps you could place the monitor and a dehumidifier in a closet. If this is as other suggested, water climbing the LCD monitor via capillary action, drying out the physical space in which the monitor sits should draw the water out of the LCD.
This will not work if the damage is permanent.
|
STACK_EXCHANGE
|