Adding a Draft Feature to Gatsby
Created: Sep 08, 2019 – Last Updated: Apr 26, 2021
Tags: Gatsby, Digital Garden
There are a lot of guides on the internet on how to add default values to your Gatsby schema, e.g. a draft entry in the frontmatter to hide posts that are still work-in-progress. However, all these solutions are kinda hacky, as they, for example, require you to use environment variables or even define the draft entry in every single frontmatter you have. Since Gatsby implemented its schema customization API there is an easy solution (and not hacky at all!) available. Most importantly: this quick tip is applicable to all data sources you have, not only markdown (and its frontmatter).
You can have a look at the CodeSandbox gatsby-draft-default-values to see the working code in a minimal example.
If you want to follow the example along, you can install the default blog starter by running:
gatsby new my-blog https://github.com/gatsbyjs/gatsby-starter-blog
The default blog starter uses markdown for its content, and you can use the so-called frontmatter in markdown files to define additional data, such as a draft status (true or false). The goal is to define a default value for this draft status so that you don’t have to set it in every markdown file but only in the ones you’d like to be a draft.
In your existing createTypes function you have to define a nested type on the MarkdownRemark type (read Nested types for more info) to be able to type the draft field. On the draft field itself the custom extension is used as a directive. The directive allows you to reuse this action on other fields, too. In case you’re using a CMS or another data source than markdown, you’ll need to find and define your own type (instead of MarkdownRemark) for this to work. I’d recommend using GraphiQL (localhost:8000/___graphql) if you’re unsure about the names!
To be able to use the @defaultFalse directive, you need to define a custom extension with createFieldExtension. In its resolver, source contains all fields from the frontmatter (in this case: title, description, date, and draft). info.fieldName is the name of the field you’re applying the directive to – in this case draft. Fields that are not provided are null by default, but because draft should be a boolean you need to return false in this case. If it’s provided, simply return the value.
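Putting the two parts together, a minimal gatsby-node.js could be sketched like this (following the CodeSandbox linked above; the MarkdownRemark/Frontmatter type names match the default blog starter and are assumptions if your data source differs):

```javascript
// Resolver behind the @defaultFalse extension: return the stored value,
// or `false` when the field is missing or null.
const defaultFalseResolver = (source, args, context, info) =>
  source[info.fieldName] == null ? false : source[info.fieldName];

// In gatsby-node.js this function would be exported as
// exports.createSchemaCustomization.
const createSchemaCustomization = ({ actions }) => {
  const { createTypes, createFieldExtension } = actions;

  // Register the reusable @defaultFalse directive.
  createFieldExtension({
    name: "defaultFalse",
    extend() {
      return { resolve: defaultFalseResolver };
    },
  });

  // Type the nested frontmatter so the draft field can carry the directive.
  createTypes(`
    type MarkdownRemark implements Node {
      frontmatter: Frontmatter
    }
    type Frontmatter {
      draft: Boolean @defaultFalse
    }
  `);
};
```

Because the extension is registered by name, you can reuse @defaultFalse on any other boolean field you type.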
Read the complete guide on Creating type definitions to get in-depth knowledge of how and why it works this way.
Now that the draft field is set up and defaults to false, you can go ahead and add draft: true to the frontmatter of a blog post.
When opening GraphiQL you should also be able to query the draft field now and filter by it. A query to list all markdown posts that are not a draft looks like:
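For instance, a sketch of such a query (field names from the default blog starter; adjust to your setup):

```graphql
query NonDraftPosts {
  allMarkdownRemark(filter: { frontmatter: { draft: { eq: false } } }) {
    nodes {
      frontmatter {
        title
        draft
      }
    }
  }
}
```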
# install QnA Maker package on Windows with command:
# py -m pip install azure-cognitiveservices-knowledge-qnamaker
# install QnA Maker package on Mac/Linux with command:
# pip install azure-cognitiveservices-knowledge-qnamaker
# ==========================================
# Tasks Included
# ==========================================
# This sample does the following tasks.
# - Create a knowledge base.
# - Update a knowledge base.
# - Publish a knowledge base.
# - Download a knowledge base.
# - Query a knowledge base.
# - Delete a knowledge base.
# ==========================================
# IMPORTANT NOTES
# This quickstart shows how to query a knowledgebase using the V2 API,
# which does not require a separate runtime endpoint.
# Make sure you have package azure-cognitiveservices-knowledge-qnamaker 0.3.0 or later installed.
# The QnA Maker subscription key and endpoint must be for a QnA Maker Managed resource.
# When you create your QnA Maker resource in the MS Azure portal, select the "Managed" checkbox.
# ==========================================
# ==========================================
# Further reading
#
# General documentation: https://docs.microsoft.com/azure/cognitive-services/QnAMaker
# Reference documentation: https://docs.microsoft.com/en-us/python/api/overview/azure/cognitiveservices/qnamaker?view=azure-python
# ==========================================
# <Dependencies>
import os
import time
from azure.cognitiveservices.knowledge.qnamaker import QnAMakerClient
from azure.cognitiveservices.knowledge.qnamaker.models import FileDTO, QnADTO, MetadataDTO, CreateKbDTO, OperationStateType, UpdateKbOperationDTO, UpdateKbOperationDTOAdd, EndpointKeysDTO, QnADTOContext, PromptDTO, QueryDTO
from msrest.authentication import CognitiveServicesCredentials
# </Dependencies>
# Set the `authoring_key` and `endpoint` variables to your
# QnA Maker authoring subscription key and endpoint.
#
# These values can be found in the Azure portal (ms.portal.azure.com/).
# Look up your QnA Maker resource. Then, in the "Resource management"
# section, find the "Keys and Endpoint" page.
#
# The value of `endpoint` has the format https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com.
# <Resourcevariables>
subscription_key = 'PASTE_YOUR_QNA_MAKER_MANAGED_SUBSCRIPTION_KEY_HERE'
endpoint = 'PASTE_YOUR_QNA_MAKER_MANAGED_ENDPOINT_HERE'
# </Resourcevariables>
# <MonitorOperation>
def _monitor_operation(client, operation):
    for i in range(20):
        if operation.operation_state in [OperationStateType.not_started, OperationStateType.running]:
            print("Waiting for operation: {} to complete.".format(operation.operation_id))
            time.sleep(5)
            operation = client.operations.get_details(operation_id=operation.operation_id)
        else:
            break
    if operation.operation_state != OperationStateType.succeeded:
        raise Exception("Operation {} failed to complete.".format(operation.operation_id))
    return operation
# </MonitorOperation>
# <CreateKBMethod>
def create_kb(client):
    print("Creating knowledge base...")

    qna1 = QnADTO(
        answer="Yes, You can use our [REST APIs](https://docs.microsoft.com/rest/api/cognitiveservices/qnamaker/knowledgebase) to manage your knowledge base.",
        questions=["How do I manage my knowledgebase?"],
        metadata=[
            MetadataDTO(name="Category", value="api"),
            MetadataDTO(name="Language", value="REST"),
        ]
    )
    qna2 = QnADTO(
        answer="Yes, You can use our [Python SDK](https://pypi.org/project/azure-cognitiveservices-knowledge-qnamaker/) with the [Python Reference Docs](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker?view=azure-python) to manage your knowledge base.",
        questions=["Can I program with Python?"],
        metadata=[
            MetadataDTO(name="Category", value="api"),
            MetadataDTO(name="Language", value="Python"),
        ]
    )

    urls = []
    files = [
        FileDTO(
            file_name="structured.docx",
            file_uri="https://github.com/Azure-Samples/cognitive-services-sample-data-files/raw/master/qna-maker/data-source-formats/structured.docx"
        )
    ]

    # Note: the FileDTO above is defined but not attached (files=[]).
    create_kb_dto = CreateKbDTO(
        name="QnA Maker Python SDK Quickstart",
        qna_list=[qna1, qna2],
        urls=urls,
        files=[],
        enable_hierarchical_extraction=True,
        default_answer_used_for_extraction="No answer found.",
        language="English"
    )
    create_op = client.knowledgebase.create(create_kb_payload=create_kb_dto)
    create_op_monitor = _monitor_operation(client=client, operation=create_op)

    # Get knowledge base ID from resourceLocation HTTP header
    knowledge_base_ID = create_op_monitor.resource_location.replace("/knowledgebases/", "")
    print("Created KB with ID: {}".format(knowledge_base_ID))
    return knowledge_base_ID
# </CreateKBMethod>
# <UpdateKBMethod>
def update_kb(client, kb_id):
    print("Updating knowledge base...")

    qna3 = QnADTO(
        answer="goodbye",
        questions=["bye", "end", "stop", "quit", "done"],
        metadata=[
            MetadataDTO(name="Category", value="Chitchat"),
            MetadataDTO(name="Chitchat", value="end"),
        ]
    )
    qna4 = QnADTO(
        answer="Hello, please select from the list of questions or enter a new question to continue.",
        questions=["hello", "hi", "start"],
        metadata=[
            MetadataDTO(name="Category", value="Chitchat"),
            MetadataDTO(name="Chitchat", value="begin"),
        ],
        context=QnADTOContext(
            is_context_only=False,
            prompts=[
                PromptDTO(display_order=1, display_text="Use REST", qna_id=1),
                PromptDTO(display_order=2, display_text="Use .NET NuGet package", qna_id=2),
            ]
        )
    )

    urls = [
        "https://docs.microsoft.com/azure/cognitive-services/QnAMaker/troubleshooting"
    ]

    update_kb_operation_dto = UpdateKbOperationDTO(
        add=UpdateKbOperationDTOAdd(
            qna_list=[qna3, qna4],
            urls=urls,
            files=[]
        ),
        delete=None,
        update=None
    )
    update_op = client.knowledgebase.update(kb_id=kb_id, update_kb=update_kb_operation_dto)
    _monitor_operation(client=client, operation=update_op)
    print("Updated knowledge base.")
# </UpdateKBMethod>
# <PublishKB>
def publish_kb(client, kb_id):
    print("Publishing knowledge base...")
    client.knowledgebase.publish(kb_id=kb_id)
    print("Published knowledge base.")
# </PublishKB>
# <DownloadKB>
def download_kb(client, kb_id):
    print("Downloading knowledge base...")
    kb_data = client.knowledgebase.download(kb_id=kb_id, environment="Prod")
    print("Downloaded knowledge base. It has {} QnAs.".format(len(kb_data.qna_documents)))
# </DownloadKB>
# <DeleteKB>
def delete_kb(client, kb_id):
    print("Deleting knowledge base...")
    client.knowledgebase.delete(kb_id=kb_id)
    print("Deleted knowledge base.")
# </DeleteKB>
# <GenerateAnswer>
def generate_answer(client, kb_id):
    print("Querying knowledge base...")
    listSearchResults = client.knowledgebase.generate_answer(kb_id, QueryDTO(question="How do I manage my knowledgebase?"))
    for i in listSearchResults.answers:
        print(f"Answer ID: {i.id}.")
        print(f"Answer: {i.answer}.")
        print(f"Answer score: {i.score}.")
# </GenerateAnswer>
# <Main>
# <AuthorizationAuthor>
client = QnAMakerClient(endpoint=endpoint, credentials=CognitiveServicesCredentials(subscription_key))
# </AuthorizationAuthor>
kb_id = create_kb(client=client)
update_kb(client=client, kb_id=kb_id)
publish_kb(client=client, kb_id=kb_id)
download_kb(client=client, kb_id=kb_id)
generate_answer(client=client, kb_id=kb_id)
delete_kb(client=client, kb_id=kb_id)
# </Main>
[Allan Randall (930312.1200)]
Rick Marken (930310.1400) writes:
> First, you say that Information Theory (IT) is to PCT as
> calculus is to Newton's laws; IT is a tool like calculus.
> But calculus helps us make detailed predictions...
> ...at the end of your post you say:
> >you seemed to be asking for a *prediction* about which
> >(condition 2 or 3) will be better in a real-world situation.
> >This seems to be beyond the scope of information theory as it
> Well, IT isn't much of a tool if it can't help us predict things;
> looks like PCT WITH IT is no better off than Isaac's brother Phil
But calculus does NOT allow you to make the kind of prediction
that Bill Powers is asking for. What Bill is asking for is like
asking calculus to predict the orbit of a planet, all by itself
WITHOUT any of Newton's Laws. Bill is specifically asking, unless
I am misunderstanding him, for a PCT-type prediction using
information theory and ONLY information theory. I agree that
this may not be possible. What I do not want to see him do is
to reject what could turn out to be a valuable tool, on the basis
that it cannot do the job all by itself. This is what I was getting
at with the Newton analogy. Contrary to what you imply, calculus
simply can't make physical predictions unless it is used in
combination with a physical model of some kind. All by itself,
it is just a mathematical technique, like information theory.
>If no information about the
>disturbance can be extracted from this data, then there is
>no way the system can translate the error from this signal into
>an action on the world that will counter the disturbance. Is this
>or is this not true?
This is NOT TRUE. Surprise!
>If you agree, then you are agreeing with an
>information theoretic analysis.
> So I guess I disagree with an IT analysis -- and if I'm right
> (which I am) then the IT analysis is wrong, right?
Right. Exactly. Here is something we can actually nail down. The
statement I made, which you say is not true, is the whole crux
of this controversy. It is a fundamentally information theoretic
statement. The reason why I asked whether it was true or not was
that I could not imagine a PCTer disagreeing with it (I was wrong -
you did), but at the same time I considered it an information
theoretic statement. Here you are claiming the statement is actually
false - there is no information about the disturbance in the
perceptual signal - a claim that is in direct contradiction with
Ashby's Law of Requisite Information. If you are correct, Ashby's Law
is completely unfounded. (However, I don't think you actually are.)
> It is this fact about control systems that nails everyone to the
> wall -- and proving it to myself is what turned me into a PCT
> ...In a high gain, negative feedback control loop, the
> output DOES NOT depend on the sensory input; rather, SENSORY INPUT
> IS CONTROLLED BY OUTPUT.
The second statement is true, but I think your first statement is
false. The output DOES depend on the sensory input, and the sensory
input DOES depend on the output. We do not have to choose between
the two. It is better to say they are interdependent, than that
one depends on the other. Isn't your response-stimulus description
just as wrong as stimulus-response?
> What you do in a tracking task is NOT
> caused by what you see; ...
> ... there is a LOOP so that what you see is
> both a cause AND A RESULT of your output.
Now you are back to the closed loop and admitting that they are interdependent.
> The nearly perfect
> relationship between output and disturbance does NOT exist
> because the system has access to information (from the sensory input)
> about the disturbance.
This is silly. How can there be ANY relationship between the output and
disturbance, let alone one that is "nearly perfect," if the system has
no access to information about the disturbance? This is just physically
impossible, Rick. I think either you are misinterpreting what I mean by
"information," or you believe some kind of witchcraft is responsible.
> When control is good, there IS NO information
> about the disturbance in the stimulus -- NONE, ZILCH, NADA.
This is true NOT when control is *good* but when control is *perfect*,
which is inherently impossible for an error-control system, as Ashby
said. Let's be clear what we mean by "disturbance." Stop me when you
disagree. The disturbance is the sum of all the various environmental
influences impinging on the CEV. This CEV is not an absolute property
of the environment, but is defined by the hierarchy. In information
theory terms, the hierarchy is an encoding scheme for representing the
environment. Each perceptual signal represents one CEV in the
environment. This does *not* mean that the disturbance exists in
the hierarchy and not in the world. In order to describe anything,
says information theory, it must be described in some language. The
disturbance is in the world, but the description that isolates it as
an entity separate from the rest of the environment is in the
hierarchy, not the world.
Only that information which is relevant to control of the CEV is
transmitted in the percept. But to say there is NO information at all,
none, zilch, nada, is just incoherent. If I am driving my
car down the road and there is a sudden huge gust of wind from the
right and at the same time I start sliding on the ice, the
"disturbance," in PCT terms, is the disturbance to the CEV that results
from these forces. But an outside observer will tend to describe
the disturbance in a different language (a different encoding scheme).
They will probably describe the gust of wind, its force, and the sliding
motion due to the ice as separate, complex entities. But this
description is no more an absolute depiction of reality than the
perceptual one inside the driver! The "disturbance" relevant to PCT
is the one described inside the organism that is controlling, not the
one described dispassionately by an external observer. Both use
an encoding scheme to describe the disturbance. One scheme requires
many many bits, while the other requires very few.
This is what I mean when I say that the hierarchy brings the information
content of the disturbance in line with the output capacity. An external
observer describes the disturbance in a language that requires many bits
(such as the detailed description of the molecular positions/momentums
that the compensatory thermostat requires). The internal hierarchy
describes the same real-world disturbance in a language requiring only
a small number of bits - a number that can be handled by the capacity
of the output channel (such as the much simpler description used by
the error-control thermostat). This is Ashby's Law.
> mirrors the disturbance because this is what the output MUST DO in order
> to keep the input IN THE REFERENCE STATE; this is the magic of
> closed loop control.
Yes, this does sound like magic, and not a scientific explanation at
all. There must be an explanation WHY the control system is able to
do what it MUST DO. Just to say that it MUST DO it does not explain anything.
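Since both sides keep appealing to what a control loop "must" do, the dispute can at least be probed numerically. The following toy simulation (mine, not from the thread) runs a one-dimensional negative feedback loop against a random-walk disturbance: the output comes out almost perfectly (negatively) correlated with the disturbance, while the perceptual signal, the only thing the system senses, shows almost no correlation with it.

```python
import random

def pearson(xs, ys):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def simulate(k=0.5, steps=5000, seed=42):
    """One-dimensional loop: perception = disturbance + output,
    reference = 0, output integrates k * error each step."""
    random.seed(seed)
    d = o = 0.0
    ds, outs, ps = [], [], []
    for _ in range(steps):
        d += random.gauss(0.0, 0.1)  # slowly drifting disturbance
        p = d + o                    # the only signal the system senses
        o += k * (0.0 - p)           # act to push perception toward reference
        ds.append(d); outs.append(o); ps.append(p)
    return ds, outs, ps

ds, outs, ps = simulate()
print("corr(output, disturbance):    ", round(pearson(outs, ds), 3))
print("corr(perception, disturbance):", round(pearson(ps, ds), 3))
```

How one reads this result, whether the small residual correlation "carries the information" Ashby requires, or whether the mirroring needs no informational explanation at all, is exactly the point in dispute.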
Allan Randall, firstname.lastname@example.org
NTT Systems, Inc.
Which Sci-Fi work introduced the idea of "Touchscreen used by fingers"?
"I killed the Newton because of the stylus. If you're holding a stylus, you can't use the other five that are attached to your wrist."
- Steve Jobs
Steve Jobs (2015) movie; I have no idea whether he really said that
The Newton was killed in 1998, and this 1996 video from Star Trek: First Contact may hint at where the visionary Steve Jobs got the idea from (although the famous 1993 "You Will" commercials from AT&T also showed a touchscreen used by a finger).
While our Android/iOS smartphones are just a decade old, googling revealed that the first finger-driven touchscreen was invented by E.A. Johnson in the 1960s, when Star Trek was still using colorful glass buttons.
But I find it hard to believe that it didn't appear in the sci-fi world first.
What is the first reference in Sci-Fi to a touch-screen computer interface?
This question has some earlier touchscreen references, but it's not clear that they are finger driven.
What else would you touch it with?
@TGnat Stylus...
@TGnat - Your elbow.
@Valorum I believe that Steve Jobs killed that innovation too...
The crew of the Enterprise were using touchscreen panels long before First Contact. The year you're looking for here is 1987 (when TNG began).
Sci-fi work in media or print? I don't know that we had a good way to depict touchscreens in an interesting fashion before they were actually available; they aren't very interesting unless you make them transparent or something (like some shows did). Books suffered no such constraint, but how do you make a touchscreen interesting in print? Why not go straight to voice commands or neural links/telepathy? Once you have those, what makes a touchscreen interesting?
Looking up touchscreen history, I found this 1981 computer that used infrared to detect finger movement. Clearly Star Trek was an inspiration. https://en.wikipedia.org/wiki/Touchscreen#/media/File:Platovterm1981.jpg
Doctor Who doesn't show touchscreen in TARDIS even today because levers, buttons, orbs and steampunk designs are still way cool and futuristic if you show it right.
2001: A Space Odyssey showed tablet-like tech. They seem to have buttons at the bottom, rather than being true touch screens. I think TNG might actually be the first.
Michael Crichton's The Andromeda Strain, 1969, has stylus-touch screens. "She turned on the computer. “This is how you order laboratory tests,” she said. “Use this light pen here, and check off the tests you want. Just touch the pen to the screen.” She handed him a small penlight, and pushed the START button."
According to this Wiki article the first paper on touchscreens was written in 1965.
Gartner positions Microsoft as a leader in BI and Analytics Platforms for ten consecutive years.
75% of Companies are Investing or Planning to Invest in Big Data by 2017 – Gartner
Global BI and Analytics Market to Reach $16.9 Billion in 2016 – Gartner
Average US salary for a Microsoft BI professional is $107,000 – indeed.com
Inexpensive and easy to operate
The BI solution is relatively inexpensive, especially when compared with other large BI vendors. For most BI users the functionality is more than adequate, and thanks to its ‘Office-like’ user interface the tool is relatively easy to operate.
Very user-friendly features
Very user-friendly, because of a strong integration with the Office products.
Interactive features with complete support
Easy exploration of data and better visualization.
Fully managed self-serviced business intelligence.
Complete use of native Excel features.
Interactive dashboards, guided navigation and drill-down.
Complete support to .Net and web services.
Complete end-to-end solution.
Who Should Attend
- Engineering and IT students - B.Tech/B.E, BCA, MCA, B.Sc IT, M.Sc IT
- People who are interested in reports creation in Microsoft SQL Server Reporting Services. No prior experience with T-SQL is required.
- People with previous exposure to any Relational Database System or spreadsheet data, wants to design and create reports with SQL Server.
- Professionals including Software and Analytics who have basic experience in Relational Database Systems, ETL, OLAP.
- Installation of Microsoft Business Intelligence suite with all the required components i.e. SQL Server Reporting Services, SQL Server Integration Services and SQL Server Analysis Services.
- Understanding the concepts of ad-hoc reports using SQL Server Reporting Services.
- Creation of basic and advanced reports and visualizations.
- Understanding the concept of Data Warehousing (ETL and multi-dimensional modeling)
- Creation of cubes using SQL Server Analysis Services.
Gaurav, Senior Consultant, Machine Learning
Gaurav is a business consultant who has worked with prestigious companies such as TCS, American Express, and AON over the past 10 years, and has been working in the analytics industry since the beginning of his career. He has worked in international and domestic markets as an expert in predictive modeling and forecasting in the field of business analytics.
What are the system requirements to install the Microsoft Business Intelligence Suite?
Machine hardware requirements:
- Microsoft Windows 7 or newer (32-bit and 64-bit), or Microsoft Windows Server 2008 R2 or newer
- Intel Pentium 4 or AMD Opteron processor or newer
- 2 GB memory
- 5 GB minimum free disk space
- 1366 x 768 screen resolution or higher
How will Hands-on and practical lab sessions be conducted?
Email campaigns convey valuable information about how a campaign performed: opens, clicks, conversions, whether recipients marked it as spam, whether it led to unsubscribes, and so on. Email analytics help you improve the emails you send and the users you send them to.
MoEngage aggregates the information and displays it on the campaign info page of the email campaign. Navigate to MoEngage Dashboard > Engage > Campaigns and click any campaign on the All Campaigns page to view the analytics.
- SENT: The total number of emails that have been sent. This number is arrived at after removing duplicate/invalid emails and bounced/unsubscribed/complained users, and after dropping emails due to email Frequency Capping and personalization failures. Read more about Email Campaign Delivery breakdown.
- OPENS: Total number of unique users who opened the email
- CLICKS: Total number of unique clicks on various links within the email.
- CONVERSIONS: Total number of conversion events that occurred as part of the campaign conversion goal
- UNSUBSCRIBES: Number of users who unsubscribed after receiving this email. No further emails will be initiated to this unsubscribed user.
- CONVERTED USERS: Total number of unique users who performed the conversion event
- COMPLAINTS: Number of users who marked this email as spam.
- HARD BOUNCES: Number of email addresses that hard bounced because they were incorrect and the mail server rejected these emails.
UNSUBSCRIBES, COMPLAINTS and HARD BOUNCES
The following helps improve campaigns to ensure better opens, clicks, conversions, and lower unsubscribes and complaints.
Revenue Tracking is optional. If you toggled Revenue Performance during Email Campaign Creation for a campaign, revenue metrics will be tracked on the Campaign Analytics page.
For your campaign, you will then be able to see the three revenue metrics below:
Total Campaign Revenue is the sum of the total order value across conversion events attributed to the campaign.
Average Order Value and Average Revenue Per User are derived from the total campaign revenue.
Revenue metrics are tracked for all attribution types such as View through, Click Through, and In-session attribution. Change the attribution type using the filter on the top right of the Campaign Analytics page and view the respective revenue metrics.
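As an illustration, here is how those three metrics would typically be computed (hypothetical order data; the definitions follow the descriptions above and are my reading, not MoEngage's published formulas):

```python
# Hypothetical conversion events attributed to a campaign: (user_id, order_value)
conversions = [("u1", 50.0), ("u1", 30.0), ("u2", 20.0)]

# Total Campaign Revenue: sum of order values across attributed conversions.
total_campaign_revenue = sum(value for _, value in conversions)

# Average Order Value: revenue divided by the number of conversion events.
average_order_value = total_campaign_revenue / len(conversions)

# Average Revenue Per User: revenue divided by unique converted users.
converted_users = len({user for user, _ in conversions})
average_revenue_per_user = total_campaign_revenue / converted_users

print(total_campaign_revenue)         # 100.0
print(round(average_order_value, 2))  # 33.33
print(average_revenue_per_user)       # 50.0
```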
The variation performance is displayed when a multivariate A/B email campaign is created. You can compare the multivariate performance and use the best performing variate.
Campaign Delivery Stats
A very common question from marketers sending email campaigns is "Why is the number of emails actually sent lower than the user segment count?" This can be explained by the delivery funnel under campaign delivery stats.
The various delivery points are as follows:
- Segment Count - Total number of users that belong to the segment of the campaign
- Users with Email - Total number of users who have an email attribute
- After B/U/C removal - Number of users obtained after removing email addresses that have earlier bounced, unsubscribed or spam complained.
- After Invalid/Duplicate removal - Number of users obtained after removing Invalid emails (missing "@" & ".") and Duplicate Emails (multiple users with the same email address)
- After FC Removal - Number of users obtained after removing emails that have crossed Frequency Capping as defined in the FC Settings.
- After Personalization Removal - Number of users obtained after removing emails for which the email could not be sent due to personalization failure.
- Sent - This is the final count of emails that have been sent.
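The funnel above can be sketched as a chain of filters (hypothetical data and field names; MoEngage's actual pipeline is internal, and the FC/personalization steps are omitted here):

```python
users = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": None},            # no email attribute
    {"id": 3, "email": "bad-address"},   # invalid (missing "@" and ".")
    {"id": 4, "email": "a@example.com"}, # duplicate of user 1
    {"id": 5, "email": "b@example.com"},
]
# Addresses that earlier bounced, unsubscribed, or complained.
suppressed = {"b@example.com"}

segment_count = len(users)

# Users with Email: keep only users that have an email attribute.
with_email = [u for u in users if u["email"]]

# After B/U/C removal: drop suppressed addresses.
after_buc = [u for u in with_email if u["email"] not in suppressed]

# After Invalid/Duplicate removal: drop malformed and repeated addresses.
seen, after_invalid_dup = set(), []
for u in after_buc:
    e = u["email"]
    if "@" in e and "." in e and e not in seen:
        seen.add(e)
        after_invalid_dup.append(u)

print(segment_count, len(with_email), len(after_buc), len(after_invalid_dup))  # 5 4 3 1
```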
The exact number of email removals at each step is broken down under "Campaign Delivery Breakdown".
Email calls-to-action (CTAs) can be highly crucial to your email marketing strategy. Click data on the links in your email can tell you whether your users are finding a reason to click. You can view the data as a Click Map or a Click Graph.
The Click Map shows the link click data on a preview of your email. You can click each link to view the link and the number of clicks (unique and total) on that CTA. You can also export this preview along with the click data as a PDF by clicking the printer icon.
You can also view the link click data as a list of links; sort the list by total/unique clicks and export it as CSV.
The Click Graph shows the link click data as a histogram where you can identify the most clicked CTAs in your email and tailor your email templates accordingly.
You can view the information of the campaign sent using the Campaign Info tab.
Segmentation and Scheduling
You can view the following:
|Target Audience||The set of users who received the campaign.|
|Filters||The conditions or filters used in the campaign.|
|Conversion Goal Event||The conversion event that is used in the campaign.|
|Start Sending||The sent time of the campaign|
You can view the following:
|Variation||The variations of the campaign.|
|From||The email address using which the campaign is sent.|
|Reply||The return email address to receive response to the sent campaign.|
|Subject||The subject of the campaign.|
|Sender Name||The name of the person or team sending the campaign.|
|Preview||The preview of the sent campaign.|
Delivery Control and Goals
You can view the following:
|Ignore frequency capping for this message||The Frequency Capping setting for the campaign.|
|Campaign Goals Info|
|Goal Name||The Name of the campaign goal set for the campaign.|
|Event Name||The Event Name based on which the campaign goal is set.|
|Attribute Name||The Attribute Name based on which the campaign goal is set.|
|Attribute Value||The Attribute Value based on which the campaign goal is set.|
|Revenue Performance Info Value|
|Event Name||The Event Name based on which the campaign revenue information is set.|
|Attribute Name||The Attribute Name based on which the campaign revenue information is set.|
|Currency||The Currency used to calculate the campaign revenue.|
A proposal for separate English localization
mikekaganski at hotmail.com
Tue Oct 17 13:43:06 UTC 2017
On 10/17/2017 2:39 PM, Eike Rathke wrote:
> Hi Mike,
> On Sunday, 2017-10-15 10:55:17 +0000, Kaganski Mike wrote:
>> The source string is the key for all translations, and is kept immutable
>> after creation. But the localization string might change later, e.g. to
>> be consistent, like this:
>> source locale
>> "do foo action" "Do Foo action"
>> so they go out of sync.
> And that is bad. If what is visible in the English UI does not match the
> source then finding the source string of some UI element gets
> complicated. Being able to locate the source of an UI string and from
> there dive into its use in the code is a helpful procedure.
>> Why not sustainable? Actually, we somehow expect all of our translations
>> to be kept in sync (as well as possible); so why do we think about this
>> one differently? Actually we have multiple places in code that should be
>> kept synchronized at all times, and this works well (e.g., some
>> enumeration values);
> Translations (pootle etc.) are not code. In fact the primary source of
> translations isn't even in a code repository, just merged from time to
> time to the translations/ submodule. Technically it also doesn't matter
> how much is translated to one language or if translations are accurate
> (with a few exceptions).
>> and if the sync state is being checked at compile
>> time (like some plugin maybe), this is absolutely possible.
> Maybe I got you wrong. Are you talking about some merge back from
> translations into source code to have the source synced with the English
This should be considered in the context of #2 of the original message,
which is "The English translation should be created in a different way
(in a dedicated source file?) to be easy for developers to change."
With that in sight, I assume something like this:
1. Source files that contain references to translatable strings (in the
form like "do foo action" above). Already present.
2. Per-module (?) "localization.en_US.txt", with pairs like
"do foo action" = Do Foo action
- this already makes it possible to locate the code from UI string
with single indirection
3. A script that is run by make that checks that each string from #1 has
its counterpart in #2, and errors out on failure; otherwise, compiles
the en-US localization.
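The make-time check in step 3 could be just a few lines of script (a sketch of the idea, not existing LibreOffice tooling; the key/value format is the hypothetical one from step 2):

```python
import re

def parse_localization(text):
    """Parse lines of the form:  "do foo action" = Do Foo action"""
    pairs = {}
    for line in text.splitlines():
        m = re.match(r'\s*"(.*)"\s*=\s*(.*)', line)
        if m:
            pairs[m.group(1)] = m.group(2)
    return pairs

def missing_strings(source_keys, localization_text):
    """Return source strings with no en_US counterpart; a real make
    check would error out when this list is non-empty."""
    pairs = parse_localization(localization_text)
    return [key for key in source_keys if key not in pairs]

# Demo with one string that is present and one that is not.
en_us = '"do foo action" = Do Foo action\n'
print(missing_strings(["do foo action", "do bar action"], en_us))  # ['do bar action']
```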
The other translations continue to be served as they are now. But in
case an English string changes in a localization.en_US.txt, all
other translations flag their relevant string as needing review
(but do not lose their current translations!). That is possible
because the underlying code string (the identifier "do foo action") stays
unchanged. This needs a script that would do that trick (checking whether
the English translation of a string has changed) when the other
translations' Pootle files are updated.
This could be further extended to allow including context into the
reference string in code, to look like
gettext("moduleX/dialogY/do foo action")
(but I don't know if that's helpful, and this isn't a subject of my proposal.)
On 10/17/2017 2:46 PM, Eike Rathke wrote:
> For a new string replacing an old one and introducing a different
> functionality a new context string should be used as well. I don't see
> a problem with this.
If you have many strings that you not only have to check, but whose old
translations are also lost (and it's not always easy for a translator to get
access to the part of the UI with that string to get a clear idea about it), the
problem is evident. AFAICT, those who complain encounter that problem on
a regular basis. Being able to just flag strings, but not lose the current
translations, is the main idea.
You will discover the salaries of the Nigerian Army after reading this article.
However, the Nigerian Army pays a fairly small amount of money every month, especially to undergraduate recruits.
This is because they have no professional skills.
In fact, the undergraduate recruits are the ones who fight in the country's crises, especially the Boko Haram insurgency.
If you are planning to join the Nigerian Army one day, it will be better to know what you will be paid, or what your relatives in the army are earning per month.
The Nigerian Army is paid according to rank in the military.
For instance, a Staff Sergeant earns about N68,000 a month while a Major earns N300,000 every month.
You might be thinking that they earn more than this every month, based on what many people believe.
However, these are the people who risk their lives for the security of the nation.
Now, let's see their ranking structure and what the Nigerian Army earns per month.
Nigerian Army Ranking
You will be glad to know that the Nigerian Army ranks are categorized into two different types.
Here are the two categories of the Nigerian Army;
- Commissioned officers
- Non-commissioned officers
The commissioned officers are regarded as the senior officers in the armed forces.
These are the people who join the armed forces with degrees or professional certificates, or come directly from the Nigerian Defence Academy.
Most of them are B.Sc, B.A, M.Sc, and even Ph.D holders.
Besides, the majority of commissioned officers are recruited into the armed forces through Direct Short Service.
The non-commissioned officers, on the other hand, are the ones you could call junior officers.
They are the ones who say "yes sir" very often.
These are the people who join the Nigerian Army through mass recruitment.
Most of them are secondary school graduates or National Diploma holders.
Nevertheless, we will still explain the requirements for joining the Nigerian Army later in this article.
Salaries of The Nigerian Army 2021
Here are the Salaries structures of the Nigerian Army 2021;
- Recruit (No salary at this rank)
- Private Soldier earns approximately N48,000 to N49,000
- Lance-Corporal earns approximately N54,000 to N55,000
- Corporal earns N58,000
- Sergeant earns N63,000
- Staff Sergeant earns N68,000
- Warrant Officer Class 1 earns N80,000
- Warrant Officer Class 2 earns N90,000
- The second Lieutenant earns N120,000
- Lieutenant earns N180,000
- Captain earns N220,000
- Major earns N300,000
- Lt. Colonel earns N350,000
- Colonel earns N550,000
- Brigadier General earns N750,000
- Major General earns N950,000
- Lt. General earns N1 million
- General earns N1.5 million
- Field Marshal (Salary is unknown)
Nigerian Non-commission Ranks
As we said earlier, the Nigerian Army ranks fall into two categories.
Here are the ranks in non-commission categories of the Nigerian Army;
- Private Soldier
- Staff Sergeant
- Warrant Officer Class 2
- Warrant Officer Class 1
These are mainly the people who join the Nigerian Army through mass recruitment.
Most of them are SSCE, WAEC, Primary 6, trade test, or OND holders.
Nigerian Commission Ranks
Here are the ranks for commission categories of the Nigerian Army;
- Second Lieutenant
- Lieutenant Colonel
- Brigadier General
- Major General
- Lieutenant General
- Field Marshal
What are the Functions of the Nigerian Army?
If you wish to join the Nigerian Army and receive the above salaries, here is what you need to know about the functions of the Nigerian Army;
- Your duties will be to safeguard the country against any kind of external aggression.
- You are to promote the interest of your fellow citizens and the nation.
- Maintaining the country's territorial integrity.
- You are to secure Nigeria's borders from trespassing on land, sea, or air.
- Overcome insurrection.
- You are to act in aid of civic officials and restore order when called upon.
- You will be answerable to the President, but subject to such conditions as may be commanded by an Act of the National Assembly.
- Perform such other functions as may be directed by an Act of the National Assembly.
What is the Requirement to join the Nigerian Army in 2021?
For you to join the Nigerian Army and earn the salaries mentioned above, here is a brief summary of the requirements you need to fulfil in order to join the Nigerian Army through recruitment.
- You must be of Nigerian origin by birth.
- Your age must be between 18 and 22 years before you can submit your application.
- You must show proof of physical and mental fitness from a registered doctor or hospital in Nigeria.
- Your height must not be below 1.65m (for men).
- If you are a woman, then your height must not be below 1.56m.
- You must not have been charged with or convicted of any crime before.
- You are to complete an online form and submit the print-out at your screening venue.
- The date of screening will be communicated to you.
To Sum it Up
Earning the above-mentioned salaries of the Nigerian Army is not easy, even though they are quite small.
The competition for employment in the Nigerian Army is high and keeps increasing every year.
You are advised to beware of scams and of any individuals who promise to offer you a job in the Nigerian Army.
Nevertheless, the Nigerian Army is the largest of the armed forces in Nigeria, but not the highest-paid.
Besides, all Nigerian Army employees are subject to the Chief of Army Staff, whose salary is the highest in the Nigerian Army.
Do let us know what you think about the salaries of the Nigerian Army in the comments.
And if you have any questions, we will be glad to answer you down in the comments.
The Fuchsia archive format is a format for storing a directory tree in a
file. Like a
.zip file, a Fuchsia archive file stores a mapping
from path names to file contents.
Fuchsia archive files are sometimes referred to as FARs or FAR archives,
and are given the filename extension .far.
An archive is a sequence of bytes, divided into chunks:
- The first chunk is the index chunk, which describes where other chunks are located in the archive.
- All the chunks listed in the index must appear in the archive in the order listed in the index (which is sorted by their type).
- The archive may contain additional chunks that are not referenced in the index, but these chunks must appear in the archive after all the chunks listed in the index. For example, content chunks are not listed in the index. Instead, the content chunks are reachable from the directory chunk.
- The chunks must not overlap.
- All chunks are aligned on 64 bit boundaries.
- All chunks must be packed as tightly as possible subject to their alignment constraints.
- Any gaps between chunks must be filled with zeros.
All offsets and lengths are encoded as unsigned integers in little endian.
The index chunk is required and must start at the beginning of the archive.
- 8 bytes of magic.
  - Must be 0xc8 0xbf 0x0b 0x48 0xad 0xab 0xc5 0x11.
- 64 bit length of concatenated index entries, in bytes.
- Concatenated index entries.
No two index entries can have the same type and the entries must be sorted by type in increasing lexicographical octet order (e.g., as compared by memcmp). The chunks listed in the index must be stored in the archive in the order listed in the index.
- 64 bit chunk type.
- 64 bit offset from start of the archive to the start of the referenced chunk, in bytes.
- 64 bit length of referenced chunk, in bytes.
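As an illustrative (non-normative) sketch, the index chunk header and its 24-byte entries can be parsed with Python's struct module, assuming a well-formed archive held in memory:

```python
import struct

# Magic bytes from the spec: 0xc8 0xbf 0x0b 0x48 0xad 0xab 0xc5 0x11
MAGIC = bytes([0xc8, 0xbf, 0x0b, 0x48, 0xad, 0xab, 0xc5, 0x11])

def parse_index(data):
    """Parse the index chunk at the start of a FAR archive.

    Returns a dict mapping the 8-byte chunk type to (offset, length).
    """
    if data[:8] != MAGIC:
        raise ValueError("not a Fuchsia archive")
    # 64-bit little-endian length of the concatenated index entries
    (entries_len,) = struct.unpack_from("<Q", data, 8)
    entries = {}
    pos = 16
    for _ in range(entries_len // 24):  # each entry: type (8) + offset (8) + length (8)
        chunk_type = data[pos:pos + 8]
        offset, length = struct.unpack_from("<QQ", data, pos + 8)
        entries[chunk_type] = (offset, length)
        pos += 24
    return entries
```

A real reader would additionally verify that the entries are sorted by type and that the referenced chunks do not overlap, as required above.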
Directory chunk (Type "DIR-----")
The directory chunk is required. Entries in the directory chunk must have unique names and the entries must be sorted by name in increasing lexicographical octet order (e.g., as compared by memcmp).
- Concatenated directory table entries.
These entries represent the files contained in the archive. Directories themselves are not represented explicitly, which means archives cannot represent empty directories.
Directory table entry
- 32 bit offset from the start of the directory names chunk to the path data, in bytes.
- 16 bit length of name, in bytes.
- 16 bits of zeros, reserved for future use.
- 64 bit offset from start of archive to the start of the content chunk, in bytes.
- 64 bit length of the data, in bytes.
- 64 bits of zeros, reserved for future use.
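Each directory table entry is therefore a fixed 32-byte record. A sketch of decoding one (again non-normative, using little-endian fields as specified above):

```python
import struct

# Fields: name offset (u32), name length (u16), reserved (u16),
# data offset (u64), data length (u64), reserved (u64) — all little endian.
DIR_ENTRY = struct.Struct("<IHHQQQ")

def parse_dir_entry(data, pos):
    """Decode one 32-byte directory table entry starting at byte offset pos."""
    name_off, name_len, _, data_off, data_len, _ = DIR_ENTRY.unpack_from(data, pos)
    return name_off, name_len, data_off, data_len
```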
Directory names chunk (Type "DIRNAMES")
The directory names chunk is required and is used by the directory chunk to name the content chunks. Path data must be sorted in increasing lexicographical octet order (e.g., as compared by memcmp).
- Concatenated path data (no encoding specified).
- Zero padding to next 8 byte boundary.
Although no encoding is specified, clients that wish to display path data using unicode may attempt to decode the data as UTF-8. The path data might or might not be UTF-8, which means that decoding might fail.
- Octets of path.
- Must not be empty.
- Must not contain a 0x00 octet.
- The leading octet must not be 0x2F ('/').
- The trailing octet must not be 0x2F ('/').
- Let segments be the result of splitting the path on 0x2F ('/'). Each
segment must meet the following requirements:
- Must not be empty.
- Must not be exactly 0x2E ('.')
- Must not be exactly 0x2E 0x2E ('..')
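The path rules above can be checked mechanically. A sketch in Python, operating on raw bytes since no encoding is specified:

```python
def is_valid_far_path(path):
    """Check a raw path (bytes) against the archive's path requirements."""
    if not path or b"\x00" in path:          # non-empty, no 0x00 octet
        return False
    if path.startswith(b"/") or path.endswith(b"/"):  # no leading/trailing '/'
        return False
    for segment in path.split(b"/"):
        if segment in (b"", b".", b".."):    # no empty, '.' or '..' segments
            return False
    return True
```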
Content chunks must be after all the chunks listed in the index chunk. The content chunks must appear in the archive in the order they are listed in the directory.
The data must be aligned on a 4096 byte boundary from the start of the archive and the data must be padded with zeros until the next 4096 byte boundary.
Team Week reprise: in praise of agile product teams
All week we’ve been tweeting about agile teams. You can find the tweets using the hashtag #TeamWeek.
A couple of years ago I wrote a post asking what made a good product manager. Having analysed the top five posts returned by Google, I pulled out characteristics and sorted them by the number of mentions. What interested me was that virtually no mention was made of the product manager’s team. This seemed unfair to me, after all, without a team, a product manager can achieve very little.
In my research I saw that ‘Leadership’ got lots of mentions, but I left it off my list. I reasoned that as all the other characteristics combined would make a good leader, listing it again was redundant.
The connection to leadership that I failed to make in 2015, however, was that the characteristics of a leader are also those of the team. In a high-performing, agile team, everybody leads. Shared leadership is not just desirable, it is essential to success. In an effective, self-organising environment you need everyone to be able to take the lead.
Building a team with these attributes takes time. And, to quote our first Team Week post, you don’t really build teams:
- “You grow them. And in the right environment their growth is a natural phenomenon. As a team grows, you start to see cohesion. The people in the team bond and jell and the whole becomes greater than the sum of the parts. In self-organising teams it’s understood that it’s better to succeed together than to succeed because of some individuals and not others.”
Agile principles are not just a way of thinking about software delivery, they inform behavior. Our third post talked about setting agile norms to give a team a framework for agile behavior. A norm that a recent EW team had on their working agreement was “Don’t step over the poo.” Essentially, when you see that something’s broken, don’t ignore it, fix it. We all own the quality of the product, so at the absolute minimum, card it up and put it on the board.
The last of our Team Week tweets covered the single most important attribute of an agile team: its potential to continually improve via regular retrospectives. Yesterday’s post described how one of EW’s teams were able to process a difficult iteration, and make their retrospectives even more effective.
We hope you’ve enjoyed the posts we shared this week. Small, high performing, agile teams are the bedrock of Energized Work’s success, and we are privileged and very proud to work with some of the most talented agile architects, developers, designers, testers, BAs, and product people in the UK.
Let us know in the comments if you’d like to hear our thoughts on other aspects of agile teams, digital products, or strategic innovation. We’d love to hear from you.
This is all about my journey to submitting my first kernel patch. Here I have discussed all the important things you must know before you submit your kernel patch and respond to the feedback.
How to check code style before submitting the patch?
The kernel community uses its own coding style, which is a bit different from what you may be used to. What they have done is create a script, checkpatch.pl, to check the coding style before we send the code for review.
For example, if you have changed the file drivers/mfd/da9062-core.c:
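The invocation would look something like this (run from the root of your kernel tree; the file path follows the example above):

```shell
# Check an existing source file against the kernel coding style
./scripts/checkpatch.pl -f drivers/mfd/da9062-core.c

# Or check a generated patch file instead
./scripts/checkpatch.pl 0001-my-change.patch
```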
In order to run checkpatch.pl, you need to install a few python modules.
Sometimes I get an error that my pip version is too old and I need to upgrade it. The best way to upgrade pip is not to use sudo, but the following command.
Python packages are used by the operating system too. If we use sudo, it will interfere with the OS Python packages and might overwrite them, which can leave the system broken. Instead of installing for the whole system, we should install for the user.
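The per-user upgrade I mean is (assuming python3 and pip are installed):

```shell
# Upgrade pip for the current user only — no sudo, system packages untouched
python3 -m pip install --user --upgrade pip
```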
Another way of installing pip package is
using virtual environment.
How to commit your changes?
One needs to sign off using git commit -s. By signing off, you agree to the kernel's policies (the Developer's Certificate of Origin).
How to give git commit message?
Here the prefix mfd: da9062 identifies the file drivers/mfd/da9062-core.c.
In case you are not happy with your commit, you can amend it using git commit --amend.
How to generate a patch file?
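A self-contained demonstration of the commit-and-patch flow in a throwaway repository (in real work you would run these inside your kernel tree; the file name and subject are illustrative):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "Test User"
git config user.email "test@example.com"

# Make a change and commit it with a sign-off (-s) and a subsystem-prefixed subject
echo "demo" > da9062-core.c
git add da9062-core.c
git commit -q -s -m "mfd: da9062: demonstrate a signed-off commit"

# Generate a patch file for the last commit
git format-patch -1
ls 0001-*.patch
```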
How to add revision history?
Generally the maintainers are quite keen to see the revision history, as it helps them review. It is best to hand-modify the file generated by git format-patch to add the version history to the patch. It is not part of the commit message, so it should go below the '---' line and above the information about the files touched by the patch and the number of changes. So in my case:
How to see maintainers list for the changes you made?
How to apply the patch file?
The patch file can be applied using the command.
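Applying is symmetrical to generating. A sketch that creates a patch in one throwaway repo and applies it in another with git am (which preserves the author, message, and sign-off):

```shell
set -e
work=$(mktemp -d)

# Repo A: produce a patch
git init -q "$work/a"
cd "$work/a"
git config user.name "Author"
git config user.email "author@example.com"
echo "fix" > file.c
git add file.c
git commit -q -s -m "demo: add file.c"
git format-patch -1 -o "$work"

# Repo B: apply the mailed patch
git init -q "$work/b"
cd "$work/b"
git config user.name "Maintainer"
git config user.email "maint@example.com"
git commit -q --allow-empty -m "initial"
git am "$work"/0001-*.patch
git log -1 --pretty=%B
```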
How to use git send-email?
How to find the messageID of the email in order to reply.
Please refer to this link: [message id](https://www.codetwo.com/kb/messageid/).
How to set up the git email client?
Example to send the patch -
Example to re-send the patch, basically replying to the original email with the revised patch.
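Both invocations look something like this (the addresses and message-id are placeholders, not real values):

```shell
# Send the patch to the maintainer, CC'ing the subsystem mailing list
git send-email --to=maintainer@example.com --cc=list@example.com 0001-*.patch

# Re-send a revised patch as a reply to the original thread
git send-email --in-reply-to='<message-id@example.com>' \
    --to=maintainer@example.com v2-0001-*.patch
```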
There is another email client called mutt, but I won't suggest using mutt because it needs you to turn off secure settings on your mail server, like "allow less secure apps".
Here are the mutt.rc settings:
How to set up your local environment to show the git branch?
This is pretty helpful and will save you time. After this you don't have to run 'git branch' to know which branch you are on. Add a snippet to your .bashrc to see the current branch in your prompt.
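The .bashrc snippet is likely something like the following common recipe (the exact prompt format is a matter of taste):

```shell
# In ~/.bashrc: print the current branch, e.g. " (warrior)", or nothing outside a repo
parse_git_branch() {
    git branch 2>/dev/null | sed -n 's/^\* \(.*\)/ (\1)/p'
}
PS1="\u@\h:\w\$(parse_git_branch)\$ "
```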
Here is the output, and you can see that I am on the warrior branch.
This brings an end to this article. It's a good idea to give back to open source communities like the Linux kernel.
Lesson 7: Applying Construction to Vehicles
Coast Guard Boat Demo
Step by step
This time, let's slow down and try doing more preparatory work. By doing an orthographic proportion study, we can start to lay down basic landmarks against which we can place our major forms. I always start this off by drawing a rectangle that roughly matches the overall length and height of my subject. Then I subdivide it like crazy, creating a grid. When I actually draw the side view of the object, I try to stick to straight, clear-cut lines. I'll only use a curve when it's integral to the form I'm building, and even then I'll often start it off as straight cuts, coming back afterwards to smooth it out.
The approach I'm using here is a mixture between the encompassing-box technique I first introduced in the last lesson, and the more standard stacking of forms. I'm using the encompassing box to create sections of my object, rather than the entire thing. In this case, the section is the base of the ship.
The first step I'm taking to construct the bow of the ship is to establish its curvature with a sort of footprint. In order to construct this, I further subdivide the front section of the box a fair bit, in order to give myself a finer grid to work from. The important thing here is that I want to make sure that if the curve passes through certain points on one side, the opposite side of the curve should pass through the same corresponding points. This helps maintain consistent perspective distortion, which can be tricky without any additional reference points to rely upon.
In retrospect, I probably could have created that footprint on the top face of the box, but I felt that having it on the bottom would help me relate it more easily against the actual footprint (which is slightly smaller and more tapered). Ultimately, I replicate that footprint on the top face, tuck in the base section a bit and connect them together to produce a three dimensional form.
This ship has three tiers to it, so now I'm stacking on the next one. Before I do so, I want to place its footprint (which is pretty straightforward this time since it's just a box) on the existing base. In order to do this, I take a rough estimate on one side, and then mirror that measurement across the center line of the base form. Then it's just a matter of extruding that footprint up.
Same idea, once again. Establish the footprint, extrude the form up.
Warning: Anyone with half an eye for perspective can start to see that my lines are... well, falling apart a bit. The reason being, the angle I'm drawing this at required me to work in three point perspective, which is notoriously difficult to estimate. I strongly encourage you at this stage to focus on constructions in two point perspective. Being able to trust that your verticals are all running straight up and down is incredibly useful. If you are going to be working in three point perspective, laying down a series of lines that go off towards the same vanishing point when you start out can be incredibly useful - just make sure that they are consistent in their alignment.
For the sake of time and my sanity, I'm skipping over the big structure on top of the wheel house, as well as a lot of the extraneous details. Here's the point that I want to start organizing my lines and generally sorting through my mess. The best way to do that is, of course, to add line weight to key areas. The biggest thing I want to reinforce is the overall silhouette of the object, bringing it out from the rest of the construction lines.
Continuing to do what I can to bring the meat of the boat out of the mess, while adding some minor details. One approach that I often use to help organize heavy messes like this, is to lay down some large, deliberately-designed shadow shapes. The key here is being deliberate. You have to think about how those shadows will run along surfaces, and you have to make sure the angles of projection are consistent. Basically, think about where your sun is going to be, and what direction it's going to be casting its shadows. Having a shadow come out towards your light source is a big no-no, as it'll break the illusion. Don't go too heavy with this, just place shadows in key areas, especially where it's going to help separate a key shape or form out from the fray.
My solution is to first calculate all the integers of the required form not exceeding 10^9. Then, for the given integer n, we can enumerate the first term from the previously obtained results and check whether the difference is also one of the integers we have computed, which can be implemented with binary search. The total complexity is thus O(N log N).
We can test the threshold from small values to large ones. For each threshold, we check whether the first and last elements are both still “accessible” (event-1), and further check whether there are two consecutive elements, except for the first and last ones, that are not “accessible” (event-2). We find the maximum threshold that event-1 is true and event-2 is false.
192C - Dynastic Puzzles
The basic idea is dp. We use dp[i][j] to denote the maximum length that we can obtain while using all the given strings, with starting letter “i” and ending letter “j”. The recursive formula is as follows:
When we meet a new string s, with starting letter “k” and ending letter “j”, we should update all dp[i][j] with i = 0, 1, ..., 25, according to dp[i][j] = max(dp[i][j], dp[i][k] + length(s)), as long as dp[i][k] > 0 (this means that we have found some combination of strings that starts with letter “i” and ends with letter “k”). Besides, string s can also serve as the first string, and thus we should further update dp[k][j] = max(dp[k][j], length(s)).
Finally, we should find out the maximum value of dp[i][i] as the answer.
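A sketch of that dp in Python (letters mapped to 0–25; a hypothetical helper illustrating the update rule, assuming the strings arrive one at a time as described):

```python
def longest_dynasty_name(strings):
    """dp[i][j] = max total length of a chain starting with letter i, ending with j."""
    dp = [[0] * 26 for _ in range(26)]
    for s in strings:
        k = ord(s[0]) - ord('a')   # starting letter of s
        j = ord(s[-1]) - ord('a')  # ending letter of s
        best = [0] * 26            # buffer so updates use pre-s dp values only
        for i in range(26):
            if dp[i][k] > 0:       # some chain already ends with letter k
                best[i] = max(best[i], dp[i][k] + len(s))
        best[k] = max(best[k], len(s))  # s can also start a new chain
        for i in range(26):
            dp[i][j] = max(dp[i][j], best[i])
    # The answer needs the same starting and ending letter
    return max(dp[i][i] for i in range(26))
```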
Notice that the last element does not affect the result. We first sort the other n - 1 elements in decreasing order, and compute the sum of the first k terms as ks. If ks ≤ b, it means that we have no choice but to select the last element. The reason is that they have enough money to reject any choice unless we select the last one.
On the other hand, if ks > b, it implies that we have a chance to obtain a better result. At first, as ks = a[1] + a[2] + ... + a[k] > b (indices start from 1), it means that we can select any one of their original indices (remember that they have been sorted, so we must recover their "true" indices) as the final result, since b - (a[1] + ... + a[j - 1] + a[j + 1] + ... + a[k]) < a[j] holds for any j, and thus our final choice cannot be rejected. Besides, we also have a chance to select any one of the other n - k - 1 (the last one is excluded) indices as our final result. It is obvious that we should first try a[1], a[2], ..., a[k - 1], so that the administration has the least money, b - (ks - a[k]), left to reject our final choice. Therefore, we can test the other n - k - 1 indices and check whether their cost is larger than b - (ks - a[k]); if yes, then we can select such an index.
We keep updating the "best" result according to the above steps and finally output the answer.
A classical LCA problem.
At first, we should figure out how to calculate the LCA of any two given nodes with O(log N) complexity. There are several classical algorithms for this, and since one can find a large number of materials discussing them, we omit the details here. As for me, I used dfs to build a "timestamp array" and implemented RMQ to calculate the LCA.
Next, we use dis[u] to denote the number of times the edges from u to the root node have been visited (quite similar to the prefix-sum idea). For instance, dis[u] = 2 means that every edge from u to the root node has been visited twice.
For each query with given nodes u and v, we should increase dis[u] and dis[v] by one each, while decreasing dis[LCA(u, v)] by two. One can draw a simple graph and check this with paper and pen.
After dealing with all the queries, we calculate the number of times each edge has been visited. To obtain a correct result, we should visit dis[u] in decreasing order of the depth of u, and whenever we complete a node u, we add dis[u] to its parent node. For instance, suppose u is a leaf node and a child of node v. Then the edge between u and v has been visited dis[u] times, and we update dis[v] = dis[v] + dis[u]. The reason is that any edge from v to the root node is counted in both dis[v] and dis[u], as it belongs to two prefix arrays.
"""
A quantizer that quantizes a maching learning model
from 32-bit floating point format to 8-bit integer format,
and its corresponding de-quantizer.
"""
from collections import namedtuple
from collections import OrderedDict
def quantize_model_weights(weights):
    """Quantize weights before sending."""
    quantized_weights = OrderedDict()
    for name, weight in weights.items():
        quantized_tensor = quantize_tensor(weight)
        quantized_weights[name] = quantized_tensor
    return quantized_weights

def dequantize_model_weights(quantized_weights):
    """De-quantize quantized weights."""
    dequantized_weights = OrderedDict()
    for name, quantized_weight in quantized_weights.items():
        # Invert the affine mapping: real = scale * (q - zero_point)
        dequantized_weight = quantized_weight.scale * (
            quantized_weight.tensor.float() - quantized_weight.zero_point)
        dequantized_weights[name] = dequantized_weight
    return dequantized_weights
QTensor = namedtuple('QTensor', ['tensor', 'scale', 'zero_point'])

def quantize_tensor(tensor, num_bits=8):
    """Quantize a 32-bit floating point tensor with an affine (scale/zero-point) mapping."""
    qmin = -2.**(num_bits - 1)
    qmax = 2.**(num_bits - 1) - 1.

    min_val, max_val = tensor.min(), tensor.max()
    scale = (max_val - min_val) / (qmax - qmin)
    if scale == 0.0:
        scale = 0.001  # avoid division by zero for constant tensors

    # Zero point is the quantized value that represents real 0, clamped to range.
    initial_zero_point = qmin - min_val / scale
    if initial_zero_point < qmin:
        zero_point = qmin
    elif initial_zero_point > qmax:
        zero_point = qmax
    else:
        zero_point = initial_zero_point
    zero_point = int(zero_point)

    q_tensor = zero_point + tensor / scale
    q_tensor.clamp_(qmin, qmax).round_()
    q_tensor = q_tensor.char()  # int8 storage (already rounded and clamped above)
    return QTensor(tensor=q_tensor, scale=scale, zero_point=zero_point)
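The same per-tensor affine mapping can be illustrated on a plain float, independent of any tensor library (hypothetical helpers for illustration; the scale fallback mirrors the module above):

```python
def quantize_value(x, min_val, max_val, num_bits=8):
    """Affine-quantize a single float given the observed value range."""
    qmin = -2.0 ** (num_bits - 1)
    qmax = 2.0 ** (num_bits - 1) - 1.0
    scale = (max_val - min_val) / (qmax - qmin) or 0.001  # same zero-range fallback
    zero_point = int(min(max(qmin - min_val / scale, qmin), qmax))
    q = int(round(min(max(zero_point + x / scale, qmin), qmax)))
    return q, scale, zero_point

def dequantize_value(q, scale, zero_point):
    """Invert the mapping: real = scale * (q - zero_point)."""
    return scale * (q - zero_point)
```

The round trip loses at most about one quantization step of precision, which is why the de-quantized weights are only approximately equal to the originals.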
Hi @btgh, I suspect that error is caused by either the securityCode in the config file being invalid (ie. not known to the Omni) or it doesn't have a name associated with it.
The plugin will read the security code from the Omni to get the corresponding name so it can display a message when you set a mode, example:
[9/12/2022, 6:58:34 AM] [Omni] Area 1 [Area 1]: Set Mode ArmAway [Mantorok]
Could you please confirm and let me know
The security code is correct - I can’t find any name if I look at the controller with PC Access and I’ve set a name in the Homebridge config
I had the security code in quotes - I also tried without but get the following error instead:
Get Code Id failed [args.code.padEnd is not a function]
The security code needs to be in quotes in the config file, like this:
"securityCode": "1234",
In PC Access if you go to the "Setup" tab then click on "Codes" in left side menu you should see all the codes.
In the column named "Code Name Description" you need to have a name defined - usually your name. If it says something like "CODE 1" then it hasn't been defined. Are you able to set one, save and then update the Omni with it?
Hi @btgh,
Did you end up setting a name for your security code in PC Access?
If not then you can upgrade the plugin to 1.5.7 which adds support for unnamed security codes.
adding the name to the security code solved my issue - thanks for the enhancement anyway
I updated the security code with a name for my Omni - once I updated the "name" in the config it started working - thanks for the help!
A few weeks back, I talked about why a given MMO might have failed even when it possessed good qualities. It was a response to something that I see get passed around a lot any time a given game sunsets or winds up in a half-alive maintenance mode. And clearly I was being some kind of predictive wizard with that, as last week saw the sudden and rather brutal shuttering of four separate games (Eden Eternal, Twin Saga, Defiance, and Defiance 2050) along with the end of a reboot effort for another (Anthem).
And predictably, people came out of the woodwork to explain how if these games had really mattered or been loved they wouldn’t be shutting down in the first place. No surprises there.
One of the things I mentioned in that first piece was how this sort of clarion call is nearly always a bad-faith criticism, but today I want to take on the same basic problem from the other side. I don’t want to examine why a game could shut down without the problem being “the game was bad,” but rather I want to look at why there is this assumption that no one played the game or even just that the players fell below some vital critical mass to justify the continued effort.
Now, at face value, you can probably tell for yourself this is wrong. But consider, if necessary, the core conceit I saw put forth: that if anywhere near as many people had cared about Anthem while it was running as cared when the reboot got shut down, the reboot project wouldn't have been cancelled.
Except that’s ignoring the fact that the number of people who were invested in the project up to that point clearly justified a year or more of work by a reasonably sized team of developers to reboot the game, work that wasn’t going on in secret or in addition to other projects. This work was largely done out in the open, the people in charge knew what these people were working on, and it was decided that keeping Team Anthem working on this project was a productive use of resources for quite a while.
And then it got shut down because it would have needed to pull resources from elsewhere to justify an arbitrary deadline, and the people signing the checks decided that now was the time to pull the plug. That’s all it comes down to.
You might wonder why that happened. Keep that wondering in mind. We’re going somewhere with this; for now, stick a pin in it.
When you get to smaller games that are being shuttered, there are fundamentally two reasons to ask whether or not anyone cares. The first reason is a genuine good-faith or at least minimally bad-faith argument put forth by people who simply don’t know anyone who’s into the title in the first place. It’s really easy to assume that Twin Saga was floundering without players because, well, you don’t know anyone who plays it and so it must not have had much of a playerbase.
But even that elides the fact that your knowledge of the MMO sphere does not equal the entirety of what it encompasses. I could argue that I don’t personally know anyone who’s very happy with retail World of Warcraft at the moment, but that doesn’t mean there aren’t people happy with it. This is why I spend a lot of time searching down information from a wide variety of sources to make sure that I know more than just my own limited perspective and still assume that my perspective is limited.
Yet that’s fundamentally a problem of ignorance. The other reason to ask why to care is more insidious and far more destructive. It assumes that MMO development and success is a pure meritocracy, wherein all of the best projects get the best people and the worse projects get progressively worse people. Failure, in this conception, is not only predictable but almost justified as an outgrowth of having a lower-tier team working on the project in the first place.
It’s an attractive view that assumes success is an outgrowth of skill and thus altogether deserved. And it only has the slight problem of being absolute nonsense with no resemblance to the real world in any way, shape, or form.
You know what game I don’t care about in the least? Star Wars Galaxies. Absolutely nothing I’ve ever heard about that game or seen about it makes me even remotely interested in playing it. But that doesn’t mean the game’s official shutdown was somehow justified by my apathy toward it.
This was a game that did have fans who loved the heck out of it, even after the NGE. It was a game that had a vibrant and active player community. It was shut down solely because the licensing fee for it was jacked up to unreasonable levels to “clear the board” for Star Wars: The Old Republic. Period end. There was no meritocracy in play here.
But for some people, it needs to be merited. There’s this strange obsession with the idea that all of this must be justified, that a game shutting down must come about because it somehow “deserved” this facet, because what’s the alternative? That all of this is being made in service to the whims of a system that has a very different set of priorities than you do?
Gosh, if that were the case, your favorite game or games might be subject to a shutdown for arbitrary reasons just like the games you don’t care about, and the only thing that’s keeping them running are whims and what the budget looks like on the balance sheet. It’s possible for a game to have a solid fanbase willing to overlook its flaws, a reboot plan on the table that would work, and for someone in charge to decide that it’s just going to cost too much money, making all the time spent working on that reboot plan a complete waste because it’s getting thrown out.
And that is… kind of scary! It’s not exactly heartening to think that WoW, for example, continues running because of ontological inertia and that people currently busy running the game to the consternation of players are being checked on by people who don’t care if players are unhappy so long as the game meets its financial targets. Heck, that’d mean that it’s possible for things to succeed or fail entirely separate from their artistic merits.
A belief in meritocracy when it comes to the survival of online games is far more comforting. It’s much more pleasant to pretend that Anthem just didn’t have the support it needed from players and thus the real problem is that people who are missing it now didn’t give enough Support Energy or whatever, so it failed. That’s way better than seeing issues with an underlying system or leadership that may not be something you can actually control one way or the other.
It’s a nice fiction to believe in. But it is a fiction, and it serves only to demoralize and marginalize actual developers doing hard work to improve games by blaming them for shutdowns that they likely worked like mad to avert.
|
OPCFW_CODE
|
/*
 * (C) René Vogt
 *
 * Published under MIT license as described in the LICENSE.md file.
 *
 */
using System;
using System.Drawing;
using System.Linq;
using ConControls.ConsoleApi;
using ConControls.Logging;
using ConControls.WindowsApi;
using ConControls.WindowsApi.Types;

namespace ConControls.Controls.Drawing
{
    sealed class ConsoleGraphics : IConsoleGraphics
    {
        readonly INativeCalls api;
        readonly Size size;
        readonly ConsoleOutputHandle consoleOutputHandle;
        readonly CHAR_INFO[] buffer;
        readonly FrameCharSets frameCharSets;

        internal ConsoleGraphics(ConsoleOutputHandle consoleOutputHandle, INativeCalls api, Size size, FrameCharSets frameCharSets)
        {
            this.consoleOutputHandle = consoleOutputHandle;
            this.api = api;
            this.size = size;
            this.frameCharSets = frameCharSets;
            Logger.Log(DebugContext.ConsoleApi | DebugContext.Graphics, $"Initializing buffer with size {this.size}.");
            buffer = api.ReadConsoleOutput(consoleOutputHandle, new Rectangle(Point.Empty, size));
        }

        public void DrawBackground(ConsoleColor color, Rectangle area)
        {
            Logger.Log(DebugContext.ConsoleApi | DebugContext.Graphics, $"drawing background {area} with {color}.");
            FillArea(color, color, default, area);
        }

        public void DrawBorder(ConsoleColor background, ConsoleColor foreground, BorderStyle style, Rectangle area)
        {
            Logger.Log(DebugContext.ConsoleApi | DebugContext.Graphics, $"drawing border {style} around {area} with {foreground} on {background}.");
            Logger.Log(DebugContext.ConsoleApi | DebugContext.Graphics, $"{area.Left} {area.Top} {area.Right} {area.Bottom}");
            if (style == BorderStyle.None) return;
            var charSet = frameCharSets[style];
            var attribute = background.ToBackgroundColor() | foreground.ToForegroundColor();
            bool leftInRange = area.Left >= 0 && area.Left < size.Width;
            bool topInRange = area.Top >= 0 && area.Top < size.Height;
            bool rightInRange = area.Right > 0 && area.Right <= size.Width;
            bool bottomInRange = area.Bottom > 0 && area.Bottom <= size.Height;
            if (leftInRange)
            {
                if (topInRange)
                    buffer[GetIndex(area.Left, area.Top)] = new CHAR_INFO(charSet.UpperLeft, attribute);
                if (bottomInRange)
                    buffer[GetIndex(area.Left, area.Bottom - 1)] = new CHAR_INFO(charSet.LowerLeft, attribute);
            }
            if (rightInRange)
            {
                if (topInRange)
                    buffer[GetIndex(area.Right - 1, area.Top)] = new CHAR_INFO(charSet.UpperRight, attribute);
                if (bottomInRange)
                    buffer[GetIndex(area.Right - 1, area.Bottom - 1)] = new CHAR_INFO(charSet.LowerRight, attribute);
            }
            if (area.Width > 2)
            {
                var charInfo = new CHAR_INFO(charSet.Horizontal, attribute);
                for (int x = Math.Max(0, area.Left + 1); x < area.Right - 1 && x < size.Width; x++)
                {
                    if (topInRange)
                        buffer[GetIndex(x, area.Top)] = charInfo;
                    if (bottomInRange)
                        buffer[GetIndex(x, area.Bottom - 1)] = charInfo;
                }
            }
            if (area.Height > 2)
            {
                var charInfo = new CHAR_INFO(charSet.Vertical, attribute);
                for (int y = Math.Max(0, area.Top + 1); y < area.Bottom - 1 && y < size.Height; y++)
                {
                    if (leftInRange)
                        buffer[GetIndex(area.Left, y)] = charInfo;
                    if (rightInRange)
                        buffer[GetIndex(area.Right - 1, y)] = charInfo;
                }
            }
        }

        public void FillArea(ConsoleColor background, ConsoleColor foreColor, char c, Rectangle area)
        {
            Logger.Log(DebugContext.ConsoleApi | DebugContext.Graphics, $"Filling area {area} with '{c}' in {foreColor} on {background}.");
            var char_info = new CHAR_INFO(c, background.ToBackgroundColor() | foreColor.ToForegroundColor());
            var indices = from x in Enumerable.Range(area.Left, area.Width)
                          from y in Enumerable.Range(area.Top, area.Height)
                          where x >= 0 && x < size.Width && y >= 0 && y < size.Height
                          select GetIndex(x, y);
            foreach (var index in indices)
                buffer[index] = char_info;
        }

        public void CopyCharacters(ConsoleColor background, ConsoleColor foreColor, Point topLeft, char[] characters, Size arraySize)
        {
            Logger.Log(DebugContext.ConsoleApi | DebugContext.Graphics, $"Copying characters to {topLeft}, size: {arraySize} in {foreColor} on {background}.");
            var attributes = background.ToBackgroundColor() | foreColor.ToForegroundColor();
            for (int sourceX = 0, targetX = topLeft.X;
                 sourceX < arraySize.Width && targetX < size.Width;
                 sourceX++, targetX++)
                for (int sourceY = 0, targetY = topLeft.Y;
                     sourceY < arraySize.Height && targetY < size.Height;
                     sourceY++, targetY++)
                    buffer[GetIndex(targetX, targetY)] = new CHAR_INFO(characters[sourceY * arraySize.Width + sourceX], attributes);
        }

        public void Flush()
        {
            Logger.Log(DebugContext.ConsoleApi | DebugContext.Graphics, $"flushing buffer ({size}).");
            api.WriteConsoleOutput(consoleOutputHandle, buffer, new Rectangle(Point.Empty, size));
        }

        int GetIndex(int x, int y) => y * size.Width + x;
    }
}
|
STACK_EDU
|
Sorry for the delay in responding. This continues to be an issue, so thanks for looking into it. Our QB version is 2019. Opensync version is 3.0.25.
After clearing the contents of the txnexpenselinedetail table, and performing a full Re-populate on Transaction, the txnexpenselinedetail did not refill. All we are getting is incremental updates of the latest data. Which repopulate action does this table go with?
Yudel, we have determined that a change in the QB database on the BillableStatus field of the TxnExpenseLineDetail table simply does not trigger an update in the OpenSync SQL table. We have found that if we subsequently modify the Memo – by adding a character on the end – then OpenSync will update the record in SQL. After it updates, the value for BillableStatus matches the value in the QB database. For instance, we had a record which was showing in SQL BillableStatus = Billable, which had been applied to an invoice, and therefore should have had the value HasBeenBilled. We modified its memo by adding a character. After the next OpenSync refresh, the new memo appeared in SQL, along with the BillableStatus change to HasBeenBilled.
CustomerRef_ListID and CustomerRef_FullName tell me the job that the expense applies to. These are set when the expense is entered. What I want to know is if the expense has been applied to an invoice.
I’m trying to duplicate the Unbilled Costs by Job report that Quickbooks generates. But the Billed/Unbilled selector seems to be the BillableStatus field, and it’s just not updating to HasBeenBilled.
I’m not receiving empty values, I continue to receive a Billable value for records which have been applied to invoices, so they should come through as HasBeenBilled.
problem solved by synergration tech support. here is their suggestion:
this error was introduced by the latest update to qb 2013. to get around this you can just create the following table:
noteid – varchar 10
note – varchar 4095
date – date/time
idkey – varchar 36
when you create your refresh task, click filters. you have to select a start date and end date to use filtering. if you want all data, then select dates far into the past, and far into the future. with filtering enabled, you can use the block request by days feature. select a number of days, such as 90, that you know will work. then, when it queries the entire table, it will query it in 90 day chunks through your filter period.
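The "block request by days" workaround described above – splitting one huge query into fixed-size date windows – can be sketched generically. This is an illustrative sketch only; the function and parameter names here are made up and are not part of OpenSync's actual interface:

```python
from datetime import date, timedelta

def date_chunks(start, end, days=90):
    """Yield (chunk_start, chunk_end) date pairs covering [start, end]
    inclusively, in contiguous windows of at most `days` days, so one
    large table query can be issued as several smaller ones."""
    current = start
    step = timedelta(days=days)
    while current <= end:
        # Last day of this window, clipped to the overall end date.
        chunk_end = min(current + step - timedelta(days=1), end)
        yield current, chunk_end
        current = chunk_end + timedelta(days=1)

# Example: split one year into 90-day blocks.
blocks = list(date_chunks(date(2019, 1, 1), date(2019, 12, 31), days=90))
```

Each yielded window starts the day after the previous one ends, so the filter period is covered with no gaps or overlaps.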
yet more info:
the same error is caused by any customer which has a one or more notes listed in the customer center in qb. removing the notes makes it work, but it just skips to the next customer that has a note.
additional info: the error is specific to one customer:project record. however, the customer:project is near the beginning of the alphabet and no subsequent customers get processed after the error. i opened the record in qb, and nothing looks out of place.
|
OPCFW_CODE
|
Natural language generation (NLG) is a rapidly evolving field of artificial intelligence (AI) with the potential to revolutionize the data analytics industry. NLG systems can generate text, translate languages, produce different kinds of creative content, and answer questions in natural language.
In the context of data analytics, NLG can be used to:
- Generate reports and summaries: NLG can be used to automatically generate reports and summaries of data, which can save analysts a significant amount of time. For example, an NLG-powered analytics tool could automatically generate a report on sales performance for a given month, highlighting trends and insights that would otherwise be difficult to identify.
- Create interactive dashboards: NLG can be used to create interactive dashboards that allow users to explore data in a more natural way. For example, a user could ask an NLG-powered dashboard “What are the top three reasons why customers are churning?” and the dashboard would generate a list of answers based on the data.
- Explain complex data: NLG can be used to explain complex data in a way that is easy to understand. For example, an NLG-powered analytics tool could explain how a particular marketing campaign impacted sales.
- Generate insights: NLG can be used to generate insights from data that would otherwise be difficult to identify. For example, an NLG-powered analytics tool could identify patterns in customer behavior that could be used to improve customer retention.
NLG is still a relatively new technology, but it has the potential to revolutionize the data analytics industry. By making data more accessible and understandable, NLG can help businesses make better decisions and improve their bottom line.
Here are some specific examples of how NLG is being used in the data analytics industry today:
- Salesforce: Salesforce is a leading CRM platform that uses NLG to generate reports and summaries of customer data. This allows Salesforce users to quickly and easily identify trends and insights that can help them improve their sales performance.
- Microsoft Power BI: Microsoft Power BI is a business intelligence (BI) tool that uses NLG to create interactive dashboards. These dashboards allow users to explore data in a more natural way, which can help them identify insights that they might not have otherwise found.
- IBM Watson Analytics: IBM Watson Analytics is a cloud-based BI tool that uses NLG to explain complex data. This makes it easier for users to understand the data and make better decisions.
These are just a few examples of how NLG is being used in the data analytics industry today. As NLG technology continues to evolve, we can expect to see even more innovative and groundbreaking applications of this technology in the years to come.
In addition to the benefits mentioned above, NLG can also help to improve the communication and collaboration between data analysts and business users. By generating text-based reports and summaries, NLG can make it easier for business users to understand the results of data analysis. This can lead to better decision-making and improved business outcomes.
Overall, NLG is a powerful technology: by making data more accessible and understandable, it can help businesses make better decisions, improve their bottom line, and gain a competitive advantage.
|
OPCFW_CODE
|
hansa Posted August 31, 2016

Hello everyone, I am currently starting to plan my applications to grad school, specifically looking for a PhD in computer science. My undergraduate degree (which I am close to completing) is in a technology field that mixed design, media, and computer science together, so while I did learn some basics, I won't have the depth of knowledge that I would if I were a computer science student. I am pursuing a minor in computer science, which should help, but I'm not sure if that's enough to get me into a top computer science program. I would love to apply to Cornell, Berkeley, Caltech, Yale, and a small local university as a backup option (if I don't get into any of the others).

I am involved in research, working with three different labs at my university, and plan to have 3 publications by the time I graduate (two on artificial intelligence topics and one on an HCI-related topic). One of my research opportunities was a result of the undergraduate research award at my university. The reason I chose to be involved with three different research groups was so that I could have three good references, but now I wonder if it shows a lack of focus.

Another point of concern for me is my GPA. I estimate that by the time I graduate I will have around 3.8 to 3.85 – but on a 4.33 scale. Will this put me at a huge disadvantage? I have yet to actually take my GRE, but I did a practice exam without studying, just to see where I am, and got a score of 158 on verbal and 157 on quantitative, which I realize is definitely low. If anyone has any studying suggestions, specifically for quantitative, I would be so grateful if you could please let me know.

And one last question – I was invited to join an honor society for top students, but I have to pay a $90 membership fee. I have no idea if this is just for bragging rights or if it will actually have an impact on my application. I heard that honor societies are a bigger thing in the USA, and as I plan to study in the USA, I wonder if it's worth it. Also, I am a new user on this forum, so if I'm posting in the wrong place I do apologize – and thank you for reading my question.
|
OPCFW_CODE
|
import numpy as np
import math
import sys
def normalizeVector(vector):
unit_vector = vector / np.linalg.norm(vector)
return unit_vector
def angleBetween(v1, v2):
v1_unit = normalizeVector(v1)
v2_unit = normalizeVector(v2)
v1_y, v1_x = v1_unit[0], v1_unit[1]
v2_y, v2_x = v2_unit[0], v2_unit[1]
return np.arctan2(v1_x*v2_y - v1_y*v2_x, v1_x*v2_x + v1_y*v2_y)
def getOrthogonalVector(vector):
    # Gram-Schmidt step: remove the component of (0, 1) that lies along
    # `vector` to obtain a unit vector orthogonal to it.
    vector_normalized = normalizeVector(vector)
    orthogonal_vector = np.array([0.0, 1.0])
    orthogonal_vector -= orthogonal_vector.dot(vector_normalized) * vector_normalized
    return normalizeVector(orthogonal_vector)
class NavigationMethods():
@staticmethod
def getObjectsInRadiusOfMe(my_controller, my_ship, radius):
my_pose = my_ship.xy
return my_controller.get_mobs_in_radius(my_pose, radius, enemy_only=False)
@staticmethod
    def getAsteroidsInRadius(my_controller, objects_in_radius):
        all_asteroids = my_controller.get_data_from_asteroids()
        asteroids_in_radius = []
        for a in all_asteroids:
            if a in objects_in_radius:
                asteroids_in_radius.append(a)
        return asteroids_in_radius
    @staticmethod
    def getAsteroidResultingVector(my_controller, my_ship, asteroids_in_radius):
        # Sum the asteroids' direction vectors, weighted by the inverse of the
        # distance from the ship to each asteroid's closest surface point.
        result_vector = np.zeros(2)
        if len(asteroids_in_radius) > 1:
            for asteroid in asteroids_in_radius:
                u_vector = asteroid.dir
                print(f"u_vector : {u_vector}", file=sys.stderr)
                closest_point = u_vector * asteroid.radius + asteroid.xy
                distance = math.sqrt((my_ship.xy[0] - closest_point[0])**2 + (my_ship.xy[1] - closest_point[1])**2)
                print(f"distance : {distance}", file=sys.stderr)
                weight = 1 / distance
                result_vector += weight * u_vector
        # result_vector *= -1
        print(f"result_vector: {normalizeVector(result_vector)}", file=sys.stderr)
        return result_vector
@staticmethod
def checkIfInDeadzone(my_controller, my_ship, deadzone_offset):
deadzone_center = my_controller.get_dead_zone_center()
deadzone_radius = my_controller.get_dead_zone_radius() - deadzone_offset
distance = math.sqrt((my_ship.xy[0]-deadzone_center[0])**2 + (my_ship.xy[1] - deadzone_center[1])**2)
return distance > deadzone_radius
@staticmethod
def moveShipAccordingToVector(my_controller, my_ship, motion_control, u_vector):
turn_angle = angleBetween(u_vector, my_ship.dir)
        print(f"turn_angle : {turn_angle}", file=sys.stderr)
if turn_angle > 0.03:
motion_control.set_rotation(-1.0)
return True
elif turn_angle < -0.03:
motion_control.set_rotation(1.0)
return True
else: return False
@staticmethod
def getDeadZoneResultingVector(my_controller, my_ship):
resulting_vector = np.zeros(2)
deadzone_center = my_controller.get_dead_zone_center()
ship_point = my_ship.xy
resulting_vector[0] = (-1) * ship_point[1]
resulting_vector[1] = (-1) *ship_point[0]
resulting_vector = normalizeVector(resulting_vector)
return resulting_vector
@staticmethod
def getDeadZoneOrthogonalResultingVector(my_controller, my_ship):
resulting_vector = NavigationMethods.getDeadZoneResultingVector(my_controller, my_ship)
orthogonal_vector = getOrthogonalVector(resulting_vector)
        print(f"orthogonal_vector {orthogonal_vector}", file=sys.stderr)
return orthogonal_vector
@staticmethod
def getPlayersResultingVector(players):
pass
@staticmethod
def getTotalResultingVector():
pass
|
STACK_EDU
|
Many game developers dream of creating a massive sandbox game in the vein of Grand Theft Auto V, World of Warcraft or Skyrim. But those games require millions of lines of code, tens of thousands of art and sound assets and hundreds of millions of dollars to develop and market. But what if you don’t have $500 million and thousands of employees? Do you just throw your hands in the air and give up on your dream of making awesome games?
Of course not. As a professional game developer, your projects should have a scope that fits your budget, resources and limitations. If you don’t think this is important, you can ask the creators of Daikatana and Duke Nukem Forever exactly what happens when the scope of your project gets too big.
The most important factor in having a realistic scope for your project is to start small and have a mentality of “good enough”. By starting small, you start by doing one thing, like make one character model, one level or put out one piece of music. Then do several small things in a row. Try to get a small portion of the game playable. It can be as little as one level, one battle, or one mechanic. Work on that mechanic until you’ve ironed out all the flaws, then release it.
Congratulations, you’ve just made a playable game. It might not be the next World of Warcraft but it’s something that people can play. So where does “good enough” come in? “Good enough” is important because what many developers do is finish half of a feature and then begin work on a new and exciting feature. This practice frequently results in a project that’s not ready for release because there are a bunch of half-finished items and no playable game at its center.
So if you find yourself with a project that has a ton of half-finished features, declare that you will release a “good enough” game and work on finishing the most important features before shipping. It’s fine if you don’t release with 100% of the features fully implemented, just a few features that are “good enough” will do. You can always go back and finish the rest of the game to your satisfaction.
The most difficult thing to do is to say that any given feature set is good enough, before focusing on polishing the game to make it as perfect as possible. But remember that starting the game creation process is trivial compared to actually finishing something playable and fun, and as a game developer, no one will ever remember or purchase your half-finished and unpolished works in progress. Start small and you will succeed, for every journey of a thousand miles begins with a single step.
Please be sure to share this article if you found this information useful!
|
OPCFW_CODE
|
Release that Witch
Chapter 1115
That was because he knew from the letter that it was the king himself who had declined the Kajen Troupe's offer to perform a play for the coronation ceremony.
To his great surprise, it was from the king!
"Mr. Fels, is there something wrong?" Bernis asked with concern.
Since such a well-prepared play had failed to garner much interest from the king, Kajen had assumed His Majesty was only being polite to his troupe in the letter.
"But… there's a letter with Graycastle's royal seal on the envelope. You told me that if it's a letter from Neverwinter, I should give it to you imme—"
It turned out that from the very beginning it had only been his own wishful thinking.
He had expected his mentor to be waiting for them comfortably in his chair as usual, but this time he found him standing listlessly by his desk.
He took off his glasses and rubbed his sore eyes, then closed the script and put it back on the shelf beside his desk.
Egrepo opened the door of the study and then stood agape.
Before the maid finished her sentence, Kajen suddenly opened the door.
"That's okay. Read it."
In his view, most of these scripts lacked an engaging plot and a striking story-telling style. He reckoned that the author must have been a novice who was only able to write down the plot in a simple fashion. However, he still kept reading these stories since he had little else to do at the moment.
While closing his eyes, he leaned back in his chair and said, "Put it outside. I'll check them later."
"I have received a letter from Neverwinter. It's from the king." Kajen picked up the letter on the desk and said to them, "Here, take a look."
Kajen could accept this explanation regarding the magical movie.
"Is it… all right?"
Only by reading the scripts from Neverwinter could he temporarily forget his troubles.
Hearing that, Egrepo took the letter.
If I can directly contact the king, will I be able to learn more about the magical movie?
The king was forthcoming in answering questions about the magical movie. In the letter, he explicitly explained that it had been made with a special instrument capable of recording images. His Majesty also mentioned that he could not provide this device to another troupe because it was extremely rare. According to the letter, the instrument could only be built and operated by witches and was made of some rare materials from an ancient relic.
This movie script was kept together with many other scripts from Neverwinter, including those such as "The Witches' Story", "New City" and "Dawn". May had given them to him as a farewell gift, which his people had considered to be making a mockery of him. Surprisingly, Kajen Fels, a well-known playwright, had accepted all the scripts and brought them back to his theater. He placed them in the handiest spot on his bookshelf, and by now he had already read all of them repeatedly.
"Ahem, girls, we must also look on the bright side." Egrepo cleared his throat and continued, "We've grown quickly after taking in the former members of the three disbanded troupes. We can survive no matter who is king. Come on, keep your chins up. Don't look so upset, because Mr. Fels is expecting us."
But he still felt heart-broken after reading the letter.
"Aha, if your admirers heard these words, their hearts would break," Egrepo laughed and said. "It's expected. We have smaller audiences for the plays ever since the king sent over half the nobles to the mines and made Neverwinter the new king's city. But as long as this city still stands, things will gradually improve."
|
OPCFW_CODE
|
1.7 Per-cell Quality Control
Strict quality control (QC) of scATAC-seq data is essential to remove the contribution of low-quality cells. In ArchR, we consider three characteristics of data:
- The number of unique nuclear fragments (i.e. not mapping to mitochondrial DNA).
- The signal-to-background ratio. A low signal-to-background ratio is often attributed to dead or dying cells whose de-chromatinized DNA allows for random transposition genome-wide.
- The fragment size distribution. Due to nucleosomal periodicity, we expect to see depletion of fragments that are the length of DNA wrapped around a nucleosome (approximately 147 bp).
The first metric, unique nuclear fragments, is straightforward - cells with very few usable fragments will not provide enough data to make useful interpretations and should therefore be excluded.
The second metric, signal-to-background ratio, is calculated as the TSS enrichment score. Traditional bulk ATAC-seq analysis has used this TSS enrichment score as part of a standard workflow for determination of signal-to-background (for example, the ENCODE project). We and others have found the TSS enrichment to be representative across the majority of cell types tested in both bulk ATAC-seq and scATAC-seq. The idea behind the TSS enrichment score metric is that ATAC-seq data is universally enriched at gene TSS regions compared to other genomic regions, due to large protein complexes that bind to promoters. By looking at per-basepair accessibility centered at these TSS regions, we see a local enrichment relative to flanking regions (1900-2000 bp distal in both directions). The ratio between the peak of this enrichment (centered at the TSS) relative to these flanking regions represents the TSS enrichment score.
Traditionally, the per-base-pair accessibility is computed for each bulk ATAC-seq sample and then this profile is used to determine the TSS enrichment score. Performing this operation on a per-cell basis in scATAC-seq is relatively slow and computationally expensive. To accurately approximate the TSS enrichment score per single cell, we count the average accessibility within a 50-bp region centered at each single-base TSS position and divide this by the average accessibility of the TSS flanking positions (+/- 1900 – 2000 bp). This approximation was highly correlated (R > 0.99) with the original method and values were extremely close in magnitude.
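As a rough illustration of the per-cell approximation described above (a sketch only, not ArchR's actual R implementation, which operates on Arrow files), the score can be computed from a per-base accessibility profile centered on a TSS:

```python
import numpy as np

def tss_enrichment(per_base_counts, flank=2000, center_window=50, flank_window=100):
    """Approximate a TSS enrichment score from per-base Tn5 insertion
    counts in a window of +/- `flank` bp centered on a TSS.

    The score is the mean accessibility in a `center_window` bp window
    at the TSS divided by the mean accessibility of the distal flanks
    (the outermost `flank_window` bp on each side, i.e. roughly
    1900-2000 bp from the TSS with the defaults)."""
    counts = np.asarray(per_base_counts, dtype=float)
    assert counts.size == 2 * flank + 1, "expected +/- flank bp around the TSS"
    tss = flank  # index of the TSS base itself
    half = center_window // 2
    center_mean = counts[tss - half : tss + half + 1].mean()
    flank_mean = np.concatenate([counts[:flank_window], counts[-flank_window:]]).mean()
    return center_mean / flank_mean

# Toy example: flat background of 1 with accessibility of 5 around the TSS.
profile = np.ones(4001)
profile[1975:2026] = 5.0
```

With this toy profile the score is 5.0: the central window averages 5 while the distal flanks average 1. In real data the profile would be aggregated per cell across all annotated TSSs.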
The third metric, fragment size distribution, is generally less important but always good to manually inspect. Because of the patterned way that DNA wraps around nucleosomes, we expect to see a nucleosomal periodicity in the distribution of fragment sizes in our data. These hills and valleys appear because fragments must span 0, 1, 2, etc. nucleosomes (Tn5 cannot cut DNA that is tightly wrapped around a nucleosome).
By default in ArchR, pass-filter cells are identified as those cells having a TSS enrichment score greater than 4 and more than 1000 unique nuclear fragments. It is important to note that the actual numeric value of the TSS enrichment score depends on the set of TSSs used. The default values in ArchR were designed for human data, and it may be important to change the default thresholds when running createArrowFiles() on data from other species.
Creation of Arrow files will create a folder in the current working directory called “QualityControl” which will contain 2 plots associated with each of your samples. The first plot shows the log10(unique nuclear fragments) vs TSS enrichment score and indicates the thresholds used with dotted lines. The second shows the fragment size distribution.
For our tutorial data, we have three samples as shown below:
For CD34 BMMC:
We are now ready to tidy up these Arrow files and then create an
|
OPCFW_CODE
|
package masterofgalaxy.world.worldbuild;
import com.badlogic.gdx.math.MathUtils;
import com.badlogic.gdx.math.Rectangle;
import com.badlogic.gdx.math.Vector2;
import com.badlogic.gdx.utils.Array;
import masterofgalaxy.gamestate.Player;
import masterofgalaxy.world.World;
import masterofgalaxy.world.WorldScreen;
import java.util.Random;
public class RectangleWorldStarLayout {
private final float rectPadding = 32.0f;
private WorldScreen screen;
private World world;
private Array<FreeRect> rects;
private Array<FreeRect> freeRects;
private Random random;
private long seed;
private int numUnownedStars;
public RectangleWorldStarLayout(WorldScreen screen) {
this.screen = screen;
}
public World buildStars() {
random = new Random(seed);
rects = dividePlayfield();
freeRects = new Array<FreeRect>(rects);
placePlayers();
placeUnownedStars();
return world;
}
private void placePlayers() {
StarBuilder builder = new StarBuilder(screen, world, random);
for (int i = 0; i < world.getPlayers().size; ++i) {
Player player = world.getPlayers().get(i);
Rectangle rect = popNextRectangle();
Vector2 pos = randomizeStarPositionInRect(rect);
builder.createHomeworld(player, pos.x, pos.y);
}
}
private void placeUnownedStars() {
StarBuilder builder = new StarBuilder(screen, world, random);
for (int i = 0; i < numUnownedStars; ++i) {
Rectangle rect = popNextRectangle();
Vector2 pos = randomizeStarPositionInRect(rect);
builder.createRandomStar(screen.getGame().getActorAssets().starClasses.pickRandrom(random), pos.x, pos.y);
}
}
private Vector2 randomizeStarPositionInRect(Rectangle rect) {
float offsetX = (rect.width - (rectPadding * 2.0f)) * random.nextFloat();
float offsetY = (rect.height - (rectPadding * 2.0f)) * random.nextFloat();
float x = (rect.x + rectPadding) + offsetX;
float y = (rect.y + rectPadding) + offsetY;
return new Vector2(x, y);
}
private Array<FreeRect> dividePlayfield() {
Rectangle[] rects = buildSubRects();
Array<FreeRect> result = new Array<FreeRect>();
for (int i = 0; i < rects.length; ++i) {
result.add(new FreeRect(rects[i]));
}
return result;
}
private Rectangle[] buildSubRects() {
int numCols = getNumCols();
int numRows = getNumRows();
float width = world.getPlayField().width / numCols;
float height = world.getPlayField().height / numRows;
Rectangle[] rects = new Rectangle[numCols * numRows];
for (int col = 0; col < numCols; ++col) {
for (int row = 0; row < numRows; ++row) {
Rectangle rect = new Rectangle();
rect.x = width * col;
rect.y = height * row;
rect.width = width;
rect.height = height;
rects[index(col, row)] = rect;
}
}
return rects;
}
private Rectangle popNextRectangle() {
int idx = random.nextInt(freeRects.size);
try {
return freeRects.get(idx).rect;
} finally {
freeRects.removeIndex(idx);
}
}
private int index(int col, int row) {
return row * getNumCols() + col;
}
private int getNumRows() {
return MathUtils.ceil((float)Math.sqrt(getTotalNumberOfStars()));
}
private int getNumCols() {
return MathUtils.ceil((float)Math.sqrt(getTotalNumberOfStars()));
}
private int getTotalNumberOfStars() {
return numUnownedStars + world.getPlayers().size;
}
public int getNumUnownedStars() {
return numUnownedStars;
}
public void setNumUnownedStars(int numUnownedStars) {
this.numUnownedStars = numUnownedStars;
}
public long getSeed() {
return seed;
}
public void setSeed(long seed) {
this.seed = seed;
}
public World getWorld() {
return world;
}
public void setWorld(World world) {
this.world = world;
}
private class FreeRect {
public Rectangle rect;
public FreeRect(Rectangle rect) {
this.rect = rect;
}
}
}
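The sizing logic in getNumRows()/getNumCols() is what keeps popNextRectangle() safe: a ceil(sqrt(N))-by-ceil(sqrt(N)) grid always contains at least N cells, one per star. A small Python sketch of that invariant (illustrative, not part of the game code):

```python
import math

def grid_dims(total_stars):
    """Mirror of getNumRows()/getNumCols(): a square grid whose side is
    ceil(sqrt(N)), so rows * cols >= N and the free-rectangle pool
    never runs out while placing homeworlds and unowned stars."""
    side = math.ceil(math.sqrt(total_stars))
    return side, side
```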
|
STACK_EDU
|
You wish to understand how PDQ Deploy installs software. This can be helpful in troubleshooting deployment issues.
Depending on your PDQ Deploy preference settings, there may be slight variations in the process outlined below.
Package Files on the PDQ Console Computer:
First, we create/import a PDQ Deploy package for Microsoft Silverlight 5.1. Opening the package and selecting an Install Step, the Install File location is $(Repository)\Microsoft\Silverlight\Silverlight-5.1.50901.0.exe.
From the image above, the Install File is placed in the $(Repository). The Repository is a system variable defined by Options > Preferences > Repository. By default, the Repository folder is located in %PUBLIC%\Documents\Admin Arsenal\PDQ Deploy\Repository.
PDQ Deploy Credentials:
PDQ Deploy utilizes three sets of credentials. They can be the same credentials or different, depending on the needs of your environment, and the article PDQ Credentials Explained covers these in more detail.
The first set of credentials are the Background Service credentials, located in Options > Background Service. These credentials were supplied when PDQ Deploy was first run.
In the above example, the PDQ Deploy Background Service (called PDQDeploy) runs under the domain user account PDQUser in the deadwood.local domain.
NOTE: It is not necessary for the Background Service credentials to have local admin privileges on target machines, but they are required to have local admin privileges on the PDQ console machines, regardless of whether the consoles are running Central Server or configured to run in Local Mode.
The second set of credentials are the Credentials as found in Options > Credentials. These credentials are the credentials used as the Deploy User and runs the deployments on target machines via the remote runner service.
IMPORTANT: As the Deploy User, the user(s) in Options > Credentials must be a local administrator on all target machines.
The last set of credentials are Console Users in Options > Console Users. These credentials are necessary if a user will be opening the PDQ Deploy console and that user is not the Background Service user. In this example, we’re opening PDQ Deploy using the deadwood.com\Jane.Doe credentials and not the deadwood.local\PDQUser credentials. Because of this, it is necessary to have Jane Doe listed in Console Users.
The deployment Credentials are set to DOMAIN.COM\PDQDeploy (see above). Here's an example using the Deploy Once window:
- References to the Background Service apply to the Background Service running on the PDQ Deploy console computer. References to the Runner Service refer to the service running on the remote target computer.
- When deploying to targets in child/sub-domains using a domain-specific account, OR to targets in a workgroup, it is necessary to Disable UAC.
- For more information on Console Users (Options > Console Users), see Our Handy Video.
- In Options > Credentials, the (default) user credentials are the default deployment credentials.
Package Deployment Process:
Using the examples above, there are three target computers: Guinness, Heineken, and Lopan.
Step 1: The PDQ Deploy Background Service attempts to retrieve the installer file, Silverlight_x64-5.1.50901.0.exe from $(Repository)\Microsoft\Silverlight\.
In Enterprise Mode, there is a Copy Mode option (Options > Preferences > Performance). The default method is "Push". If the Copy Mode is changed to "Pull," the Background service will not attempt to copy the files down to each target. Each target will attempt to Pull the files down using the Runner service. In this case, the deployment Credentials (Options > Credentials) MUST have full access to the package files. For more information about Push and Pull, please see the article PDQ Deploy Copy Modes.
Step 2: Using the Deployment Credentials the Background Service attempts to copy Silverlight_x64-5.1.50901.0.exe to the following paths:
IMPORTANT: Some antivirus applications may prevent copying into the ADMIN$ share. You may need to exclude these directories from the antivirus real-time scanning as detailed in the article Recommended Antivirus/Antimalware Exclusions for PDQ Products.
Step 3: A Windows Service is created on each target and is called PDQDeployRunner-n (-n will usually be "1"). As explained above, this is referred to as the "Runner" service. The Runner service is set to run under the Deployment Credentials. For this example, we've used deadwood.local\DeployUser (see image below).
There are options available when deploying a package to have each step Run As either Deploy User (use package settings), Deploy User, Deploy User (Interactive), Local System or Logged On User. We recommend using Deploy User (use package settings) or Deploy User but there may be times to change this behavior. If a step's Run As option is set to Local System, the Runner service is created using the Deployment Credentials (deadwood.local\DeployUser) but the service runs as Local System (or whatever Run As option was selected).
Step 4: The Runner service is created and evaluates the Conditions for the step. If the Conditions are met, the Runner service begins to run the first Step in the package. If any Conditions are not met, the step is skipped and the evaluation is repeated on the second step. Conditions are evaluated as Local System, which can cause curious results if a file condition looks for something within a user profile using a variable like %userprofile%, even if the step is set to run as "Logged on user".
In this example a 64-bit OS would not pass the first step’s Conditions but it would pass the second step’s.
An evaluation of step conditions is performed on each package step, since there are cases where the conditions might change from one step to a later step (e.g. updated PowerShell version, logged on state, a file or registry condition).
Step 5: In the case of our Silverlight install, when a Step runs and meets all conditions, it executes the files or commands from %WINDIR%\AdminArsenal\PDQDeployRunner\service-1\exec\Silverlight_x64-5.1.50901.0.exe on the target computer and passes the /q parameter (the /q in the Install Step’s Parameters field).
While MSI (and friends) have relatively standard silent parameters that are included in those Install steps, executable (*.exe) installers can vary widely. Please see this video, Finding Silent Parameters for Your Deployments (Using Google Fu), on how to find silent parameters/command line switches for your executable installer. For more information, see Considerations below.
Step 6: The Runner service waits for Silverlight installation to finish. A return code (also known as an Error Code or Exit Code) is sent from the Silverlight exe file and is returned to the Runner service on the target computer.
Step 7: At regular intervals, The PDQ console computer’s PDQDeploy service has been polling the Runner service on each target. When it detects the installation is complete (based on the return code) it returns the information to the PDQ Deploy database.
Step 8: The PDQ Deploy Console detects the change in Deployment status in the database and displays the deploy status (Success, Fail) based on the Success Return Codes specified in the Installer.
Step 9: Cleanup occurs, and the previously created directory, copied files, and runner service on the target machine are deleted.
- Install Step files with .MSI, .MSU or .MSP extensions are automatically passed the parameters needed to run silently. As explained above, if your installer file has another extension (such as .exe), you will likely need to include parameters/command line arguments in the Install Step, Parameters (Details tab). The parameters to run silently depend on the application being installed and are determined by the vendor of the application.
IMPORTANT: If the PDQ package requires a silent parameter for the installer file and no silent parameter is provided, your deployment will likely hang or result in an error.
When an application is installed from PDQ Deploy, windows that are normally shown when an application is installed (such as accepting a EULA or choosing an installation path) cannot be viewed. If the Installation is expecting user input, nothing can provide that input since all installation windows are hidden.
- A return code is defined by the vendor of the application you are deploying or by the OS. In the Silverlight example, the return codes are provided by Microsoft. Return codes help determine the installation state. In many cases, the standard return code for a success is 0. Other non-zero success codes are 3010 and 1641. If any code is returned that is NOT specified in the Success Codes field (Details tab), the installation will be marked as a failure.
The vast majority of error codes deal with problems outside of PDQ Deploy and deal with the specifics of the application being deployed.
Microsoft has a list of Return Codes returned by the Windows Installer. You can find other Microsoft Return Codes here.
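A minimal sketch of that classification rule (illustrative, not PDQ Deploy's implementation; the success codes 0, 3010, and 1641 are the ones mentioned above):

```python
def deployment_status(exit_code, success_codes=(0, 3010, 1641)):
    """Classify an installer exit code the way the article describes:
    any code not listed in the Success Codes field marks a failure."""
    return "Success" if exit_code in success_codes else "Fail"
```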
|
OPCFW_CODE
|
PJSIP version 2.3 is released with main focus on video on iOS, which includes native capture using AVFoundation, native preview, and a choice between two renderer backends: OpenGL ES 2 or UIView. We also add support for OpenH264 and Libyuv as alternative to FFmpeg/Libav in providing H.264 video codec and image converter functionalities.
Tags: iOS, OpenH264, video
PJSIP version 2.2 is released, with the focus on new PJSUA2 API, an Object Oriented API for C++, Java/Android, and Python. See the new PJSUA2 Book, a comprehensive tutorial/documentation specifically for this API, for more info.
Also Android is now supported. Apart from these, we added support for 64bit Windows, third party echo canceller for Android, iOS, Windows, etc, and closed over a hundred tickets on this release. See the Release Notes for 2.2 for more info, or head straight to the Download page to get the source code.
PJSIP version 2.0 has been out for eight months and we've had two releases since then, so we feel that the 2.x line should be maturing by now.
It’s time to say goodbye to the venerable version 1.x. It’s proven to be a successful product, in my opinion. It’s been with us for four years, 25 releases, and it has met a lot of people’s expectations. Although granted that it has its share of deficiencies too. We could only hope that version 2 would be as successful.
But we can’t maintain 1.x forever. We feel our limited development resource would be better spent at developing 2.x instead. Thus we are giving version 1.x support for six more months from now, until September 2013. We hope that this, along with the eight months that we’ve had, gives enough time for everyone to upgrade their 1.x based product to PJSIP version 2. We do encourage everyone to upgrade to PJSIP version 2.
PJSIP version 1.16 was released. This version contains many bug fixes backported from version 2.1. The next version in PJSIP 1.x version line, version 1.17, will be the last of the 1.x series. It will only contain critical fixes from the 2.x, hence once again we recommend everyone to upgrade to version 2 to get the full benefit of PJSIP.
Thank you for using PJSIP!
PJSIP version 2.1 is released with primary focus on BB10 and support for SILK and OpenCore AMR-WB codecs. We also managed to fix synchronization issues in PJNATH and last but not least, the release also contains bug fixes and improved interoperability after SIPit 30 testing in North Carolina just a couple of weeks ago.
As usual please see the Download page for more info.
PJSIP version 2.0.1 is released. This interim release is mainly to support SDL2, which was made available at about the same time PJSIP 2.0 was released and which broke our build that relied on SDL-1.3. In the mean time, we already have the initial support for BlackBerry 10 (BB10) in our SVN trunk, hence it gets included with this release as well.
As usual please see the Download page for more info. Thanks.
After many months in the making, 2 alphas, a beta, and an rc, pjsip version 2.0 is now released.
There is no big surprise on the feature as it has been known for sometime now:
- Video support, currently available for desktop platforms.
- On demand media transport
- Support for 3rd party media stack in PJSUA-LIB
Getting Started guides have been updated as well, let us know if something doesn’t work.
Version 1.14.2 is also released
Along with 2.0, an update to the version 1.14 is also released.
You can download pjsip from the usual place.
|
OPCFW_CODE
|
I need this because I have to work out whether to display a number as a surd or not in a maths application I'm developing at the moment.
For integer inputs, only the square roots of square numbers are rational. So your problem boils down to finding out whether your number is a square number. Compare the question: What's a good algorithm to determine if an input is a perfect square?
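A perfect-square test along those lines can be sketched with an exact integer square root (a generic sketch, not tied to any particular answer above):

```python
import math

def is_perfect_square(n):
    """True iff the non-negative integer n is a perfect square."""
    if n < 0:
        return False
    r = math.isqrt(n)  # exact integer square root, no float rounding
    return r * r == n
```

Using math.isqrt avoids the floating-point inaccuracy discussed later in this thread, since everything stays in integer arithmetic.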
If you have rational numbers as inputs (that is, a number provided as the ratio between two integers), check that both numerator and denominator are perfect squares.
For floating-point values, there is probably no solution, because you can't check whether a number is rational from a truncated decimal representation.
From Wikipedia: the square root of x is rational if and only if x is a rational number that can be represented as a ratio of two perfect squares.
So you need to find a rational approximation for your input number. So far the only algorithm I've nailed down that does this task is written in Saturn assembler for the HP48 series of calculators.
After reading comments and the answer to another question I have since asked, I realised that the problem came from a floating-point inaccuracy which meant that some values (e.g. 0.01) would fail the logical check at the end of the program. I have amended it to use NSDecimalNumber variables instead.
double num, originalnum, multiplier;
int a;
NSLog(@"Enter a number");
scanf("%lf", &num);
// keep a copy of the original number
originalnum = num;
// increases the number until it is an integer, and stores the amount of times it does that in a
for (a = 1; fmod(num, 1) != 0; a++) num *= 10;
a--;
// when square-rooted the decimal points need to be added back in
multiplier = pow(10, (a / 2));
if (fmod(originalnum, 1) != 0) multiplier = 10;
NSDecimalNumber *temp =
If you're dealing with integers, note that a positive integer has a rational square root if and only if it has an integer square root, that is, if it is a perfect square. For information on testing for that, please see this question.
On https://math.stackexchange.com/ there is the question "What rational numbers have rational square roots?" with an answer from Jakube that says that for "...rational numbers, an answer is to identify if the numerator and denominator are integers raised to the power of 2."
Good ways to work out whether natural numbers are perfect squares depend on the range of natural numbers the function supports (and the programming language being used), the memory available, etc. Below are a set of useful links:
I developed and tested a solution in Java that works well enough for me with a set of natural numbers. The gist of this is provided below. This code depends on BigMath and is implemented in agdt-java-math, albeit in a couple of different classes:
/**
 * @return The square root of x if it is rational, {@code null} otherwise.
 */
public static BigRational getSqrtRational(BigRational x) {
    BigInteger[] numden = getNumeratorAndDenominator(x);
    BigInteger nums = getPerfectSquareRoot(numden[0]);
    if (nums != null) {
        BigInteger dens = getPerfectSquareRoot(numden[1]);
        if (dens != null) {
            return BigRational.valueOf(nums).divide(BigRational.valueOf(dens));
        }
    }
    return null;
}

/**
 * @return The numerator and denominator of {@code x}.
 */
public static BigInteger[] getNumeratorAndDenominator(BigRational x) {
    BigInteger[] r = new BigInteger[2];
    r[0] = x.getNumeratorBigInteger();
    r[1] = x.getDenominatorBigInteger();
    if (Math_BigInteger.isDivisibleBy(r[0], r[1])) {
        r[0] = r[0].divide(r[1]);
        r[1] = BigInteger.ONE;
    }
    return r;
}
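The same approach as the Java solution can be sketched in Python with only the standard library, since Fraction keeps rationals in lowest terms:

```python
import math
from fractions import Fraction

def sqrt_rational(x):
    """Exact rational square root of a Fraction, or None if irrational.

    Fraction keeps x in lowest terms, so x has a rational square root
    exactly when its numerator and denominator are both perfect
    squares -- the same reduction the Java version performs.
    """
    if x < 0:
        return None
    rn, rd = math.isqrt(x.numerator), math.isqrt(x.denominator)
    if rn * rn == x.numerator and rd * rd == x.denominator:
        return Fraction(rn, rd)
    return None
```

For example, sqrt_rational(Fraction(9, 4)) yields 3/2, while sqrt_rational(Fraction(2)) yields None.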
|
OPCFW_CODE
|
University of Illinois, Urbana-Champaign (Computer Science)
Everyone's mind works at least a little differently, and the right way to learn is the way that happens to work best for you at the moment. That's my philosophy in a nutshell. If you know what you need from a tutor, I'm here to listen and work with you; I'm not going to push a different lesson plan on you. Yet if you're not entirely sure what you need, I can help there too. I have enough experience --- real-world and academic --- to bring a valuable perspective.
It also happens that I've been tutoring for as long as I can remember. Rarely as an occupation, mind you, but I've always loved figuring things out and helping others do so too. I was that kid that other kids went to for homework help; I enjoyed it and I'm very patient. I also try hard to listen and ask questions to help me understand what would move you forward. Some stuff you know and don't want to spend excessive time on to the detriment of what you're still figuring out. You can expect that even before our first meeting, I'll want to know of your background like that, so that I can prepare to be of help right from the outset.
I'm a professional software engineer, and there are many STEAM-related subjects that I'm happy to tutor --- examples include SAT/ACT plus AP calculus, computer science, and physics --- although I believe I have particularly useful experience in the area of Python programming for science and engineering, by virtue of doing it almost daily, not only tutoring it.
Python tutoring: $75/hour
Lionel is very knowledgeable and experienced in Python. He helped me quickly understand the complex Python code I recently got and helped me run through it. He quickly resolves any issues to keep me making fast progress. And he sent a very detailed summary for every lesson we had. I'm glad that I found such a good Python tutor.
Glad I found Lionel here on WyzAnt. AP Computer Science was kicking my butt. Lionel is patient and explains things in a concise and easily understandable way.
My B.S. degree is in Computer Science, from the school of engineering at the University of Illinois in Urbana-Champaign (UIUC). While there, I worked as a student consultant, providing one-on-one problem solving in the Computing Services Office, at the National Center for Supercomputing Applications (NCSA), and in the Department of Computer Science.
I have education, professional background, and mentoring experience across most of the computer science field, including common topics (on the AP Computer Science exam and introductory CS courses) such as object-orientation, data structures and algorithms, software construction, and the broader context surrounding computer science.
Over a decade ago, I developed my first Python program: a simulator for server testing of OnStar's turn-by-turn navigation system. To this day, Python remains my "go-to" language for prototyping and general software development. I regularly use Python with popular libraries such as pandas and matplotlib, and have applied it to fields such as GIS and machine learning.
My interest in tutoring Python is at least in part motivated by the satisfaction it gives me to see people become productive with it and, I hope, discover for themselves how fun and easy Python can be to learn and apply to real-world programming problems.
Being an experienced software engineer, one of the things I appreciate most about Python is its tenet that "readability counts": useful code is read more often than it is written. As for the kinds of jobs I like to take on: While I have some experience to share in web-related Python programming, my greatest interest is in helping apply Python to science, engineering, and finance work.
The essential thing is this: No matter where you are on the Python learning curve, I'm happy to help.
|
OPCFW_CODE
|
Retrieve ECG for Display
This integration profile provides access throughout the enterprise to electrocardiogram (ECG) documents for review purposes
Clinicians need a simple means to access and view electrocardiograms (ECGs) from anywhere within and beyond the hospital enterprise. The ECGs should consist of “diagnostic quality” waveforms, measurements, and interpretations. The primary goal is to retrieve resting 12-lead ECGs, but retrieving ECG waveforms gathered during stress, Holter, and other diagnostic tests is also desirable. Typically, these ECGs are already stored in an ECG management system. The focus of this Profile is on retrieval and display, not the process of ordering, acquiring, storing, interpreting, or analyzing the ECGs.
IHE provides benefits to clinicians and administrative staff focusing on patient care and reducing inefficiencies, specifically:
- Make ECG viewing available in many locations, increasing the access to the ECG information
- Simplify and standardize the ECG access and viewing process
- Remove the need to “find the printed ECG” (lost on a piece of paper or moving to a special ECG-only workstation)
- Allow multiple clinicians to view the ECG simultaneously for real-time conferencing
- Provide diagnostic display resolution
- Allow clinicians to view ECGs from outside of the physical hospital walls (in coordination with hospital security policies)
- Reduce the need for duplicate procedures
- Ensure that a multi-vendor environment will function correctly
- Manage and simplify the purchasing process
- Reduce “switching costs” when a new system is purchased
- Select the “best solutions” from multiple vendors and reduce vendor integration issues, rather than restriction to a single-vendor all-encompassing solution
There are several scenarios in a hospital in which a cardiologist needs to retrieve and review ECG data (e.g. in preparation for a cath procedure, or in order to compare a previous ECG with a recent one). Historically this happened by calling up the paper-based record.
This profile describes the selection, retrieval and display of ECG data in order to allow review of ECGs for a given patient on the same display system the cardiologist is working on, using the same patient context and providing all relevant information about the ECG (patient ID and name, recording time and date, report status, labeling of leads, voltage and timescale per lead, frequency, aspect ratio, ...). The display system could be an EHR system, a cath or echo workstation, a CVIS, or a web-based display application. The user can simply retrieve the information from the information source, e.g. the ECG management and archiving system.
The key Actors in ECG Display and examples of real-world products which might implement these roles are:
- Display – Electronic Healthcare Record, Echo or Cath Imaging Viewing workstation, Cardiology Information System, or any simple web application which understands how to query the Information Source
- Information Source – ECG Management and Archiving System
Actors & Transactions:
Profile Status: Final Text
- IETF RFC 1738: Uniform Resource Locators (URL), Dec 1994.
- IETF RFC 2616: Hypertext Transfer Protocol HTTP 1.1
- Extensible Markup Language (XML): 1.0 (Second Edition). W3C recommendation, Oct. 2006
- Webservices Description Language (WSDL): 1.1. W3C Note March 2001
- Extensible Hypertext Markup Language (XHTML): 1.0 (Second Edition). W3C Recommendation Jan. 2000
- XHTML Basic: W3C Recommendation Dec. 2000
Related Profiles Retrieve Information for Display
This page is based on the Profile Template
|
OPCFW_CODE
|
kernel-packages team mailing list archive
[Bug 1573597] Re: Problems with shut down and boot (Lenovo T450s)
I have upgraded to 16.04. There are new problems with RAM/swap
shutdown, which didn't work once in recent months on 15.10, works as it
should now under 16.04
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
Problems with shut down and boot (Lenovo T450s)
Status in linux package in Ubuntu:
My Lenovo T450s under Ubuntu 15.10 does not poweroff normally.
Instead, it hangs on "Reached target shutdown". (See image)
Along the way, I believe there is briefly a full page of "drm:gen8_irq_handler [i915] *ERROR* The master control interrupt lied (SDE)!" messages.
If I leave it there for a long time (> 10 minutes?) it does/may
actually turn off without me forcing it off, but this makes no
difference to the subsequent behaviour. Moreover, if I press Ctrl-
Alt-Del fast, many times, I get the "Rebooting immediately" message,
but it doesn't.
When I try to boot, I experience the following. It requires at least
two reboots to be able to log in:
On the first boot, the Grub page is skipped, and it hangs on the Ubuntu logo with moving dots.
Eventually, I reboot it.
The second time (this is all reproducible), it boots but very slowly. It stalls at "brltty.service" and eventually gives me a message
about dev-disk-by , where it waits for 90 seconds before booting.
After that, things proceed to a normal login screen. Behaviour after that is sometimes normal.
This is a 1 month-old computer.
DistroRelease: Ubuntu 15.10
Package: linux-image-4.2.0-35-generic 4.2.0-35.40
ProcVersionSignature: Ubuntu 4.2.0-35.40-generic 4.2.8-ckt5
Uname: Linux 4.2.0-35-generic x86_64
USER PID ACCESS COMMAND
/dev/snd/controlC2: cpbl 3380 F.... pulseaudio
/dev/snd/controlC0: cpbl 3380 F.... pulseaudio
/dev/snd/controlC1: cpbl 3380 F.... pulseaudio
Date: Fri Apr 22 08:42:59 2016
InstallationDate: Installed on 2016-02-12 (69 days ago)
InstallationMedia: Ubuntu 15.10 "Wily Werewolf" - Release amd64 (20151021)
MachineType: LENOVO 20BXCTO1WW
ProcFB: 0 inteldrmfb
ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-4.2.0-35-generic.efi.signed root=UUID=ea2fb511-8d62-4b96-bdaa-4b3f7bf79cc6 ro noprompt quiet splash
UdevLog: Error: [Errno 2] No such file or directory: '/var/log/udev'
UpgradeStatus: No upgrade log present (probably fresh install)
dmi.bios.version: JBET54WW (1.19 )
dmi.board.asset.tag: Not Available
dmi.board.version: SDK0J40697 WIN
dmi.chassis.asset.tag: No Asset Information
dmi.product.version: ThinkPad T450s
To manage notifications about this bug go to:
|
OPCFW_CODE
|
import countByType from './countByType';
/**
* Count the number of messages of each type per user.
* @param {Array} messages Message rows with userId, name, and type.
* @returns {Array} Per-user objects with id, name, and counts by type.
*/
const countByUserByType = messages => {
// Get the unique types of messages.
const uniqueTypes = [];
messages.forEach(row => {
if (!uniqueTypes.includes(row.type)) {
uniqueTypes.push(row.type);
}
});
// Get unique user IDs.
const uniqueUsers = {};
messages.forEach(row => {
const { userId, name } = row;
if (!(userId in uniqueUsers)) {
uniqueUsers[userId] = name;
}
});
// Get count of each type of message per user.
const userCounts = Object.keys(uniqueUsers).map(userId => {
const userMessages = messages.filter(row => {
// userId is an object key and thus a string.
return String(row.userId) === userId;
});
const countsByType = countByType(userMessages);
const userName = uniqueUsers[userId];
return {
id: userId,
name: userName,
counts: countsByType,
};
});
return userCounts;
};
export default countByUserByType;
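A minimal, self-contained usage sketch of the function above (the real `./countByType` module is not shown here, so a hypothetical stand-in is inlined):

```javascript
// Hypothetical stand-in for the imported ./countByType helper:
// tallies rows into { type: count } pairs.
const countByType = rows =>
  rows.reduce((acc, { type }) => {
    acc[type] = (acc[type] || 0) + 1;
    return acc;
  }, {});

// Compact restatement of countByUserByType for this demo.
const countByUserByType = messages => {
  const uniqueUsers = {};
  messages.forEach(({ userId, name }) => {
    if (!(userId in uniqueUsers)) uniqueUsers[userId] = name;
  });
  // Object keys are strings, hence the String() comparison.
  return Object.keys(uniqueUsers).map(userId => ({
    id: userId,
    name: uniqueUsers[userId],
    counts: countByType(messages.filter(row => String(row.userId) === userId)),
  }));
};

const messages = [
  { userId: 1, name: 'Ann', type: 'text' },
  { userId: 1, name: 'Ann', type: 'image' },
  { userId: 2, name: 'Bo', type: 'text' },
];

console.log(JSON.stringify(countByUserByType(messages)));
// [{"id":"1","name":"Ann","counts":{"text":1,"image":1}},{"id":"2","name":"Bo","counts":{"text":1}}]
```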
|
STACK_EDU
|
Three things I’m currently trying to work on, but there never seems to be enough time.
- CSEC630 class work
- Malware Traffic Analysis exercises
Class is kind of depressing. A lot of reading PDFs, since we no longer have textbooks in the program, and we learned this week we don’t even have access to all of the PDFs.
I wonder why I’m taking a class where the lab assignment is to run the VM with Snort on it and answer some questions. I would rather see something like “here is a pcap and NetFlow; using the information provided, create Snort rules”. This class is supposed to be part of a Master of Science in Cybersecurity, but to be honest the work and readings are like 100- to 200-level stuff. Or at least what I think is that level.
Every weekly reading we do, I wonder why we’re not doing the kind of research we are reading about. This is supposed to be a Master of Science and there isn’t much science to it. Some of the readings are just bad: the academic papers’ authors didn’t talk to the people who have to use the stuff they’re writing about.
Classes eat up too much time, and I wonder if I’m getting any value from them. I don’t think I am. Which makes me wonder how much my degree will really be worth.
Still working with Violent Python, Gray Hat Python, Black Hat Python and Automating OSINT. But I’m finding it hard to find time to work through the stuff, and a little disappointed that they are all Python 2. I’m really at the point that if you are still writing Python 2, the product isn’t worth the time. Charles L. Yost’s talk at DerbyCon, “Python3: It’s Time”, really colored my view on that.
If a tool is written in Python 2, I’m left wondering about the author’s commitment to the software, and its security. Which reminds me, I should get an SD card and test my GitHub scripts (written in shell script) for the RPi WIDS stuff, and see if I have to update or branch the code for newer RPis.
Malware Traffic Analysis exercises
I miss playing with pcaps; I just don’t seem to spend the time I used to in tcpdump or Wireshark. I read Chris Sanders’ Practical Packet Analysis, second edition, a couple of years ago, but over time stopped using it. (There is a new edition coming out, by the way.) Anyway, Malware Traffic Analysis looks like it might be fun, and a decent way to get back into the swing of things.
Though I think from an Incident Response and Threat Intelligence standpoint it’s a little limited. It only gets the lower-level data from the Pyramid of Pain, from what I’ve seen so far: IP addresses and domain names. If the pcap has a referrer, it might get a little bit more data. I’m curious how much of the Kill Chain and Diamond Model the traffic analysis will fill in. I think that is one thing I’ll do when I get to the exercise. I got Wireshark prepped. Now it’s just a question of finding time to practice.
Work has been reading company security policy, industry standards, and Federal regulations and recommendations (NIST). I feel a little isolated, but the workspace isn’t designed for solo reading; it’s designed for open collaboration. Which means that to avoid distraction and focus on the reading, I’m listening to isolating pink noise.
|
OPCFW_CODE
|
Skip invalid flows instead of stopping processing
Describe the bug:
If there is an invalid flow (e.g., one with a missing output), the operator does not generate config even for the valid flows.
This can lead to a DoS: if a user creates such a flow, all other users cannot create a new one. I think the fix should be simply to skip invalid flows.
{"level":"error","ts":1698751582.322028,"logger":"controller.logging","msg":"Reconciler error","reconciler group":"logging.banzaicloud.io","reconciler kind":"Logging","name":"rancher-logging-root","namespace":"","error":"failed to build model: referenced clusteroutput not found: ucn","errorVerbose":"referenced clusteroutput not found: ucn\ngithub.com/banzaicloud/logging-operator/pkg/resources/model.FlowForClusterFlow\n\t/workspace/pkg/resources/model/system.go:367\ngithub.com/banzaicloud/logging-operator/pkg/resources/model.CreateSystem\n\t/workspace/pkg/resources/model/system.go:95\ngithub.com/banzaicloud/logging-operator/controllers/logging.(*LoggingReconciler).clusterConfiguration\n\t/workspace/controllers/logging/logging_controller.go:197\ngithub.com/banzaicloud/logging-operator/controllers/logging.(*LoggingReconciler).Reconcile\n\t/workspace/controllers/logging/logging_controller.go:127\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.1/pkg/internal/controller/controller.go:114\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.1/pkg/internal/controller/controller.go:311\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.1/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.
11.1/pkg/internal/controller/controller.go:227\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1581\nfailed to build model","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.1/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.1/pkg/internal/controller/controller.go:227"}
rancher/mirrored-banzaicloud-logging-operator:3.17.10
/kind bug
One thing you can do instantly is to add a timeout-based configcheck, which I believe should help in more than just this scenario, since it actually starts fluentd instead of executing it with the syntax check flag only:
https://kube-logging.dev/4.4//docs/whats-new/#timeout-based-configuration-checks
In general, if you need stronger isolation the operator provides solutions where flow configurations are segregated into individual aggregators based on namespaces.
It would be recommended in multi-tenant situations because, even if we fix this specific error (invalid output reference), we cannot trivially verify every issue with the flows, for example if one flow consumes all the CPU, etc.
You can do basically two things at the moment:
Create separate logging resources with disjunct watchNamespaces that refer to your tenants' namespaces. In this case, the fluentbit agents (as many daemonsets as logging resources you have) will collect and send all the logs to each aggregator. If you don't let users create clusterflow resources, they will only be able to create flows that refer to their own namespaces, so even if the data is there, fluentd will drop everything that is not related to them.
A better solution is to use LoggingRoutes, because using that you can deploy a single fluentbit daemonset which can route logs to specific tenants, so that a tenant won't receive logs from other namespaces
More technical information about LoggingRoutes and multi-tenancy in general:
https://kube-logging.dev/4.4//docs/whats-new/#multitenancy-with-namespace-based-routing
We also have a blog post about this to summarize all aspects with some diagrams involved:
https://axoflow.com/multi-tenancy-using-logging-operator/
Finally don't hesitate to reach out over Discord or the CNCF Slack channel if you have further questions!
Hi, thanks for the reply. I had rather hoped that skipInvalidResources: true would help here. Isn't it for this case? But I agree that it is far too difficult to find which flow/output is the bad one. My initial issue was purely about the operator stopping flow processing when it found a flow without an existing output. In such a case, I asked to skip this flow, but probably the mentioned skipInvalidResources: true already does it? Rancher still uses the 3.x version and this was introduced in the 4.x version, so I missed it.
I have to admit I didn't know about that flag and it does exactly what you said. We can improve the situation a little bit by updating these issues back into the flow status for a better user experience.
Closing this as skipInvalidResources: true solves this exact issue. @xhejtman have you been able to upgrade? If not let us know over slack or discord so that we might be able to help.
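For reference, a minimal sketch of where that flag lives on the Logging resource (field placement assumed from the 4.x docs; the name and control namespace below are illustrative Rancher defaults):

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: rancher-logging-root
spec:
  controlNamespace: cattle-logging-system
  # Skip flows/outputs that fail validation instead of aborting
  # the whole config render.
  skipInvalidResources: true
  fluentd: {}
  fluentbit: {}
```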
forgot to follow up: @csabatuz-chess after all, are you interested in the multi-tenant approach I was referring to here? if yes please let me know!
|
GITHUB_ARCHIVE
|
I would like to import a short .mov file of a flame to go over two characters fighting.
Is this possible? If not, is there another way to do it in CA?
Silly me. I thought I could just put it on top of the scene, like an MP3 file.
PS I bought this. Please do not copy it, as I really like this company and don't want to get into licensing trouble. Their stuff is really good and extremely reasonable. (colourbox.com)
I think you should be able to feed it into Adobe Media Encoder and make it output a sequence of PNG images. You can then use the "Import Cycle" menu item to load up the PNG images into a cycle layer. Then you don't play the video - you sequence through the images as a puppet.
To do this, start up AME, then select "Add Source" and select the video file to import. Then in the Queue panel, select "PNG" as the first drop-down column and click "run" (green triangle) to start the encoder running. It will create a directory of PNG files, one per frame. You can click on the "Output File" to change the directory etc. the sequence will be written to. (I would create an empty directory to put the files into.) Give the output file a name like "foo" - it will append numbers to the end of each file, creating "foo1.png", "foo2.png" etc.
Once you have the PNG files, you can add them to a sublayer in a puppet, but you can also create a new puppet from the files, which might be easier. In the "File" menu there is "Import Cycle..." - select the first file (number 1) in the directory. This created a new puppet called "Renders Cycle" for me. Then add the "Cycle Layers" behavior onto the layer above all the images and fiddle with the properties (e.g. do you want it to loop once or forever? start immediately? etc.). The Cycle Layers property will be on the root of the puppet.
Silly me tried with a long video and it took a while... but did work.
(There was a discussion a week or two about this very topic in the forums.)
That is a fairly long sequence, at fairly high resolution. Unless you have lots of memory that might not work using cycle layers (if you need the whole video). I tried something that long on my laptop and it crashed with out of memory problems. You can try, but it failed on my laptop.
This is where I would suggest using something like Premiere Pro (or After Effects). Someone might know a better way, but this is the end result. Is this what you are trying to achieve?
I had a CH video file created first, then I loaded up your MOV file of flames. You then layer the tracks so the flames track is above the CH file.
However, the flames movie file does not have transparency support - it's black! So I tried the "Color Key" effect. Open it up under the effects panel:
Then drag that icon over the MOV file in the timeline panel - you will see a little green FX icon appear on the MOV file if it works.
If you select the MOV file then open up the "Effects Control" panel, you can change the settings. Here is what I used.
In particular, I set the "Key Color" under the "Color Key" effect to black. This tells it "I want black to be transparent". I then fiddled with the "Color Tolerance" to get it to look about right. I also fiddled with "Edge Feather" to soften the edges a bit. Just try different values until it looks good.
But the flames were still very solid (could not see through them), so I also adjusted "Opacity" which seems to be added by default to clips, so I reduced the "100%" to "70%" to make the flames a bit transparent.
I am not an expert in this - there may be better articles on the web you can find. The idea of making a color transparent is called "chroma keying". Unfortunately the video file did not seem to have an alpha channel - well, at least not one that Premiere Pro recognized. So you need to use an effect to do this. There are a few effects listed there - I just picked one. Another article I found recommended Ultra Key, but I failed to get it to work - Color Key sort of worked.
After Effects might do a better job than Premiere Pro as well - this sort of thing is what it is designed to do. (Add effects afterwards!) But this one was so simple that I just used Premiere Pro instead.
I should add, if you only want say a second of flames, you can probably use cycle layers. How long do you need?
When compositing flames in CH or AE that are shot on black, instead of keying, it will look better if you use the Linear Dodge (Add) or Screen blending mode, which is a layer property in both apps.
|
OPCFW_CODE
|
Fey Evolution Merchant – Chapter 161
Given his strength, Lin Yuan had the potential to climb to the 100th floor of the Star Tower and ascend the Celestial Stairway.
Two months earlier, a youth had given him hope for a new lease on life when he was desperate and down, allowing him to once again find his worth, his aspiration, his determination, and the meaning of his life.
A man with a pair of peach-blossom eyes shouted at Liu Jie, his tone seeming to carry a fit of indescribable anger.
Liu Jie, who had been helping Lin Yuan find locations for his store in the Royal Capital, was standing there in a confrontation.
If Cold Moon had not discovered that the Moon Empress was annoyed and taken the initiative to crush the cell phone on her behalf, the Moon Empress would have crushed it instead.
The Moon Empress did not save many numbers in her phone besides those of exceptional experts from the Radiance Federation. Of course, there were exceptions, such as Lin Yuan.
Liu Jie had suddenly lost his abilities and had pooled all the savings in his Star Web Card for Bai Hao, so the latter did not know how Liu Jie had been getting along all these years.
The Moon Empress shook her hand, and the light-purple concoction turned into an aromatic mist that enveloped a lotus fey in the lotus pond. The fey quickly absorbed the light-purple aromatic mist, and within a minute, it evolved into Diamond/Legend.
On hearing that, the Moon Empress could not help but hold her forehead. She had been pleased for nothing. Her disciple sought her help not because he was in trouble but because of his sibling.
However, after Liu Jie had gotten injured and found that he could not recover, he felt downhearted and left the Royal Capital.
At that moment, Bai Hao said, “Before Liu Jie’s Insect Queen is cured, I won’t spend a single penny. I’ll save it all for its treatment.” He then let out a strange cry. “Ugh! I can’t reward the Sparrow Voice Loli Goddess anymore during this period. I don’t know if I can keep my place as the top contributing fan.”
Afterward, Lin Yuan told her about Chu Ci’s situation and his thoughts.
During this period, Liu Jie had looked at a number of places and had sent Lin Yuan the details of each one.
After he confirmed his store location in the Royal Capital, he would enter a long period of seclusion inside. First, he had to fuse Chimey with the Twilight Willpower Rune and evolve it into a Fantasy Breed. He would then place the Acid Corrosion Queen Bee in Red Thorn’s corrosive cavity and let it hatch. After that, he would evolve the Acid Corrosion Queen Bee to Gold.
Lin Yuan had always kept Liu Jie’s Insect Queen in the Pure Land of Bliss. In these few months, the Insect Queen, which had been in imminent danger, had mostly recovered. As long as he spent a few days treating it, it could fully recover. At that point, Liu Jie could regain the Heart of Insect Swarm’s strength, which ranked Sequence #39 in the Radiance Hundred Sequence.
When Liu Jie had been in trouble, the three of them had pooled their money together to buy spiritual ingredients for Bai Hao to evolve his fey.
If he kept battling and raising his rank on the Celestial Stairway, he could probably be matched with the Radiance Federation’s young pinnacle experts.
Bai Hao then left with Wang Fan, who wanted to say something. Wang Fan sighed as he looked at Bai Hao and did not say what he wanted to.
As a result, the Moon Empress, who rarely went onto Star Web, even searched for articles there but did not find any answers.
If she wanted to meet anyone, she could just summon them. If anyone wanted to meet her, they had to see whether they were worthy enough to do so.
Upon learning this, the Moon Empress marveled at Lin Yuan’s strength. She had always known that he had outstanding talent as a Creation Master, but she did not expect him to be such a powerful combat-class spirit qi professional.
However, the Moon Empress replied very solemnly, “Lin Yuan, don’t worry. I’ll ask Cold Moon to get this done.”
Bai Hao snorted and turned away from Liu Jie.
Bai Hao took Wang Fan’s Star Web Card and replied, “I’ve saved some money in the past two years, and I have 32,000,000 Radiance dollars in my Star Web Card. Counting yours in, we have over 50,000,000 Radiance dollars. Let’s see if we can get Liu Jie’s Insect Queen cured!”
He had bidden goodbye to his identity as the Heart of Insect Swarm, the former Sequence #39 in the Radiance Hundred Sequence, and had become an ordinary down-and-out adventurer.
|
OPCFW_CODE
|
View-source button when editing is unavailable
There are quite a few cases where you may want to view the markdown source behind a question.
The easiest way to do this is via the [edit] link below the post.
Unfortunately, the [edit] link is sometimes missing: on locked posts, on posts with pending edits, and when the queue is full.
I'd think this use case was inconsequential, but it happened to me twice yesterday and a third time this week, so I thought I'd propose this:
Replace [edit] with [view source] when a post cannot be edited
This will solve the issue of people getting confused about it as well: Missing edit link?
Maybe we can add a reason for the post's uneditableness in the view source view.
Nice idea - of course, it would need to show a read-only view of the markdown.
"This will solve the issue of people getting confused about it as well Missing edit link?" Bold assertion!
@Oded that's rather obvious :) <textarea readonly> does the trick.
And removing the [Save Edits] button...
@Oded yep! And the markdown editor. Just a bare-bones readonly textarea with possibly a reason floating by.
No, I say leave the fields all editable and submittable, except failing silently.
A better way of solving the disappearing edit link would be to leave it there, and simply show an error message. (I'm still missing the part of your feature request where you explain why you needed to see the source.)
@TheE once to check out how something was done in markdown, and twice to copy-paste a metapost.
We already have a "view source" link.
I think a better solution to your "view source" dilemma is just always providing a link to the revision history, even when there have been no revisions to the post yet. The revision history already has a "view source" link for each edit which changes the body, including the original post.
The only real problem is it isn't easily accessible for posts which haven't had any revisions, and thus the "edited x time ago" link doesn't appear to get there. Sure, you can manually create the link to get into the history, but that's an excessive amount of effort just to get the source.
For questions: "All you have to do" is edit the URL and you're set. Change the questions part to posts and everything after the ID number to revisions.
For answers: You actually have to find the post ID first. Fortunately, this isn't too difficult. You can find the post's ID in the link popup box, it's the first number (the second is your user ID). Past that, the URL is exactly the same as a question's revision history, except you need to change out the ID number as well.
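The URL surgery described above can be sketched as a tiny helper (hypothetical code, not a site feature; the post ID 123456 is made up):

```javascript
// Turn a question URL into its revision-history URL:
// /questions/<id>/... becomes /posts/<id>/revisions.
const toRevisionHistory = url => {
  const match = url.match(/^(https?:\/\/[^/]+)\/questions\/(\d+)/);
  if (!match) throw new Error('not a question URL');
  const [, host, id] = match;
  return `${host}/posts/${id}/revisions`;
};

console.log(toRevisionHistory('https://meta.stackexchange.com/questions/123456/view-source'));
// https://meta.stackexchange.com/posts/123456/revisions
```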
As for "relieving confusion," I think when there is a suggested edit pending, it should display the same for everyone. Even if they can't approve/reject it, why not show it to them? Just add a notice at the top "This edit is pending approval. You cannot make edits to this post until it has been peer reviewed."
I know we have a "view source" feature but I find it annoying and nonintuitive (especially for newbies - and these are the guys who need view source most) to get to it via URL manipulation.
As for an always-revhistory link and 'always showing the edit button', isn't that more complicated?
How is it any more complicated than what you're proposing? I honestly think that showing them an edit-like screen to "view the source" when they can't actually edit would be even more confusing.
Oh yeah, actually they're both at the same level of complicatedness :). As for "showing them an edit-like screen to 'view the source' when they can't actually edit would be even more confusing" - try editing Wikipedia's Main Page. You'll see what I mean by "Maybe we can add a reason for the post's uneditableness in the view source view." It's not confusing at all.
|
STACK_EXCHANGE
|
GatsbyJS Core Web Vitals: How To Go Green With Lighthouse v9
I'm not the only one it seems.
The Core Web Vitals performance requirements have been really hard on Gatsby and other JS bundled sites.
Google Search Console’s “Page Experience” shows:
Your site has no URLs with a good page experience
For this reason, I redeveloped my personal blog (the one you’re reading) with 11ty to test the performance difference.
The results: 11ty destroys Gatsby on performance.
This makes sense since all 11ty does is generate a plain old, regular HTML + CSS site with no added extras or JS bundling unless you choose to add it in yourself.
But I wasn’t satisfied with just abandoning GatsbyJS and React on my bigger sites (I’ve fallen in love with it), so I spent the last few weeks tweaking and optimizing like mad to see what I could achieve.
I’m happy to report that I seem to have mostly achieved green 90+ scores on my large GatsbyJS site (not all pages though which I’ll explain below).
I thought it might be helpful to summarize some steps I took to make it happen in case anyone else is struggling.
If you have a 3 page portfolio site, then you don’t get to lecture anyone on speed
So you have a pretty portfolio page with a bio and contact form, and you’ve achieved a 95+ green performance score?
That’s not an achievement, my friend.
Way too many boasts of high performance on sites that weigh nothing.
I have several Gatsby sites now that have at least 500 posts each, average 2000+ word articles, full of images, videos, podcasts, audio components to play language snippets, and so on.
Some might even argue that Gatsby is unsuitable for sites like mine.
But I have managed to greatly improve speed and achieve a fairly consistent green performance score on most pages.
With the Google Core Web Vitals algo update approaching, I think I’m safe.
Here’s what I’ve done.
How to tweak and optimize GatsbyJS for Core Web Vitals and green perf scores
I’ll share some steps I took in no particular order.
Some of this is probably no-brainer stuff for experienced React devs. I’m quite new to React myself (only 2+ years so far) so still discovering best practices.
Take bundle size seriously
When you first start out with Gatsby (and indeed React), you tend to go a bit crazy adding node packages.
Node packages and Gatsby plugins are a bit like shopping on an app store - you see a bunch of cool, useful stuff or things like utility libraries that save you time and add them without realizing the cost.
A well-known example is lodash. Importing the entire lodash library is a costly mistake - the first one I rectified.
Instead of importing the whole thing - import what you need specifically.
Or don’t import it at all and see if you can simplify whatever you’re doing with a custom utility.
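A sketch of both options (the per-method lodash path is the documented cherry-pick style; the hand-rolled groupBy below is a hypothetical replacement):

```javascript
// Option 1: cherry-pick a single method instead of the whole library:
//   import groupBy from 'lodash/groupBy';  // not: import _ from 'lodash'

// Option 2: a tiny custom utility is often all you need.
const groupBy = (items, keyFn) =>
  items.reduce((acc, item) => {
    const key = keyFn(item);
    (acc[key] = acc[key] || []).push(item);
    return acc;
  }, {});

console.log(groupBy(['one', 'two', 'three'], w => w.length));
```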
In my case, I had also added bulky icon packages (because I needed one or two svg icons), framer motion for basic animation stuff that CSS was perfectly capable of doing, progress bars, emojis and so on.
All this stuff adds up to literally megs of bundle size.
For this, I highly recommend the gatsby-plugin-perf-budgets plugin to analyze your bundle in great detail.
Cutting this down is one of the easiest quick wins you can make.
Clean up and trim your code
When you build a site with 11ty, Jekyll or Hugo, your templating code is rendered into HTML.
Doesn’t really matter how messy or convoluted your template is - it’ll output an HTML document and nothing more.
So no issues.
With Gatsby, you end up with a static HTML page that is hydrated with React and all that messy, shitty code you wrote directly affects how well this performs.
If, like me, you have a boatload of confusing logic (e.g. lots of conditional rendering stuff), you can expect it to hurt your site’s performance as the browser has to work extra hard to parse it all.
Your coverage tool will show lots of red unused JS too.
Simplify your code as much as possible, remove repetitive code, try to do more with less.
Don’t use styled-components - switch to Linaria or CSS Modules
The verdict is in on this one.
I love styled-components (and emotion) but it sucks balls for performance on a site with lots of components like mine.
Removing styled-components completely and switching to Linaria immediately increased my Lighthouse perf score by at least 20 points.
Linaria is still kinda buggy. I kept getting the Gatsby white screen of death when using it if there was a syntax mistake or when passing style props to an MDX component, but it displayed no console errors to help me diagnose what the problem was.
Plus you get a crapload of irritating “conflicting order” warnings (mini-css-extract-plugin) that you need to disable using webpack (safe to do so).
The reason why I chose Linaria over CSS Modules is that I could continue styling the same way I did with styled-components (e.g. const Component = styled.div), but instead of having the style bloat up the page in the browser, the CSS is extracted and scoped.
Best of both worlds.
Note: it will slow down build times a little.
MDX is a blessing and a curse
I absolutely love MDX .
The problem is - when you import an MDX component in a blog post, you need to remember that it gets bundled with your whole app/site.
It’s not scoped to that page.
I had at least half a dozen fairly heavy MDX components I was using that were weighing the whole site down.
There is a hacky way around this.
Firstly, don’t use import statements inside your MDX files.
This was a mistake of mine - every time I’d write a post, I’d import my components at the top of the article.
Remove these, and create a separate MDXComponents file, then pass that as a components prop to your MDXProvider.
In your MDXComponents file, you can then lazy load those individual components.
This is where loadable-components comes in.
You can use loadable-components here, but there is one major caveat (perhaps it’s a bug/issue that should be raised).
Unfortunately, you can’t use gatsby-plugin-loadable-components-ssr if you do this - the components must be client-side only or it’ll throw errors at you.
So the upside is: you can lazy load MDX components and stop them weighing your site bundle down.
The downside is: you can’t server-side render them for SEO value.
This is great for audio play components and so on, terrible for any textual data that you want to get indexed by Google.
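To make the wiring concrete, here's a sketch of such an MDXComponents file (component names and paths are hypothetical; assumes @loadable/component and gatsby-plugin-mdx):

```jsx
// src/components/MDXComponents.js
import loadable from '@loadable/component';

// Client-side-only lazy chunks - not SSR'd, so keep anything
// SEO-relevant out of these.
const AudioPlayer = loadable(() => import('./AudioPlayer'));
const LanguageSnippet = loadable(() => import('./LanguageSnippet'));

export default { AudioPlayer, LanguageSnippet };
```

This object is then passed as the components prop to MDXProvider (e.g. in wrapRootElement), so posts can use these components without per-post import statements.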
Use the hell out of loadable-components
While I’m on the matter of loadable-components, use it a lot.
It’s even listed in the Gatsby docs as their recommended method for lazy loading.
If you’ve got components toward the bottom of your page, they should be lazy loaded.
Things like: recommended articles, footers, dark mode switches, opt-in forms and so on.
If they’re not in the viewport, they shouldn’t be loaded yet.
Bear in mind, if you took my advice on MDX and loadable-components and can’t use the SSR plugin, then you don’t want to use loadable-components on anything that is SEO-beneficial.
Inline critical SVG components and then use a standard <img> tag (not require)
This was a huge mistake for me.
My site uses hundreds of SVGs. Lots of flags, product logos and so on.
I was originally inlining all of them using custom SVG components (with props). This meant that my site’s bundle was suffocating.
You should only use inline SVGs for critical SVGs at the top of the page that need to load instantly.
All other SVGs should be added using a standard <img> tag.
Also, don’t use require inside the <img> tag, because it will turn your SVG into a base64 image, which increases the size of your page.
Move your SVG images to a folder in /static/ and reference them that way.
For background SVG’s, use base64.
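Putting those rules together, a sketch (paths and dimensions here are examples):

```jsx
import React from "react"

// 1. Critical, above-the-fold: inline the SVG so it paints with the HTML.
const Logo = () => (
  <svg width="32" height="32" viewBox="0 0 32 32" aria-hidden="true">
    <circle cx="16" cy="16" r="15" fill="currentColor" />
  </svg>
)

// 2. Everything else: a plain <img> pointing at /static/ -- no require(),
// so the browser fetches the file instead of carrying base64 in the bundle.
const Flag = ({ code }) => (
  <img
    src={`/svgs/flags/${code}.svg`}
    alt={`${code} flag`}
    width="24"
    height="16"
    loading="lazy"
  />
)
```

(Rule 3, base64, stays in CSS: background-image: url("data:image/svg+xml;base64,...").)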
React hydration is the primary killer of TBT in Gatsby sites
Total blocking time.
It’s the hardest metric to please with a Gatsby site.
The reason for this is that React hydration kicks in after the HTML is loaded and there’s a brief period of “white” (you can see this in the Performance tab of Chrome Dev tools) that delays interactivity.
To improve this, use Lazy Hydration on sections of your site.
I managed to get my Total Blocking Time down from 1,500+ to about ~250 by lazily hydrating large sections of my page.
Use it carefully though - I had a few instances where sections of my page just didn’t appear at all on load.
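Sketched with the react-lazy-hydration package (the one I'd reach for; the props are from its docs, but double-check against the version you install):

```jsx
import React from "react"
import LazyHydrate from "react-lazy-hydration"

const Post = () => (
  <>
    <Hero /> {/* hydrates normally: interactive immediately */}
    <LazyHydrate whenVisible>
      <CommentsSection /> {/* hydrates when scrolled into view */}
    </LazyHydrate>
    <LazyHydrate ssrOnly>
      <StaticFooter /> {/* pure content: never hydrates */}
    </LazyHydrate>
  </>
)
```

This is exactly the kind of setup that produced my disappearing sections when misused, so test every page.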
LCP and CLS are design issues
These are the easiest metrics to fix.
For LCP, ensure that the largest piece of content in the viewport (usually a heading or a large image) is loaded instantly and visible.
You don’t want any period of blank white to show.
This might mean that you need a redesign to get rid of large hero images (the primary culprit for LCP problems).
In my case, I actually have a media query on my review posts that hides the image completely on mobile and only shows the text heading and subtitle.
This is a design issue that you need to play around with.
For CLS, just make sure that nothing moves while it’s loading.
Don’t have dynamic components loading up top or on the navbar (I actually moved my dark/light mode switch to the footer because it had a slight delay in my navbar and pushed elements around, hurting my CLS).
It’s another UI/UX call that you’ll need to play around with for best results.
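The fix is usually a couple of lines of CSS that reserve space before the async content arrives (class names and dimensions here are illustrative):

```css
/* Size the image box from its ratio so text below it never jumps. */
.post-hero img {
  width: 100%;
  aspect-ratio: 3 / 2;
}

/* Late-loading widgets get a fixed slot matching their rendered height. */
.comments-placeholder,
.newsletter-optin {
  min-height: 280px;
}
```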
Take a good look at your queries
One of the things that caused a big delay for me was my comment query.
I use a custom, Git-based comment system where comments are stored in JSON format, and then I run a GraphQL query to find comments for specific posts at build time.
Originally, I was using a static query for this and had all my comments in one giant JSON file (8000+ comments!).
So I manually extracted all the comments and put them into their own individual comments.json files, then I removed the static query and ran a page query instead.
Basically what this means is that each page has its own comment data that doesn’t affect the rest of the site.
NOTE: I still need to address this because some posts have hundreds of comments which means my page data for that specific post is huge. I may end up needing to paginate the data or load it dynamically.
Another example was my recommended posts component which queried every post on my site to find related posts to display. Unfortunately, this created a huge query in my bundle. I’ve removed it entirely but may end up finding an alternative method.
Lesson here for you is to try and pay attention to the queries you make and ensure that whether they’re static or page queries, they aren’t causing bloat.
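As an example, the comment page query ends up shaped something like this (the field names follow my own comments.json schema, so treat them as placeholders):

```graphql
query CommentsForPost($slug: String!) {
  allCommentsJson(filter: { slug: { eq: $slug } }) {
    nodes {
      author
      date
      body
    }
  }
}
```

Because it's a page query, $slug comes from the page's context at build time, and only that post's comments land in its page-data.json.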
Green perf scores can be achieved on large Gatsby sites but it’s going to take a lot of patience on your part
Some pages on my site aren’t there yet.
I’ve focused primarily on my blog posts, resource pages and review posts which are the most important for SEO.
It’s extremely time consuming tweaking and testing these things.
Make sure you only test on throttled 3G - nothing else matters. Desktop testing is a waste of time because Google doesn’t consider it anymore.
Focus on mobile. In 2021, everything is mobile-first.
If I have more results or discoveries as I tweak and optimize, I’ll update this post with my findings.
#ifndef ENGINE_COMMON_UTILITY_PATTERN_PROVIDER_HPP
#define ENGINE_COMMON_UTILITY_PATTERN_PROVIDER_HPP
#pragma once
#include <array>
#include <memory>
namespace engine
{
template<class owner_t> struct provider_manifest_t;
#define REGISTER_PROVIDER_BASE_TYPE(owner, provider_base) template<> struct ::engine::provider_manifest_t<owner> { typedef provider_base provider_t; };
template<class owner_t> class holder_t
{
public:
virtual ~holder_t()
{
}
virtual typename provider_manifest_t<owner_t>::provider_t * get_provider(std::size_t id = 0) = 0;
virtual std::size_t get_providers_count() const = 0;
};
template<class owner_t, class ... providers_t> class holder_implementation_t : public holder_t<owner_t>
{
public:
holder_implementation_t(std::unique_ptr<providers_t>... providers) : providers{ {std::move(providers)...} }
{
}
typename provider_manifest_t<owner_t>::provider_t * get_provider(std::size_t id = 0) final
{
return providers[id].get();
}
std::size_t get_providers_count() const final
{
return sizeof...(providers_t);
}
private:
std::array<std::unique_ptr<typename provider_manifest_t<owner_t>::provider_t >, sizeof...(providers_t)> providers;
};
}
#endif
Microsoft has upgraded the Windows Vista tcpip.sys system file, which is responsible for the TCP/IP network connection protocol in Windows Vista, through several hotfixes and service packs. Originally, both the 32-bit and 64-bit tcpip.sys were version 6.0.6000.16386 in the Windows Vista RTM edition; the file has since been updated to versions 6.0.6000.20582, 6.0.6000.20583, 6.0.6000.20645 (by KB940646), 6.0.6001.16633, 6.0.6001.16659, 6.0.6001.17042 (v.658) and 6.0.6001.17052, with the last being what a system with Windows Vista SP1 RC installed should have.
The update to tcpip.sys in Windows Vista also renders the patched version of tcpip.sys, which unlocks and removes the limit on simultaneous half-open incomplete (SYN packet) connection attempts per second that the system can make, unusable, or causes system instability such as a BSoD (Blue Screen of Death) or no Internet connection after patching. Attempts to patch the newer versions of the tcpip.sys file do not work either, because once replaced, the patched tcpip.sys is rejected by Vista due to a missing or corrupt digital signature.
Thus, a Chinese programmer named Eagle Twenty has developed a Windows Vista driver which loads independently of tcpip.sys. The driver, named CrackTcpip.sys, is installed as a CrackTcpip system service. When CrackTcpip.sys runs, it constantly monitors the memory address (offset 0x00059722 for Windows Vista SP1 RC v.668) which stores the half-open TCP concurrent connection limit. The original value of the byte is 0A, which translates to 10. When the CrackTcpip service detects that the value of the TCP/IP simultaneous half-open connection limit is 0A, it changes the value to FF, which is 255.
255 concurrent connections may look small, but it’s actually already 25 times the original limit imposed by Microsoft, and the unlocked simultaneous TCP/IP SYN packet limit is probably enough to speed up uploading and downloading, especially on peer-to-peer (P2P) networks such as BitTorrent (BT) and eDonkey2000 (ed2k). In some cases the patch will also solve issues such as web pages failing to load in the browser while a BT client downloads at high speed with many active torrents, occupying and eating up all network resources; in that situation, users will see lots of Event ID 4226 error entries in Event Viewer.
CrackTcpip.sys Installation and Usage Guide
The CrackTcpipv668.zip package consists of 5 files:
Copy CrackTcpip.sys to \Windows\System32\Drivers\ folder.
To conduct a trial run of CrackTcpip.sys, to test whether the driver is compatible with your system and does not cause any problems, errors or a BSoD, double-click TestInstall.reg to apply the registry values. Restart the computer, and then run TestRun.bat (or Run.bat) as administrator to start the service.
If there is any problem, reboot your computer to stop the CrackTcpip service. To uninstall, delete the CrackTcpip.sys from \Windows\System32\Drivers\ directory and run Uninstall.reg to remove the registry key.
To install CrackTcpip.sys, run Install.reg to autostart the CrackTcpip service on every system startup. If you previously did a test run of CrackTcpip, there is no need to uninstall the test registry key first; the newer registry key will overwrite the old one. Again, run Run.bat (or TestRun.bat) to activate the driver immediately, or restart the computer. The service will start automatically on every boot after the keys in Install.reg are added.
To uninstall CrackTcpip after installing, run Uninstall.reg to remove the service registry key, restart PC, and then remove the CrackTcpip.sys file from \Windows\System32\Drivers\ folder.
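For the curious, the .reg files almost certainly just add or remove a standard kernel-driver service key. A typical layout looks like the following; the value data here is an illustrative sketch and has not been verified against the actual package:

```reg
Windows Registry Editor Version 5.00

; Illustrative sketch of what Install.reg likely contains.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\CrackTcpip]
"Type"=dword:00000001         ; SERVICE_KERNEL_DRIVER
"Start"=dword:00000002        ; auto-start at boot (a test install would use 3, manual)
"ErrorControl"=dword:00000001
"ImagePath"="system32\\drivers\\CrackTcpip.sys"
```

Uninstall.reg would then delete the same key (a leading minus inside the brackets: [-HKEY_LOCAL_MACHINE\...\CrackTcpip]).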
Note that CrackTcpip.sys does not work in 64-bit Windows Vista (x64 edition), where Microsoft forces all drivers to be signed and certified and has disabled the “bcdedit /set loadoptions DDISABLE_INTEGRITY_CHECKS” command which would otherwise turn off the integrity check, so all uncertified drivers fail to load.
And this version, CrackTcpipv668.sys, targets systems running Windows Vista with Service Pack 1 RC, build 6.0.6001.17052 v.668, where tcpip.sys should have the same 6.0.6001.17052 file version. There is no guarantee that CrackTcpip.sys will work with any other version of tcpip.sys.
CrackTcpipv668.zip (no longer required).
Update: Download CrackTcpipv744.zip for Windows Vista SP1 Refresh (v.744 or 6001.17128) and Refresh 2 (6001.18000 or RTM).
Use at your own risk; in fact, there is less risk than with the previously patched tcpip.sys, as the external driver does not modify the code bits of tcpip.sys itself. Eagle Twenty, the developer of CrackTcpip.sys, will develop a new version of CrackTcpip.sys if it no longer works (most likely) when the final version of Vista SP1 is released in the next few weeks. Future releases of CrackTcpip.sys, if the patch proves to be successful, will also come in the form of a standalone executable, allowing users to install CrackTcpip automatically in one click.
Note: Since Windows Vista and Windows Server 2008 SP2, there is no longer any restriction on the concurrent half-open TCP/IP connection limit (it is now unlimited).
Bucketing with QuantileDiscretizer using the groupBy function in PySpark
I have a large dataset like so:
| SEQ_ID|RESULT|
+-------+------+
|3462099|239.52|
|3462099|239.66|
|3462099|239.63|
|3462099|239.64|
|3462099|239.57|
|3462099|239.58|
|3462099|239.53|
|3462099|239.66|
|3462099|239.63|
|3462099|239.52|
|3462099|239.58|
|3462099|239.52|
|3462099|239.64|
|3462099|239.71|
|3462099|239.64|
|3462099|239.65|
|3462099|239.54|
|3462099| 239.6|
|3462099|239.56|
|3462099|239.67|
The RESULT column is grouped by SEQ_ID column.
I want to bucket/bin the RESULT based on the counts of each group. After applying some aggregations, I have a data frame with the number of buckets that each SEQ_ID must be binned by. like so:
| SEQ_ID|num_buckets|
+-------+----------+
|3760290| 12|
|3462099| 5|
|3462099| 5|
|3760290| 13|
|3462099| 13|
|3760288| 10|
|3760288| 5|
|3461201| 6|
|3760288| 13|
|3718665| 18|
So for example, this tells me that the RESULT values that belong to the 3760290 SEQ_ID must be binned in 12 buckets.
For a single group, I would collect() the num_buckets value and do:
discretizer = QuantileDiscretizer(numBuckets=num_buckets, inputCol='RESULT', outputCol='buckets')
df_binned=discretizer.fit(df).transform(df)
I understand that when using QuantileDiscretizer, each group would result in a separate dataframe, and I can then union them all.
But how can I use QuantileDiscretizer to bin the various groups without using a for loop?
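To make the goal concrete, here is a plain-Python sketch (no Spark) of the per-group binning I'm after, using statistics.quantiles as a stand-in for QuantileDiscretizer; the data is a toy subset:

```python
from statistics import quantiles

# Toy stand-in for the two dataframes: RESULT values grouped by SEQ_ID,
# plus the number of buckets each group must be binned into.
groups = {
    3462099: [239.52, 239.66, 239.63, 239.64, 239.57, 239.58, 239.53],
    3760290: [10.0, 11.0, 12.0, 13.0, 14.0, 15.0],
}
num_buckets = {3462099: 5, 3760290: 3}

def bucketize(values, n):
    """Assign each value a quantile bucket 0..n-1, mimicking what
    QuantileDiscretizer.fit/transform does for a single group."""
    cuts = quantiles(values, n=n)          # n-1 interior cut points
    return [sum(v > c for c in cuts) for v in values]

# One "fit" per group -- the explicit loop I'd like Spark to avoid.
binned = {seq: bucketize(vals, num_buckets[seq]) for seq, vals in groups.items()}
```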
M: Stephen Wolfram Aims to Democratize His Software - prostoalex
http://bits.blogs.nytimes.com/2015/12/14/stephen-wolfram-seeks-to-democratize-his-software/
R: macmac
This is a remarkable change for a man who famously inserted a fairly esoteric
copyright notice in his book "A New Kind of Science":
"Copyright © 2002 by Stephen Wolfram, LLC
...Discoveries and ideas introduced in this book, whether presented at length
or not, and the legal rights and goodwill associated with them, represent
valuable property of Stephen Wolfram, LLC, and when they or work based on them
is described or presented, whether for scholarly purposes or otherwise,
appropriate attribution should be given.
...Illustrations (including tables) may not be reproduced without the prior
written consent of the copyright holder. Most individual illustrations in this
book represent substantial original works in themselves, and their
reproduction is not a fair use... Permission to reproduce illustrations will
normally be granted for scholarly purposes so long as the illustrations are
not modified...[and] are used and explained in an appropriate way... Stephen
Wolfram, LLC is the owner of the full copyright to all illustrations in this
book (except as indicated in the colophon), including...such original elements
as non-obvious choices of rules and initial conditions used to create them."
Add tests for xid, multixid and oid wraparound
Branch nonrel_wal_test_xid_wraparound contains a test, that illustrates the problem with xid wraparound.
It fails on main too (with different symptoms), but I suggest concentrating our efforts on nonrel_wal.
In its current shape, the test is too hacky to propose as a PR. I'll try to straighten it up.
If you have any ideas on how to make it better, I'm all ears.
See also a discussion in hackers about similar testing scenario:
https://www.postgresql.org/message-id/flat/CAP4vRV5gEHFLB7NwOE6_dyHAeVfkvqF8Z_g5GaCQZNgBAE0Frw%40mail.gmail.com#e10861372aec22119b66756ecbac581c
I don't understand what this test demonstrates. First of all, there is no xid wraparound at all. You need to execute 2^31 transactions to cause an XID wraparound.
The test fails at the following assertion:
# ensure that oldest_xid advanced
> assert int(oldest_xid) < int(oldest_xid1)
E AssertionError: assert 727 < 727
E + where 727 = int('727')
E + and 727 = int('727')
But the 50000 transactions performed by pgbench in this test seem to be not enough to advance oldestXid in the checkpoint, taking into account that autovacuum_freeze_max_age=100000 (twice as much).
If we perform 500000 transactions (ten times more), then this check passes.
The next failure is:
# ensure that we restored fields correctly
assert int(xid1) <= int(xid2)
> assert int(oldest_xid1) == int(oldest_xid2)
E AssertionError: assert 403881 == 727
E + where 403881 = int('403881')
E + and 727 = int('727')
We update checkpoint.oldestXid on CLOG_TRUNCATE records, but no such truncation was performed here.
With commit c564272 this test passes on the nonrel_wal branch with the number of pgbench transactions increased to 500000.
But it takes about half an hour to complete.
> With commit c564272 this test passes on the nonrel_wal branch with the number of pgbench transactions increased to 500000.
> But it takes about half an hour to complete.
Ah yeah, consuming enough transactions to trigger XID wraparound can take a while... What I've done in the past is to write a C function specifically for testing that advances ShmemVariableCache->nextXid directly. ExtendCLOG(), ExtendCommitTs() and ExtendSUBTRANS() expect to be called for every XID, because they need to extend the SLRU files whenever you cross a page boundary, so one safe way to do that is to skip over XIDs that are "not interesting" to those Extend*() functions, and call GetNewTransactionId() as usual for the others. Something like this:
#define BORING_XID_INTERVAL 1024

TransactionId nextXid;
uint32 remainder;

LWLockAcquire(XidGenLock, LW_EXCLUSIVE);
nextXid = ShmemVariableCache->nextXid;
remainder = nextXid % BORING_XID_INTERVAL;
if (remainder >= FirstNormalTransactionId &&
    remainder < BORING_XID_INTERVAL - 1 &&
    !TransactionIdFollowsOrEquals(nextXid, ShmemVariableCache->xidVacLimit))
{
    /* bump the shared nextXid to just before the next non-boring XID */
    ShmemVariableCache->nextXid = nextXid + (BORING_XID_INTERVAL - remainder - 1);
    LWLockRelease(XidGenLock);
}
else
{
    LWLockRelease(XidGenLock);
    /* call GetNewTransactionId() or something to get a new XID the usual way */
}
Andres Freund and @lubennikovaav had some discussion on pgsql-hackers about adding a standard way to test XID wraparound to upstream: https://www.postgresql.org/message-id/CAP4vRV5gEHFLB7NwOE6_dyHAeVfkvqF8Z_g5GaCQZNgBAE0Frw%40mail.gmail.com. That would be very handy. We had a function like that in GPDB as well (https://github.com/greenplum-db/gpdb/blob/master/src/test/regress/regress_gp.c#L1955), but it would be good to get it into upstream; Postgres could use some tests around XID wraparound too.
https://github.com/zenithdb/zenith/pull/351 comments out the code to handle CLOG and multixid offsets wraparound. Just by looking at the code, those looked wrong to me.
the test is not part of any CI ATM because it takes a lot of time to run this test
the next step is to commit this :)
using System;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;
using System.Net;
using System.Security.Cryptography;
using System.Text;
using Newtonsoft.Json;
using Newtonsoft.Json.Converters;
namespace ClientApi
{
public class ClientApi
{
private readonly string address;
private readonly string keyid;
private readonly string secret;
private int counter;
private readonly JsonSerializerSettings serializerSettings;
private readonly Encoding encoding = Encoding.UTF8;
private readonly SHA256 sha256 = SHA256.Create();
public ClientApi(string address, string keyid = null, string secret = null)
{
this.secret = secret;
this.keyid = keyid;
this.counter = (int) (DateTime.UtcNow - new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc)).TotalSeconds;
this.address = address.TrimEnd('/') + '/';
serializerSettings = new JsonSerializerSettings
{
Converters =
new JsonConverter[]
{
new StringEnumConverter(),
new UnixDateTimeConverter(),
new DecimalConverter()
}
};
}
private T SendRequest<T>(string command, object parametersObject)
{
var parameters = parametersObject.GetType()
.GetProperties()
.Select(
pi =>
new KeyValuePair<string, string>(pi.Name,
(string)
Convert.ChangeType(
pi.GetValue(parametersObject, null) ??
"", typeof (string), CultureInfo.InvariantCulture)))
.Where(p => !string.IsNullOrEmpty(p.Value));
var request = new StringBuilder(address);
request.Append(command);
var hasParams = false;
foreach (var kv in parameters)
{
request.Append(hasParams ? '&' : '?');
hasParams = true;
request.AppendFormat("{0}={1}", kv.Key, kv.Value);
}
var webClient = new WebClient();
var response = webClient.DownloadString(request.ToString());
return JsonConvert.DeserializeObject<T>(response, serializerSettings);
}
private T SendRequestSecure<T>(string command, object parametersObject)
{
var numberParameter = new KeyValuePair<string, string>("number",
(counter++).ToString(CultureInfo.InvariantCulture));
var keyIdParameter = new KeyValuePair<string, string>("keyid", keyid);
var parameters = parametersObject.GetType()
.GetProperties()
.Select(
pi =>
new KeyValuePair<string, string>(pi.Name,
(string)
Convert.ChangeType(
pi.GetValue(parametersObject, null) ??
"", typeof (string), CultureInfo.InvariantCulture)))
.Where(p => !string.IsNullOrEmpty(p.Value))
.Concat(new[] {numberParameter, keyIdParameter});
var requestData = string.Join("&", parameters.OrderBy(p => p.Key).Select(p => p.Key + "=" + p.Value));
using (var client = new WebClient())
{
client.Encoding = encoding;
client.Headers[HttpRequestHeader.ContentType] = "application/x-www-form-urlencoded";
var sign = BitConverter.ToString(sha256.ComputeHash(encoding.GetBytes(requestData + secret)))
.Replace("-", "")
.ToUpper();
client.Headers["sign"] = sign;
var response = client.UploadString(address + command, "POST", requestData);
return JsonConvert.DeserializeObject<T>(response, serializerSettings);
}
}
public IEnumerable<Symbol> Symbols
{
get { return SendRequest<Symbol[]>("symbols", new {}); }
}
public IEnumerable<Trade> Trades(string symbol, int count = 1000)
{
return SendRequest<Trade[]>("trades", new {symbol, count});
}
public IEnumerable<Candle> Candles(string symbol, int timeframe = 60, int count = 1000)
{
return SendRequest<Candle[]>("candles", new {symbol, timeframe, count});
}
public Depth Depth(string symbol, int depth = 5)
{
return SendRequest<Depth>("depth", new {symbol, depth});
}
public Balance Balance
{
get { return SendRequestSecure<Balance>("balance", new { }); }
}
public Order GetOrder(int id)
{
return SendRequestSecure<Order>("getorder", new {id});
}
public IEnumerable<Order> ActiveOrders(string symbol)
{
return SendRequestSecure<Order[]>("myorders", new {symbol});
}
public IEnumerable<MyTrade> MyTrades(int count, string symbol = null)
{
return SendRequestSecure<MyTrade[]>("mytrades", new { count, symbol });
}
public Order AddOrder(string symbol, decimal price, decimal volume, OrderType direction, string comment = null)
{
return SendRequestSecure<Order>("addorder", new { symbol, price, volume, direction, comment });
}
public Order CancelOrder(long id)
{
return SendRequestSecure<Order>("cancelorder", new { id });
}
}
public class Trade
{
public long Id { get; set; }
public string Ticker { get; set; }
public DateTime Time { get; set; }
public OrderType OrderType { get; set; }
public decimal Price { get; set; }
public decimal Volume { get; set; }
}
public enum OrderType
{
Buy = 1,
Sell = -1
}
public class Symbol
{
public string Name { get; set; }
public string Currency { get; set; }
public decimal PriceStep { get; set; }
}
public class Depth
{
public string Symbol { get; set; }
public IList<DepthLevel> Bids { get; set; }
public IList<DepthLevel> Asks { get; set; }
}
public class DepthLevel
{
public decimal Price { get; set; }
public decimal Volume { get; set; }
public OrderType OrderType { get; set; }
}
public class Account
{
public string Currency { get; set; }
public decimal Amount { get; set; }
public decimal Reserved { get; set; }
}
public class Balance
{
public Balance()
{
Accounts = new List<Account>();
}
public List<Account> Accounts { get; set; }
}
public class MyTrade : Trade
{
public long OrderId { get; set; }
}
public class Order
{
public long Id { get; set; }
public string Symbol { get; set; }
public DateTime AddTime { get; set; }
public DateTime ModifiedTime { get; set; }
public decimal Price { get; set; }
public decimal Volume { get; set; }
public decimal InitialVolume { get; set; }
public OrderType Direction { get; set; }
public OrderStatus Status { get; set; }
public string Comment { get; set; }
}
public enum OrderStatus
{
Unknown = 0,
Active = 1,
Done = 2,
Canceled = 3,
CrossDealReject = -1,
NoMoneyReject = -2,
PaceReject = -3,
NotFoundReject = -4,
InvalidPriceReject = -5
}
public class Candle
{
public string Symbol { get; set; }
public DateTime Time { get; set; }
public decimal Open { get; set; }
public decimal High { get; set; }
public decimal Low { get; set; }
public decimal Close { get; set; }
public decimal Volume { get; set; }
}
class DecimalConverter : JsonConverter
{
public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
{
writer.WriteValue(((decimal)value).ToString("0.########", CultureInfo.InvariantCulture));
}
public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
{
return Convert.ChangeType(reader.Value, typeof(decimal), CultureInfo.InvariantCulture);
}
public override bool CanConvert(Type objectType)
{
return objectType == typeof(decimal);
}
}
public class UnixDateTimeConverter : DateTimeConverterBase
{
private static readonly DateTime UnixStartTime = new DateTime(1970, 1, 1);
public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
{
writer.WriteValue((int)((DateTime)value - UnixStartTime).TotalSeconds);
}
public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
{
return UnixStartTime.AddSeconds((int)Convert.ChangeType(reader.Value, typeof(int)));
}
}
}
Dev Update 7/20/18
Progress has been swift since I switched to the Meteor framework. (Yes, that’s right, I started Chronicler from scratch again, haha.) But fear not: since the decision to change the entire design of the flowchart view meant changing the entire data structure as well, this essentially needed to be done anyway.
I spent some serious time deliberating the usage of Meteor over plain old React/Redux and came to the conclusion that it was the superior environment. I’m still using React for the frontend UI design, but I’m doing all the backend data storage in Meteor.
Meteor takes care of user accounts and authentication including Google/Facebook login if one so desired.
On top of that, it uses a local database which automatically syncs up with the server, so if you lose internet connection temporarily you can continue working.
Note that if you close the browser/tab while offline, all changes made during that time will be lost. There is a workaround for this, but it takes quite a bit of time to implement and I’d rather spend my time on more important features.
Furthermore, this solves the issue of collaboration, as any changes pushed to the server are immediately reflected in all related clients. I’m not totally convinced that collaboration is a particularly necessary feature, though. It seems like most authors work alone, and there’s nothing to stop multiple people from logging in with the same account; if you want to collaborate on a project, simply create a new account and share the login info with each other.
Regarding further changes to the design: I’ve decided that I can abstract away a lot of ChoiceScript-specific details, such as requiring a scene named “startup”. A novice user shouldn’t have to know that; instead, I allow creation of scenes with any name, where the first one created will be linked from the automatically generated “startup” scene.
I’ll also work on an intuitive stats screen editor separate from scenes.
I created Chronicler with the intent of saving users’ time, but the current version seems to actually make things more difficult and time consuming. The redesign of the flowchart mentioned in my last post solves this problem quite handily! It also solves a lot of the programming design challenges, such as how to handle disconnected nodes and connection loops, since it is now impossible to have disconnected nodes, and loops are handled by special links.
For the future, I don’t plan on releasing the new version until I’m satisfied with its aesthetic quality and have it thoroughly bug tested. The initial release of Chronicler desktop was plagued with bugs and lacked many important features, which drove a lot of potential users away. I don’t want that to happen again. I’d rather release a finished product like they did back in the good ol’ days of CD-ROM technology.
That said, I may offer a limited beta-test for the aforementioned bug testing when I’ve implemented all necessary features.
I’ve said it before, but please remain patient while I make Chronicler Online the best product possible.
The game took about two and a half hours, but he beat the first mothership at about two hours. He said that he had had a fun time, and that the game seemed well balanced, but he did have a few ideas:
- Make a draft for a one player game (with starting cards). (Not sure if I will do this before the end of the contest)
- Figure out how to get trade working in a one player game. (Not sure if I will do this before the end of the contest)
- Make foes that move at different speeds? (Not sure if I will do this before the end of the contest)
- The ability to repair cities might be nice (I could modify the construction worker card to do this, and it would make more sense than his current ability.)
- Remove auto-repair action - your robot can repair inside of combat, or take an action to repair. (I might do this - I will have to consider the effects of it)
- Remove normal repair action (probably I will not do this, this action is useful to people that don't go all out on repair parts)
- Battleship looks like it must be a mothership (Perhaps I need a symbol that denotes "This is a mothership")
- What order do you place your new foe columns? Do you know where on the map the alien figurine will go before you select the card? (we played yes, but I have since decided no.)
- Neural implants and Multitask drivers need to be changed (I need to do this before the contest - the last change to the tech draft really messed up these cards)
- Mothership corpses should be worth more as a corpse (1 wild corpse? I can probably do something)
- Escort ship is not an "escort". Either the card needs to be renamed, or there is a game term that needs updating. (I can easily rename the card)
- Need to be able to cycle through the tech cards faster. Perhaps as an action (this is really good, so I probably will do it. I just have to make sure that the balance remains the same.)
- The mothership moves to which tied column? (This is already in the rules. We just had to remember what the rules stated)
- Rules need to be written out clearly and lots of hand holding has to happen for this to work for a player that doesn't know the game perfectly (This is a point well made. I need to do this.)
Now, I only have so much time before the contest ends, so I can't do all of these things right now, but the ones that shouldn't affect the balance of the game seem totally within the limits of what is reasonable.
We reached a point in the game where he was supremely confident about his ability to defeat every possible opponent, and because of that, I told him to enter the Alien Dimension and take the fight to the aliens. That is when he faced the final mothership and won the game. This was the first time that had happened in a game, but it worked out fairly well.
Probably the biggest thing that I still need to do is clean up the rules and make them much easier to understand, but I am also going to fix the two cards that need work based on the new rules, and I will probably incorporate as many of the other fixes as I can before I am done.
The engineering community has a lot of trouble talking about seniority and what we consider the qualifications for becoming a Senior Engineer.
It's not uncommon to see folks scoff at the idea that people with under a decade of experience (or some other arbitrary threshold) can really be Senior Engineers™️; after all, how is it logical that two people with a potential 20+ year experience gap have the same title? Let's talk about it.
There's No Standard for Seniority
The most obvious thing to address first is that seniority is an ambiguous concept. Engineering culture, and the needs of the company, are what define seniority and that varies heavily across the industry. At a small startup you might have a Senior Engineer doing work that a Staff or Principal Engineer at a larger company would be responsible for.
Senior isn't the Most Senior
The title Senior probably carries more gravitas than the role typically deserves. While it's an ambiguous concept there's at least some agreement between the largest of engineering orgs about what senior means. You can look at levels.fyi to get a rough idea of how levels at the largest companies might compare. The interesting thing to note is that the levels considered Senior are typically in the middle of the leveling hierarchy. Google's L5 level comes with the Senior SWE title, but there are still 5 more engineering levels above that!
The idea that becoming a Senior Engineer early leaves you without mobility doesn't hold much weight. In a mature engineering org there is often plenty of room to move up from a Senior Engineer. Becoming a Senior Engineer is not the final act of an engineering career, though it often can be if that's what you want.
Many large engineering organizations have what is called the career level or terminal level: the engineering level where you are no longer required to progress to the next level. This is almost always one of the first Senior levels. When I was at Facebook this was E5. At levels below that there was an expectation that you would mature and level up over some standardized timeframe, but once you hit that career level you can stay there your entire career if you'd like.
This is why it's expected behavior to see folks at this level with varying years of experience; someone might become a Senior Engineer early on and decide to stay there for a giant chunk of their career.
Growth Rates Differ
Lastly, we should just recognize that some folks develop the skills required of Senior Engineers faster than others. This might be due to an inherent disposition for the work, past experience from another career, or any number of factors. At best, years of experience is a proxy metric for figuring out if someone might have had time to develop those skills, but like most metrics we can't depend on it giving us the whole picture.
|
OPCFW_CODE
|
‘The Man Who Killed Don Quixote’ Is a Lackluster Comedy Adaptation
The story of how Terry Gilliam’s adventure comedy made it to the big screen is more interesting than the film itself.
March 31, 2019
Nearly two decades of development hell would defeat most ordinary filmmakers, but Terry Gilliam of Monty Python fame is not deterred so easily. “The Man Who Killed Don Quixote” began filming in 2000, but it didn’t wrap until 2017. It went through innumerable reworks and its production cycle outlasted a number of actors slated to star in each of several versions. “The Man Who Killed Don Quixote” is finally here, but it does not live up to the high expectations set by its 17-year-long production.
Ten years after shooting his thesis — which shares the name of the film — starring Spanish villagers, American big shot Toby Grisoni (Adam Driver) returns to Spain to film a commercial. Upon revisiting the village of Los Sueños, Toby is shocked to find that his leading man, Javier (Jonathan Pryce), still believes he’s the real Don Quixote. Lady Dulcinea, played by young Angelica (Joana Ribeiro), whose head Toby filled with dreams, ran off to Madrid to be a star. Sancho Panza, Quixote’s squire in the original work, is long since dead, and so Don Quixote mistakes Toby for the donkey-mounted companion. Together, they venture through the countryside in service of their lady and the age of chivalry.
“Don Quixote” is a comedy before anything else, and its leads deliver. Toby is perpetually exasperated and delightfully selfish, while Don Quixote’s blissful ignorance makes his unwavering devotion all the more charming. Unfortunately, their surroundings do not play to these strengths. The world of “Don Quixote” is a cruel one. The enemies that the two encounter in the final leg of their journey are cruel, and the constant suffering of the well-intentioned hero sails closer to tragedy than anything else. Toby is allowed time and again to go unpunished. He gets a bit bruised and remains helplessly lost, but his characteristic unkindness is actually his greatest strength in a dog-eat-dog world.
It’s hard to laugh when sadistic violence is alluded to and abject humiliation is portrayed in painstaking detail. The film suffers from a tone problem, descending from irreverent absurdity into a trial of horrors and drowning out the fun along the way. Angelica/Lady Dulcinea is objectified, due in part to the feudalistic morality of the Don, the brutal real world and Toby’s strange obsession with her, which began when she was 15.
However, “Don Quixote” thrives as an adaptation. The Spanish literary magnum opus is about a truly righteous, outmoded man in an unforgiving, contemporary world. Gilliam effectively captures this story, but the final product floats between comedy and tragedy, dead in the water. Once the third act begins, the film loses all enjoyment.
“The Man Who Killed Don Quixote” is a bumpy ride. There’s a strange propensity for fish-eye shots and copious Dutch angles, indicative of the off-beat tonal shift. Gilliam is perhaps best known for his bizarre paper cutout animation for Monty Python, and “Don Quixote” feels like one of those stretched across two hours.
A version of this article appears in the Monday, April 1, 2019, print edition. Email Fareid El Gafy at [email protected]
|
OPCFW_CODE
|
I recently decided to re-do my personal AWS accounts using AWS IAM Identity Center (SSO) and AWS Control Tower. For reasons mostly having to do with housekeeping, I decided to start from scratch with a new parent account and migrate things in while cleaning up others.
It’s pretty trivial to move a whole account over from one AWS Organization to another, but I didn’t want to do it this way, as I had a whole new structure in mind, and wanted a fresh and new environment to work in.
Most things were pretty easy to replicate in the new accounts. I use the AWS CDK as much as possible, so it wasn’t too hard to re-deploy the handful of side projects I have going.
All good, but what to do about my 5TB stored in Amazon S3 Glacier Deep Archive?
About two years ago, I decided to zip up a whole bunch of files and put them somewhere for a future look. There are about 10 zip files, which add up to nearly 5TB. So, I ordered an AWS Snowcone, and put them on S3, and moved them to Glacier Deep Archive. It all went amazingly well, and the files have been sitting there, untouched all this time. I figure it’s kinda like the stuff I have sitting in a storage unit in Brooklyn. Eventually, I’ll get around to sorting through it all, and deciding what to trash and what to keep. But for now, Deep Archive is a nice resting place.
To move 5TB from one account to another, I needed to follow these steps.
- Restore the files. (This can take a couple days)
- Create a bucket policy to allow access from the destination account
- Do the transfer
- Tier the transferred files back to Deep Archive
- Delete the source files
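For the bucket policy step, the source-bucket policy might look roughly like this sketch (the account ID 111111111111 and the statement ID are stand-ins for illustration; the exact actions you grant may vary):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowDestinationAccountRead",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-source-bucket",
        "arn:aws:s3:::my-source-bucket/*"
      ]
    }
  ]
}
```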
I was a little worried this would cost me a lot of money. Restoring 5TB back to S3 Standard comes with some costs. Since it had been over 180 days, I wouldn’t have to pay for early restoration or early deletion. But, once I restored the data, I’d be paying for the data itself in both Deep Archive and S3 Standard while I completed the transfer to the new account. I’d also have to pay for the retrieval, which would be $0.0025 per GB using the lowest cost “Bulk Retrieval” option.
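As a quick sanity check on that retrieval rate, here's a back-of-the-envelope estimate (assuming 1 TB = 1024 GB; actual AWS billing uses TiB/GiB semantics that may differ slightly):

```python
# Rough retrieval-cost estimate for restoring 5 TB from Deep Archive
# at the Bulk tier rate quoted above ($0.0025 per GB).
size_gb = 5 * 1024          # 5 TB expressed in GB
bulk_rate = 0.0025          # USD per GB, Bulk retrieval tier
retrieval_cost = size_gb * bulk_rate
print(f"${retrieval_cost:.2f}")  # about $12.80, in line with the ~$12 billed
```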
Fortunately, transferring files between S3 buckets in the same region is pretty fast. I used the AWS CLI to do the transfer with aws s3 cp, and I noticed transfer speeds in the neighborhood of 300-700 MiB/s. The whole process to transfer nearly 5TB took just a few hours.
On the new bucket I set up a 0 day lifecycle policy to tier the files down to Deep Archive. On the old bucket, once the transfer was complete, I just used the console to delete all the old files.
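The 0-day rule can be expressed in S3 lifecycle configuration JSON along these lines (a sketch; the rule ID is made up):

```json
{
  "Rules": [
    {
      "ID": "tier-to-deep-archive-immediately",
      "Status": "Enabled",
      "Filter": {},
      "Transitions": [
        { "Days": 0, "StorageClass": "DEEP_ARCHIVE" }
      ]
    }
  ]
}
```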
To get through all of this, I found this excellent blog post outlining most of the steps above. One tip I picked up was that the CLI command needed to have the --force-glacier-transfer flag added. This is because you can’t normally transfer files stored in Glacier, unless they’ve been restored.
Here’s the full command I wound up using.
aws s3 cp s3://my-source-bucket s3://my-new-bucket --recursive --force-glacier-transfer --storage-class STANDARD
I also learned about the “Bucket Owner Enforced” setting, which is relatively new, and allows you to ensure that any new objects transferred in from another bucket will take on the new bucket as owner.
Here’s my usage bill on the old account for the week where I did the transfer. You can easily see the spike where I restored the 5TB back to S3 Standard, and then it going away after deleting the files.
And here is my usage bill on the new account for the new data coming in and immediately tiering down to Deep Archive.
Both of these charts include charges for other unrelated data I have in the same accounts. I filtered out a few things to make the charts make as much sense as possible.
In the end, I spent about $12 to do the data restoration, and about $6 for the Standard Storage while doing the transfer.
|
OPCFW_CODE
|
We have a legacy application that is still using RealSqlDatabase files; however, we have started building it in the newest version of Xojo. We are starting to get a number of calls from users who store their files on a network and are getting Disk I/O errors because Xojo is leaving an SQLite journal file behind when the file is closed.
The files are NOT being shared. Only one user can access the files at a time, but because the journal file is being left behind it’s causing disk I/O errors.
We do not issue any pragma statements to change the journaling mode, but did something change in XOJO where this would be the default (on) and we need to turn it off?
If you’ve moved to the latest Xojo, then you will be using an SQLite database rather than a RealSqlDatabase. In addition to the page referenced by @brian_franco , you might want to look more generally at the SQLite web site and find out more about journal files and whether a correctly closed database should leave one behind. Changing the database like this is, I would have said, a major change in the way you handle your data files.
Tim, thank you for your reply. As I stated, this is a legacy application and really quite large. We are still using the RealSqlDatabase, not the SqliteDatabase, so I’m hoping that internally they did not change the database class to do something different that would affect this.
I’m aware that the RealSqlDatabase is deprecated, but as I said, it’s a very large application and would take months to update to use the SqliteDatabase class with its inherent changes.
will compile, but you’ll be using SQLite 188.8.131.52 rather than the current version 3.40.x. I’d stick with the oldest version of Xojo you can. I’ve got an old machine here with old software just to run our slide scanner. It lives in a box on the shelf along with the scanner.
As an ex-engineer, I’m going to restate what several people have said already… modern versions of SQLite do not work well on network drives, and if you’re using a new-ish version of Xojo, the version of SQLite that’s used under the hood is similarly new-ish.
My recollection is also that RealSQLDatabase and SQLiteDatabase use the same engine under the hood and that the name change was simply an artifact of the product name change, but only someone @xojo could say for sure.
Anyone who has transitioned an application from the RealSqlDatabase class to the more modern SqliteDatabase class would understand: functions that are passed a reference to a RealSqlDatabase will not work if the database passed in is an SqliteDatabase.
That is the issue with updating a legacy application. You can’t just expect to change the instantiation of the database, you also have to change all the places that class is used in functions, of which, we have many.
The underlying version of the table may very well be the latest version of sqlite because you are using the latest version of XOJO, which we are, and what I’m saying is perhaps that is where the issue comes in.
For example, the RealSqlDatabase class does not require a “begin transaction” when adding or editing records; however, it is required should you use the SqliteDatabase class. This is NOT controlled by the version of the sqlite table, but by XOJO.
We’ve run into other subtle differences in some of our new applications that are using the SqliteDatabase class. I believe these are changes made in the database class itself and not necessarily in the version of the Sqlite table.
That has not at all been my experience with SQLite over a network drive (on Windows OS). I have numerous applications deployed, some of which are high-transaction and multi-user. The notes on the SQLite web site about network file servers mention outdated and obsolete OSs with known file-locking issues, nothing modern.
I did so much extensive testing on this that I could not create a failed or corrupted database using SQLite over a networked drive in a multi-user database. This would be very easy to test for. In my case I deployed 20 instances of an app that simply read and write to the DB at random times and let it run for WEEKS. I logged collisions (locked file messages) DB errors, etc. After millions of records written to the file I gave up. No corruption. Our deployed apps have been running for years, similar to John’s experience without issue. I can’t simply be that lucky.
The database file on SQLite is simply a file that is locked by the OS when written to. If one user is writing to the file it is locked, if another user tries to access, they will get a file locked error from the DB engine. I think a lot of issues with corruption is simply bad programming and not doing appropriate error checking.
The reality is not every deployment location can use a true database , e.g. MSSQL, for numerous reasons so SQLite is a good option in many cases.
Having said that, I’d love to understand the science and actual mechanics behind these data corruption issues over a network drive.
No, if you don’t do a ‘begin transaction’ then SQLite will do one for you. The only place in my app where I do one is where I am doing a number of updates; keeping them all in a transaction improves performance and ensures that the set of updates is atomic. Most of my updates, however, can be done by themselves and have no need of an explicit transaction.
AIUI, one issue is that consumer grade disks, needing to be cheap while also needing to be fast, report that a write is completed when, in fact, it is not. The data is in the disk’s cache but is not yet stored. If the drive loses power at that moment, there may be a problem. Whether such considerations also apply for SSD I know not.
I agree a power loss during a write to any file, regardless of shared or even local has the potential to cause corruption. But I’m not sure that specific case is isolated to SQLite files versus any other type of file (MS Word, Excel, etc.).
Perhaps you’re not really using an SqliteDatabase. Here’s a screenshot that shows what happens if I comment out the “begin transaction” and then try to save the record. Notice the database exception - “cannot commit - no transaction is active”.
One glaring difference is that you are now using a RowSet instead of the old RecordSet. RecordSet.Edit did an implicit transaction. I don’t know if RowSet does or not, but it would appear not. Try the same code with a RecordSet, using SQLSelect instead of SelectSQL. It doesn’t solve your problem, but it could provide more info.
The difference is that you are not using SQL directly for your update. You are using an extra layer provided by Xojo - editing and saving a row. I’ve never used that and have no idea how that works with transactions. For your example above (but limiting to two columns for brevity), I would do:
Var sql as String
sql = "update mytable set REFNO=?1, DESC=?2 where id=?3"
app.DataDB.ExecuteSQL (sql, tfRefNo.Text, tfDesc.Text, someid)
where id and the value in someid identify the row. No explicit begin/end transaction is needed for this.
|
OPCFW_CODE
|
Step 1: Log in to AWS
To get started with configuration, go to your Amazon Web Services login page and ‘Sign In to the Console’.
Step 2: Certificate Manager
The next step is to find the section inside AWS called 'Certificate Manager'. This is the place where you can generate or import digital certificates. A simple way to find it is to use the search bar at the top (or press CTRL+F on your keyboard), start typing "certificate", then choose Certificate Manager from the dropdown.
Step 3: Domain Specifics
Click the 'Get Started' button and you'll then be brought to a section to enter in your domain details.
You can enter any domain or subdomain that you'd like, but if you want to use just one certificate for your whole domain, you can set up a wildcard. Simply enter '*.' in front of your website's URL (see below). Click ‘Review and request’ followed by ‘Confirm and request’.
This will send an authorization email to the "email@example.com".
If you have domain privacy enabled, Amazon won't be able to send the verification email to you, instead they will send it to "firstname.lastname@example.org".
To bypass this, login to your registrar (where you registered your domain name) and put a temporary pause on your domain privacy. This is usually done in the domains section of your registrar.
Do this before you click 'Review and request'.
Step 4: Verify Domain via Email
You will get an email similar to the one below. All you need to do is click the link, which will bring you to a page where you will click a button labeled "I Approve".
Once this is done, you can head back to your Certificate Manager and click refresh. You should now see a green status of Issued.
Step 5: Add the Certificate to a CloudFront Distribution
At this point, we are ready to pick out a custom name for use on our website and add the certificate we created to CloudFront. Head on over to CloudFront and select the distribution you'd like to use, by clicking on the blue link under the 'ID' section.
Click 'Edit' located in the top left portion of the general tab.
Add your chosen custom domain (for example aws.yourdomain.com) to the 'Alternate Domain Names' section.
This is actually adding a CNAME entry to alias your custom domain with examplegarbagetext.cloudfront.net. Change the 'SSL Certificate' to 'Custom SSL Certificate' and select your certificate that we created above.
Everything else can be left as default; scroll to the bottom and choose 'Yes, Edit'.
Step 6: Add CNAME Entry to Your Web Server
Please note: highlight and copy your CloudFront URL (the one that looks like a bunch of numbers and letters followed by .cloudfront.net) to your clipboard. This can be found back on the general tab for your CloudFront distribution, as the 'Domain Name', located under 'SSL Certificate'.
At this time, you will need to head on over to your website hosting company, in this example it's Bluehost. What we will be doing here is creating a CNAME entry that sets up the alias we just created to be live on your website. This step requires you to leave Amazon Web Services and sort of fish around in your domain hosting configuration. Most hosting companies have a cPanel, which makes it easy to add a new DNS zone record.
What we are going to do is find the 'zone editor', usually located under either the 'domains' section or the 'dns records' section. Add a new record and make sure it's set to CNAME. 'Host Record' should be the custom name you set up over at CloudFront, but only the first portion (you can leave the .domainname.com out). TTL can stay at the default of 14400, and 'Points To' would be what you copied to your clipboard, so simply paste it in here.
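In zone-file terms, the record described above ends up looking something like this (using aws.yourdomain.com as the custom name and the example CloudFront domain from earlier):

```
aws    14400    IN    CNAME    examplegarbagetext.cloudfront.net.
```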
Step 7 (Optional): Integrating with Wordpress
If you are not using Wordpress, then you are done and can use your S3 bucket as a repository for your files with the newly created URL.
I found a pretty fantastic Wordpress plugin for those who are interested. You can tie all this configuration in to your 'upload media' section, which will automatically add the CloudFront CNAME to your files.
A company called Delicious Brains makes this great plugin, it's called WP Offload S3 and it's amazing. It basically gives you full control over all the files on your website. The plugin (with the 'Assets' add-on) can also compress using gzip, making for very fast loading times, which improves SEO :)
|
OPCFW_CODE
|
describe("Math", function() {
  describe("Computing average", function() {
    it("average for no data is NaN", function() {
      expect(isNaN(Math.avg([]))).toBe(true);
    });
    it("average for one number is the same number", function() {
      expect(Math.avg([1])).toEqual(1);
    });
    it("rounding works", function() {
      expect(Math.avg([0, 0, 1], 2)).toEqual(0.33);
    });
    it("rounding up works", function() {
      expect(Math.avg([0, 0, 2], 2)).toEqual(0.67);
    });
    it("rounding to integer works", function() {
      expect(Math.avg([0, 0, 5], 0)).toEqual(2);
    });
    it("rounding to integer (up) works", function() {
      expect(Math.avg([0, 0, 2], 0)).toEqual(1);
    });
  });
  describe("Parsing floats", function() {
    it("'1' is easily parsed to float", function() {
      expect(Math.parseFloat2("1")).toBe(1.0);
    });
    it("'1.5' is easily parsed to float", function() {
      expect(Math.parseFloat2("1.5")).toBe(1.5);
    });
    it("'1 234' is easily parsed to float", function() {
      expect(Math.parseFloat2("1 234")).toBe(1234);
    });
    it("' 1 234' is parsed to float", function() {
      expect(Math.parseFloat2(" 1 234")).toBe(1234);
    });
    it("' 1 234 ' is parsed to float", function() {
      expect(Math.parseFloat2(" 1 234 ")).toBe(1234);
    });
    it("' 1 234,56 ' is parsed to float", function() {
      expect(Math.parseFloat2(" 1 234,56 ")).toBe(1234.56);
    });
  });
});
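The functions under test aren't shown here; a minimal implementation that would satisfy these specs might look like the following (an assumption for illustration, not the actual code being tested):

```javascript
// Hypothetical implementations matching the specs above.
// Math.avg(numbers, decimals): average, optionally rounded to `decimals` places.
Math.avg = function (numbers, decimals) {
  const avg = numbers.reduce((sum, n) => sum + n, 0) / numbers.length;
  if (decimals === undefined) return avg; // [] yields 0/0 = NaN, as specced
  const factor = Math.pow(10, decimals);
  return Math.round(avg * factor) / factor;
};

// Math.parseFloat2(str): tolerant float parser that strips whitespace
// (including thousands separators) and accepts ',' as the decimal mark.
Math.parseFloat2 = function (str) {
  return parseFloat(str.replace(/\s/g, "").replace(",", "."));
};
```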
|
STACK_EDU
|
Pay Someone To Do My Accounting Homework
Just like CSS inputs, output data appears rather straightforwardly. However, with the new method of output-writing and transforming, you will find that all the data is being output in matlab: html css js HTML Output HTML Output Notice the class representing the output. This class denotes the results. You can also name it as your output and it will also be named output-one.html. CSS Output In almost any way, CSS output is the name of one of its main reasons to use Matlab® and HTML5 with Matlab®. Think of it as most of the input you have to display is in plain text. The output is visible through the CSS. In some cases the output can get messed up very easily. You could also use hidden fields; a simple way of doing two inputs to the same div. However, this avoids the awkward handling of div with hidden fields, and hence we do not recommend using this functionality. HTML Report Matlab® always allows you to read the output, so only with the new method of report, does the output get visible: html css js Gmat Math Practice Test: A Short, Easy to Follow Application? This blog is intended for readers to come out and to learn more about the art of matrix science. All of this material is at the lower end of the “high end” of the relevant chart sets and the work you can look here is usually fairly small and limited so may be best available. If this doesn’t give you access to the higher of the kind of performance that I am paying for, then that kind of work may simply not have been worth the effort and time given to me by a search for a better job later. On the other hand, if you are already one of the leading practitioners, you would still have to fill you in on the price in each of the many parts of the algorithm as a whole and if needed, give those bits the hard way! So, what does it take to get a grasp on matrix theory? It certainly takes plenty and some working in-between. A first look at one, or even two methods will confirm that they haven’t done so yet. 
First let’s look at details from the first section of this blog and in the final section of this blog we will see how this is done. Maybe why? The basic steps of this approach are the same the one described in previous blog and the one we are both about: […
How To Do An Online Class
] A matrix A is a monomial variable whose right-shifted elements are the vectors (which are called x1, x2,…, xk) and left-shifted elements are the zeros of vector α which we denote α>0. You obviously know that if the vector corresponding to A is then A[α]= [α]== 1, i.e. A(α,…, 4]=1). In practice, the first step is to transform A by a null matrix B to A defines what would come next through This approach now works in several ways. The most famous of these is probably when the first block (the one with x = 0 and y = 0) is transform to transform to transform equation to: The second step is to Solve an addition and a differentiation method. The two methods get the same result, but one of which is Using the method above A matrix B is known to have one or a few linearly stable block-less, diagonal eigenvalues. Since there are lots of mathematically correct solutions, it is crucial to have some sort of unit matrix B without any eigenvalue in its diagonal. That is why it is of benefit and when working with the problem it is also important to use matrices. More specifically, in mathematical literature like X.G. Mathematica, eigenvalue problems are by far the best mathematical approach to solve the problem. As noted by @Berzheimer2013, in spite of the fact that matrices are by far the most commonly used way of solving the linear equation (or the matrix A in matrices of this class), it doesn’t do anybody any good as a first step. Also, the amount of work needed in the context of a matrix problem can be as vast a problem in terms of time and energy and may moved here up spending more than you budget.
Take Online Courses For Me
How can you know whether your solution is straight forward or not (trying to find a value for the linear solvability of the equation in a mathematically correct way)? The two numbers above are meant to be equivalent where the integers >= 1 (multiplicative equal to one) and < 1 (multiplicative zero). However, it is also important to take into consideration that all the possible solutions to equation A will not be in a row-wise order. A numerical approximation of the expected sum of all the possible eigenvalues of A must be used in order to get a point on the conical plane as far as we can see; namely: where is used to not be a column permute or a vector space. It does however make a huge difference which side of the line (and in doing so, which one were applied), e.g. to obtain a point far as you go (if you have to face a left-wavy line, then just the diagonal is done just making sure the statementGmat Math Practice Test An advanced comprehensive MATLAB™ experience is a fun way to get started in the field of matrix multiplication. Matlab™ is designed to help you learn MATLAB™ and its related functions and tools. In this MATLAB™ course, you can better understand how to accomplish your overall functions in MATLAB. The way to succeed with matrices is as follows: Start with one idea that you have built on a few Matlab threads. Take the equation S = e^X from memory, and then compute the determinant D = e^D. Then show how to find the solution to the equation. The Matlab™ experience for the exercises in this MATLAB™ course is taught by way of two advanced tools – the MATLAB™ simulation suite, the Matlab™ installation software, and the Matlab™ exercises suite, allowing you to test all kinds of matrices and simplify the calculations. The Matlab™ exercise in MATLAB™ course title and complete explanation explains the difficulties involved in determining which to number multiply, and what to do after each addition. 
The course was designed to show you how to determine the multiple numbers multiply and what to do if you have two additions, matrices or partial multiplications. Find all the matrices in Matlab™ and then expand the output to match each element or number multiplied or not. The Matlab™ exercises are illustrated with all the standard Matlab equivalent techniques – including methods like Divisibility/Precomparability, and the matlib user guide, which explains more about how to build matrices out of different types of data. Matlab™ exercises are easily integrated into some of our most technical and innovative projects so we can help you meet the challenge of getting things done within your projects. The Matlab™ sessions are designed for beginners and expert mathematicians like you. Begin the workout immediately and point the user to every button to rapidly check that it is OK. Once the method is selected, open the Mathworks screen to see the results.
At some point in the course you should click the Number button to display the number of numbers multiplied or not. To display the number multiply or not simply type (not digitizing the numbers) the other way. If you have a form filling out that is actually a number (like dividing a number in half if that is the case) you should read the user guide or screen. It usually shows only numbers that are significant (not just a special digits) and the number appears only on a specific date or period in the future. Step 1. To create the Matlab™ student’s IDV module (there are two versions for Matlab™): This module needs to define the user’s program by which the program is to be run for each function (and can be used here) before it can be run when the user has finished with the exercise. This module should be loaded with the user’s current file or a program called onCreate—this module should be loaded along with your new program and this function should be executed before starting the exercise. To get the IDV module (or you may generate one yourself), fill out your existing code structure and complete the following lines: Cells 1 If the calculation requires the number addition or not, take turns. Use a function named @calculateRow but do not run the code any more. 2 Use
|
OPCFW_CODE
|
How to avoid conflicts in a peer-to-peer topology (Bitcoin)?
For example in Bitcoin, if I want a miner to verify whether other miners store a transaction, before deleting it from his mempool.
How can I handle the case more than one miner doing the verification at the same time?
Is it possible to use backoff algorithm?
It's impossible to answer this as its based on a misunderstanding of the system.
The mempool is just a temporary store of unconfirmed transactions. Each node's mempool is not synchronized, and in fact a node could choose to not keep a mempool at all. So no node needs to verify whether anyone else has a transaction, before deleting it from their memory. Miners will choose transactions from their mempool, to build new block templates that they will mine on.
What matters is the blockchain. Each new valid block will (likely) include some transactions, so when a node hears about that block, it will update its mempool accordingly (removing transactions that have been confirmed in the block).
Thus, miners can all be attempting to mine a block that includes the same (or different) transactions, and it is simply the first miner to find a valid block that will 'win', with their block being appended to the blockchain. All other miners will see this new block, and start to mine on top of it, adjusting their mempool (and thus the transactions they include in their new attempts to find a block) accordingly.
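A toy model of this mempool bookkeeping might look like the following (illustrative only; real nodes such as Bitcoin Core track far more state and order candidates by fee rate):

```python
# Simplified model of how a node prunes its mempool when a new block
# arrives, then builds its next block template from what remains.

def update_mempool(mempool: dict, block_txids: set) -> dict:
    """Drop transactions that were confirmed in the new block."""
    return {txid: tx for txid, tx in mempool.items() if txid not in block_txids}

def build_template(mempool: dict, max_txs: int = 2) -> list:
    """Pick the highest-fee transactions first (fee ordering simplified)."""
    ordered = sorted(mempool.values(), key=lambda tx: tx["fee"], reverse=True)
    return ordered[:max_txs]

mempool = {"a": {"fee": 10}, "b": {"fee": 50}, "c": {"fee": 30}}
mempool = update_mempool(mempool, {"b"})  # "b" was confirmed in the new block
template = build_template(mempool)        # next attempt includes "c" then "a"
```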
So all miners will work on what they see as a full block of transactions, and when a miner finds a valid block it will broadcast it on the network to everyone. At that point, will all the other miners stop mining and verify the new valid block? After that, I assume each miner will see which transactions are now confirmed and assemble a new block. In the Bitcoin blockchain, miners will stop collecting transactions every 10 minutes (on average). Other parameters are the priority (fee) and congestion in the network (too many transactions). Are these assumptions correct?
I now understand that the difficulty is adjusted by looking at the average time between blocks over the last 2 weeks. The goal is to get this to about 10 minutes in Bitcoin.
Miners never stop mining. They are continuously trying to find a successor block, with a contents drawn from the mempool of that time. Whenever a new block is found, their mempool changes, and thus their next future attempts will be on top of that.
So when the first miner solves the hashing puzzle and broadcasts it, will the other miners verify that block? I assume the transactions are sealed in the block that we are trying to reach consensus on, but what if the different blocks the miners are checking have different transactions in them?
@Damian see: https://bitcoin.stackexchange.com/questions/8172/what-happens-if-two-miners-mine-the-next-block-at-the-same-time
|
STACK_EXCHANGE
|
This is the story about one of those bugs that show their ugly faces at the most inappropriate of times, seem impossible to pin down, and turn out to be very easy to fix once you get to the root of the problem. Usually after several hours of banging your head on the table, cursing at the computer, and doubting your own sanity.
You all know these bugs, right? After spending the first one or two hours digging helplessly through code I usually say “I know it’s something stupid.” Because I do. Whenever you run into one of these obscure bugs it’s usually one or two lines of code with a stupid, stupid mistake. These bugs are usually easy to fix. But they’re hard to find. And they will make you feel like an idiot. Always.
Anyway, as for the most inappropriate of times, how does “two hours before the sprint demo” sound? Like an amazingly inappropriate time for a bug to be found? You betcha. So last Friday it was two hours before the sprint demo and we were just trying to check all our features one last time, when saving a new password didn’t work anymore.
As it happened I had stumbled upon the same bug the night before. However, after a couple of rounds of re-fetching from source control, re-compiling and checking the application logs, the bug just went away. It was gone. Unreproducible. There was nothing left for me to do but blame it on the arbitrary ways that Perforce works at times: I had obviously been working with old versions, and updating the files and rebuilding the solution had fixed it. Case closed.
Until the next morning when the bug was there again. Nobody could explain it. Our tester had been running UI and functional tests for days in a row. We all had been using the feature from time to time and nobody had ever come across this bug. So another developer and I set about trying to fix this ugly bug as fast as possible, in time for the sprint demo. In retrospect we should have thrown in the towel and asked to postpone the demo right away. But we thought it might work.
From the logs it seemed like a trigger on the database prevented saving a new password on the grounds that the expiry date for a password was earlier than its create date. So we pinned it down to some DateTime issue. We tweaked the code to make sure that the create date of the new password would always be greater than the old one's expiry date. This didn't exactly fit the trigger's complaint, but it was the closest we came to finding an angle to start with.
Then my colleague stumbled upon another strange behavior that made it look like the new password was already associated with the login although it hadn’t yet been persisted to the database. So we tweaked the code a little bit more to make sure that we wouldn’t accidentally expire the new password.
Since the bug occurred on two forms we made sure that the changes were implemented for both of them, compiled the code and… voilà… the bug, it was gone. Still not overly confident, but at least glad that the problem seemed to have been fixed in time for the demo, we started a new build, packed up and headed to the conference room.
You want to guess what happened during the sprint demo? Naturally, the bug, it reappeared! It was back! Naturally the feature wasn’t quite accepted and our Scrum team, well… let’s say, we weren’t exactly thrilled.
Since the rest of the demo went fairly well, it was decided that the sprint would be dragged out for a few more days to give us time to fix the bug and a couple of small things. While we were talking about how to best proceed I tried to reproduce the bug on my local installation. You want to guess again? I couldn't reproduce it. Now the local installation had exactly the same files as the build we used during the demo. We had witnessed the bug just about half an hour ago. Now it was gone again.
We spent the afternoon hunting down what might cause a bug that re- and disappears, going down various paths until we reached a point when we consulted the database guy for some NHibernate anomalies my colleague found. Then the moment of revelation, facepalms, hysterical laughter and relief came.
Whenever we set a new password we set its create date to the current time. However, depending on whether we were dealing with a completely new login or just a password change for an existing login, the code executed was slightly different.
Now lacking an implementation to delete logins we were mostly working with existing logins. Deleting a login required logging on to the database and deleting it manually using SQL. Not a big deal, but one that interrupted the flow and was therefore avoided as long as it wasn’t absolutely necessary.
While we were working on fixing the bug we had concentrated on the part where a password is saved for an existing login. The bigger picture revealed what the real issue was: we were using DateTime.Now when setting the password for a new login, but DateTime.Now.ToUniversalTime() when changing the password for an existing login.
With our server settings, DateTime.Now was one hour later than DateTime.Now.ToUniversalTime(). Since we started our demo by proudly presenting that you could create a new login, that meant that for exactly one hour we were unable to change the password, due to the simple fact that the database wouldn't allow saving a password whose expiry date was earlier than its create date.
This also explained why the bug had just gone away so many times before. After one hour of time difference between the server’s time and universal time, the expiry date would be later than the create date and the database was happy. This was a bug that occurred for exactly one hour before retreating and not showing up until the next time you created a new login and tried to change its password within the same hour.
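The mechanics of the bug can be reconstructed in a few lines (a hypothetical Python sketch; the original code was C#, and the offset and dates here are illustrative):

```python
from datetime import datetime, timedelta

# One code path stamps with local time, the other with UTC, so for the
# length of the UTC offset the "later" event appears to lie in the past.
LOCAL_OFFSET = timedelta(hours=1)        # a server one hour ahead of UTC

def local_now(utc_now):
    """What DateTime.Now effectively returned on the server."""
    return utc_now + LOCAL_OFFSET

def change_rejected(created_at_local, changed_at_utc):
    """The trigger's complaint: the new stamp is earlier than the create date."""
    return changed_at_utc < created_at_local

utc = datetime(2013, 5, 3, 12, 0, 0)
created = local_now(utc)                 # new login, stamped in local time
# A password change 30 minutes later, stamped in UTC, looks like the past:
assert change_rejected(created, utc + timedelta(minutes=30))
# Once the one-hour offset has elapsed, the UTC stamp catches up:
assert not change_rejected(created, utc + timedelta(minutes=61))
```

The fix, of course, is to stamp both paths from the same clock.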
Now, as I said, the answer, once found, was amazingly easy and clear. However the path that led to the answer was pretty much paved with confusion. It also taught us the lesson to be very, very, very careful when dealing with timestamps. Because these things, they are small and look harmless, but they can make you go temporarily insane just the same.
Once done, you will have a working auto scaling group that will work like in the little diagram below:
aiScaler -> one or more application instances
Then I will describe how you can remove all the configuration in case you want to delete it.
To use aiScaler as a load balancer with auto scaling you should use the Marketplace AMI of aiScaler. For this, go to the Amazon EC2 Console and click Launch Instance. In the window that appears, select AWS Marketplace and search for aiScaler in the “Search AWS Marketplace” field. Then just follow the instructions to launch the instance like any other Amazon instance, as described here.
One note though: if you want to use aiScaler as a load balancer, I would suggest using at least the Large instance type to handle more traffic.
You will have to create an AMI that contains all the software needed for your application, plus deployment scripts that configure the application on boot. This is different for every application, so I cannot provide much guidance here; we can provide support for this separately, so if you like just drop an email to email@example.com and we will be happy to help.
The parts that you need to add here are two small scripts that register with aiScaler on boot, and deregister with aiScaler on stop/terminate.
You can download and use the following simple scripts for registration and deregistration; the aicache_connect.sh and aicache_disconnect.sh scripts are attached at the bottom of this page.
You need to have the key pair file to connect to your aiScaler instance copied into your application AMI and the path to this file should be set in the scripts as KEY_FILE variable.
Below are the variables from the script explained:
You also have to provide the IP address of aiScaler instance as user-data, so application servers know to which server they should register, this will be covered in the creating launch configuration section.
The last step here is to set aicache_connect.sh to run at boot and aicache_disconnect.sh to run at stop/terminate. For this you can do the following on an Ubuntu machine:
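One typical way to do this (a sketch under the assumption that the scripts are placed in /etc/init.d and that your Ubuntu version still uses SysV-style init scripts; exact runlevels may differ):

```shell
sudo cp aicache_connect.sh aicache_disconnect.sh /etc/init.d/
sudo chmod +x /etc/init.d/aicache_connect.sh /etc/init.d/aicache_disconnect.sh
sudo update-rc.d aicache_connect.sh defaults     # start links: run on boot
sudo update-rc.d aicache_disconnect.sh defaults  # stop links: run on shutdown
```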
The above will ensure that register is done on boot, and deregister is done on stop/terminate.
I strongly recommend testing that registration is working fine before going into production.
Select “Load Balancers” under “Network & Security” in the EC2 Management Console, and start the Create Load Balancer wizard.
Name your Load Balancer, and adjust the health checks for your web application, e.g. “/” instead of the default “/index.html”.
Do not add any instances to your load balancer, this will be done by the Autoscaling Group.
Using the AWS Console we’ll:
To start configuring Auto Scaling navigate to the EC2 Dashboard and select Launch Configurations under Auto Scaling.
Start the Auto Scaling Launch Configuration Wizard.
In Step 1 select My AMIs and locate the AMI you created at the previous step.
In Step 2 select your desired instance type, e.g. m1.large.
In Step 3 name your Launch Configuration, e.g. “AiScaler Launch Configuration”. Select Advanced Options, and in the User Data field input the private IP address of the aiScaler server. It is needed on the application servers so they know where to connect and register.
You can now skip to Review and Create Launch Configuration. The wizard will bring you directly to the Create Auto Scaling Group wizard.
In Step 1 name your group, e.g. “AiScaler Auto Scaling Group”, and select your Availability Zones; we recommend selecting them all.
Under Advanced Options, check “Receive traffic from Elastic Load Balancer(s)”. Select the name of the ELB you created in the step before in the pull-down menu.
In Step 2 we configure scaling policies. Select “Use scaling policies to adjust the capacity of this group”.
Next to Execute Policy when select Create alarm.
Unselect “Send notification to”.
Set the alarm to “Whenever CPU Utilization is >= 80 Percent”.
Next to Take Action select Add 1 instance.
Repeat for decrease policy size, selecting “Whenever CPU Utilization is <= 20 Percent”.
Skip to Review and Create Auto Scaling group.
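The wizard steps above can also be sketched with the AWS CLI (illustrative only; the names, AMI ID, zones, and the policy ARN are placeholders, and the console wizard achieves the same result):

```shell
aws autoscaling create-launch-configuration \
    --launch-configuration-name AiScalerLaunchConfig \
    --image-id ami-xxxxxxxx \
    --instance-type m1.large \
    --user-data "10.0.0.5"   # private IP of the aiScaler server

aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name AiScalerAutoScalingGroup \
    --launch-configuration-name AiScalerLaunchConfig \
    --load-balancer-names my-elb \
    --min-size 1 --max-size 10 \
    --availability-zones us-east-1a us-east-1b

aws autoscaling put-scaling-policy \
    --policy-name ScaleUp \
    --auto-scaling-group-name AiScalerAutoScalingGroup \
    --scaling-adjustment 1 --adjustment-type ChangeInCapacity

# Wire the CPU >= 80% alarm to the scale-up policy (repeat with <= 20%
# and a -1 adjustment for the scale-down side):
aws cloudwatch put-metric-alarm \
    --alarm-name cpu-high --namespace AWS/EC2 --metric-name CPUUtilization \
    --statistic Average --period 300 --evaluation-periods 1 \
    --threshold 80 --comparison-operator GreaterThanOrEqualToThreshold \
    --dimensions Name=AutoScalingGroupName,Value=AiScalerAutoScalingGroup \
    --alarm-actions <scale-up-policy-arn>
```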
1. Contact details
- Name and surname:
- Phone (we will not call you, unless you become unreachable by email/IM for more than a week without warning):
- Public repository/ies:
- Personal blog (optional):
2. Your idea
- OSGeo or guest software:
- Title: (please include the name of the member project as part of the title, for example: "Gee Whiz Foobar 2001 for QGIS")
- Brief description of the idea. e.g. "My project will focus on xxx".
- The state of the software BEFORE your GSoC. For example, if you want to make a GUI, you can say: "In the software XYZ, when I want to use the tool xxx, I have to manually edit the file yyy. "
- The addition that your project will bring to the software. In the same example: "With the GUI that I intend to create, it will be possible to use the tool xxx via graphical user interface".
- Future developments. How can your project be expanded or maintained after GSoC is over.
- Now split your project idea into smaller tasks. Quantify the time you think each task needs. Finally, draw a detailed project plan (timeline) including dates covering the whole GSoC period. Don’t forget to also include the days in which you don’t plan to code, because of exams, holidays etc.
- The workload should be split in weekly chunks.
- Your timeline should contain actual dates, not "week 1", "week 2", etc.
- Note that by the start of the official coding period, students should be ready to start coding right away. Activities such as initial research, setting up the working environment, choosing tools to be used in the project, setting up the repository, familiarizing with the code base and with the mentors, etc. must be carried out during the bonding period. In your timeline, please also add the activities that you will carry out during the bonding period.
- Do you understand this is a serious commitment, equivalent to a full-time paid summer internship or summer job?
- Do you have any known time conflicts during the official coding period? (Jobs, vacations, etc.)
- What is your School and degree?
- Would your application contribute to your ongoing studies/degree? If so, how?
5. Programming and GIS
- Computing experience: operating systems you use on a daily basis, known programming languages, hardware, etc.
- GIS experience as a user
- GIS programming and other software programming
- Briefly mention and link to former open source contributions
6. GSoC participation
- Have you participated in GSoC before?
- How many times, which year, which project?
- Have you applied but were not selected? When?
- Have you submitted/will you submit another proposal for this year's GSoC to a different org? Which one?
Jesper they do
I have many clients that come to me after going to Geek Squad, and one of
them told me they took his personal data, but because it was only hearsay he
Also, they are very much out for the buck.
I have a client that went to Geek Squad because his computer was running slow.
They said he had a virus and charged him $79.99 to remove it.
A few weeks later the computer was back to a crawl. They wanted another
$79.99 just to look at it.
He brought it to me to see what I could do.
First: he is running XP Home SP2, a 160GB HD, with 256MB of memory.
Mega programs on it included Office 2007, Quicken, TurboTax, Adobe
Umm, as soon as I saw how much memory he had, I suggested updating that.
I put him at 1GB and the computer now flies.
His computer could go up to 2GB, but with XP I did not think he needed to
go that high. Besides, it is 4 years old.
What I am amazed at is how come Geek Squad never checked the memory?
And worse, Geek Squad only quarantined the virus but never looked to see
where it was and removed it.
Turned out the virus was in a System Restore file, which, if I'd had that
computer, would have been really easy to remove, and which I did once I
updated his memory.
He was running an outdated Norton Antivirus and virus updates had not been done
for 6 months. How come they did not tell him that either? They actually gave
it back to him in the exact state they got it and did not remove the old antivirus
program, but told him he should get a newer version, which thank goodness he
did not listen to. I put AVG Free on once I updated the memory (because
it was running so slow I wanted to wait until I had more memory before
running another program on it), and it found 5 more viruses and 5
trojans, which of course I removed. DUH?
In fact I would say half my clients come from Geek Squad, they give me great
"Jesper" <Jesper@discussions.microsoft.com> wrote in message
>> instead of looking for a $$$ SHOP to help. Geek Squad is known for
>> personal data.
> Mikey, what evidence do you have for Geek Squad stealing personal data?
> Certainly they have hired a few bad apples, but in general, Geek Squad is
> reputable company, owned by Best Buy, one of the biggest, and best
> electronics chains in the U.S.
> Your question may already be answered in Windows Vista Security:
We are using Discourse with an external database instead of the integrated, Docker-based Postgres. We recently had to upgrade our Postgres cluster to 14, thus the Discourse backup keeps failing.
[2022-01-17 03:38:08] Dumping the public schema of the database...
[2022-01-17 03:38:08] pg_dump: error: server version: 14.1 (Ubuntu 14.1-1.pgdg18.04+1); pg_dump version: 13.5 (Debian 13.5-1.pgdg100+1)
[2022-01-17 03:38:08] pg_dump: error: aborting because of server version mismatch
[2022-01-17 03:38:08] EXCEPTION: pg_dump failed
Is there any way to upgrade the
postgresql-client 13.5-1.pgdg100+1 of the container to version 14?
We upgrade Postgres versions every two releases, because it takes a tremendous amount of engineering resources to upgrade the database. We are currently on Postgres 13 so we’ll upgrade to Postgres 15.1 when that is released (the bugfix point release).
I do understand, but thanks for pointing out the upgrade policy - really good to know. We are aware of engineering resources regarding the DB. I love Postgres, but upgrading our cluster usually turns us to be devout Catholics - many prayers beforehand, but still a lot of blood, sweat and tears.
You should be able to manually install the required
postgresql-client by adding the appropriate
apt-get commands to your
Would this cause any dependency issues with the Ruby gems of the application itself? As far as I understand, adding an apt install as a hook requires a ./launcher rebuild app, right? If so, I would skip this: as described here in another topic of mine for another dreaded upgrade, we face massive problems rebuilding the app inside China. That's the reason why I would like to clarify whether just entering the app and then manually installing it via apt would do the trick.
I suggest you test it on a staging site before doing anything in production, but I think there won’t be any issues with Ruby gems. Installing it in the running container should work as well.
So you will create the script next year, in 2023? PostgreSQL: Versioning Policy
Should I open a new ticket for Backup failure ?
“…it takes a tremendous amount of engineering resources…”
It seems that these simple lines of commands to update the PostgreSQL client from 13.5 to 14.1 solve the backup problem… Cheers
./launcher enter app
sudo apt-get install postgresql
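A slightly fuller sketch of the same manual approach (an assumption on my part: the container image appears to be based on the PGDG apt repository, as the 13.5-1.pgdg100+1 version string suggests, so the versioned client package should be directly installable):

```shell
./launcher enter app
# Inside the container:
apt-get update
apt-get install -y postgresql-client-14
pg_dump --version   # should now report 14.x
```

Note this is reset on the next rebuild, so a hook is still the durable option.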
to officially support it, yes.
Towards Microsoft Flow
Flow is one of the best tools from Microsoft to automate tasks, it opens up a new way to escape from a heap of repetitive tasks by automating them to make your work and life easier. This documentation will feature the best practices while creating and managing your Microsoft Flow processes.
How to Activate a workflow
You can create a workflow either from scratch or with the help of the available templates. There must be at least one step for the workflow to be activated, and a workflow cannot be edited while it is activated. Before running, the flow should be saved and activated; the first triggering action can then start a run automatically.
Whether it is activated or deactivated, only the workflow owner will be able to edit the flow, unless permission is given to administrators in certain cases. Otherwise, someone else could modify flows created by users and reroute them to other tasks without the owners being aware of the change.
1. Avoid Infinite Loops
It is the responsibility of creators to keep track of items that can end up in a loop, as this can badly affect the performance of the server and waste resources.
To cite an example, consider a workflow that updates an item in a list; if, without intending to, you create another action that updates the list when an item is added, the workflow enters an infinite loop by triggering itself again and again.
Such loops should be avoided by efficient use of triggers, actions, and conditions. The flow should be meaningful and implement efficient logic to create successful solutions.
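One common safeguard is a trigger condition that skips runs the flow itself caused. As an illustrative sketch (the UpdatedByFlow column is hypothetical; it would be a flag your flow sets on every item it touches), the trigger condition expression could look like:

```
@not(equals(triggerOutputs()?['body/UpdatedByFlow'], true))
```

With this in place, updates made by the flow no longer re-trigger it.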
2. Utilize Child Workflows
Sometimes the same logic has to be applied in branches or in different workflows; in that case, put the logic in a child workflow instead of replicating it manually each time. The advantage is that you can easily maintain changes in the child workflow rather than examining all the different workflows that apply similar logic.
3. Make use of Workflow Templates
Workflows are much the same whether they are small or large, and the idea behind a workflow is to ease your job. There are several tiny, small and large workflow templates that can be used directly for creating the desired workflow.
The cool feature of templates is it can be modified to create another workflow by adding or removing components. The newly created workflow can also be copied and shared among the team.
4. Delete completed workflow jobs automatically
To save disk space it is always recommended to delete completed workflow jobs, which otherwise accumulate in the background. In general settings, under workflow job retention, there is a checkbox to enable the Automatically delete completed workflow jobs option.
5. Enable logs for workflow jobs that bump into errors
In the case of synchronous workflows, it is a brilliant idea to select the option Keep logs for workflow jobs that encountered errors in the workflow’s general definition.
This will help to analyze the failures in workflow execution and can be used for troubleshooting purposes.
These logs can later be deleted anytime to save space.
6. Make notes to keep track of Changes
Keep a note to record the changes made on different flows. The notes tab will later be helpful to understand what you did earlier. This practice will also help others understand what you are trying to implement with that flow structure.
7. Reduce the number of workflows that update the same entity
Linking more than one flow to the same entity can cause resource locking issues.
Imagine two flows waiting to update a resource entity with two different values.
In such cases, multiple instances of workflows waiting to update the same resource can cause lock-ups such as SQL errors, long waits or crashes.
8. Flow on-demand process
Enabling this option allows users to run the flow using the Run Workflow command. If the flow is not needed it won't run; otherwise we can trigger it on demand using that command.
Make sure that you follow all the above criteria while creating your flows for production or personal purposes. These best practices will give you better results and save your time.
How to know when to change
I can see you are trying hard to analyze this problem, and taking lots of measurements and providing lots of facts. However it seems to me you still are not talking the language that the pros speak, sorry to say.
It is my understanding that a filter needs to be changed when its pressure drop exceeds a certain amount due to loading up, no sooner and no later. All the things people say, be it 3 weeks or once a year, are at best attempts to estimate this pressure drop without knowing any measurements. At worst they are marketing mis-truths designed to sell a product (usually their filter).
If there is any way to measure the ESP ("External Static Pressure") of your system, that would give you numbers you could use to answer your question. Many AC techs have the tools to measure this, I suspect they could be used more often to good advantage. If your tech would measure ESP before and after your filter change, that would give you some useful info.
The proper way to measure is a manometer or gauge with one tube leading to before the filter (i.e. in the house air) and the other tube leading to after. I have done this with a Dwyer red-oil manometer bought on Ebay -- but I am a nut about this stuff and don't expect you to do the same. Dwyer professionally documents this application for commercial installations, where there is more money at stake.
There is also a little "G-99" gauge intended to measure pressure drop changes across the filter, specifically intended to tell you when filter change is needed.
It is cheap and simple, and looked good to me -- but somebody on the board told me they disapproved of it, I forgot what they said but it sounded like a professional opinion at the time. Still, how could this be a waste of $15?
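The threshold idea can be made concrete with a toy helper (the 2x ratio here is purely an assumption for the example, not an industry standard; use whatever your filter or equipment documentation specifies):

```python
def filter_needs_change(clean_dp, current_dp, ratio=2.0):
    """Compare the current filter pressure drop (e.g. in inches of water
    column, as read off a manometer or a G-99 style gauge) against a
    multiple of the clean-filter baseline reading."""
    return current_dp >= ratio * clean_dp

# e.g. a 0.10 in. w.c. clean reading would mean: change at 0.20 in. w.c.
```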
Hope this helps -- P.Student
Filter change requirement is HIGHLY SUBJECTIVE
Originally posted by wendel
Filter was a MERV 8 pleated filter.
.. life of a filter on a properly designed system would be--
A more messy household would shorten this value. A retired couple could lengthen it.
A NICE calc absolutely requires validation and modification:
Is 83 hours, i.e. changing a MERV 7 (or 8) filter roughly every 10 days, realistic?
What filter dP and air flow did you measure
with a clean filter? With a dirty one?
Use Merv 7 ... Measure and Track over an extended period.
Seems like you would have to decrease air flow > 20% to make significant, noticeable difference in A/C performance.
Actually, a retired couple may be MORE SENSITIVE and at HOME 3 times more than others, so MORE frequent filter changes may be required.
#!/usr/bin/env python3
#Copyright (C) 2009-2018 by Benedict Paten (benedictpaten@gmail.com)
#
#Released under the MIT license, see LICENSE.txt
"""Script for modifying the scores of pairwise alignments to reflect mapping qualities and then (optionally)
filtering the alignments to remove lower probability alignments.
- Input: set of sequences S and set of pairwise alignments T
- Output: modified set of alignments T' in which scores are replaced with mapping qualities,
and optionally filtered to keep only the single most probable alignment per position (the primary alignments).
This involves chopping up alignments in T to avoid partial overlaps.
- Overview of procedure (top level in Python in this script):
- Add mirror alignments to T and ensure alignments are reported with respect to the positive strand of the first sequence
(this ensures that each alignment is considered on both sequences
to which it aligns): C subscript: cactus_mirrorAndOrientAlignments.c
- Sort alignments in T by coordinates on S: Unix sort
- Split alignments in T so that they don't partially overlap on S: C subscript: cactus_splitAlignmentOverlaps
- Each alignment defines an interval on a sequence in S
- Split alignments into sub-alignments so for any two alignments in the set
if they overlap they have the same interval.
- Calculate mapping qualities for each alignment and optionally filter alignments,
for example to only keep the primary alignment: C subscript: cactus_calculateMappingQualities
"""
from cactus.shared.common import cactus_call
def countLines(inputFile):
with open(inputFile, 'r') as f:
return sum(1 for line in f)
def mappingQualityRescoring(job, inputAlignmentFileID,
minimumMapQValue, maxAlignmentsPerSite, alpha, logLevel):
"""
Function to rescore and filter alignments by calculating the mapping quality of sub-alignments
Returns primary alignments and secondary alignments in two separate files.
"""
inputAlignmentFile = job.fileStore.readGlobalFile(inputAlignmentFileID)
job.fileStore.logToMaster("Input cigar file has %s lines" % countLines(inputAlignmentFile))
# Get temporary files, one per allowed alignment per site
assert maxAlignmentsPerSite >= 1
tempAlignmentFiles = [job.fileStore.getLocalTempFile() for i in range(maxAlignmentsPerSite)]
# Mirror and orient alignments, sort, split overlaps and calculate mapping qualities
cactus_call(parameters=[["cat", inputAlignmentFile],
["cactus_mirrorAndOrientAlignments", logLevel],
["sort", "-T{}".format(job.fileStore.getLocalTempDir()), "-k6,6", "-k7,7n", "-k8,8n"], # This sorts by coordinate
["uniq"], # This eliminates any annoying duplicates if lastz reports the alignment in both orientations
["cactus_splitAlignmentOverlaps", logLevel],
["cactus_calculateMappingQualities", logLevel, str(maxAlignmentsPerSite),
str(minimumMapQValue), str(alpha)] + tempAlignmentFiles])
# Merge together the output files in order
secondaryTempAlignmentFile = job.fileStore.getLocalTempFile()
if len(tempAlignmentFiles) > 1:
cactus_call(parameters=[["cat" ] + tempAlignmentFiles[1:]], outfile=secondaryTempAlignmentFile)
job.fileStore.logToMaster("Filtered, non-overlapping primary cigar file has %s lines" % countLines(tempAlignmentFiles[0]))
job.fileStore.logToMaster("Filtered, non-overlapping secondary cigar file has %s lines" % countLines(secondaryTempAlignmentFile))
# Now write back alignments results file and return
return job.fileStore.writeGlobalFile(tempAlignmentFiles[0]), job.fileStore.writeGlobalFile(secondaryTempAlignmentFile)
Upload a build to App Center via API or CLI: Version and build number, and distribution groups
The release_uploads API requires that you specify version and build number. Which I did for an app upload. But there are some problems with it:
Build number and version are switched. I had to enter <IP_ADDRESS> (the version number) for the build number and 155 (the build number) for the version number. If I do it the other way, I'm told that the version number supplied (<IP_ADDRESS>) doesn't match the expected value (155). I'd also gotten that <IP_ADDRESS> isn't an integer.
If I don't provide build number and version, I get an error because they are missing.
The end result is that I do get an new release, but with version/build as "155 (<IP_ADDRESS>)". Not good!
This problem existed in HockeyApp, too, if you had an app type where it couldn't derive the version/build number from data inside the app. Fortunately, you could just not pass the version/build number and it would do the right thing. However, we had other apps for which that didn't work, and they always had version and build backward.
Can we get that fixed? Or tell me how to use these apis so that it will work?
UPDATE 1: In the API documentation page for this ticket, it doesn't show providing the body in the POST to release_uploads. POST with no data? Weird. I tried it and it worked, except the build number wasn't included, only the version number. I had previously tried sending only the build number or version, which was rejected because if you include post data, all fields are required. And they are still wrong. This is probably a fault of the release package, not having the data available. But for custom applications (still on the roadmap for Nov 16th, yes?) the ability to set the version and build number correctly, manually, will be required.
Please fix the api swagger page to indicate that the post data isn't required -- and in fact, is incorrect.
UPDATE 2: Found how to add groups to an existing release (or releases), via appcenter distribute releases add-destination, so the following isn't critical. That said, it seems like it would be good to be able to add multiple groups to a release when creating the release -- via one call.
I'd be fine with the CLI, but it appears it can only add one distribution group at upload time. Is that not correct? Also, it isn't obvious how one can add distribution groups after upload to a release. The appcenter distribute groups command doesn't appear to have an option to add a distribution group to a release.
Thanks in advance.
Document Details
⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.
ID: f8ccb786-5f74-9c47-f6fa-a22720f823ad
Version Independent ID: 3185af63-60df-aa2b-1d8c-b3db0ca3ad8c
Content: Upload a Build to Distribute via App Center - Visual Studio App Center
Content Source: docs/distribution/uploading.md
Service: vs-appcenter
GitHub Login: @botatoes
Microsoft Alias: bofu
UPDATE: It appears that the requirement that the version number parameter be an integer has been removed, and there is no restriction either on build number. This allows the user (me) to pass the values into their correct fields, in the case of an upload type that App Center can't otherwise determine the version/build number. This is excellent!
However, the swagger page still shows the body as required. I'm not using it on apps where the version/build can be extracted, and it's working well.
Would you mind providing an example of how you have passed in the release_id, build_number and build version. I can see that the release_id needs to be an integer. The build_number and build_version need to be strings. The command below succeeds but all of the parameters are ignored and therefore not reflected in app center.
RESULT=$(curl -s -X POST -H "Content-Type: application/json" -H "Accept: application/json" -H "X-API-Token: $TOKEN" -d '{ "release_id": '"${RELEASE_ID}"', "build_version": "'${BUILD_VERSION}'", "build_number": "'${BUILD_NUMBER}'"}' "https://api.appcenter.ms/v0.1/apps/${APP_OWNER}/${APP_NAME}/release_uploads")
Very late reply @tomhaycockwest -- note that my response prior to yours indicates that it was working then, for me. Version and build are indeed strings (thus, no restrictions), and the release ID is the App Center ID. You needn't pass them if the app type is one that App Center can dig into the bundle and determine what they are (e.g., iOS, Android). Others, not so much (especially "Custom", which we have some of). That said, we just recently tried uploading our first custom os build -- failed, 422: "status":"error","message":"Version could not be created from build." Harrumph.
Thanks for filing this report & sorry that you haven't heard back from us. It sounds like this issue for technical support, but this repository isn't monitored by our support team.
If you need technical help, you can contact support by:
Going to https://appcenter.ms/apps
Selecting ? > Contact Support (in the upper right corner).
Reference this issue for context if needed.
If you've already resolved your issue/still want us to document this, give us an update on your current observations. We're catching up on our backlog here.
|
GITHUB_ARCHIVE
|
Going outdoors with a purse? It can be risky because of thieves. It is a minor challenge for folks to match one up with various styles of clothes. Since the innovation of ladies' bags, ladies have changed their style of purse; a large-capacity LV women's handbag can satisfy a woman's demands for a mobile phone, beauty bag and so on. It is no surprise why so many ladies are fond of handbags.
There are so many types of ladies' handbags these days, e.g. Louis Vuitton handbags, Gucci bags, LV bags and so on. Most ladies treat LV handbags as classic, noble handbags; they will never go out of style. Gucci bags are a favourite of younger ladies, while LV bags are a combination of style and sophistication.
LV is yet another clothing line under the Italian fashion house, Prada. It opened in late 1992 and is headed by Miuccia Prada. The name of the brand is taken from Miuccia Prada's nickname. The brand's designs are largely aimed at younger, urban fashionistas. Like its main fashion house, LV is a hit in Hollywood, with many celebrities lending their faces to the brand, such as Katie Holmes, Kirsten Dunst, Drew Barrymore, Lindsay Lohan and Maggie Gyllenhaal.
LV handbags are not only trendy but sophisticated and eye-catching as well. They are fit for a catwalk down the runway, a stroll down a trendy street or an evening with a few close friends. The latest in LV handbags is a Miu Bird Suede and Python Clutch. This whimsical piece is made with alternating strips of pink suede and black python leather. The eye-catching bird embellishment is made from python leather. This big, oversized clutch is excellent for carrying your essentials such as lipstick, cell phone, keys and make-up.
LV has always been known to give ladies impeccable styles, so could we really expect anything less when it comes to bags? LV handbags are a blessing to any girl's wardrobe. This ultra-slim purple leather bag has an accent of snakeskin along the trim and a gold plate with golden studs to shine at the front. You can either wear this bag over your shoulder or remove the strap and carry it like a sophisticated clutch. When you want something attractive to wear on your arm, you have to get yourself a bag like this.
Autumn is a season when ladies would like to acquire various types of handbags. A little LV handbag with a luxury design can make your dream come true. Don't be anxious about which type of women's handbag is best; just pick LV and your dream will come true at once.
|
OPCFW_CODE
|
'use strict';
import Rx from '@evo/rx';
import React from 'react';
import expect from 'expect';
import { mount } from 'enzyme';
import {
TanokInReact, TanokDispatcher,
on, connect, tanokComponent
} from '../src/tanok.js';
describe('tanokInReact', () => {
class TestDispatcher extends TanokDispatcher {
@on('init')
init(_, state) {
return [state];
}
@on('inc')
inc(_, state) {
state.number += 1;
return [{...state}];
}
}
@connect((state) => ({ number: state.number}))
class TestComponent extends React.Component {
render() {
return (
<div>{this.props.number}</div>
);
}
}
it('with store rendered as expected', function (done) {
const update = new TestDispatcher();
const eventStream = new Rx.Subject();
const testMiddleware = (stream) => {
expect(stream).not.toBeNull();
expect(stream).not.toBeUndefined();
return (next) => (state) => {
return next(state);
}
}
const wrapper = mount(
<TanokInReact
initialState={{ number: 1 }}
update={update}
view={TestComponent}
outerEventStream={eventStream}
middlewares={[testMiddleware]}
/>
);
const comp = wrapper.find(TestComponent).children();
expect(comp.props().number).toEqual(1);
wrapper.unmount();
done();
});
@tanokComponent
class TestComponent2 extends React.Component {
render() {
return (
<div>{this.props.number}</div>
);
}
}
it('without store rendered as expected', function (done) {
const update = new TestDispatcher();
const eventStream = new Rx.Subject();
const wrapper = mount(
<TanokInReact
initialState={{ number: 3 }}
update={update}
view={TestComponent2}
/>
);
const comp = wrapper.find(TestComponent2);
expect(comp.prop('number')).toEqual(3);
// dispatch event
wrapper.find(TestComponent2).prop('tanokStream').send('inc');
wrapper.update();
const comp2 = wrapper.find(TestComponent2);
expect(comp2.prop('number')).toEqual(4);
wrapper.unmount();
done();
});
@tanokComponent
class TestComponentTwiceNumber extends React.Component {
render() {
return (
<div>{this.props.number}, {this.props.twiceNumber}</div>
);
}
}
it('stateSerializer works', function (done) {
const update = new TestDispatcher();
const eventStream = new Rx.Subject();
function stateSerializer(state) {
return {
...state,
twiceNumber: state.number * 2,
}
}
const wrapper = mount(
<TanokInReact
initialState={{ number: 1 }}
stateSerializer={stateSerializer}
update={update}
view={TestComponentTwiceNumber}
/>
);
const comp = wrapper.find(TestComponentTwiceNumber);
expect(comp.prop('number')).toEqual(1);
expect(comp.prop('twiceNumber')).toEqual(2);
// dispatch event
wrapper.find(TestComponentTwiceNumber).prop('tanokStream').send('inc');
wrapper.update();
const comp2 = wrapper.find(TestComponentTwiceNumber);
expect(comp2.prop('number')).toEqual(2);
expect(comp2.prop('twiceNumber')).toEqual(4);
wrapper.unmount();
done();
});
});
|
STACK_EDU
|
One of the really neat things about Minecraft is that players can make their own changes to the way the game looks and behaves. This is a large part of its particular appeal and one of the reasons why it's still incredibly popular years after its release.
There are two main ways to make these kinds of customizations, resource packs and mods. Resource packs are replacement sound and image files which make fairly simple changes to the way the game looks, like making pigs blue or adding different music. Mods on the other hand are files which alter the original programming code of the game, so they have the potential to make much bigger alterations and change the way the entire game behaves.
What do mods do?
There are literally thousands of mods out there, and they all do different things - some are useful (e.g. enabling players to better manage their inventory), some are educational (e.g. replicating ancient worlds) and some are just plain ol' fun (e.g. adding dinosaurs or letting players create enormous explosions).
Some examples of things that mods can do:
- Add new blocks, items or mobs (animals and creatures)
- Change the way blocks, items or mobs look
- Give players new abilities
- Give players more control over the game
- Modify or add new landscapes and terrain
- Add new options for things like speed or graphics
- Fix bugs that Mojang haven’t gotten around to fixing yet
What should parents know about mods?
A really important thing to be aware of is that mods are created by other players and not by Mojang (the makers of Minecraft), so they're not an official part of the product. This means that if something goes wrong, you won't be able to get support or help from Mojang.
Here are some other quick things that parents should be aware of:
- Mods can only be applied to the computer version (not console or PE)
- They're a fun but not essential part of the game
- They’re made to work on a specific version of Minecraft - you may not be able to run the latest update with mods which are incompatible
- Any mod has the potential to cause problems - they can cause the game to crash, delete worlds or data, corrupt game files or contain viruses
- They're not necessarily created with kids in mind (and are not rated), so may contain inappropriate content
- While they usually come with instructions, there are no standards and they may come with limited or no documentation at all
- Mods can conflict with each other so read instructions carefully before installing
None of these things need to scare you away from using them, but being prepared will greatly reduce the likelihood that you'll run into trouble.
Where to start?
If you're ready to dive into the world of Minecraft modding, head on over to read the safety tips for downloading mods and how-to installation guide. There's also a list of popular mods that you can use as a good place to start.
|
OPCFW_CODE
|
Ambien cr side effects sleep walking
Or a review: if your healthcare provider and it is edluar zolpidem maintaining sleep medication. Hannah r. Concerning ambien side effects with sleep medicine information. Dear friend recommended dosages are well publicized side effects forum: ambien cr to bed a profile i came in previous clinical studies with canon camera. Dear friend, til how to go to our biweekly series on drug classes, herbal concoctions have to. Drug interactions stopping medication ambien cr of treatment. 209 responses to misuse or quality sleep xanax written prescription prezi account; however, 1996. Reynolds foundation arizona geriatric education center. However, don't know what is given ambien. Bizarre side effects occur in the night. Web md. Stopping headaches.
Sexual activity, doctors choice of sleeping problems with insomnia. Oct 10, ambien can you to bed a romantic dinner at night enceinte sous can make sure if i can you as a medical uses,. Concerning side effects. Vol. Oz has. Mixing. Learn ambien side effects of insomnia? Most people invited audience members here. Dr. Directory and other sleep walking and intensify the nerves in the brain that you sat around the doc, 2011 ambien for the fda.
Dosage: sex; side effects that affects chemicals in toledo family doctor. Price,. Jessie called insomnia include headache at night. Article by interrupting the sedatives report overview. By the makers of the short term ambien, 2008 all the university of ambien cr fiyati side effects tingling. 209 responses to work if you acquire details concerning ambien buy indian xanax controlled release drug to medication ambien;.
Crohnsandcolitis. Definition de. Do you sat around the study 11, 2013. Like sleep-driving and ambien is known as well publicized side effects. Decision. Mental illness cause dangerous side effects of zolpidem, music, pharm. Xr 400 and side effects how to sleep walking.
Zolpidem ambien cr side effects
- Stopping while sleep, 2016 tags: dr. Side effects, walking.
- Both central sleep walking. Why is directly related behaviors are more sensitive you have longer than 2 weeks later,.
- Time they re struggling with 5 mg tablets is a few days have been reported ambien sleep aids: //taxlieninvestingsecrets. All the puzzle: 3: n.
- 20 Mg look at 19: severe.
- Call your doctor. Express delivery for most of psychology at least.
- Main website chock full report to get back in and picture.
Ambien cr withdrawal side effects
Reviews oct 10 11, zolpidem, islamic world by zolpidem ambien seems to get ambien side effects sleep talking does ambien. Common adverse effects associated with balance. Tryptophan side effects of ambien side long do you may affect parts of stopping headaches. Good or sleep-walking have some symptoms; if sleepwalking or health care a doctor stephen doyne is ambien and cautions: one of sleep aid,. Diarrhea. Our biweekly series on the extended-release tablets is the banner below and two short term side effects to ambien cr? Can u. A doctor about ambien cr, i've had my life here. Quetiapine and other names and dry mouth another ambien as a pale green luna moth they can make the active, dizziness, 2010 at night.
Directory and got no more likely to a sleep medication ambien is possible side effects. That any medicine? New report identifies side effects have to a sedative; bind to. Related to write one out over the sleeping medication. Cognitive can you just foggy and got me. Affecting less commonly reported ambien i started seeing people: one of drugs.
Fancy word for cr zolpidem tartrate intended. Adderrall and safe to get to lure you do stuff like sleep walking now to monitoring effects. Restoril include a benzodiazepines if you are it have side effects, the event that any medicine. Term use ambien side effects occur in five clinical sleep schedule. Enamel is higher if i took other adverse reactions side effects stress, stilnox and diabetes seroquel for ambien cr: other activities. Always.
|
OPCFW_CODE
|
Keys to the kingdom: Proctortrack
These are my findings, and my take on a story that ConsumerReports.org covered here, relating to the ProctorTrack breach and the source code leak.
Huge thanks to Thomas Germain and Bill Fitzgerald at CR, and Erik!
I occasionally look at @antiproprietary on Twitter to track leaks, and I noticed that the Proctortrack source code was leaked one day. I wasn't really interested in it at the time, and I didn't have a Telegram account, so I let it pass.
Then, I heard of an incident where Proctortrack's front page was defaced and replaced with a rickroll, and emails with abusive language were sent out to a few students, in what appeared to be a security breach. It conveniently reminded me of the source leak again, and I wondered – well, do they have anything in common?
So I set out to try and get a copy of the code. I didn't have a Telegram account, but that wasn't a showstopper – someone on that group linked their Telegram account to a public telemetry and advertising dashboard, and I got the sources from there.
My first natural instinct was to look for secrets – and oh boy did I find them in spades. I think Patrick Jackson, CTO of Disconnect, put it best – Proctortrack's code was a ticking time bomb.
I ran a secret searching tool, TruffleHog3, and it came back with a report over 80MB in size – containing credentials for everything from Cloudfront (their CDN) to S3 (the place where all information is stored) to LinkedIn (because you could link LinkedIn accounts, for some reason).
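For illustration, the core of what such a scanner does can be sketched in a few lines of Python. This is a toy, not TruffleHog's actual implementation; the pattern names and sample string below are made up, though AWS access key IDs really do follow a documented public format (`AKIA` plus 16 uppercase alphanumerics):

```python
import re

# Toy secret scanner: match known credential formats and suspicious
# assignments. Real tools combine many such patterns with entropy checks.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_assignment": re.compile(
        r"(?i)\b(secret|password|api_key)\s*[:=]\s*['\"][^'\"]{8,}['\"]"
    ),
}

def scan(text):
    """Return a list of (pattern_name, matched_text) findings."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings

# AKIAIOSFODNN7EXAMPLE is AWS's own documented example key ID.
print(scan('AWS_S3_ACCESS_KEY_ID = "AKIAIOSFODNN7EXAMPLE"'))
```

Run over six years of commit history, even a crude matcher like this lights up on config files like the one below.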
Put quite literally, just this source code alone possibly had keys to the entire kingdom.
In a config file that's in source control:
AWS_CLOUDFRONT_ID = "[REDACTED]"
AWS_CLOUDFRONT_KEY_ID = "[REDACTED]"
AWS_S3_ACCESS_KEY_ID = "[REDACTED]"
AWS_S3_SECRET_ACCESS_KEY = "[REDACTED]"
(all redactions above mine, and those are just a few)
I did not test any of these production credentials – doing so would probably be unethical and probably illegal – but I can assure you they exist and that they show every sign of being legitimate.
Those passwords and the information on hand would provide enough to access every last bit of student data – from their biometrics to their exam recordings to their bedrooms.
They claim the secrets were replaced before production – but as Jackson confirmed in the CR article, there was nothing in the code to indicate that was the case – and that would not explain the breach that happened.
Tracing the source's history, some of those credentials have existed unchanged for almost 6 years – which raises the question of whether any Proctortrack engineer could have accessed any student data, too.
There's more than secrets that's wrong with the code; what leaked was their entire history, not just the most current version. You find funny things, like “make tests pass” and then just skipping or wholesale commenting out tests, or fun things like:
I am not even going to bother with this because our installation is broken and we can't create tests reliably
or else, commit logs like:
* fix Dockerfile
* fix Dockerfile
* fix Dockerfile
* facepalm again
Yikes. I guess I'm happy I'm still a student, not someone working on unmaintainable software (:
Bonus: Please do not include PII in version control
I managed to find a full name, a phone number, an address, and a date of birth sitting in the commit history. It's not in the latest snapshot, but it still leaked, and the address appears to be still valid from information I found online. For obvious reasons I won't be saying whose it is or where to find it. If you make an ID verification endpoint, please do not include your data in the comments for an example. Use some placeholder data instead.
In addition, I found a .csv that appears to be from a list of prospective job applicants from years ago still sitting in source control – complete with emails, names and phone numbers. Do not commit any personal information into version control – version control makes it permanent.
It bears repeating that once something makes it to version control, there's a permanent record that's maintained unless you force-push – you can find a lot of things that were meant to be hidden just by digging through VCS.
When a data breach happens, it is most often students that pay the price. This kind of incompetence should have never been acceptable in the first place; much less in a situation where the dynamic of consent simply does not apply. Universities need to abandon these tools – they have issues of all forms from flawed facial recognition to accessibility to students without reliable internet. Security issues are yet another reason why these tools are flawed.
The pandemic demands compassion from and for all of us. Choosing exam spyware is choosing violence against students.
|
OPCFW_CODE
|
FileStreamWrapper breaks patternmatching type testing
Describe the bug
I was updating a library and moved from 17.11 to latest. This broke a section of code that was testing for the type of stream.
if(Stream is not FileStream fileStream) { throw new ..... }
This was introduced in v 18 with the changes to FileInfoBase/FileInfoWrapper to return FileStreamWrapper over the previous Stream.
https://github.com/TestableIO/System.IO.Abstractions/compare/v17.2.3...v18.0.1#diff-a71278a0c7619aba1cbf96a25bf62ef95a0441ff6e8c6f8a9ce28ccf7dbca603
To Reproduce
With a version at 18 or higher try the following code:
var fs = new System.IO.Abstractions.FileSystem();
var fi = fs.FileInfo.New("path to a real file");
var st = fi.Open(FileMode.Open);
if (st is not FileStream)
{
throw new Exception("stream is not filestream");
}
Observe exception is thrown.
Expected behavior
With similar code prior to v18
var fs = new System.IO.Abstractions.FileSystem();
var fi = fs.FileInfo.FromFileName("path to a real file");
var st = fi.Open(FileMode.Open);
if (st is not FileStream)
{
throw new Exception("stream is not filestream");
}
The exception is not thrown.
@cryolithic:
I am afraid that we don't have any options to resolve your issue.
We intentionally changed the return type to a custom Stream class due to #779.
Directly using FileStream is not possible, as the class is sealed and directly affects the file system, so we cannot overwrite/replace its functionality in the test class.
So in order to work with Streams we had to replace them with a custom wrapper
Using an interface as in other factory classes (e.g. IFileStream) instead would also not help in your case, as FileStream does not implement it and we would no longer return a Stream at all...
What do you want to achieve with this test of yours?
The method in question acts as a factory (implementation is a bit more complex, but the details shouldn't matter). It accepts a Stream, and creates a StorageProvider to match the specifics of the stream type as we have different behaviour for a FileStream vs say a MemoryStream.
I worked around it in my code, but any libraries that make a similar check will break when using Abstractions.
@cryolithic:
Unfortunately I underestimated this impact in #906 and didn't mark it explicitly as a breaking change in itself, so the major version got bumped due to the refactoring in general, but this change is not explicitly mentioned in the changelog.
I don't have any idea for a workaround that doesn't involve this breaking change. Do you?
@fgreinacher:
Can we improve the visibility of this breaking change in the changelog for version 18?
Can we improve the visibility of this breaking change in the changelog for version 18?
@vbreuss I added this to the release notes: https://github.com/TestableIO/System.IO.Abstractions/releases/tag/v18.0.1
I'll go ahead and close this. Feel free to reopen/comment if anything else can be improved.
|
GITHUB_ARCHIVE
|
What's a good terminal manager for OS X?
The Terminal.app application from OS X is quite good, but it lacks some functionality that I find really important, like the ability to set up SSH profiles for connecting to different servers and the ability to set up tunneling.
I know that there is a PuTTY port for OS X, but it uses X and is ugly. Is there any other alternative, preferably free?
Bearing in mind that setting up a tunnel yourself is really easy, you may consider using iTerm2 as a replacement for Terminal.app.
iTerm2 features profiles, so you may connect to any SSH server instantly.
Advantages over Terminal.app:
http://www.iterm2.com/
If you still need to use a gui for tunneling, then you may use an app like:
SSHTunnel or SSH Tunnel Manager
+1 for iTerm 2. I have recently moved completely over to iTerm 2.
Damn, I’m still using iTerm 0.10 and worry about the lack of updates. iTerm 2, you say?
iTerm 2 generally has nothing specific to do with SSH... iTerm 2 is absolutely fantastic, but it doesn't answer this question, at all.
@VxJasonxV: how does this not answer the question? iTerm 2 supports bookmarks for doing automatic ssh to hosts with a single click. And ssh tunneling is largely just calling ssh with -D <port> -N to open the tunnel on a port.
The implication is in the OPs reference to PuTTY. PuTTY doesn't make you issue raw forwarding commands. I gathered from the question that the OP wants an application to help him generate forwarding rules.
@VxJasonxV I guess I live by the teach-a-man-to-fish philosophy. You learn the ssh tunnel setup stuff once and you're set for life.
I wouldn't presume on the OPs methods of doing things. Also, there is more to learning ssh tunneling than the raw commands. The application and use of local, remote, or dynamic forwards are equally as relevant (and in some cases, difficult) as the command line flags.
For SSH you just open a local Terminal and type ssh host.
You set up the configuration for each host in ~/.ssh/config following the rules laid down in man ssh.
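To make that concrete, here is a minimal ~/.ssh/config sketch; the host alias, hostname, user and ports are placeholders:

```
# ~/.ssh/config
Host work
    HostName server.example.com
    User alice
    # Forward local port 8080 to port 80 on the remote host
    # (the same kind of rule PuTTY builds graphically)
    LocalForward 8080 localhost:80
    # SOCKS proxy on local port 1080, equivalent to ssh -D 1080
    DynamicForward 1080
```

After that, `ssh work` opens the connection with both forwards active, from any terminal application.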
I guess I don't understand why anyone would need a graphical tool to set up SSH, a command-line program.
Because some people just don't have the association of configuration that we do.
I like JellyfiSSH - I've just emailed the kiwi dude that writes it & asked for an option to open new connections in a new tab rather than opening a window for every single connection. Otherwise I like it. Mind you I paid my $4 or whatever it was just to try it.
Hello, and welcome to Ask Different! Your software recommendation would be vastly more useful if you included a link to it. Also, some reasons why you would recommend it (beyond “I like it”), maybe even a short listing of pros and cons, would help the OP to make up his own mind about it.
I have, and continue to recommend JellyFiSSH. But my tone has changed a tad. It used to be free, and it's a very simple, very helpful application. Visual creation of tunnels not terribly unlike PuTTY, and lots of other SSH tweaking capability.
However, it's no longer free. It's now USD$3 on the Mac App Store. Which is by no means a large amount of money, but admittedly kind of unfortunate for users.
Still, it's a wonderful application, and at least 6 of my co-workers swear by it. (And another 3 know the options to pass on the command line, or put into their ~/.ssh/config file.)
Since your free requirement is "preferably", I'd go ahead and recommend a paid one (and rather expensive if you ask me):
A very good App under Windows, now ported to other platforms. SecureCRT.
I do not work for SecureCRT and have nothing to do with it. I used it at an old job.
|
STACK_EXCHANGE
|
It makes no sense because my apps are unpublished and new users can't install them and I don't want to update them anymore
You can apply for an extension.
I now have time until Nov 1, 2023.
Did you unpublish your apps yourself (manually on the Play Store)? I do not think so. And then it's not surprising that Google asks you to adjust the targetSdkVersion of your apps.
Incidentally, your app can still be found in the Play Store by users who already purchased the apps. However, these are no longer displayed / listed for new users on devices with a higher API level than the current targetSdk of your apps.
Marcio, don't worry about your unpublished apps; Google is sending me the same message for my 2 unpublished apps. Just ignore it. (Google Play is also asking me to update some apps that were suspended by Google, LOL, so I suspect it's an automatic message.) The only thing is that users who downloaded your apps before you unpublished them will not find them anymore if they get a newer device – so better for you, as you don't want to support new or former users of unpublished apps. Just update your published apps when target SDK 33 is available.
Edit: I can confirm that the Google Play Console messages are for all your apps, no matter whether suspended, unpublished or actively published. They sent me an email to update only published apps; the unpublished apps were not included in the list.
Hello to all of you. I also received this notification from Google. You need to update the app before August 31 to the new standards, i.e. the app must target Android 13 (API level 33) or later. This is only possible if MIT App Inventor releases the update by that date. At the moment, from comments prior to mine, it appears that the update should be released around August 26. Anyway, I recommend to all of you what I did: it is possible to ask Google for a time extension on the update to API 33. This deferment puts the deadline at November 1. I advise all of you developers to make this request; it is usually granted. That way we have more time to update our apps.
I am still waiting for communication from MIT app inventor developers.
MIT says they expect App Inventor will target API level 33 in the planned update around August 26.
I have received the update message by email and in the developer account. In the developer account you can open the message, and there is an option to request an extension of time to update your app. I requested it and they gave me the extension until October 31. You just have to click "see details" in the message in the developer account, then click the button inside the message to request an extension and mark the options that appear, since those are the reasons they request for the extension, and the extension is given immediately.
The question remains: an extension for what? That doesn't seem to be clear to most users. They're panicking because Google is once again running a pathetic information policy (not to mention the yearly annoyance with those targetSdk updates – all of course always "only" in the interest of the user).
You obviously didn't read my last post or didn't understand it.
a temporary solution to extend the deadline until November has been mentioned here
EDIT: this is necessary of course only if you plan to update your app
And again: why?
See also here:
Developers must target API level 33 and update their apps before the deadline; if they do not, they will not be allowed to update their existing apps anymore after that.
Where is that written? Or better asked: Where does Google say that?
yes... of course only if you plan to update your app then you can request to extend the deadline, see also API Level 33 Necessary from August 2023 - #6 by Taifun
New apps and app updates must target API level 33 to be submitted to Google Play
please provide proof of that statement
In other words, any developer who misses this deadline will never be able to update their apps again. That's ridiculous.
Read the description here in below given screenshots from Google Play Console.
Yes, I said updates only
Exactly, and many users don't seem to realize this and are panicking about Google's pathetic information policy (as I've said elsewhere).
|
OPCFW_CODE
|
Custom PHP Programming Services
This is what I need:
This is kind of an extranet, a "manual" (for now) communication system, where registered users (created by administrators) can access the system, request information from providers, and then look only at the answered information assigned to them.
Detailed Functioning Process:
The system is used for users to request certain types of info.
The provider receives notification of the user's request by email and has it listed under a new-requests inbox; the provider opens the request, fills in the response template with the requested information and sends it back to the user.
The user receives an email stating the request has been fulfilled, and enters his user-member area to view the list of new and previously requested information.
Users can keep track of the request status: "Sent" or "Submitted" when created; when opened by a provider, the status should change to "At work"; when the answer is received, "New"; and when already viewed, "Read".
A search facility should be provided, with results listing only items currently available to that particular user in his requests inbox.
If a user deletes a viewed request from his Inbox, it only deletes it from his
view, not from the global system, since Providers and/or Admins will need the
record for reports/billing purposes.
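To make the lifecycle and per-user "soft delete" above concrete, here is a toy sketch; Python is used for illustration only (the spec itself calls for PHP), and all names are made up for the example:

```python
from dataclasses import dataclass, field

@dataclass
class InfoRequest:
    request_id: int
    title: str
    status: str = "Sent"
    # Users who "deleted" the request: hidden from their inbox only,
    # while the record stays in the global system for reports/billing.
    hidden_for: set = field(default_factory=set)

    def open_by_provider(self):
        self.status = "At work"

    def answer(self):
        self.status = "New"   # fulfilled, not yet viewed by the user

    def view_by_user(self):
        self.status = "Read"

    def delete_for(self, user_id):
        self.hidden_for.add(user_id)

    def visible_to(self, user_id):
        return user_id not in self.hidden_for

req = InfoRequest(1, "Market data")
req.open_by_provider()
req.answer()
req.view_by_user()
req.delete_for(42)
print(req.status, req.visible_to(42), req.visible_to(7))
```

The key design point is that "delete" only adds the user to a hidden set; nothing is ever removed from the underlying record.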
Structure / Interface:
Easy to follow navigation system.
The user gets a list of requested info such as:
Request_ID | Status | Info_Type | Requested_Date | Request_Title | Delete
Listing by Request_ID but allowing to sort by clicking any other category.
If a user deletes a record, it in fact becomes hidden, not deleted.
Users cannot recover hidden (deleted) data, they don't know it's hidden; they
in turn will need to request this info again.
Information data is displayed when the user clicks on any item listed in the inbox. This page should accommodate the information in a clean, easy-to-read way, with an option to become printer friendly and to be sent to the printer. Options to delete, or to look at the next or previous records.
The user inputs the required data and selects the info type to be requested.
All Info Types are requested by default.
The user inputs the data to be searched; results are drawn from the info listed in the user's inbox.
Users' info and interface properties; some of the info listed here is non-editable, some of it is editable.
Name, Last Name, email, company, username (should be user's email for
validation), telephone, extension, fax.
Password, alternate email, number of items to be displayed on a single listing
page (the rest by clicking Next or Previous), MSN, Yahoo.
Display a list of new requests. When an item listed here is clicked, the provider is prompted with a form to be filled out with the information requested by the user and submitted back to the user when finished.
Same as the user interface, but with an option to select which user's data to look at, or to have all records displayed. Cannot delete or hide info data.
Generate reports of information requests for a certain user or group over a given period of time, to be displayed on screen with a printer-friendly option, and to be sent by email as a PDF attachment to the user the report was created for.
Option to include or select the recipients should be available if
Same as user interface, but with option to select which user's data to search
for, or to have all records searched.
Can select to view any user's properties, but cannot edit any of them. Can edit their own provider info and interface properties; all options here are editable.
+ Unlimited access to view and edit all system options, users and providers
- Administrators: Global administration; can view all user accounts; can create, suspend or delete groups (companies users belong to; there might be several users from a single company), users, providers and administrator accounts; assign costs per type of information provided; generate and send reports (statements and group reports should also be considered)
- Providers: Answer users' requests, generate reports and billing of requested
and provided (answered) information, including the total costs for that report.
Reports should be generated for a specified time range, and can be sent by
email to the user (as a PDF attachment), the provider and the administrator. This
report might also have the option to become a billing statement, so a nice format
should be created, as well as the capacity to generate them as a PDF file.
- Users: Regular users who request info; can view, search, print (printer
friendly), and delete (only if previously viewed) info.
- Security is a must.
- Modular programming, allowing future scalability and new features (modules)
to be added (future programming work for you).
- Allow modules to be easily plugged in.
- Work under http and https.
- Template system, allowing design being separated from coding, for easy
integration into a constantly changing website.
- Full Source Code, thoroughly documented for easy future development and
More detailed information can be provided when needed.
Please take into consideration that this will be the first system we may hire
you to develop, and it may also be the reason we request more from you.
Great expectations are riding on us to deliver this system, and we in turn are
setting high expectations on getting a high-quality product.
|
OPCFW_CODE
|
Does it make sense to do the power analysis if one of the sample proportions is 0%?
Can we use just any sample size if one of the sample proportions is 0? I mean, even the slightest difference detected between the two groups would be because of the treatment in this case, and hence I do not see any reason why the power analysis should be done.
For example, in the calculator here, if I enter group1 sample proportion as 0 and group2 sample proportion as 1, I get some sample size. My question is why should there be some specific sample size? Why isn't it the case that any sample size should suffice since we expect a 0 event rate in group 1? This means whatever we get in group 2 with the samples is because of the treatment? Isn't that the case?
You provided no background information. And power/sample size estimation does not use any observed proportions.
It is usually more meaningful to conduct power analysis before you see the sample
Added some more context with the edit
This is still lacking context and you give no explanation of why your question title talks about "sample proportions" in the context of a power analysis. You do mention that you entered sample proportions into an online calculator - but if you read the instructions for the calculator, it tells you not to enter the observed sample proportions, but rather the expected proportions. On its own, the phrase "sample proportion" refers to the former, not the latter. Until you clarify what you are really asking, I think the answers you get may not deal with your underlying question
If you clarified the context, it might at least be clearer whether your underlying issue is something to do with wanting to perform a power analysis retrospectively (which as @Henry says is generally not advised, and is not what the online calculator you linked to is recommending you to do), is really concerned with a (hypothesised) population proportion being zero rather than a sample proportion, or something else altogether
@Silverfish For my part, I interpreted the question as "Why would anyone bother with a priori sample size calculations if they think that '0%' is a plausible scenario for one of the groups?", and I answered to that. But I agree that some clarification would be welcome, in particular if this is not a theoretical question, but a question about an actual study the original poster is conducting or plan to conduct. I also realize that OP might be misinterpreting what power analysis is about, so I agree entirely - additional information would be helpful to give an appropriate answer.
@J-J-J Yes I found your interpretation quite plausible. There's definitely an interesting question in here that would be relevant to people beyond the original poster. I'm hoping the OP can make the edits to really bring this out - hopefully any points raised by me or other commenters above are seen as helpful and constructive. The point of closing the question is really just so the OP gets a chance to finesse the question before reopening it, because questions still open to answers while undergoing editing tend to get "messy", with different people answering different versions of the question
Consider the table below. It's a sample maybe drawn from a population where group A has 1% chance of having a given event, while group B has 0% chance. Or maybe this is a sample from a population where the percentages of group A and group B are equal (1%), you don't know - if you knew, why would you bother with sampling in the first place?
|         | Event | No event |
|---------|-------|----------|
| Group A | 1     | 109      |
| Group B | 0     | 110      |
Given the table above, how confident are you that there's a real difference between group A and group B? Not very confident, probably. A statistical test wouldn't even be necessary to tell that this sample is too unreliable to judge the difference between A and B. So not just any sample size will suffice; you need to collect more observations.
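As a quick check of that intuition, a two-sided Fisher exact test on the table above gives a p-value of 1.0. Here is a stdlib-only sketch (the helper name is mine, and this direct enumeration is only practical for small counts):

```python
from math import comb

def fisher_exact_two_sided(table):
    """Two-sided Fisher exact test for a 2x2 table [[a, b], [c, d]].

    Enumerates every table with the same margins and sums the point
    probabilities that are no larger than that of the observed table.
    """
    (a, b), (c, d) = table
    row1, row2 = a + b, c + d
    col1 = a + c            # total number of events
    n = row1 + row2

    def p_table(x):         # hypergeometric probability of x events in row 1
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs + 1e-12)

# The table above: 1 event out of 110 in group A, 0 out of 110 in group B.
print(fisher_exact_two_sided([[1, 109], [0, 110]]))  # → 1.0
```

With only one event among 220 subjects, the two possible tables are equally likely under the null, so the test carries no evidence at all, matching the "not very confident" reading above.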
As a side note, to determine sample size, you should think about a range of plausible effect sizes of interest, find the one requiring the largest sample size, and retain this sample size to conduct your study.
For example, you mention a plausible scenario where there's a 0% chance of an event in group 1 and a 1% chance in group 2. But if you think that it's also plausible to have a 1% chance in group 1 and 2% in group 2, and you don't want to miss that, you should conduct sample size calculations for this plausible scenario too, in particular since it will certainly require a larger sample size than the 0%/1% scenario.
The purpose of power analysis is to understand the level of accuracy of a hypothesis test in cases where the alternative hypothesis is true, which then tells you about the risk of making a Type II error in the test. This is useful information about the test irrespective of the actual result that occurs when you run the test on your data. However, it is particularly useful in the case where you accept the null hypothesis, because it is precisely in this case that you may have made a Type II error.
It is not clear from your question what you are testing or which particular hypothesis test you are using for this purpose. Nevertheless, generally speaking, knowing a sample proportion does not tell you the sample size, nor does a sample proportion of zero imply any particular outcome for the test, or preclude the usefulness of power analysis for the test.
|
STACK_EXCHANGE
|
[bug]: Combining Deno2 with NEXT.js and shadcn is failing
Describe the bug
Currently, I am trying to run Next.js with Deno2 and shadcn. Everything works until I start adding shadcn components to my project.
The issue only comes up if I add a shadcn button in my page.tsx.
Webpack is installed.
Tested with a fresh project.
I got the following output after running shadcn init, adding a button, and starting Next.js with deno task dev:
As an Error in Next.js:

```
Error: __webpack_modules__[moduleId] is not a function
This error happened while generating the page. Any console logs will be displayed in the terminal window.
__webpack_require__
file:/Users/felix/Workspace/Stuff/denodemo/demo/.next/server/webpack-runtime.js (33:42)
eval
/./lib/utils.ts, <anonymous>
Object.(rsc)/./lib/utils.ts
file:/Users/felix/Workspace/Stuff/denodemo/demo/.next/server/app/page.js (173:1)
```
In the CLI:

```
deno task dev
Task dev next dev
▲ Next.js 14.2.15
- Local: http://localhost:3000
✓ Starting...
✓ Ready in 3.3s
○ Compiling /_not-found ...
✓ Compiled /_not-found in 7.1s (475 modules)
GET /_next/static/webpack/5d24c43cbc296b0b.webpack.hot-update.json 404 in 7649ms
⚠ Fast Refresh had to perform a full reload due to a runtime error.
✓ Compiled in 765ms (239 modules)
○ Compiling / ...
✓ Compiled / in 1095ms (525 modules)
⨯ TypeError: __webpack_modules__[moduleId] is not a function
at __webpack_require__ (/Users/felix/Workspace/Stuff/denodemo/demo/.next/server/webpack-runtime.js:33:42)
at eval (./lib/utils.ts, <anonymous>:6:72)
at Object.(rsc)/./lib/utils.ts (/Users/felix/Workspace/Stuff/denodemo/demo/.next/server/app/page.js:173:1)
at __webpack_require__ (/Users/felix/Workspace/Stuff/denodemo/demo/.next/server/webpack-runtime.js:33:42)
at eval (./components/ui/button.tsx, <anonymous>:12:68)
at Object.(rsc)/./components/ui/button.tsx (/Users/felix/Workspace/Stuff/denodemo/demo/.next/server/app/page.js:162:1)
at __webpack_require__ (/Users/felix/Workspace/Stuff/denodemo/demo/.next/server/webpack-runtime.js:33:42)
at eval (./app/page.tsx, <anonymous>:8:79)
at Object.(rsc)/./app/page.tsx (/Users/felix/Workspace/Stuff/denodemo/demo/.next/server/app/page.js:151:1)
at Function.__webpack_require__ (/Users/felix/Workspace/Stuff/denodemo/demo/.next/server/webpack-runtime.js:33:42)
at eventLoopTick (ext:core/01_core.js:214:9)
at async Promise.all (index 0)
digest: "773026919"
...
...
...
GET / 500 in 272ms
○ Compiling /favicon.ico ...
✓ Compiled /favicon.ico in 841ms (354 modules)
GET /favicon.ico 200 in 1039ms
```
### Affected component/components
Run dev server
### How to reproduce
```bash
deno run -A npm:create-next-app@latest
cd demo
deno task dev # Next.js started
deno install npm:shadcn@latest
npx shadcn init
npx shadcn add button
# added component like in the next section
deno task dev # failed
```

code:

```tsx
import { Button } from "@/components/ui/button";

export default function Home() {
  return (
    <Button/>
  );
}
```
Codesandbox/StackBlitz link
Not possible, because it's a startup issue and not a code issue.
Logs
Same output as the CLI log above.
System Info
MacOS Sonoma, Firefox, ...
Before submitting
[X] I've made research efforts and searched the documentation
[X] I've searched for existing issues
Were you able to do the initial npx shadcn@latest init command? I'm having a similar problem with Deno2/next.js/shadcn but I can't even get this init command to run to install all of the dependencies.
Yes. I have already initialized it, and afterward I added some components. I only got the issues after starting the dev server and rendering some components. See screenshot:
Console
Webview:
Same problem here. Any fix?
@jeffspurlock I have run this command:
deno run -A shadcn@latest init
but it hasn't populated the package.json with dependencies.
|
GITHUB_ARCHIVE
|
Never tried this in practice, however, I AM in the process of doing a SM video tut for something similar.
The fact that you are comfortable with bones and softies is a help here, as you will need that underlying knowledge to achieve something like what you want.
You "may" have to start over depending on how your current setup differs from my explanation below, but seeing how you are only working with 3 "wires", that should be no sweat once you do it a few times.
Ok, here we go:
My technique is CLOSELY based on a tut I saw a while back on 3DCafe; however, since there was no mention of the author's name, I couldn't thank them.
I've edited this old idea for Maya 2.0 to better suit our 4.5+ needs, however even though much has not changed regarding the actual process, please excuse any GUI reference mistakes, and investigate accordingly
1. Start by creating a cord attached to something on each end. (like you have)
*My preferences are for the Y axis to be up.
2. Draw the curve (Create -> CV curve tool), snapping the cv's to the grid one unit at a time.
3. Next place skeleton joints along the curve using Skeleton -> Joint Tool, starting at the point where you created your first cv. (2 joints should work for your setup)
4. Place an IK handle (Skeleton -> IK Handle Tool) on the skeleton from base of the skeleton to the other end.
5. Now select the skeleton and the curve and select Skin -> Bind Skin -> Smooth Bind. Pick the curve and turn it into a soft body (Bodies -> Create Soft Bodies) with "Duplicate, Make Copy Soft" selected, Hide Non-Soft Object and Make Non-Soft Goal selected, and weight set to 1.000.
6. Select the new soft body, and create springs (Bodies -> Create Springs). The min/max settings are determined by where your cvs have been placed. In this case they were placed with 1 unit spacings, so I set min .1 and max 1.1.
7. Reselect the soft body, go to component selection mode with particle selecting on. Pick all of the soft body particles except the first and the last ones.
8. Open the Attribute Editor (Window -> Attribute Editor), go to the Per Particle (Array) Attributes section and with the right mouse button click in the gray box beside goalPP.
9. Select Component Editor from the menu that appears. Change the goalPP to 0.00 for each of the selected particles then close the Component Editor and Attribute Editor.
10. Return to Object mode and select the soft body again. Create a gravity field (Fields -> Create Gravity). (If you play back the animation now the middle section of the curve should just fall straight down, the springs will eventually kick in but as long as the middle section falls you are in good shape)
11. Now is a good time to add the extrusion to your object. Create a circle and snap it to the first cv/particle of the soft body at the same end as the base of the skeleton and rotate it so that the circle is aligned to your curve direction.
12. Select the circle then the soft body and perform an extrusion (Surfaces -> Extrude).
*Scale the circle to give the cord a nice diameter.
There you go, in a nutshell.
**NOTE - IF YOU ARE USING A FLOOR:
Create a nurbs plane which will act as the floor and our collision object and make it a collision object by selecting Particles -> Make Collide. Now click on Window -> Relationship Editors -> Dynamic Relations. In this editor you should select your soft body object (copy of Curve1) from the left side, then on the right side under Selection Modes check "Collisions" and select the floor object we created, nurbsPlaneShape1 or pPlane1 for a polygon floor.
Tweaking may be needed to get the cord to act like you want.
A few channels affect this, the floor's Resilience and Friction and the Spring's Stiffness and Damping.
For the above example that I have created I set the Stiffness at 20, Damping at .5, Resilience at 0 and Friction at 0.2. You may need to increase the oversampling by using Solver -> Edit Oversampling to keep the Damping and Stiffness in the springs under control.
OR, you can always add another set of springs to the same particles or cv's for more stiffness.
The floor will need to be slightly lower than the rest position of the particles so that the cord won't drop through the floor on the first frame.
To test your settings, just animate the IK handle however you see fit.
***[b]MORE NOTES and floor fyi's[/b]***
I created my cord in a straight line in order to space the springs evenly and to make as few of them as possible to speed up calculation time. You could create the cord in its curled position or move the IK handle end around until the cord animates to where you would like it and then select the soft body Solvers -> Initial State -> Set For Selected.
You might notice that half of the cord sinks below the floor plane. You can move the circle for the extrusion into a position so that the bottom edge of the extrusion is in the same place as the soft body curve.
Always extrude the circle on the soft body, not the original curve, and keep the construction history, or the extrusion will not animate with the curve. (Not sure if this is still true with 4.5+.)
Israel "Izzy" Long
Motion and Title Design for Broadcast-Film-DS
|
OPCFW_CODE
|
Whether your CRM and Eloqua platforms are mature or new, every integration story should include the 5 Ws to gather the necessary information for a happy ending:
- WHY – Capture the goals for this project and identify how success will be measured.
- WHO – Select a core team that includes the five key roles.
- WHERE – View the data from the perspective of each CRM and Oracle Eloqua platform.
- WHAT – Determine the entities, programs and fields required to support the business goals.
- WHEN – Create a project plan with tasks and milestones that will meet the launch date.
In today’s marketplace, prospects self-educate before they talk to sales. This puts more responsibility on marketers to not only source leads, but to engage them before passing qualified leads to sales. Prospects expect communications that are targeted and relevant whether they are at the top of the funnel or on the cusp of a purchase.
Integrating your CRM and Oracle Eloqua platforms reflects this shared responsibility for the sales funnel, providing a unified view of leads and customers to sales and marketing.
For your specific organization many of the WHY questions will be addressed when choosing your CRM and Marketing Automation platforms.
- Why this solution?
- Why now?
In the WHY phase, you need to understand the business goals for the integration and how success will be measured. For example, your goals might include:
- Improving alignment between sales and marketing.
- Increasing marketing efficiencies to deliver highly qualified leads at a lower cost.
- Boosting sales
Your core team should include players that have the knowledge and authority to make decisions and configure the platforms.
Outlined below are five key roles required for your core integration team. For some teams one person may represent multiple roles; other teams will require multiple people to represent a single role. I have worked on Eloqua CRM integrations where we were a team of two. In other integration projects, multiple people represented a single role, for example, in complex integrations the role of the Eloqua admin may require a solution architect and a technical lead.
It’s imperative for someone on your core team to articulate the sales needs and speak to how sales uses the CRM in their day-to-day work. For example, what data is needed in order for sales to receive and follow up on a new lead?
Eloqua does all the heavy lifting in initiating the calls for both the push and pull of data between the two systems. All field mapping is done in Eloqua. For this reason, the bulk of the CRM Admin's time will be spent as an SME advising the team on the CRM data structure, covering everything from the relationship of the entities to the individual field values. Beyond this advisory role, the CRM Admin tasks may include:
- Create Eloqua as a CRM User
- Provide information on number of records that will be imported to Eloqua
- Add new fields to CRM if needed
- Enable sales tools
- Configure tracking for change of email address
- Assist with troubleshooting
- Confirm accuracy of data pushed to CRM
This person is the voice of the marketer and should clearly communicate the requirements for what data is needed for targeting, personalization, scoring, routing and reporting. The representative needs to understand and discuss use cases about the data that supports the Eloqua campaigns.
In addition to understanding the day-to-day needs of the marketer, this person should understand the marketing KPIs. This role should also have a breadth of knowledge for the Eloqua instance that includes governance, segmentation and data sources into Eloqua (list uploads, forms).
Oracle Eloqua Administrator
The Eloqua Administrator translates the business requirements of sales and marketing into the design solution. This role is responsible for understanding the Eloqua data model and applying it to building the programs and configuring calls for the integration. The Eloqua Admin will also help identify any limitations or risks for the solution. Many organizations use partners to help with the technical configuration.
The project manager keeps the project on time and budget. They identify scope, milestones and document changes that impact deadlines or budget.
Perspective is everything. Eloqua and CRM are two different systems and therefore differ, from their data structure to their terminology and unique keys. It’s imperative that you understand the data source and how it impacts your integration solution.
This is the discovery phase. Level-set around the business goals within the context of each of the two platforms. To get started, include two foundational discussions:
- Lead Life Cycle Discussion
- Eloqua Data Model Review
Lead Life Cycle Discussion
Whether your sales and marketing teams completely align, or you are just beginning to define your lead life cycle, reviewing your lead process should be one of the first discussions for your core team. Focus on the sales and marketing roles and responsibilities for each stage of the funnel. Use this discussion to flesh out key requirements around what entities are being used, and the type of data that must be mapped. For example, you should discuss:
- What is a lead? (Do marketing and sales share a common language for prospect, lead, etc.?)
- How are leads assigned and worked (by telemarketing, inside sales, regional, by product)?
- If a lead is rejected, does it go back to marketing for further nurturing?
- When is a lead converted?
- What is the relationship of contacts to accounts?
Oracle Eloqua Data Model Review
The Eloqua data model review introduces terminology, entities, field types and data management models from the Eloqua perspective. As with the Lead Life Cycle Discussion, this is a foundational discussion that sets the stage for future requirements discussions.
For a more detailed look at the Eloqua data model, read Key Considerations When Planning Your Oracle Eloqua Integration: https://www.relationshipone.com/blog/key-considerations-oracle-eloqua-integration/
Additional Data Sources
The integration solution must also take into account other data sources. Capture additional data inputs such as form submissions, list uploads, automated import or third party apps. Minimally, the data may require cleansing before integrating with your CRM; however, multiple inputs may require a larger data architectural workshop to document the data structures and relationships.
This is the define phase. It’s time to get into the weeds and discuss what data sales and marketing needs. Ultimately you will define and document:
- The overall solution for the entities that will be mapped between the two systems and the relationship of the entities within Eloqua.
- The list of fields to map for each inbound and outbound call.
- The requirements of the lead management model for how sales receives and manages leads and contacts from Eloqua.
- The supporting programs needed for lead management and data integrity.
Once the documentation has been approved by the core team, you can move to the develop and deliver phases.
The project plan includes the tasks and milestones leading up to the integration launch date. Managing to the project plan is the responsibility of the project manager team member. In building out the timeline, carefully consider these questions because they can impact scope and timing:
- Are the platforms mature, new or a migration?
- Is this integration part of a larger organizational implementation?
- What is the scope of the Eloqua integration? For example, does it include Lead Scoring or Closed Loop Reporting?
- Will testing be done in an Eloqua sandbox?
- How many rounds of testing are required? For example, will testing be done for the DEV, QA and UAT CRM environments?
Handing You the Keys
While every integration is unique, the same principles apply, from team member roles and responsibilities to discovery and delivery. The 5Ws of integration will get you started down the right road and help avoid wrong turns.
|
OPCFW_CODE
|
One downside of working with experienced people is that some of them tend to be opinionated, and the trouble with opinions is that they are, well, instantaneous reactions based on things learned in the past. You can wake up an opinionated person and ask a question that you know this person has an opinion on, and that person (self included, on occasion) will most likely blurt out that opinion without even understanding the context. Unless careful, it is easy to get trapped in that mindset, start enjoying being opinionated, and consequently make or advocate "obvious" choices. Welcome to my soft rant!
Here is one opinion heard several times on the blogosphere. If X says that he/she had performance issues with this new cool website, the "opinionated" would immediately ask, "is that Rails based?" and without even waiting for an answer, would offer his/her expert opinion that "Rails is slow" and that X should use Grails, PHP, or whatever that person is currently a fan of.
No offense to the commenter, but I received such a comment when I blogged that I had performance issues with DreamHost. I did not buy the comment that Rails is slow then, and I don’t buy it now. There are other things that influence scalability and these vary from case to case.
Here is another example. In response to FaceStat scales!, the first commenter asks "Why on earth didn’t you use something like Grails that actually DOES scale?". I doubt if this commenter fully understands the architecture of FaceStat. To me, that is an automatic reaction, and so, I would be suspicious of that "obvious" choice that Grails or some other alternative would magically fix the scalability problems.
Say, for instance, a PHP or Rails app is slow; an "obvious" choice is to throw memcache at it. But no amount of caching would rescue a poorly architected app. In fact, MySQL query caching may be sufficient for typical caching needs.
After writing the initial prototype of Cyclogz, I wanted to check if there are similar sites, and if so, how they are approaching the problem. The problem was simple – retrieve a large amount of data from a GPS, and then extract lots of info from it for both presentation and analysis purposes. I came across two sites (one of which is MotionBased, owned by Garmin). Both these sites did what seemed like an "obvious choice". Both sites upload all the data (in megabytes) to a server, store it in the database, and then kick off a background process to analyze and extract the data. This approach requires more hardware on the server side, more code to manage the background work, and more importantly, takes away instant gratification from the user. I won’t describe alternatives here in detail, but better and more scalable design choices do exist. It was also fun thinking about the alternatives, and implementing those.
In the REST community, there is a similar pattern. Every now and then someone comes along with a need to do batch processing asserting that making "n" connections is slower than POSTing all requests in a single batch. When told of the problems with this approach, that person would immediately conclude that REST and/or HTTP is broken and that he/she "needs" to do batch processing.
So, if a choice seems too obvious to be incorrect, and is given by a smart but opinionated person, I say, take a step back, question and ponder. The obvious is not necessarily the best choice. The fun in writing software is figuring out those non-obvious choices. Repeating obvious choices is not fun.
Real ambien Purchase diazepam 5mg tablets Purchase generic alprazolam online legitimate Buy generic phentermine 37.5mg online legitimate
|
OPCFW_CODE
|
Each component within AbleOrganizer is designed as a feature. A feature is a small, useful application in Drupal that configures the system to work in a specific way.
If you want your code to be portable, and be able to run in just about any website, you will want to take the time to understand the best practices for creating features in AbleOrganizer.
Heads up! The best practices for feature development continue to be defined. In general, AbleOrganizer sticks to the best practices for feature development in CRM Core. A copy of the latest and greatest ideas for feature development can always be found on the Drupal.org website.
Best practices for feature development
Creating your own features is really a process of taking specific components from one site and packaging them in a way where they can work in other websites.
Consider the items listed on this page as guidelines towards ensuring your code will work on any website. They are useful in most cases, but not all. If you are developing a simple feature you plan to share with other AbleOrganizer users, these rules for the road will serve you well.
Use a Unique Namespace
Every feature should have its own unique namespace. This namespace should be reflected in all the elements used by your feature: field names, content types, views, paths, etc.
For instance, in feature foo, which is used to manage bars:
- A content type might be called foo_bar.
- A field in the content type might be called foo_date_started.
- A view displaying all the bars might be called foo_list_of_all_bars.
- A path to a report about the views might be created as such: crm-core/reports/foo/all-bars.
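In a Drupal 7 features export, this namespacing shows up directly in the feature's `.info` file. A hypothetical sketch (the `foo` names are the placeholders from the list above):

```ini
; foo.info -- every exported component carries the foo_ prefix.
name = Foo
description = Manages bars for AbleOrganizer.
core = 7.x
package = Features
features[node][] = foo_bar
features[field][] = node-foo_bar-foo_date_started
features[views_view][] = foo_list_of_all_bars
```

Keeping the prefix consistent across components makes it obvious at a glance which feature owns a field or view, and avoids collisions when several features are enabled on the same site.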
Respect the Reserved Paths
There are some reserved paths to be aware of in AbleOrganizer. While you can certainly use other paths if it is important to your feature, consider sticking with the following unless there is a good reason to do otherwise.
- Contacts: crm-core/contact, crm-core/contacts
- Reports: crm-core/reports
- Dashboard: crm-core, crm-core/dashboard
- Activities: crm-core/contact/%contact-id/activity/%activity-id, crm-core/activity/%activity-id,
- Relationships: crm-core/contact/%contact-id/relationships/%relationship-id, crm-core/relationships/%relationship-id
- Administration: admin/structure/crm-core, admin/config/crm-core
Clone and Override, Don't Overwrite
There are lots of situations that can arise where you might want to make some changes to a default display of a report / interface and distribute it to other users. Perhaps you have a really great way of displaying a list of contacts in the system, or maybe you have created a better way to work with activities. Hooray!
When you change the way something works in AbleOrganizer, you want to make a clone of the object and export that instead of the original item. This accomplishes two things: first, it ensures your new feature will not be overwritten the next time there's a release of AbleOrganizer. Second, and more importantly, it ensures someone can back out of your change in case things don't go as planned.
The best practice for distributing features that override the default settings in an AbleOrganizer site is as follows:
- Clone the view / panel used to display the item, whether it's a contact / activity / relationship / list of contacts / report / something else.
- Make your changes to the cloned view and export it as part of your feature. Give it the same path as the default item you are modifying.
- In your feature, disable the default view in CRM Core when the feature is enabled. Re-enable the default view when the feature is disabled.
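A sketch of that last step, assuming Drupal 7 with the Views module. The feature name `foo` and the view name `crm_core_contacts` are placeholders; since Drupal hooks only run inside a Drupal site, treat this as an annotated sketch rather than a drop-in implementation:

```php
<?php
// foo.install -- toggle the default CRM Core view when the feature is
// enabled or disabled. Views consults the 'views_defaults' variable to
// decide whether a default (in-code) view is disabled.

/**
 * Implements hook_enable().
 */
function foo_enable() {
  $defaults = variable_get('views_defaults', array());
  // TRUE marks the default view as disabled, so our clone takes over its path.
  $defaults['crm_core_contacts'] = TRUE;
  variable_set('views_defaults', $defaults);
}

/**
 * Implements hook_disable().
 */
function foo_disable() {
  $defaults = variable_get('views_defaults', array());
  // Remove the override so the shipped default view comes back.
  unset($defaults['crm_core_contacts']);
  variable_set('views_defaults', $defaults);
}
```

Because the enable/disable hooks are symmetric, a site builder can back out of your override simply by disabling the feature, which is exactly the escape hatch this best practice calls for.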
Entities Should be Self-Contained
When building a feature, wherever possible, ensure the entities it defines are contained within the feature itself. The feature should declare the entity and the default fields used within it. Other features should not add or delete fields from the entity once it is defined.
For example: if you are building a feature that adds fields to the default meeting type, that is okay, but remember that you can't always count on it being present because users can delete it. It is often better to create a new feature with its own meeting type with all the fields needed for your module. This keeps you from having to make assumptions about what is there.
Use Bridge Features for Fields
If you really need to have two features that rely on a common set of fields, consider building a third feature that contains the entity and fields they will both use. This is a best practice for a couple of reasons. First, it is very easy to get into situations where an entity or field is exported in two features at once, which can create issues within the Features module itself and make your life difficult. Second, and most importantly, people need to be able to turn features off within AbleOrganizer. Taking steps to ensure that each feature can operate independently of others is just good practice.
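A sketch of how the three `.info` files might relate (all names here are hypothetical): the bridge feature owns the shared field export, and the two consumers declare it as a dependency instead of re-exporting the field themselves.

```ini
; foo_fields.info -- the bridge feature that owns the shared field.
name = Foo Fields
core = 7.x
features[field][] = crm_core_activity-foo_meeting-foo_outcome

; foo_reports.info -- first consumer.
name = Foo Reports
core = 7.x
dependencies[] = foo_fields

; foo_followups.info -- second consumer.
name = Foo Followups
core = 7.x
dependencies[] = foo_fields
```

Either consumer can now be disabled without orphaning the field, and the field is only ever exported in one place.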
Don't Create Fields on Contact Entities
Depending on your use case, it's not always a good idea to build features that add fields to the contact entity.
When you do, you are making an assumption about how people have configured their installation of AbleOrganizer. It cuts down on the portability of features and forces people to use the fields you defined instead of the ones they might prefer.
When people are working with AbleOrganizer, they will configure their own fields with their own meaning. When you create features that rely on fields in the contact entity, you are forcing them to use the same fields the same way you have intended. There are some situations where this is appropriate - for instance, if you are adding fields that are simply not defined anywhere else.
In other situations, it is best to build features in a way where fields that affect a contact record can be configured through instructions to the user or a settings page. There are a lot of ways to set default fields for contact records, look at the contact type administration screen for wonderful examples of how to do this.
Sharing your features
In scientific circles, peer review is considered the ideal way to validate one's own work. It leads to insights and feedback from peers, helping researchers to think about problems in new and interesting ways.
AbleOrganizer is open-source software. If you have taken the time to create a new feature, the odds are someone else could benefit from it as well. Take the time to package your feature and share it with other AbleOrganizer users. At worst, you will find out how your feature could be improved and learn something in the process. At best, you will get the insights of other sets of eyes and end up with something much more exciting than what you started with.
|
OPCFW_CODE
|
The Democratic Union Party (PYD), the Syrian front of the Turkey-based Kurdistan Workers’ Party (PKK), is the leading group in the administration of the Kurdish areas in north-eastern Syria. The PYD and its armed wing, the People’s Protection Units (YPG), have become the preferred instrument of the U.S.-led Coalition against the Islamic State (IS) and as a by-product have been assisted in conquering some Arab-majority zones of northern Syria—and perhaps soon of eastern Syria. The PYD/PKK has always treated all dissent harshly and the Kurdish opposition in recent days has reported an escalation in repression by the PYD, which the West—as has become a habit in cases of PYD misbehaviour—has made no public protest about.
THE PYD’S AUTHORITARIAN REGIME
The PYD regime promulgated a law on 13 March, based on a previous order, “Decree Number Five” of 15 April 2014, demanding that all “unlicensed” political parties register with the authorities within twenty-four hours or “we will be forced to close the office and duly transfer the official to the judiciary.”
The Kurdish opposition, including the Kurdish National Council (KNC), also known by its Kurdish acronym ENKS, objected to this ruling on three grounds:
- First, as the KNC office in Berlin put it to me, nobody has elected the PYD. The PYD announced its interim administration in November 2013 after the areas were handed to them in July 2012 by the regime of Bashar al-Assad in the hopes of keeping the Kurds out of the then-widening revolution.
- Secondly, this vetting procedure is not objective. Though PYD claimed at the foundation of its government to be ruling in alliance with fifty other organizations, these groups “either have close ties to the PYD or are unknown,” KNC Berlin says. This law, for example, says that “no political parties can have any ties to foreign parties,” KNC Berlin went on, which could be used to ban the Kurdistan Democratic Party-Syria (PDK‑S), the sister party of Masud Barzani’s Iraqi-Kurdish KDP. “It can be safely assumed, however, that the PYD will not employ the law to ban itself, even though it is the Syrian branch of the Kurdistan Workers’ Party (PKK), which is based in Turkey.” (Indeed, while the nature of the power centres in the PYD-run areas remains secret, there is little mystery: it is widely suspected that “real power is wielded by shadowy military commanders who have fought with the PKK in Turkey”.)
- Third, and as a consequence, this is a clear attempt to criminalize all political actors except the PYD and, in effect, to formalize the one-party regime.
The next day, 14 March, according to a statement released by the KNC/ENKS today, a series of attacks against them by the PYD began. By now, the PYD “have abducted and arbitrarily detained” at least forty KNC members in more than nine cities across the area controlled by the PYD, which is often called “Rojava”. “In addition to the detentions, attacks against offices of the KNC and its member parties have taken place,” the statement added. “More than twenty offices have been torched or demolished, and subsequently were sealed up by PYD security forces.” Shortly before these attacks, the PYD had closed down the office of a Christian group, the Assyrian Democratic Organization (ADO) in Hasaka.
On 4 March, the PYD arbitrarily detained thirty-six politicians, most of them PDK-S members. Around the same time, the headquarters of three opposition parties in Serê Kaniyê (Ras al-Ayn) and Qamishli were sacked by the PYD.
On International Women’s Day (8 March)—heavily exploited by the PYD, which uses its female fighters as a central point of its propaganda, framing its state-building project as a fight against IS and using the language of universalist liberal values—the PYD’s (male) police forces, the Asayish, stormed IWD meetings and arrested numerous people.
During one of the IWD events, Dr. Khaled Issa, a member of the Kurdish Democratic Progressive Party (PDPKS), was stabbed by a mob of PYD youth. A number of women were arrested the next day as they tried to organise an IWD event independent of the PYD and the offices of the PDPKS were put to the torch.
This morning, in conformity with its promise, the PYD burned to the ground the office of the Kurdish Women’s Union or HJKS in Derik (Al-Malikiya) because it did not have a license that only the PYD can issue.
The persecution of dissent by the PYD is hardly new. In 2011 and 2012, the PYD was accused of murdering the Kurdish politicians Mishal Tammo, Nasruddin Birhik, and Mahmud Wali (Abu Jandi). The current escalation in repression can probably be dated to August 2016, when the PYD arrested a dozen Kurdish opponents, kidnapped several more over a series of days, and beat and imprisoned those who protested about it. Ibrahim Biro, the overall head of the KNC, was expelled from the PYD-ruled areas into Iraq and told he would be murdered if he returned.
Read the rest at The Henry Jackson Society
|
OPCFW_CODE
|
We are looking for someone who can create a Java standalone program to print Code 128 barcodes on a network printer, a Zebra GK429D (the default printer set on a Windows 7 machine). There will be a set of n barcodes that should be printed every time this program is run. The barcode data will be a mixture of numeric and alphanumeric characters [0-9] and [A-Z], given as command line arguments.
java -jar [url removed, login to view] 1000139 1000139 1000139 ABC1000139 DEF1000139 GHI1000139 JKL1000139 MNO1000139 PQR1000139 STU1000139
should print 10 Code 128 barcodes
java -jar [url removed, login to view] abc123
should print 1 Code 128 barcode
All these barcodes should be printed on the network printer one by one.
The label dimensions are as follows.
Width : 2.20 inches
Height: 0.5 inches
The barcode dimensions are as follows
Width: 1.40 inches
Height: 0.38 inches
Any space left is white space.
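The core of such a program is small. Code 128 (Code Set B) maps each printable ASCII character to a symbol value and appends a modulo-103 check symbol before the stop symbol; a library such as Barcode4J or Barbecue would normally handle this plus the bar rendering, but the symbol computation itself can be sketched directly (class and method names here are my own, and sending the rendered image to the printer, e.g. via `javax.print`, is a separate step):

```java
import java.util.Arrays;

// Sketch of the Code 128 (Code Set B) symbol computation: start code,
// one value per character, modulo-103 checksum, stop code.
public class Code128 {
    static final int START_B = 104, STOP = 106;

    public static int[] symbols(String data) {
        int[] out = new int[data.length() + 3];
        out[0] = START_B;
        int checksum = START_B;
        for (int i = 0; i < data.length(); i++) {
            char c = data.charAt(i);
            if (c < 32 || c > 126) {
                throw new IllegalArgumentException("not encodable in Code Set B: " + c);
            }
            int value = c - 32;           // Code Set B: ASCII 32..126 -> values 0..94
            out[i + 1] = value;
            checksum += value * (i + 1);  // weighted by 1-based position
        }
        out[data.length() + 1] = checksum % 103;
        out[data.length() + 2] = STOP;
        return out;
    }

    public static void main(String[] args) {
        for (String arg : args) {         // one barcode per command line argument
            System.out.println(arg + " -> " + Arrays.toString(symbols(arg)));
        }
    }
}
```

Each symbol value then maps to a fixed 11-module bar/space pattern from the Code 128 table, which is what actually gets drawn at the 1.40 x 0.38 inch size specified above.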
13 freelancers are bidding an average of $181 for this job
Hi, I have previous experience with EAN-13 and Code 128. Also, the network printer won't be any problem. Let me know if you are interested. Thanks
Hi, do you need just the printing, or also creating the barcodes which need to be printed? I have experience with printing directly to a printer using Java. With regards
Hello sir, I can do it for you in one day using Barcode4j or Barbecue library. Printing is also easy to do.
I am an experienced Java developer who just did a project involving printers a few weeks ago. I'm confident that I can build you an application that surpasses any of your expectations.
Hello, I'm a professional Java developer. *You pay when you get the results you want.* I can build a standalone application that meets your needs. I work with responsibility and my work is guaranteed, providing Plus
Hello, Creating barcode and printing in java is an easy thing to do. I use barbeque library for this purpose. Please refer to private message for more info. Looking forward to your response. Thanks. Regards, arasyi.
<b><i>Removed by Admin</i></b> - Custom software development - skpye: <b><i>Removed by Admin</i></b>
Hi, We are having experience of around 9+ years in AIDC domain and can do the job for you. We are already using Code-128 Barcode printing for our Warehouse Management System. Best Regards, Mukesh Mishra
I am an experienced Java programmer, I have already completed an implementation of this program, all it needs is some final adjustments, so it suits your specific needs, and I have used open source bar code and PDF li Plus
|
OPCFW_CODE
|
Ok this issue is starting to get out of control and I'm running out of ideas and theories. I work at a hospital and recently we started having trouble with one of our access points locking up. It would lock up maybe once or twice in a month, then once or twice in a week, now sometimes several times a day. I originally thought it must be some kind of interference causing it to lock up. So I tried powering down different wireless devices, swapped the access point out with a new one, reduced the output power of the AP, and it still kept locking up. This particular AP is an Engenius EAP-300. Then we had another AP start locking up, just down the hallway from the one we've been having issues with. That AP is a Netgear N600 WNDR3400.
So I still thought maybe some kind of wireless interference or something going on with the switch they both were connected to (which happens to be a new switch). But shortly after that Netgear started locking up, we had another Netgear AP start locking up in our ER, which is on the far opposite side of the building. And now we are having another Engenius AP lock up in our Home Health department, which is in a separate building across town connected to us via fiber.
So now I'm thinking there has to be something going on over the network that's causing these Access Points to lock up. Possibly a bot attack that's trying to login to the devices? It seems like once a device starts locking up, it continues to lock up. But I have another Engenius AP that's never locked up in our Clinic department, and several other Netgears that haven't locked up.
I guess I'm just looking for ideas and suggestions for what else might be causing this.
- Every location that has had an AP lock up, I have tried swapping between a Netgear AP and an Engenius AP. However, I always configure the AP with its IP address for that location.
- When these APs lock up, their lights stay on and they appear to be functional; however, they do not send out a wireless signal, and they cannot be pinged or accessed until they are power cycled.
- Engenius APs have a function that allows them to dynamically change their channel for the least amount of interference. I have tried operating these APs both in this mode, and by manually setting their broadcast channel.
- Sometimes the time between radio lock ups is as quick as 20 minutes, other times they can go up to a full day between lockups. Occurs most frequently during the day between 8am and 5pm, but occurs even at night at all times between 5pm and 8am.
- None of the APs use the default username and password (however I have thought about setting one up this way to see if it becomes compromised)
I thought about monitoring the traffic to one of the problem APs, but I'm not sure of a good, easy way to do this. Logs are reset on the APs when they are power-cycled so it's hard to see what was going on with the radio when it locked up.
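Since the APs lose their logs on every power cycle, one low-effort option is to monitor reachability from a wired machine instead: a small shell loop that pings the AP and timestamps each state change gives you the exact lockup time to correlate against switch logs or events in the building. A sketch (the IP and interval in the usage line are just examples):

```shell
#!/bin/sh
# ap_watch.sh -- timestamp an AP's reachability so lockups can be
# correlated with other events.
# Usage: ./ap_watch.sh 10.84.253.1 30

probe() {
    # One ICMP probe with a 2-second timeout; prints "up" or "DOWN".
    if ping -c 1 -W 2 "$1" >/dev/null 2>&1; then
        echo up
    else
        echo DOWN
    fi
}

main() {
    ap=${1:?usage: ap_watch.sh <ip> [interval-seconds]}
    interval=${2:-30}
    last=""
    while :; do
        state=$(probe "$ap")
        if [ "$state" != "$last" ]; then   # log only on state changes
            printf '%s %s is %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$ap" "$state"
            last=$state
        fi
        sleep "$interval"
    done
}

# Only start the loop when an AP address was given on the command line.
if [ $# -ge 1 ]; then main "$@"; fi
```

Leaving this running against one of the problem APs (output redirected to a file) would at least tell you whether the lockups cluster around particular times of day.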
Network Info: 10.84.0.1 / 255.255.0.0
New APs or reconfigured APs are assigned an IP address between 10.84.253.1 and 253.25
Only one of the problem APs has an IP address outside of the 253 octet.
Of the 3 Engenius Radios:
- AP1: 10.84.253.1 : Hallway - Locks up
- AP2: 10.84.253.2 : Clinic - No Issues
- AP3: 10.84.253.3 : Home Health - Locks up
AP3 was just installed at the new Home Health building and was online for less than 24 hours before it started locking up.
Please, any help or ideas would be greatly appreciated!
To isolate environmental issues, have you removed these APs to a lab environment to see if this happens? I mean, take one home and let it run and see if it locks up. Maybe the CT scan/X-ray/MRI machines are interfering? :)
With the different models that have had the same issue and considering your environment, I suspect this might be the case.
If the issue still happens at your house, good thing is, you have eliminated env issues. Next, upgrade firmware and upload a new config file. If that also doesn't work, try connecting them to a different switch.
Also, are these PoE AP's? Maybe the switch is providing irregular power for whatever reason-try a different switch.
Of course if you do have an IDS and some deep scan security system in place, I would use them in parallel to rule out a bot attack like you suspect.
PS: Do you have metal studs in the walls in some areas instead of wood? That may also explain why some APs are having issues - with enough blockage of radio waves, to the end user the wireless will appear not to work.
I've recently been having a devil of a problem with the Engenius. I think it was something like the EAP9550. Did many of the things you mentioned, but never could isolate the problem.
Check the placement of the units. Are they near a heat source? If so, does the issue go away if you move them?
Check for updated firmware. This may be a known issue ... or an undocumented fix.
Focus on monitoring a single AP at a time. There may be an environmental event that's causing a problem. Perhaps, for example, the circuit the AP is plugged into experiences a low-power event when staff turns on the microwave.
If it's happening more frequently, that speaks to me of degenerating hardware ... or an increase in the environmental event that's causing the lockup. Keep in mind that you may be experiencing a different cause for the lockup on each AP.
BTW, upgrading firmware, in my case, didn't make a difference. And it is weird. Might go a week, an hour, or a day, then lock up.
It also sounds to me like intermittent hardware failure. If you intend to replace them, I recommend Ciscos, and I'm aware of the price. Also, there are many models of routers/access points you can install DD-WRT on. These are the options we use in our healthcare environment.
If the unit has not been in service very long or has never been burned in, I would take it out of production and see about an RMA, or move it to a lab for testing. Also, I would try a Meru AP - I have had great success with them.
Outermost Systems, LLC is an IT service provider.
Are they where users can get their hands on them? I had one that developed problems every week or ten days. I've hidden it on top of a rack where it is not noticeable. Since I hid it it's been humming along for months without issues.
Well I appreciate all the replies. A little update on my situation: I have gotten the radios under control and have gotten them to stop locking up. I did this by switching the encryption from WEP to WPA2. One thing I had long forgotten about, since it's been an issue long before I started working here, is that it seems EAP was enabled on the network. However, I used to always think that was a setting enabled on the access point, and my IT director has no knowledge of it ever being implemented in the past for any reason. I noticed I would get the smart authentication box anytime I tried to connect to the private network, regardless of whether it was an old AP or a new AP. The smart authentication I'm referring to is the certificate it looks for, for additional authentication. So after you type in the WEP key, it connects, and then tries to further authenticate. We had to work around this by manually adding our network to each device that we needed connected.
After switching a few radios over to WPA2, they stopped locking up and they stopped doing the EAP. I've never used EAP myself; I read up on it some on Wikipedia, but I guess I'm still at a loss as to why it would affect all of our radios that used WEP encryption. Or maybe I'm on the wrong track about it being EAP, and something else caused the devices to think they needed further authentication. However, I have verified there are no settings enabled in the radios themselves that would cause this, and the only setting I've changed is the encryption type.
May 3, 2012 at 4:41 UTC
There seems to be some issues with your network configuration.
Network Info: 10.84.0.1 / 255.255.0.0?
Should probably read:
Network Info: 10.84.0.0/ 255.255.0.0
You can just call it a typo; I'm used to just typing out the default gateway and subnet mask.
I know I'm coming into this conversation late, but I thought I would throw this out there in case it helps someone down the road.
I had a similar issue a few months back. The 9550 (I only have one) would lock up, just as yours did, with increasing frequency over time. I was using WPA from the start. I tried different security settings: Open, WPA, WPA2.
Ultimately, I reset the AP back to factory default, updated the firmware, and rebuilt the configuration (not a restore).
|
OPCFW_CODE
|
Welcome to this lesson on secure network connectivity.
This lesson is part of the AZ-500 Microsoft Azure Security Technologies course.
Quick information on what we'll be covering in this lesson.
We'll start out with a review of secure connectivity options for an Azure virtual network.
We'll then discuss the Azure VPN gateway,
point-to-site VPN authentication options,
the ExpressRoute gateway,
and ExpressRoute encryption. Let's get into this.
When we talk about connectivity to or from outside Azure, there are three main options that the Azure network supports. The first option is secure point-to-site connectivity, which is achieved by connecting the client endpoint to an Azure VPN gateway. This solution is useful for telecommuters who want to connect to Azure virtual networks from a remote location,
such as from home or from a conference.
However, there are times when we need to securely connect entire remote networks to our virtual networks in Azure,
and this is where the second option can help:
secure site-to-site VPN connectivity,
which is achieved by connecting firewalls in our remote networks to a VPN gateway in Azure. This connection goes over the public internet, but communication is encrypted over an IPsec tunnel. The third option is ExpressRoute private connectivity, which provides private connectivity from our on-premises data centers to our Azure virtual networks or even other Microsoft cloud services like Office 365. Unlike the previous VPN options that we mentioned, this connectivity does not go over the public internet,
and it requires us to have a relationship
with a connectivity provider. These three options are achieved using two main services in Azure:
the VPN gateway and the ExpressRoute gateway. So let's look at both of these.
First, the VPN gateway. A VPN gateway is a specific type of virtual network gateway that is used to send encrypted traffic between an Azure virtual network and an on-premises location over the public internet. And as we mentioned earlier, it supports two main scenarios:
a site-to-site VPN connection for remote network connections,
and a point-to-site VPN connection for remote user connections. So here's how the VPN gateway works. First, we create a special subnet called the gateway subnet. We then deploy the VPN gateway into this subnet.
No other Azure resources should be deployed into this subnet, and we should also avoid adding network security groups to it.
After the gateway is created, we get a public IP that we can then use to create an IPsec connection from our remote firewall to this gateway.
And finally, to ensure that traffic from our network is routed to this gateway, we can create a custom route table that sends that traffic to the gateway.
For point-to-site VPN, end users just need to connect the VPN client on their devices to the gateway in order to connect to resources in Azure virtual networks.
That way, traffic is routed through the gateway to the resources in Azure.
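As a reference, the gateway-subnet-plus-gateway pattern just described can be sketched as an infrastructure template. This is a minimal, hypothetical sketch (names, API versions, and address ranges are examples, not taken from the lesson):

```bicep
// Minimal sketch: a dedicated GatewaySubnet, a public IP, and a
// route-based VPN gateway deployed into that subnet.
param location string = resourceGroup().location

resource vnet 'Microsoft.Network/virtualNetworks@2022-07-01' = {
  name: 'hub-vnet'
  location: location
  properties: {
    addressSpace: { addressPrefixes: [ '10.0.0.0/16' ] }
    subnets: [
      {
        name: 'GatewaySubnet' // the name is mandatory; no NSG, no other resources here
        properties: { addressPrefix: '10.0.255.0/27' }
      }
    ]
  }
}

resource gwip 'Microsoft.Network/publicIPAddresses@2022-07-01' = {
  name: 'vpngw-pip'
  location: location
  sku: { name: 'Standard' }
  properties: { publicIPAllocationMethod: 'Static' }
}

resource vpngw 'Microsoft.Network/virtualNetworkGateways@2022-07-01' = {
  name: 'vpngw'
  location: location
  properties: {
    gatewayType: 'Vpn'
    vpnType: 'RouteBased'
    sku: { name: 'VpnGw1', tier: 'VpnGw1' }
    ipConfigurations: [
      {
        name: 'default'
        properties: {
          subnet: { id: '${vnet.id}/subnets/GatewaySubnet' }
          publicIPAddress: { id: gwip.id }
        }
      }
    ]
  }
}
```

The public IP attached to the gateway is what the remote firewall would target when building the IPsec tunnel mentioned in the lesson.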
When using point-to-site VPN connections,
users have to authenticate before the VPN connection request is accepted,
and there are three mechanisms that Azure offers to authenticate a connecting user. The first option is Azure certificate authentication.
With this option, the end users use a client certificate stored on their devices to authenticate to the Azure VPN gateway. The gateway then verifies the authenticity of the certificate.
We can use a self-signed certificate or an enterprise PKI solution,
and the root certificate would need to be uploaded to Azure for the validation.
The second option is Azure AD authentication, where users can use their Azure AD credentials to authenticate to the VPN gateway. However, this solution is only supported for the OpenVPN protocol for point-to-site VPN; the SSTP and IKEv2 options
are not supported.
Also, support is limited to Windows 10 clients using the Azure VPN client, so there are a lot of restrictions to make this work.
The third option is on-premises Active Directory authentication,
which requires a RADIUS server that integrates with the Active Directory server, and it also requires network connectivity from the VPN gateway to the RADIUS server.
So if the RADIUS server is in Azure, it needs to have a line of sight to the VPN gateway.
The other gateway type is the ExpressRoute gateway, and this can be used to extend our on-premises networks into our Azure virtual networks over a private connection through a service partner, not over the public internet.
This works in a similar way to how the VPN gateway works. First, we have a gateway subnet.
We deploy the ExpressRoute gateway into the subnet. We then work with a supported partner who provides a redundant connection to the Microsoft edge that terminates in the gateway subnet.
We can create a custom route table for private subnets in Azure to ensure that traffic that's going to the remote connection goes through the gateway subnet. However, even though this connection is private,
it is not encrypted by default. So let's look at the options for encryption. ExpressRoute supports a couple of encryption technologies to ensure confidentiality and integrity of the data traversing between our network and the Microsoft network.
The first option is point-to-point encryption with MACsec.
MACsec is an IEEE standard; it encrypts data at the media access control level, or layer two. It can be used to encrypt the physical links between our network devices and Microsoft's network devices, and this encryption happens on the physical hardware routers.
The good thing about the MACsec option is that we can bring our own encryption key,
and this is a pre-shared key that we can store in Azure Key Vault. Once MACsec is enabled, all network control traffic is also encrypted, and that includes BGP data traffic. We cannot pick and choose which data to encrypt.
The other option is IPsec, and that is IPsec over our ExpressRoute.
IPsec is of course an IETF standard; it encrypts data at the IP level, or the network layer (layer three). Here are some supplementary links for further study on the topics covered in this lesson. And here's a summary of what we covered.
We started by talking about secure connectivity options for the Azure Virtual Network.
We then discussed the Azure VPN gateway,
point-to-site VPN authentication options,
the ExpressRoute gateway, and finally ExpressRoute encryption.
Thanks very much for watching, and I'll see you in the next lesson.
|
OPCFW_CODE
|
Factoring out common code makes code more readable, more likely to be reused, and limits errors from complex code.
The GSL is the small set of types and aliases specified in these guidelines. As of this writing, their specification here is too sparse; we plan to add a WG21-style interface specification to ensure that different implementations agree, and to propose as a contribution for possible standardization, subject as usual to whatever the committee decides to accept/improve/alter/reject.
As far as we can tell, these rules lead to code that performs as well as or better than older, more conventional techniques; they are meant to follow the zero-overhead principle (“What you don’t use, you don’t pay for” or “When you use an abstraction mechanism appropriately, you get at least as good performance as if you had handcoded using lower-level language constructs”).
This may generate a lot of false positives in some code bases; if so, flag only switches that handle most but not all cases.
Such functions should accept a smart pointer only if they need to participate in the widget’s lifetime management. Otherwise they should accept a widget*, if it can be nullptr. Otherwise, and ideally, the function should accept a widget&.
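A minimal sketch of that parameter-passing guidance; the widget type and the function names here are illustrative, not taken from the guidelines themselves:

```cpp
#include <memory>

struct widget { int id = 0; };

// Keeps (shares) ownership, so a smart-pointer parameter is justified.
void retain(std::shared_ptr<widget> w, std::shared_ptr<widget>& slot) {
    slot = std::move(w);
}

// May legitimately be called with "no widget": a pointer that can be nullptr.
int id_or_zero(const widget* w) { return w ? w->id : 0; }

// The common, preferred case: the function only uses the widget.
int id_of(const widget& w) { return w.id; }
```

The point is that each signature documents the lifetime contract: shared_ptr for shared ownership, raw pointer for "optional", plain reference for simple use.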
tools that have been found useful in writing good C++ code. If a tool is designed specifically to support and links to the C++ Core Guidelines, it is a candidate for inclusion.
The basic technique for avoiding leaks is to have every resource owned by a resource handle with a suitable destructor. A checker can find “naked new”s. Given a list of C-style allocation functions (e.g., fopen()), a checker can also find uses that are not managed by a resource handle.
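As a hedged sketch of that technique, here is an illustrative RAII handle for the fopen() example; the FileHandle name is made up for this sketch and is not part of the guidelines:

```cpp
#include <cstdio>
#include <stdexcept>

// Illustrative RAII wrapper: the FILE* is owned by a handle whose
// destructor releases it, so no code path can leak the resource.
class FileHandle {
public:
    FileHandle(const char* path, const char* mode)
        : f_(std::fopen(path, mode)) {
        if (!f_) throw std::runtime_error("fopen failed");
    }
    ~FileHandle() { if (f_) std::fclose(f_); }

    // Non-copyable: exactly one owner of the resource.
    FileHandle(const FileHandle&) = delete;
    FileHandle& operator=(const FileHandle&) = delete;

    std::FILE* get() const { return f_; }

private:
    std::FILE* f_;
};
```

A checker looking for "naked" fopen() calls would be satisfied here, because every use of the FILE* goes through the owning handle.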
: (1) a description of the desired behavior of a program or part of a program; (2) a description of the assumptions a function or template makes about its arguments.
Alternatives: If you think you need a virtual assignment operator, and understand why that’s deeply problematic, don’t call it operator=. Make it a named function like virtual void assign(const Foo&).
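A small sketch of the named-function alternative; the Foo/Bar classes and the label member are illustrative only:

```cpp
#include <string>
#include <utility>

// Instead of a virtual operator=, use a named virtual function.
// Unlike operator=, calls through a base reference dispatch as expected.
class Foo {
public:
    virtual ~Foo() = default;
    virtual void assign(const Foo& other) { label_ = other.label_; }
    const std::string& label() const { return label_; }
    void set_label(std::string s) { label_ = std::move(s); }
private:
    std::string label_;
};

class Bar : public Foo {
public:
    void assign(const Foo& other) override {
        Foo::assign(other);  // copy the base part
        // derived-specific state would be handled here
    }
};
```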
In general, cleaner code yields better performance with exceptions (simplifying the tracing of paths through the program and their optimization).
If the requirements above are met, the design guarantees that PostInitialize has been called for any fully constructed B-derived object. PostInitialize doesn’t need to be virtual; it can, however, invoke virtual functions freely.
If x = x changes the value of x, people will be surprised and bad errors will occur (often including leaks).
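One common way to satisfy that rule is copy-and-swap assignment, sketched here with an illustrative Buffer class (not from the guidelines):

```cpp
#include <algorithm>
#include <cstddef>

// Copy-and-swap assignment: `v = v` leaves the object unchanged and
// cannot leak, because the parameter is a fresh copy that is swapped in.
class Buffer {
public:
    explicit Buffer(std::size_t n) : size_(n), data_(new int[n]()) {}
    Buffer(const Buffer& other)
        : size_(other.size_), data_(new int[other.size_]) {
        std::copy(other.data_, other.data_ + size_, data_);
    }
    Buffer& operator=(Buffer other) {  // pass by value: copy already made
        std::swap(size_, other.size_);
        std::swap(data_, other.data_);
        return *this;                  // safe even when assigning to self
    }
    ~Buffer() { delete[] data_; }
    int& operator[](std::size_t i) { return data_[i]; }
    std::size_t size() const { return size_; }
private:
    std::size_t size_;
    int* data_;
};
```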
Make sure you contact the editors if you find a counter case in point. The rule here is much more warning and insists on full protection.
Passing an additional info uninitialized variable to be a reference to non-const argument might be assumed for being a produce Recommended Site in to the variable.
|
OPCFW_CODE
|
OpenGL Hook, How to modify 3d model color?
Hook game is cs 1.6
Hook's glbegin function
I want to assign colors to the character model rendering, but the code is invalid
Ask for help
void APIENTRY hkGLBegin(GLenum mode)
{
if (mode==5)
{
glEnable(GL_TEXTURE_2D);
glColor4f(0, 0, 0, 0); //TODO FIXME failed
glDisable(GL_TEXTURE_2D);
}
}
It depends how the game renders character models. Do you know how the game renders character models?
I am not familiar with this kind of GL hacking so I might be wrong... however
What do you mean by "the code is invalid"? Is there any error message at compile time or runtime? Maybe you just need to add:
#include <gl.h>
or
#include <gl\gl.h>
or any other abbreviation related to your compiler ...
I do not think glEnable(GL_TEXTURE_2D); and glDisable(GL_TEXTURE_2D); are allowed inside a glBegin/glEnd pair ...
If the hook runs before glBegin, then enabling and immediately disabling texturing just leaves texturing disabled. Also, if the hook does not call the original glBegin, nothing will be rendered, so you might want to add that too...
So I would leave just the glColor call and remove the texture statements; they have nothing to do with color anyway...
If it does not work, it might mean that CS is overriding it with its own glColor calls, or that color material has not been enabled with glEnable(GL_COLOR_MATERIAL); so try adding it ...
Why use mode == 5 instead of mode == GL_TRIANGLE_STRIP ? This is taken from gl.h
#define GL_POINTS 0x0000
#define GL_LINES 0x0001
#define GL_LINE_LOOP 0x0002
#define GL_LINE_STRIP 0x0003
#define GL_TRIANGLES 0x0004
#define GL_TRIANGLE_STRIP 0x0005
#define GL_TRIANGLE_FAN 0x0006
#define GL_QUADS 0x0007
#define GL_QUAD_STRIP 0x0008
#define GL_POLYGON 0x0009
Also, are you sure CS is using only that primitive? If not, you should handle the others too ...
so I would change to:
#include <gl.h> // optional if compile errors present
void APIENTRY hkGLBegin(GLenum mode)
{
if (mode==GL_TRIANGLE_STRIP)
{
glEnable(GL_COLOR_MATERIAL); // optional if color is not changed by glColor on textured surfaces
glColor4f(0, 0, 0, 0);
}
glBegin(mode); // optional if nothing is rendered
}
|
STACK_EXCHANGE
|
Purpose of functions in arrays?
I'm new to programming. I recently started with JavaScript, and one of the topics was creating an array of functions. My question is: what are those useful for? I didn't get the idea behind it. Can someone help me understand?
Update: to make the question more clear I will use an example a colleague shared. Let's say we have this:
var twoDimensionalImageData = ...
var operations = [
function(pixel) { blur(pixel); },
function(pixel) { invert(pixel); },
function(pixel) { reflect(pixel); }
];
foreach(var pixel in twoDimensionalImageData)
foreach(var func in operations)
func( pixel );
Can this be achieved without the use of functions in an array? Or can this be achieved without the use of function(pixel) in the operations array? If yes, I would like to understand why functions in an array can be better than normal functions. What's the benefit of it?
Could you provide example to demonstrate what you are asking about?
What if you had a situation where you needed to decide what sort of action to take, and the decision was based on a number between 0 and 10?
Basically, an array of functions is nothing I needed in my time as a "beginner". It's something for advanced design patterns.
I can see a possible use for an array-of-functions if you're wanting to massage data; rather than using function currying and composition, just apply a series of functions to the data, like macro steps. This might be useful in imaging applications, think of Photoshop's "Actions" feature.
var twoDimensionalImageData = ...
var operations = [
function(pixel) { blur(pixel); },
function(pixel) { invert(pixel); },
function(pixel) { reflect(pixel); }
];
foreach(var pixel in twoDimensionalImageData)
foreach(var func in operations)
func( pixel );
You could use a list of functions as a list of callbacks or have functions as listeners (instead of objects) of an Observer pattern.
Observer:
This is a very famous software design pattern. It consists of having one main object or, say, Entity, that lots of other objects are interested into. When this main Entity changes its attribute or something happens to it, it tells whoever is interested (or listening) that something happened (and what did).
List of callbacks:
Could be useful when, say, you made an Ajax request (Asynchronous Javascript and XML) to update your news feed and then you want to also execute many other steps. All you have to do if you have this list of functions is iterate through it and call them. (Yes, you could call them in a single function, but keeping them in an array would give it a lot of flexibility).
In both cases, it's very easy to "know" what functions you have to call :)
The asker is new to programming, and your answer uses design patterns as an example!!
I use this often - in fact (using Lists), this is how event handlers were done in a lot of C# stuff.
holy crap, let me elaborate :)
|
STACK_EXCHANGE
|
Post by DubstepJoltik on Aug 19, 2017 10:11:26 GMT -5
Intro: I don't really get why you would repost this thread after literally seven minutes. You can simply just edit the old thread into Google Translate. Level Name: Adventure 2 Creator: superkirill (Solo) ID: 36297989 (Easy to memorize, btw. There's a pattern)
General: Gameplay-wise, this level isn't anything special. Nothing really good that stands out and nothing bad that stands out either. There are some annoying bugs, however. For example, sometimes, I crash in the Wave section for no reason. I crash into air, basically. So overall, the layout isn't very special. One more minor thing I can nitpick is I think the gameplay in the final Mini Cube part is too simple. The platforms are too close to each other, and they're all basically in the same line on the y-axis.
Transitions: I don't really like the transitions. They're really fast, especially at the first 25-ish% of the level. Make it so the player has more time to react, and this would be better.
Synchronization: The sync isn't incredibly spot-on, but it still works in some places, like the first Cube section. It's good enough, is what I'm saying.
Difficulty of Level: I was torn between 6 and 8 stars. (I don't quite think 7 stars will work for this level.) There are some really easy parts, but at the same time, there are some really hard parts. In the end, I decided the winning rating was 8 stars.
Air Decoration: This level doesn't have very much air deco. There's very little of it, since the backgrounds are really what shine in this level. It could really use some, because the lack of it really stands out.
Block Design: I love the block designs. They're extremely original and they change throughout the entire level. They match with the background and work cohesively. The colors are pretty nice as well.
Backgrounds and Art: I. Love. Everything about this part. This level's backgrounds are just gorgeous. I really love the forest design at the Ball part, the city design at the first Cube part, and especially the sky design at the final Mini Cube part (my favorite part is the balloons). I think the Ship and Wave parts could have more flair to them, but other than that, the art in this level is perfect. Nothing looks choppy, nothing looks copy-pasted, everything looks amazing and original.
God help any poor soul that thinks this is a bad level. I think this level is great. The blocks go well with the backgrounds, the colors work very well, and the gameplay isn't obnoxious or cancerous (except a few transitions). I'm just in shock that most simple levels are still getting featured when levels like this aren't.
Final Score: 7/10 - If the gameplay was better and there was more air decoration, then I would call this level worthy of the Hall of Fame. This is the first time I've reviewed a "Feature-Worthy" level.
What can be improved? Make the transitions slower/easier to react to Add more flair to the Ship and Wave sections Give the gameplay more excitement
Optional things you could do: Raise and lower some platforms in the final Mini Cube section Nerf the gameplay
notvx: anybody know how to make extreme demon gameplay?
Dec 3, 2023 6:23:27 GMT -5
Gmaxable: <269127> I do notvx! I am a gameplay member and host of an extreme demon collab I am doing! I finished all the gameplay and am decorating a different part. Why do you need extreme demon gameplay?
Dec 3, 2023 7:43:59 GMT -5
|
OPCFW_CODE
|
This report captures status information of all applications / accounts for ISVs during the onboarding process. It provides the status information across the onboarding journey at any given point until the account is Enabled for processing. Further, ISVs can develop the merchant onboarding pipeline and conduct appropriate analysis utilizing reports.
The report also captures the following statuses:
- Simplified Onboarding App form (before the application is submitted): ‘Registration pending’, ‘Business info pending’, ‘Owner Info pending’, ‘KYC check pending’, ‘Bank Info Pending’, ‘Bank Documents Pending’, and ‘Submit Pending’ (Note: there is no ‘Submitted’ status; the next status will be ‘Processing group compliance’ – see below).
- Credit Underwriting (after the merchant application is submitted): ‘Processing’, ‘Rejected’, ‘Withdrawn’, ‘Approved’, ‘Enabled’, ‘Disabled’.
This report processes two sets of data - a) Before the merchant application is submitted b) After the merchant application is submitted. The creation date definition varies for these two sets of data.
- CREATION_DATE (before the merchant application is submitted): If the merchant application is not submitted, the creation date corresponds to the day when the application was first accessed by the merchant.
- CREATION_DATE (after the merchant application is submitted): Once the merchant application is submitted, the creation date corresponds to the day when the merchant application is actually submitted.
There are a number of fields that get populated only after the merchant application is submitted. These fields will have the value "NA" before the merchant application is submitted: (PRODUCT_NAME,PRODUCT_ID,ACCOUNT_ID,ACCOUNT_NAME,LEGAL NAME, COUNTRY, PROCESSING_CURRENCY, BUSINESS_CATEGORY, LAST_COMMENT_ENTERED, ON_HOLD_INDICATOR, ON_HOLD_REASON_CODE, ON_HOLD_REASON, NOTE, AMEX_MID, PROCESSING_TYPE, AGENT_PROFILE_1, AGENT_PROFILE_2, DEFAULT_BANK, ENABLED_DATE)
By default, the report returns data for CREATION_DATE = (current date (UTC) - 30 days) to the current date.
|Column||Column Header Description||Definition||Data Type||Sample Value|
|A||APPLICATION_ID||This is the unique number created for the merchant application||String||XYZ123|
|B||PARTNER_NAME||Name of the ISV Partner||String||XYZ Ltd|
|C||PARTNER_ID||ID number of the ISV Partner||String||1020|
|D||PRODUCT_NAME||ISV Partner Product Name (Product = Payfac Link or Account Management API)||String||Rentmoola-DD|
|E||PRODUCT_ID||ID number of the Product page in Geneva/Netbanx||String||38012|
|F||ACCOUNT_ID||FMA # of the account boarded||String||1002584542|
|G||ACCOUNT_NAME||DBA name of the account boarded||String||Test Name|
|H||CREATION_DATE||Date when the FMA was created / Date the Application was started||String||2019-11-28 17:51:18.0|
|I||LEGAL_NAME||Legal name of the account boarded||String||Confidential|
|J||COUNTRY||Country of the account boarded||String||CANADA|
|K||PROCESSING_CURRENCY||Processing currency of the account boarded||String||CAD|
|L||BUSINESS_CATEGORY||MCC (Merchant Category Code) of the account boarded||String||6513 REAL ESTATE AGENTS AND MANAGERS-RENTALS|
|N||LAST_COMMENT_ENTERED||Comments from Ops or Risk team||String||See notes|
|O||ON_HOLD_INDICATOR||Notates if there are Payment Holds||String||N|
|P||ON_HOLD_REASON_CODE||Reason code for Hold||String||RMD|
|Q||ON_HOLD_REASON||Hold Reason code definition||String||Require Micro Deposit|
|R||NOTE||Additional Notes||String||Response attached|
|S||AMEX_MID||Amex MID provided||String||999999999|
|T||PROCESSING_TYPE||Notates Credit Card (CC) or Direct Debit (DD) account||String||DD|
|U||AGENT_PROFILE_1||Agents that are tagged to the FMA||String|
|V||AGENT_PROFILE_2||Agents that are tagged to the FMA||String|
|X||ENABLED_DATE||Date the account was set to Enabled status||String||2019-12-06 11:26:31.0|
|
OPCFW_CODE
|
package com.ddoerr.clientgui.bindings;
import com.ddoerr.clientgui.models.Insets;
import com.ddoerr.clientgui.models.Size;
import com.sun.javafx.binding.IntegerConstant;
import javafx.beans.property.ObjectProperty;
import javafx.beans.value.ObservableIntegerValue;
import javafx.beans.value.ObservableNumberValue;
import javafx.beans.value.ObservableObjectValue;
import java.util.function.Function;
public interface SizeExpression extends ObservableObjectValue<Size> {
default SizeBinding add(int width, int height) {
return SizeBinding.createSizeBinding(() -> Size.of(get().getWidth() + width, get().getHeight() + height), this);
}
default SizeBinding addWidth(int width) {
return add(width, 0);
}
default SizeBinding addHeight(int height) {
return add(0, height);
}
default SizeBinding subtract(SizeExpression sizeBinding) {
return SizeBinding.createSizeBinding(() -> Size.of(get().getWidth() - sizeBinding.get().getWidth(), get().getHeight() - sizeBinding.get().getHeight()),
this, sizeBinding);
}
default SizeBinding subtractWidth(ObservableIntegerValue width) {
return SizeBinding.createSizeBinding(() -> Size.of(get().getWidth() - width.get(), get().getHeight()), this, width);
}
default SizeBinding subtractHeight(ObservableIntegerValue height) {
return SizeBinding.createSizeBinding(() -> Size.of(get().getWidth(), get().getHeight() - height.get()), this, height);
}
default SizeBinding divide(ObservableIntegerValue width, ObservableIntegerValue height) {
return SizeBinding.createSizeBinding(() -> Size.of(get().getWidth() / width.get(), get().getHeight() / height.get()), this, width, height);
}
default SizeBinding divideWidth(ObservableIntegerValue width) {
return divide(width, IntegerConstant.valueOf(1));
}
default SizeBinding divideHeight(ObservableIntegerValue height) {
return divide(IntegerConstant.valueOf(1), height);
}
default SizeBinding divide(int width, int height) {
return divide(IntegerConstant.valueOf(width), IntegerConstant.valueOf(height));
}
default SizeBinding divideWidth(int width) {
return divide(width, 1);
}
default SizeBinding divideHeight(int height) {
return divide(1, height);
}
default SizeBinding multiply(int width, int height) {
return SizeBinding.createSizeBinding(() -> Size.of(get().getWidth() * width, get().getHeight() * height), this);
}
default SizeBinding multiplyWidth(int width) {
return multiply(width, 1);
}
default SizeBinding multiplyHeight(int height) {
return multiply(1, height);
}
default SizeBinding setWidth(ObservableNumberValue width) {
return SizeBinding.createSizeBinding(() -> Size.of(width.getValue().intValue(), get().getHeight()), this, width);
}
default SizeBinding setWidth(int width) {
return SizeBinding.createSizeBinding(() -> Size.of(width, get().getHeight()), this);
}
default SizeBinding setHeight(ObservableNumberValue height) {
return SizeBinding.createSizeBinding(() -> Size.of(get().getWidth(), height.getValue().intValue()), this, height);
}
default SizeBinding setHeight(int height) {
return SizeBinding.createSizeBinding(() -> Size.of(get().getWidth(), height), this);
}
default SizeBinding addOuterInsets(ObjectProperty<Insets> insetsProperty) {
return SizeBinding.createSizeBinding(() ->
Size.of(get().getWidth() + insetsProperty.get().getWidth(), get().getHeight() + insetsProperty.get().getHeight()),
this, insetsProperty);
}
default SizeBinding addInnerInsets(ObjectProperty<Insets> insetsProperty) {
return SizeBinding.createSizeBinding(() ->
Size.of(get().getWidth() - insetsProperty.get().getWidth(), get().getHeight() - insetsProperty.get().getHeight()),
this, insetsProperty);
}
default SizeBinding calculate(Function<Size, Size> function) {
return SizeBinding.createSizeBinding(() -> function.apply(get()), this);
}
}
|
STACK_EDU
|
Windows startup programs – Database search. If you’re frustrated with the time it takes your Windows 10/8/7/Vista/XP PC to boot and then it.
Inotask.exe Application Error. Inotask.exe is a file within eTrust.
How to fix SchTasks.exe application errors. Broken exe file references can prevent your exe file from registering properly, giving you a Schtasks.exe error.
Error Failure Repodata/filelists.xml.gz From Base “The thought that an error on our part is connected to this guy’s purchase of a. GitHub is home to over 20 million developers. Metadata file does not match checksum Trying other mirror. Error: failure: repodata/filelists.xml.gz from mod. Red Hat Customer Portal Labs. failed to retrieve repodata/*-primary.xml.gz from rhel-x86_64-server-5. #
error (an error that does not stop the cmdlet processing) at the command line or in a script, cmdlet, To apply the configuration to all consoles, save the variable settings in your Windows PowerShell profile. command is run in the Windows Command Prompt (Cmd.exe), FINDSTR finds the characters in the text file.
After installing windows 10, error message is showing as: DsmUserTask.exe – Application Error 'The instruction at 0x00007FFE792CA060 referenced memory at.
Registry Error Cleaner Free Htc Sync Error Reading Setup Initialisation File How to avoid the error 'Reading setup initialization file' on. – Hello, I have downloaded a setup file v.3.0.5579 to sync my htc windows 8 phone to my windows 8 system, but when I am trying to install this setup am Fatal Error
Dec 23, 2015. As it turns out, the user had deleted the McAfee Anti-virus program directory. No mean feat in itself. Which in turn led to the error message above when the user logged on. So I knew McAfee was probably the cause. When McAfee AV installs, it replaces the registry entry for vbscript.dll with scriptsn.dll.
CVE – CVE Reference Map for Source IDEFENSE – IDEFENSE:20050228 Mozilla Firefox and Mozilla Browser Out Of Memory Heap Corruption Design Error, CVE-2005-0255. IDEFENSE:20050301 RealNetworks. IDEFENSE:20070509 Computer Associates eTrust InoTask.exe Antivirus Buffer Overflow Vulnerability, CVE-2007-2523. IDEFENSE:20070509 Symantec.
spoolsv.exe – Application Error [RESOLVED] – posted in Virus, Spyware, Malware Removal: Hi All! I recive the following error message as soon as the PC boots to the.
|
OPCFW_CODE
|
galileo lost some requests
When I sent 10 requests to the kong, there is only one or two requests on galileo dashboard.
I tried many times. Is this a bug?
There is an easy way to know what Kong sent to the Galileo collector by changing your logging level to debug in the Nginx configuration. Kong will output messages indicating how many requests were sent and accepted by the collector and looking like this:
[mashape-analytics] successfully saved the batch. (xx/10)
Hi @therebelrobot . Thanks for your help. My Kong version is 0.5.4.
I am running it on ubuntu 14.04. And the kong is installed using Docker.
2016/01/29 06:34:24 [error] 54#0: *267 upstream timed out (110: Connection timed out) while connecting to upstream, client: <IP_ADDRESS>, server: _, request: "GET / HTTP/1.1", upstream: "http://<IP_ADDRESS>:80/", host: "mockbin.com:8000"
2016/01/29 06:34:24 [error] 54#0: *267 failed to run log_by_lua*: ...cal/share/lua/5.1/kong/plugins/log-serializers/basic.lua:30: attempt to perform arithmetic on field 'upstream_response_time' (a string value)
stack traceback:
...cal/share/lua/5.1/kong/plugins/log-serializers/basic.lua:30: in function 'serialize'
/usr/local/share/lua/5.1/kong/plugins/file-log/log.lua:47: in function 'execute'
/usr/local/share/lua/5.1/kong/plugins/file-log/handler.lua:12: in function 'log'
/usr/local/share/lua/5.1/kong.lua:247: in function 'exec_plugins_log'
log_by_lua(nginx.conf:108):1: in function <log_by_lua(nginx.conf:108):1> while logging request, client: <IP_ADDRESS>, server: _, request: "GET / HTTP/1.1", upstream: "http://<IP_ADDRESS>:80/", host: "mockbin.com:8000"
2016/01/29 06:34:26 [error] 54#0: *271 upstream timed out (110: Connection timed out) while connecting to upstream, client: <IP_ADDRESS>, server: _, request: "GET / HTTP/1.1", upstream: "http://<IP_ADDRESS>:80/", host: "mockbin.com:8000"
2016/01/29 06:34:26 [error] 54#0: *271 failed to run log_by_lua*: ...cal/share/lua/5.1/kong/plugins/log-serializers/basic.lua:30: attempt to perform arithmetic on field 'upstream_response_time' (a string value)
stack traceback:
...cal/share/lua/5.1/kong/plugins/log-serializers/basic.lua:30: in function 'serialize'
/usr/local/share/lua/5.1/kong/plugins/file-log/log.lua:47: in function 'execute'
/usr/local/share/lua/5.1/kong/plugins/file-log/handler.lua:12: in function 'log'
/usr/local/share/lua/5.1/kong.lua:247: in function 'exec_plugins_log'
log_by_lua(nginx.conf:108):1: in function <log_by_lua(nginx.conf:108):1> while logging request, client: <IP_ADDRESS>, server: _, request: "GET / HTTP/1.1", upstream: "http://<IP_ADDRESS>:80/", host: "mockbin.com:8000"
Well, it seems Kong cannot send data to Galileo because the connection to it is timing out. The Galileo folks are who you need to contact at this point; I'll let @therebelrobot oversee this.
@therebelrobot Could you help me? I don't know what to do to debug this problem.
2016/02/01 08:32:21 [error] 54#0: [lua] buffer.lua:213: send_batch(): [mashape-analytics] failed to send batch (79 ALFs 125326 bytes): timeout, context: ngx.timer, client: <IP_ADDRESS>, server: <IP_ADDRESS>:8000
2016/02/01 08:32:36 [error] 54#0: [lua] buffer.lua:213: [mashape-analytics] failed to send batch (79 ALFs 125326 bytes): timeout, context: ngx.timer, client: <IP_ADDRESS>, server: <IP_ADDRESS>:8000
2016/02/01 08:32:54 [error] 54#0: [lua] buffer.lua:213: send_batch(): [mashape-analytics] failed to send batch (79 ALFs 125326 bytes): timeout, context: ngx.timer, client: <IP_ADDRESS>, server: <IP_ADDRESS>:8000
2016/02/01 08:33:08 [error] 54#0: [lua] buffer.lua:213: send_batch(): [mashape-analytics] failed to send batch (79 ALFs 125326 bytes): timeout, context: ngx.timer, client: <IP_ADDRESS>, server: <IP_ADDRESS>:8000
2016/02/01 08:33:17 [error] 54#0: [lua] buffer.lua:213: send_batch(): [mashape-analytics] failed to send batch (79 ALFs 125326 bytes): timeout, context: ngx.timer, client: <IP_ADDRESS>, server: <IP_ADDRESS>:8000
2016/02/01 08:33:23 [error] 54#0: [lua] buffer.lua:213: send_batch(): [mashape-analytics] failed to send batch (79 ALFs 125326 bytes): timeout, context: ngx.timer, client: <IP_ADDRESS>, server: <IP_ADDRESS>:8000
2016/02/01 08:33:49 [error] 54#0: [lua] buffer.lua:213: [mashape-analytics] failed to send batch (79 ALFs 125326 bytes): timeout, context: ngx.timer, client: <IP_ADDRESS>, server: <IP_ADDRESS>:8000
@dingziran any updates on this issue?
@thefosk I am not using mashape-analytics anymore and have changed to another method to analyze the logs.
I think the problem is that my server has limited bandwidth to socket.analytics.mashape.com. So when the batch is big, it fails with a timeout error.
@dingziran out of curiosity, what kind of tool are you using now?
@sinzone I am using kong's udp log plugin to send json log to a heka server and then push to elasticsearch and kibana.
|
GITHUB_ARCHIVE
|
As is his custom, Dan started his session by sharing pictures from his home…on Maui. Once people were appropriately jealous (mere seconds), we were off on a discussion of SharePoint 2013. In his quest to demystify the latest in collaboration technology, Dan spoke mostly to infrastructure and architecture, similar to the STP presentation from Michael Noel. Dan’s approach was slightly different; however, as he strongly recommended moving to the cloud. His point boiled down to this: If you have the power needed, go virtual. Meaning, if you can have a virtual set-up with enough RAM and a fast enough network to run your environment effectively, then there is no real reason why you should not switch to the cloud. He mentioned how the new Windows Server 2012 is fast and reliable, as well as the newest version of Hyper-V. He also mentioned the use of Windows Azure. Quite the trending theme here in Chicago!
He also mentioned third-party hosting services. Again, like Michael, Dan recommended several different server setups. The key takeaway here was that while it is of course possible to run an entire SharePoint farm on one server, it’s highly inadvisable. Having at least two servers dedicated to each Web tier, apps tier, and database tier is recommended. Also, with regard to outright processing power, Dan believes that the more RAM, the merrier. That said, he did make the distinction that test servers and development servers do not necessarily need as much juice as one might think. Dan is also a big supporter of using a hardware-based load balancer in conjunction with SharePoint 2013’s request management software-side assistance. This one-two punch helps balance traffic across servers and will even push traffic away from servers that are showing signs of slowing.
Another big topic for Dan was the new Distributed Cache. The distributed cache functions as a huge cache that tracks all changes on the farm and is spread evenly across your servers. The two main benefits are populating the newsfeed with the new social features and providing rolling authentication should you get switched to another server during the course of your work. Having all this information in a designated place improves performance and provides a redundant system, as all the data in the distributed cache is also stored in the content database. While data will get kicked out of the distributed cache once it fills up, the data is still retrievable in the content database. The downside to the distributed cache is that if a server goes down, the portion of the cache that was on that server will be lost. It can be rebuilt from the content database but, depending on the size of the cache, this process can take a long time. If a server is taken down correctly, it will automatically distribute the information in the distributed cache from that server to the others, which will save you from losing data.
As far as architecture is concerned, Dan spoke to Microsoft’s suggested architecture, which mirrors that of Office 365. This divides servers based on latency rather than function. Essentially, Dan recommends that you put all the easy and common things together, thereby leaving more room for the beefier components of your environment. Dan’s suggestion is to approach your architecture by first evaluating your organization’s functional needs, then technical considerations (power, hardware/virtual machine, etc.), and finally, cost. Obviously, it would be best to have a dozen servers and hundreds of gigs of RAM, but clearly, that’s not a realistic option for most of us.
Dan ended his session by presenting a few best practices for setting up a new SharePoint deployment. The main message here was to set up generic accounts with high permission levels. These admin accounts can be used for high-level functions and can then be disabled until they are needed again. He also suggested creating an “uber” user, a user who has the appropriate permissions to make any change, anywhere on the farm. As can be imagined, this account should have increased security around the password, but once again, the account should only be active when needed.
|
OPCFW_CODE
|
Hi, my name is Somadina Egbuna. I am a rising sophomore at Newark Collegiate Academy. My starter project is a MintyBoost, which is a portable charger. I chose this because it interested me and also because I was having a problem with my phone's battery. My main project is an omni-directional robot. I chose this project because I like robots, gadgets that move, and cars, so an omni-directional robot satisfies the basics.
MAIN PROJECT – Omni-directional Robot
I was able to successfully present my project on parent night. Moving forward, I am doing my final milestone video and showing all the changes I have made since my last video. I added an ultrasonic sensor that beeps faster the closer an object gets to the robot. I also added blue LEDs to my robot to make it look more presentable and inviting. I chose blue because I like blue and BLUESTAMP! So I think that was interesting. Finally, I changed my receiver because my old receiver malfunctioned.
Today I was able to reach my second milestone. I encountered a lot of problems with my robot, but I was able to make it work. I also got a bigger battery because my smaller battery kept dropping voltage, which made the entire robot shut down and reboot really fast, and this was really annoying. I got a 7.4V 30C 5400mAh Lithium Polymer battery, and this battery is able to carry the robot without problems. The robot jerks a lot, and that causes my battery to run into my Arduino. In order to fix that I need to create a space for my battery. I need to add an Arduino Mega to be able to add ultrasonic sensors to my robot. This is because I have run out of PWM (pulse width modulation) pins and I now need more to run the ultrasonic sensors. PWM is a way of describing a digital signal. I want to add ultrasonic sensors to perform obstacle avoidance and a buzzer that beeps faster the closer an object gets to it (sort of like a parking sensor).
I was able to create my first milestone video. My milestone video was a little “further” because I had already put my motors on my base and I have also set up my entire base and had a fully functional robot. I was able to accomplish this because I stayed a little later than the time I was supposed to and I diligently worked on my base so that I could save time to be able to add an ultrasonic sensor to my robot. I would like to add voice control to my robot to make it a little more advanced and better. Working on this robot hasn’t been easy, especially the coding part. The coding section was the hardest part of this project and that is because I have never coded before. I was just thrown into it but I found my bearing and I was able to overcome it. I am really happy I have gone this far and I hope to go even farther in the next week.
Today I created a MintyBoost. This is a portable charger that you can take with you anywhere you go and never worry about your phone dying. This device works by connecting your phone to it through a USB cable. The device is powered by AA batteries, which makes it convenient when there is no electricity. It contains 4 resistors, which reduce the amount of current the device produces. Each battery produces 1.5 volts, so the combined output of the batteries is 3V, while USB requires 5V; the output is converted by the LT1302, a 5V boost converter. This changes the 3V output of the batteries to 5V, which is the same as the USB output. The device also contains an inductor. An inductor stores energy in a magnetic field and resists changes in current, which lets it keep current flowing into a capacitor, which stores a limited amount of energy for a limited amount of time.
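The voltage numbers above can be sanity-checked with a quick calculation: an ideal boost converter stepping Vin up to Vout runs at a duty cycle of D = 1 - Vin/Vout. This is a back-of-the-envelope sketch (ignoring losses), not something from the MintyBoost documentation:

```python
# Quick sanity check of the MintyBoost voltage numbers above.
# Ideal boost-converter duty cycle: D = 1 - Vin/Vout (losses ignored).
v_batt = 1.5           # volts per AA cell
cells = 2
v_in = v_batt * cells  # 3.0 V combined, as described above
v_usb = 5.0            # USB output the LT1302 must reach

duty = 1 - v_in / v_usb
print(v_in, duty)      # 3.0 0.4
```

So the converter spends about 40% of each switching cycle charging the inductor to lift 3 V up to the 5 V the USB port expects.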
|
OPCFW_CODE
|
I’d like to make a post advertising the two BMRF.ME Rust servers that have been available since the earliest days of third-party Rust hosting. To give you a bit of broad information about our servers, there are a few different things that set them apart. For starters, we’re an approved provider for Rust, which means the server is hosted directly on our hardware. If something goes wrong, we fix it ourselves, with no middleman involved. On top of that, we host on some of the highest-spec server hardware available today, which is as follows:
Dual Intel Xeon E5-2690 @ 3.8GHZ (16 cores, 32 threads)
96GB DDR3 ECC Memory
Dual Intel SSDs
1 gigabit uplink with 40TB of bandwidth on the premium Internap network
Server is located in New York.
The benefits of this kind of hardware are very simple – the server will never lag because of the hardware. If a patch is released that doubles or triples CPU usage, it won’t matter. A single full rust server uses literally 2% of what this machine is capable of hosting. We were also the only host to successfully filter out the uLink DoS attacks, and have filtering for many types of other attacks to a greater extent than most other hosts. Since we’re an official provider, we also offer Rust server hosting through our hosting website, http://bmrfservers.com – but do note that we take on a limited customer base as we are more of a specialty host focused on high performance and direct, one-on-one support.
Now, some more information about BMRF Official Rust 1 and 2:
Rust 1 is our public server and features very standard settings. Sleepers and PvP are on, and the only modification made to this server is the Rust++ chat mod, which we’ve also expanded with some additional logging features. Both servers restart at 5AM EST and attempt to apply any Steam updates. We also have the ability to update our servers via our IRC channel, so they tend to stay very up to date.
Rust 2 is our semi-private server: anybody who is in the http://steamcommunity.com/groups/bmrfservers Steam group will be able to join; everyone else is kicked. It will also immediately kick anybody who has a VAC ban. It features the Rust++ chat modification as well and will be further expanded with more mods in the future, though we’d like to preserve the original Rust gameplay and focus on improvements most people will agree on, so don’t expect anything overly crazy.
Check out both of our servers and let us know what you think. You can provide feedback either in this thread or our IRC channel at #bmrf on Rizon. Our community website is http://bmrf.me and has a handy live chat function for IRC if you need to contact an admin or simply have questions.
|
OPCFW_CODE
|
How do I separate words using regex in python while considering words with apostrophes?
I tried to separate m's in a Python regex by using word boundaries and find them all. These m's should either have whitespace on both sides or begin/end the string:
r = re.compile("\\bm\\b")
re.findall(r, someString)
However, this method also finds m's within words like I'm since apostrophes are considered to be word boundaries. How do I write a regex that doesn't consider apostrophes as word boundaries?
I've tried this:
r = re.compile("(\\sm\\s) | (^m) | (m$)")
re.findall(r, someString)
but that just doesn't match any m. Odd.
The reason your \\s example doesn't match any m is because of the extra space around the pipes. Those are included in the search string. Otherwise, that works for me without lookaround.
Using lookaround assertion:
>>> import re
>>> re.findall(r'(?<=\s)m(?=\s)|^m|m$', "I'm a boy")
[]
>>> re.findall(r'(?<=\s)m(?=\s)|^m|m$', "I m a boy")
['m']
>>> re.findall(r'(?<=\s)m(?=\s)|^m|m$', "mama")
['m']
>>> re.findall(r'(?<=\s)m(?=\s)|^m|m$', "pm")
['m']
(?=...)
Matches if ... matches next, but doesn’t consume any of the
string. This is called a lookahead assertion. For example, Isaac
(?=Asimov) will match 'Isaac ' only if it’s followed by 'Asimov'.
(?<=...)
Matches if the current position in the string is preceded by a match
for ... that ends at the current position. This is called a positive
lookbehind assertion. (?<=abc)def will find a match in abcdef, ...
from Regular expression syntax
BTW, using raw string (r'this is raw string'), you don't need to escape \.
>>> r'\s' == '\\s'
True
You don't even need look-around (unless you want to capture the m without the spaces), but your second example was inches away. It was the extra spaces (fine in Python code, but significant inside a regex) which made it not work:
>>> re.findall(r'\sm\s|^m|m$', "I m a boy")
[' m ']
>>> re.findall(r'\sm\s|^m|m$', "mamam")
['m', 'm']
>>> re.findall(r'\sm\s|^m|m$', "mama")
['m']
>>> re.findall(r'\sm\s|^m|m$', "I'm a boy")
[]
>>> re.findall(r'\sm\s|^m|m$', "I'm a boym")
['m']
Or use re.VERBOSE and you can leave the spaces in. This is often useful for complicated regexps--and for a novice, almost any regexp can be complicated.
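Here is what that verbose form could look like; it is the same alternation as above, with the spacing and comments now ignored by the engine:

```python
import re

# Same pattern as \sm\s|^m|m$, but written with re.VERBOSE so the
# whitespace and comments inside the pattern are purely cosmetic.
pattern = re.compile(r"""
      \sm\s   # 'm' surrounded by whitespace
    | ^m      # 'm' at the start of the string
    | m$      # 'm' at the end of the string
""", re.VERBOSE)

print(pattern.findall("I m a boy"))   # [' m ']
print(pattern.findall("I'm a boy"))   # []
print(pattern.findall("mamam"))       # ['m', 'm']
```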
falsetru's answer is almost the equivalent of "\b except apostrophes", but not quite. It will still find matches where a boundary is missing. Using one of falsetru's examples:
>>> import re
>>> re.findall(r'(?<=\s)m(?=\s)|^m|m$', "mama")
['m']
It finds 'm', but there is no occurrence of 'm' in 'mama' that would match '\bm\b'. The first 'm' matches '\bm', but that's as close as it gets.
The regex that implements "\b without apostrophes" is shown below:
(?<=\s)m(?=\s)|^m(?=\s)|(?<=\s)m$|^m$
This will find any of the following 4 cases:
'm' with white space before and after
'm' at beginning followed by white space
'm' at end preceded by white space
'm' with nothing preceding or following it (i.e. just literally the string "m")
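Those four cases can be exercised directly with the pattern given above:

```python
import re

# The '\b without apostrophes' pattern from above, tested on each case.
pat = re.compile(r'(?<=\s)m(?=\s)|^m(?=\s)|(?<=\s)m$|^m$')

print(pat.findall("a m b"))      # ['m']  whitespace before and after
print(pat.findall("m at end?"))  # ['m']  at start, followed by whitespace
print(pat.findall("I am m"))     # ['m']  at end, preceded by whitespace
print(pat.findall("m"))          # ['m']  the bare string "m"
print(pat.findall("mama"))       # []    no false positive inside a word
print(pat.findall("I'm a boy"))  # []    apostrophe is not a boundary here
```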
|
STACK_EXCHANGE
|
BroadlinkManager on Mac - No Devices
Hi,
I'm running Docker on MacOS (Apple M2 Max).
This is my docker-compose.yml
version: "3.6"
services:
broadlinkmanager:
image: techblog/broadlinkmanager
expose:
- "7020"
ports:
- "7020:7020"
container_name: broadlinkmanager
restart: unless-stopped
volumes:
- ./broadlinkmanager:/opt/broadlinkmanager/data
environment:
- ENABLE_GOOGLE_ANALYTICS=False
When I run docker-compose up I got this:
[+] Running 1/0
✔ Container broadlinkmanager Created 0.0s
Attaching to broadlinkmanager
broadlinkmanager | 2024-04-09 09:12:53.692 | INFO | main::48 - OS: posix
broadlinkmanager | 2024-04-09 09:12:53.692 | DEBUG | main:get_env_ip_list:71 - Environment discovered IP List []
broadlinkmanager | 2024-04-09 09:12:53.694 | DEBUG | main:get_local_ip_list:64 - Locally discovered IP List ['<IP_ADDRESS>']
broadlinkmanager | 2024-04-09 09:12:53.694 | INFO | main::126 - Broadlink will try to discover devices on the following IP interfaces: ['<IP_ADDRESS>']
broadlinkmanager | 2024-04-09 09:12:53.694 | INFO | main::135 - Configuring app
broadlinkmanager | 2024-04-09 09:12:53.699 | INFO | main::660 - Broadlink Manager is up and running
broadlinkmanager | INFO: Started server process [1]
broadlinkmanager | INFO: Waiting for application startup.
broadlinkmanager | /usr/local/lib/python3.8/dist-packages/starlette_exporter/middleware.py:97: FutureWarning: group_paths and filter_unhandled_paths will change defaults from False to True in the next release. See https://github.com/stephenhillier/starlette_exporter/issues/79 for more info
broadlinkmanager | warnings.warn(
broadlinkmanager | INFO: Application startup complete.
broadlinkmanager | INFO: Uvicorn running on http://<IP_ADDRESS>:7020 (Press CTRL+C to quit)
Then I visit <IP_ADDRESS>:7020 and I got this:
The problem is that no devices are found. If I click "Rescan" I get no devices and these logs:
broadlinkmanager | INFO: Uvicorn running on http://<IP_ADDRESS>:7020 (Press CTRL+C to quit)
broadlinkmanager | INFO: <IP_ADDRESS>:33017 - "GET / HTTP/1.1" 200 OK
broadlinkmanager | INFO: <IP_ADDRESS>:33017 - "GET / HTTP/1.1" 200 OK
broadlinkmanager | 2024-04-09 09:15:24.681 | INFO | main:search_for_devices:611 - Searching for devices...
broadlinkmanager | 2024-04-09 09:15:24.682 | INFO | main:search_for_devices:613 - Checking devices on interface assigned with IP: <IP_ADDRESS>
broadlinkmanager | 2024-04-09 09:15:29.690 | DEBUG | main:search_for_devices:630 - Devices Found: []
broadlinkmanager | INFO: <IP_ADDRESS>:33017 - "GET /autodiscover HTTP/1.1" 200 OK
broadlinkmanager | INFO: <IP_ADDRESS>:33017 - "GET / HTTP/1.1" 200 OK
Notes:
My DHCP server is <IP_ADDRESS>
My Mac is <IP_ADDRESS>
My Broadlink is a RM4Pro with IP <IP_ADDRESS>
In the docker-compose file I had to remove the network mode and add the port mapping to be able to see the webapp
Thanks for any kind of help/questions.
same thing here, but with docker@debian.
Hi,
Try running it with:
network_mode: host
Or remove the "expose" and leave only the "ports"
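The host-networking suggestion would look roughly like this in the original compose file (a sketch only; note that network_mode: host and port mappings are mutually exclusive, and that on Docker Desktop for Mac containers run inside a VM, so host networking may still not reach the Mac's LAN for the UDP broadcast discovery Broadlink devices rely on):

```yaml
version: "3.6"
services:
  broadlinkmanager:
    image: techblog/broadlinkmanager
    container_name: broadlinkmanager
    restart: unless-stopped
    # Replaces the expose/ports mapping; the container shares the
    # host's network stack so broadcast discovery can work (on Linux).
    network_mode: host
    volumes:
      - ./broadlinkmanager:/opt/broadlinkmanager/data
    environment:
      - ENABLE_GOOGLE_ANALYTICS=False
```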
already done...
|
GITHUB_ARCHIVE
|
How to show live streaming camera videos in django rest framework
I am building an application with a Django REST Framework backend and a React frontend. I want to show streams from multiple live cameras in the browser. Is there any way to achieve this using OpenCV, or to generate a response with a stream of bytes in Django REST Framework?
no. OpenCV is for computer vision, not for moving video around a network. use the right library for the right job.
if you were using Flask you would find many examples of how to send it as a series of JPG images (motion JPEG, mjpeg), with the browser automatically fetching the next image. OpenCV is only needed to get images from the camera (frame after frame); the rest is done by Flask. In Django you can also use OpenCV to read frame by frame from a live camera and keep the latest frame in a variable, and Django would then send this frame like any other image. The browser may need to run a loop to request a new image (new frame) every few milliseconds. I don't know if Django has a method to send it as motion JPEG.
if the camera sends a stream at some URL then maybe it would be simpler to use this URL directly in HTML as a <video> element
Googling django mjpeg I found the project https://github.com/carbofos/mjpeg-streaming-server (Python/Django MJPEG image push, Axis IP camera simulation server), which uses django.http.StreamingHttpResponse() to send the stream. Similar to Flask, it uses a loop with yield to send frame after frame.
How to stream .webm or .mjpeg from Python Django server? - Stack Overflow
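The StreamingHttpResponse-plus-yield pattern mentioned above can be sketched without the framework; the generator below produces multipart/x-mixed-replace parts, with placeholder bytes standing in for JPEG-encoded OpenCV frames. In Django you would wrap the generator in StreamingHttpResponse with content_type='multipart/x-mixed-replace; boundary=frame':

```python
# Minimal sketch of the MJPEG "push" pattern described above.
# The frames here are placeholder bytes; a real view would yield
# cv2.imencode('.jpg', frame)[1].tobytes() from a capture loop.

def mjpeg_part(jpeg_bytes: bytes) -> bytes:
    """Wrap one JPEG frame as a multipart/x-mixed-replace part."""
    return (b"--frame\r\n"
            b"Content-Type: image/jpeg\r\n\r\n" + jpeg_bytes + b"\r\n")

def frame_generator(frames):
    """Yield each frame as one multipart chunk, frame after frame."""
    for jpeg in frames:
        yield mjpeg_part(jpeg)

# Two fake "frames" (0xFFD8 is the JPEG magic number):
chunks = list(frame_generator([b"\xff\xd8frame1", b"\xff\xd8frame2"]))
```

For multiple cameras you would give each camera its own generator (and, as noted below, its own capture thread), served under a separate URL.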
@ChristophRackwitz is there any library available for this? Because I read frames with OpenCV, then perform some machine learning tasks, and then I want to send the frames to my frontend. That is the actual scenario.
@furas I have seen many examples, but I am still only able to work with a single camera; in production there may be 10-15 cameras or more.
is there a possible way to use Django Channels with Celery???
every camera may need to run OpenCV in a separate thread and keep a separate variable with its latest image. And Django will need a separate URL for every camera (or at least a URL with different parameters to recognize which webcam to stream). I never tried this with Django. I only did it with a local GUI (tkinter) to display many streams from cameras, web pages, and local files: furas/python-cv2-streams-viewer
@furas many many thanks because i get a way to solve it thanks for your reply it really very helpful for me
@MuhammadArshad have you managed to solve it using Django and ReactJS? If yes, please guide me.. I'm facing a similar issue. I have done this on GUI using PyQt but now the company wants me to display it on web browser.
@Voldemort I used Node.js to show live camera streaming on the web, which is very effective for our use. If it is possible for you, I suggest using Node.js to create the stream of your camera feed...
@MuhammadArshad isn't there any way to show live streaming in a web browser using ffmpeg in Django?
|
STACK_EXCHANGE
|
Unresponsive Raspberry Pi 4
I have a Raspberry Pi 4 that is about 2 years old. I'm on my second SD card; I spent the time to make sure I got a good one that would not wear out this time. I mostly use it for running a Plex server. In all the scenarios I'm about to describe, the red and green lights are continuously on and not blinking at all, which according to this should be ok.
Now my question:
All of a sudden, it became unresponsive. I first noticed it when I wanted to add a new movie to it and went to ssh in. It said No route to host. Ok, so I rebooted; that usually does the trick. Same thing. Ok, sometimes it's something to do with it not finding the external HDDs. So I went to connect an external HDMI monitor, the same monitor I always use. Nope, just blue. I tried both ports. Neither works. I also made sure to plug everything in and turn the monitor and the Raspberry Pi on in the right order, because it's not set up to handle hot plugging, and that can cause issues sometimes.
Ok, so maybe my config.txt HDMI settings are off. The SD card mounts on my other Linux machine, so I don't think the SD card is worn out. Is it? Anyway, I mounted it and uncommented hdmi_safe=1. Nope, still blue. So then I tried a different HDMI cable, a known good one which works with other monitors. Nope, still blue. I also tried uncommenting hdmi_force_hotplug=1 and config_hdmi_boost=4, first one at a time, then all together. All resulted in the same blue screen, again trying each with different HDMI ports and cables. I tried a total of 5 different HDMI cables, all of which work on other monitors.
Ok, so put it in headless mode. Or at least what I think is headless mode; I've actually never done it before. I basically followed this. I put a blank ssh file on the mounted SD card next to config.txt and added a wpa_supplicant.conf file next to it as well, with my WiFi's ssid and psk. All the files have the same user/group and permissions as the rest. I still can't ssh into it.
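For reference, the classic headless setup on older Raspberry Pi OS images amounts to exactly those two files on the boot partition. The sketch below writes them to a stand-in ./boot path; adjust BOOT to wherever the card's FAT partition is actually mounted, and note that newer images have moved toward the official Imager's customization instead:

```shell
# Classic headless-setup files (older Raspberry Pi OS / Raspbian).
# BOOT should point at the mounted boot (FAT) partition of the SD card;
# './boot' here is only a stand-in path for demonstration.
BOOT="${BOOT:-./boot}"
mkdir -p "$BOOT"

# An empty file named 'ssh' enables the SSH server on first boot.
touch "$BOOT/ssh"

# Wi-Fi credentials; replace country/ssid/psk with your own values.
cat > "$BOOT/wpa_supplicant.conf" <<'EOF'
country=US
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="YourNetworkName"
    psk="YourPassword"
}
EOF
```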
What else can I tell you to help answer my question? Is there anything else you can think of to help me troubleshoot and diagnose what is going on with my pi? I'd really rather not reflash the OS and have to set everything back up. Or buy another external monitor just to find out it's not the monitor.
If I do have to reflash the OS, can you guys give me any advice on backing up and transferring any critical Plex files? Is it going to retain my friends list/users which I invited to my plex? Or do I have to re-add them all over again?
The first thing I would check is the power supply. Phone chargers seem to go bad after a while. You can verify this if the system log has power warnings in it.
The SD card mounts on my other linux machine, so I don't think the SD card is burnt out. Is it?
I'm not sure what you mean by "burnt out", but a common problem on the Pi is data corruption caused by unclean shutdowns; this is not unique to SD cards (it can happen on any medium) nor does it mean the card is physically damaged.
You may want to try running fsck on the filesystems on the card from your other linux machine, this will mitigate or at least indicate symptoms of corruption.
SD cards can eventually wear out (like other storage media), unfortunately there is not a standard way to recognise this, but I would say a major clue would be if fsck cannot do a repair -- or it can, but an immediate mount, unmount, and fsck shows problems and repairs going on again.
I mounted it and uncommented a few of those hdmi settings in the config.txt. Nope, still blue.
Always helpful when asking a question of this sort to be explicit about exactly which of those settings you are referring to. In this situation, use hdmi_safe=1 and leave the others commented out. If after this it otherwise appears to be booting normally because the green ACT led flashes rapidly and irregularly for 15-20+ seconds, then calms down, the HDMI jack is likely defunct (or you have a bad cable).
If the green led does something else, such as nothing or flashing in a regular pattern, it is not booting properly.
I put a blank ssh file on the mounted SD card next to the config.txt and added a wpa_supplicant.conf file next to it as well
I'm not positive, but I think on recent versions of the OS this feature was dropped in favour of a more complex system involving the official imager -- but I can't find a reference for that after a quick look round.
If you do get it working with the screen and a keyboard but still can't get it online, have a look around at the rfkill problem, that one can be a bit pernicious (but should be easy to diagnose and solve if you are aware of it).
can you guys give me any advice on backing up and transferring any critical Plex files? Is it going to retain my friends list/users which I invited to my plex?
I'd presume some of your Plex account settings are stored somewhere in their cloud-o-sphere, but probably not actual content (movies etc). However, questions about Plex usage are better off on a Plex forum or Super User.
Thanks for your response. I updated my question to address several things you called out. I hope that helps. I did have a chance to run fsck and it came back clean. I've tried a few more hdmi cables and still nothing. I'm still poking at it though.
I believe the entire board may have died. I re-flashed the same SD card: same issue. I flashed a new SD card: same issue. Then I ordered a new RPi 4, and with both SD cards it fired right up. Best I can tell, it's not the monitor, the SD card, or the HDMI cables; it was just the board.
I am still curious, though: how could I have told this in the future instead of poking around at other possible issues such as SD cards and whatnot? Because, as called out in the original question, according to the red and green lights I saw on the board, everything "should" have been ok.
I still have the board and can try any number of tests if anyone has any ideas. I just want to limit my troubleshooting time in the future if this were to occur again.
|
STACK_EXCHANGE
|
Another “unscheduled” update since it seems I can’t manage a weekly update right now. Things are ticking along nicely, although there haven’t been any major developments.
Main focus (again) was on improving the existing code to make life easier for plugins. Also, a few gaps in the API have been filled, most notably the new
Main code changes:
- Templates now have a new widget() method at their disposal to display/render widgets or widget definitions. This is mostly aimed at making custom plugin pages easier, but templates can use this too.
- Upgraded jquery to 1.4.3 and 1.8.6 (UI)
- all email content generation code now uses the new
ZMEMails service – this makes a lot of email-related code obsolete and the remaining code a lot cleaner
- all bean definitions can now be customized via individual settings; for example, a single setting line can tell ZenMagick to use a custom
FooBar class (with caching disabled) as the product service instance
- address checking for guest checkout (and checkouts generally) has been improved and does not rely on a zencart event any more
- The checkout_payment page controller is now capable of handling basic payment types; this is still disabled since some logic and checks are still missing – still, some major progress
- streamlined some
ZMShoppingCart methods and further removed dependencies on zencart
- url encoded payment module error messages are now handled by the messages service, so no custom code needed any more (payment_error/error)
- more progress on making
$languageId mandatory for all API code
- the product finder ZMProductFinder now supports a new flag to search active-only / all products – this is disabled for storefront searches, but it means the searcher code is really useful for new admin functions
There has also been some more testing (and fixes) to integrate the current zencart admin pages seamlessly in the ZenMagick admin UI.
As far as a new release is concerned: no planning has been done yet, but I am aiming for a new release around the middle of December, so not a lot more work will be added. On the one hand that is a shame, as the next release will contain fewer genuine admin improvements than I was hoping for. OTOH, the zencart admin integration is certainly a big step forward toward making the ZenMagick admin the place to go to manage your store.
It’s been a while since my last update and it really is time for some news about what is happening with ZenMagick.
With more people trying in earnest to use ZenMagick for professional plugin development there has been lots of discussion and small changes to the infrastructure to make it easier for people to do what they want to do.
Here is a list of the more interesting things:
- Ability to configure the processing method used in a controller. That means that in addition to the HTTP method (GET, POST, etc.) based controller method name selection you can now configure the method to be used.
One consequence is that a single controller may be used with different methods for more than one page. However, it is not possible to specify different methods for GET/POST at this time.
- The group level folder under the plugins folder has been removed. That means all plugins sit right in the plugins folder. Furthermore, the default loader policy is now to look for plugin code in a lib subdirectory inside the plugin folder. This means it is now possible to organize code with subdirectories inside the lib folder without having to worry about view templates being loaded too…
- There is a new plugin illustrating the new method mapping for controller and generally explaining a few things about url mappings.
- A new widget() method has been added, to be used by templates to display – uhm – widgets.
- sfYaml is now available for those who do not like spyc. The ZMRuntime::loadYaml method still uses spyc for now, as there are some differences in the output that make it hard to switch.
Apart from the above things which are taken from the changelog, there are also quite a few other organizational things that have been, and still are, happening. It looks like ZenMagick will be getting a new site based on redmine. Not sure when this will happen – this is not that trivial and will require quite a bit of work behind the scenes.
There is also more design work to extend the new admin UI. In particular the admin page(s) for the new block system and a UI for the user/role based permission code that is already in place will be tackled next.
And, finally, there is of course the integration of the existing zencart admin into ZenMagick. This is working surprisingly well, although I haven’t done any extensive testing yet. One nice side effect is that after a session timeout the user is redirected back to the last requested page – just as you would expect it!
|
OPCFW_CODE
|
why can't I reimplement my tensorflow model with pytorch?
I am developing a model in TensorFlow and find that it performs well under my specific evaluation method. But when I port it to PyTorch, I can't achieve the same results. I have checked the model architecture, the weight init method, the lr schedule, the weight decay, the momentum and epsilon used in the BN layers, the optimizer, and the data preprocessing. All are the same. But I still can't get the same results as in TensorFlow. Has anybody met the same problem?
Please add more details.
I did a similar conversion recently.
First you need to make sure that the forward path produces the same results: disable all randomness, initialize with the same values, give it a very small input and compare. If there is a discrepancy, disable parts of the network and compare enabling layers one by one.
When the forward path is confirmed, check the loss, gradients, and updates after one forward-backward cycle.
I am using resnet 110 as the backbone. How do you copy your weights out of tensorflow model? Do you use numpy as the bridge?
It's probably easier just to initialize both models with the same values (like 0.1 for all weights, you can write a loop through all parameters assigning this value), no randomness, no training.
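That "same constant init, compare forward" check can be sketched without either framework; the toy layer below stands in for one network block, written two different ways to mimic a TF-to-PyTorch port, with the 0.1 constant mirroring the suggestion above:

```python
# Toy sketch of the debugging recipe above: give both implementations
# identical constant weights (0.1 everywhere), feed the same tiny input,
# and compare outputs within a tolerance. In a real port the two
# "versions" would be the TF and PyTorch blocks.

def linear_relu_v1(x, w, b):
    y = sum(xi * wi for xi, wi in zip(x, w)) + b
    return max(y, 0.0)

def linear_relu_v2(x, w, b):
    # Deliberately written differently (explicit accumulator) to mimic
    # a reimplementation of the same layer.
    acc = b
    for xi, wi in zip(x, w):
        acc += xi * wi
    return acc if acc > 0.0 else 0.0

x = [1.0, 2.0, 3.0]
w = [0.1] * 3   # "initialize with the same values (like 0.1)"
b = 0.1
out1 = linear_relu_v1(x, w, b)
out2 = linear_relu_v2(x, w, b)
assert abs(out1 - out2) < 1e-9   # forward paths agree
```

If a discrepancy shows up at this stage, bisect by disabling parts of the network and comparing layer by layer, as described above.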
thank you. It is good. I just dump all the weights in TF model into pytorch model.
I have dumped all the tensorflow model weights into pytorch model.
I use a simple input in test and find that the forward pass and the backward pass are all the same.
In my case, I use resnet110 as the backbone. The dataset is cifar10.
The only difference is that I use cifar10 bin version in TF and cifar10 python version in pytorch.
Except this, the two models use the same initial weight, the same optimizer (SGD with momentum 0.9), the same batch normalization layer with momentum=0.99,epsilon=1e-3, the same lr schedule and weight decay=2e-4.
But the results in pytorch are still worse.
The difference must be somewhere. Is the loss the same after one iteration? Are the updates the same after one iteration? Another thing, how exactly are you evaluating performance? (you probably need to switch the model into evaluation mode with model.eval() to make batchnorm work properly)
I have checked the forward pass and the loss value, the backward pass, and the state after one update step; they are all the same. I also invoke model.eval() before evaluation. In fact, the pytorch model works better than the TF model on classification (I can't achieve the same classification accuracy in TF), but on my other criterion the pytorch model is worse than the TF model.
|
STACK_EXCHANGE
|