Trading | September 13, 2018

Facebook COO Sheryl Sandberg won praise for her testimony before Congress last week (though her confused responses to questions about hate speech on the company’s platform aggravated SJWs). But sadly for Facebook shareholders, regulatory threats (not to mention AG Jeff Sessions’ “openness” to investigating tech giants for consumer protection or antitrust violations) continue to weigh on the company’s share price.

And with FANG stocks still limping away from their worst market rout in months, Facebook CEO Mark Zuckerberg has published a 3,000-word note on his Facebook page summarizing the company’s efforts to combat election interference on its platform.

While Zuck touted certain “triumphs” like the company’s use of algos to delete more than 1 billion fraudulent accounts (the vast majority within minutes of their creation), and the company’s push to hire another 10,000 humans to weed out more sophisticated imposters, he also admitted that Facebook can’t secure elections in the US – and indeed around the world – on its own. For that, it needs assistance from journalists, the government and – most importantly – other tech firms.

“The last point I’ll make is that we’re all in this together. The definition of success is that we stop cyberattacks and coordinated information operations before they can cause harm. While I’d always rather Facebook identified abuse first, that won’t always be possible. Sometimes we’ll only find activity with tips from governments, other tech companies, or journalists. We need to create a culture where stopping these threats is what constitutes success — not where the information that uncovered the attack came from. For the complexity of the challenges ahead, this is the best way forward.”

According to Zuck, Thursday’s note will be the first of several updates to be published on his page before the end of the year.

Meanwhile, the company’s shares have edged higher in pre-market trading after shedding nearly 3% on Wednesday.

Read Zuck’s full note below:

* * *
My focus in 2018 has been addressing the most important issues facing Facebook — including defending against election interference, better protecting our community from abuse, and making sure people have more control of their information. As the year wraps up, I’m writing a series of notes outlining how I’m thinking about these issues and the progress we’re making. This is the first note, and it’s about preventing election interference on Facebook.

These are incredibly complex and important problems, and this has been an intense year. I am bringing the same focus and rigor to addressing these issues that I’ve brought to previous product challenges like shifting our services to mobile. These issues are even harder because people don’t agree on what a good outcome looks like, or what tradeoffs are acceptable to make. When it comes to free expression, thoughtful people come to different conclusions about the right balances. When it comes to implementing a solution, certainly some investors disagree with my approach to invest so much in security.

We have a lot of work ahead, but I am confident we will end this year with much more sophisticated approaches than we began, and that the focus and investments we’ve put in will be better for our community and the world over the long term.

Overview

One of our core principles is to give people a voice. That’s why anyone can post what they want without having to ask permission first. We also believe deeply in the power of connection. When people can connect with each other, they can build communities around shared interests wherever they live in the world.

But we’ve also seen how people can abuse our services, including during elections. Our responsibility is to amplify the good and mitigate the harm.

In 2016, our election security efforts prepared us for traditional cyberattacks like phishing, malware, and hacking. We identified those and notified the government and those affected. What we didn’t expect were foreign actors launching coordinated information operations with networks of fake accounts spreading division and misinformation.

Today, Facebook is better prepared for these kinds of attacks. We’ve identified and removed fake accounts ahead of elections in France, Germany, Alabama, Mexico, and Brazil. We’ve found and taken down foreign influence campaigns from Russia and Iran attempting to interfere in the US, UK, Middle East, and elsewhere — as well as groups in Mexico and Brazil that have been active in their own country. We’ve attacked the economic incentives to spread misinformation. We’ve worked more closely with governments — including in Germany, the US and Mexico — to improve security during elections. And we’ve set a new standard for transparency in the advertising industry — so advertisers are accountable for the ads they run. Security experts call this “defense in depth” because no one tactic is going to prevent all of the abuse.

While we’ve made steady progress, we face sophisticated, well-funded adversaries. They won’t give up, and they will keep evolving. We need to constantly improve and stay one step ahead. This will take continued, heavy investment in security on our part, as well as close cooperation with governments, the tech industry, and security experts since no one institution can solve this on their own.

In this note, I’ll outline the main efforts we’re focused on, what we’ve found and learned over the last two years, and what more we need to do to help protect the free and fair elections at the heart of every democracy.

Fake Accounts

One of our most important efforts is finding and removing fake accounts. We’ve found that fake accounts are the source of much of the abuse of our services, including during elections.

With advances in machine learning, we have now built systems that block millions of fake accounts every day. In total, we removed more than one billion fake accounts — the vast majority within minutes of being created and before they could do any harm — in the six months between October and March. You can track our progress in removing fake accounts through our Transparency Report.

Like most security issues, this is an arms race. The numbers are so large because our adversaries use computers to create fake accounts in bulk. And while we are quickly improving our ability to detect and block them, it is still very difficult to identify the most sophisticated actors who build their networks manually one fake account at a time. This is why we’ve also hired a lot more people to work on safety and security — up from 10,000 last year to more than 20,000 people this year.
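
To make the bulk-creation problem concrete, here is a minimal Python sketch of rule-based scoring over registration signals. Every field name, signal, and threshold below is hypothetical and invented for illustration (production systems rely on learned models over far richer data), but it shows why scripted signups are the easy case and manually built accounts are the hard one.

```python
from dataclasses import dataclass

@dataclass
class Signup:
    """One registration attempt. Fields are illustrative, not Facebook's."""
    account_id: str
    ip_address: str
    user_agent: str
    seconds_since_last_signup_from_ip: float

def bulk_creation_score(signup: Signup, signups_from_ip_last_hour: int) -> float:
    """Crude risk score for a new registration; higher means more bot-like."""
    score = 0.0
    if signups_from_ip_last_hour > 20:
        # Mass registration from a single address is the classic bulk signal.
        score += 0.5
    if signup.seconds_since_last_signup_from_ip < 2.0:
        # Scripted signups arrive faster than a human can fill in a form.
        score += 0.3
    if not signup.user_agent:
        # Headless clients often omit a user agent entirely.
        score += 0.2
    return score

BLOCK_THRESHOLD = 0.7  # hypothetical cutoff for blocking at creation time

suspect = Signup("acct_1", "203.0.113.7", "", 0.4)
print(bulk_creation_score(suspect, signups_from_ip_last_hour=45))  # 1.0 -> block
```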

The information operations we’ve seen, including during the 2016 election, typically use networks of fake accounts to push out their messages while hiding their real identities. Increasingly, we also see our adversaries co-opting legitimate accounts in an effort to better hide their activity. By working together, these networks of accounts boost each other’s posts, creating the impression they have more widespread support than they actually do. For example, we recently identified and took down several fake accounts involved in promoting a legitimate protest event on Facebook, claiming they were attending, and encouraging others to do the same.

One of the challenges we face is that the content these pages share often does not violate our Community Standards — the rules that govern what is allowed on Facebook. In the example above, the event would have been allowed under our policies, as would encouraging others to attend. This was clearly problematic though, and the violation was that the accounts involved were inauthentic. In another example, a campaign we found tried to sow division by creating both pro-immigration and anti-immigration pages. Again, many of the posts on these pages were similar to posts from legitimate immigration activists, but they were clearly problematic as part of a coordinated inauthentic campaign.

Identifying and removing these campaigns is difficult because the amount of activity across our services is so large. In these cases, we’ll typically get a lead that we should look into suspicious activity — it’s either flagged by our technical systems or found by our security team, law enforcement, or an outside security expert. If we find accounts that look suspicious, we investigate them to see what other accounts and pages they’ve interacted with. Over the course of an investigation, we attempt to identify the full network of accounts and pages involved in an operation so we can take them all down at once. We’ll often involve the government and other companies and, where possible, we’ll tell the public. While we want to move quickly when we identify a threat, it’s also important to wait until we uncover as much of the network as we can before we take accounts down to avoid tipping off our adversaries, who would otherwise take extra steps to cover their remaining tracks. And ideally, we time these takedowns to cause the maximum disruption to their operations.
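
The "uncover the full network before acting" step is at heart a graph-expansion problem: start from a few flagged seeds and walk outward along links between accounts. The sketch below shows that idea as a plain breadth-first search. The link types and the two-hop limit are assumptions made for illustration, not a description of Facebook's actual investigation tooling.

```python
from collections import deque

def expand_network(seed_accounts, interactions, max_hops=2):
    """Breadth-first expansion from a few suspicious seed accounts.

    `interactions` maps an account id to the ids it is linked to (shared
    admins, mutual boosting, common infrastructure, and so on). Returning
    the whole candidate network at once is what allows a single coordinated
    takedown instead of tipping off operators one account at a time.
    """
    seen = set(seed_accounts)
    queue = deque((account, 0) for account in seed_accounts)
    while queue:
        account, hops = queue.popleft()
        if hops == max_hops:
            continue
        for neighbor in interactions.get(account, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, hops + 1))
    return seen

# Toy graph: one flagged account leads to two more via mutual boosting.
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(expand_network({"a"}, graph))  # {'a', 'b', 'c'}
```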

In the last year, as we have gotten more effective, we have identified and taken down coordinated information campaigns around the world, including:

• We identified that the Internet Research Agency (IRA) has been largely focused on manipulating people in Russia and other Russian-speaking countries. We took down a network of more than 270 of their pages and accounts, including the official pages of Russian state-sanctioned news organizations that we determined were effectively controlled and operated by the IRA.

• We found a network based in Iran with links to Iranian state media that has been trying to spread propaganda in the US, UK, and Middle East, and we took down hundreds of accounts, pages, and groups tied to it.

• We recently took down a network of accounts in Brazil that was hiding its identity and spreading misinformation ahead of the country’s Presidential elections in October.

• Although not directly related to elections, we identified and removed a coordinated campaign in Myanmar by the military to spread propaganda.

We know we still have work to do to improve the precision of our systems. Fake accounts continue to slip through undetected — and we also err in the other direction, mistakenly taking down people using our services legitimately. These systems will never be perfect, but by investing in artificial intelligence and more people, we will continue to improve.

One advantage Facebook has is that we have a principle that you must use your real identity. This means we have a clear notion of what’s an authentic account. This is harder with services like Instagram, WhatsApp, Twitter, YouTube, iMessage, or any other service where you don’t need to provide your real identity. So if the content shared doesn’t violate any policy, which is often the case, and you have no clear notion of what constitutes a fake account, that makes enforcement significantly harder. Fortunately, our systems are shared, so when we find bad actors on Facebook, we can also remove accounts linked to them on Instagram and WhatsApp as well. And where we can share information with other companies, we can also help them remove fake accounts too.

Misinformation

Fake accounts are one of the primary vehicles for spreading misinformation — especially politically-motivated misinformation and propaganda. However, we’ve found that misinformation is spread in three main ways:

• By fake accounts, including for political motivation;

• By spammers, for economic motivation, like the ones that have been written about in Macedonia; and

• By regular people, who often may not know they’re spreading misinformation.

Beyond elections, misinformation that can incite real world violence has been one of the hardest issues we’ve faced. In places where viral misinformation may contribute to violence we now take it down. In other cases, we focus on reducing the distribution of viral misinformation rather than removing it outright.

Economically motivated misinformation is another challenge that may affect elections. In these cases, spammers make up sensationalist claims and push them onto the internet in the hope that people will click on them and they’ll make money off the ads next to the content. This is the same business model spammers have used for a long time, and the techniques we’ve developed over the years to fight spammers apply here as well.

The key is to disrupt their economic incentives. If we make it harder for them to make money, then they’ll typically just go and do something else instead. This is why we block anyone who has repeatedly spread misinformation from using our ads to make money. We also significantly reduce the distribution of any page that has repeatedly spread misinformation and spam. These measures make it harder for them to stay profitable spamming our community.
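
As a sketch of what disrupting those economic incentives can look like in code, the escalation below maps repeat offenses to penalties: reach drops first, ad money goes next. The strike thresholds and action names are invented for illustration and are not Facebook's actual enforcement rules.

```python
def enforcement_actions(misinfo_strikes: int) -> list[str]:
    """Escalating penalties for a page that repeatedly spreads misinformation.

    Hypothetical thresholds: repeat offenders lose distribution first and
    monetization next, which is what removes the profit motive for spam.
    """
    actions = []
    if misinfo_strikes >= 2:
        actions.append("reduce_page_distribution")
    if misinfo_strikes >= 3:
        actions.append("block_ads_monetization")
    return actions

print(enforcement_actions(1))  # []
print(enforcement_actions(3))  # ['reduce_page_distribution', 'block_ads_monetization']
```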

The third major category of misinformation is shared by regular people in their normal use of our services. This is particularly challenging to deal with because we cannot stop it upstream — like we can by removing fake accounts or preventing spammers from using our ads. Instead, when a post is flagged as potentially false or is going viral, we pass it to independent fact-checkers to review. All of the fact-checkers we use are certified by the non-partisan International Fact-Checking Network. Posts that are rated as false are demoted and lose on average 80% of their future views.
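
That 80% figure translates directly into a ranking multiplier: a post rated false keeps roughly a fifth of the distribution it would otherwise get. A minimal sketch, with the function and score names assumed for illustration:

```python
DEMOTION_FACTOR = 0.2  # rated-false posts lose on average 80% of future views

def feed_score(base_score: float, rated_false: bool) -> float:
    """Scale a post's ranking score down once fact-checkers rate it false.

    Demoting rather than removing keeps the post available to anyone who
    seeks it out while cutting its viral reach by roughly 80%.
    """
    return base_score * (DEMOTION_FACTOR if rated_false else 1.0)

print(feed_score(100.0, rated_false=True))   # 20.0
print(feed_score(100.0, rated_false=False))  # 100.0
```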

Taking these strategies together, our goal around misinformation for elections is to make sure that few, if any, of the top links shared on Facebook will be viral hoaxes.

Ads Transparency and Verification

Advertising makes it possible for a message to reach many people, so it is especially important that advertisers are held accountable for what they promote and that fake accounts are not allowed to advertise.

As a result of changes we’ve made this year, Facebook now has a higher standard of ads transparency than has ever existed with TV or newspaper ads. You can see all the ads an advertiser is running, even if they weren’t shown to you. In addition, all political and issue ads in the US must make clear who paid for them. And all these ads are put into a public archive which anyone can search to see how much was spent on an individual ad and the audience it reached.

This transparency serves a number of purposes. People can see when ads are paid for by a PAC or third party group other than the candidate. It’s now more obvious when organizations are saying different things to different groups of people. In addition, journalists, watchdogs, academics, and others can use these tools to study ads on Facebook, report abuse, and hold political and issue advertisers accountable for the content they show.

We now also require anyone running political or issue ads in the US to verify their identity and location. This prevents someone in Russia, for example, from buying political ads in the United States, and it adds another obstacle for people trying to hide their identity or location using fake accounts.
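
In code, that requirement is a simple gate in front of the ad system: no completed identity check and US location, no US political or issue ads. The field and function names below are hypothetical, and the real verification flow is of course more involved than a boolean.

```python
from dataclasses import dataclass

@dataclass
class Advertiser:
    identity_verified: bool  # completed an identity check
    verified_country: str    # location established during verification

def may_run_us_political_ads(advertiser: Advertiser) -> bool:
    """Gate US political and issue ads on identity plus location verification."""
    return advertiser.identity_verified and advertiser.verified_country == "US"

print(may_run_us_political_ads(Advertiser(True, "US")))  # True
print(may_run_us_political_ads(Advertiser(True, "RU")))  # False
```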

One challenge we faced when developing this policy is that most of the divisive ads the Internet Research Agency ran in 2016 focused on issues — like civil rights or immigration — and did not promote specific candidates. To catch this behavior, we needed a broad definition of what constitutes an issue ad. And because a lot of ads touch on these types of issues, we now require many legitimate businesses to get verified, even when their ads are not actually political. Given that the verification process takes a few days, this is frustrating to many companies who rely on our ads to drive their sales.

When deciding on this policy, we also discussed whether it would be better to ban political ads altogether. Initially, this seemed simple and attractive. But we decided against it — not due to money, as this new verification process is costly and so we no longer make any meaningful profit on political ads — but because we believe in giving people a voice. We didn’t want to take away an important tool many groups use to engage in the political process.

Independent Election Research Commission

I want to ensure we’re doing everything possible to understand the different ways adversaries can abuse our services — as well as the impact of these services on elections and democracy overall. No matter how much we dig, or how impartial we aim to be, we recognize the benefits of independent analysis to understand all the facts and to ensure we’re accountable for our work.

To do this, we set up an independent election research commission earlier this year with academics and foundations. Its role is to identify research topics and select — through a peer-review process — independent research to study them. The commission will share Facebook data with those researchers so they can draw their own conclusions about our role in elections, including our effectiveness in preventing abuse, and so they can publish their work without requiring approval from us. We’ve worked with industry experts to ensure this is done in a way that protects everyone’s privacy.

Our goal is not only to improve our own work on elections and civic discourse, but also to create a new model for how academics can work with the private sector. As the press has reported, we have had serious issues with academics using data from our services, including most recently the situation involving Cambridge University researcher Alexandr Kogan and Cambridge Analytica. A few years earlier we faced concerns about research done internally to understand whether social networks make people happier or more depressed.

As a result of these controversies, there was considerable concern amongst Facebook employees about allowing researchers to access data. Ultimately, I decided that the benefits of enabling this kind of academic research outweigh the risks. But we are dedicating significant resources to ensuring this research is conducted in a way that respects people’s privacy and meets the highest ethical standards. Longer term, my hope is that this type of research gains widespread support and grows into a broader program covering more areas in the coming years.

Coordinating With Governments and Companies

Preventing election interference is bigger than any single organization. It’s now clear that everyone — governments, tech companies, and independent experts such as the Atlantic Council — needs to do a better job sharing the signals and information they have to prevent abuse. Coordination is important for a few reasons:

First, bad actors don’t restrict themselves to one service, so we can’t approach the problem in silos either. If a foreign actor is running a coordinated information campaign online, they will almost certainly use multiple different internet services. And beyond that, it’s important to remember that attempts to manipulate public opinion aren’t the only threat we face. Traditional cyberattacks remain a big problem for everyone, and many democracies are at risk of attacks on critical election infrastructure like voting machines. The more we can share intelligence, the better prepared each organization will be.

Second, there are certain critical signals that only law enforcement has access to, like money flows. For example, our systems make it significantly harder to set up fake accounts or buy political ads from outside the country. But it would still be very difficult without additional intelligence for Facebook or others to figure out if a foreign adversary had set up a company in the US, wired money to it, and then registered an authentic account on our services and bought ads from the US. It’s possible that we’d find this ourselves since there are often multiple ways to identify bad actors. However, this is an example where tighter coordination with other organizations would be very useful.

Our coordination with governments and industry in the US is significantly stronger now than it was in 2016. We all have a greater appreciation of the threats, so everyone has an incentive to work together. And in countries like Germany, for example, we shared information directly with the government to improve security during last year’s elections. But real tensions still exist. For example, if law enforcement is tracking a lead’s public activity on social networks, they may be reluctant to share that information with us in case we remove the account.

The last point I’ll make is that we’re all in this together. The definition of success is that we stop cyberattacks and coordinated information operations before they can cause harm. While I’d always rather Facebook identified abuse first, that won’t always be possible. Sometimes we’ll only find activity with tips from governments, other tech companies, or journalists. We need to create a culture where stopping these threats is what constitutes success — not where the information that uncovered the attack came from. For the complexity of the challenges ahead, this is the best way forward.

Conclusion

In 2016, we were not prepared for the coordinated information operations we now regularly face. But we have learned a lot since then and have developed sophisticated systems that combine technology and people to prevent election interference on our services.

This effort is part of a broader challenge to rework much of how Facebook operates to be more proactive about protecting our community from harm and taking a broader view of our responsibility overall.

One of the important lessons I’ve learned is that when you build services that connect billions of people across countries and cultures, you’re going to see all of the good humanity is capable of, and you’re also going to see people try to abuse those services in every way possible.

As we evolve, our adversaries are evolving too. We will all need to continue improving and working together to stay ahead and protect our democracy.

