Vulnerability and Security Information (Vendors)

Vendor Vulnerability and Security Information (RSS)


Microsoft Security Response Center - Recent content on Microsoft Security Response Center

  • Congratulations to the Top MSRC 2024 Q1 Security Researchers! 
    on April 17, 2024 at 4:00 PM

    Congratulations to all the researchers recognized in this quarter’s Microsoft Researcher Recognition Program leaderboard! Thank you to everyone for your hard work and continued partnership to secure customers. The top three researchers of the 2024 Q1 Security Researcher Leaderboard are Yuki Chen, VictorV, and Nitesh Surana! Check out the full list of researchers recognized this quarter here.

  • Toward greater transparency: Adopting the CWE standard for Microsoft CVEs
    on April 8, 2024 at 4:00 PM

    At the Microsoft Security Response Center (MSRC), our mission is to protect our customers, communities, and Microsoft from current and emerging threats to security and privacy. One way we achieve this is by determining the root cause of security vulnerabilities in Microsoft products and services. We use this information to identify vulnerability trends and provide this data to our Product Engineering teams to enable them to systematically understand and eradicate security risks.

  • Embracing innovation: Derrick’s transition from banking to Microsoft’s Threat Intelligence team
    on April 2, 2024 at 4:00 PM

    Meet Derrick, a Senior Program Manager on the Operational Threat Intelligence team at Microsoft. Derrick’s role involves understanding and roadmapping the complete set of tools that Threat Intel analysts use to collect, analyze, process, and disseminate threat intelligence across Microsoft. Derrick’s love of learning and his natural curiosity led him to a career in technology and ultimately, to his current role at Microsoft.

  • Update on Microsoft Actions Following Attack by Nation State Actor Midnight Blizzard
    on March 8, 2024 at 5:00 PM

    This blog provides an update on the nation-state attack that was detected by the Microsoft Security Team on January 12, 2024. As we shared, on January 19, the security team detected this attack on our corporate email systems and immediately activated our response process. The Microsoft Threat Intelligence investigation identified the threat actor as Midnight Blizzard, the Russian state-sponsored actor also known as NOBELIUM.

  • Faye’s Journey: From Security PM to Diversity Advocate at Microsoft
    on February 29, 2024 at 5:00 PM

    Faye, a veteran at Microsoft for 22 years, has had a career as varied as it is long. Her journey began in 2002 as the first desktop security Project Manager (PM) in Microsoft IT. From there, she transitioned into owning a deployment team that deployed to desktops and handled operations for Office’s first few customers.


Cisco Japan Blog - Check out the Cisco Japan Blog. We deliver the latest Cisco news and thought leadership not only from Cisco employees and technical experts, but also from executives.


Google Online Security Blog - The latest news and insights from Google on security and safety on the Internet.

  • Your Google Account allows you to create passkeys on your phone, computer and security keys
    by Kimberly Samra on May 2, 2024 at 8:59 PM

    Sriram Karra and Christiaan Brand, Google product managers
    Last year, Google launched passkey support for Google Accounts. Passkeys are a new industry standard that give users an easy, highly secure way to sign-in to apps and websites. Today, we announced that passkeys have been used to authenticate users more than 1 billion times across over 400 million Google Accounts.
    As more users encounter passkeys, we’re often asked questions about how they relate to security keys, how Google Workspace administrators can configure passkeys for the user accounts that they manage, and how they relate to the Advanced Protection Program (APP). This post will seek to clarify these topics.
    Passkeys and security keys
    Passkeys are an evolution of security keys, meaning users get the same security benefits, but with a much simplified experience. Passkeys can be used in the Google Account sign-in process in many of the same ways that security keys have been used in the past — in fact, you can now choose to store your passkey on your security key. This provides users with three key benefits:
    - Stronger security. Users typically authenticate with passkeys by entering their device’s screen lock PIN, or using a biometric authentication method, like a fingerprint or a face scan. By storing the passkey on a security key, users can ensure that passkeys are only available when the security key is plugged into their device, creating a stronger security posture.
    - Flexible portability. Today, users rely on password managers to make passkeys available across all of their devices. Security keys provide an alternate way to use your passkeys across your devices: by bringing your security keys with you.
    - Simpler sign-in. Passkeys can act as a first- and second-factor, simultaneously. By creating a passkey on your security key, you can skip entering your password. This replaces your remotely stored password with the PIN you used to unlock your security key, which improves user security. (If you prefer to continue using your password in addition to using a passkey, you can turn off “Skip password when possible” in your Google Account security settings.)
    Passkeys bring strong and phishing-resistant authentication technology to a wider user base, and we’re excited to offer this new way for passkeys to meet more user needs.
    Google Workspace admins have additional controls and choice
    Google Workspace accounts have a domain level “Allow users to skip passwords at sign-in by using passkeys” setting which is off by default, and overrides the corresponding user-level configuration. This retains the need for a user’s password in addition to presenting a passkey. Admins can also change that setting and allow users to sign-in with just a passkey.
    When the domain-level setting is off, end users will still see a “use a security key” button on their “passkeys and security keys” page, which will attempt to enroll any security key for use as a second factor only. This action will not require the user to set up a PIN for their security key during registration. This is designed to give enterprise customers who have deployed legacy security keys additional time to make the change to passkeys, with or without a password.
    Passkeys for Advanced Protection Program (APP) users
    Since the introduction of passkeys in 2023, users enrolled in APP have been able to add any passkey to their account and use it to sign in. However users are still required to present two security keys when enrolling into the program.
We will be updating the enrollment process soon to enable a user with any passkey to enroll in APP. By allowing any passkey to be used (rather than only hardware security keys) we expect to reach more high risk users who need advanced protection, while maintaining phishing-resistant authentication.

  • Detecting browser data theft using Windows Event Logs
    by Google on May 1, 2024 at 1:14 AM

    Posted by Will Harris, Chrome Security Team
    Chromium's sandboxed process model defends well from malicious web content, but there are limits to how well the application can protect itself from malware already on the computer. Cookies and other credentials remain a high value target for attackers, and we are trying to tackle this ongoing threat in multiple ways, including working on web standards like DBSC that will help disrupt the cookie theft industry since exfiltrating these cookies will no longer have any value. Where it is not possible to prevent the theft of credentials and cookies by malware, the next best thing is making the attack more observable by antivirus, endpoint detection agents, or enterprise administrators with basic log analysis tools. This blog describes one set of signals for use by system administrators or endpoint detection agents that should reliably flag any access to the browser’s protected data from another application on the system. By increasing the likelihood of an attack being detected, this changes the calculus for those attackers who might have a strong desire to remain stealthy, and might cause them to rethink carrying out these types of attacks against our users.
    Background
    Chromium based browsers on Windows use the DPAPI (Data Protection API) to secure local secrets such as cookies, password etc. against theft. DPAPI protection is based on a key derived from the user's login credential and is designed to protect against unauthorized access to secrets from other users on the system, or when the system is powered off. Because the DPAPI secret is bound to the logged in user, it cannot protect against local malware attacks — malware executing as the user or at a higher privilege level can just call the same APIs as the browser to obtain the DPAPI secret. Since 2013, Chromium has been applying the CRYPTPROTECT_AUDIT flag to DPAPI calls to request that an audit log be generated when decryption occurs, as well as tagging the data as being owned by the browser. Because all of Chromium's encrypted data storage is backed by a DPAPI-secured key, any application that wishes to decrypt this data, including malware, should always reliably generate a clearly observable event log, which can be used to detect these types of attacks. There are three main steps involved in taking advantage of this log:
    1. Enable logging on the computer running Google Chrome, or any other Chromium based browser.
    2. Export the event logs to your backend system.
    3. Create detection logic to detect theft.
    This blog will also show how the logging works in practice by testing it against a python password stealer.
    Step 1: Enable logging on the system
    DPAPI events are logged into two places in the system. Firstly, there is the 4693 event that can be logged into the Security Log. This event can be enabled by turning on "Audit DPAPI Activity" and the steps to do this are described here, the policy itself sits deep within Security Settings -> Advanced Audit Policy Configuration -> Detailed Tracking.
    Here is what the 4693 event looks like:
        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="Microsoft-Windows-Security-Auditing" Guid="{...}" />
            <EventID>4693</EventID>
            <Version>0</Version>
            <Level>0</Level>
            <Task>13314</Task>
            <Opcode>0</Opcode>
            <Keywords>0x8020000000000000</Keywords>
            <TimeCreated SystemTime="2015-08-22T06:25:14.589407700Z" />
            <EventRecordID>175809</EventRecordID>
            <Correlation />
            <Execution ProcessID="520" ThreadID="1340" />
            <Channel>Security</Channel>
            <Computer>DC01.contoso.local</Computer>
            <Security />
          </System>
          <EventData>
            <Data Name="SubjectUserSid">S-1-5-21-3457937927-2839227994-823803824-1104</Data>
            <Data Name="SubjectUserName">dadmin</Data>
            <Data Name="SubjectDomainName">CONTOSO</Data>
            <Data Name="SubjectLogonId">0x30d7c</Data>
            <Data Name="MasterKeyId">0445c766-75f0-4de7-82ad-d9d97aad59f6</Data>
            <Data Name="RecoveryReason">0x5c005c</Data>
            <Data Name="RecoveryServer">DC01.contoso.local</Data>
            <Data Name="RecoveryKeyId" />
            <Data Name="FailureId">0x380000</Data>
          </EventData>
        </Event>
    The issue with the 4693 event is that while it is generated if there is DPAPI activity on the system, it unfortunately does not contain information about which process was performing the DPAPI activity, nor does it contain information about which particular secret is being accessed. This is because the Execution ProcessID field in the event will always be the process id of lsass.exe because it is this process that manages the encryption keys for the system, and there is no entry for the description of the data. It was for this reason that, in recent versions of Windows a new event type was added to help identify the process making the DPAPI call directly. This event was added to the Microsoft-Windows-Crypto-DPAPI stream which manifests in the Event Log in the Applications and Services Logs > Microsoft > Windows > Crypto-DPAPI part of the Event Viewer tree. The new event is called DPAPIDefInformationEvent and has id 16385, but unfortunately is only emitted to the Debug channel and by default this is not persisted to an Event Log, unless Debug channel logging is enabled. This can be accomplished by enabling it directly in powershell:
        $log = `
          New-Object System.Diagnostics.Eventing.Reader.EventLogConfiguration `
          Microsoft-Windows-Crypto-DPAPI/Debug
        $log.IsEnabled = $True
        $log.SaveChanges()
    Once this log is enabled then you should start to see 16385 events generated, and these will contain the real process ids of applications performing DPAPI operations. Note that 16385 events are emitted by the operating system even for data not flagged with CRYPTPROTECT_AUDIT, but to identify the data as owned by the browser, the data description is essential. 16385 events are described later.
    You will also want to enable Audit Process Creation in order to be able to know a current mapping of process ids to process names — more details on that later. You might want to also consider enabling logging of full command lines.
    Step 2: Collect the events
    The events you want to collect are:
    - From the Security log: 4688: "A new process was created."
    - From the Microsoft-Windows-Crypto-DPAPI/Debug log (enabled above): 16385: "DPAPIDefInformationEvent"
    These should be collected from all workstations, and persisted into your enterprise logging system for analysis.
    Step 3: Write detection logic to detect theft.
    With these two events it is now possible to detect when an unauthorized application calls into DPAPI to try and decrypt browser secrets. The general approach is to generate a map of process ids to active processes using the 4688 events, then every time a 16385 event is generated, it is possible to identify the currently running process, and alert if the process does not match an authorized application such as Google Chrome. You might find your enterprise logging software can already keep track of which process ids map to which process names, so feel free to just use that existing functionality.
    Let's dive deeper into the events. A 4688 event looks like this - e.g. here is Chrome browser launching from explorer:
        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="Microsoft-Windows-Security-Auditing" Guid="{...}" />
            <EventID>4688</EventID>
            <Version>2</Version>
            <Level>0</Level>
            <Task>13312</Task>
            <Opcode>0</Opcode>
            <Keywords>0x8020000000000000</Keywords>
            <TimeCreated SystemTime="2024-03-28T20:06:41.9254105Z" />
            <EventRecordID>78258343</EventRecordID>
            <Correlation />
            <Execution ProcessID="4" ThreadID="54256" />
            <Channel>Security</Channel>
            <Computer>WIN-GG82ULGC9GO.contoso.local</Computer>
            <Security />
          </System>
          <EventData>
            <Data Name="SubjectUserSid">S-1-5-18</Data>
            <Data Name="SubjectUserName">WIN-GG82ULGC9GO$</Data>
            <Data Name="SubjectDomainName">CONTOSO</Data>
            <Data Name="SubjectLogonId">0xe8c85cc</Data>
            <Data Name="NewProcessId">0x17eac</Data>
            <Data Name="NewProcessName">C:\Program Files\Google\Chrome\Application\chrome.exe</Data>
            <Data Name="TokenElevationType">%%1938</Data>
            <Data Name="ProcessId">0x16d8</Data>
            <Data Name="CommandLine">"C:\Program Files\Google\Chrome\Application\chrome.exe" </Data>
            <Data Name="TargetUserSid">S-1-0-0</Data>
            <Data Name="TargetUserName">-</Data>
            <Data Name="TargetDomainName">-</Data>
            <Data Name="TargetLogonId">0x0</Data>
            <Data Name="ParentProcessName">C:\Windows\explorer.exe</Data>
            <Data Name="MandatoryLabel">S-1-16-8192</Data>
          </EventData>
        </Event>
    The important part here is the NewProcessId, in hex 0x17eac which is 97964.
    A 16385 event looks like this:
        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="Microsoft-Windows-Crypto-DPAPI" Guid="{...}" />
            <EventID>16385</EventID>
            <Version>0</Version>
            <Level>4</Level>
            <Task>64</Task>
            <Opcode>0</Opcode>
            <Keywords>0x2000000000000040</Keywords>
            <TimeCreated SystemTime="2024-03-28T20:06:42.1772585Z" />
            <EventRecordID>826993</EventRecordID>
            <Correlation ActivityID="{777bf68d-7757-0028-b5f6-7b775777da01}" />
            <Execution ProcessID="1392" ThreadID="57108" />
            <Channel>Microsoft-Windows-Crypto-DPAPI/Debug</Channel>
            <Computer>WIN-GG82ULGC9GO.contoso.local</Computer>
            <Security UserID="S-1-5-18" />
          </System>
          <EventData>
            <Data Name="OperationType">SPCryptUnprotect</Data>
            <Data Name="DataDescription">Google Chrome</Data>
            <Data Name="MasterKeyGUID">{4df0861b-07ea-49f4-9a09-1d66fd1131c3}</Data>
            <Data Name="Flags">0</Data>
            <Data Name="ProtectionFlags">16</Data>
            <Data Name="ReturnValue">0</Data>
            <Data Name="CallerProcessStartKey">32651097299526713</Data>
            <Data Name="CallerProcessID">97964</Data>
            <Data Name="CallerProcessCreationTime">133561300019253302</Data>
            <Data Name="PlainTextDataSize">32</Data>
          </EventData>
        </Event>
    The important parts here are the OperationType, the DataDescription and the CallerProcessID. For DPAPI decrypts, the OperationType will be SPCryptUnprotect. Each Chromium based browser will tag its data with the product name, e.g. Google Chrome, or Microsoft Edge depending on the owner of the data. This will always appear in the DataDescription field, so it is possible to distinguish browser data from other DPAPI secured data. Finally, the CallerProcessID will map to the process performing the decryption. In this case, it is 97964 which matches the process ID seen in the 4688 event above, showing that this was likely Google Chrome decrypting its own data! Bear in mind that since these logs only contain the path to the executable, for a full assurance that this is actually Chrome (and not malware pretending to be Chrome, or malware injecting into Chrome), additional protections such as removing administrator access, and application allowlisting could also be used to give a higher assurance of this signal. In recent versions of Chrome or Edge, you might also see logs of decryptions happening in the elevation_service.exe process, which is another legitimate part of the browser's data storage.
    To detect unauthorized DPAPI access, you will want to generate a running map of all processes using 4688 events, then look for 16385 events that have a CallerProcessID that does not match a valid caller – Let's try that now.
    Testing with a python password stealer
    We can test that this works with a public script to decrypt passwords taken from a public blog. It generates two events, as expected. Here is the 16385 event, showing that a process is decrypting the "Google Chrome" key.
        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            < ... >
            <EventID>16385</EventID>
            < ... >
            <TimeCreated SystemTime="2024-03-28T20:28:13.7891561Z" />
            < ... >
          </System>
          <EventData>
            <Data Name="OperationType">SPCryptUnprotect</Data>
            <Data Name="DataDescription">Google Chrome</Data>
            < ... >
            <Data Name="CallerProcessID">68768</Data>
            <Data Name="CallerProcessCreationTime">133561312936527018</Data>
            <Data Name="PlainTextDataSize">32</Data>
          </EventData>
        </Event>
    Since the data description being decrypted was "Google Chrome" we know this is an attempt to read Chrome secrets, but to determine the process behind 68768 (0x10ca0), we need to correlate this with a 4688 event. Here is the corresponding 4688 event from the Security Log (a process start for python3.exe) with the matching process id:
        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            < ... >
            <EventID>4688</EventID>
            < ... >
            <TimeCreated SystemTime="2024-03-28T20:28:13.6527871Z" />
            < ... >
          </System>
          <EventData>
            < ... >
            <Data Name="NewProcessId">0x10ca0</Data>
            <Data Name="NewProcessName">C:\python3\bin\python3.exe</Data>
            <Data Name="TokenElevationType">%%1938</Data>
            <Data Name="ProcessId">0xca58</Data>
            <Data Name="CommandLine">"c:\python3\bin\python3.exe" steal_passwords.py</Data>
            < ... >
            <Data Name="ParentProcessName">C:\Windows\System32\cmd.exe</Data>
          </EventData>
        </Event>
    In this case, the process id matches the python3 executable running a potentially malicious script, so we know this is likely very suspicious behavior, and should trigger an alert immediately! Bear in mind process ids on Windows are not unique so you will want to make sure you use the 4688 event with the timestamp closest, but earlier than, the 16385 event.
    Summary
    This blog has described a technique for strong detection of cookie and credential theft. We hope that all defenders find this post useful. Thanks to Microsoft for adding the DPAPIDefInformationEvent log type, without which this would not be possible.
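    As a rough illustration of the correlation step described in the post, the Python sketch below assumes the 4688 and 16385 events have already been exported from the Event Log and flattened into dictionaries keyed by their <Data Name> fields; the allowlist, the TimeCreated field name, and the helper functions are illustrative assumptions, not part of the original article.
        # Hedged sketch: correlate 4688 (process creation) and 16385 (DPAPI decrypt)
        # events and alert when browser-owned data is decrypted by a process that is
        # not an allowlisted browser binary. Field names assume the XML
        # <Data Name="..."> entries were flattened into dict keys.
        from bisect import bisect_right
        from collections import defaultdict

        # Illustrative allowlist; extend with msedge.exe, elevation_service.exe, etc.
        ALLOWED_CALLERS = {
            r"C:\Program Files\Google\Chrome\Application\chrome.exe",
        }

        def build_process_map(events_4688):
            """Map pid -> sorted (timestamp, image path) pairs from 4688 events."""
            pid_map = defaultdict(list)
            for ev in events_4688:
                pid = int(ev["NewProcessId"], 16)      # e.g. "0x17eac" -> 97964
                pid_map[pid].append((ev["TimeCreated"], ev["NewProcessName"]))
            for entries in pid_map.values():
                entries.sort()
            return pid_map

        def image_at(pid_map, pid, timestamp):
            """Image path from the closest 4688 event at or before `timestamp`."""
            entries = pid_map.get(pid, [])
            idx = bisect_right(entries, (timestamp, chr(0x10FFFF))) - 1
            return entries[idx][1] if idx >= 0 else None

        def detect_theft(events_4688, events_16385):
            pid_map = build_process_map(events_4688)
            alerts = []
            for ev in events_16385:
                if ev.get("OperationType") != "SPCryptUnprotect":
                    continue
                if ev.get("DataDescription") not in ("Google Chrome", "Microsoft Edge"):
                    continue                            # not browser-owned data
                pid = int(ev["CallerProcessID"])
                image = image_at(pid_map, pid, ev["TimeCreated"])
                if image not in ALLOWED_CALLERS:
                    alerts.append((ev["TimeCreated"], pid, image))
            return alerts
    Because Windows reuses process ids, the lookup keeps every 4688 record per id and picks the one closest to, but earlier than, the 16385 timestamp, as the post recommends.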

  • How we fought bad apps and bad actors in 2023
    by Edward Fernandez on April 30, 2024 at 12:59 AM

    Posted by Steve Kafka and Khawaja Shams (Android Security and Privacy Team), and Mohet Saxena (Play Trust and Safety) A safe and trusted Google Play experience is our top priority. We leverage our SAFE (see below) principles to provide the framework to create that experience for both users and developers. Here's what these principles mean in practice: (S)afeguard our Users. Help them discover quality apps that they can trust. (A)dvocate for Developer Protection. Build platform safeguards to enable developers to focus on growth. (F)oster Responsible Innovation. Thoughtfully unlock value for all without compromising on user safety. (E)volve Platform Defenses. Stay ahead of emerging threats by evolving our policies, tools and technology. With those principles in mind, we’ve made recent improvements and introduced new measures to continue to keep Google Play’s users safe, even as the threat landscape continues to evolve. In 2023, we prevented 2.28 million policy-violating apps from being published on Google Play1 in part thanks to our investment in new and improved security features, policy updates, and advanced machine learning and app review processes. We have also strengthened our developer onboarding and review processes, requiring more identity information when developers first establish their Play accounts. Together with investments in our review tooling and processes, we identified bad actors and fraud rings more effectively and banned 333K bad accounts from Play for violations like confirmed malware and repeated severe policy violations. Additionally, almost 200K app submissions were rejected or remediated to ensure proper use of sensitive permissions such as background location or SMS access. To help safeguard user privacy at scale, we partnered with SDK providers to limit sensitive data access and sharing, enhancing the privacy posture for over 31 SDKs impacting 790K+ apps. We also significantly expanded the Google Play SDK Index, which now covers the SDKs used in almost 6 million apps across the Android ecosystem. This valuable resource helps developers make better SDK choices, boosts app quality and minimizes integration risks. Protecting the Android Ecosystem Building on our success with the App Defense Alliance (ADA), we partnered with Microsoft and Meta as steering committee members in the newly restructured ADA under the Joint Development Foundation, part of the Linux Foundation family. The Alliance will support industry-wide adoption of app security best practices and guidelines, as well as countermeasures against emerging security risks. Additionally, we announced new Play Store transparency labeling to highlight VPN apps that have completed an independent security review through App Defense Alliance’s Mobile App Security Assessment (MASA). When a user searches for VPN apps, they will now see a banner at the top of Google Play that educates them about the “Independent security review” badge in the Data safety section. This helps users see at-a-glance that a developer has prioritized security and privacy best practices and is committed to user safety. To better protect our customers who install apps outside of the Play Store, we made Google Play Protect’s security capabilities even more powerful with real-time scanning at the code-level to combat novel malicious apps. Our security protections and machine learning algorithms learn from each app submitted to Google for review and we look at thousands of signals and compare app behavior. 
This new capability has already detected over 5 million new, malicious off-Play apps, which helps protect Android users worldwide. More Stringent Developer Requirements and Guidelines Last year we updated Play policies around Generative AI apps, disruptive notifications, and expanded privacy protections. We also are raising the bar for new personal developer accounts by requiring new testing requirements before developers can make their app available on Google Play. By testing their apps, getting feedback and ensuring everything is ready before they launch, developers are able to bring more high quality content to Play users. In order to increase trust and transparency, we’ve introduced expanded developer verification requirements, including D-U-N-S numbers for organizations and a new “About the developer” section. To give users more control over their personal data, apps that enable account creation now need to provide an option to initiate account and data deletion from within the app and online. This web requirement is especially important so that a user can request account and data deletion without having to reinstall an app. To simplify the user experience, we have also incorporated this as a feature within the Data safety section of the Play Store. With each iteration of the Android operating system (including its robust set of APIs), a myriad of enhancements are introduced, aiming to elevate the user experience, bolster security protocols, and optimize the overall performance of the Android platform. To further safeguard our customers, approximately 1.5 million applications that do not target the most recent APIs are no longer available in the Play Store to new users who have updated their devices to the latest Android version. Looking Ahead Protecting users and developers on Google Play is paramount and ever-evolving. We're launching new security initiatives in 2024, including removing apps from Play that are not transparent about their privacy practices. We also recently filed a lawsuit in federal court against two fraudsters who made multiple misrepresentations to upload fraudulent investment and crypto exchange apps on Play to scam users. This lawsuit is a critical step in holding these bad actors accountable and sending a clear message that we will aggressively pursue those who seek to take advantage of our users. We're constantly working on new ways to protect your experience on Google Play and across the entire Android ecosystem, and we look forward to sharing more. Notes In accordance with the EU's Digital Services Act (DSA) reporting requirements, Google Play now calculates policy violations based on developer communications sent. ↩

  • Accelerating incident response using generative AI
    by Kimberly Samra on April 27, 2024 at 2:27 AM

    Lambert Rosique and Jan Keller, Security Workflow Automation, and Diana Kramer, Alexandra Bowen and Andrew Cho, Privacy and Security Incident Response
    Introduction
    As security professionals, we're constantly looking for ways to reduce risk and improve our workflow's efficiency. We've made great strides in using AI to identify malicious content, block threats, and discover and fix vulnerabilities. We also published the Secure AI Framework (SAIF), a conceptual framework for secure AI systems to ensure we are deploying AI in a responsible manner. Today we are highlighting another way we use generative AI to help the defenders gain the advantage: Leveraging LLMs (Large Language Model) to speed-up our security and privacy incidents workflows.
    Incident management is a team sport. We have to summarize security and privacy incidents for different audiences including executives, leads, and partner teams. This can be a tedious and time-consuming process that heavily depends on the target group and the complexity of the incident. We estimate that writing a thorough summary can take nearly an hour and more complex communications can take multiple hours. But we hypothesized that we could use generative AI to digest information much faster, freeing up our incident responders to focus on other more critical tasks - and it proved true. Using generative AI we could write summaries 51% faster while also improving the quality of them.
    Our incident response approach
    When suspecting a potential data incident, for example, we follow a rigorous process to manage it. From the identification of the problem, the coordination of experts and tools, to its resolution and then closure. At Google, when an incident is reported, our Detection & Response teams work to restore normal service as quickly as possible, while meeting both regulatory and contractual compliance requirements. They do this by following the five main steps in the Google incident response program:
    - Identification: Monitoring security events to detect and report on potential data incidents using advanced detection tools, signals, and alert mechanisms to provide early indication of potential incidents.
    - Coordination: Triaging the reports by gathering facts and assessing the severity of the incident based on factors such as potential harm to customers, nature of the incident, type of data that might be affected, and the impact of the incident on customers. A communication plan with appropriate leads is then determined.
    - Resolution: Gathering key facts about the incident such as root cause and impact, and integrating additional resources as needed to implement necessary fixes as part of remediation.
    - Closure: After the remediation efforts conclude, and after a data incident is resolved, reviewing the incident and response to identify key areas for improvement.
    - Continuous improvement: Is crucial for the development and maintenance of incident response programs. Teams work to improve the program based on lessons learned, ensuring that necessary teams, training, processes, resources, and tools are maintained.
    [Image: Google’s Incident Response Process diagram flow]
    Leveraging generative AI
    Our detection and response processes are critical in protecting our billions of global users from the growing threat landscape, which is why we’re continuously looking for ways to improve them with the latest technologies and techniques.
    The growth of generative AI has brought with it incredible potential in this area, and we were eager to explore how it could help us improve parts of the incident response process. We started by leveraging LLMs to not only pioneer modern approaches to incident response, but also to ensure that our processes are efficient and effective at scale. Managing incidents can be a complex process and an additional factor is effective internal communication to leads, executives and stakeholders on the threats and status of incidents. Effective communication is critical as it properly informs executives so that they can take any necessary actions, as well as to meet regulatory requirements. Leveraging LLMs for this type of communication can save significant time for the incident commanders while improving quality at the same time.
    Humans vs. LLMs
    Given that LLMs have summarization capabilities, we wanted to explore if they are able to generate summaries on par, or as well as humans can. We ran an experiment that took 50 human-written summaries from native and non-native English speakers, and 50 LLM-written ones with our finest (and final) prompt, and presented them to security teams without revealing the author. We learned that the LLM-written summaries covered all of the key points, they were rated 10% higher than their human-written equivalents, and cut the time necessary to draft a summary in half.
    [Image: Comparison of human vs LLM content completeness]
    [Image: Comparison of human vs LLM writing styles]
    Managing risks and protecting privacy
    Leveraging generative AI is not without risks. In order to mitigate the risks around potential hallucinations and errors, any LLM generated draft must be reviewed by a human. But not all risks are from the LLM - human misinterpretation of a fact or statement generated by the LLM can also happen. That is why it’s important to ensure there is human accountability, as well as to monitor quality and feedback over time. Given that our incidents can contain a mixture of confidential, sensitive, and privileged data, we had to ensure we built an infrastructure that does not store any data. Every component of this pipeline - from the user interface to the LLM to output processing - has logging turned off. And, the LLM itself does not use any input or output for re-training. Instead, we use metrics and indicators to ensure it is working properly.
    Input processing
    The type of data we process during incidents can be messy and often unstructured: Free-form text, logs, images, links, impact stats, timelines, and code snippets. We needed to structure all of that data so the LLM “knew” which part of the information serves what purpose. For that, we first replaced long and noisy sections of codes/logs by self-closing tags (<Code Section/> and <Logs/>) both to keep the structure while saving tokens for more important facts and to reduce risk of hallucinations. During prompt engineering, we refined this approach and added additional tags such as <Title>, <Actions Taken>, <Impact>, <Mitigation History>, <Comment> so the input’s structure becomes closely mirrored to our incident communication templates. The use of self-explanatory tags allowed us to convey implicit information to the model and provide us with aliases in the prompt for the guidelines or tasks, for example by stating “Summarize the <Security Incident>”.
    [Image: Sample {incident} input]
    Prompt engineering
    Once we added structure to the input, it was time to engineer the prompt.
    We started simple by exploring how LLMs can view and summarize all of the current incident facts with a short task:
    [Image: First prompt version]
    Limits of this prompt:
    - The summary was too long, especially for executives trying to understand the risk and impact of the incident
    - Some important facts were not covered, such as the incident’s impact and its mitigation
    - The writing was inconsistent and not following our best practices such as “passive voice”, “tense”, “terminology” or “format”
    - Some irrelevant incident data was being integrated into the summary from email threads
    - The model struggled to understand what the most relevant and up-to-date information was
    For version 2, we tried a more elaborate prompt that would address the problems above: We told the model to be concise and we explained what a well-written summary should be: About the main incident response steps (coordination and resolution).
    [Image: Second prompt version]
    Limits of this prompt:
    - The summaries still did not always succinctly and accurately address the incident in the format we were expecting
    - At times, the model lost sight of the task or did not take all the guidelines into account
    - The model still struggled to stick to the latest updates
    - We noticed a tendency to draw conclusions on hypotheses with some minor hallucinations
    For the final prompt, we inserted 2 human-crafted summary examples and introduced a <Good Summary> tag to highlight high quality summaries but also to tell the model to immediately start with the summary without first repeating the task at hand (as LLMs usually do).
    [Image: Final prompt]
    This produced outstanding summaries, in the structure we wanted, with all key points covered, and almost without any hallucinations.
    Workflow integration
    In integrating the prompt into our workflow, we wanted to ensure it was complementing the work of our teams, vs. solely writing communications. We designed the tooling in a way that the UI had a ‘Generate Summary’ button, which would pre-populate a text field with the summary that the LLM proposed. A human user can then either accept the summary and have it added to the incident, do manual changes to the summary and accept it, or discard the draft and start again.
    [Image: UI showing the ‘generate draft’ button and LLM proposed summary around a fake incident]
    Quantitative wins
    Our newly-built tool produced well-written and accurate summaries, resulting in 51% time saved, per incident summary drafted by an LLM, versus a human.
    [Image: Time savings using LLM-generated summaries (sample size: 300)]
    The only edge cases we have seen were around hallucinations when the input size was small in relation to the prompt size. In these cases, the LLM made up most of the summary and key points were incorrect. We fixed this programmatically: If the input size is smaller than 200 tokens, we won’t call the LLM for a summary and let the humans write it.
    Evolving to more complex use cases: Executive updates
    Given these results, we explored other ways to apply and build upon the summarization success and apply it to more complex communications. We improved upon the initial summary prompt and ran an experiment to draft executive communications on behalf of the Incident Commander (IC). The goal of this experiment was to ensure executives and stakeholders quickly understand the incident facts, as well as allow ICs to relay important information around incidents.
    These communications are complex because they go beyond just a summary - they include different sections (such as summary, root cause, impact, and mitigation), follow a specific structure and format, as well as adhere to writing best practices (such as neutral tone, active voice instead of passive voice, minimize acronyms). This experiment showed that generative AI can evolve beyond high level summarization and help draft complex communications. Moreover, LLM-generated drafts reduced the time ICs spent writing executive summaries by 53%, while delivering at least on-par content quality in terms of factual accuracy and adherence to writing best practices.
    What’s next
    We're constantly exploring new ways to use generative AI to protect our users more efficiently and look forward to tapping into its potential as cyber defenders. For example, we are exploring using generative AI as an enabler of ambitious memory safety projects like teaching an LLM to rewrite C++ code to memory-safe Rust, as well as more incremental improvements to everyday security workflows, such as getting generative AI to read design documents and issue security recommendations based on their content.
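    A minimal sketch of the input-structuring idea described above, assuming incident facts arrive as a plain Python dict: long code and log sections are replaced with self-closing tags to save tokens, and the remaining facts are wrapped in tags that mirror an incident template. The tag set matches the one named in the post, but the field names, regular expressions, and prompt wording are illustrative assumptions, not Google's actual pipeline.
        # Hedged sketch: structure messy incident data before prompting an LLM.
        import re

        def strip_noise(text: str) -> str:
            # Replace fenced code blocks and runs of timestamped log lines with
            # self-closing tags, keeping structure while saving tokens.
            text = re.sub(r"```.*?```", "<Code Section/>", text, flags=re.DOTALL)
            text = re.sub(r"(?m)^(?:\d{4}-\d{2}-\d{2}T\S+.*\n?)+", "<Logs/>\n", text)
            return text

        def structure_incident(incident: dict) -> str:
            sections = [
                ("Title", incident.get("title", "")),
                ("Impact", incident.get("impact", "")),
                ("Actions Taken", incident.get("actions_taken", "")),
                ("Mitigation History", incident.get("mitigations", "")),
                ("Comment", incident.get("comments", "")),
            ]
            body = "\n".join(
                f"<{name}>{strip_noise(text)}</{name}>"
                for name, text in sections if text
            )
            return f"<Security Incident>\n{body}\n</Security Incident>"

        # The structured block is then embedded in a summarization prompt, e.g.
        # "Summarize the <Security Incident> below ...", and every LLM draft is
        # reviewed by a human before it is attached to the incident.
    Following the edge case noted above, a caller could skip the LLM entirely when the structured input is very short (the post uses a 200-token threshold) and let the responder write the summary by hand.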

  • Uncovering potential threats to your web application by leveraging security reports
    by Google on April 24, 2024 at 2:15 AM

    Posted by Yoshi Yamaguchi, Santiago Díaz, Maud Nalpas, Eiji Kitamura, DevRel team
    The Reporting API is an emerging web standard that provides a generic reporting mechanism for issues occurring on the browsers visiting your production website. The reports you receive detail issues such as security violations or soon-to-be-deprecated APIs, from users’ browsers from all over the world. Collecting reports is often as simple as specifying an endpoint URL in the HTTP header; the browser will automatically start forwarding reports covering the issues you are interested in to those endpoints. However, processing and analyzing these reports is not that simple. For example, you may receive a massive number of reports on your endpoint, and it is possible that not all of them will be helpful in identifying the underlying problem. In such circumstances, distilling and fixing issues can be quite a challenge. In this blog post, we'll share how the Google security team uses the Reporting API to detect potential issues and identify the actual problems causing them. We'll also introduce an open source solution, so you can easily replicate Google's approach to processing reports and acting on them.
    How does the Reporting API work?
    Some errors only occur in production, on users’ browsers to which you have no access. You won't see these errors locally or during development because there could be unexpected conditions real users, real networks, and real devices are in. With the Reporting API, you directly leverage the browser to monitor these errors: the browser catches these errors for you, generates an error report, and sends this report to an endpoint you've specified.
    [Image: How reports are generated and sent]
    Errors you can monitor with the Reporting API include:
    - Security violations: Content-Security-Policy (CSP), Cross-Origin-Opener-Policy (COOP), Cross-Origin-Embedder-Policy (COEP)
    - Deprecated and soon-to-be-deprecated API calls
    - Browser interventions
    - Permissions policy
    - And more
    For a full list of error types you can monitor, see use cases and report types. The Reporting API is activated and configured using HTTP response headers: you need to declare the endpoint(s) you want the browser to send reports to, and which error types you want to monitor. The browser then sends reports to your endpoint in POST requests whose payload is a list of reports. Example setup:
        # Example setup to receive CSP violations reports, Document-Policy violations reports, and Deprecation reports
        Reporting-Endpoints: main-endpoint="https://reports.example/main", default="https://reports.example/default"
        # CSP violations and Document-Policy violations will be sent to `main-endpoint`
        Content-Security-Policy: script-src 'self'; object-src 'none'; report-to main-endpoint;
        Document-Policy: document-write=?0; report-to=main-endpoint;
        # Deprecation reports are generated automatically and don't need an explicit endpoint; they're always sent to the `default` endpoint
    Note: Some policies support "report-only" mode. This means the policy sends a report, but doesn't actually enforce the restriction. This can help you gauge if the policy is working effectively. Chrome users whose browsers generate reports can see them in DevTools in the Application panel:
    [Image: Example of viewing reports in the Application panel of DevTools]
    You can generate various violations and see how they are received on a server in the reporting endpoint demo:
    [Image: Example violation reports]
    The Reporting API is supported by Chrome, and partially by Safari as of March 2024.
For details, see the browser support table. Google's approach Google benefits from being able to uplift security at scale. Web platform mitigations like Content Security Policy, Trusted Types, Fetch Metadata, and the Cross-Origin Opener Policy help us engineer away entire classes of vulnerabilities across hundreds of Google products and thousands of individual services, as described in this blogpost. One of the engineering challenges of deploying security policies at scale is identifying code locations that are incompatible with new restrictions and that would break if those restrictions were enforced. There is a common 4-step process to solve this problem: Roll out policies in report-only mode (CSP report-only mode example). This instructs browsers to execute client-side code as usual, but gather information on any events where the policy would be violated if it were enforced. This information is packaged in violation reports that are sent to a reporting endpoint. The violation reports must be triaged to link them to locations in code that are incompatible with the policy. For example, some code bases may be incompatible with security policies because they use a dangerous API or use patterns that mix user data and code. The identified code locations are refactored to make them compatible, for example by using safe versions of dangerous APIs or changing the way user input is mixed with code. These refactorings uplift the security posture of the code base by helping reduce the usage of dangerous coding patterns. When all code locations have been identified and refactored, the policy can be removed from report-only mode and fully enforced. Note that in a typical roll out, we iterate steps 1 through 3 to ensure that we have triaged all violation reports. With the Reporting API, we have the ability to run this cycle using a unified reporting endpoint and a single schema for several security features. This allows us to gather reports for a variety of features across different browsers, code paths, and types of users in a centralized way. Note: A violation report is generated when an entity is attempting an action that one of your policies forbids. For example, you've set CSP on one of your pages, but the page is trying to load a script that's not allowed by your CSP. Most reports generated via the Reporting API are violation reports, but not all — other types include deprecation reports and crash reports. For details, see Use cases and report types. Unfortunately, it is common for noise to creep into streams of violation reports, which can make finding incompatible code locations difficult. For example, many browser extensions, malware, antivirus software, and devtools users inject third-party code into the DOM or use forbidden APIs. If the injected code is incompatible with the policy, this can lead to violation reports that cannot be linked to our code base and are therefore not actionable. This makes triaging reports difficult and makes it hard to be confident that all code locations have been addressed before enforcing new policies. Over the years, Google has developed a number of techniques to collect, digest, and summarize violation reports into root causes. Here is a summary of the most useful techniques we believe developers can use to filter out noise in reported violations: Focus on root causes It is often the case that a piece of code that is incompatible with the policy executes several times throughout the lifetime of a browser tab. 
Each time this happens, a new violation report is created and queued to be sent to the reporting endpoint. This can quickly lead to a large volume of individual reports, many of which contain redundant information. Because of this, grouping violation reports into clusters enables developers to abstract away individual violations and think in terms of root causes. Root causes are simpler to understand and can speed up the process of identifying useful refactorings. Let's take a look at an example to understand how violations may be grouped. For instance, a report-only CSP that forbids the use of inline JavaScript event handlers is deployed. Violation reports are created on every instance of those handlers and have the following fields set: The blockedURL field is set to inline, which describes the type of violation. The scriptSample field is set to the first few bytes of the contents of the event handler in the field. The documentURL field is set to the URL of the current browser tab. Most of the time, these three fields uniquely identify the inline handlers in a given URL, even if the values of other fields differ. This is common when there are tokens, timestamps, or other random values across page loads. Depending on your application or framework, the values of these fields can differ in subtle ways, so being able to do fuzzy matches on reporting values can go a long way in grouping violations into actionable clusters. In some cases, we can group violations whose URL fields have known prefixes, for example all violations with URLs that start with chrome-extension, moz-extension, or safari-extension can be grouped together to set root causes in browser extensions aside from those in our codebase with a high degree of confidence. Developing your own grouping strategies helps you stay focused on root causes and can significantly reduce the number of violation reports you need to triage. In general, it should always be possible to select fields that uniquely identify interesting types of violations and use those fields to prioritize the most important root causes. Leverage ambient information Another way of distinguishing non-actionable from actionable violation reports is ambient information. This is data that is contained in requests to our reporting endpoint, but that is not included in the violation reports themselves. Ambient information can hint at sources of noise in a client's set up that can help with triage: User Agent or User Agent client hints: User agents are a great tell-tale sign of non-actionable violations. For example, crawlers, bots, and some mobile applications use custom user agents whose behavior differs from well-supported browser engines and that can trigger unique violations. In other cases, some violations may only trigger in a specific browser or be caused by changes in nightly builds or newer versions of browsers. Without user agent information, these violations would be significantly more difficult to investigate. Trusted users: Browsers will attach any available cookies to requests made to a reporting endpoint by the Reporting API, if the endpoint is same-site with the document where the violation occurs. Capturing cookies is useful for identifying the type of user that caused a violation. Often, the most actionable violations come from trusted users that are not likely to have invasive extensions or malware, like company employees or website administrators. 
If you are not able to capture authentication information through your reporting endpoint, consider rolling out report-only policies to trusted users first. Doing so allows you to build a baseline of actionable violations before rolling out your policies to the general public. Number of unique users: As a general principle, users of typical features or code paths should generate roughly the same violations. This allows us to flag violations seen by a small number of users as potentially suspicious, since they suggest that a user's particular setup might be at fault, rather than our application code. One way of 'counting users' is to keep note of the number of unique IP addresses that reported a violation. Approximate counting algorithms are simple to use and can help gather this information without tracking specific IP addresses. For example, the HyperLogLog algorithm requires just a few bytes to approximate the number of unique elements in a set with a high degree of confidence. Map violations to source code (advanced) Some types of violations have a source_file field or equivalent. This field represents the JavaScript file that triggered the violation and is usually accompanied by a line and column number. These three bits of data are a high-quality signal that can point directly to lines of code that need to be refactored. Nevertheless, it is often the case that source files fetched by browsers are compiled or minimized and don't map directly to your code base. In this case, we recommend you use JavaScript source maps to map line and column numbers between deployed and authored files. This allows you to translate directly from violation reports to lines of source code, yielding highly actionable report groups and root causes. Establish your own solution The Reporting API sends browser-side events, such as security violations, deprecated API calls, and browser interventions, to the specified endpoint on a per-event basis. However, as explained in the previous section, to distill the real issues out of those reports, you need a data processing system on your end. Fortunately, there are plenty of options in the industry to set up the required architecture, including open source products. The fundamental pieces of the required system are the following: API endpoint: A web server that accepts HTTP requests and handles reports in a JSON format Storage: A storage server that stores received reports and reports processed by the pipeline Data pipeline: A pipeline that filters out noise and extracts and aggregates required metadata into constellations Data visualizer: A tool that provides insights on the processed reports Solutions for each of the components listed above are made available by public cloud platforms, SaaS services, and as open source software. See the Alternative solutions section for details, and the following section outlining a sample application. Sample application: Reporting API Processor To help you understand how to receive reports from browsers and how to handle these received reports, we created a small sample application that demonstrates the following processes that are required for distilling web application security issues from reports sent by browsers: Report ingestion to the storage Noise reduction and data aggregation Processed report data visualization Although this sample is relying on Google Cloud, you can replace each of the components with your preferred technologies. 
An overview of the sample application is illustrated in the following diagram: Components described as green boxes are components that you need to implement by yourself. Forwarder is a simple web server that receives reports in the JSON format and converts them to the schema for Bigtable. Beam-collector is a simple Apache Beam pipeline that filters noisy reports, aggregates relevant reports into the shape of constellations, and saves them as CSV files. These two components are the key parts to make better use of reports from the Reporting API. Try it yourself Because this is a runnable sample application, you are able to deploy all components to a Google Cloud project and see how it works by yourself. The detailed prerequisites and the instructions to set up the sample system are documented in the README.md file. Alternative solutions Aside from the open source solution we shared, there are a number of tools available to assist in your usage of the Reporting API. Some of them include: Report-collecting services like report-uri and uriports. Application error monitoring platforms like Sentry, Datadog, etc. Besides pricing, consider the following points when selecting alternatives: Are you comfortable sharing any of your application's URLs with a third-party report collector? Even if the browser strips sensitive information from these URLs, sensitive information may get leaked this way. If this sounds too risky for your application, operate your own reporting endpoint. Does this collector support all report types you need? For example, not all reporting endpoint solutions support COOP/COEP violation reports. Summary In this article, we explained how web developers can collect client-side issues by using the Reporting API, and the challenges of distilling the real problems out of the collected reports. We also introduced how Google solves those challenges by filtering and processing reports, and shared an open source project that you can use to replicate a similar solution. We hope this information will motivate more developers to take advantage of the Reporting API and, in consequence, make their website more secure and sustainable. Learning resources Monitor your web application with the Reporting API | Capabilities | Chrome for Developers A Recipe for Scaling Security – Google Bug Hunters
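    As a rough sketch of the root-cause grouping and noise filtering described above, the snippet below assumes violation reports have already been ingested as a list of dicts in the Reporting API's JSON shape ({"type": ..., "url": ..., "body": {...}}); the field names follow the post, while the clustering key and the extension-scheme check are simplified assumptions rather than the pipeline used in the sample application.
        # Hedged sketch: cluster CSP violation reports into likely root causes.
        from collections import Counter
        from urllib.parse import urlsplit

        EXTENSION_SCHEMES = ("chrome-extension", "moz-extension", "safari-extension")

        def root_cause_key(report: dict):
            body = report.get("body", {})
            blocked = body.get("blockedURL") or body.get("blocked-uri") or ""
            if urlsplit(blocked).scheme in EXTENSION_SCHEMES:
                return ("browser-extension noise",)     # set extension noise aside
            # Drop query strings and fragments so per-pageload tokens do not split
            # one root cause into many clusters.
            page = urlsplit(report.get("url", ""))._replace(query="", fragment="").geturl()
            sample = (body.get("scriptSample") or "")[:40]
            return (blocked, sample, page)

        def top_root_causes(reports, n=10):
            clusters = Counter(
                root_cause_key(r) for r in reports if r.get("type") == "csp-violation"
            )
            return clusters.most_common(n)
    Feeding collected reports through top_root_causes() yields a ranked list of clusters; combining it with the ambient signals mentioned above (user agent, trusted users, unique-user counts) further separates actionable violations from noise.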


AWS Security Blog The latest AWS security, identity, and compliance launches, announcements, and how-to posts.

  • AWS achieves Spain’s ENS High 311/2022 certification across 172 services
    by Daniel Fuertes on 2024年5月6日 at 10:23 PM

    Amazon Web Services (AWS) has recently renewed the Esquema Nacional de Seguridad (ENS) High certification, upgrading to the latest version regulated under Royal Decree 311/2022. The ENS establishes security standards that apply to government agencies and public organizations in Spain and service providers on which Spanish public services depend. This security framework has gone through

  • AWS is issued a renewed certificate for the BIO Thema-uitwerking Clouddiensten with increased scope
    by Ka Yie Lee on 2024年5月4日 at 3:18 AM

    We’re pleased to announce that Amazon Web Services (AWS) demonstrated continuous compliance with the Baseline Informatiebeveiliging Overheid (BIO) Thema-uitwerking Clouddiensten while increasing the AWS services and AWS Regions in scope. This alignment with the BIO Thema-uitwerking Clouddiensten requirements demonstrates our commitment to adhere to the heightened expectations for cloud service providers. AWS customers across the Dutch public sector can

  • Authorize API Gateway APIs using Amazon Verified Permissions and Amazon Cognito
    by Kevin Hakanson on 2024年4月25日 at 3:02 AM

    Externalizing authorization logic for application APIs can yield multiple benefits for Amazon Web Services (AWS) customers. These benefits can include freeing up development teams to focus on application logic, simplifying application and resource access audits, and improving application security by using continual authorization. Amazon Verified Permissions is a scalable permissions management and fine-grained authorization service

  • Using Amazon Verified Permissions to manage authorization for AWS IoT smart home applications
    by Rajat Mathur on 2024年4月24日 at 4:37 AM

    This blog post introduces how manufacturers and smart appliance consumers can use Amazon Verified Permissions to centrally manage permissions and fine-grained authorizations. Developers can offer more intuitive, user-friendly experiences by designing interfaces that align with user personas and multi-tenancy authorization strategies, which can lead to higher user satisfaction and adoption. Traditionally, implementing authorization logic using

  • 2023 ISO 27001 certificate available in Spanish and French, and 2023 ISO 22301 certificate available in Spanish
    by Atulsing Patil on 2024年4月19日 at 3:48 AM

    Amazon Web Services (AWS) is pleased to announce that translated versions of our 2023 ISO 27001 and 2023 ISO 22301 certifications are now available: The 2023 ISO 27001 certificate is available in Spanish and French. The 2023 ISO 22301 certificate is available in Spanish. Translated certificates are available to customers



Check Point Research Latest Research by our Team

  • 6th May – Threat Intelligence Report
    by tomersp@checkpoint.com on 2024年5月6日 at 8:21 PM

    For the latest discoveries in cyber research for the week of 29th April, please download our Threat_Intelligence Bulletin. TOP ATTACKS AND BREACHES In a joint statement with Germany and NATO, the Czech Republic uncovered a cyber espionage campaign by Russian state affiliated actor APT28. These cyber-attacks targeted Czech institutions using a new vulnerability in Microsoft The post 6th May – Threat Intelligence Report appeared first on Check Point Research.

  • 29th April – Threat Intelligence Report
    by lorenf on 2024年4月29日 at 7:32 PM

    For the latest discoveries in cyber research for the week of 29th April, please download our Threat_Intelligence Bulletin. TOP ATTACKS AND BREACHES Germany has revealed a sophisticated state-sponsored hacking campaign targeting Volkswagen, orchestrated by Chinese hackers since 2010. The attackers successfully infiltrated VW’s networks multiple times, extracting thousands of documents critical to automotive technology, including The post 29th April – Threat Intelligence Report appeared first on Check Point Research.

  • 22nd April – Threat Intelligence Report
    by tomersp@checkpoint.com on 2024年4月22日 at 9:50 PM

    For the latest discoveries in cyber research for the week of 22nd April, please download our Threat_Intelligence Bulletin. TOP ATTACKS AND BREACHES MITRE Corporation disclosed a security event that occurred in January 2024. The attack, which is linked to Chinese APT group UNC5221, involved exploitation of two zero-day vulnerabilities in Ivanti VPN products. The attacker The post 22nd April – Threat Intelligence Report appeared first on Check Point Research.

  • 2024 Security Report: Podcast Edition
    by etal on 2024年4月18日 at 10:00 PM

    Once every year, Check Point releases an annual report reviewing the biggest events and trends in cybersecurity. In this episode we’ll break down the latest iteration, focusing on its most important parts, to catch you up on what you need to know most in 2024. The post 2024 Security Report: Podcast Edition appeared first on Check Point Research.

  • 15th April – Threat Intelligence Report
    by lorenf on 2024年4月15日 at 8:16 PM

    For the latest discoveries in cyber research for the week of 15th April, please download our Threat_Intelligence Bulletin. TOP ATTACKS AND BREACHES Japanese optics giant Hoya Corporation has been a victim of a ransomware attack that impacted its major IT infrastructure and various business divisions. Hunters International ransomware gang claimed responsibility for the attack and The post 15th April – Threat Intelligence Report appeared first on Check Point Research.


Cloudbric(クラウドブリック)

  • CSIRTとは?主な役割や設置の際の注意点を解説
    by Blog on 2024年5月1日 at 9:30 AM

    ビジネスに欠かせないインターネット、パソコンやスマートフォンなどのデジタル機器ですが、最近は不正アクセス・個人情報の流出などのトラブルが増えています。この記事ではこうしたセキュリティトラブルに対応する専門チーム・CSIRTとは何か、その役割やメリット、SOCやPSIRTとの違い、導入する際の注意点などを解説します。専門知識がない・CSIRTを設置したくても人材を確保できない方におすすめの対策も紹介します。

    CSIRTとは
    CSIRT(シーサート:Computer Security Incident Response Team)とは、セキュリティの監視、セキュリティインシデントの原因調査・分析・事後対応を行うチームのことです。
    デジタル化が進んだ影響を受け、サイバー攻撃が増加している現代において、攻撃を受けた場合に迅速に対応するCSIRTに注目が集まっています。
    実際、国内のCSIRT構築運用支援サービス市場の売上金額は年々上昇しています。ITRの調査によると、2016年の売上金額は61億円でしたが、2022年は113億円に上昇しています。
    参照元:ITR「CSIRT構築運用支援サービス市場規模推移および予測」

    ・CSIRTとSOCの違い
    CSIRTとよく似たチームにSOC(ソック:Security Operation Center)がありますが、CSIRTとSOCは基本的な役割と機能が異なります。CSIRTの主な役割は、セキュリティインシデント発生時に被害拡大の防止・根本解決などを実施することです。一方、SOCは組織内のセキュリティを監視し、サイバー攻撃のチェックや分析を行います。SOCがインシデントを検知した際はCSIRTに報告し、対応を委ねます。

    ・CSIRTとPSIRTの違い
    CSIRTと同じように注目を集めているPSIRTとの違いも把握しておきましょう。PSIRT(ピーサート:Product Security Incident Response Team)もインシデントが発生した際に対応するチームです。ただし、CSIRTとは対応する範囲が異なります。PSIRTは自社が提供した製品やサービスに関連するセキュリティインシデントに対応します。PSIRTは外部に提供した製品・サービスを保護する目的で設置されるため、社内ネットワークのトラブルはCSIRTが対応します。

    CSIRTの主な役割
    CSIRTの役割は、インシデントが起きてしまった時の事後対応、発生を抑える事前対応、そしてセキュリティマネジメントの3つに集約されます。それぞれについて詳しく解説します。

    ・インシデント事後対応
    CSIRTの主な役割は、セキュリティインシデントの事後対応です。セキュリティインシデントが発生すると、事前に検討した処理を行い、被害を最小限にとどめてシステムを復旧します。まずインシデントの検知から始まり、トリアージ(優先順位付け)、インシデントレスポンス(対応)、報告・情報公開の4段階で解決を図ります。また、発生したインシデントの分析・対応・復旧だけでなく、セキュリティ専門家や他部署に協力を仰ぎ、他のメンバーとも情報交換を行いながら再発防止策の検討やセキュリティ強化対策を実施します。

    ・インシデント事前対応
    インシデント発生に備える事前対応も行います。防止対策の検討と導入、ナレッジ共有や社員教育、トレーニングの実施、さらには管理体制の見直しなどを行いながら予防します。流行しているウイルスや脆弱性情報などの収集・分析と共有、セキュリティ監査・セキュリティツールの管理や開発も役割のひとつです。社内外の組織とセキュリティ情報共有や連携も行います。他社の事例なども含めて最新の情報を収集・分析し、自社のセキュリティ対策に活用する場合もあります。このようにセキュリティ対策の質自体を高める活動も重要な役割です。

    ・セキュリティマネジメント
    情報システム部門だけでなく、組織全体がセキュリティに対して正しい知識を持ち、迅速に対応できるように教育することもCSIRTの役割です。インシデントは必ず起きるもの、という認識を社員全員が持ち、組織全体のセキュリティ意識を高めることがインシデントの発生を抑えることに役立ちます。それだけでなく、インシデントの早期発見にもつながり、被害を最小限に食い止めることも可能です。

    企業にCSIRTを設置する際の注意点
    CSIRTを導入するにあたって特に気をつけたいのは、経営陣の理解を十分に得ることと、外部連携の重要性を認識し、積極的にコミュニケーションを取ることです。

    ・経営層の理解を得る
    CSIRTの設置と運用は、経営課題として企業全体で取り組む必要があることを経営陣・決裁担当者に理解してもらうことが重要です。そのためには、CSIRTを導入する必要性やメリットを伝え、理解と協力を得る必要があります。具体的には、セキュリティインシデントが自社に及ぼす被害・損失の例を伝え、予防・被害を最小限に抑えるためにCSIRTの設置が有効であることや、起こった時に最善策が取れるように準備する重要性やメリットを伝えることです。

    ・外部とも連携して設置する
    インシデントが起こった際、被害を最小限に抑えるためにも関連組織や他のCSIRT、SOCといった外部との連携を構築しておくことが重要です。特にSOCとはしっかりコミュニケーションを取っておきましょう。SOCとの連携不足はインシデント発生時の対応が遅れるなど、被害が拡大する恐れがあります。また、常に迅速・的確な対応ができるように、監査部門、コンプライアンス部門、広報部門などと連携して情報共有・協力体制を構築したり、外部から専門家を招いて社内教育を実施したりするのも効果的です。

    まとめ
    CSIRTとはセキュリティインシデントに対応する役割を担うチームです。CSIRTの設置には人的リソースの確保が不可欠です。セキュリティインシデントが発生した際に迅速に対応するうえで欠かせないものの、人手不足などの理由から人材確保が難しい場合もあります。そのような場合は、CSIRTの人的負担を軽減できるWAFサービスの導入も併せて検討しましょう。

    The post CSIRTとは?主な役割や設置の際の注意点を解説 first appeared on Cloudbric(クラウドブリック).

  • 「AWS Activate」プロバイダーに認定、5,000ドルのAWSクレジットを提供
    by cloudbric on 2024年4月17日 at 10:00 AM

    このたび、ペンタセキュリティ株式会社は、アマゾン ウェブ サービス(以下AWS)のスタートアップ支援プログラムである「AWS Activate」のプロバイダーに認定されました。

    情報セキュリティ企業であるペンタセキュリティは、クラウド型WAFサービスの開発ノウハウを活かし、AWS認定ソフトウェアとしてAWS WAFに特化した運用管理サービス「Cloudbric WMS for AWS」やAWS WAF専用のマネージドルール「Cloudbric Rule Set」を提供してきました。

    今回、ペンタセキュリティがAWS Activate プロバイダーに認定されたことにより、スタートアップ企業は5,000ドル(US)相当のAWSクレジットを受け取ることができ、Cloudbricの利用料金に充当することが可能になります。また、AWS ソリューション アーキテクトによる技術支援、パーソナライズされたコンテンツや限定オファーを受けられるAWS Activateコンソールへのアクセスも可能になります。

    AWS Activateを通じてCloudbricのサービスを導入することで、スタートアップ企業のセキュリティを強固にできるとともに、ビジネス競争力の向上や事業成長にも貢献できると考え、今回の参画に至りました。

    ▼AWS Activate の詳細および申し込みはこちら
    https://www.cloudbric.jp/aws-activate/

    ▽AWS Activateとは
    https://aws.amazon.com/jp/activate/activate-landing/

    The post 「AWS Activate」プロバイダーに認定、5,000ドルのAWSクレジットを提供 first appeared on Cloudbric(クラウドブリック).

  • Basic認証とは?メリット・デメリットや脆弱性を徹底解説
    by Blog on 2024年4月16日 at 11:30 AM

    Webアプリケーションの認証方式の中でも、極めて簡便な方法のひとつがBasic認証(ベーシック認証)です。Basic認証は、手軽にアクセス制限をかけることができますが、セキュリティ上の問題点も指摘されています。今回の記事では、Basic認証とは何か、改めてわかりやすく解説し、メリットと注意点も紹介します。

    Basic認証(ベーシック認証)とは
    Basic認証(ベーシック認証)とは、Webサイトにアクセス制限を施すための認証方法のひとつで、比較的簡単に導入できるため、広く用いられています。Basic認証によって制限されたページを閲覧するには、正確なユーザー名(ID)とパスワードの入力が必要となります。正しく入力が行われないと画面にエラーメッセージが表示されます。「基本認証」とも呼ばれます。
    一般に公開されているWebサイトの中で、有料会員のみが閲覧できるページを作成したり、社内の特定のメンバーのみ利用できるページを作ったりするときによく利用されます。また、公開前のページの閲覧に制限をかけたい場合や、自作のポートフォリオを特定のクライアントにのみ閲覧してもらいたい場合などにも利用できます。
    Basic認証は「.htaccess」および「.htpasswd」の2種類のファイルによって設定されます。認証を施したいフォルダに「.htaccess」および「.htpasswd」のファイルを設定し、それぞれに特定のコードを作成するだけで完了します。
    ユーザーがリンクをクリック、またはURLを入力すると、ブラウザからWebサーバーに向けてリクエストが送信されます。この時、Basic認証が導入されている場合、Webサーバーからブラウザに認証が必要であることが伝えられます。これにより、ブラウザ上に認証ダイアログが表示され、ユーザー名およびパスワードの入力を求めます。認証された後、特定のユーザーだけがアクセス可能なページや階層の利用が可能となります。
    Basic認証は、Webサーバーの機能であり、基本的にはほとんどのWebサーバーで使用可能です。ただし、レンタルサーバーを使用している場合には設定が行えない場合があります。

    Basic認証のメリット
    Basic認証は長く用いられてきた認証方法であり、主に以下3つのメリットがあります。

    ・簡単に設定できる
    Basic認証は「.htaccess」ファイルと「.htpasswd」ファイルの2つのファイルの設置のみで使用が可能なため、比較的簡便に設定できます。ファイルの作成はメモ帳で行えるため、急場しのぎの場合や簡易的にセキュリティ対策が必要な時に効果的です。手軽に認証機能を追加したい場合に有効な手段です。

    ・ログイン情報が記憶される
    Basic認証に成功した後、ブラウザを閉じなければ、別のWebサイトを見た後でもまた認証なしで閲覧できます。また、Basic認証に一度成功すれば、ユーザー名とパスワードはこの時使用したブラウザに記憶されます。次にログインする際に再入力の手間がかかりません。
    ただし、別のデバイスやブラウザからアクセスする際には再度認証が必要となります。また、ブラウザの種類やネットワーク状態によってはログイン情報の記録ができない場合があります。加えて、スマホでもログイン情報が記憶されないことが多いです。

    ・ディレクトリ単位でアクセス制限ができる
    Basic認証は「.htaccess」ファイルを置いたディレクトリが認証の範囲となるため、同じ階層に一括でアクセス制限を加えることが可能です。また、「.htaccess」ファイル内に細かい設定を施すことで、特定のページや範囲にのみアクセス制限を施すことも可能で便利です。PDFファイルや画像などにもアクセス制限をかけられます。

    Basic認証のデメリット
    Basic認証の主なデメリットは以下3点です。

    ・クローラーが巡回できない
    クローラーとはWebサイトの情報を自動で収集するプログラムで、Webの検索結果を表示するために動いています。検索エンジンの検索結果はクローラーが巡回して収集した情報をもとに表示されています。
    しかし、Basic認証を施すことで、クローラーも制限されたページを巡回できなくなり、検索結果に表示されなくなります。SEO対策で検索結果を上位表示させたい場合にはBasic認証は悪影響になるため、避けた方が望ましいです。

    ・サーバーをまたいだ認証設定が不可能
    Basic認証によりアクセスが制限される範囲は、ディレクトリ単位となるため、複数のサーバーをまたいだ設定は不可能です。複数のサーバーが存在する場合は、それぞれのサーバーごとにファイルを設定する必要があります。

    ・セキュリティが脆弱
    Basic認証では、ユーザー名とパスワードはBase64という簡単なコードに変換されますが、デコードによって元の文字列が簡単にわかってしまいます。認証を行うたびに、ユーザー名とパスワードが暗号化されないまま送信されるので、通信を傍受して情報を盗み取られるリスクがあります。
    また、一度ログインすると、ブラウザにユーザー名とパスワードが保存される仕組みで、ログアウトの機能がありません。そのため、悪意のある第三者がブラウザを勝手に利用すれば、情報の漏えいや悪用のおそれがあります。パソコンの共用を避けたり、一時的に席を離れる時はパソコンの画面にロックをかけたりといった対策が必要です。
    Basic認証は手軽な認証方法ですが、それだけでは不十分です。機密情報を含まない情報を、限られたユーザー間でやり取りする場合のみ、Basic認証を利用しても問題ありませんが、それ以外のケースでは、より安全性の高い認証方式を採用するべきです。また、WAFの導入によってセキュリティ対策を強化すると良いでしょう。

    まとめ
    Basic認証は簡単に設定することができるうえ、ディレクトリ単位でアクセス制限ができるため、急ぎでセキュリティ対策を行う場合や細かく制限をかけたい時に便利な認証方法です。ただし、脆弱性も指摘されています。
    脆弱性をカバーするには、より安全な認証方式を採用するほか、Webサイトの保護に特化したセキュリティ対策・WAFサービスの導入が有効です。Basic認証で対応しきれない悪意のある攻撃からWebサイトを保護できます。
    また、「Cloudbric WAF+」は、Webセキュリティに必須なWAF・DDoS攻撃対策・脅威IP遮断サービスなど5つのサービスがひとつに統合されているため、セキュリティをより強化したい企業におすすめです。

    ▼WAFをはじめとする多彩な機能がひとつに。企業向けWebセキュリティ対策なら「Cloudbric WAF+」
    ▼製品・サービスに関するお問い合わせはこちら

    The post Basic認証とは?メリット・デメリットや脆弱性を徹底解説 first appeared on Cloudbric(クラウドブリック).
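補足として、本文で触れられている「Base64は簡単にデコードできる」という点は、次のような数行のPythonスケッチで確認できます。ユーザー名・パスワードは説明用の仮の値であり、実際のBasic認証の実装例ではありません。

```python
# Basic認証の弱点を確認するためのスケッチ(認証情報は説明用の仮の値)。
# Authorization ヘッダーの値は「Basic + base64("ユーザー名:パスワード")」に過ぎず、
# 暗号化ではないため、通信を傍受されれば簡単に元の文字列へ戻せます。
import base64

credentials = "user:P@ssw0rd"  # 仮の認証情報

# ブラウザが送信するヘッダー値を再現する
header_value = "Basic " + base64.b64encode(credentials.encode("utf-8")).decode("ascii")
print(header_value)   # Basic dXNlcjpQQHNzdzByZA==

# 傍受した第三者は一行でデコードできてしまう
decoded = base64.b64decode(header_value.split(" ", 1)[1]).decode("utf-8")
print(decoded)        # user:P@ssw0rd
```

このように認証情報が平文同然で流れるため、本文で述べられているとおり、機密情報を扱う場合はより安全性の高い認証方式やWAFなどの対策を併用することが前提になります。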

  • 「2024 Globee Awards for Cybersecurity」でCloudbric WAF+が銀賞を受賞
    by cloudbric on 2024年4月10日 at 10:00 AM

    このたび、ペンタセキュリティ株式会社は、2024年の「Globee Awards for Cybersecurity」において、セキュリティハードウェア部門で金賞(WAPPLES)、Webアプリケーションセキュリティ&ファイアウォール部門で銀賞(Cloudbric WAF+)、データセキュリティ部門で銅賞(D’Amo)を受賞しました。

    今年20回目を迎えた「Globee Awards for Cybersecurity」は、サイバーセキュリティ分野における優れた企業、製品、個人を表彰することを目的にしています。さまざまな組織や業界から集まった500名以上の専門家が審査を行い、卓越した業績を上げた受賞者が選定されました。

    クラウド型WAFサービス「Cloudbric WAF+」は『セキュリティに詳しくない非専門家も手軽に運用できるサービスである』点が評価され、今回の受賞に至りました。

    2024年の受賞リスト
    https://globeeawards.com/cybersecurity/winners/

    The post 「2024 Globee Awards for Cybersecurity」でCloudbric WAF+が銀賞を受賞 first appeared on Cloudbric(クラウドブリック).

  • 【イベント】「Japan IT Week 春」 情報セキュリティEXPOに出展
    by cloudbric on 2024年4月3日 at 10:00 AM

    このたび、ペンタセキュリティ株式会社は、2024年4月24日(水)~26日(金)に東京ビッグサイトで開催される「Japan IT Week 春」の情報セキュリティEXPOに出展します。

    ■出展内容
    ペンタセキュリティのブースでは、データ暗号化ソリューション「D’Amo」と、クラウド型セキュリティプラットフォーム「Cloudbric」を紹介します。サイバー攻撃が巧妙化し、企業の持つ機密情報・個人情報の漏えい事故が多発する昨今、サイバーセキュリティ対策は急務です。ペンタセキュリティでは、外部からの攻撃を防御するサービス(Cloudbric)から企業内部にあるデータを暗号化する製品(D’Amo)まで取り揃えており、企業のセキュリティ課題を解決します。

    ■開催概要
    名称:Japan IT Week 春 情報セキュリティEXPO
    主催:RX Japan株式会社
    開催日時:2024年4月24日(水)~26日(金)10:00-18:00(最終日のみ17:00まで)
    会場:東京ビッグサイト 東ホール
    小間番号:26-5(KOREA PAVILION内、KISIA共同出展)
    URL:https://www.japan-it.jp/spring/ja-jp/about/ist.html
    来場事前登録(無料): https://www.japan-it.jp/spring/ja-jp/register.html?code=1022280177481867-HU2

    The post 【イベント】「Japan IT Week 春」 情報セキュリティEXPOに出展 first appeared on Cloudbric(クラウドブリック).


Barracuda バラクーダネットワークス メール保護、アプリケーション/クラウドセキュリティ、ネットワークセキュリティ、データ保護


wizSafe Security Signal

  • wizSafe Security Signal 2024年3月 観測レポート
    by SOCチーム on 2024年4月26日 at 10:26 AM

    本レポートでは、2024年3月中に発生した観測情報と事案についてまとめています。 目次 DDoS攻撃の観測情報 IIJマネージドセキュリティサービスの観測情報 Web/メールのマルウェア脅威の観測情報 セキュリティインシ … "wizSafe Security Signal 2024年3月 観測レポート" の続きを読む

  • wizSafe Security Signal 2024年2月 観測レポート
    by SOCチーム on 2024年3月28日 at 1:17 PM

    本レポートでは、2024年2月中に発生した観測情報と事案についてまとめています。 目次 DDoS攻撃の観測情報 IIJマネージドセキュリティサービスの観測情報 Web/メールのマルウェア脅威の観測情報 セキュリティインシ … "wizSafe Security Signal 2024年2月 観測レポート" の続きを読む

  • wizSafe Security Signal 2024年1月 観測レポート
    by SOCチーム on 2024年2月28日 at 3:23 PM

    本レポートでは、2024年1月中に発生した観測情報と事案についてまとめています。 目次 DDoS攻撃の観測情報 IIJマネージドセキュリティサービスの観測情報 Web/メールのマルウェア脅威の観測情報 セキュリティインシ … "wizSafe Security Signal 2024年1月 観測レポート" の続きを読む



トレンドマイクロ セキュリティブログ セキュリティ(ウイルスや脆弱性による攻撃)の最新動向を追うなら、Regional TrendLabs ウイルス解析担当者が執筆するトレンドマイクロ セキュリティ ブログ。

  • サイト移転のお知らせ
    by Trend Micro on 2022年6月30日 at 8:19 AM

    セキュリティブログは新設サイトに移動しました。最新の記事はこちらから The post サイト移転のお知らせ first appeared on トレンドマイクロ セキュリティブログ.

  • デジタル環境のアタックサーフェス(攻撃対象領域)を理解する
    by Trend Micro on 2022年6月29日 at 7:30 PM

    トレンドマイクロの最新調査から、増大するデジタル環境で攻撃を受けやすい領域(以下、アタックサーフェス(攻撃対象領域))に対してサイバーセキュリティのリスク管理に苦慮する企業の実態が明らかになりました。 デジタル環境のアタ... The post デジタル環境のアタックサーフェス(攻撃対象領域)を理解する first appeared on トレンドマイクロ セキュリティブログ.

  • Codexとサイバー攻撃④:Codexは攻撃者の活動に悪用できるのか?
    by Trend Micro on 2022年6月28日 at 10:02 PM

    このブログシリーズでは、自然言語処理モデル「Generative Pre-trained Transformerの第3バージョン(GPT-3)」の機能を持つ「Codex」についてさまざまな視点を交えて解説し、開発者だけで... The post Codexとサイバー攻撃④:Codexは攻撃者の活動に悪用できるのか? first appeared on トレンドマイクロ セキュリティブログ.

  • Codexとサイバー攻撃③:タスクの自動化と出力内容の一貫性
    by Trend Micro on 2022年6月23日 at 8:00 PM

    このブログシリーズでは、自然言語モデル「Generative Pre-trained Transformerの第3バージョン(GPT-3)」の機能を持つ「Codex」についてさまざまな視点を交えて解説し、開発者だけでなく... The post Codexとサイバー攻撃③:タスクの自動化と出力内容の一貫性 first appeared on トレンドマイクロ セキュリティブログ.

  • 「偽サイト騒動」の背後に不審なWebプロキシサイトを確認
    by セキュリティエバンジェリスト 岡本 勝之 on 2022年6月22日 at 8:15 PM

    この6月に入り、官公庁や市町村のWebページの「偽サイト」が検索上位に登場するなどの報告が相次ぎ、15日にはNISC(内閣サイバーセキュリティセンター)から注意喚起が発出される事態となりました。トレンドマイクロでこれら「... The post 「偽サイト騒動」の背後に不審なWebプロキシサイトを確認 first appeared on トレンドマイクロ セキュリティブログ.



Nota Bene | Eugene Kaspersky Official Blog in Japanese ユージン・カスペルスキーは語る – 公式ブログ


株式会社FFRIセキュリティ 株式会社FFRIセキュリティ