This is the 5.4.64 stable release
-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAl9ZDSIACgkQONu9yGCS
aT5GkA/+I3VF/vpyQXLEY3lCOxUPWsbsU+NTx5x6g4ujFLPzzXISvxaQT3FdyTez
73nRDbEUwGX8b1Ruylg6PoRjNAilwvYB8gS/4TVxJQ/VtIyG7uFqjDK7vuGJT5xW
+Pf+PSoJjJLfQfu6TzSZwMit5K8wfTk4egESeZ7KUH3IsLvlNs4Xegkpm1pkl8nZ
jb3FT5vTPd425Qld6odkfVTj0QJ8JniL8U54YTBXjw6uEMRGsjMGsy91gXNQbgjf
fhhhRrpFhnDE9rJFtLEVaXUbQ2j3+mjS5lSH/2erpXO+U19yeNLElwpltnHPFrJF
vDjkvlWdoQKs1+JXNzVQZF9H+omQbTcU8gcRB+s8EbSV2+bcpIdNeas00GaumJW1
l6660A74mKPN4Vii5YioD9GcsJHgKRkbgJkoxu7QnegiHGoHTfToNVgwz2bQgT34
JXbZXyhfLOTR5zpczJ3gyBHX+Va3dyHJypyRMgvgyvDW+TZS9By8iAaqXs14eGjG
8nm5dlaiZyAeburIUyi8vFZZT/5BA42b1xyUZcduKmqlMjRu9fxCHlBCwj5rjcy5
Psin0EYZcwOtA4mKzIH+w1ZB0qsPLYtLYQZaJzPUsUfzoNvYtU7pbQZEVLtUPMf4
5MbOPLjT+aki4TGQOR+et29kusapeLEfrc3SgfLwYODmDXmR3cE=
=N3oJ
-----END PGP SIGNATURE-----

Merge 5.4.64 into android11-5.4-lts

Changes in 5.4.64
	HID: quirks: Always poll three more Lenovo PixArt mice
	drm/msm/dpu: Fix scale params in plane validation
	tty: serial: qcom_geni_serial: Drop __init from qcom_geni_console_setup
	drm/msm: add shutdown support for display platform_driver
	hwmon: (applesmc) check status earlier.
	nvmet: Disable keep-alive timer when kato is cleared to 0h
	drm/msm: enable vblank during atomic commits
	habanalabs: validate FW file size
	habanalabs: check correct vmalloc return code
	drm/msm/a6xx: fix gmu start on newer firmware
	ceph: don't allow setlease on cephfs
	drm/omap: fix incorrect lock state
	cpuidle: Fixup IRQ state
	nbd: restore default timeout when setting it to zero
	s390: don't trace preemption in percpu macros
	drm/amd/display: Reject overlay plane configurations in multi-display scenarios
	drivers: gpu: amd: Initialize amdgpu_dm_backlight_caps object to 0 in amdgpu_dm_update_backlight_caps
	drm/amd/display: Retry AUX write when fail occurs
	drm/amd/display: Fix memleak in amdgpu_dm_mode_config_init
	xen/xenbus: Fix granting of vmalloc'd memory
	fsldma: fix very broken 32-bit ppc ioread64 functionality
	dmaengine: of-dma: Fix of_dma_router_xlate's of_dma_xlate handling
	batman-adv: Avoid uninitialized chaddr when handling DHCP
	batman-adv: Fix own OGM check in aggregated OGMs
	batman-adv: bla: use netif_rx_ni when not in interrupt context
	dmaengine: at_hdmac: check return value of of_find_device_by_node() in at_dma_xlate()
	rxrpc: Keep the ACK serial in a var in rxrpc_input_ack()
	rxrpc: Make rxrpc_kernel_get_srtt() indicate validity
	MIPS: mm: BMIPS5000 has inclusive physical caches
	MIPS: BMIPS: Also call bmips_cpu_setup() for secondary cores
	mmc: sdhci-acpi: Fix HS400 tuning for AMDI0040
	netfilter: nf_tables: add NFTA_SET_USERDATA if not null
	netfilter: nf_tables: incorrect enum nft_list_attributes definition
	netfilter: nf_tables: fix destination register zeroing
	net: hns: Fix memleak in hns_nic_dev_probe
	net: systemport: Fix memleak in bcm_sysport_probe
	ravb: Fixed to be able to unload modules
	net: arc_emac: Fix memleak in arc_mdio_probe
	dmaengine: pl330: Fix burst length if burst size is smaller than bus width
	gtp: add GTPA_LINK info to msg sent to userspace
	net: ethernet: ti: cpsw: fix clean up of vlan mc entries for host port
	bnxt_en: Don't query FW when netif_running() is false.
	bnxt_en: Check for zero dir entries in NVRAM.
	bnxt_en: Fix PCI AER error recovery flow
	bnxt_en: Fix possible crash in bnxt_fw_reset_task().
	bnxt_en: fix HWRM error when querying VF temperature
	xfs: fix boundary test in xfs_attr_shortform_verify
	bnxt: don't enable NAPI until rings are ready
	media: vicodec: add missing v4l2_ctrl_request_hdl_put()
	media: cedrus: Add missing v4l2_ctrl_request_hdl_put()
	selftests/bpf: Fix massive output from test_maps
	net: dsa: mt7530: fix advertising unsupported 1000baseT_Half
	netfilter: nfnetlink: nfnetlink_unicast() reports EAGAIN instead of ENOBUFS
	nvmet-fc: Fix a missed _irqsave version of spin_lock in 'nvmet_fc_fod_op_done()'
	nvme: fix controller instance leak
	cxgb4: fix thermal zone device registration
	perf tools: Correct SNOOPX field offset
	net: ethernet: mlx4: Fix memory allocation in mlx4_buddy_init()
	fix regression in "epoll: Keep a reference on files added to the check list"
	net: gemini: Fix another missing clk_disable_unprepare() in probe
	MIPS: add missing MSACSR and upper MSA initialization
	xfs: fix xfs_bmap_validate_extent_raw when checking attr fork of rt files
	perf jevents: Fix suspicious code in fixregex()
	tg3: Fix soft lockup when tg3_reset_task() fails.
	x86, fakenuma: Fix invalid starting node ID
	iommu/vt-d: Serialize IOMMU GCMD register modifications
	thermal: ti-soc-thermal: Fix bogus thermal shutdowns for omap4430
	thermal: qcom-spmi-temp-alarm: Don't suppress negative temp
	iommu/amd: Restore IRTE.RemapEn bit after programming IRTE
	net/packet: fix overflow in tpacket_rcv
	include/linux/log2.h: add missing () around n in roundup_pow_of_two()
	vfio/type1: Support faulting PFNMAP vmas
	vfio-pci: Fault mmaps to enable vma tracking
	vfio-pci: Invalidate mmaps and block MMIO access on disabled memory
	iommu/vt-d: Handle 36bit addressing for x86-32
	tracing/kprobes, x86/ptrace: Fix regs argument order for i386
	ext2: don't update mtime on COW faults
	xfs: don't update mtime on COW faults
	ARC: perf: don't bail setup if pct irq missing in device-tree
	btrfs: drop path before adding new uuid tree entry
	btrfs: allocate scrub workqueues outside of locks
	btrfs: set the correct lockdep class for new nodes
	btrfs: set the lockdep class for log tree extent buffers
	btrfs: tree-checker: fix the error message for transid error
	net: core: use listified Rx for GRO_NORMAL in napi_gro_receive()
	btrfs: fix potential deadlock in the search ioctl
	Revert "net: dsa: microchip: set the correct number of ports"
	Revert "ALSA: hda: Add support for Loongson 7A1000 controller"
	ALSA: ca0106: fix error code handling
	ALSA: usb-audio: Add implicit feedback quirk for UR22C
	ALSA: pcm: oss: Remove superfluous WARN_ON() for mulaw sanity check
	ALSA: hda/hdmi: always check pin power status in i915 pin fixup
	ALSA: firewire-digi00x: exclude Avid Adrenaline from detection
	ALSA: hda - Fix silent audio output and corrupted input on MSI X570-A PRO
	ALSA; firewire-tascam: exclude Tascam FE-8 from detection
	ALSA: hda/realtek: Add quirk for Samsung Galaxy Book Ion NT950XCJ-X716A
	ALSA: hda/realtek - Improved routing for Thinkpad X1 7th/8th Gen
	arm64: dts: mt7622: add reset node for mmc device
	mmc: mediatek: add optional module reset property
	mmc: dt-bindings: Add resets/reset-names for Mediatek MMC bindings
	mmc: cqhci: Add cqhci_deactivate()
	mmc: sdhci-pci: Fix SDHCI_RESET_ALL for CQHCI for Intel GLK-based controllers
	media: rc: do not access device via sysfs after rc_unregister_device()
	media: rc: uevent sysfs file races with rc_unregister_device()
	affs: fix basic permission bits to actually work
	block: allow for_each_bvec to support zero len bvec
	block: ensure bdi->io_pages is always initialized
	libata: implement ATA_HORKAGE_MAX_TRIM_128M and apply to Sandisks
	blk-iocost: ioc_pd_free() shouldn't assume irq disabled
	dmaengine: dw-edma: Fix scatter-gather address calculation
	drm/amd/pm: avoid false alarm due to confusing softwareshutdowntemp setting
	dm writecache: handle DAX to partitions on persistent memory correctly
	dm mpath: fix racey management of PG initialization
	dm integrity: fix error reporting in bitmap mode after creation
	dm crypt: Initialize crypto wait structures
	dm cache metadata: Avoid returning cmd->bm wild pointer on error
	dm thin metadata: Avoid returning cmd->bm wild pointer on error
	dm thin metadata: Fix use-after-free in dm_bm_set_read_only
	mm: slub: fix conversion of freelist_corrupted()
	mm: madvise: fix vma user-after-free
	vfio/pci: Fix SR-IOV VF handling with MMIO blocking
	perf record: Correct the help info of option "--no-bpf-event"
	sdhci: tegra: Add missing TMCLK for data timeout
	checkpatch: fix the usage of capture group ( ... )
	mm/hugetlb: fix a race between hugetlb sysctl handlers
	mm/khugepaged.c: fix khugepaged's request size in collapse_file
	cfg80211: regulatory: reject invalid hints
	net: usb: Fix uninit-was-stored issue in asix_read_phy_addr()
	Linux 5.4.64

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I34f83b15e9f9a59529f8d67a434577becf25d1a6
commit a3775e2a89
140 changed files with 1501 additions and 501 deletions
@@ -49,6 +49,8 @@ Optional properties:
	     error caused by stop clock(fifo full)
	     Valid range = [0:0x7]. if not present, default value is 0.
	     applied to compatible "mediatek,mt2701-mmc".
+- resets: Phandle and reset specifier pair to softreset line of MSDC IP.
+- reset-names: Should be "hrst".
 
 Examples:
 mmc0: mmc@11230000 {

@@ -93,13 +93,15 @@ The Amiga protection flags RWEDRWEDHSPARWED are handled as follows:
 
   - R maps to r for user, group and others. On directories, R implies x.
 
-  - If both W and D are allowed, w will be set.
+  - W maps to w.
 
   - E maps to x.
 
-  - H and P are always retained and ignored under Linux.
+  - D is ignored.
 
-  - A is always reset when a file is written to.
+  - H, S and P are always retained and ignored under Linux.
+
+  - A is cleared when a file is written to.
 
 User id and group id will be used unless set[gu]id are given as mount
 options. Since most of the Amiga file systems are single user systems

@@ -111,11 +113,13 @@ Linux -> Amiga:
 
 The Linux rwxrwxrwx file mode is handled as follows:
 
-  - r permission will set R for user, group and others.
+  - r permission will allow R for user, group and others.
 
-  - w permission will set W and D for user, group and others.
+  - w permission will allow W for user, group and others.
 
-  - x permission of the user will set E for plain files.
+  - x permission of the user will allow E for plain files.
+
+  - D will be allowed for user, group and others.
 
   - All other flags (suid, sgid, ...) are ignored and will
     not be retained.

Makefile (2 changes)

@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 4
-SUBLEVEL = 63
+SUBLEVEL = 64
 EXTRAVERSION =
 NAME = Kleptomaniac Octopus

@@ -562,7 +562,7 @@ static int arc_pmu_device_probe(struct platform_device *pdev)
 {
	struct arc_reg_pct_build pct_bcr;
	struct arc_reg_cc_build cc_bcr;
-	int i, has_interrupts;
+	int i, has_interrupts, irq;
	int counter_size;	/* in bits */
 
	union cc_name {

@@ -637,13 +637,7 @@ static int arc_pmu_device_probe(struct platform_device *pdev)
		.attr_groups	= arc_pmu->attr_groups,
	};
 
-	if (has_interrupts) {
-		int irq = platform_get_irq(pdev, 0);
-
-		if (irq < 0) {
-			pr_err("Cannot get IRQ number for the platform\n");
-			return -ENODEV;
-		}
+	if (has_interrupts && (irq = platform_get_irq(pdev, 0)) >= 0) {
 
		arc_pmu->irq = irq;
 

@@ -652,9 +646,9 @@ static int arc_pmu_device_probe(struct platform_device *pdev)
					 this_cpu_ptr(&arc_pmu_cpu));
 
		on_each_cpu(arc_cpu_pmu_irq_init, &irq, 1);
 
-	} else
+	} else {
		arc_pmu->pmu.capabilities |= PERF_PMU_CAP_NO_INTERRUPT;
+	}
 
	/*
	 * perf parser doesn't really like '-' symbol in events name, so let's

@@ -686,6 +686,8 @@
			clocks = <&pericfg CLK_PERI_MSDC30_0_PD>,
				 <&topckgen CLK_TOP_MSDC50_0_SEL>;
			clock-names = "source", "hclk";
+			resets = <&pericfg MT7622_PERI_MSDC0_SW_RST>;
+			reset-names = "hrst";
			status = "disabled";
		};
 

@@ -241,6 +241,8 @@ static int bmips_boot_secondary(int cpu, struct task_struct *idle)
  */
 static void bmips_init_secondary(void)
 {
+	bmips_cpu_setup();
+
	switch (current_cpu_type()) {
	case CPU_BMIPS4350:
	case CPU_BMIPS4380:

@@ -1240,6 +1240,18 @@ static int enable_restore_fp_context(int msa)
		err = own_fpu_inatomic(1);
		if (msa && !err) {
			enable_msa();
+			/*
+			 * with MSA enabled, userspace can see MSACSR
+			 * and MSA regs, but the values in them are from
+			 * other task before current task, restore them
+			 * from saved fp/msa context
+			 */
+			write_msa_csr(current->thread.fpu.msacsr);
+			/*
+			 * own_fpu_inatomic(1) just restore low 64bit,
+			 * fix the high 64bit
+			 */
+			init_msa_upper();
			set_thread_flag(TIF_USEDMSA);
			set_thread_flag(TIF_MSA_CTX_LIVE);
		}

@@ -1676,7 +1676,11 @@ static void setup_scache(void)
		printk("MIPS secondary cache %ldkB, %s, linesize %d bytes.\n",
		       scache_size >> 10,
		       way_string[c->scache.ways], c->scache.linesz);
+
+		if (current_cpu_type() == CPU_BMIPS5000)
+			c->options |= MIPS_CPU_INCLUSIVE_CACHES;
	}
+
 #else
	if (!(c->scache.flags & MIPS_CACHE_NOT_PRESENT))
		panic("Dunno how to handle MIPS32 / MIPS64 second level cache");

@@ -29,7 +29,7 @@
	typedef typeof(pcp) pcp_op_T__;		\
	pcp_op_T__ old__, new__, prev__;	\
	pcp_op_T__ *ptr__;			\
-	preempt_disable();			\
+	preempt_disable_notrace();		\
	ptr__ = raw_cpu_ptr(&(pcp));		\
	prev__ = *ptr__;			\
	do {					\

@@ -37,7 +37,7 @@
		new__ = old__ op (val);		\
		prev__ = cmpxchg(ptr__, old__, new__); \
	} while (prev__ != old__);		\
-	preempt_enable();			\
+	preempt_enable_notrace();		\
	new__;					\
 })

@@ -68,7 +68,7 @@
	typedef typeof(pcp) pcp_op_T__;		\
	pcp_op_T__ val__ = (val);		\
	pcp_op_T__ old__, *ptr__;		\
-	preempt_disable();			\
+	preempt_disable_notrace();		\
	ptr__ = raw_cpu_ptr(&(pcp));		\
	if (__builtin_constant_p(val__) &&	\
	    ((szcast)val__ > -129) && ((szcast)val__ < 128)) { \

@@ -84,7 +84,7 @@
		: [val__] "d" (val__)		\
		: "cc");			\
	}					\
-	preempt_enable();			\
+	preempt_enable_notrace();		\
 }
 
 #define this_cpu_add_4(pcp, val) arch_this_cpu_add(pcp, val, "laa", "asi", int)

@@ -95,14 +95,14 @@
	typedef typeof(pcp) pcp_op_T__;		\
	pcp_op_T__ val__ = (val);		\
	pcp_op_T__ old__, *ptr__;		\
-	preempt_disable();			\
+	preempt_disable_notrace();		\
	ptr__ = raw_cpu_ptr(&(pcp));		\
	asm volatile(				\
		op "    %[old__],%[val__],%[ptr__]\n" \
		: [old__] "=d" (old__), [ptr__] "+Q" (*ptr__) \
		: [val__] "d" (val__)		\
		: "cc");			\
-	preempt_enable();			\
+	preempt_enable_notrace();		\
	old__ + val__;				\
 })

@@ -114,14 +114,14 @@
	typedef typeof(pcp) pcp_op_T__;		\
	pcp_op_T__ val__ = (val);		\
	pcp_op_T__ old__, *ptr__;		\
-	preempt_disable();			\
+	preempt_disable_notrace();		\
	ptr__ = raw_cpu_ptr(&(pcp));		\
	asm volatile(				\
		op "    %[old__],%[val__],%[ptr__]\n" \
		: [old__] "=d" (old__), [ptr__] "+Q" (*ptr__) \
		: [val__] "d" (val__)		\
		: "cc");			\
-	preempt_enable();			\
+	preempt_enable_notrace();		\
 }
 
 #define this_cpu_and_4(pcp, val)	arch_this_cpu_to_op(pcp, val, "lan")

@@ -136,10 +136,10 @@
	typedef typeof(pcp) pcp_op_T__;		\
	pcp_op_T__ ret__;			\
	pcp_op_T__ *ptr__;			\
-	preempt_disable();			\
+	preempt_disable_notrace();		\
	ptr__ = raw_cpu_ptr(&(pcp));		\
	ret__ = cmpxchg(ptr__, oval, nval);	\
-	preempt_enable();			\
+	preempt_enable_notrace();		\
	ret__;					\
 })

@@ -152,10 +152,10 @@
 ({						\
	typeof(pcp) *ptr__;			\
	typeof(pcp) ret__;			\
-	preempt_disable();			\
+	preempt_disable_notrace();		\
	ptr__ = raw_cpu_ptr(&(pcp));		\
	ret__ = xchg(ptr__, nval);		\
-	preempt_enable();			\
+	preempt_enable_notrace();		\
	ret__;					\
 })

@@ -171,11 +171,11 @@
	typeof(pcp1) *p1__;			\
	typeof(pcp2) *p2__;			\
	int ret__;				\
-	preempt_disable();			\
+	preempt_disable_notrace();		\
	p1__ = raw_cpu_ptr(&(pcp1));		\
	p2__ = raw_cpu_ptr(&(pcp2));		\
	ret__ = __cmpxchg_double(p1__, p2__, o1__, o2__, n1__, n2__); \
-	preempt_enable();			\
+	preempt_enable_notrace();		\
	ret__;					\
 })

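The nine hunks above all make the same substitution: the percpu macros are built on a read/compute/cmpxchg retry loop, and the preemption guards around that loop must not themselves be traceable, or a tracer using this_cpu ops recurses back into these macros. A minimal userspace model of the retry loop itself (C11 atomics standing in for the kernel's cmpxchg(); illustrative only, not the kernel code):

#include <stdatomic.h>
#include <stdio.h>

static _Atomic long counter;

/* Read the current value, compute the update, and retry the
 * compare-and-swap until no other context raced in between. */
static long percpu_style_add(long val)
{
	long old = atomic_load(&counter);
	long new;

	do {
		new = old + val;
		/* on failure, 'old' is reloaded with the current value */
	} while (!atomic_compare_exchange_weak(&counter, &old, new));
	return new;
}

int main(void)
{
	printf("%ld\n", percpu_style_add(5)); /* 5 */
	printf("%ld\n", percpu_style_add(3)); /* 8 */
	return 0;
}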
@@ -309,8 +309,8 @@ static inline unsigned long regs_get_kernel_argument(struct pt_regs *regs,
	static const unsigned int argument_offs[] = {
 #ifdef __i386__
		offsetof(struct pt_regs, ax),
-		offsetof(struct pt_regs, cx),
		offsetof(struct pt_regs, dx),
+		offsetof(struct pt_regs, cx),
 #define NR_REG_ARGUMENTS 3
 #else
		offsetof(struct pt_regs, di),

@@ -321,7 +321,7 @@ static int __init split_nodes_size_interleave(struct numa_meminfo *ei,
					      u64 addr, u64 max_addr, u64 size)
 {
	return split_nodes_size_interleave_uniform(ei, pi, addr, max_addr, size,
-			0, NULL, NUMA_NO_NODE);
+			0, NULL, 0);
 }
 
 int __init setup_emu2phys_nid(int *dfl_phys_nid)

@@ -506,6 +506,7 @@ struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id)
		goto fail_stats;
 
	q->backing_dev_info->ra_pages = VM_READAHEAD_PAGES;
+	q->backing_dev_info->io_pages = VM_READAHEAD_PAGES;
	q->backing_dev_info->capabilities = BDI_CAP_CGROUP_WRITEBACK;
	q->backing_dev_info->name = "block";
	q->node = node_id;

@@ -2074,14 +2074,15 @@ static void ioc_pd_free(struct blkg_policy_data *pd)
 {
	struct ioc_gq *iocg = pd_to_iocg(pd);
	struct ioc *ioc = iocg->ioc;
+	unsigned long flags;
 
	if (ioc) {
-		spin_lock(&ioc->lock);
+		spin_lock_irqsave(&ioc->lock, flags);
		if (!list_empty(&iocg->active_list)) {
			propagate_active_weight(iocg, 0, 0);
			list_del_init(&iocg->active_list);
		}
-		spin_unlock(&ioc->lock);
+		spin_unlock_irqrestore(&ioc->lock, flags);
 
		hrtimer_cancel(&iocg->waitq_timer);
		hrtimer_cancel(&iocg->delay_timer);

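The rule this hunk applies: a lock that is also taken from timer or IRQ context, or whose callers' IRQ state is unknown, must save and restore the IRQ flags rather than assume interrupts are already off. Kernel-style sketch of the shape (names are placeholders; not compilable on its own):

	unsigned long flags;

	spin_lock_irqsave(&some_lock, flags);      /* safe with IRQs on or off */
	/* ... touch state shared with hrtimer/IRQ context ... */
	spin_unlock_irqrestore(&some_lock, flags); /* restores the saved state */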
@@ -4474,9 +4474,8 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
	/* https://bugzilla.kernel.org/show_bug.cgi?id=15573 */
	{ "C300-CTFDDAC128MAG",	"0001",		ATA_HORKAGE_NONCQ, },
 
-	/* Some Sandisk SSDs lock up hard with NCQ enabled.  Reported on
-	   SD7SN6S256G and SD8SN8U256G */
-	{ "SanDisk SD[78]SN*G",	NULL,		ATA_HORKAGE_NONCQ, },
+	/* Sandisk SD7/8/9s lock up hard on large trims */
+	{ "SanDisk SD[789]*",	NULL,		ATA_HORKAGE_MAX_TRIM_128M, },
 
	/* devices which puke on READ_NATIVE_MAX */
	{ "HDS724040KLSA80",	"KFAOA20N",	ATA_HORKAGE_BROKEN_HPA, },

@@ -2374,6 +2374,7 @@ static unsigned int ata_scsiop_inq_89(struct ata_scsi_args *args, u8 *rbuf)
 
 static unsigned int ata_scsiop_inq_b0(struct ata_scsi_args *args, u8 *rbuf)
 {
+	struct ata_device *dev = args->dev;
	u16 min_io_sectors;
 
	rbuf[1] = 0xb0;

@@ -2399,7 +2400,12 @@ static unsigned int ata_scsiop_inq_b0(struct ata_scsi_args *args, u8 *rbuf)
	 * with the unmap bit set.
	 */
	if (ata_id_has_trim(args->id)) {
-		put_unaligned_be64(65535 * ATA_MAX_TRIM_RNUM, &rbuf[36]);
+		u64 max_blocks = 65535 * ATA_MAX_TRIM_RNUM;
+
+		if (dev->horkage & ATA_HORKAGE_MAX_TRIM_128M)
+			max_blocks = 128 << (20 - SECTOR_SHIFT);
+
+		put_unaligned_be64(max_blocks, &rbuf[36]);
		put_unaligned_be32(1, &rbuf[28]);
	}
 

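The new cap is expressed in 512-byte blocks: 128 << (20 - SECTOR_SHIFT) with SECTOR_SHIFT == 9 is 262144 blocks (exactly 128 MiB), against a default of 65535 * ATA_MAX_TRIM_RNUM. A runnable check of the arithmetic (ATA_MAX_TRIM_RNUM is 64 in the 5.4 headers):

#include <stdio.h>
#include <stdint.h>

#define SECTOR_SHIFT      9   /* 512-byte logical blocks */
#define ATA_MAX_TRIM_RNUM 64  /* value in the 5.4 headers */

int main(void)
{
	uint64_t def    = 65535ULL * ATA_MAX_TRIM_RNUM; /* 4194240 blocks, just under 2 GiB */
	uint64_t capped = 128 << (20 - SECTOR_SHIFT);   /*  262144 blocks, exactly 128 MiB  */

	printf("default cap: %llu blocks\n", (unsigned long long)def);
	printf("128M cap   : %llu blocks\n", (unsigned long long)capped);
	return 0;
}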
@@ -1349,6 +1349,8 @@ static void nbd_set_cmd_timeout(struct nbd_device *nbd, u64 timeout)
	nbd->tag_set.timeout = timeout * HZ;
	if (timeout)
		blk_queue_rq_timeout(nbd->disk->queue, timeout * HZ);
+	else
+		blk_queue_rq_timeout(nbd->disk->queue, 30 * HZ);
 }
 
 /* Must be called with config_lock held */

@@ -148,7 +148,8 @@ static void enter_s2idle_proper(struct cpuidle_driver *drv,
	 */
	stop_critical_timings();
	drv->states[index].enter_s2idle(dev, drv, index);
-	WARN_ON(!irqs_disabled());
+	if (WARN_ON_ONCE(!irqs_disabled()))
+		local_irq_disable();
	/*
	 * timekeeping_resume() that will be called by tick_unfreeze() for the
	 * first CPU executing it calls functions containing RCU read-side

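WARN_ON_ONCE() returns the condition it evaluated, which is what makes the warn-and-repair idiom above work: the broken state (an ->enter_s2idle() callback returning with IRQs enabled) is reported with a backtrace only the first time, but corrected on every occurrence. The general shape, as a sketch (cond and the fix-up are placeholders):

	if (WARN_ON_ONCE(cond))  /* logs once when cond is first true... */
		fix_up_state();  /* ...but repairs the state every time */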
@@ -1667,6 +1667,8 @@ static struct dma_chan *at_dma_xlate(struct of_phandle_args *dma_spec,
		return NULL;
 
	dmac_pdev = of_find_device_by_node(dma_spec->np);
+	if (!dmac_pdev)
+		return NULL;
 
	dma_cap_zero(mask);
	dma_cap_set(DMA_SLAVE, mask);

@@ -391,7 +391,7 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
			if (xfer->cyclic) {
				burst->dar = xfer->xfer.cyclic.paddr;
			} else {
-				burst->dar = sg_dma_address(sg);
+				burst->dar = dst_addr;
				/* Unlike the typical assumption by other
				 * drivers/IPs the peripheral memory isn't
				 * a FIFO memory, in this case, it's a

@@ -399,14 +399,13 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
				 * and destination addresses are increased
				 * by the same portion (data length)
				 */
-				src_addr += sg_dma_len(sg);
			}
		} else {
			burst->dar = dst_addr;
			if (xfer->cyclic) {
				burst->sar = xfer->xfer.cyclic.paddr;
			} else {
-				burst->sar = sg_dma_address(sg);
+				burst->sar = src_addr;
				/* Unlike the typical assumption by other
				 * drivers/IPs the peripheral memory isn't
				 * a FIFO memory, in this case, it's a

@@ -414,12 +413,14 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
				 * and destination addresses are increased
				 * by the same portion (data length)
				 */
-				dst_addr += sg_dma_len(sg);
			}
		}
 
-		if (!xfer->cyclic)
+		if (!xfer->cyclic) {
+			src_addr += sg_dma_len(sg);
+			dst_addr += sg_dma_len(sg);
			sg = sg_next(sg);
+		}
	}
 
	return vchan_tx_prep(&chan->vc, &desc->vd, xfer->flags);

@@ -205,10 +205,10 @@ struct fsldma_chan {
 #else
 static u64 fsl_ioread64(const u64 __iomem *addr)
 {
-	u32 fsl_addr = lower_32_bits(addr);
-	u64 fsl_addr_hi = (u64)in_le32((u32 *)(fsl_addr + 1)) << 32;
+	u32 val_lo = in_le32((u32 __iomem *)addr);
+	u32 val_hi = in_le32((u32 __iomem *)addr + 1);
 
-	return fsl_addr_hi | in_le32((u32 *)fsl_addr);
+	return ((u64)val_hi << 32) + val_lo;
 }
 
 static void fsl_iowrite64(u64 val, u64 __iomem *addr)

@@ -219,10 +219,10 @@ static void fsl_iowrite64(u64 val, u64 __iomem *addr)
 
 static u64 fsl_ioread64be(const u64 __iomem *addr)
 {
-	u32 fsl_addr = lower_32_bits(addr);
-	u64 fsl_addr_hi = (u64)in_be32((u32 *)fsl_addr) << 32;
+	u32 val_hi = in_be32((u32 __iomem *)addr);
+	u32 val_lo = in_be32((u32 __iomem *)addr + 1);
 
-	return fsl_addr_hi | in_be32((u32 *)(fsl_addr + 1));
+	return ((u64)val_hi << 32) + val_lo;
 }
 
 static void fsl_iowrite64be(u64 val, u64 __iomem *addr)

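The broken helpers treated the MMIO pointer as an integer via lower_32_bits() and read from the wrong locations; the fix reads both 32-bit halves through the pointer and recombines them. A runnable userspace model of the recombination (plain array loads standing in for in_le32()/in_be32(); note that two separate loads make the 64-bit read non-atomic):

#include <stdio.h>
#include <stdint.h>

/* Model: a 64-bit register is read as two 32-bit halves and recombined.
 * high_first selects which half sits at the lower address. */
static uint64_t ioread64_split(const volatile uint32_t *addr, int high_first)
{
	uint32_t a = addr[0];
	uint32_t b = addr[1];
	uint32_t hi = high_first ? a : b;
	uint32_t lo = high_first ? b : a;

	return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
	uint32_t reg[2] = { 0x89abcdef, 0x01234567 };

	/* little-endian register pair: low word at the lower address */
	printf("%016llx\n",
	       (unsigned long long)ioread64_split(reg, /*high_first=*/0));
	return 0; /* prints 0123456789abcdef */
}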
@@ -69,12 +69,12 @@ static struct dma_chan *of_dma_router_xlate(struct of_phandle_args *dma_spec,
		return NULL;
 
	chan = ofdma_target->of_dma_xlate(&dma_spec_target, ofdma_target);
-	if (chan) {
-		chan->router = ofdma->dma_router;
-		chan->route_data = route_data;
-	} else {
+	if (IS_ERR_OR_NULL(chan)) {
		ofdma->dma_router->route_free(ofdma->dma_router->dev,
					      route_data);
+	} else {
+		chan->router = ofdma->dma_router;
+		chan->route_data = route_data;
	}
 
	/*

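IS_ERR_OR_NULL() is the right test above because ->of_dma_xlate() implementations may return NULL or an ERR_PTR()-encoded errno, and the old `if (chan)` treated an error pointer as a valid channel. A runnable model of the ERR_PTR convention (errnos occupy the top MAX_ERRNO values of the pointer range):

#include <stdio.h>

#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error) { return (void *)error; }
static inline int IS_ERR_OR_NULL(const void *ptr)
{
	return !ptr || (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

int main(void)
{
	int ok;
	void *good = &ok, *none = NULL, *bad = ERR_PTR(-12 /* -ENOMEM */);

	printf("%d %d %d\n", IS_ERR_OR_NULL(good),
	       IS_ERR_OR_NULL(none), IS_ERR_OR_NULL(bad)); /* 0 1 1 */
	return 0;
}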
@@ -2788,6 +2788,7 @@ pl330_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dst,
	while (burst != (1 << desc->rqcfg.brst_size))
		desc->rqcfg.brst_size++;
 
+	desc->rqcfg.brst_len = get_burst_len(desc, len);
	/*
	 * If burst size is smaller than bus width then make sure we only
	 * transfer one at a time to avoid a burst stradling an MFIFO entry.

@@ -2795,7 +2796,6 @@ pl330_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dst,
	if (desc->rqcfg.brst_size * 8 < pl330->pcfg.data_bus_width)
		desc->rqcfg.brst_len = 1;
 
-	desc->rqcfg.brst_len = get_burst_len(desc, len);
	desc->bytes_requested = len;
 
	desc->txd.flags = flags;

@@ -2043,12 +2043,18 @@ static int amdgpu_dm_mode_config_init(struct amdgpu_device *adev)
			&dm_atomic_state_funcs);
 
	r = amdgpu_display_modeset_create_props(adev);
-	if (r)
+	if (r) {
+		dc_release_state(state->context);
+		kfree(state);
		return r;
+	}
 
	r = amdgpu_dm_audio_init(adev);
-	if (r)
+	if (r) {
+		dc_release_state(state->context);
+		kfree(state);
		return r;
+	}
 
	return 0;
 }

@@ -2064,6 +2070,8 @@ static void amdgpu_dm_update_backlight_caps(struct amdgpu_display_manager *dm)
 #if defined(CONFIG_ACPI)
	struct amdgpu_dm_backlight_caps caps;
 
+	memset(&caps, 0, sizeof(caps));
+
	if (dm->backlight_caps.caps_valid)
		return;
 

@@ -101,7 +101,7 @@ static ssize_t dm_dp_aux_transfer(struct drm_dp_aux *aux,
	result = dc_link_aux_transfer_raw(TO_DM_AUX(aux)->ddc_service, &payload,
				      &operation_result);
 
-	if (payload.write)
+	if (payload.write && result >= 0)
		result = msg->size;
 
	if (result < 0)

@@ -1151,6 +1151,7 @@ static enum dc_status dcn10_validate_global(struct dc *dc, struct dc_state *context)
	bool video_large = false;
	bool desktop_large = false;
	bool dcc_disabled = false;
+	bool mpo_enabled = false;
 
	for (i = 0; i < context->stream_count; i++) {
		if (context->stream_status[i].plane_count == 0)

@@ -1159,6 +1160,9 @@ static enum dc_status dcn10_validate_global(struct dc *dc, struct dc_state *context)
		if (context->stream_status[i].plane_count > 2)
			return DC_FAIL_UNSUPPORTED_1;
 
+		if (context->stream_status[i].plane_count > 1)
+			mpo_enabled = true;
+
		for (j = 0; j < context->stream_status[i].plane_count; j++) {
			struct dc_plane_state *plane =
				context->stream_status[i].plane_states[j];

@@ -1182,6 +1186,10 @@ static enum dc_status dcn10_validate_global(struct dc *dc, struct dc_state *context)
		}
	}
 
+	/* Disable MPO in multi-display configurations. */
+	if (context->stream_count > 1 && mpo_enabled)
+		return DC_FAIL_UNSUPPORTED_1;
+
	/*
	 * Workaround: On DCN10 there is UMC issue that causes underflow when
	 * playing 4k video on 4k desktop with video downscaled and single channel

@@ -375,8 +375,18 @@ static int vega10_thermal_set_temperature_range(struct pp_hwmgr *hwmgr,
	/* compare them in unit celsius degree */
	if (low < range->min / PP_TEMPERATURE_UNITS_PER_CENTIGRADES)
		low = range->min / PP_TEMPERATURE_UNITS_PER_CENTIGRADES;
-	if (high > tdp_table->usSoftwareShutdownTemp)
-		high = tdp_table->usSoftwareShutdownTemp;
+
+	/*
+	 * As a common sense, usSoftwareShutdownTemp should be bigger
+	 * than ThotspotLimit. For any invalid usSoftwareShutdownTemp,
+	 * we will just use the max possible setting VEGA10_THERMAL_MAXIMUM_ALERT_TEMP
+	 * to avoid false alarms.
+	 */
+	if ((tdp_table->usSoftwareShutdownTemp >
+	     range->hotspot_crit_max / PP_TEMPERATURE_UNITS_PER_CENTIGRADES)) {
+		if (high > tdp_table->usSoftwareShutdownTemp)
+			high = tdp_table->usSoftwareShutdownTemp;
+	}
 
	if (low > high)
		return -EINVAL;

@@ -200,12 +200,22 @@ static int a6xx_gmu_start(struct a6xx_gmu *gmu)
 {
	int ret;
	u32 val;
+	u32 mask, reset_val;
+
+	val = gmu_read(gmu, REG_A6XX_GMU_CM3_DTCM_START + 0xff8);
+	if (val <= 0x20010004) {
+		mask = 0xffffffff;
+		reset_val = 0xbabeface;
+	} else {
+		mask = 0x1ff;
+		reset_val = 0x100;
+	}
 
	gmu_write(gmu, REG_A6XX_GMU_CM3_SYSRESET, 1);
	gmu_write(gmu, REG_A6XX_GMU_CM3_SYSRESET, 0);
 
	ret = gmu_poll_timeout(gmu, REG_A6XX_GMU_CM3_FW_INIT_RESULT, val,
-		val == 0xbabeface, 100, 10000);
+		(val & mask) == reset_val, 100, 10000);
 
	if (ret)
		DRM_DEV_ERROR(gmu->dev, "GMU firmware initialization timed out\n");

@@ -853,9 +853,9 @@ static int dpu_plane_atomic_check(struct drm_plane *plane,
	crtc_state = drm_atomic_get_new_crtc_state(state->state,
						   state->crtc);
 
-	min_scale = FRAC_16_16(1, pdpu->pipe_sblk->maxdwnscale);
+	min_scale = FRAC_16_16(1, pdpu->pipe_sblk->maxupscale);
	ret = drm_atomic_helper_check_plane_state(state, crtc_state, min_scale,
-					  pdpu->pipe_sblk->maxupscale << 16,
+					  pdpu->pipe_sblk->maxdwnscale << 16,
					  true, true);
	if (ret) {
		DPU_DEBUG_PLANE(pdpu, "Check plane state failed (%d)\n", ret);

@@ -27,6 +27,34 @@ int msm_atomic_prepare_fb(struct drm_plane *plane,
	return msm_framebuffer_prepare(new_state->fb, kms->aspace);
 }
 
+/*
+ * Helpers to control vblanks while we flush.. basically just to ensure
+ * that vblank accounting is switched on, so we get valid seqn/timestamp
+ * on pageflip events (if requested)
+ */
+
+static void vblank_get(struct msm_kms *kms, unsigned crtc_mask)
+{
+	struct drm_crtc *crtc;
+
+	for_each_crtc_mask(kms->dev, crtc, crtc_mask) {
+		if (!crtc->state->active)
+			continue;
+		drm_crtc_vblank_get(crtc);
+	}
+}
+
+static void vblank_put(struct msm_kms *kms, unsigned crtc_mask)
+{
+	struct drm_crtc *crtc;
+
+	for_each_crtc_mask(kms->dev, crtc, crtc_mask) {
+		if (!crtc->state->active)
+			continue;
+		drm_crtc_vblank_put(crtc);
+	}
+}
+
 static void msm_atomic_async_commit(struct msm_kms *kms, int crtc_idx)
 {
	unsigned crtc_mask = BIT(crtc_idx);

@@ -44,6 +72,8 @@ static void msm_atomic_async_commit(struct msm_kms *kms, int crtc_idx)
 
	kms->funcs->enable_commit(kms);
 
+	vblank_get(kms, crtc_mask);
+
	/*
	 * Flush hardware updates:
	 */

@@ -58,6 +88,8 @@ static void msm_atomic_async_commit(struct msm_kms *kms, int crtc_idx)
	kms->funcs->wait_flush(kms, crtc_mask);
	trace_msm_atomic_wait_flush_finish(crtc_mask);
 
+	vblank_put(kms, crtc_mask);
+
	mutex_lock(&kms->commit_lock);
	kms->funcs->complete_commit(kms, crtc_mask);
	mutex_unlock(&kms->commit_lock);

@@ -221,6 +253,8 @@ void msm_atomic_commit_tail(struct drm_atomic_state *state)
	 */
	kms->pending_crtc_mask &= ~crtc_mask;
 
+	vblank_get(kms, crtc_mask);
+
	/*
	 * Flush hardware updates:
	 */

@@ -235,6 +269,8 @@ void msm_atomic_commit_tail(struct drm_atomic_state *state)
	kms->funcs->wait_flush(kms, crtc_mask);
	trace_msm_atomic_wait_flush_finish(crtc_mask);
 
+	vblank_put(kms, crtc_mask);
+
	mutex_lock(&kms->commit_lock);
	kms->funcs->complete_commit(kms, crtc_mask);
	mutex_unlock(&kms->commit_lock);

@@ -1321,6 +1321,13 @@ static int msm_pdev_remove(struct platform_device *pdev)
	return 0;
 }
 
+static void msm_pdev_shutdown(struct platform_device *pdev)
+{
+	struct drm_device *drm = platform_get_drvdata(pdev);
+
+	drm_atomic_helper_shutdown(drm);
+}
+
 static const struct of_device_id dt_match[] = {
	{ .compatible = "qcom,mdp4", .data = (void *)KMS_MDP4 },
	{ .compatible = "qcom,mdss", .data = (void *)KMS_MDP5 },

@@ -1332,6 +1339,7 @@ MODULE_DEVICE_TABLE(of, dt_match);
 static struct platform_driver msm_platform_driver = {
	.probe      = msm_pdev_probe,
	.remove     = msm_pdev_remove,
+	.shutdown   = msm_pdev_shutdown,
	.driver     = {
		.name   = "msm",
		.of_match_table = dt_match,

@@ -451,11 +451,12 @@ static void omap_crtc_atomic_enable(struct drm_crtc *crtc,
	if (omap_state->manually_updated)
		return;
 
-	spin_lock_irq(&crtc->dev->event_lock);
	drm_crtc_vblank_on(crtc);
+
	ret = drm_crtc_vblank_get(crtc);
	WARN_ON(ret != 0);
 
+	spin_lock_irq(&crtc->dev->event_lock);
	omap_crtc_arm_event(crtc);
	spin_unlock_irq(&crtc->dev->event_lock);
 }

@@ -730,6 +730,9 @@
 #define USB_DEVICE_ID_LENOVO_X1_TAB	0x60a3
 #define USB_DEVICE_ID_LENOVO_X1_TAB3	0x60b5
 #define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_608D	0x608d
+#define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_6019	0x6019
+#define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_602E	0x602e
+#define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_6093	0x6093
 
 #define USB_VENDOR_ID_LG		0x1fd2
 #define USB_DEVICE_ID_LG_MULTITOUCH	0x0064

@@ -105,6 +105,9 @@ static const struct hid_device_id hid_quirks[] = {
	{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_EASYPEN_M406XE), HID_QUIRK_MULTI_INPUT },
	{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_PIXART_USB_OPTICAL_MOUSE_ID2), HID_QUIRK_ALWAYS_POLL },
	{ HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_608D), HID_QUIRK_ALWAYS_POLL },
+	{ HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_6019), HID_QUIRK_ALWAYS_POLL },
+	{ HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_602E), HID_QUIRK_ALWAYS_POLL },
+	{ HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_6093), HID_QUIRK_ALWAYS_POLL },
	{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_C007), HID_QUIRK_ALWAYS_POLL },
	{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_C077), HID_QUIRK_ALWAYS_POLL },
	{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_KEYBOARD_G710_PLUS), HID_QUIRK_NOGET },

@@ -748,15 +748,18 @@ static ssize_t applesmc_light_show(struct device *dev,
	}
 
	ret = applesmc_read_key(LIGHT_SENSOR_LEFT_KEY, buffer, data_length);
+	if (ret)
+		goto out;
	/* newer macbooks report a single 10-bit bigendian value */
	if (data_length == 10) {
		left = be16_to_cpu(*(__be16 *)(buffer + 6)) >> 2;
		goto out;
	}
	left = buffer[2];
+
+	ret = applesmc_read_key(LIGHT_SENSOR_RIGHT_KEY, buffer, data_length);
	if (ret)
		goto out;
-	ret = applesmc_read_key(LIGHT_SENSOR_RIGHT_KEY, buffer, data_length);
	right = buffer[2];
 
 out:

@@ -805,12 +808,11 @@ static ssize_t applesmc_show_fan_speed(struct device *dev,
			  to_index(attr));
 
	ret = applesmc_read_key(newkey, buffer, 2);
-	speed = ((buffer[0] << 8 | buffer[1]) >> 2);
-
	if (ret)
		return ret;
-	else
-		return snprintf(sysfsbuf, PAGE_SIZE, "%u\n", speed);
+
+	speed = ((buffer[0] << 8 | buffer[1]) >> 2);
+	return snprintf(sysfsbuf, PAGE_SIZE, "%u\n", speed);
 }
 
 static ssize_t applesmc_store_fan_speed(struct device *dev,

@@ -846,12 +848,11 @@ static ssize_t applesmc_show_fan_manual(struct device *dev,
	u8 buffer[2];
 
	ret = applesmc_read_key(FANS_MANUAL, buffer, 2);
-	manual = ((buffer[0] << 8 | buffer[1]) >> to_index(attr)) & 0x01;
-
	if (ret)
		return ret;
-	else
-		return snprintf(sysfsbuf, PAGE_SIZE, "%d\n", manual);
+
+	manual = ((buffer[0] << 8 | buffer[1]) >> to_index(attr)) & 0x01;
+	return snprintf(sysfsbuf, PAGE_SIZE, "%d\n", manual);
 }
 
 static ssize_t applesmc_store_fan_manual(struct device *dev,

@@ -867,10 +868,11 @@ static ssize_t applesmc_store_fan_manual(struct device *dev,
		return -EINVAL;
 
	ret = applesmc_read_key(FANS_MANUAL, buffer, 2);
-	val = (buffer[0] << 8 | buffer[1]);
	if (ret)
		goto out;
 
+	val = (buffer[0] << 8 | buffer[1]);
+
	if (input)
		val = val | (0x01 << to_index(attr));
	else

@@ -946,13 +948,12 @@ static ssize_t applesmc_key_count_show(struct device *dev,
	u32 count;
 
	ret = applesmc_read_key(KEY_COUNT_KEY, buffer, 4);
-	count = ((u32)buffer[0]<<24) + ((u32)buffer[1]<<16) +
-						((u32)buffer[2]<<8) + buffer[3];
-
	if (ret)
		return ret;
-	else
-		return snprintf(sysfsbuf, PAGE_SIZE, "%d\n", count);
+
+	count = ((u32)buffer[0]<<24) + ((u32)buffer[1]<<16) +
+						((u32)buffer[2]<<8) + buffer[3];
+	return snprintf(sysfsbuf, PAGE_SIZE, "%d\n", count);
 }
 
 static ssize_t applesmc_key_at_index_read_show(struct device *dev,

@@ -4431,6 +4431,7 @@ int amd_iommu_deactivate_guest_mode(void *data)
	struct amd_ir_data *ir_data = (struct amd_ir_data *)data;
	struct irte_ga *entry = (struct irte_ga *) ir_data->entry;
	struct irq_cfg *cfg = ir_data->cfg;
+	u64 valid = entry->lo.fields_remap.valid;
 
	if (!AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir) ||
	    !entry || !entry->lo.fields_vapic.guest_mode)

@@ -4439,6 +4440,7 @@ int amd_iommu_deactivate_guest_mode(void *data)
	entry->lo.val = 0;
	entry->hi.val = 0;
 
+	entry->lo.fields_remap.valid       = valid;
	entry->lo.fields_remap.dm          = apic->irq_dest_mode;
	entry->lo.fields_remap.int_type    = apic->irq_delivery_mode;
	entry->hi.fields.vector            = cfg->vector;

@@ -123,29 +123,29 @@ static inline unsigned int level_to_offset_bits(int level)
	return (level - 1) * LEVEL_STRIDE;
 }
 
-static inline int pfn_level_offset(unsigned long pfn, int level)
+static inline int pfn_level_offset(u64 pfn, int level)
 {
	return (pfn >> level_to_offset_bits(level)) & LEVEL_MASK;
 }
 
-static inline unsigned long level_mask(int level)
+static inline u64 level_mask(int level)
 {
-	return -1UL << level_to_offset_bits(level);
+	return -1ULL << level_to_offset_bits(level);
 }
 
-static inline unsigned long level_size(int level)
+static inline u64 level_size(int level)
 {
-	return 1UL << level_to_offset_bits(level);
+	return 1ULL << level_to_offset_bits(level);
 }
 
-static inline unsigned long align_to_level(unsigned long pfn, int level)
+static inline u64 align_to_level(u64 pfn, int level)
 {
	return (pfn + level_size(level) - 1) & level_mask(level);
 }
 
 static inline unsigned long lvl_to_nr_pages(unsigned int lvl)
 {
-	return 1 << min_t(int, (lvl - 1) * LEVEL_STRIDE, MAX_AGAW_PFN_WIDTH);
+	return 1UL << min_t(int, (lvl - 1) * LEVEL_STRIDE, MAX_AGAW_PFN_WIDTH);
 }
 
 /* VT-d pages must always be _smaller_ than MM pages. Otherwise things

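The switch from unsigned long to u64 matters only on 32-bit builds: with 36-bit IOVAs, masks computed in a 32-bit unsigned long silently drop the high bits (and shifts of 32 or more are undefined outright). A runnable illustration of the width problem with explicit types:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	int bits = 27; /* e.g. (level - 1) * LEVEL_STRIDE at a high level */

	/* Computed in 32 bits, the mask cannot cover bits 32..35... */
	uint32_t mask32 = (uint32_t)-1 << bits;
	/* ...computed in 64 bits, it covers the full 36-bit space. */
	uint64_t mask64 = (uint64_t)-1 << bits;

	printf("32-bit mask: %016llx\n", (unsigned long long)mask32);
	printf("64-bit mask: %016llx\n", (unsigned long long)mask64);
	return 0; /* 00000000f8000000 vs fffffffff8000000 */
}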
@@ -507,12 +507,18 @@ static void iommu_enable_irq_remapping(struct intel_iommu *iommu)
 
	/* Enable interrupt-remapping */
	iommu->gcmd |= DMA_GCMD_IRE;
-	iommu->gcmd &= ~DMA_GCMD_CFI;  /* Block compatibility-format MSIs */
	writel(iommu->gcmd, iommu->reg + DMAR_GCMD_REG);
 
	IOMMU_WAIT_OP(iommu, DMAR_GSTS_REG,
		      readl, (sts & DMA_GSTS_IRES), sts);
 
+	/* Block compatibility-format MSIs */
+	if (sts & DMA_GSTS_CFIS) {
+		iommu->gcmd &= ~DMA_GCMD_CFI;
+		writel(iommu->gcmd, iommu->reg + DMAR_GCMD_REG);
+		IOMMU_WAIT_OP(iommu, DMAR_GSTS_REG,
+			      readl, !(sts & DMA_GSTS_CFIS), sts);
+	}
+
	/*
	 * With CFI clear in the Global Command register, we should be
	 * protected from dangerous (i.e. compatibility) interrupts

@@ -537,12 +537,16 @@ static int __create_persistent_data_objects(struct dm_cache_metadata *cmd,
					  CACHE_MAX_CONCURRENT_LOCKS);
	if (IS_ERR(cmd->bm)) {
		DMERR("could not create block manager");
-		return PTR_ERR(cmd->bm);
+		r = PTR_ERR(cmd->bm);
+		cmd->bm = NULL;
+		return r;
	}
 
	r = __open_or_format_metadata(cmd, may_format_device);
-	if (r)
+	if (r) {
		dm_block_manager_destroy(cmd->bm);
+		cmd->bm = NULL;
+	}
 
	return r;
 }

@@ -720,7 +720,7 @@ static int crypt_iv_eboiv_gen(struct crypt_config *cc, u8 *iv,
	u8 buf[MAX_CIPHER_BLOCKSIZE] __aligned(__alignof__(__le64));
	struct skcipher_request *req;
	struct scatterlist src, dst;
-	struct crypto_wait wait;
+	DECLARE_CRYPTO_WAIT(wait);
	int err;
 
	req = skcipher_request_alloc(any_tfm(cc), GFP_NOIO);

@@ -2365,6 +2365,7 @@ next_chunk:
	range.logical_sector = le64_to_cpu(ic->sb->recalc_sector);
	if (unlikely(range.logical_sector >= ic->provided_data_sectors)) {
		if (ic->mode == 'B') {
+			block_bitmap_op(ic, ic->recalc_bitmap, 0, ic->provided_data_sectors, BITMAP_OP_CLEAR);
			DEBUG_print("queue_delayed_work: bitmap_flush_work\n");
			queue_delayed_work(ic->commit_wq, &ic->bitmap_flush_work, 0);
		}

@@ -2442,6 +2443,17 @@ next_chunk:
		goto err;
	}
 
+	if (ic->mode == 'B') {
+		sector_t start, end;
+		start = (range.logical_sector >>
+			 (ic->sb->log2_sectors_per_block + ic->log2_blocks_per_bitmap_bit)) <<
+			(ic->sb->log2_sectors_per_block + ic->log2_blocks_per_bitmap_bit);
+		end = ((range.logical_sector + range.n_sectors) >>
+		       (ic->sb->log2_sectors_per_block + ic->log2_blocks_per_bitmap_bit)) <<
+		      (ic->sb->log2_sectors_per_block + ic->log2_blocks_per_bitmap_bit);
+		block_bitmap_op(ic, ic->recalc_bitmap, start, end - start, BITMAP_OP_CLEAR);
+	}
+
 advance_and_next:
	cond_resched();

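The paired shifts in the hunk above are the standard round-down-to-a-power-of-two idiom: (x >> k) << k clears the low k bits, aligning a sector number to the granularity covered by one bitmap bit (2^(log2_sectors_per_block + log2_blocks_per_bitmap_bit) sectors). A runnable illustration:

#include <stdio.h>
#include <stdint.h>

/* Round v down to a multiple of 2^k by dropping the low k bits. */
static uint64_t align_down_pow2(uint64_t v, unsigned k)
{
	return (v >> k) << k;
}

int main(void)
{
	/* e.g. log2_sectors_per_block + log2_blocks_per_bitmap_bit == 7 */
	unsigned k = 7;

	printf("%llu\n", (unsigned long long)align_down_pow2(1000, k)); /* 896  */
	printf("%llu\n", (unsigned long long)align_down_pow2(1024, k)); /* 1024 */
	return 0;
}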
@@ -1190,17 +1190,25 @@ static void multipath_wait_for_pg_init_completion(struct multipath *m)
 static void flush_multipath_work(struct multipath *m)
 {
	if (m->hw_handler_name) {
-		set_bit(MPATHF_PG_INIT_DISABLED, &m->flags);
-		smp_mb__after_atomic();
+		unsigned long flags;
+
+		if (!atomic_read(&m->pg_init_in_progress))
+			goto skip;
+
+		spin_lock_irqsave(&m->lock, flags);
+		if (atomic_read(&m->pg_init_in_progress) &&
+		    !test_and_set_bit(MPATHF_PG_INIT_DISABLED, &m->flags)) {
+			spin_unlock_irqrestore(&m->lock, flags);
 
-		if (atomic_read(&m->pg_init_in_progress))
			flush_workqueue(kmpath_handlerd);
-		multipath_wait_for_pg_init_completion(m);
+			multipath_wait_for_pg_init_completion(m);
 
-		clear_bit(MPATHF_PG_INIT_DISABLED, &m->flags);
-		smp_mb__after_atomic();
+			spin_lock_irqsave(&m->lock, flags);
+			clear_bit(MPATHF_PG_INIT_DISABLED, &m->flags);
+		}
+		spin_unlock_irqrestore(&m->lock, flags);
	}
 
+skip:
	if (m->queue_mode == DM_TYPE_BIO_BASED)
		flush_work(&m->process_queued_bios);
	flush_work(&m->trigger_event);

@@ -739,12 +739,16 @@ static int __create_persistent_data_objects(struct dm_pool_metadata *pmd, bool format_device)
					  THIN_MAX_CONCURRENT_LOCKS);
	if (IS_ERR(pmd->bm)) {
		DMERR("could not create block manager");
-		return PTR_ERR(pmd->bm);
+		r = PTR_ERR(pmd->bm);
+		pmd->bm = NULL;
+		return r;
	}
 
	r = __open_or_format_metadata(pmd, format_device);
-	if (r)
+	if (r) {
		dm_block_manager_destroy(pmd->bm);
+		pmd->bm = NULL;
+	}
 
	return r;
 }

@@ -954,7 +958,7 @@ int dm_pool_metadata_close(struct dm_pool_metadata *pmd)
	}
 
	pmd_write_lock_in_core(pmd);
-	if (!dm_bm_is_read_only(pmd->bm) && !pmd->fail_io) {
+	if (!pmd->fail_io && !dm_bm_is_read_only(pmd->bm)) {
		r = __commit_transaction(pmd);
		if (r < 0)
			DMWARN("%s: __commit_transaction() failed, error = %d",

@@ -224,6 +224,7 @@ static int persistent_memory_claim(struct dm_writecache *wc)
	pfn_t pfn;
	int id;
	struct page **pages;
+	sector_t offset;
 
	wc->memory_vmapped = false;
 

@@ -242,9 +243,16 @@ static int persistent_memory_claim(struct dm_writecache *wc)
		goto err1;
	}
 
+	offset = get_start_sect(wc->ssd_dev->bdev);
+	if (offset & (PAGE_SIZE / 512 - 1)) {
+		r = -EINVAL;
+		goto err1;
+	}
+	offset >>= PAGE_SHIFT - 9;
+
	id = dax_read_lock();
 
-	da = dax_direct_access(wc->ssd_dev->dax_dev, 0, p, &wc->memory_map, &pfn);
+	da = dax_direct_access(wc->ssd_dev->dax_dev, offset, p, &wc->memory_map, &pfn);
	if (da < 0) {
		wc->memory_map = NULL;
		r = da;

@@ -266,7 +274,7 @@ static int persistent_memory_claim(struct dm_writecache *wc)
		i = 0;
		do {
			long daa;
-			daa = dax_direct_access(wc->ssd_dev->dax_dev, i, p - i,
+			daa = dax_direct_access(wc->ssd_dev->dax_dev, offset + i, p - i,
					NULL, &pfn);
			if (daa <= 0) {
				r = daa ? daa : -EINVAL;

@@ -493,7 +493,7 @@ int dm_bm_write_lock(struct dm_block_manager *bm,
	void *p;
	int r;
 
-	if (bm->read_only)
+	if (dm_bm_is_read_only(bm))
		return -EPERM;
 
	p = dm_bufio_read(bm->bufio, b, (struct dm_buffer **) result);

@@ -562,7 +562,7 @@ int dm_bm_write_lock_zero(struct dm_block_manager *bm,
	struct buffer_aux *aux;
	void *p;
 
-	if (bm->read_only)
+	if (dm_bm_is_read_only(bm))
		return -EPERM;
 
	p = dm_bufio_new(bm->bufio, b, (struct dm_buffer **) result);

@@ -602,7 +602,7 @@ EXPORT_SYMBOL_GPL(dm_bm_unlock);
 
 int dm_bm_flush(struct dm_block_manager *bm)
 {
-	if (bm->read_only)
+	if (dm_bm_is_read_only(bm))
		return -EPERM;
 
	return dm_bufio_write_dirty_buffers(bm->bufio);

@@ -616,19 +616,21 @@ void dm_bm_prefetch(struct dm_block_manager *bm, dm_block_t b)
 
 bool dm_bm_is_read_only(struct dm_block_manager *bm)
 {
-	return bm->read_only;
+	return (bm ? bm->read_only : true);
 }
 EXPORT_SYMBOL_GPL(dm_bm_is_read_only);
 
 void dm_bm_set_read_only(struct dm_block_manager *bm)
 {
-	bm->read_only = true;
+	if (bm)
+		bm->read_only = true;
 }
 EXPORT_SYMBOL_GPL(dm_bm_set_read_only);
 
 void dm_bm_set_read_write(struct dm_block_manager *bm)
 {
-	bm->read_only = false;
+	if (bm)
+		bm->read_only = false;
 }
 EXPORT_SYMBOL_GPL(dm_bm_set_read_write);

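All four hunks enforce one invariant: once creation can leave ->bm as NULL, every block-manager entry point must tolerate a NULL handle, and the safe degraded answer is "read-only". A small runnable model of the pattern (the names only mirror the dm code):

#include <stdbool.h>
#include <stdio.h>

struct block_manager { bool read_only; };

/* NULL means "never created / already torn down": report read-only. */
static bool bm_is_read_only(struct block_manager *bm)
{
	return bm ? bm->read_only : true;
}

static void bm_set_read_only(struct block_manager *bm, bool ro)
{
	if (bm)
		bm->read_only = ro;
}

int main(void)
{
	struct block_manager bm = { .read_only = false };

	bm_set_read_only(NULL, true);          /* no crash, no effect */
	printf("%d\n", bm_is_read_only(NULL)); /* 1: fails safe       */
	printf("%d\n", bm_is_read_only(&bm));  /* 0                   */
	return 0;
}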
@@ -2052,6 +2052,7 @@ static int vicodec_request_validate(struct media_request *req)
	}
	ctrl = v4l2_ctrl_request_hdl_ctrl_find(hdl,
					       vicodec_ctrl_stateless_state.id);
+	v4l2_ctrl_request_hdl_put(hdl);
	if (!ctrl) {
		v4l2_info(&ctx->dev->v4l2_dev,
			  "Missing required codec control\n");

@@ -1256,6 +1256,10 @@ static ssize_t store_protocols(struct device *device,
	}
 
	mutex_lock(&dev->lock);
+	if (!dev->registered) {
+		mutex_unlock(&dev->lock);
+		return -ENODEV;
+	}
 
	old_protocols = *current_protocols;
	new_protocols = old_protocols;

@@ -1394,6 +1398,10 @@ static ssize_t store_filter(struct device *device,
		return -EINVAL;
 
	mutex_lock(&dev->lock);
+	if (!dev->registered) {
+		mutex_unlock(&dev->lock);
+		return -ENODEV;
+	}
 
	new_filter = *filter;
	if (fattr->mask)

@@ -1508,6 +1516,10 @@ static ssize_t store_wakeup_protocols(struct device *device,
	int i;
 
	mutex_lock(&dev->lock);
+	if (!dev->registered) {
+		mutex_unlock(&dev->lock);
+		return -ENODEV;
+	}
 
	allowed = dev->allowed_wakeup_protocols;
 

@@ -1565,25 +1577,25 @@ static void rc_dev_release(struct device *device)
	kfree(dev);
 }
 
-#define ADD_HOTPLUG_VAR(fmt, val...)				\
-	do {							\
-		int err = add_uevent_var(env, fmt, val);	\
-		if (err)					\
-			return err;				\
-	} while (0)
-
 static int rc_dev_uevent(struct device *device, struct kobj_uevent_env *env)
 {
	struct rc_dev *dev = to_rc_dev(device);
+	int ret = 0;
 
-	if (dev->rc_map.name)
-		ADD_HOTPLUG_VAR("NAME=%s", dev->rc_map.name);
-	if (dev->driver_name)
-		ADD_HOTPLUG_VAR("DRV_NAME=%s", dev->driver_name);
-	if (dev->device_name)
-		ADD_HOTPLUG_VAR("DEV_NAME=%s", dev->device_name);
+	mutex_lock(&dev->lock);
 
-	return 0;
+	if (!dev->registered)
+		ret = -ENODEV;
+	if (ret == 0 && dev->rc_map.name)
+		ret = add_uevent_var(env, "NAME=%s", dev->rc_map.name);
+	if (ret == 0 && dev->driver_name)
+		ret = add_uevent_var(env, "DRV_NAME=%s", dev->driver_name);
+	if (ret == 0 && dev->device_name)
+		ret = add_uevent_var(env, "DEV_NAME=%s", dev->device_name);
+
+	mutex_unlock(&dev->lock);
+
+	return ret;
 }
 
 /*

@@ -1975,14 +1987,14 @@ void rc_unregister_device(struct rc_dev *dev)
	del_timer_sync(&dev->timer_keyup);
	del_timer_sync(&dev->timer_repeat);
 
-	rc_free_rx_device(dev);
-
	mutex_lock(&dev->lock);
	if (dev->users && dev->close)
		dev->close(dev);
	dev->registered = false;
	mutex_unlock(&dev->lock);
 
+	rc_free_rx_device(dev);
+
	/*
	 * lirc device should be freed with dev->registered = false, so
	 * that userspace polling will get notified.

@@ -11,6 +11,7 @@
 #include <linux/genalloc.h>
 #include <linux/io-64-nonatomic-lo-hi.h>
 
+#define FW_FILE_MAX_SIZE	0x1400000 /* maximum size of 20MB */
 /**
  * hl_fw_push_fw_to_device() - Push FW code to device.
  * @hdev: pointer to hl_device structure.

@@ -43,6 +44,14 @@ int hl_fw_push_fw_to_device(struct hl_device *hdev, const char *fw_name,
 
	dev_dbg(hdev->dev, "%s firmware size == %zu\n", fw_name, fw_size);
 
+	if (fw_size > FW_FILE_MAX_SIZE) {
+		dev_err(hdev->dev,
+			"FW file size %zu exceeds maximum of %u bytes\n",
+			fw_size, FW_FILE_MAX_SIZE);
+		rc = -EINVAL;
+		goto out;
+	}
+
	fw_data = (const u64 *) fw->data;
 
	memcpy_toio(dst, fw_data, fw_size);

@@ -67,6 +67,11 @@ static int alloc_device_memory(struct hl_ctx *ctx, struct hl_mem_in *args,
	num_pgs = (args->alloc.mem_size + (page_size - 1)) >> page_shift;
	total_size = num_pgs << page_shift;
 
+	if (!total_size) {
+		dev_err(hdev->dev, "Cannot allocate 0 bytes\n");
+		return -EINVAL;
+	}
+
	contiguous = args->flags & HL_MEM_CONTIGUOUS;
 
	if (contiguous) {

@@ -94,7 +99,7 @@ static int alloc_device_memory(struct hl_ctx *ctx, struct hl_mem_in *args,
	phys_pg_pack->contiguous = contiguous;
 
	phys_pg_pack->pages = kvmalloc_array(num_pgs, sizeof(u64), GFP_KERNEL);
-	if (!phys_pg_pack->pages) {
+	if (ZERO_OR_NULL_PTR(phys_pg_pack->pages)) {
		rc = -ENOMEM;
		goto pages_arr_err;
	}

@@ -689,7 +694,7 @@ static int init_phys_pg_pack_from_userptr(struct hl_ctx *ctx,
 
	phys_pg_pack->pages = kvmalloc_array(total_npages, sizeof(u64),
					     GFP_KERNEL);
-	if (!phys_pg_pack->pages) {
+	if (ZERO_OR_NULL_PTR(phys_pg_pack->pages)) {
		rc = -ENOMEM;
		goto page_pack_arr_mem_err;
	}

@@ -422,7 +422,7 @@ int hl_mmu_init(struct hl_device *hdev)
	hdev->mmu_shadow_hop0 = kvmalloc_array(prop->max_asid,
					prop->mmu_hop_table_size,
					GFP_KERNEL | __GFP_ZERO);
-	if (!hdev->mmu_shadow_hop0) {
+	if (ZERO_OR_NULL_PTR(hdev->mmu_shadow_hop0)) {
		rc = -ENOMEM;
		goto err_pool_add;
	}

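A plain NULL check misses the zero-size case: kvmalloc_array() and friends return ZERO_SIZE_PTR (the non-NULL poison value 16) for zero-byte requests, so the old test passed and the pointer faulted on first use. A runnable model of the convention (the macro definitions mirror <linux/slab.h>):

#include <stdio.h>
#include <stdlib.h>

#define ZERO_SIZE_PTR ((void *)16)
#define ZERO_OR_NULL_PTR(x) ((unsigned long)(x) <= (unsigned long)ZERO_SIZE_PTR)

/* Model allocator: zero-size requests get the poison value, like kmalloc(). */
static void *model_alloc(size_t n)
{
	return n ? malloc(n) : ZERO_SIZE_PTR;
}

int main(void)
{
	void *p = model_alloc(0);

	printf("NULL check  : %d\n", p == NULL);           /* 0 -- would pass! */
	printf("ZERO_OR_NULL: %d\n", ZERO_OR_NULL_PTR(p)); /* 1 -- caught      */
	return 0;
}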
@@ -299,16 +299,16 @@ static void __cqhci_disable(struct cqhci_host *cq_host)
	cq_host->activated = false;
 }
 
-int cqhci_suspend(struct mmc_host *mmc)
+int cqhci_deactivate(struct mmc_host *mmc)
 {
	struct cqhci_host *cq_host = mmc->cqe_private;
 
-	if (cq_host->enabled)
+	if (cq_host->enabled && cq_host->activated)
		__cqhci_disable(cq_host);
 
	return 0;
 }
-EXPORT_SYMBOL(cqhci_suspend);
+EXPORT_SYMBOL(cqhci_deactivate);
 
 int cqhci_resume(struct mmc_host *mmc)
 {

@@ -230,7 +230,11 @@ irqreturn_t cqhci_irq(struct mmc_host *mmc, u32 intmask, int cmd_error,
		      int data_error);
 int cqhci_init(struct cqhci_host *cq_host, struct mmc_host *mmc, bool dma64);
 struct cqhci_host *cqhci_pltfm_init(struct platform_device *pdev);
-int cqhci_suspend(struct mmc_host *mmc);
+int cqhci_deactivate(struct mmc_host *mmc);
+static inline int cqhci_suspend(struct mmc_host *mmc)
+{
+	return cqhci_deactivate(mmc);
+}
 int cqhci_resume(struct mmc_host *mmc);
 
 #endif

@@ -22,6 +22,7 @@
 #include <linux/slab.h>
 #include <linux/spinlock.h>
 #include <linux/interrupt.h>
+#include <linux/reset.h>
 
 #include <linux/mmc/card.h>
 #include <linux/mmc/core.h>

@@ -412,6 +413,7 @@ struct msdc_host {
	struct pinctrl_state *pins_uhs;
	struct delayed_work req_timeout;
	int irq;		/* host interrupt */
+	struct reset_control *reset;
 
	struct clk *src_clk;	/* msdc source clock */
	struct clk *h_clk;      /* msdc h_clk */

@@ -1474,6 +1476,12 @@ static void msdc_init_hw(struct msdc_host *host)
	u32 val;
	u32 tune_reg = host->dev_comp->pad_tune_reg;
 
+	if (host->reset) {
+		reset_control_assert(host->reset);
+		usleep_range(10, 50);
+		reset_control_deassert(host->reset);
+	}
+
	/* Configure to MMC/SD mode, clock free running */
	sdr_set_bits(host->base + MSDC_CFG, MSDC_CFG_MODE | MSDC_CFG_CKPDN);
 

@@ -2232,6 +2240,11 @@ static int msdc_drv_probe(struct platform_device *pdev)
	if (IS_ERR(host->src_clk_cg))
		host->src_clk_cg = NULL;
 
+	host->reset = devm_reset_control_get_optional_exclusive(&pdev->dev,
+								"hrst");
+	if (IS_ERR(host->reset))
+		return PTR_ERR(host->reset);
+
	host->irq = platform_get_irq(pdev, 0);
	if (host->irq < 0) {
		ret = -EINVAL;

@@ -532,6 +532,11 @@ static const struct sdhci_acpi_slot sdhci_acpi_slot_qcom_sd = {
	.caps    = MMC_CAP_NONREMOVABLE,
};

struct amd_sdhci_host {
	bool	tuned_clock;
	bool	dll_enabled;
};

/* AMD sdhci reset dll register. */
#define SDHCI_AMD_RESET_DLL_REGISTER 0x908

@@ -551,26 +556,66 @@ static void sdhci_acpi_amd_hs400_dll(struct sdhci_host *host)
}

/*
 * For AMD Platform it is required to disable the tuning
 * bit first controller to bring to HS Mode from HS200
 * mode, later enable to tune to HS400 mode.
 * The initialization sequence for HS400 is:
 *     HS->HS200->Perform Tuning->HS->HS400
 *
 * The re-tuning sequence is:
 *     HS400->DDR52->HS->HS200->Perform Tuning->HS->HS400
 *
 * The AMD eMMC Controller can only use the tuned clock while in HS200 and HS400
 * mode. If we switch to a different mode, we need to disable the tuned clock.
 * If we have previously performed tuning and switch back to HS200 or
 * HS400, we can re-enable the tuned clock.
 *
 */
static void amd_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
{
	struct sdhci_host *host = mmc_priv(mmc);
	struct sdhci_acpi_host *acpi_host = sdhci_priv(host);
	struct amd_sdhci_host *amd_host = sdhci_acpi_priv(acpi_host);
	unsigned int old_timing = host->timing;
	u16 val;

	sdhci_set_ios(mmc, ios);
	if (old_timing == MMC_TIMING_MMC_HS200 &&
	    ios->timing == MMC_TIMING_MMC_HS)
		sdhci_writew(host, 0x9, SDHCI_HOST_CONTROL2);
	if (old_timing != MMC_TIMING_MMC_HS400 &&
	    ios->timing == MMC_TIMING_MMC_HS400) {
		sdhci_writew(host, 0x80, SDHCI_HOST_CONTROL2);
		sdhci_acpi_amd_hs400_dll(host);

	if (old_timing != host->timing && amd_host->tuned_clock) {
		if (host->timing == MMC_TIMING_MMC_HS400 ||
		    host->timing == MMC_TIMING_MMC_HS200) {
			val = sdhci_readw(host, SDHCI_HOST_CONTROL2);
			val |= SDHCI_CTRL_TUNED_CLK;
			sdhci_writew(host, val, SDHCI_HOST_CONTROL2);
		} else {
			val = sdhci_readw(host, SDHCI_HOST_CONTROL2);
			val &= ~SDHCI_CTRL_TUNED_CLK;
			sdhci_writew(host, val, SDHCI_HOST_CONTROL2);
		}

		/* DLL is only required for HS400 */
		if (host->timing == MMC_TIMING_MMC_HS400 &&
		    !amd_host->dll_enabled) {
			sdhci_acpi_amd_hs400_dll(host);
			amd_host->dll_enabled = true;
		}
	}
}

static int amd_sdhci_execute_tuning(struct mmc_host *mmc, u32 opcode)
{
	int err;
	struct sdhci_host *host = mmc_priv(mmc);
	struct sdhci_acpi_host *acpi_host = sdhci_priv(host);
	struct amd_sdhci_host *amd_host = sdhci_acpi_priv(acpi_host);

	amd_host->tuned_clock = false;

	err = sdhci_execute_tuning(mmc, opcode);

	if (!err && !host->tuning_err)
		amd_host->tuned_clock = true;

	return err;
}

static const struct sdhci_ops sdhci_acpi_ops_amd = {
	.set_clock	= sdhci_set_clock,
	.set_bus_width	= sdhci_set_bus_width,

@@ -598,6 +643,7 @@ static int sdhci_acpi_emmc_amd_probe_slot(struct platform_device *pdev,

	host->mmc_host_ops.select_drive_strength = amd_select_drive_strength;
	host->mmc_host_ops.set_ios = amd_set_ios;
	host->mmc_host_ops.execute_tuning = amd_sdhci_execute_tuning;
	return 0;
}

@@ -609,6 +655,7 @@ static const struct sdhci_acpi_slot sdhci_acpi_slot_amd_emmc = {
			  SDHCI_QUIRK_32BIT_ADMA_SIZE,
	.quirks2	= SDHCI_QUIRK2_BROKEN_64_BIT_DMA,
	.probe_slot	= sdhci_acpi_emmc_amd_probe_slot,
	.priv_size	= sizeof(struct amd_sdhci_host),
};

struct sdhci_acpi_uid_slot {
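The amd_set_ios()/amd_sdhci_execute_tuning() pair above caches whether the last tuning pass succeeded and replays the tuned-clock bit only while in HS200/HS400. A standalone sketch of that bookkeeping, with a plain integer standing in for the HOST_CONTROL2 register (all names here are illustrative, not the driver's):

#include <stdbool.h>
#include <stdio.h>

#define CTRL_TUNED_CLK 0x0040	/* mirrors SDHCI_CTRL_TUNED_CLK */

enum timing { TIMING_HS, TIMING_HS200, TIMING_HS400 };

struct amd_host {
	bool tuned_clock;	/* did the last tuning pass succeed? */
	unsigned int ctrl2;	/* stand-in for the HOST_CONTROL2 register */
};

static void set_timing(struct amd_host *h, enum timing t)
{
	/* The tuned clock may only be used in HS200/HS400, and only
	 * if a previous tuning pass succeeded. */
	if (h->tuned_clock && (t == TIMING_HS200 || t == TIMING_HS400))
		h->ctrl2 |= CTRL_TUNED_CLK;
	else
		h->ctrl2 &= ~CTRL_TUNED_CLK;
}

static int execute_tuning(struct amd_host *h, int tuning_err)
{
	h->tuned_clock = false;		/* invalidate before retrying */
	if (tuning_err)
		return tuning_err;
	h->tuned_clock = true;		/* remember success for later switches */
	return 0;
}

int main(void)
{
	struct amd_host h = { 0 };

	execute_tuning(&h, 0);		/* HS200 tuning succeeds */
	set_timing(&h, TIMING_HS);	/* drop to HS: tuned clock off */
	set_timing(&h, TIMING_HS400);	/* back up: tuned clock re-enabled */
	printf("ctrl2=0x%x\n", h.ctrl2);
	return 0;
}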
@@ -232,6 +232,14 @@ static void sdhci_pci_dumpregs(struct mmc_host *mmc)
	sdhci_dumpregs(mmc_priv(mmc));
}

static void sdhci_cqhci_reset(struct sdhci_host *host, u8 mask)
{
	if ((host->mmc->caps2 & MMC_CAP2_CQE) && (mask & SDHCI_RESET_ALL) &&
	    host->mmc->cqe_private)
		cqhci_deactivate(host->mmc);
	sdhci_reset(host, mask);
}

/*****************************************************************************\
 *                                                                           *
 * Hardware specific quirk handling                                          *

@@ -722,7 +730,7 @@ static const struct sdhci_ops sdhci_intel_glk_ops = {
	.set_power		= sdhci_intel_set_power,
	.enable_dma		= sdhci_pci_enable_dma,
	.set_bus_width		= sdhci_set_bus_width,
	.reset			= sdhci_reset,
	.reset			= sdhci_cqhci_reset,
	.set_uhs_signaling	= sdhci_set_uhs_signaling,
	.hw_reset		= sdhci_pci_hw_reset,
	.irq			= sdhci_cqhci_irq,
@@ -100,6 +100,12 @@
#define NVQUIRK_DIS_CARD_CLK_CONFIG_TAP			BIT(8)
#define NVQUIRK_CQHCI_DCMD_R1B_CMD_TIMING		BIT(9)

/*
 * NVQUIRK_HAS_TMCLK is for SoC's having separate timeout clock for Tegra
 * SDMMC hardware data timeout.
 */
#define NVQUIRK_HAS_TMCLK				BIT(10)

/* SDMMC CQE Base Address for Tegra Host Ver 4.1 and Higher */
#define SDHCI_TEGRA_CQE_BASE_ADDR			0xF000

@@ -130,6 +136,7 @@ struct sdhci_tegra_autocal_offsets {
struct sdhci_tegra {
	const struct sdhci_tegra_soc_data *soc_data;
	struct gpio_desc *power_gpio;
	struct clk *tmclk;
	bool ddr_signaling;
	bool pad_calib_required;
	bool pad_control_available;

@@ -1385,7 +1392,8 @@ static const struct sdhci_tegra_soc_data soc_data_tegra210 = {
		    NVQUIRK_HAS_PADCALIB |
		    NVQUIRK_DIS_CARD_CLK_CONFIG_TAP |
		    NVQUIRK_ENABLE_SDR50 |
		    NVQUIRK_ENABLE_SDR104,
		    NVQUIRK_ENABLE_SDR104 |
		    NVQUIRK_HAS_TMCLK,
	.min_tap_delay = 106,
	.max_tap_delay = 185,
};

@@ -1422,6 +1430,7 @@ static const struct sdhci_tegra_soc_data soc_data_tegra186 = {
		    NVQUIRK_DIS_CARD_CLK_CONFIG_TAP |
		    NVQUIRK_ENABLE_SDR50 |
		    NVQUIRK_ENABLE_SDR104 |
		    NVQUIRK_HAS_TMCLK |
		    NVQUIRK_CQHCI_DCMD_R1B_CMD_TIMING,
	.min_tap_delay = 84,
	.max_tap_delay = 136,

@@ -1434,7 +1443,8 @@ static const struct sdhci_tegra_soc_data soc_data_tegra194 = {
		    NVQUIRK_HAS_PADCALIB |
		    NVQUIRK_DIS_CARD_CLK_CONFIG_TAP |
		    NVQUIRK_ENABLE_SDR50 |
		    NVQUIRK_ENABLE_SDR104,
		    NVQUIRK_ENABLE_SDR104 |
		    NVQUIRK_HAS_TMCLK,
	.min_tap_delay = 96,
	.max_tap_delay = 139,
};

@@ -1562,6 +1572,43 @@ static int sdhci_tegra_probe(struct platform_device *pdev)
		goto err_power_req;
	}

	/*
	 * Tegra210 has a separate SDMMC_LEGACY_TM clock used for host
	 * timeout clock and SW can choose TMCLK or SDCLK for hardware
	 * data timeout through the bit USE_TMCLK_FOR_DATA_TIMEOUT of
	 * the register SDHCI_TEGRA_VENDOR_SYS_SW_CTRL.
	 *
	 * USE_TMCLK_FOR_DATA_TIMEOUT bit default is set to 1 and SDMMC uses
	 * 12Mhz TMCLK which is advertised in host capability register.
	 * With TMCLK of 12Mhz provides maximum data timeout period that can
	 * be achieved is 11s better than using SDCLK for data timeout.
	 *
	 * So, TMCLK is set to 12Mhz and kept enabled all the time on SoC's
	 * supporting separate TMCLK.
	 */

	if (soc_data->nvquirks & NVQUIRK_HAS_TMCLK) {
		clk = devm_clk_get(&pdev->dev, "tmclk");
		if (IS_ERR(clk)) {
			rc = PTR_ERR(clk);
			if (rc == -EPROBE_DEFER)
				goto err_power_req;

			dev_warn(&pdev->dev, "failed to get tmclk: %d\n", rc);
			clk = NULL;
		}

		clk_set_rate(clk, 12000000);
		rc = clk_prepare_enable(clk);
		if (rc) {
			dev_err(&pdev->dev,
				"failed to enable tmclk: %d\n", rc);
			goto err_power_req;
		}

		tegra_host->tmclk = clk;
	}

	clk = devm_clk_get(mmc_dev(host->mmc), NULL);
	if (IS_ERR(clk)) {
		rc = PTR_ERR(clk);

@@ -1605,6 +1652,7 @@ err_add_host:
err_rst_get:
	clk_disable_unprepare(pltfm_host->clk);
err_clk_get:
	clk_disable_unprepare(tegra_host->tmclk);
err_power_req:
err_parse_dt:
	sdhci_pltfm_free(pdev);

@@ -1622,6 +1670,7 @@ static int sdhci_tegra_remove(struct platform_device *pdev)
	reset_control_assert(tegra_host->rst);
	usleep_range(2000, 4000);
	clk_disable_unprepare(pltfm_host->clk);
	clk_disable_unprepare(tegra_host->tmclk);

	sdhci_pltfm_free(pdev);
@@ -1270,9 +1270,6 @@ static int ksz8795_switch_init(struct ksz_device *dev)
	/* set the real number of ports */
	dev->ds->num_ports = dev->port_cnt;

	/* set the real number of ports */
	dev->ds->num_ports = dev->port_cnt;

	return 0;
}
@@ -515,9 +515,6 @@ static int ksz9477_port_vlan_filtering(struct dsa_switch *ds, int port,
			     PORT_VLAN_LOOKUP_VID_0, false);
}

	/* set the real number of ports */
	dev->ds->num_ports = dev->port_cnt;

	return 0;
}
@@ -1456,7 +1456,7 @@ unsupported:
	phylink_set(mask, 100baseT_Full);

	if (state->interface != PHY_INTERFACE_MODE_MII) {
		phylink_set(mask, 1000baseT_Half);
		/* This switch only supports 1G full-duplex. */
		phylink_set(mask, 1000baseT_Full);
		if (port == 5)
			phylink_set(mask, 1000baseX_Full);
@@ -153,6 +153,7 @@ int arc_mdio_probe(struct arc_emac_priv *priv)
	if (IS_ERR(data->reset_gpio)) {
		error = PTR_ERR(data->reset_gpio);
		dev_err(priv->dev, "Failed to request gpio: %d\n", error);
		mdiobus_free(bus);
		return error;
	}
@@ -2453,8 +2453,10 @@ static int bcm_sysport_probe(struct platform_device *pdev)
	priv->tx_rings = devm_kcalloc(&pdev->dev, txq,
				      sizeof(struct bcm_sysport_tx_ring),
				      GFP_KERNEL);
	if (!priv->tx_rings)
		return -ENOMEM;
	if (!priv->tx_rings) {
		ret = -ENOMEM;
		goto err_free_netdev;
	}

	priv->is_lite = params->is_lite;
	priv->num_rx_desc_words = params->num_rx_desc_words;
@@ -1143,6 +1143,9 @@ static int bnxt_discard_rx(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,

static void bnxt_queue_fw_reset_work(struct bnxt *bp, unsigned long delay)
{
	if (!(test_bit(BNXT_STATE_IN_FW_RESET, &bp->state)))
		return;

	if (BNXT_PF(bp))
		queue_delayed_work(bnxt_pf_wq, &bp->fw_reset_task, delay);
	else

@@ -1159,10 +1162,12 @@ static void bnxt_queue_sp_work(struct bnxt *bp)

static void bnxt_cancel_sp_work(struct bnxt *bp)
{
	if (BNXT_PF(bp))
	if (BNXT_PF(bp)) {
		flush_workqueue(bnxt_pf_wq);
	else
	} else {
		cancel_work_sync(&bp->sp_task);
		cancel_delayed_work_sync(&bp->fw_reset_task);
	}
}

static void bnxt_sched_reset(struct bnxt *bp, struct bnxt_rx_ring_info *rxr)

@@ -8933,16 +8938,19 @@ static ssize_t bnxt_show_temp(struct device *dev,
	struct hwrm_temp_monitor_query_input req = {0};
	struct hwrm_temp_monitor_query_output *resp;
	struct bnxt *bp = dev_get_drvdata(dev);
	u32 temp = 0;
	u32 len = 0;

	resp = bp->hwrm_cmd_resp_addr;
	bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_TEMP_MONITOR_QUERY, -1, -1);
	mutex_lock(&bp->hwrm_cmd_lock);
	if (!_hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT))
		temp = resp->temp * 1000; /* display millidegree */
	if (!_hwrm_send_message_silent(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT))
		len = sprintf(buf, "%u\n", resp->temp * 1000); /* display millidegree */
	mutex_unlock(&bp->hwrm_cmd_lock);

	return sprintf(buf, "%u\n", temp);
	if (len)
		return len;

	return sprintf(buf, "unknown\n");
}
static SENSOR_DEVICE_ATTR(temp1_input, 0444, bnxt_show_temp, NULL, 0);

@@ -9124,15 +9132,15 @@ static int __bnxt_open_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init)
		}
	}

	bnxt_enable_napi(bp);
	bnxt_debug_dev_init(bp);

	rc = bnxt_init_nic(bp, irq_re_init);
	if (rc) {
		netdev_err(bp->dev, "bnxt_init_nic err: %x\n", rc);
		goto open_err;
		goto open_err_irq;
	}

	bnxt_enable_napi(bp);
	bnxt_debug_dev_init(bp);

	if (link_re_init) {
		mutex_lock(&bp->link_lock);
		rc = bnxt_update_phy_setting(bp);

@@ -9163,10 +9171,6 @@ static int __bnxt_open_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init)
	bnxt_vf_reps_open(bp);
	return 0;

open_err:
	bnxt_debug_dev_exit(bp);
	bnxt_disable_napi(bp);

open_err_irq:
	bnxt_del_napi(bp);

@@ -11386,6 +11390,7 @@ static void bnxt_remove_one(struct pci_dev *pdev)
	unregister_netdev(dev);
	bnxt_dl_unregister(bp);
	bnxt_shutdown_tc(bp);
	clear_bit(BNXT_STATE_IN_FW_RESET, &bp->state);
	bnxt_cancel_sp_work(bp);
	bp->sp_event = 0;

@@ -11900,6 +11905,7 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
		    (long)pci_resource_start(pdev, 0), dev->dev_addr);
	pcie_print_link_status(pdev);

	pci_save_state(pdev);
	return 0;

init_err_cleanup:

@@ -12066,6 +12072,8 @@ static pci_ers_result_t bnxt_io_slot_reset(struct pci_dev *pdev)
			"Cannot re-enable PCI device after reset.\n");
	} else {
		pci_set_master(pdev);
		pci_restore_state(pdev);
		pci_save_state(pdev);

		err = bnxt_hwrm_func_reset(bp);
		if (!err && netif_running(netdev))
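The bnxt_show_temp() hunk above switches the sysfs attribute from silently printing 0 on a firmware error to printing "unknown". A self-contained sketch of that fallback pattern (the query function and its arguments below are stand-ins, not the driver's HWRM interface):

#include <stdio.h>

/* Stand-in for the firmware query; returns 0 on success. */
static int query_temp(int fw_ok, unsigned int *temp)
{
	if (!fw_ok)
		return -1;
	*temp = 45;
	return 0;
}

/* sysfs-style show(): report "unknown" instead of a misleading 0. */
static int show_temp(char *buf, int fw_ok)
{
	unsigned int temp;

	if (!query_temp(fw_ok, &temp))
		return sprintf(buf, "%u\n", temp * 1000); /* millidegree */
	return sprintf(buf, "unknown\n");
}

int main(void)
{
	char buf[32];

	show_temp(buf, 0);
	printf("%s", buf);	/* prints "unknown" when the query fails */
	return 0;
}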
@@ -769,7 +769,7 @@ static void bnxt_get_channels(struct net_device *dev,
	int max_tx_sch_inputs;

	/* Get the most up-to-date max_tx_sch_inputs. */
	if (BNXT_NEW_RM(bp))
	if (netif_running(dev) && BNXT_NEW_RM(bp))
		bnxt_hwrm_func_resc_qcaps(bp, false);
	max_tx_sch_inputs = hw_resc->max_tx_sch_inputs;

@@ -2161,6 +2161,9 @@ static int bnxt_get_nvram_directory(struct net_device *dev, u32 len, u8 *data)
	if (rc != 0)
		return rc;

	if (!dir_entries || !entry_length)
		return -EIO;

	/* Insert 2 bytes of directory info (count and size of entries) */
	if (len < 2)
		return -EINVAL;
@@ -7227,8 +7227,8 @@ static inline void tg3_reset_task_schedule(struct tg3 *tp)

static inline void tg3_reset_task_cancel(struct tg3 *tp)
{
	cancel_work_sync(&tp->reset_task);
	tg3_flag_clear(tp, RESET_TASK_PENDING);
	if (test_and_clear_bit(TG3_FLAG_RESET_TASK_PENDING, tp->tg3_flags))
		cancel_work_sync(&tp->reset_task);
	tg3_flag_clear(tp, TX_RECOVERY_PENDING);
}

@@ -11219,18 +11219,27 @@ static void tg3_reset_task(struct work_struct *work)

	tg3_halt(tp, RESET_KIND_SHUTDOWN, 0);
	err = tg3_init_hw(tp, true);
	if (err)
	if (err) {
		tg3_full_unlock(tp);
		tp->irq_sync = 0;
		tg3_napi_enable(tp);
		/* Clear this flag so that tg3_reset_task_cancel() will not
		 * call cancel_work_sync() and wait forever.
		 */
		tg3_flag_clear(tp, RESET_TASK_PENDING);
		dev_close(tp->dev);
		goto out;
	}

	tg3_netif_start(tp);

out:
	tg3_full_unlock(tp);

	if (!err)
		tg3_phy_start(tp);

	tg3_flag_clear(tp, RESET_TASK_PENDING);
out:
	rtnl_unlock();
}
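The tg3 hunk above gates cancel_work_sync() on an atomic test-and-clear of the PENDING flag, because the failing reset task now clears that flag itself; waiting on it unconditionally could block forever. A userspace analog of that gate using C11 atomics (names are illustrative):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool reset_pending = false;

static void cancel_work_sync_stub(void)
{
	/* stand-in for cancel_work_sync(&tp->reset_task) */
	puts("waiting for reset task to finish");
}

/* Only wait on the task if we are the one who cleared the flag; if the
 * task already cleared it on its error path, there is nothing to wait
 * for and blocking here could hang. */
static void reset_task_cancel(void)
{
	bool was_pending = atomic_exchange(&reset_pending, false);

	if (was_pending)
		cancel_work_sync_stub();
}

int main(void)
{
	atomic_store(&reset_pending, true);
	reset_task_cancel();	/* waits: flag was still set */
	reset_task_cancel();	/* no-op: flag already cleared */
	return 0;
}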
@@ -62,6 +62,7 @@ static struct thermal_zone_device_ops cxgb4_thermal_ops = {
int cxgb4_thermal_init(struct adapter *adap)
{
	struct ch_thermal *ch_thermal = &adap->ch_thermal;
	char ch_tz_name[THERMAL_NAME_LENGTH];
	int num_trip = CXGB4_NUM_TRIPS;
	u32 param, val;
	int ret;

@@ -82,7 +83,8 @@ int cxgb4_thermal_init(struct adapter *adap)
		ch_thermal->trip_type = THERMAL_TRIP_CRITICAL;
	}

	ch_thermal->tzdev = thermal_zone_device_register("cxgb4", num_trip,
	snprintf(ch_tz_name, sizeof(ch_tz_name), "cxgb4_%s", adap->name);
	ch_thermal->tzdev = thermal_zone_device_register(ch_tz_name, num_trip,
							 0, adap,
							 &cxgb4_thermal_ops,
							 NULL, 0, 0);

@@ -97,7 +99,9 @@ int cxgb4_thermal_init(struct adapter *adap)

int cxgb4_thermal_remove(struct adapter *adap)
{
	if (adap->ch_thermal.tzdev)
	if (adap->ch_thermal.tzdev) {
		thermal_zone_device_unregister(adap->ch_thermal.tzdev);
		adap->ch_thermal.tzdev = NULL;
	}
	return 0;
}
@@ -2445,8 +2445,8 @@ static int gemini_ethernet_port_probe(struct platform_device *pdev)
	port->reset = devm_reset_control_get_exclusive(dev, NULL);
	if (IS_ERR(port->reset)) {
		dev_err(dev, "no reset\n");
		clk_disable_unprepare(port->pclk);
		return PTR_ERR(port->reset);
		ret = PTR_ERR(port->reset);
		goto unprepare;
	}
	reset_control_reset(port->reset);
	usleep_range(100, 500);

@@ -2501,25 +2501,25 @@
					IRQF_SHARED,
					port_names[port->id],
					port);
	if (ret) {
		clk_disable_unprepare(port->pclk);
		return ret;
	}
	if (ret)
		goto unprepare;

	ret = register_netdev(netdev);
	if (!ret) {
		netdev_info(netdev,
			    "irq %d, DMA @ 0x%pap, GMAC @ 0x%pap\n",
			    port->irq, &dmares->start,
			    &gmacres->start);
		ret = gmac_setup_phy(netdev);
		if (ret)
			netdev_info(netdev,
				    "PHY init failed, deferring to ifup time\n");
		return 0;
	}
	if (ret)
		goto unprepare;

	port->netdev = NULL;
	netdev_info(netdev,
		    "irq %d, DMA @ 0x%pap, GMAC @ 0x%pap\n",
		    port->irq, &dmares->start,
		    &gmacres->start);
	ret = gmac_setup_phy(netdev);
	if (ret)
		netdev_info(netdev,
			    "PHY init failed, deferring to ifup time\n");
	return 0;

unprepare:
	clk_disable_unprepare(port->pclk);
	return ret;
}
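The gemini probe hunk above funnels every post-clock-enable failure through a single unprepare: label, so the clock is released exactly once on any error path. A minimal sketch of that goto-unwind idiom, assuming stub resources in place of the real clk/reset/irq calls:

#include <stdio.h>

static int clk_enabled;

static int clk_prepare_enable_stub(void) { clk_enabled = 1; return 0; }
static void clk_disable_unprepare_stub(void) { clk_enabled = 0; }
static int acquire_stub(int ok) { return ok ? 0 : -1; }

/* Single exit label: every failure after the clock is enabled funnels
 * through "unprepare", so cleanup lives in one place. */
static int probe(int reset_ok, int irq_ok)
{
	int ret;

	ret = clk_prepare_enable_stub();
	if (ret)
		return ret;

	ret = acquire_stub(reset_ok);	/* stand-in for getting the reset line */
	if (ret)
		goto unprepare;

	ret = acquire_stub(irq_ok);	/* stand-in for request_irq() etc. */
	if (ret)
		goto unprepare;

	return 0;

unprepare:
	clk_disable_unprepare_stub();
	return ret;
}

int main(void)
{
	printf("probe ret=%d, clk_enabled=%d\n", probe(1, 0), clk_enabled);
	return 0;
}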
@@ -2296,8 +2296,10 @@ static int hns_nic_dev_probe(struct platform_device *pdev)
		priv->enet_ver = AE_VERSION_1;
	else if (acpi_dev_found(hns_enet_acpi_match[1].id))
		priv->enet_ver = AE_VERSION_2;
	else
		return -ENXIO;
	else {
		ret = -ENXIO;
		goto out_read_prop_fail;
	}

	/* try to find port-idx-in-ae first */
	ret = acpi_node_get_property_reference(dev->fwnode,

@@ -2313,7 +2315,8 @@
		priv->fwnode = args.fwnode;
	} else {
		dev_err(dev, "cannot read cfg data from OF or acpi\n");
		return -ENXIO;
		ret = -ENXIO;
		goto out_read_prop_fail;
	}

	ret = device_property_read_u32(dev, "port-idx-in-ae", &port_id);
@@ -114,7 +114,7 @@ static int mlx4_buddy_init(struct mlx4_buddy *buddy, int max_order)
		goto err_out;

	for (i = 0; i <= buddy->max_order; ++i) {
		s = BITS_TO_LONGS(1 << (buddy->max_order - i));
		s = BITS_TO_LONGS(1UL << (buddy->max_order - i));
		buddy->bits[i] = kvmalloc_array(s, sizeof(long), GFP_KERNEL | __GFP_ZERO);
		if (!buddy->bits[i])
			goto err_out_free;
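The one-character mlx4 fix above matters because 1 << order is computed in plain int: once the shift reaches 31 the value no longer fits, so the size fed to the allocator is garbage. Promoting the constant to unsigned long (1UL) keeps the arithmetic wide enough on 64-bit. A small demonstration:

#include <limits.h>
#include <stdio.h>

int main(void)
{
	int order = 31;

	/* 1 << 31 does not fit in a 32-bit int: signed overflow is
	 * undefined, and in practice the result wraps negative, so any
	 * size computed from it is bogus. */
	printf("INT_MAX   = %d\n", INT_MAX);

	/* 1UL promotes the whole expression to unsigned long, which is
	 * 64 bits on LP64 targets, so the shift is well-defined. */
	printf("1UL << %d = %lu\n", order, 1UL << order);
	return 0;
}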
@@ -1336,6 +1336,51 @@ static inline int ravb_hook_irq(unsigned int irq, irq_handler_t handler,
	return error;
}

/* MDIO bus init function */
static int ravb_mdio_init(struct ravb_private *priv)
{
	struct platform_device *pdev = priv->pdev;
	struct device *dev = &pdev->dev;
	int error;

	/* Bitbang init */
	priv->mdiobb.ops = &bb_ops;

	/* MII controller setting */
	priv->mii_bus = alloc_mdio_bitbang(&priv->mdiobb);
	if (!priv->mii_bus)
		return -ENOMEM;

	/* Hook up MII support for ethtool */
	priv->mii_bus->name = "ravb_mii";
	priv->mii_bus->parent = dev;
	snprintf(priv->mii_bus->id, MII_BUS_ID_SIZE, "%s-%x",
		 pdev->name, pdev->id);

	/* Register MDIO bus */
	error = of_mdiobus_register(priv->mii_bus, dev->of_node);
	if (error)
		goto out_free_bus;

	return 0;

out_free_bus:
	free_mdio_bitbang(priv->mii_bus);
	return error;
}

/* MDIO bus release function */
static int ravb_mdio_release(struct ravb_private *priv)
{
	/* Unregister mdio bus */
	mdiobus_unregister(priv->mii_bus);

	/* Free bitbang info */
	free_mdio_bitbang(priv->mii_bus);

	return 0;
}

/* Network device open function for Ethernet AVB */
static int ravb_open(struct net_device *ndev)
{

@@ -1344,6 +1389,13 @@ static int ravb_open(struct net_device *ndev)
	struct device *dev = &pdev->dev;
	int error;

	/* MDIO bus init */
	error = ravb_mdio_init(priv);
	if (error) {
		netdev_err(ndev, "failed to initialize MDIO\n");
		return error;
	}

	napi_enable(&priv->napi[RAVB_BE]);
	napi_enable(&priv->napi[RAVB_NC]);

@@ -1421,6 +1473,7 @@ out_free_irq:
out_napi_off:
	napi_disable(&priv->napi[RAVB_NC]);
	napi_disable(&priv->napi[RAVB_BE]);
	ravb_mdio_release(priv);
	return error;
}

@@ -1730,6 +1783,8 @@ static int ravb_close(struct net_device *ndev)
	ravb_ring_free(ndev, RAVB_BE);
	ravb_ring_free(ndev, RAVB_NC);

	ravb_mdio_release(priv);

	return 0;
}

@@ -1881,51 +1936,6 @@ static const struct net_device_ops ravb_netdev_ops = {
	.ndo_set_features	= ravb_set_features,
};

/* MDIO bus init function */
static int ravb_mdio_init(struct ravb_private *priv)
{
	struct platform_device *pdev = priv->pdev;
	struct device *dev = &pdev->dev;
	int error;

	/* Bitbang init */
	priv->mdiobb.ops = &bb_ops;

	/* MII controller setting */
	priv->mii_bus = alloc_mdio_bitbang(&priv->mdiobb);
	if (!priv->mii_bus)
		return -ENOMEM;

	/* Hook up MII support for ethtool */
	priv->mii_bus->name = "ravb_mii";
	priv->mii_bus->parent = dev;
	snprintf(priv->mii_bus->id, MII_BUS_ID_SIZE, "%s-%x",
		 pdev->name, pdev->id);

	/* Register MDIO bus */
	error = of_mdiobus_register(priv->mii_bus, dev->of_node);
	if (error)
		goto out_free_bus;

	return 0;

out_free_bus:
	free_mdio_bitbang(priv->mii_bus);
	return error;
}

/* MDIO bus release function */
static int ravb_mdio_release(struct ravb_private *priv)
{
	/* Unregister mdio bus */
	mdiobus_unregister(priv->mii_bus);

	/* Free bitbang info */
	free_mdio_bitbang(priv->mii_bus);

	return 0;
}

static const struct of_device_id ravb_match_table[] = {
	{ .compatible = "renesas,etheravb-r8a7790", .data = (void *)RCAR_GEN2 },
	{ .compatible = "renesas,etheravb-r8a7794", .data = (void *)RCAR_GEN2 },

@@ -2166,13 +2176,6 @@ static int ravb_probe(struct platform_device *pdev)
		eth_hw_addr_random(ndev);
	}

	/* MDIO bus init */
	error = ravb_mdio_init(priv);
	if (error) {
		dev_err(&pdev->dev, "failed to initialize MDIO\n");
		goto out_dma_free;
	}

	netif_napi_add(ndev, &priv->napi[RAVB_BE], ravb_poll, 64);
	netif_napi_add(ndev, &priv->napi[RAVB_NC], ravb_poll, 64);

@@ -2194,8 +2197,6 @@ static int ravb_probe(struct platform_device *pdev)
out_napi_del:
	netif_napi_del(&priv->napi[RAVB_NC]);
	netif_napi_del(&priv->napi[RAVB_BE]);
	ravb_mdio_release(priv);
out_dma_free:
	dma_free_coherent(ndev->dev.parent, priv->desc_bat_size, priv->desc_bat,
			  priv->desc_bat_dma);

@@ -2227,7 +2228,6 @@ static int ravb_remove(struct platform_device *pdev)
	unregister_netdev(ndev);
	netif_napi_del(&priv->napi[RAVB_NC]);
	netif_napi_del(&priv->napi[RAVB_BE]);
	ravb_mdio_release(priv);
	pm_runtime_disable(&pdev->dev);
	free_netdev(ndev);
	platform_set_drvdata(pdev, NULL);
@@ -2209,7 +2209,7 @@ static int cpsw_ndo_vlan_rx_kill_vid(struct net_device *ndev,
				  HOST_PORT_NUM, ALE_VLAN, vid);
	ret |= cpsw_ale_del_mcast(cpsw->ale, priv->ndev->broadcast,
				  0, ALE_VLAN, vid);
	ret |= cpsw_ale_flush_multicast(cpsw->ale, 0, vid);
	ret |= cpsw_ale_flush_multicast(cpsw->ale, ALE_PORT_HOST, vid);
err:
	pm_runtime_put(cpsw->dev);
	return ret;
@@ -1182,6 +1182,7 @@ static int gtp_genl_fill_info(struct sk_buff *skb, u32 snd_portid, u32 snd_seq,
		goto nlmsg_failure;

	if (nla_put_u32(skb, GTPA_VERSION, pctx->gtp_version) ||
	    nla_put_u32(skb, GTPA_LINK, pctx->dev->ifindex) ||
	    nla_put_be32(skb, GTPA_PEER_ADDRESS, pctx->peer_addr_ip4.s_addr) ||
	    nla_put_be32(skb, GTPA_MS_ADDRESS, pctx->ms_addr_ip4.s_addr))
		goto nla_put_failure;
@@ -296,7 +296,7 @@ int asix_read_phy_addr(struct usbnet *dev, int internal)

	netdev_dbg(dev->net, "asix_get_phy_addr()\n");

	if (ret < 0) {
	if (ret < 2) {
		netdev_err(dev->net, "Error reading PHYID register: %02x\n", ret);
		goto out;
	}
@@ -4012,7 +4012,7 @@ static void nvme_free_ctrl(struct device *dev)
		container_of(dev, struct nvme_ctrl, ctrl_device);
	struct nvme_subsystem *subsys = ctrl->subsys;

	if (subsys && ctrl->instance != subsys->instance)
	if (!subsys || ctrl->instance != subsys->instance)
		ida_simple_remove(&nvme_instance_ida, ctrl->instance);

	kfree(ctrl->effects);
@@ -369,6 +369,9 @@ static void nvmet_keep_alive_timer(struct work_struct *work)

static void nvmet_start_keep_alive_timer(struct nvmet_ctrl *ctrl)
{
	if (unlikely(ctrl->kato == 0))
		return;

	pr_debug("ctrl %d start keep-alive timer for %d secs\n",
		ctrl->cntlid, ctrl->kato);

@@ -378,6 +381,9 @@ static void nvmet_start_keep_alive_timer(struct nvmet_ctrl *ctrl)

static void nvmet_stop_keep_alive_timer(struct nvmet_ctrl *ctrl)
{
	if (unlikely(ctrl->kato == 0))
		return;

	pr_debug("ctrl %d stop keep-alive\n", ctrl->cntlid);

	cancel_delayed_work_sync(&ctrl->ka_work);
@@ -1994,9 +1994,9 @@ nvmet_fc_fod_op_done(struct nvmet_fc_fcp_iod *fod)
			return;
		if (fcpreq->fcp_error ||
		    fcpreq->transferred_length != fcpreq->transfer_length) {
			spin_lock(&fod->flock);
			spin_lock_irqsave(&fod->flock, flags);
			fod->abort = true;
			spin_unlock(&fod->flock);
			spin_unlock_irqrestore(&fod->flock, flags);

			nvmet_req_complete(&fod->req, NVME_SC_INTERNAL);
			return;
@@ -159,6 +159,7 @@ static int cedrus_request_validate(struct media_request *req)
	struct v4l2_ctrl *ctrl_test;
	unsigned int count;
	unsigned int i;
	int ret = 0;

	list_for_each_entry(obj, &req->objects, list) {
		struct vb2_buffer *vb;

@@ -203,12 +204,16 @@ static int cedrus_request_validate(struct media_request *req)
		if (!ctrl_test) {
			v4l2_info(&ctx->dev->v4l2_dev,
				  "Missing required codec control\n");
			return -ENOENT;
			ret = -ENOENT;
			break;
		}
	}

	v4l2_ctrl_request_hdl_put(hdl);

	if (ret)
		return ret;

	return vb2_request_validate(req);
}
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
 * Copyright (c) 2011-2015, 2017, The Linux Foundation. All rights reserved.
 * Copyright (c) 2011-2015, 2017, 2020, The Linux Foundation. All rights reserved.
 */

#include <linux/bitops.h>

@@ -191,7 +191,7 @@ static int qpnp_tm_get_temp(void *data, int *temp)
		chip->temp = mili_celsius;
	}

	*temp = chip->temp < 0 ? 0 : chip->temp;
	*temp = chip->temp;

	return 0;
}
@@ -37,20 +37,21 @@ static struct temp_sensor_data omap4430_mpu_temp_sensor_data = {

/*
 * Temperature values in milli degree celsius
 * ADC code values from 530 to 923
 * ADC code values from 13 to 107, see TRM
 * "18.4.10.2.3 ADC Codes Versus Temperature".
 */
static const int
omap4430_adc_to_temp[OMAP4430_ADC_END_VALUE - OMAP4430_ADC_START_VALUE + 1] = {
	-38000, -35000, -34000, -32000, -30000, -28000, -26000, -24000, -22000,
	-20000, -18000, -17000, -15000, -13000, -12000, -10000, -8000, -6000,
	-5000, -3000, -1000, 0, 2000, 3000, 5000, 6000, 8000, 10000, 12000,
	13000, 15000, 17000, 19000, 21000, 23000, 25000, 27000, 28000, 30000,
	32000, 33000, 35000, 37000, 38000, 40000, 42000, 43000, 45000, 47000,
	48000, 50000, 52000, 53000, 55000, 57000, 58000, 60000, 62000, 64000,
	66000, 68000, 70000, 71000, 73000, 75000, 77000, 78000, 80000, 82000,
	83000, 85000, 87000, 88000, 90000, 92000, 93000, 95000, 97000, 98000,
	100000, 102000, 103000, 105000, 107000, 109000, 111000, 113000, 115000,
	117000, 118000, 120000, 122000, 123000,
	-40000, -38000, -35000, -34000, -32000, -30000, -28000, -26000, -24000,
	-22000, -20000, -18500, -17000, -15000, -13500, -12000, -10000, -8000,
	-6500, -5000, -3500, -1500, 0, 2000, 3500, 5000, 6500, 8500, 10000,
	12000, 13500, 15000, 17000, 19000, 21000, 23000, 25000, 27000, 28500,
	30000, 32000, 33500, 35000, 37000, 38500, 40000, 42000, 43500, 45000,
	47000, 48500, 50000, 52000, 53500, 55000, 57000, 58500, 60000, 62000,
	64000, 66000, 68000, 70000, 71500, 73500, 75000, 77000, 78500, 80000,
	82000, 83500, 85000, 87000, 88500, 90000, 92000, 93500, 95000, 97000,
	98500, 100000, 102000, 103500, 105000, 107000, 109000, 111000, 113000,
	115000, 117000, 118500, 120000, 122000, 123500, 125000,
};

/* OMAP4430 data */
@@ -53,9 +53,13 @@
 * and thresholds for OMAP4430.
 */

/* ADC conversion table limits */
#define OMAP4430_ADC_START_VALUE			0
#define OMAP4430_ADC_END_VALUE				127
/*
 * ADC conversion table limits. Ignore values outside the TRM listed
 * range to avoid bogus thermal shutdowns. See omap4430 TRM chapter
 * "18.4.10.2.3 ADC Codes Versus Temperature".
 */
#define OMAP4430_ADC_START_VALUE			13
#define OMAP4430_ADC_END_VALUE				107
/* bandgap clock limits (no control on 4430) */
#define OMAP4430_MAX_FREQ				32768
#define OMAP4430_MIN_FREQ				32768
@@ -1063,7 +1063,7 @@ static unsigned int qcom_geni_serial_tx_empty(struct uart_port *uport)
}

#ifdef CONFIG_SERIAL_QCOM_GENI_CONSOLE
static int __init qcom_geni_console_setup(struct console *co, char *options)
static int qcom_geni_console_setup(struct console *co, char *options)
{
	struct uart_port *uport;
	struct qcom_geni_serial_port *port;
@@ -27,6 +27,7 @@
#include <linux/vfio.h>
#include <linux/vgaarb.h>
#include <linux/nospec.h>
#include <linux/sched/mm.h>

#include "vfio_pci_private.h"

@@ -177,6 +178,7 @@ no_mmap:

static void vfio_pci_try_bus_reset(struct vfio_pci_device *vdev);
static void vfio_pci_disable(struct vfio_pci_device *vdev);
static int vfio_pci_try_zap_and_vma_lock_cb(struct pci_dev *pdev, void *data);

/*
 * INTx masking requires the ability to disable INTx signaling via PCI_COMMAND

@@ -688,6 +690,12 @@ int vfio_pci_register_dev_region(struct vfio_pci_device *vdev,
	return 0;
}

struct vfio_devices {
	struct vfio_device **devices;
	int cur_index;
	int max_index;
};

static long vfio_pci_ioctl(void *device_data,
			   unsigned int cmd, unsigned long arg)
{
@@ -761,7 +769,7 @@ static long vfio_pci_ioctl(void *device_data,
		{
			void __iomem *io;
			size_t size;
			u16 orig_cmd;
			u16 cmd;

			info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
			info.flags = 0;

@@ -781,10 +789,7 @@ static long vfio_pci_ioctl(void *device_data,
			 * Is it really there? Enable memory decode for
			 * implicit access in pci_map_rom().
			 */
			pci_read_config_word(pdev, PCI_COMMAND, &orig_cmd);
			pci_write_config_word(pdev, PCI_COMMAND,
					      orig_cmd | PCI_COMMAND_MEMORY);

			cmd = vfio_pci_memory_lock_and_enable(vdev);
			io = pci_map_rom(pdev, &size);
			if (io) {
				info.flags = VFIO_REGION_INFO_FLAG_READ;

@@ -792,8 +797,8 @@ static long vfio_pci_ioctl(void *device_data,
			} else {
				info.size = 0;
			}
			vfio_pci_memory_unlock_and_restore(vdev, cmd);

			pci_write_config_word(pdev, PCI_COMMAND, orig_cmd);
			break;
		}
		case VFIO_PCI_VGA_REGION_INDEX:

@@ -936,8 +941,16 @@ static long vfio_pci_ioctl(void *device_data,
			return ret;

	} else if (cmd == VFIO_DEVICE_RESET) {
		return vdev->reset_works ?
			pci_try_reset_function(vdev->pdev) : -EINVAL;
		int ret;

		if (!vdev->reset_works)
			return -EINVAL;

		vfio_pci_zap_and_down_write_memory_lock(vdev);
		ret = pci_try_reset_function(vdev->pdev);
		up_write(&vdev->memory_lock);

		return ret;

	} else if (cmd == VFIO_DEVICE_GET_PCI_HOT_RESET_INFO) {
		struct vfio_pci_hot_reset_info hdr;
@@ -1017,8 +1030,9 @@ reset_info_exit:
		int32_t *group_fds;
		struct vfio_pci_group_entry *groups;
		struct vfio_pci_group_info info;
		struct vfio_devices devs = { .cur_index = 0 };
		bool slot = false;
		int i, count = 0, ret = 0;
		int i, group_idx, mem_idx = 0, count = 0, ret = 0;

		minsz = offsetofend(struct vfio_pci_hot_reset, count);

@@ -1070,9 +1084,9 @@ reset_info_exit:
		 * user interface and store the group and iommu ID.  This
		 * ensures the group is held across the reset.
		 */
		for (i = 0; i < hdr.count; i++) {
		for (group_idx = 0; group_idx < hdr.count; group_idx++) {
			struct vfio_group *group;
			struct fd f = fdget(group_fds[i]);
			struct fd f = fdget(group_fds[group_idx]);
			if (!f.file) {
				ret = -EBADF;
				break;

@@ -1085,8 +1099,9 @@ reset_info_exit:
				break;
			}

			groups[i].group = group;
			groups[i].id = vfio_external_user_iommu_id(group);
			groups[group_idx].group = group;
			groups[group_idx].id =
					vfio_external_user_iommu_id(group);
		}

		kfree(group_fds);

@@ -1105,13 +1120,63 @@ reset_info_exit:
		ret = vfio_pci_for_each_slot_or_bus(vdev->pdev,
						    vfio_pci_validate_devs,
						    &info, slot);
		if (!ret)
			/* User has access, do the reset */
			ret = pci_reset_bus(vdev->pdev);
		if (ret)
			goto hot_reset_release;

		devs.max_index = count;
		devs.devices = kcalloc(count, sizeof(struct vfio_device *),
				       GFP_KERNEL);
		if (!devs.devices) {
			ret = -ENOMEM;
			goto hot_reset_release;
		}

		/*
		 * We need to get memory_lock for each device, but devices
		 * can share mmap_sem, therefore we need to zap and hold
		 * the vma_lock for each device, and only then get each
		 * memory_lock.
		 */
		ret = vfio_pci_for_each_slot_or_bus(vdev->pdev,
					    vfio_pci_try_zap_and_vma_lock_cb,
					    &devs, slot);
		if (ret)
			goto hot_reset_release;

		for (; mem_idx < devs.cur_index; mem_idx++) {
			struct vfio_pci_device *tmp;

			tmp = vfio_device_data(devs.devices[mem_idx]);

			ret = down_write_trylock(&tmp->memory_lock);
			if (!ret) {
				ret = -EBUSY;
				goto hot_reset_release;
			}
			mutex_unlock(&tmp->vma_lock);
		}

		/* User has access, do the reset */
		ret = pci_reset_bus(vdev->pdev);

hot_reset_release:
		for (i--; i >= 0; i--)
			vfio_group_put_external_user(groups[i].group);
		for (i = 0; i < devs.cur_index; i++) {
			struct vfio_device *device;
			struct vfio_pci_device *tmp;

			device = devs.devices[i];
			tmp = vfio_device_data(device);

			if (i < mem_idx)
				up_write(&tmp->memory_lock);
			else
				mutex_unlock(&tmp->vma_lock);
			vfio_device_put(device);
		}
		kfree(devs.devices);

		for (group_idx--; group_idx >= 0; group_idx--)
			vfio_group_put_external_user(groups[group_idx].group);

		kfree(groups);
		return ret;
@@ -1192,6 +1257,202 @@ static ssize_t vfio_pci_write(void *device_data, const char __user *buf,
	return vfio_pci_rw(device_data, (char __user *)buf, count, ppos, true);
}

/* Return 1 on zap and vma_lock acquired, 0 on contention (only with @try) */
static int vfio_pci_zap_and_vma_lock(struct vfio_pci_device *vdev, bool try)
{
	struct vfio_pci_mmap_vma *mmap_vma, *tmp;

	/*
	 * Lock ordering:
	 * vma_lock is nested under mmap_sem for vm_ops callback paths.
	 * The memory_lock semaphore is used by both code paths calling
	 * into this function to zap vmas and the vm_ops.fault callback
	 * to protect the memory enable state of the device.
	 *
	 * When zapping vmas we need to maintain the mmap_sem => vma_lock
	 * ordering, which requires using vma_lock to walk vma_list to
	 * acquire an mm, then dropping vma_lock to get the mmap_sem and
	 * reacquiring vma_lock.  This logic is derived from similar
	 * requirements in uverbs_user_mmap_disassociate().
	 *
	 * mmap_sem must always be the top-level lock when it is taken.
	 * Therefore we can only hold the memory_lock write lock when
	 * vma_list is empty, as we'd need to take mmap_sem to clear
	 * entries.  vma_list can only be guaranteed empty when holding
	 * vma_lock, thus memory_lock is nested under vma_lock.
	 *
	 * This enables the vm_ops.fault callback to acquire vma_lock,
	 * followed by memory_lock read lock, while already holding
	 * mmap_sem without risk of deadlock.
	 */
	while (1) {
		struct mm_struct *mm = NULL;

		if (try) {
			if (!mutex_trylock(&vdev->vma_lock))
				return 0;
		} else {
			mutex_lock(&vdev->vma_lock);
		}
		while (!list_empty(&vdev->vma_list)) {
			mmap_vma = list_first_entry(&vdev->vma_list,
						    struct vfio_pci_mmap_vma,
						    vma_next);
			mm = mmap_vma->vma->vm_mm;
			if (mmget_not_zero(mm))
				break;

			list_del(&mmap_vma->vma_next);
			kfree(mmap_vma);
			mm = NULL;
		}
		if (!mm)
			return 1;
		mutex_unlock(&vdev->vma_lock);

		if (try) {
			if (!down_read_trylock(&mm->mmap_sem)) {
				mmput(mm);
				return 0;
			}
		} else {
			down_read(&mm->mmap_sem);
		}
		if (mmget_still_valid(mm)) {
			if (try) {
				if (!mutex_trylock(&vdev->vma_lock)) {
					up_read(&mm->mmap_sem);
					mmput(mm);
					return 0;
				}
			} else {
				mutex_lock(&vdev->vma_lock);
			}
			list_for_each_entry_safe(mmap_vma, tmp,
						 &vdev->vma_list, vma_next) {
				struct vm_area_struct *vma = mmap_vma->vma;

				if (vma->vm_mm != mm)
					continue;

				list_del(&mmap_vma->vma_next);
				kfree(mmap_vma);

				zap_vma_ptes(vma, vma->vm_start,
					     vma->vm_end - vma->vm_start);
			}
			mutex_unlock(&vdev->vma_lock);
		}
		up_read(&mm->mmap_sem);
		mmput(mm);
	}
}

void vfio_pci_zap_and_down_write_memory_lock(struct vfio_pci_device *vdev)
{
	vfio_pci_zap_and_vma_lock(vdev, false);
	down_write(&vdev->memory_lock);
	mutex_unlock(&vdev->vma_lock);
}

u16 vfio_pci_memory_lock_and_enable(struct vfio_pci_device *vdev)
{
	u16 cmd;

	down_write(&vdev->memory_lock);
	pci_read_config_word(vdev->pdev, PCI_COMMAND, &cmd);
	if (!(cmd & PCI_COMMAND_MEMORY))
		pci_write_config_word(vdev->pdev, PCI_COMMAND,
				      cmd | PCI_COMMAND_MEMORY);

	return cmd;
}

void vfio_pci_memory_unlock_and_restore(struct vfio_pci_device *vdev, u16 cmd)
{
	pci_write_config_word(vdev->pdev, PCI_COMMAND, cmd);
	up_write(&vdev->memory_lock);
}

/* Caller holds vma_lock */
static int __vfio_pci_add_vma(struct vfio_pci_device *vdev,
			      struct vm_area_struct *vma)
{
	struct vfio_pci_mmap_vma *mmap_vma;

	mmap_vma = kmalloc(sizeof(*mmap_vma), GFP_KERNEL);
	if (!mmap_vma)
		return -ENOMEM;

	mmap_vma->vma = vma;
	list_add(&mmap_vma->vma_next, &vdev->vma_list);

	return 0;
}

/*
 * Zap mmaps on open so that we can fault them in on access and therefore
 * our vma_list only tracks mappings accessed since last zap.
 */
static void vfio_pci_mmap_open(struct vm_area_struct *vma)
{
	zap_vma_ptes(vma, vma->vm_start, vma->vm_end - vma->vm_start);
}

static void vfio_pci_mmap_close(struct vm_area_struct *vma)
{
	struct vfio_pci_device *vdev = vma->vm_private_data;
	struct vfio_pci_mmap_vma *mmap_vma;

	mutex_lock(&vdev->vma_lock);
	list_for_each_entry(mmap_vma, &vdev->vma_list, vma_next) {
		if (mmap_vma->vma == vma) {
			list_del(&mmap_vma->vma_next);
			kfree(mmap_vma);
			break;
		}
	}
	mutex_unlock(&vdev->vma_lock);
}

static vm_fault_t vfio_pci_mmap_fault(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	struct vfio_pci_device *vdev = vma->vm_private_data;
	vm_fault_t ret = VM_FAULT_NOPAGE;

	mutex_lock(&vdev->vma_lock);
	down_read(&vdev->memory_lock);

	if (!__vfio_pci_memory_enabled(vdev)) {
		ret = VM_FAULT_SIGBUS;
		mutex_unlock(&vdev->vma_lock);
		goto up_out;
	}

	if (__vfio_pci_add_vma(vdev, vma)) {
		ret = VM_FAULT_OOM;
		mutex_unlock(&vdev->vma_lock);
		goto up_out;
	}

	mutex_unlock(&vdev->vma_lock);

	if (remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
			    vma->vm_end - vma->vm_start, vma->vm_page_prot))
		ret = VM_FAULT_SIGBUS;

up_out:
	up_read(&vdev->memory_lock);
	return ret;
}

static const struct vm_operations_struct vfio_pci_mmap_ops = {
	.open = vfio_pci_mmap_open,
	.close = vfio_pci_mmap_close,
	.fault = vfio_pci_mmap_fault,
};

static int vfio_pci_mmap(void *device_data, struct vm_area_struct *vma)
{
	struct vfio_pci_device *vdev = device_data;
@@ -1250,8 +1511,14 @@ static int vfio_pci_mmap(void *device_data, struct vm_area_struct *vma)
	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
	vma->vm_pgoff = (pci_resource_start(pdev, index) >> PAGE_SHIFT) + pgoff;

	return remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
			       req_len, vma->vm_page_prot);
	/*
	 * See remap_pfn_range(), called from vfio_pci_fault() but we can't
	 * change vm_flags within the fault handler.  Set them now.
	 */
	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
	vma->vm_ops = &vfio_pci_mmap_ops;

	return 0;
}

static void vfio_pci_request(void *device_data, unsigned int count)
@@ -1327,6 +1594,9 @@ static int vfio_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
	spin_lock_init(&vdev->irqlock);
	mutex_init(&vdev->ioeventfds_lock);
	INIT_LIST_HEAD(&vdev->ioeventfds_list);
	mutex_init(&vdev->vma_lock);
	INIT_LIST_HEAD(&vdev->vma_list);
	init_rwsem(&vdev->memory_lock);

	ret = vfio_add_group_dev(&pdev->dev, &vfio_pci_ops, vdev);
	if (ret) {

@@ -1516,12 +1786,6 @@ static void vfio_pci_reflck_put(struct vfio_pci_reflck *reflck)
	kref_put_mutex(&reflck->kref, vfio_pci_reflck_release, &reflck_lock);
}

struct vfio_devices {
	struct vfio_device **devices;
	int cur_index;
	int max_index;
};

static int vfio_pci_get_unused_devs(struct pci_dev *pdev, void *data)
{
	struct vfio_devices *devs = data;

@@ -1552,6 +1816,39 @@ static int vfio_pci_get_unused_devs(struct pci_dev *pdev, void *data)
	return 0;
}

static int vfio_pci_try_zap_and_vma_lock_cb(struct pci_dev *pdev, void *data)
{
	struct vfio_devices *devs = data;
	struct vfio_device *device;
	struct vfio_pci_device *vdev;

	if (devs->cur_index == devs->max_index)
		return -ENOSPC;

	device = vfio_device_get_from_dev(&pdev->dev);
	if (!device)
		return -EINVAL;

	if (pci_dev_driver(pdev) != &vfio_pci_driver) {
		vfio_device_put(device);
		return -EBUSY;
	}

	vdev = vfio_device_data(device);

	/*
	 * Locking multiple devices is prone to deadlock, runaway and
	 * unwind if we hit contention.
	 */
	if (!vfio_pci_zap_and_vma_lock(vdev, true)) {
		vfio_device_put(device);
		return -EBUSY;
	}

	devs->devices[devs->cur_index++] = device;
	return 0;
}

/*
 * If a bus or slot reset is available for the provided device and:
 *  - All of the devices affected by that bus or slot reset are unused
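A recurring pattern in the vfio hunks above: when locking several devices at once ahead of a bus reset, every lock is taken with a trylock, and the first contended lock aborts the whole attempt and releases everything already held, rather than blocking and risking deadlock. A userspace sketch of that all-or-nothing acquisition using pthreads (the device count and lock array are illustrative):

#include <pthread.h>
#include <stdio.h>

#define NDEV 3

static pthread_mutex_t locks[NDEV] = {
	PTHREAD_MUTEX_INITIALIZER,
	PTHREAD_MUTEX_INITIALIZER,
	PTHREAD_MUTEX_INITIALIZER,
};

/* Try to lock every device; on the first contended lock, release
 * everything acquired so far and report an -EBUSY-style failure. */
static int lock_all_or_none(void)
{
	int i, j;

	for (i = 0; i < NDEV; i++) {
		if (pthread_mutex_trylock(&locks[i]) != 0) {
			for (j = i - 1; j >= 0; j--)
				pthread_mutex_unlock(&locks[j]);
			return -1;	/* caller retries or gives up */
		}
	}
	return 0;
}

int main(void)
{
	if (lock_all_or_none() == 0) {
		puts("all devices locked, safe to reset");
		for (int i = 0; i < NDEV; i++)
			pthread_mutex_unlock(&locks[i]);
	}
	return 0;
}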
@@ -395,6 +395,20 @@ static inline void p_setd(struct perm_bits *p, int off, u32 virt, u32 write)
	*(__le32 *)(&p->write[off]) = cpu_to_le32(write);
}

/* Caller should hold memory_lock semaphore */
bool __vfio_pci_memory_enabled(struct vfio_pci_device *vdev)
{
	struct pci_dev *pdev = vdev->pdev;
	u16 cmd = le16_to_cpu(*(__le16 *)&vdev->vconfig[PCI_COMMAND]);

	/*
	 * SR-IOV VF memory enable is handled by the MSE bit in the
	 * PF SR-IOV capability, there's therefore no need to trigger
	 * faults based on the virtual value.
	 */
	return pdev->is_virtfn || (cmd & PCI_COMMAND_MEMORY);
}

/*
 * Restore the *real* BARs after we detect a FLR or backdoor reset.
 * (backdoor = some device specific technique that we didn't catch)

@@ -554,13 +568,18 @@ static int vfio_basic_config_write(struct vfio_pci_device *vdev, int pos,

	new_cmd = le32_to_cpu(val);

	phys_io = !!(phys_cmd & PCI_COMMAND_IO);
	virt_io = !!(le16_to_cpu(*virt_cmd) & PCI_COMMAND_IO);
	new_io = !!(new_cmd & PCI_COMMAND_IO);

	phys_mem = !!(phys_cmd & PCI_COMMAND_MEMORY);
	virt_mem = !!(le16_to_cpu(*virt_cmd) & PCI_COMMAND_MEMORY);
	new_mem = !!(new_cmd & PCI_COMMAND_MEMORY);

	phys_io = !!(phys_cmd & PCI_COMMAND_IO);
	virt_io = !!(le16_to_cpu(*virt_cmd) & PCI_COMMAND_IO);
	new_io = !!(new_cmd & PCI_COMMAND_IO);
	if (!new_mem)
		vfio_pci_zap_and_down_write_memory_lock(vdev);
	else
		down_write(&vdev->memory_lock);

	/*
	 * If the user is writing mem/io enable (new_mem/io) and we

@@ -577,8 +596,11 @@ static int vfio_basic_config_write(struct vfio_pci_device *vdev, int pos,
	}

	count = vfio_default_config_write(vdev, pos, count, perm, offset, val);
	if (count < 0)
	if (count < 0) {
		if (offset == PCI_COMMAND)
			up_write(&vdev->memory_lock);
		return count;
	}

	/*
	 * Save current memory/io enable bits in vconfig to allow for

@@ -589,6 +611,8 @@ static int vfio_basic_config_write(struct vfio_pci_device *vdev, int pos,

	*virt_cmd &= cpu_to_le16(~mask);
	*virt_cmd |= cpu_to_le16(new_cmd & mask);

	up_write(&vdev->memory_lock);
}

/* Emulate INTx disable */

@@ -826,8 +850,11 @@ static int vfio_exp_config_write(struct vfio_pci_device *vdev, int pos,
					 pos - offset + PCI_EXP_DEVCAP,
					 &cap);

	if (!ret && (cap & PCI_EXP_DEVCAP_FLR))
	if (!ret && (cap & PCI_EXP_DEVCAP_FLR)) {
		vfio_pci_zap_and_down_write_memory_lock(vdev);
		pci_try_reset_function(vdev->pdev);
		up_write(&vdev->memory_lock);
	}
}

/*

@@ -905,8 +932,11 @@ static int vfio_af_config_write(struct vfio_pci_device *vdev, int pos,
					pos - offset + PCI_AF_CAP,
					&cap);

	if (!ret && (cap & PCI_AF_CAP_FLR) && (cap & PCI_AF_CAP_TP))
	if (!ret && (cap & PCI_AF_CAP_FLR) && (cap & PCI_AF_CAP_TP)) {
		vfio_pci_zap_and_down_write_memory_lock(vdev);
		pci_try_reset_function(vdev->pdev);
		up_write(&vdev->memory_lock);
	}
}

	return count;

@@ -1702,6 +1732,15 @@ int vfio_config_init(struct vfio_pci_device *vdev)
				 vconfig[PCI_INTERRUPT_PIN]);

		vconfig[PCI_INTERRUPT_PIN] = 0; /* Gratuitous for good VFs */

		/*
		 * VFs do no implement the memory enable bit of the COMMAND
		 * register therefore we'll not have it set in our initial
		 * copy of config space after pci_enable_device().  For
		 * consistency with PFs, set the virtual enable bit here.
		 */
		*(__le16 *)&vconfig[PCI_COMMAND] |=
					cpu_to_le16(PCI_COMMAND_MEMORY);
	}

	if (!IS_ENABLED(CONFIG_VFIO_PCI_INTX) || vdev->nointx)
@@ -249,6 +249,7 @@ static int vfio_msi_enable(struct vfio_pci_device *vdev, int nvec, bool msix)
	struct pci_dev *pdev = vdev->pdev;
	unsigned int flag = msix ? PCI_IRQ_MSIX : PCI_IRQ_MSI;
	int ret;
	u16 cmd;

	if (!is_irq_none(vdev))
		return -EINVAL;

@@ -258,13 +259,16 @@ static int vfio_msi_enable(struct vfio_pci_device *vdev, int nvec, bool msix)
		return -ENOMEM;

	/* return the number of supported vectors if we can't get all: */
	cmd = vfio_pci_memory_lock_and_enable(vdev);
	ret = pci_alloc_irq_vectors(pdev, 1, nvec, flag);
	if (ret < nvec) {
		if (ret > 0)
			pci_free_irq_vectors(pdev);
		vfio_pci_memory_unlock_and_restore(vdev, cmd);
		kfree(vdev->ctx);
		return ret;
	}
	vfio_pci_memory_unlock_and_restore(vdev, cmd);

	vdev->num_ctx = nvec;
	vdev->irq_type = msix ? VFIO_PCI_MSIX_IRQ_INDEX :

@@ -287,6 +291,7 @@ static int vfio_msi_set_vector_signal(struct vfio_pci_device *vdev,
	struct pci_dev *pdev = vdev->pdev;
	struct eventfd_ctx *trigger;
	int irq, ret;
	u16 cmd;

	if (vector < 0 || vector >= vdev->num_ctx)
		return -EINVAL;

@@ -295,7 +300,11 @@ static int vfio_msi_set_vector_signal(struct vfio_pci_device *vdev,

	if (vdev->ctx[vector].trigger) {
		irq_bypass_unregister_producer(&vdev->ctx[vector].producer);

		cmd = vfio_pci_memory_lock_and_enable(vdev);
		free_irq(irq, vdev->ctx[vector].trigger);
		vfio_pci_memory_unlock_and_restore(vdev, cmd);

		kfree(vdev->ctx[vector].name);
		eventfd_ctx_put(vdev->ctx[vector].trigger);
		vdev->ctx[vector].trigger = NULL;

@@ -323,6 +332,7 @@ static int vfio_msi_set_vector_signal(struct vfio_pci_device *vdev,
	 * such a reset it would be unsuccessful. To avoid this, restore the
	 * cached value of the message prior to enabling.
	 */
	cmd = vfio_pci_memory_lock_and_enable(vdev);
	if (msix) {
		struct msi_msg msg;

@@ -332,6 +342,7 @@ static int vfio_msi_set_vector_signal(struct vfio_pci_device *vdev,

	ret = request_irq(irq, vfio_msihandler, 0,
			  vdev->ctx[vector].name, trigger);
	vfio_pci_memory_unlock_and_restore(vdev, cmd);
	if (ret) {
		kfree(vdev->ctx[vector].name);
		eventfd_ctx_put(trigger);

@@ -376,6 +387,7 @@ static void vfio_msi_disable(struct vfio_pci_device *vdev, bool msix)
{
	struct pci_dev *pdev = vdev->pdev;
	int i;
	u16 cmd;

	for (i = 0; i < vdev->num_ctx; i++) {
		vfio_virqfd_disable(&vdev->ctx[i].unmask);

@@ -384,7 +396,9 @@ static void vfio_msi_disable(struct vfio_pci_device *vdev, bool msix)

	vfio_msi_set_block(vdev, 0, vdev->num_ctx, NULL, msix);

	cmd = vfio_pci_memory_lock_and_enable(vdev);
	pci_free_irq_vectors(pdev);
	vfio_pci_memory_unlock_and_restore(vdev, cmd);

	/*
	 * Both disable paths above use pci_intx_for_msi() to clear DisINTx
@@ -84,6 +84,11 @@ struct vfio_pci_reflck {
	struct mutex		lock;
};

struct vfio_pci_mmap_vma {
	struct vm_area_struct	*vma;
	struct list_head	vma_next;
};

struct vfio_pci_device {
	struct pci_dev		*pdev;
	void __iomem		*barmap[PCI_STD_RESOURCE_END + 1];

@@ -122,6 +127,9 @@ struct vfio_pci_device {
	struct list_head	dummy_resources_list;
	struct mutex		ioeventfds_lock;
	struct list_head	ioeventfds_list;
	struct mutex		vma_lock;
	struct list_head	vma_list;
	struct rw_semaphore	memory_lock;
};

#define is_intx(vdev) (vdev->irq_type == VFIO_PCI_INTX_IRQ_INDEX)

@@ -164,6 +172,13 @@ extern int vfio_pci_register_dev_region(struct vfio_pci_device *vdev,
extern int vfio_pci_set_power_state(struct vfio_pci_device *vdev,
				    pci_power_t state);

extern bool __vfio_pci_memory_enabled(struct vfio_pci_device *vdev);
extern void vfio_pci_zap_and_down_write_memory_lock(struct vfio_pci_device
						    *vdev);
extern u16 vfio_pci_memory_lock_and_enable(struct vfio_pci_device *vdev);
extern void vfio_pci_memory_unlock_and_restore(struct vfio_pci_device *vdev,
					       u16 cmd);

#ifdef CONFIG_VFIO_PCI_IGD
extern int vfio_pci_igd_init(struct vfio_pci_device *vdev);
#else
@@ -162,6 +162,7 @@ ssize_t vfio_pci_bar_rw(struct vfio_pci_device *vdev, char __user *buf,
	size_t x_start = 0, x_end = 0;
	resource_size_t end;
	void __iomem *io;
	struct resource *res = &vdev->pdev->resource[bar];
	ssize_t done;

	if (pci_resource_start(pdev, bar))

@@ -177,6 +178,14 @@ ssize_t vfio_pci_bar_rw(struct vfio_pci_device *vdev, char __user *buf,

	count = min(count, (size_t)(end - pos));

	if (res->flags & IORESOURCE_MEM) {
		down_read(&vdev->memory_lock);
		if (!__vfio_pci_memory_enabled(vdev)) {
			up_read(&vdev->memory_lock);
			return -EIO;
		}
	}

	if (bar == PCI_ROM_RESOURCE) {
		/*
		 * The ROM can fill less space than the BAR, so we start the

@@ -184,13 +193,17 @@ ssize_t vfio_pci_bar_rw(struct vfio_pci_device *vdev, char __user *buf,
		 * filling large ROM BARs much faster.
		 */
		io = pci_map_rom(pdev, &x_start);
		if (!io)
			return -ENOMEM;
		if (!io) {
			done = -ENOMEM;
			goto out;
		}
		x_end = end;
	} else {
		int ret = vfio_pci_setup_barmap(vdev, bar);
		if (ret)
			return ret;
		if (ret) {
			done = ret;
			goto out;
		}

		io = vdev->barmap[bar];
	}

@@ -207,6 +220,9 @@ ssize_t vfio_pci_bar_rw(struct vfio_pci_device *vdev, char __user *buf,

	if (bar == PCI_ROM_RESOURCE)
		pci_unmap_rom(pdev, io);
out:
	if (res->flags & IORESOURCE_MEM)
		up_read(&vdev->memory_lock);

	return done;
}

@@ -335,6 +335,32 @@ static int put_pfn(unsigned long pfn, int prot)
         return 0;
 }

+static int follow_fault_pfn(struct vm_area_struct *vma, struct mm_struct *mm,
+                            unsigned long vaddr, unsigned long *pfn,
+                            bool write_fault)
+{
+        int ret;
+
+        ret = follow_pfn(vma, vaddr, pfn);
+        if (ret) {
+                bool unlocked = false;
+
+                ret = fixup_user_fault(NULL, mm, vaddr,
+                                       FAULT_FLAG_REMOTE |
+                                       (write_fault ? FAULT_FLAG_WRITE : 0),
+                                       &unlocked);
+                if (unlocked)
+                        return -EAGAIN;
+
+                if (ret)
+                        return ret;
+
+                ret = follow_pfn(vma, vaddr, pfn);
+        }
+
+        return ret;
+}
+
 static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
                          int prot, unsigned long *pfn)
 {

@@ -377,12 +403,16 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
         vaddr = untagged_addr(vaddr);

+retry:
         vma = find_vma_intersection(mm, vaddr, vaddr + 1);

         if (vma && vma->vm_flags & VM_PFNMAP) {
-                if (!follow_pfn(vma, vaddr, pfn) &&
-                    is_invalid_reserved_pfn(*pfn))
-                        ret = 0;
+                ret = follow_fault_pfn(vma, mm, vaddr, pfn, prot & IOMMU_WRITE);
+                if (ret == -EAGAIN)
+                        goto retry;
+
+                if (!ret && !is_invalid_reserved_pfn(*pfn))
+                        ret = -EFAULT;
         }

         up_read(&mm->mmap_sem);
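
The follow_fault_pfn() helper added above deliberately reports -EAGAIN when fixup_user_fault() had to drop mmap_sem, because the cached vma may then be stale; the caller re-walks the vma and retries. A standalone sketch of that control flow, with the kernel MM calls stubbed out:

/* Standalone sketch of the fault-then-retry flow above. follow_pfn() and
 * fixup_user_fault() are stubbed; only the control flow mirrors the patch. */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

static bool page_present;       /* pretend PFNMAP page state */

static int follow_pfn_stub(unsigned long vaddr, unsigned long *pfn)
{
        if (!page_present)
                return -EFAULT;
        *pfn = vaddr >> 12;
        return 0;
}

static int fixup_user_fault_stub(unsigned long vaddr, bool *unlocked)
{
        (void)vaddr;
        page_present = true;    /* the fault handler populated the mapping */
        *unlocked = true;       /* and it had to drop mmap_sem to do so */
        return 0;
}

static int follow_fault_pfn(unsigned long vaddr, unsigned long *pfn)
{
        int ret = follow_pfn_stub(vaddr, pfn);

        if (ret) {
                bool unlocked = false;

                ret = fixup_user_fault_stub(vaddr, &unlocked);
                /* If the lock was dropped, the vma may be stale: report
                 * -EAGAIN so the caller re-walks the vma and retries. */
                if (unlocked)
                        return -EAGAIN;
                if (ret)
                        return ret;
                ret = follow_pfn_stub(vaddr, pfn);
        }
        return ret;
}

int main(void)
{
        unsigned long pfn = 0;
        int ret;

        do {
                ret = follow_fault_pfn(0x7f0000001000UL, &pfn);
        } while (ret == -EAGAIN);       /* mirrors the "goto retry" above */
        printf("ret=%d pfn=0x%lx\n", ret, pfn);
        return 0;
}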

@@ -363,8 +363,14 @@ int xenbus_grant_ring(struct xenbus_device *dev, void *vaddr,
         int i, j;

         for (i = 0; i < nr_pages; i++) {
-                err = gnttab_grant_foreign_access(dev->otherend_id,
-                                                  virt_to_gfn(vaddr), 0);
+                unsigned long gfn;
+
+                if (is_vmalloc_addr(vaddr))
+                        gfn = pfn_to_gfn(vmalloc_to_pfn(vaddr));
+                else
+                        gfn = virt_to_gfn(vaddr);
+
+                err = gnttab_grant_foreign_access(dev->otherend_id, gfn, 0);
                 if (err < 0) {
                         xenbus_dev_fatal(dev, err,
                                          "granting access to ring page");
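
The xenbus fix above works because virt_to_gfn() is only valid for directly mapped addresses; vmalloc memory is virtually contiguous but physically scattered, so each page needs an actual page-table lookup via vmalloc_to_pfn(). A toy illustration, with fake translation stubs standing in for the Xen helpers:

/* Sketch of the address-translation split above. The stubs are fake: a real
 * virt_to_gfn() only works for direct-mapped (lowmem) addresses, while
 * vmalloc ranges need a page-table walk (vmalloc_to_pfn). */
#include <stdbool.h>
#include <stdio.h>

#define VMALLOC_START 0xffffc90000000000UL      /* illustrative boundary */

static bool is_vmalloc_addr_stub(const void *addr)
{
        return (unsigned long)addr >= VMALLOC_START;
}

static unsigned long virt_to_gfn_stub(const void *addr)
{
        /* linear offset arithmetic: valid only for the direct map */
        return (unsigned long)addr >> 12;
}

static unsigned long vmalloc_to_gfn_stub(const void *addr)
{
        /* stands in for pfn_to_gfn(vmalloc_to_pfn(addr)) */
        return ((unsigned long)addr >> 12) ^ 0x1000;
}

static unsigned long grant_gfn(const void *vaddr)
{
        return is_vmalloc_addr_stub(vaddr) ? vmalloc_to_gfn_stub(vaddr)
                                           : virt_to_gfn_stub(vaddr);
}

int main(void)
{
        printf("lowmem gfn:  0x%lx\n", grant_gfn((void *)0xffff888000100000UL));
        printf("vmalloc gfn: 0x%lx\n", grant_gfn((void *)0xffffc90000200000UL));
        return 0;
}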

@@ -420,24 +420,51 @@ affs_mode_to_prot(struct inode *inode)
         u32 prot = AFFS_I(inode)->i_protect;
         umode_t mode = inode->i_mode;

+        /*
+         * First, clear all RWED bits for owner, group, other.
+         * Then, recalculate them afresh.
+         *
+         * We'll always clear the delete-inhibit bit for the owner, as that is
+         * the classic single-user mode AmigaOS protection bit and we need to
+         * stay compatible with all scenarios.
+         *
+         * Since multi-user AmigaOS is an extension, we'll only set the
+         * delete-allow bit if any of the other bits in the same user class
+         * (group/other) are used.
+         */
+        prot &= ~(FIBF_NOEXECUTE | FIBF_NOREAD
+                  | FIBF_NOWRITE | FIBF_NODELETE
+                  | FIBF_GRP_EXECUTE | FIBF_GRP_READ
+                  | FIBF_GRP_WRITE | FIBF_GRP_DELETE
+                  | FIBF_OTR_EXECUTE | FIBF_OTR_READ
+                  | FIBF_OTR_WRITE | FIBF_OTR_DELETE);
+
+        /* Classic single-user AmigaOS flags. These are inverted. */
         if (!(mode & 0100))
                 prot |= FIBF_NOEXECUTE;
         if (!(mode & 0400))
                 prot |= FIBF_NOREAD;
         if (!(mode & 0200))
                 prot |= FIBF_NOWRITE;
+
+        /* Multi-user extended flags. Not inverted. */
         if (mode & 0010)
                 prot |= FIBF_GRP_EXECUTE;
         if (mode & 0040)
                 prot |= FIBF_GRP_READ;
         if (mode & 0020)
                 prot |= FIBF_GRP_WRITE;
+        if (mode & 0070)
+                prot |= FIBF_GRP_DELETE;
+
         if (mode & 0001)
                 prot |= FIBF_OTR_EXECUTE;
         if (mode & 0004)
                 prot |= FIBF_OTR_READ;
         if (mode & 0002)
                 prot |= FIBF_OTR_WRITE;
+        if (mode & 0007)
+                prot |= FIBF_OTR_DELETE;

         AFFS_I(inode)->i_protect = prot;
 }
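
Since the AFFS mapping above is self-contained bit logic, it can be exercised outside the kernel. A small host-side program re-implementing it for illustration (the FIBF_* values below are placeholders, not the real AFFS constants):

/* Self-contained re-implementation of the mapping above, for illustration
 * only. The FIBF_* bit values here are hypothetical. */
#include <stdio.h>
#include <sys/types.h>

#define FIBF_NOEXECUTE   0x0001  /* owner bits are active-low */
#define FIBF_NOREAD      0x0002
#define FIBF_NOWRITE     0x0004
#define FIBF_GRP_EXECUTE 0x0010  /* group/other bits are active-high */
#define FIBF_GRP_READ    0x0020
#define FIBF_GRP_WRITE   0x0040
#define FIBF_GRP_DELETE  0x0080
#define FIBF_OTR_EXECUTE 0x0100
#define FIBF_OTR_READ    0x0200
#define FIBF_OTR_WRITE   0x0400
#define FIBF_OTR_DELETE  0x0800

static unsigned int mode_to_prot(mode_t mode)
{
        unsigned int prot = 0;

        /* Classic single-user AmigaOS flags: inverted sense. */
        if (!(mode & 0100)) prot |= FIBF_NOEXECUTE;
        if (!(mode & 0400)) prot |= FIBF_NOREAD;
        if (!(mode & 0200)) prot |= FIBF_NOWRITE;

        /* Multi-user extension flags: normal sense; delete-allow is set
         * whenever any bit of the class is in use (mode & 0070 / 0007). */
        if (mode & 0010) prot |= FIBF_GRP_EXECUTE;
        if (mode & 0040) prot |= FIBF_GRP_READ;
        if (mode & 0020) prot |= FIBF_GRP_WRITE;
        if (mode & 0070) prot |= FIBF_GRP_DELETE;

        if (mode & 0001) prot |= FIBF_OTR_EXECUTE;
        if (mode & 0004) prot |= FIBF_OTR_READ;
        if (mode & 0002) prot |= FIBF_OTR_WRITE;
        if (mode & 0007) prot |= FIBF_OTR_DELETE;

        return prot;
}

int main(void)
{
        printf("0644 -> 0x%04x\n", mode_to_prot(0644));
        printf("0755 -> 0x%04x\n", mode_to_prot(0755));
        return 0;
}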

@@ -428,6 +428,24 @@ static int affs_write_begin(struct file *file, struct address_space *mapping,
         return ret;
 }

+static int affs_write_end(struct file *file, struct address_space *mapping,
+                          loff_t pos, unsigned int len, unsigned int copied,
+                          struct page *page, void *fsdata)
+{
+        struct inode *inode = mapping->host;
+        int ret;
+
+        ret = generic_write_end(file, mapping, pos, len, copied, page, fsdata);
+
+        /* Clear Archived bit on file writes, as AmigaOS would do */
+        if (AFFS_I(inode)->i_protect & FIBF_ARCHIVED) {
+                AFFS_I(inode)->i_protect &= ~FIBF_ARCHIVED;
+                mark_inode_dirty(inode);
+        }
+
+        return ret;
+}
+
 static sector_t _affs_bmap(struct address_space *mapping, sector_t block)
 {
         return generic_block_bmap(mapping,block,affs_get_block);

@@ -437,7 +455,7 @@ const struct address_space_operations affs_aops = {
         .readpage = affs_readpage,
         .writepage = affs_writepage,
         .write_begin = affs_write_begin,
-        .write_end = generic_write_end,
+        .write_end = affs_write_end,
         .direct_IO = affs_direct_IO,
         .bmap = _affs_bmap
 };

@@ -794,6 +812,12 @@ done:
         if (tmp > inode->i_size)
                 inode->i_size = AFFS_I(inode)->mmu_private = tmp;

+        /* Clear Archived bit on file writes, as AmigaOS would do */
+        if (AFFS_I(inode)->i_protect & FIBF_ARCHIVED) {
+                AFFS_I(inode)->i_protect &= ~FIBF_ARCHIVED;
+                mark_inode_dirty(inode);
+        }
+
 err_first_bh:
         unlock_page(page);
         put_page(page);

@@ -92,8 +92,8 @@ responded:
                 }
         }

-        rtt_us = rxrpc_kernel_get_srtt(call->net->socket, call->rxcall);
-        if (rtt_us < server->probe.rtt) {
+        if (rxrpc_kernel_get_srtt(call->net->socket, call->rxcall, &rtt_us) &&
+            rtt_us < server->probe.rtt) {
                 server->probe.rtt = rtt_us;
                 alist->preferred = index;
                 have_result = true;

@@ -92,8 +92,8 @@ responded:
                 }
         }

-        rtt_us = rxrpc_kernel_get_srtt(call->net->socket, call->rxcall);
-        if (rtt_us < server->probe.rtt) {
+        if (rxrpc_kernel_get_srtt(call->net->socket, call->rxcall, &rtt_us) &&
+            rtt_us < server->probe.rtt) {
                 server->probe.rtt = rtt_us;
                 alist->preferred = index;
                 have_result = true;
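
The two identical hunks above apply the same fix to both AFS probe paths: rxrpc_kernel_get_srtt() now returns whether the smoothed RTT is actually valid, so a server with no RTT samples yet (srtt 0) no longer always wins the comparison. A tiny sketch of the caller-side pattern, with a stand-in getter:

/* Minimal sketch of the API change above: the getter reports whether the
 * smoothed RTT is valid, and callers test that before comparing.
 * get_srtt() is a stand-in, not the real rxrpc function. */
#include <stdbool.h>
#include <stdio.h>

struct call_stats {
        bool rtt_valid;         /* any RTT samples gathered yet? */
        unsigned int srtt_us;
};

static bool get_srtt(const struct call_stats *c, unsigned int *rtt_us)
{
        *rtt_us = c->srtt_us;
        return c->rtt_valid;
}

int main(void)
{
        struct call_stats fresh = { .rtt_valid = false, .srtt_us = 0 };
        unsigned int best = 1000, rtt_us;

        /* The old pattern compared srtt unconditionally, so a fresh call's
         * zero RTT always won. The new pattern: */
        if (get_srtt(&fresh, &rtt_us) && rtt_us < best)
                best = rtt_us;
        printf("best stays %u\n", best);
        return 0;
}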

@@ -1339,6 +1339,8 @@ tree_mod_log_rewind(struct btrfs_fs_info *fs_info, struct btrfs_path *path,
         btrfs_tree_read_unlock_blocking(eb);
         free_extent_buffer(eb);

+        btrfs_set_buffer_lockdep_class(btrfs_header_owner(eb_rewin),
+                                       eb_rewin, btrfs_header_level(eb_rewin));
         btrfs_tree_read_lock(eb_rewin);
         __tree_mod_log_rewind(fs_info, eb_rewin, time_seq, tm);
         WARN_ON(btrfs_header_nritems(eb_rewin) >

@@ -1412,7 +1414,6 @@ get_old_root(struct btrfs_root *root, u64 time_seq)

         if (!eb)
                 return NULL;
-        btrfs_tree_read_lock(eb);
         if (old_root) {
                 btrfs_set_header_bytenr(eb, eb->start);
                 btrfs_set_header_backref_rev(eb, BTRFS_MIXED_BACKREF_REV);

@@ -1420,6 +1421,9 @@ get_old_root(struct btrfs_root *root, u64 time_seq)
                 btrfs_set_header_level(eb, old_root->level);
                 btrfs_set_header_generation(eb, old_generation);
         }
+        btrfs_set_buffer_lockdep_class(btrfs_header_owner(eb), eb,
+                                       btrfs_header_level(eb));
+        btrfs_tree_read_lock(eb);
         if (tm)
                 __tree_mod_log_rewind(fs_info, eb, time_seq, tm);
         else

@@ -4446,7 +4446,7 @@ btrfs_init_new_buffer(struct btrfs_trans_handle *trans, struct btrfs_root *root,
                 return ERR_PTR(-EUCLEAN);
         }

-        btrfs_set_buffer_lockdep_class(root->root_key.objectid, buf, level);
+        btrfs_set_buffer_lockdep_class(owner, buf, level);
         btrfs_tree_lock(buf);
         btrfs_clean_tree_block(buf);
         clear_bit(EXTENT_BUFFER_STALE, &buf->bflags);

@@ -5607,9 +5607,9 @@ void read_extent_buffer(const struct extent_buffer *eb, void *dstv,
         }
 }

-int read_extent_buffer_to_user(const struct extent_buffer *eb,
-                               void __user *dstv,
-                               unsigned long start, unsigned long len)
+int read_extent_buffer_to_user_nofault(const struct extent_buffer *eb,
+                                       void __user *dstv,
+                                       unsigned long start, unsigned long len)
 {
         size_t cur;
         size_t offset;

@@ -5630,7 +5630,7 @@ int read_extent_buffer_to_user(const struct extent_buffer *eb,
                 cur = min(len, (PAGE_SIZE - offset));
                 kaddr = page_address(page);
-                if (copy_to_user(dst, kaddr + offset, cur)) {
+                if (probe_user_write(dst, kaddr + offset, cur)) {
                         ret = -EFAULT;
                         break;
                 }
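
probe_user_write() used above copies to user space without faulting pages in, returning non-zero instead of sleeping on a page fault; that is what makes it safe to call while an extent-buffer lock is held. A userspace sketch of such a chunked no-fault copy loop, with the fault behavior stubbed by a flag:

/* Sketch of the _nofault copy loop above: copy page-sized chunks, and stop
 * at the first chunk the (stubbed) no-fault primitive cannot write. */
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096

static bool dst_resident;       /* pretend destination-page state */

static int probe_write_stub(void *dst, const void *src, size_t n)
{
        if (!dst_resident)
                return -EFAULT; /* would have faulted: refuse, don't block */
        memcpy(dst, src, n);
        return 0;
}

static int copy_nofault(void *dst, const void *src, size_t len)
{
        size_t off = 0;

        while (off < len) {
                size_t cur = len - off < PAGE_SIZE ? len - off : PAGE_SIZE;

                if (probe_write_stub((char *)dst + off,
                                     (const char *)src + off, cur))
                        return -EFAULT; /* caller retries after faulting in */
                off += cur;
        }
        return 0;
}

int main(void)
{
        char src[8192] = "payload", dst[8192];

        dst_resident = false;
        printf("not resident: %d\n", copy_nofault(dst, src, sizeof(src)));
        dst_resident = true;
        printf("resident:     %d\n", copy_nofault(dst, src, sizeof(src)));
        return 0;
}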

@@ -457,9 +457,9 @@ int memcmp_extent_buffer(const struct extent_buffer *eb, const void *ptrv,
 void read_extent_buffer(const struct extent_buffer *eb, void *dst,
                         unsigned long start,
                         unsigned long len);
-int read_extent_buffer_to_user(const struct extent_buffer *eb,
-                               void __user *dst, unsigned long start,
-                               unsigned long len);
+int read_extent_buffer_to_user_nofault(const struct extent_buffer *eb,
+                                       void __user *dst, unsigned long start,
+                                       unsigned long len);
 void write_extent_buffer_fsid(struct extent_buffer *eb, const void *src);
 void write_extent_buffer_chunk_tree_uuid(struct extent_buffer *eb,
                                          const void *src);

@@ -2105,9 +2105,14 @@ static noinline int copy_to_sk(struct btrfs_path *path,
                 sh.len = item_len;
                 sh.transid = found_transid;

-                /* copy search result header */
-                if (copy_to_user(ubuf + *sk_offset, &sh, sizeof(sh))) {
-                        ret = -EFAULT;
+                /*
+                 * Copy search result header. If we fault then loop again so we
+                 * can fault in the pages and -EFAULT there if there's a
+                 * problem. Otherwise we'll fault and then copy the buffer in
+                 * properly this next time through
+                 */
+                if (probe_user_write(ubuf + *sk_offset, &sh, sizeof(sh))) {
+                        ret = 0;
                         goto out;
                 }

@@ -2115,10 +2120,14 @@ static noinline int copy_to_sk(struct btrfs_path *path,
                 if (item_len) {
                         char __user *up = ubuf + *sk_offset;
-                        /* copy the item */
-                        if (read_extent_buffer_to_user(leaf, up,
-                                                       item_off, item_len)) {
-                                ret = -EFAULT;
+                        /*
+                         * Copy the item, same behavior as above, but reset the
+                         * *sk_offset so we copy the full thing again.
+                         */
+                        if (read_extent_buffer_to_user_nofault(leaf, up,
+                                                item_off, item_len)) {
+                                ret = 0;
+                                *sk_offset -= sizeof(sh);
                                 goto out;
                         }

@@ -2206,6 +2215,10 @@ static noinline int search_ioctl(struct inode *inode,
         key.offset = sk->min_offset;

         while (1) {
+                ret = fault_in_pages_writeable(ubuf, *buf_size - sk_offset);
+                if (ret)
+                        break;
+
                 ret = btrfs_search_forward(root, &key, path, sk->min_transid);
                 if (ret != 0) {
                         if (ret > 0)
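
Putting the three ioctl.c hunks together: the user buffer is faulted in by fault_in_pages_writeable() while no tree locks are held, all copies done under the lock are no-fault, and a short copy simply restarts the loop instead of deadlocking on a page fault. A control-flow sketch under those assumptions (all names illustrative):

/* Control-flow sketch of the deadlock fix above: fault the user buffer in
 * while no locks are held, then do only no-fault copies under the "lock". */
#include <stdbool.h>
#include <stdio.h>

static bool buf_faulted_in;

static int fault_in_buffer(void)        /* runs without the tree lock */
{
        buf_faulted_in = true;
        return 0;
}

static int copy_under_lock(void)        /* runs with the tree lock held */
{
        /* no-fault copy: fails instead of sleeping on a page fault */
        return buf_faulted_in ? 0 : 1;
}

int main(void)
{
        int copied = 0;

        while (1) {
                if (fault_in_buffer())
                        break;
                /* lock(); */
                int short_copy = copy_under_lock();
                /* unlock(); */
                if (!short_copy) {
                        copied = 1;
                        break;
                }
                /* short copy: loop so the next fault_in_buffer() can
                 * populate the pages we missed */
        }
        printf("copied=%d\n", copied);
        return 0;
}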

fs/btrfs/scrub.c
@@ -3742,50 +3742,84 @@ static noinline_for_stack int scrub_supers(struct scrub_ctx *sctx,
         return 0;
 }

+static void scrub_workers_put(struct btrfs_fs_info *fs_info)
+{
+        if (refcount_dec_and_mutex_lock(&fs_info->scrub_workers_refcnt,
+                                        &fs_info->scrub_lock)) {
+                struct btrfs_workqueue *scrub_workers = NULL;
+                struct btrfs_workqueue *scrub_wr_comp = NULL;
+                struct btrfs_workqueue *scrub_parity = NULL;
+
+                scrub_workers = fs_info->scrub_workers;
+                scrub_wr_comp = fs_info->scrub_wr_completion_workers;
+                scrub_parity = fs_info->scrub_parity_workers;
+
+                fs_info->scrub_workers = NULL;
+                fs_info->scrub_wr_completion_workers = NULL;
+                fs_info->scrub_parity_workers = NULL;
+                mutex_unlock(&fs_info->scrub_lock);
+
+                btrfs_destroy_workqueue(scrub_workers);
+                btrfs_destroy_workqueue(scrub_wr_comp);
+                btrfs_destroy_workqueue(scrub_parity);
+        }
+}
+
 /*
  * get a reference count on fs_info->scrub_workers. start worker if necessary
  */
 static noinline_for_stack int scrub_workers_get(struct btrfs_fs_info *fs_info,
                                                 int is_dev_replace)
 {
+        struct btrfs_workqueue *scrub_workers = NULL;
+        struct btrfs_workqueue *scrub_wr_comp = NULL;
+        struct btrfs_workqueue *scrub_parity = NULL;
         unsigned int flags = WQ_FREEZABLE | WQ_UNBOUND;
         int max_active = fs_info->thread_pool_size;
+        int ret = -ENOMEM;

-        lockdep_assert_held(&fs_info->scrub_lock);
+        if (refcount_inc_not_zero(&fs_info->scrub_workers_refcnt))
+                return 0;

-        if (refcount_read(&fs_info->scrub_workers_refcnt) == 0) {
-                ASSERT(fs_info->scrub_workers == NULL);
-                fs_info->scrub_workers = btrfs_alloc_workqueue(fs_info, "scrub",
-                                flags, is_dev_replace ? 1 : max_active, 4);
-                if (!fs_info->scrub_workers)
-                        goto fail_scrub_workers;
-
-                ASSERT(fs_info->scrub_wr_completion_workers == NULL);
-                fs_info->scrub_wr_completion_workers =
-                        btrfs_alloc_workqueue(fs_info, "scrubwrc", flags,
-                                              max_active, 2);
-                if (!fs_info->scrub_wr_completion_workers)
-                        goto fail_scrub_wr_completion_workers;
-
-                ASSERT(fs_info->scrub_parity_workers == NULL);
-                fs_info->scrub_parity_workers =
-                        btrfs_alloc_workqueue(fs_info, "scrubparity", flags,
-                                              max_active, 2);
-                if (!fs_info->scrub_parity_workers)
-                        goto fail_scrub_parity_workers;
-
-                refcount_set(&fs_info->scrub_workers_refcnt, 1);
-        } else {
-                refcount_inc(&fs_info->scrub_workers_refcnt);
-        }
-        return 0;
+        scrub_workers = btrfs_alloc_workqueue(fs_info, "scrub", flags,
+                                              is_dev_replace ? 1 : max_active, 4);
+        if (!scrub_workers)
+                goto fail_scrub_workers;
+
+        scrub_wr_comp = btrfs_alloc_workqueue(fs_info, "scrubwrc", flags,
+                                              max_active, 2);
+        if (!scrub_wr_comp)
+                goto fail_scrub_wr_completion_workers;
+
+        scrub_parity = btrfs_alloc_workqueue(fs_info, "scrubparity", flags,
+                                              max_active, 2);
+        if (!scrub_parity)
+                goto fail_scrub_parity_workers;
+
+        mutex_lock(&fs_info->scrub_lock);
+        if (refcount_read(&fs_info->scrub_workers_refcnt) == 0) {
+                ASSERT(fs_info->scrub_workers == NULL &&
+                       fs_info->scrub_wr_completion_workers == NULL &&
+                       fs_info->scrub_parity_workers == NULL);
+                fs_info->scrub_workers = scrub_workers;
+                fs_info->scrub_wr_completion_workers = scrub_wr_comp;
+                fs_info->scrub_parity_workers = scrub_parity;
+                refcount_set(&fs_info->scrub_workers_refcnt, 1);
+                mutex_unlock(&fs_info->scrub_lock);
+                return 0;
+        }
+        /* Other thread raced in and created the workers for us */
+        refcount_inc(&fs_info->scrub_workers_refcnt);
+        mutex_unlock(&fs_info->scrub_lock);

+        ret = 0;
+        btrfs_destroy_workqueue(scrub_parity);
 fail_scrub_parity_workers:
-        btrfs_destroy_workqueue(fs_info->scrub_wr_completion_workers);
+        btrfs_destroy_workqueue(scrub_wr_comp);
 fail_scrub_wr_completion_workers:
-        btrfs_destroy_workqueue(fs_info->scrub_workers);
+        btrfs_destroy_workqueue(scrub_workers);
 fail_scrub_workers:
-        return -ENOMEM;
+        return ret;
 }

 int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,

@@ -3796,9 +3830,6 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
         int ret;
         struct btrfs_device *dev;
         unsigned int nofs_flag;
-        struct btrfs_workqueue *scrub_workers = NULL;
-        struct btrfs_workqueue *scrub_wr_comp = NULL;
-        struct btrfs_workqueue *scrub_parity = NULL;

         if (btrfs_fs_closing(fs_info))
                 return -EAGAIN;

@@ -3845,13 +3876,17 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
         if (IS_ERR(sctx))
                 return PTR_ERR(sctx);

+        ret = scrub_workers_get(fs_info, is_dev_replace);
+        if (ret)
+                goto out_free_ctx;
+
         mutex_lock(&fs_info->fs_devices->device_list_mutex);
         dev = btrfs_find_device(fs_info->fs_devices, devid, NULL, NULL, true);
         if (!dev || (test_bit(BTRFS_DEV_STATE_MISSING, &dev->dev_state) &&
                      !is_dev_replace)) {
                 mutex_unlock(&fs_info->fs_devices->device_list_mutex);
                 ret = -ENODEV;
-                goto out_free_ctx;
+                goto out;
         }

         if (!is_dev_replace && !readonly &&

@@ -3860,7 +3895,7 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
                 btrfs_err_in_rcu(fs_info, "scrub: device %s is not writable",
                                  rcu_str_deref(dev->name));
                 ret = -EROFS;
-                goto out_free_ctx;
+                goto out;
         }

         mutex_lock(&fs_info->scrub_lock);

@@ -3869,7 +3904,7 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
                 mutex_unlock(&fs_info->scrub_lock);
                 mutex_unlock(&fs_info->fs_devices->device_list_mutex);
                 ret = -EIO;
-                goto out_free_ctx;
+                goto out;
         }

         down_read(&fs_info->dev_replace.rwsem);

@@ -3880,17 +3915,10 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
                 mutex_unlock(&fs_info->scrub_lock);
                 mutex_unlock(&fs_info->fs_devices->device_list_mutex);
                 ret = -EINPROGRESS;
-                goto out_free_ctx;
+                goto out;
         }
         up_read(&fs_info->dev_replace.rwsem);

-        ret = scrub_workers_get(fs_info, is_dev_replace);
-        if (ret) {
-                mutex_unlock(&fs_info->scrub_lock);
-                mutex_unlock(&fs_info->fs_devices->device_list_mutex);
-                goto out_free_ctx;
-        }
-
         sctx->readonly = readonly;
         dev->scrub_ctx = sctx;
         mutex_unlock(&fs_info->fs_devices->device_list_mutex);

@@ -3943,24 +3971,14 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
         mutex_lock(&fs_info->scrub_lock);
         dev->scrub_ctx = NULL;
-        if (refcount_dec_and_test(&fs_info->scrub_workers_refcnt)) {
-                scrub_workers = fs_info->scrub_workers;
-                scrub_wr_comp = fs_info->scrub_wr_completion_workers;
-                scrub_parity = fs_info->scrub_parity_workers;
-
-                fs_info->scrub_workers = NULL;
-                fs_info->scrub_wr_completion_workers = NULL;
-                fs_info->scrub_parity_workers = NULL;
-        }
         mutex_unlock(&fs_info->scrub_lock);

-        btrfs_destroy_workqueue(scrub_workers);
-        btrfs_destroy_workqueue(scrub_wr_comp);
-        btrfs_destroy_workqueue(scrub_parity);
+        scrub_workers_put(fs_info);
         scrub_put_ctx(sctx);

         return ret;

+out:
+        scrub_workers_put(fs_info);
 out_free_ctx:
         scrub_free_ctx(sctx);
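
The scrub rework above is a standard publish-under-lock pattern: the workqueues are allocated while sleeping is legal (no scrub_lock held), then installed under the mutex only if this caller is still first; a racing loser frees its spare copies and bumps the refcount instead. A compact pthread analogue:

/* Userspace sketch of the allocate-outside-the-lock pattern above: build
 * the resource unlocked, publish it under the mutex only if still first. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int *shared;             /* stands in for fs_info->scrub_workers */
static unsigned int refcnt;     /* stands in for scrub_workers_refcnt */

static int workers_get(void)
{
        int *mine = malloc(sizeof(*mine));      /* may sleep: done unlocked */

        if (!mine)
                return -1;
        *mine = 42;

        pthread_mutex_lock(&lock);
        if (refcnt == 0) {
                shared = mine;                  /* we won: publish */
                refcnt = 1;
                pthread_mutex_unlock(&lock);
                return 0;
        }
        refcnt++;                               /* other thread raced in */
        pthread_mutex_unlock(&lock);
        free(mine);                             /* drop our spare copy */
        return 0;
}

static void workers_put(void)
{
        int *victim = NULL;

        pthread_mutex_lock(&lock);
        if (--refcnt == 0) {
                victim = shared;
                shared = NULL;
        }
        pthread_mutex_unlock(&lock);
        free(victim);                           /* destroy outside the lock */
}

int main(void)
{
        workers_get();
        workers_get();
        workers_put();
        workers_put();
        printf("shared=%p refcnt=%u\n", (void *)shared, refcnt);
        return 0;
}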

@@ -772,7 +772,7 @@ static int check_inode_item(struct extent_buffer *leaf,
         /* Here we use super block generation + 1 to handle log tree */
         if (btrfs_inode_generation(leaf, iitem) > super_gen + 1) {
                 inode_item_err(fs_info, leaf, slot,
-                               "invalid inode generation: has %llu expect (0, %llu]",
+                               "invalid inode transid: has %llu expect [0, %llu]",
                                btrfs_inode_generation(leaf, iitem),
                                super_gen + 1);
                 return -EUCLEAN;
|
@ -4568,6 +4568,7 @@ static int btrfs_uuid_scan_kthread(void *data)
|
|||
goto skip;
|
||||
}
|
||||
update_tree:
|
||||
btrfs_release_path(path);
|
||||
if (!btrfs_is_empty_uuid(root_item.uuid)) {
|
||||
ret = btrfs_uuid_tree_add(trans, root_item.uuid,
|
||||
BTRFS_UUID_KEY_SUBVOL,
|
||||
|
|
@ -4592,6 +4593,7 @@ update_tree:
|
|||
}
|
||||
|
||||
skip:
|
||||
btrfs_release_path(path);
|
||||
if (trans) {
|
||||
ret = btrfs_end_transaction(trans);
|
||||
trans = NULL;
|
||||
|
|
@ -4599,7 +4601,6 @@ skip:
|
|||
break;
|
||||
}
|
||||
|
||||
btrfs_release_path(path);
|
||||
if (key.offset < (u64)-1) {
|
||||
key.offset++;
|
||||
} else if (key.type < BTRFS_ROOT_ITEM_KEY) {
|
||||
|
|
|
|||
|
|

@@ -2198,6 +2198,7 @@ const struct file_operations ceph_file_fops = {
         .mmap = ceph_mmap,
         .fsync = ceph_fsync,
         .lock = ceph_lock,
+        .setlease = simple_nosetlease,
         .flock = ceph_flock,
         .splice_read = generic_file_splice_read,
         .splice_write = iter_file_splice_write,

@@ -1994,9 +1994,9 @@ static int ep_loop_check_proc(void *priv, void *cookie, int call_nests)
                          * during ep_insert().
                          */
                         if (list_empty(&epi->ffd.file->f_tfile_llink)) {
-                                get_file(epi->ffd.file);
-                                list_add(&epi->ffd.file->f_tfile_llink,
-                                         &tfile_check_list);
+                                if (get_file_rcu(epi->ffd.file))
+                                        list_add(&epi->ffd.file->f_tfile_llink,
+                                                 &tfile_check_list);
                         }
                 }
         }
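
The epoll fix above matters because get_file() unconditionally increments f_count, which can resurrect a file already on its way to being freed; get_file_rcu() only takes a reference while the count is still non-zero. A stubbed C11-atomics illustration of that compare-and-swap loop:

/* Sketch of the get-if-nonzero refcount pattern the epoll fix relies on.
 * file_stub and get_file_rcu_stub are illustrative, not kernel API. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct file_stub {
        atomic_long f_count;
};

static bool get_file_rcu_stub(struct file_stub *f)
{
        long c = atomic_load(&f->f_count);

        while (c > 0) {
                /* only take a reference if the object is still live */
                if (atomic_compare_exchange_weak(&f->f_count, &c, c + 1))
                        return true;
        }
        return false;   /* count already hit zero: do not resurrect */
}

int main(void)
{
        struct file_stub live = { 1 }, dying = { 0 };

        printf("live:  %d\n", get_file_rcu_stub(&live));
        printf("dying: %d\n", get_file_rcu_stub(&dying));
        return 0;
}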
Some files were not shown because too many files have changed in this diff.